Before invoking configure, customize the default configuration by modifying env_conf.xml and env_mach_pes.xml. The env_build.xml and env_run.xml files can also be changed at this step.
env_mach_pes.xml contains the variables that determine the layout of the components across the hardware processors: the number of processors assigned to each component and how the components are distributed across the processors used. See env_mach_pes.xml variables for a summary of all env_mach_pes.xml variables.
env_conf.xml contains several different kinds of variables, including variables that control case initialization, variables that specify the regridding files, and variables that set component-specific namelists and component-specific CPP variables. See env_conf.xml variables for a summary of all env_conf.xml variables.
Optimizing the throughput or efficiency of a CESM experiment often involves customizing the processor (PE) layout for load balancing. The component PE layout is set in env_mach_pes.xml.
CESM1 has significant flexibility with respect to the layout of components across different hardware processors. In general, the CESM components -- atm, lnd, ocn, ice, glc, and cpl -- can run on overlapping or mutually unique processors. Each component is associated with a unique MPI communicator. In addition, the driver runs on the union of all processors and controls the sequencing and hardware partitioning. The processor layout for each component is specified in the env_mach_pes.xml file via three settings: the number of MPI tasks, the number of OpenMP threads per task, and the root MPI processor number from the global set.
For example, these settings in env_mach_pes.xml:
<entry id="NTASKS_OCN" value="128" />
<entry id="NTHRDS_OCN" value="1" />
<entry id="ROOTPE_OCN" value="0" />
cause the ocean component to run on 128 hardware processors with 128 MPI tasks using one thread per task starting from global MPI task 0 (zero).
In this next example:
<entry id="NTASKS_ATM" value="16" />
<entry id="NTHRDS_ATM" value="4" />
<entry id="ROOTPE_ATM" value="32" />
the atmosphere component will run on 64 hardware processors using 16 MPI tasks and 4 threads per task starting at global MPI task 32. There are NTASKS, NTHRDS, and ROOTPE input variables for every component in env_mach_pes.xml. There are some important things to note.
NTASKS must be greater than or equal to 1 (one), even for inactive (stub) components.
NTHRDS must be greater than or equal to 1 (one). Setting NTHRDS to 1 generally means that threading parallelization is off for that component. NTHRDS should never be set to zero.
The total number of hardware processors allocated to a component is NTASKS * NTHRDS.
The coupler processor inputs specify the PEs used by coupler computation such as mapping, merging, diagnostics, and flux calculation. This is distinct from the driver, which always runs automatically on the union of all processors to manage model concurrency and sequencing.
The root processor is set relative to the MPI global communicator, not the hardware processor counts (see the example below).
The layout of components on processors has no impact on the science. The scientific sequencing is hardwired into the driver: changing processor layouts does not change intrinsic coupling lags or coupling sequencing. ONE IMPORTANT POINT is that for a fully active configuration, the atmosphere component is hardwired in the driver to never run concurrently with the land or ice component. Performance improvements associated with processor layout concurrency are therefore constrained in this case: because the atmosphere can never run at the same time as the land and ice components, there is never a performance reason not to overlap the atmosphere component with the land and ice components. Beyond that constraint, the land, ice, coupler, and ocean models can run concurrently, and the ocean model can also run concurrently with the atmosphere model.
If all components have identical NTASKS, NTHRDS, and ROOTPE set, all components will run sequentially on the same hardware processors.
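As a sketch with hypothetical values, a fully sequential layout on 64 processors would repeat the same three settings for every component, for example:
<entry id="NTASKS_ATM" value="64" />
<entry id="NTHRDS_ATM" value="1" />
<entry id="ROOTPE_ATM" value="0" />
<entry id="NTASKS_OCN" value="64" />
<entry id="NTHRDS_OCN" value="1" />
<entry id="ROOTPE_OCN" value="0" />
with identical NTASKS, NTHRDS, and ROOTPE entries for the LND, ICE, GLC, and CPL components.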
The root processor is set relative to the MPI global communicator, not the hardware processor counts. For instance, in the following example:
<entry id="NTASKS_ATM" value="16" />
<entry id="NTHRDS_ATM" value="4" />
<entry id="ROOTPE_ATM" value="0" />
<entry id="NTASKS_OCN" value="64" />
<entry id="NTHRDS_OCN" value="1" />
<entry id="ROOTPE_OCN" value="16" />
the atmosphere and ocean are running concurrently, each on 64 processors with the atmosphere running on MPI tasks 0-15 and the ocean running on MPI tasks 16-79. The first 16 tasks are each threaded 4 ways for the atmosphere. The batch submission script ($CASE.$MACH.run) should automatically request 128 hardware processors, and the first 16 MPI tasks will be laid out on the first 64 hardware processors with a stride of 4. The next 64 MPI tasks will be laid out on the second set of 64 hardware processors.
If ROOTPE_OCN were instead set to 64 in the preceding example, a total of 176 processors would be requested: the atmosphere would be laid out on the first 64 hardware processors in 16x4 fashion, and the ocean model would be laid out on hardware processors 113-176. Hardware processors 65-112 would be allocated but completely idle, because global MPI tasks 16-63 are not assigned to any component.
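The env_mach_pes.xml entries for that variant would be (only ROOTPE_OCN differs from the previous example):
<entry id="NTASKS_ATM" value="16" />
<entry id="NTHRDS_ATM" value="4" />
<entry id="ROOTPE_ATM" value="0" />
<entry id="NTASKS_OCN" value="64" />
<entry id="NTHRDS_OCN" value="1" />
<entry id="ROOTPE_OCN" value="64" />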
Note: env_mach_pes.xml cannot be modified after "configure -case" has been invoked without first invoking "configure -cleanmach". For an example of changing the PE layout, see the Section called Changing PE layout in Chapter 9.
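For instance, a minimal sketch of that sequence, run from the case directory, might look like:
./configure -cleanmach    # back out the machine-specific setup
# edit env_mach_pes.xml (for example, change NTASKS_OCN or ROOTPE_OCN)
./configure -case         # regenerate the case scripts with the new PE layout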
The case initialization type is set in env_conf.xml. A CESM run can be initialized in one of three ways: startup, branch, or hybrid. The variable $RUN_TYPE determines the initialization type and is set to "startup" by default when create_newcase is invoked. This setting is only important for the initial run of a production run, when the $CONTINUE_RUN variable is set to FALSE. After the initial run, the $CONTINUE_RUN variable is set to TRUE, and the model restarts exactly, using input files in a case, date, and bit-for-bit continuous fashion.
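For reference, $CONTINUE_RUN lives in env_run.xml; a sketch of the entry as it is set for the initial run (it is switched to TRUE for the restart segments that follow) is:
<entry id="CONTINUE_RUN" value="FALSE" />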
The relevant env_conf.xml variables are:
$RUN_TYPE -- Run initialization type. Valid values: startup, hybrid, branch. Default: startup.
$RUN_STARTDATE -- Start date for the run in yyyy-mm-dd format. This is only used for startup or hybrid runs.
$RUN_REFCASE -- Reference case for hybrid or branch runs.
$RUN_REFDATE -- Reference date in yyyy-mm-dd format for hybrid or branch runs.
The following is a more detailed description of each of the three initialization types.
In a startup run (the default), all components are initialized using baseline states. These baseline states are set independently by each component and can include the use of restart files, initial files, external observed data files, or internal initialization (i.e., a "cold start"). In a startup run, the coupler sends the start date to the components at initialization. In addition, the coupler does not need an input data file. In a startup initialization, the ocean model does not start until the second ocean coupling (normally the second day).
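A minimal startup setup in env_conf.xml therefore only needs the run type and a start date, for example (hypothetical date):
<entry id="RUN_TYPE" value="startup" />
<entry id="RUN_STARTDATE" value="0001-01-01" />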
In a branch run, all components are initialized using a consistent set of restart files from a previous run (determined by the $RUN_REFCASE and $RUN_REFDATE variables in env_conf.xml). The case name is generally changed for a branch run, although it does not have to be. In a branch run, setting $RUN_STARTDATE in env_conf.xml is ignored because the model components obtain the start date from their restart datasets. Therefore, the start date cannot be changed for a branch run. This is the same mechanism that is used for performing a restart run (where $CONTINUE_RUN is set to TRUE in the env_run.xml file).
Branch runs are typically used when sensitivity or parameter studies are required, or when settings for history file output streams need to be modified while still maintaining bit-for-bit reproducibility. Under this scenario, the new case is able to produce an exact bit-for-bit restart in the same manner as a continuation run if no source code or component namelist inputs are modified. All models use restart files to perform this type of run. $RUN_REFCASE and $RUN_REFDATE are required for branch runs.
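As a sketch (with a hypothetical case name and date), a branch run from year 5 of a previous case might be configured in env_conf.xml as:
<entry id="RUN_TYPE" value="branch" />
<entry id="RUN_REFCASE" value="my_previous_case" />
<entry id="RUN_REFDATE" value="0005-01-01" />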
To set up a branch run, locate the restart tar file or restart directory for $RUN_REFCASE and $RUN_REFDATE from a previous run, then place those files in the $RUNDIR directory. See setting up a branch run for an example.
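A minimal sketch of staging those restart files by hand might look like the following; the archive path is hypothetical and depends on how the reference case was archived:
# copy the reference-case restart files into the run directory
cd $RUNDIR
cp /path/to/archive/$RUN_REFCASE/rest/$RUN_REFDATE-00000/* .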
A hybrid run indicates that CESM will be initialized more like a startup, but will use initialization datasets from a previous case. This is somewhat analogous to a branch run with relaxed restart constraints. A hybrid run allows users to bring together combinations of initial/restart files from a previous case (specified by $RUN_REFCASE) at a given model output date (specified by $RUN_REFDATE). Unlike a branch run, the starting date of a hybrid run (specified by $RUN_STARTDATE) can be modified relative to the reference case. In a hybrid run, the model does not continue in a bit-for-bit fashion with respect to the reference case. The resulting climate, however, should be continuous provided that no model source code or namelists are changed in the hybrid run.
In a hybrid initialization, the ocean model does not start until the second ocean coupling (normally the second day), and the coupler does a "cold start" without a restart file.
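For example (a sketch with a hypothetical case name and dates), a hybrid run that starts at 0006-01-01 from year-5 data of a previous case might be configured in env_conf.xml as:
<entry id="RUN_TYPE" value="hybrid" />
<entry id="RUN_STARTDATE" value="0006-01-01" />
<entry id="RUN_REFCASE" value="my_previous_case" />
<entry id="RUN_REFDATE" value="0005-01-01" />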