How to set up a case and customize the PE layout

Calling cesm_setup

The cesm_setup command creates the new files and directories that are needed to build and run the model.

cesm_setup -clean moves $CASEROOT/$CASE.run and a copy of env_mach_pes.xml to a time-stamped directory in MachinesHist. The $CASEROOT directory will then appear as if create_newcase had just been run, except that previously created Macros and user_nl_xxx files are not touched and local modifications to the env_*.xml files are preserved. After further modifications are made to env_mach_pes.xml, cesm_setup must be rerun before you can build and run the model.

If env_mach_pes.xml variables need to be changed after cesm_setup has been called, then cesm_setup -clean must be run first, followed by cesm_setup.
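For example, a typical sequence of commands issued from within $CASEROOT might look like the following, with env_mach_pes.xml edited between the two invocations:

./cesm_setup -clean
(edit env_mach_pes.xml as needed)
./cesm_setup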

The following summarizes the new directories and files that are created by cesm_setup. For more information about the files in the case directory, see the Section called BASICS: What are the directories and files in my case directory? in Chapter 6.

Table 2-2. Result of calling cesm_setup

File or Directory: Description

Macros: File containing machine-specific makefile directives for your target platform/compiler. This is only created the first time that cesm_setup is called. Calling cesm_setup -clean will not remove the Macros file once it has been created.

user_nl_xxx[_NNNN] files: Files where all user modifications to component namelists are made. xxx denotes the set of components targeted for the specific case, and NNNN goes from 0001 to the number of instances of that component (see the multiple-instance discussion below). For example, for a B_ compset, xxx would denote [cam,clm,rtm,cice,pop2,cpl]. For a case with only one instance of each component (the default), NNNN does not appear in the user_nl file names. A user_nl file of a given name will only be created once, and calling cesm_setup -clean will not remove any user_nl files. Changing the number of instances in env_mach_pes.xml will only cause new user_nl files to be added to $CASEROOT (see the example after this table).

$CASE.run: File containing the necessary batch directives to run the model on the required machine for the requested PE layout. Runs the CESM model and performs short-term archiving of output data (see running CESM).

CaseDocs/: Directory containing all the component namelists for the run. This is for reference only, and files in this directory SHOULD NOT BE EDITED since they will be overwritten at build time and run time.

env_derived: File containing environmental variables derived from other settings. Should not be modified by the user.
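As an illustration of the NNNN naming convention, consider a hypothetical case whose atmosphere component is CAM and which is configured for two atmosphere instances (assuming the instance count is set through the NINST_ATM entry in env_mach_pes.xml); the atmosphere namelist modifications would then be split across files named:

user_nl_cam_0001
user_nl_cam_0002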

Changing the PE layout

env_mach_pes.xml variables determine the number of processors for each component, the number of instances of each component, and the layout of the components across the hardware processors. Optimizing the throughput and efficiency of a CESM experiment often involves customizing the processor (PE) layout for load balancing. CESM has significant flexibility with respect to the layout of components across different hardware processors. In general, the CESM components -- atm, lnd, ocn, ice, glc, rof, and cpl -- can run on overlapping or mutually unique processors. Each component is associated with a unique MPI communicator, while the driver runs on the union of all processors and controls the sequencing and hardware partitioning. The processor layout for each component is determined by three settings: the number of MPI tasks, the number of OpenMP threads per task, and the root MPI task number within the global set.

For example, the following env_mach_pes.xml settings


<entry id="NTASKS_OCN" value="128" />
<entry id="NTHRDS_OCN" value="1" />
<entry id="ROOTPE_OCN" value="0" />

would cause the ocean component to run on 128 hardware processors, with 128 MPI tasks using one thread per task, starting from global MPI task 0 (zero).

In this next example:


<entry id="NTASKS_ATM" value="16" />
<entry id="NTHRDS_ATM" value="4"  />
<entry id="ROOTPE_ATM" value="32" />

the atmosphere component will run on 64 hardware processors, using 16 MPI tasks with 4 threads per task, starting at global MPI task 32. NTASKS, NTHRDS, and ROOTPE input variables exist for every component in env_mach_pes.xml. There are some important things to note.

The root processor is set relative to the MPI global communicator, not the hardware processor counts. For instance, in the following example:


<entry id="NTASKS_ATM" value="16" />
<entry id="NTHRDS_ATM" value="4"  />
<entry id="ROOTPE_ATM" value="0"  />
<entry id="NTASKS_OCN" value="64" />
<entry id="NTHRDS_OCN" value="1"  />
<entry id="ROOTPE_OCN" value="16" />

the atmosphere and ocean run concurrently, each on 64 processors, with the atmosphere on MPI tasks 0-15 and the ocean on MPI tasks 16-79. The first 16 tasks are each threaded 4 ways for the atmosphere. The batch submission script ($CASE.run) should automatically request 128 hardware processors; the first 16 MPI tasks will be laid out on the first 64 hardware processors with a stride of 4, and the next 64 MPI tasks will be laid out on the second set of 64 hardware processors.

If ROOTPE_OCN were instead set to 64 in the preceding example, a total of 176 processors would be requested: the atmosphere would be laid out on the first 64 hardware processors in 16x4 fashion, and the ocean model would be laid out on hardware processors 113-176. Hardware processors 65-112 would be allocated but would remain completely idle.
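For reference, the env_mach_pes.xml entries for this variant differ from the example above only in ROOTPE_OCN; one way to account for the 176 processors is (16 atmosphere tasks x 4 threads) + (48 idle tasks x 1 thread, for MPI tasks 16-63) + (64 ocean tasks x 1 thread) = 176:

<entry id="NTASKS_ATM" value="16" />
<entry id="NTHRDS_ATM" value="4"  />
<entry id="ROOTPE_ATM" value="0"  />
<entry id="NTASKS_OCN" value="64" />
<entry id="NTHRDS_OCN" value="1"  />
<entry id="ROOTPE_OCN" value="64" />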

Note: env_mach_pes.xml cannot be modified after "./cesm_setup" has been invoked without first invoking "cesm_setup -clean". For an example of changing PEs, see the Section called BASICS: How do I change processor counts and component layouts on processors? in Chapter 6.