Here in the introduction we first give a simple guide to the document conventions in How to Use This Document. The next section, What is new with CLM4 in CESM1.0.4 since previous public releases?, describes the differences between CLM4 in CESM1.0.4 and CLM4.0.00 (for each CESM release version up to CESM1.0.4), as well as between CLM4.0.00 and CLM3.5, from both a scientific and a software-engineering point of view. It also covers differences in the configuration, namelist, and history fields. The next section, Quickstart to using CLM4, is for users who are already experts in using CLM and gives a bare-bones guide to getting CLM4 running. The next section, What is scientifically validated and functional in CLM4?, tells you what has been extensively tested and scientifically validated and, perhaps more importantly, what has NOT. What are the UNIX utilities required to use CLM? lists the UNIX utilities required to use CLM4 and is important if you are running on non-NCAR machines, generic local machines, or machines not as well tested by us at NCAR. Next, Important Notes and Best Practices for Usage of CLM4 details some of the best practices for using CLM4 for science. The last introductory section, Other resources to get help from, lists different resources for getting help with CESM1.0 and CLM4.
Chapter 1 goes into detail on how to set up and run simulations with CLM4, and especially how to customize cases. Details of the configure modes, build-namelist options, and namelist options are given in this chapter.
Chapter 2 gives instructions, for the expert user, on the CLM4 tools for creating input datasets for use by CLM. There is an overview of what each tool does and some general notes on how to build the FORTRAN tools. Each tool is then described in detail along with the different ways in which it might be used. The last section of the chapter covers how to customize datasets for observational sites, for very savvy expert users.
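As a rough illustration of the build step described in Chapter 2, the sketch below shows how one of the FORTRAN tools is typically compiled from its own directory. The tool chosen here is only an example, and the make variables needed for your particular compiler and NetCDF installation should be taken from the chapter itself, not from this sketch.

   # A minimal sketch, assuming the standard CLM tool layout and GNU make;
   # see Chapter 2 for the make variables your compiler/NetCDF setup requires.
   > cd models/lnd/clm/tools/mksurfdata
   > gmake
   # The resulting executable is then run from this directory (or copied elsewhere).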
As a follow-up to the tools chapter, Chapter 3 tells you how to add files to the XML database that build-namelist uses. This is important if you want the XML database to automatically select input files that you have created when you set up new cases with CLM.
In Chapter 4, again for the expert user, we give details on how to handle some particularly difficult special cases. For example, we give the protocol for spinning up both the CLMCN model and CLM with dynamic vegetation active (CNDV). We give instructions for doing a spinup case from a previous case, using Coupler history output for atmospheric forcing. We also give instructions on running the prognostic crop and irrigation models, and we review how to validate a port to a new machine using the perturbation error growth technique. Lastly, we tell the user how to use the DATM model to send historical CO2 data to CLM.
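To give a feel for what the spinup protocol in Chapter 4 involves, the sketch below shows the kind of configuration change used to turn on accelerated decomposition for a CLMCN spinup. The case name is a placeholder, and the "-ad_spinup on" option passed through CLM_CONFIG_OPTS is our assumption of the relevant setting; the authoritative step-by-step procedure, including the exit-spinup and final spinup stages, is in Chapter 4.

   # A minimal sketch, NOT the full protocol: it assumes CLM_CONFIG_OPTS in
   # env_conf.xml accepts the CLM configure options "-bgc cn -ad_spinup on".
   # See Chapter 4 for the complete spinup sequence.
   > cd scripts/mycnspinupcase      # hypothetical case directory
   > ./xmlchange -file env_conf.xml -id CLM_CONFIG_OPTS -val "-bgc cn -ad_spinup on"
   > ./configure -case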
Chapter 5 outlines how to do single-point or regional simulations using CLM4. This is useful either for comparing CLM simulations with point observational stations, such as tower sites (which might include your own atmospheric forcing), or for doing quick simulations with CLM, for example to test a new parameterization. Several different ways to perform single-point simulations are given, ranging from the simple PTS_MODE (a brief sketch of which is given below) to more complex setups where you create all of your own datasets, tying into Chapter 2 and also Chapter 3 to add the files to the build-namelist XML database.
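As a taste of the simplest of these options, the sketch below shows how PTS_MODE might be turned on for an existing case by pointing it at a single latitude and longitude. The file and variable names (env_case.xml, PTS_MODE, PTS_LAT, PTS_LON) and the coordinates are our assumption here for illustration only; Chapter 5 gives the supported procedure and the exact settings.

   # A minimal sketch, assuming PTS_MODE/PTS_LAT/PTS_LON can be set with xmlchange;
   # the case name and coordinates are placeholders -- see Chapter 5 for details.
   > cd scripts/mysinglepointcase
   > ./xmlchange -file env_case.xml -id PTS_MODE -val TRUE
   > ./xmlchange -file env_case.xml -id PTS_LAT  -val 40.0
   > ./xmlchange -file env_case.xml -id PTS_LON  -val 255.0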
Following this, Chapter 6 outlines how to use the PTCLM python script to help you run single-point simulations.
Finally, Chapter 7 gives some guidance on troubleshooting problems when using CLM4. It doesn't cover all possible problems with CLM, but it gives guidelines for dealing with some of the more common ones.
In the appendices we talk about some issues that are useful for advanced users and developers of CLM. In Appendix A we give some basic background for the CLM developer on how to edit models/lnd/clm/bld/clm.cpl7.template. This is a very difficult exercise, and we don't recommend it for any but the most advanced users of CLM who are also experts in UNIX and UNIX scripting.
In Appendix B we go over how to run the script runinit_ibm.csh, which interpolates a standard-resolution initial condition dataset to several other resolutions at once. It also runs CLM to create template files as well as doing the interpolation using interpinic. In general this is only something that a developer would want to do; most users will only want to interpolate to a few specific resolutions.
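For those few specific resolutions, interpinic can be run by hand rather than through runinit_ibm.csh. The sketch below shows the general pattern we would expect; the command-line options and file names are our assumption, so check the interpinic description in Chapter 2 before relying on them.

   # A minimal sketch, assuming interpinic reads an input initial file (-i) and
   # fills in (overwrites) a template file at the target resolution (-o); the
   # file names are placeholders only -- see Chapter 2 for the documented usage.
   > cd models/lnd/clm/tools/interpinic
   > ./interpinic -i clmi.source-resolution.nc -o clmi.target-resolution.nc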
In Appendix C we go over the automated testing scripts for validating that CLM is working correctly. The test scripts run many different configurations and options with CLM to make sure they work, verify that restarts behave correctly, and test at many different resolutions. In general this is an activity important only for a developer of CLM, but it could also be used by users who are making extensive code modifications and want to ensure that the model continues to work correctly.
Finally, in Appendix D we give instructions on how to build the documentation associated with CLM (i.e., how to build this document). This document is included in every CLM distribution and can be built so that you can view a local copy rather than having to go to the CESM website. This could also be useful for developers who need to update the documentation because of changes they have made.