The create_test tool is located in the scripts directory and can be used to set up a standalone test case. The test cases are fixed and defined within the CCSM scripts. To see the list of test cases, or for additional help, type "create_test -help" from the scripts directory. To use create_test, do something like the following:
```
> cd $CCSMROOT/scripts
> ./create_test -testname ERS.f19_g16.X.bluefire_ibm -testid t01
> cd ERS.f19_g16.X.bluefire_ibm.t01
> ./ERS.f19_g16.X.bluefire_ibm.t01.test_build
```
Then submit ERS.f19_g16.X.bluefire_ibm.t01.test to the batch queue. Check your test results: a successful test produces "PASS" as the first word in the file TestStatus.
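Once the test completes, the simplest check is to look at TestStatus directly. The command below is just one way to do this, and the line shown after PASS is illustrative output, not guaranteed formatting:

```
> cat TestStatus
PASS ERS.f19_g16.X.bluefire_ibm.t01
```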
The above sets up an exact restart test (ERS) at the 1.9x2.5_gx1v6 resolution using a dead model compset (X) for the machine bluefire. The testid provides a unique tag for the test in case it needs to be rerun (e.g., using -testid t02). Some things to note about CCSM tests:
- For more information about the create_test tool, run "create_test -help".
- Test results are recorded in the TestStatus file. The TestStatus.out file provides additional details.
- Tests are not always easily re-runnable from an existing test directory. Rather than rerun a previous test case, it is best to set up a clean test case (i.e. with a new testid).
- Tests are built using the .test_build script. This differs from cases, which are built using the .build script. Because some tests require more than one executable, the .test_build script builds all required executables up front, interactively.
- The costs of tests vary widely: some are short and some are long.
- If a test fails, see the Section called Debugging Tests That Fail.
- The create_test tool has -compare and -generate options that support regression testing.
- Extra test options such as _D, _E, or _P* can be added to the test. These are described in more detail in the create_test -help output. The regression options and test options are both illustrated in the sketch after this list.
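As a sketch of the last two items, assuming that -generate and -compare each take a baseline name and that an _D suffix on the test type requests a debug build (the baseline names and testid below are hypothetical; confirm the exact argument forms with "create_test -help"):

```
> cd $CCSMROOT/scripts
# hypothetical baseline names: generate a new baseline and compare
# against an old one, with _D added to request a debug build
> ./create_test -testname ERS_D.f19_g16.X.bluefire_ibm -testid t02 \
                -generate baseline_new -compare baseline_old
```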
The test status results have the following meanings:
Test Result | Description |
---|---|
BFAIL | compare test couldn't find base result |
BUILD | build succeeded, test not submitted |
CFAIL | env variable or build error |
CHECK | manual review of data is required |
ERROR | test checker failed; test may or may not have passed |
FAIL | test failed |
GEN | test has been generated |
PASS | test passed |
PEND | test has been submitted |
RUN | test is currently running OR it hung, timed out, or exited ungracefully |
SFAIL | generation of test failed in scripts |
TFAIL | test setup error |
UNDEF | undefined result |
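When several tests have been created, their TestStatus files can be scanned in one pass. A minimal sketch, assuming the test directories sit under the scripts directory; the output line is illustrative:

```
> grep . */TestStatus
ERS.f19_g16.X.bluefire_ibm.t01/TestStatus:PASS ERS.f19_g16.X.bluefire_ibm.t01
```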
The following tests are available at the time of writing:
Test | Description |
---|---|
SMS | smoke test |
ERS | exact restart from startup, default 6 days + 5 days |
ERB | branch/exact restart test |
ERH | hybrid/exact restart test |
ERI | hybrid/branch/exact restart test |
ERT | exact restart from startup, default 2 months + 1 month |
SEQ | sequencing bit-for-bit test |
PEA | single processor testing with mpi and mpi-serial |
PEM | pe counts mpi bit-for-bit test |
PET | pe counts mpi/openmp bit-for-bit test |
CME | compare mct and esmf interfaces test |
NCK | single- vs multi-instance validation test
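Each of these is set up with the same create_test workflow shown earlier, substituting the test type in the testname. For example, a PE-count mpi/openmp bit-for-bit (PET) test on the same machine and resolution (the testid here is arbitrary):

```
> cd $CCSMROOT/scripts
> ./create_test -testname PET.f19_g16.X.bluefire_ibm -testid t03
```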