16th June 2011

OpenFOAM 2.0.0: Run-time Control

Run-time Code Compilation

OpenFOAM now includes the capability to compile, load and execute C++ code at run-time. It includes a new general directive called #codeStream that can be used in any input file for run-time compilation. For example, the following code in the controlDict file looks up dictionary entries and performs a simple calculation for the write interval:

startTime       0;
endTime         100;
writeInterval   #codeStream
    #{
        scalar start = readScalar(dict.lookup("startTime"));
        scalar end = readScalar(dict.lookup("endTime"));
        label nDumps = 5;
        os  << ((end - start)/nDumps);
    #};

The following code would also work, since #codeStream can recognise regular macro substitutions using the ‘$’ syntax.

startTime       0;
endTime         100;
nDumps          5;
writeInterval   #codeStream
    #{
        label interval = ($endTime - $startTime);
        label nDumps = $nDumps;
        os  << (interval/nDumps);
    #};

It also includes a new codedFixedValue boundary condition that compiles a piece of code to calculate the boundary value, e.g. for a simple ramped inlet condition, a field file could include:

    type            codedFixedValue;
    value           uniform 0;
    redirectType    ramp;
    code
    #{
        operator==(min(10, 0.1*this->db().time().value()));
    #};

There is also a new coded function object that compiles and executes a piece of code to generate new post-processed data at run-time, e.g. the following example, added to the controlDict file, writes out the average pressure to the terminal during a simulation:

        functionObjectLibs ("libutilityFunctionObjects.so");
        type            coded;
        redirectType    average;
        outputControl   outputTime;
        code
        #{
            const volScalarField& p =
                mesh().lookupObject<volScalarField>("p");
            Info<< "p avg:" << average(p) << endl;
        #};

Source code

  • OpenFOAM library
  • codedFixedValue BC
  • coded function object

Further information

User Guide: Input/output file format

Residual/Convergence Control

Solvers using the SIMPLE or PIMPLE algorithms now include convergence controls based on field residuals. The controls are specified through a residualControl sub-dictionary in the fvSolution file. The user specifies a tolerance for one or more solved fields; when the residual for every specified field falls below its tolerance, the simulation terminates. The following example sets tolerances for p, U, k and epsilon:

    SIMPLE
    {
        nNonOrthogonalCorrectors 0;

        residualControl
        {
            p               1e-2;
            U               1e-3;
            "(k|epsilon)"   1e-3;
        }
    }

Source code

  • solutionControl classes

Examples of usage

  • Any example case running SIMPLE/PIMPLE

Help Information

The help information for OpenFOAM applications, invoked by the -help option, has been improved to contain more description. For example, when executing blockMesh with the -help option in versions of OpenFOAM prior to 2.0.0, the code would return the following to the terminal, containing only the list of execution options:

>> blockMesh -help        # v1.7.1

Usage: blockMesh [-dict dictionary] [-case dir] [-blockTopology] [-region name]  [-help] [-doc] [-srcDoc]

In the latest version of OpenFOAM, there is a description of each of the options, providing better help to the user:

>> blockMesh -help        # v2.0.0

Usage: blockMesh [OPTIONS]
options:
  -blockTopology    write block edges and centres as .obj files
  -case <dir>       specify alternate case directory, default is the cwd
  -dict <file>      specify alternative dictionary for the blockMesh description
  -region <name>    specify alternative mesh region
  -srcDoc           display source code in browser
  -doc              display application documentation in browser

Source code

  • argList class

File Modification

OpenFOAM has a run-time file modification system that allows files that are modified during a simulation to be re-read. This allows a user to change a setting, e.g. time step, end-time, solver tolerance, during a simulation and the change is picked up by the solver.

Previously, all objects registered on the database of the simulation with the MUST_READ flag were monitored for changes. In v2.0.0, we introduce a new flag, MUST_READ_IF_MODIFIED, for objects that are to be monitored for change; objects registered with the MUST_READ flag, such as fields (velocity, pressure, etc.), are no longer monitored.
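As a sketch of the registration pattern (the dictionary name here is illustrative), an object that should be re-read when its file changes is constructed with the new flag:

    IOdictionary transportProperties
    (
        IOobject
        (
            "transportProperties",
            runTime.constant(),
            mesh,
            IOobject::MUST_READ_IF_MODIFIED,  // monitored; re-read on change
            IOobject::NO_WRITE
        )
    );

Replacing the flag with IOobject::MUST_READ gives the old behaviour of a one-off read with no monitoring.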

In addition, there are four modes of monitoring, set by the fileModificationChecking switch in the OptimisationSwitches sub-dictionary of the global controlDict file (in $WM_PROJECT_DIR/etc). The modes are:

  • timeStamp – checks for modification by monitoring time stamps on files (standard method before v2.0.0);
  • inotify – uses the inotify monitoring framework of the Linux system which is potentially much faster than checking time stamps;
  • timeStampMaster and inotifyMaster – equivalent methods for running in parallel on distributed systems, where only files belonging to the master node are checked and slave nodes receive file contents from the master.
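For example, a minimal sketch of the relevant entry in the global controlDict (other optimisation switches omitted):

    OptimisationSwitches
    {
        fileModificationChecking timeStampMaster;
    }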

File setting

  • Global configuration file

Parallel Running

New decomposition methods have been implemented in OpenFOAM. Firstly, the ptscotch library has been integrated with OpenFOAM to allow decomposition to run in parallel, e.g. for performing load balancing during mesh generation in parallel with snappyHexMesh.
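A minimal decomposeParDict sketch selecting the new method (the subdomain count is illustrative):

    numberOfSubdomains  8;
    method              ptscotch;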

A structured decomposition modifier has been added to do a 2D decomposition of a mesh. The modifier performs a decomposition of a specified patch, using one of the standard methods, e.g. scotch, which it then extends into the adjoining cells. The method can typically be used with an extruded mesh.
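As a hedged sketch of how this might look in decomposeParDict (the coefficients sub-dictionary layout and the patch name are assumptions for illustration, not taken from the text), the starting patch and the underlying method are specified together:

    numberOfSubdomains  4;
    method              structured;

    structuredCoeffs
    {
        method  scotch;     // standard method applied to the patch
        patches (bottom);   // patch whose decomposition is extended into the cells
    }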

A multiLevel decomposition modifier allows decomposition in levels, e.g. where the first level decomposes onto a number of nodes and the second level onto the number of cores per node. The method can therefore minimise off-node communication in the case of multi-core CPUs. Each level of decomposition uses one of the standard decomposition methods, e.g. scotch.
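A sketch of a two-level setup in decomposeParDict (the level names and subdomain counts are illustrative), decomposing first across 4 nodes and then across 8 cores per node, for 32 subdomains in total:

    numberOfSubdomains  32;
    method              multiLevel;

    multiLevelCoeffs
    {
        level0              // across nodes
        {
            numberOfSubdomains  4;
            method              scotch;
        }
        level1              // across cores within a node
        {
            numberOfSubdomains  8;
            method              scotch;
        }
    }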

The decomposePar utility now maps each polyPatch instead of recreating it, so that a polyPatch that holds data, such as directMapped, can retain that data.

OpenFOAM now uses non-blocking communications wherever possible. This will lead to lower requirements for MPI_BUFFER_SIZE and possibly better start-up performance on larger numbers of processors.