How do I use Fluidity on HECToR?

Created by: Tim Greaves
Last updated by: Jon Hill

This FAQ is intended primarily for users of Fluidity who wish to use HECToR for running production simulations. It is not intended to replace the official HECToR user guide, found at http://www.hector.ac.uk/support/documentation/userguide/hectoruser/hectoruser.html , which should be the first place you look for answers to questions not adequately addressed here.

There are two ways to use Fluidity on HECToR: using the pre-built package, or compiling your own version.

*** Using the pre-built package

The fluidity package provides the correct environment on HECToR for running the fluidity binary and the other executables that make up the Fluidity package.

To access the package, simply use the following command:

     module swap PrgEnv-cray PrgEnv-fluidity

However, issuing this command on the front-end of HECToR is not particularly useful. To run a Fluidity simulation, you need to submit your job to the back-end of HECToR. See the [http://www.hector.ac.uk/support/documentation/userguide/hectoruser/hectoruser.html user guide] for more information on this. An example submission script for HECToR to run a fluidity simulation is given at https://answers.launchpad.net/fluidity/+faq/2234

That submission script uses 2 processors, set by the mppwidth variable. For runs on fewer than 32 processors (the number of cores on one node), mppnppn should be set to the same value as mppwidth. When using 32 or more processors, set mppnppn to 32. For example, to use 512 processors, the top part of the script will look like this:

     #!/bin/bash --login
     #PBS -N fluidity_run
     #PBS -l mppwidth=512
     #PBS -l mppnppn=32
     #PBS -l walltime=0:10:00
     #PBS -A n04-IC

Other important settings to change are the length of time for your run (walltime), which can be up to 24 hours for runs on between 64 and 2048 processors and 12 hours otherwise, and your budget code, here set to n04-IC. You should have been given a suitable budget code by your PI.

To use the embedded Python functionality of Fluidity fully, you must copy the Fluidity python directory into your current working directory. The example script above does this by copying the central fluidity python directory before the run and deleting the copy when the run has completed.
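Putting the pieces above together, a complete submission script might look like the following sketch. The python directory location ($FLUIDITY_HOME/python) and the input file name (simulation.flml) are illustrative assumptions; adapt them to your own installation and options file:

     #!/bin/bash --login
     #PBS -N fluidity_run
     #PBS -l mppwidth=512
     #PBS -l mppnppn=32
     #PBS -l walltime=0:10:00
     #PBS -A n04-IC

     cd $PBS_O_WORKDIR

     # Copy the central fluidity python directory into the working directory
     # (the source path is an assumption; see the FAQ entry above for the real one)
     cp -r $FLUIDITY_HOME/python python

     # Launch the run on the compute nodes
     aprun -n 512 -N 32 fluidity -v2 -l simulation.flml

     # Remove the copied python directory once the run has completed
     rm -rf python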

*** Compiling your own version

**** Obtaining Fluidity

HECToR has bzr installed, so you can use it to obtain a copy of Fluidity, or you can download the tarballs from http://launchpad.net/fluidity

Bzr can be loaded using the following command:

     module load bzr

Then check out, branch or update Fluidity, and afterwards run:

     module unload bzr

The Python modules required by Fluidity currently do not work while the bzr module is loaded, which is why it is unloaded afterwards.
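As a sketch, the whole checkout workflow looks like this (the lp:fluidity branch URL corresponds to the Launchpad project above; substitute the specific branch you need):

     module load bzr
     bzr branch lp:fluidity fluidity
     module unload bzr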

**** Using central module

There is a centrally available environment which contains the necessary modules for compiling Fluidity, switched to by running:

     module swap PrgEnv-cray PrgEnv-fluidity

This environment loads the same modules as those documented below.

**** Compiling in serial queue

Building Fluidity should be done in the serial queue. This is simple to do, and the source can live in either your home or work directory. A sample PBS script for this is given at https://answers.launchpad.net/fluidity/+faq/2235
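If that FAQ entry is unavailable, a minimal serial-queue build script along the following lines should work. The cput limit and the configure/make invocation are illustrative assumptions; adjust them to your build:

     #!/bin/sh --login
     #
     #PBS -q serial
     #PBS -l cput=03:00:00
     #PBS -A n04-IC

     cd $PBS_O_WORKDIR

     # Load the Fluidity build environment
     module swap PrgEnv-cray PrgEnv-fluidity

     # Configure and build in the checked-out source directory
     ./configure
     make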

**** Using your own module

***** Initial Setup

The environment on HECToR is controlled through environment modules. The following configuration files are recommended, and the remaining instructions assume they have been created.

The file $HOME/.bash_profile should contain:

     # Get the aliases and functions
     if [ -f ~/.bashrc ]; then
         . ~/.bashrc
     fi

The file $HOME/.bashrc should contain:

     # .bashrc

     # Source global definitions
     if [ -f /etc/bashrc ]; then
       . /etc/bashrc
     fi

     export MODULEPATH=$MODULEPATH:$HOME/modules

Next you should create your own directory to store your own environment modules, with:

     mkdir -p ~/modules

Note that this matches the MODULEPATH set in your .bashrc file above.

***** Static Build Environment: GNU

You can use the following module, PrgEnv-fluidity-myownmodule, to load an environment suitable for building Fluidity using the GNU compilers. To load this module use the command:

     module swap PrgEnv-cray PrgEnv-fluidity-myownmodule

The module should be in a file called PrgEnv-fluidity-myownmodule in your module path (i.e. ~/modules) and contain something like the module given at https://answers.launchpad.net/fluidity/+faq/2236

This is the most up-to-date working module on HECToR that is used for the buildbot test.
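For orientation, environment modules are Tcl files. A skeleton for ~/modules/PrgEnv-fluidity-myownmodule might look like the following; the specific swapped environment and compiler variables are illustrative assumptions, not the actual contents of the FAQ entry above:

     #%Module
     proc ModulesHelp { } {
         puts stderr "Build environment for Fluidity using the GNU compilers"
     }

     # Swap in the GNU programming environment (assumed; see the FAQ entry)
     module swap PrgEnv-cray PrgEnv-gnu

     # Compiler wrappers used by the Fluidity build (assumed)
     setenv CC  cc
     setenv CXX CC
     setenv FC  ftn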

*** Profiling on HECToR

Having compiled your own copy of ICOM/Fluidity on HECToR, some profiling may be necessary to tune the code for best performance. On HECToR, CrayPat and Vampir are provided for this purpose. Both tools need to be built into, or linked against, your binary. Below are details of how to set up your environment and build ICOM/Fluidity to do this.

**** Vampir

The module fluidity-vampir sets up an environment to build ICOM/Fluidity with support for analysis using VampirTrace. The contents of this module are given at https://answers.launchpad.net/fluidity/+faq/2237

**** CrayPat

https://answers.launchpad.net/fluidity/+faq/2238 gives the contents of a module file, fluidity-gcc-xt4-craypat, for setting up the appropriate environment for building ICOM/fluidity to then profile using CrayPat.

Running pat_build can be quite CPU intensive, so it is often better to submit it to the serial queue. An example PBS submission script to do this is:

     #!/bin/sh
     #
     #PBS -q serial
     #PBS -l cput=01:00:00
     #PBS -A n04-IC

     cd $PBS_O_WORKDIR

     module load xt-craypat

     pat_build -u bin/fluidity bin/fluidity+pat

Use the instrumented binary as you would normally. Note that due to the overhead of the instrumentation, the binary could run at approximately half speed.

After you run your simulation you will have a file with the extension .xf. To convert this into a readable report, use the following command (substituting your own executable name):

     pat_report fluidity+pat+18324-1718tdt.xf > report.dat

This will give you a raw text version of the report in the file report.dat. You will also notice that another file with the extension .ap2 is created when pat_report runs. You can examine this with the Apprentice2 GUI:

     module load apprentice2
     app2 fluidity+pat+18324-1718tdt.ap2

**** CrayPat (API)

Below is the method used to obtain MPI profiling statistics for particular routines and sections of ICOM. It involves calls to the CrayPat API to instrument the code manually, rather than a complete sampling/tracing experiment as described above. The CrayPat API can be used not just for MPI statistics but for all profiling information available from CrayPat; it is useful when you only want to profile small sections of code rather than trace the whole program. The examples below cover MPI profiling only.

In each file you intend to call the CrayPat API from you must include the relevant header file:

C/C++:
     #include <pat_api.h>

Fortran:
     include 'pat_apif.h'

To profile MPI statistics for particular routines, first call the appropriate CrayPat function in the main program to make sure profiling is initially off, then switch profiling back on within or around the routines of interest.

Switching on/off profiling is done using the PAT_record function. To turn off profiling give PAT_STATE_OFF as the first argument to the function, or to turn on profiling give PAT_STATE_ON.

C/C++:
 PAT_record(PAT_STATE_OFF);
 PAT_record(PAT_STATE_ON);

Fortran:
 integer :: istat
 call PAT_record(PAT_STATE_OFF, istat)
 call PAT_record(PAT_STATE_ON, istat)

Once you have added calls to the CrayPat API, build the code as follows. First follow the instructions in the 'compiling' section above to load the static build environment, then load the CrayPat module, which adds the relevant header files to your include paths:

     module load xt-craypat

Build the code as normal.

Finally we need to build the instrumented executable using the pat_build utility. In this case we're looking for MPI information about the specified regions, so use the "-g mpi" flag:

     pat_build -g mpi bin/fluidity bin/fluidity+regions+mpi+pat

This step can be time consuming so you should run it in the serial queue. This can be done with the following PBS script:

     #!/bin/sh --login
     #
     #PBS -q serial
     #PBS -l cput=01:00:00
     #PBS -A n04-IC

     cd $PBS_O_WORKDIR

     #set the environment
     module load fluidity-gcc-xt4

     #load the CrayPat module
     module load xt-craypat

     #create the instrumented executable
     pat_build -g mpi bin/fluidity bin/fluidity+regions+mpi+pat

You can then run the instrumented executable, fluidity+regions+mpi+pat, as normal. This produces the same output file as described above, which can be formatted into a report or viewed with Apprentice2 in the same way.