Introduction
Singularity enables users to have full control of their environment. Singularity containers can be used to package entire scientific workflows, software and libraries, and even data. This means that you don’t have to ask your cluster admin to install anything for you - you can put it in a Singularity container and run. (Ref: http://singularity.lbl.gov/)
...
Initial Prep:
=============
Copy the images into the local container folder:
mkdir ~/Container
cp /sw/Containers/singularity/images/delft3d4_latest.sif ~/Container
cp /sw/Containers/singularity/images/delft3dfm_latest.sif ~/Container
Shell into the container and create launcher script:
====================================================
Initial setup
1. Shell into the container
============================
For example:
singularity shell -B /scratch/s12345:/scratch --pwd /scratch /export/home/s12345/Container/delft3d4_latest.sif
2. Create a variable file you can source
=========================================
Once in the shell, create a variable file with contents like this:
vi /scratch/delft3denv.sh
export LD_LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LIBRARY_PATH
export PATH=/opt/delft3d_latest/lnx64/bin:$PATH
module load mpich-x86_64
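The same file can also be created and sanity-checked non-interactively. This is only a sketch: /tmp is used here for illustration (inside the container you would write to /scratch as above), and the module load line is omitted because the mpich-x86_64 module exists only inside the container.

```shell
# Sketch: write the environment file without an editor and verify it took effect
# (paths taken from this guide; /tmp used here only for illustration)
DELFT=/opt/delft3d_latest/lnx64
cat > /tmp/delft3denv.sh <<EOF
export LD_LIBRARY_PATH=$DELFT/lib:\$LD_LIBRARY_PATH
export LIBRARY_PATH=$DELFT/lib:\$LIBRARY_PATH
export PATH=$DELFT/bin:\$PATH
EOF
source /tmp/delft3denv.sh
echo "$PATH" | grep -q "$DELFT/bin" && echo "Delft3D bin dir is on PATH"
```

If the final echo does not print, the file was not sourced or the paths do not match your installation.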
3. Manual Test Run
===================
source /scratch/delft3denv.sh
d_hydro
For example:
Singularity delft3d4_latest.sif:/scratch> d_hydro
d_hydro ABORT: Improper usage. Execute "d_hydro -?" for command-line syntax
4. Create a Launcher Script
===========================
Once you are happy that steps 1, 2 and 3 all work, you can create a launcher script.
While still inside the shell:
vi /scratch/d_hydroLauncher.sh
An example of the content is this:
>>>>>>>>>>>>
export LD_LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LIBRARY_PATH
export PATH=/opt/delft3d_latest/lnx64/bin:$PATH
module load mpich-x86_64
cd /scratch
d_hydro
>>>>>>>>>>>>>
Make it executable:
chmod +x d_hydroLauncher.sh
Check if it gives the expected results:
/scratch/d_hydroLauncher.sh
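If you prefer not to use vi, the launcher can also be generated in one step with a heredoc. This sketch writes to /tmp purely for illustration; inside the container the path would be /scratch/d_hydroLauncher.sh, and the content matches the example above.

```shell
# Sketch: generate the launcher script non-interactively
# (content as in this guide; /tmp used here only for illustration)
cat > /tmp/d_hydroLauncher.sh <<'EOF'
export LD_LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LIBRARY_PATH
export PATH=/opt/delft3d_latest/lnx64/bin:$PATH
module load mpich-x86_64
cd /scratch
d_hydro
EOF
chmod +x /tmp/d_hydroLauncher.sh
ls -l /tmp/d_hydroLauncher.sh
```

The quoted 'EOF' delimiter keeps the $PATH-style variables literal so they are expanded when the launcher runs, not when it is written.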
5. Running it as a batch job:
==============================
Exit the shell and create this PBS script anywhere in your home directory:
A: Final d_hydroLauncher.sh
===========================
more ~/scratch/d_hydroLauncher.sh
/opt/delft3d_latest/lnx64/bin/d_hydro
B: Final PBS script
====================
more pbs.01
>>>>>>>>>
#!/bin/bash
#PBS -m abe
#PBS -M YourEmail@griffith.edu.au
#PBS -N DelftJob
#PBS -q gworkq
#PBS -l select=1:ncpus=1:mem=12gb,walltime=00:00:30
cd $PBS_O_WORKDIR
DELFT_PATH="/opt/delft3d_latest/lnx64/"
##export SINGULARITY_BINDPATH="$DELFT_PATH/bin" # Append this location to the container PATH
##export SINGULARITY_BINDPATH=/scratch,/fast/tmp,/gpfs01
export SINGULARITYENV_LD_LIBRARY_PATH=$DELFT_PATH/lib:$LD_LIBRARY_PATH
export SINGULARITYENV_LIBRARY_PATH=$DELFT_PATH/lib:$LIBRARY_PATH
export SINGULARITYENV_PATH=$DELFT_PATH/bin:$PATH
singularity exec -B /scratch/s12345:/scratch /export/home/s12345/Container/delft3d4_latest.sif "/scratch/d_hydroLauncher.sh"
##d_hydro
sleep 2
exit
>>>>>>
Submit the job and check the results.
qsub pbs.01
Second method without using the launcher script:
=================================================
#!/bin/bash
#PBS -m abe
#PBS -V
#PBS -M YourEmail@griffith.edu.au
#PBS -N DelftJob
#PBS -q gworkq
#PBS -l select=1:ncpus=1:mem=12gb,walltime=00:00:30
DELFT_PATH="/opt/delft3d_latest/lnx64/"
export SINGULARITYENV_LD_LIBRARY_PATH=$DELFT_PATH/lib
export SINGULARITYENV_LIBRARY_PATH=$DELFT_PATH/lib
export SINGULARITYENV_PATH=$DELFT_PATH/bin
cd $HOME/scratch
singularity exec /export/home/s12345/Container/delft3d4_latest.sif d_hydro swaninit
sleep 2
exit
...
To create containers, we use public.docker.itc.griffith.edu.au.
A simple "singularity pull" through the s3proxy HTTP proxy stopped working for Docker Hub around October 2020 because of the pull-rate limits Docker introduced. To avoid hitting these limits, the Griffith team blocked direct access through the s3proxy proxy to the Docker Hub registry (see https://www.docker.com/blog/understanding-inner-loop-development-and-pull-rates/ for background).
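A hypothetical pull through the local registry might look like the following. The registry host is the one named above; the image name here is purely illustrative, not a confirmed entry in the Griffith registry, so check with the cluster team for the actual catalogue.

```shell
# Hypothetical: pull through the local registry instead of Docker Hub directly
# (image name is illustrative only)
REGISTRY=public.docker.itc.griffith.edu.au
IMAGE=delft3d4:latest
PULL_CMD="singularity pull docker://$REGISTRY/$IMAGE"
echo "$PULL_CMD"   # run this on the cluster login node
```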
...
The -e (or --cleanenv) option of the singularity run, exec, or shell commands cleans the environment before running the container. As shown in the table below, adding -e to the singularity exec command retains the variables in the left column but removes those in the right column.
Variables retained with -e enabled | Variables removed with -e enabled
---|---
SINGULARITY_XXX | DISPLAY, PYTHONSTARTUP
PATH (re-defined), LD_LIBRARY_PATH (re-defined) | SSH_XXX, XXX_RSH, SYSTEMD_LESS
PWD, HOME, uid, user | xxxMODULExxx, MANPATH, LMFILES
LANG, TERM, SHELL, SHLVL, PS1 | HOSTxxx, MACHTYPE, OSTYPE, VENDOR
 | USER, USER_PATH, LOGNAME, GROUP
 | MAIL, EXINIT, CSHEDIT, OSCAR_HOME
There may be a difference in behavior when running your container with or without the -e option. For example, if you want to run a graphical application, such as ParaView, within the container, you should add --env DISPLAY=$DISPLAY to the singularity shell command if the -e option is used to start the container:
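A sketch of the combination described above, using the image path from earlier in this guide. Note that the --env flag is only available in Singularity 3.6 and later; on older versions you would set SINGULARITYENV_DISPLAY instead.

```shell
# Sketch: start a clean-environment shell but pass DISPLAY back in for GUI apps
# (--env requires Singularity >= 3.6; image path as used earlier in this guide)
IMG=$HOME/Container/delft3d4_latest.sif
SHELL_CMD="singularity shell -e --env DISPLAY=$DISPLAY $IMG"
echo "$SHELL_CMD"   # run on a node with X forwarding enabled
```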
...