
Table of Contents

Introduction

Singularity enables users to have full control of their environment. Singularity containers can be used to package entire scientific workflows, software and libraries, and even data. This means that you don’t have to ask your cluster admin to install anything for you - you can put it in a Singularity container and run. (Ref: http://singularity.lbl.gov/)

...

No Format
Initial Prep:
=============

Copy the images into the local container folder:

mkdir ~/Container
cp /sw/Containers/singularity/images/delft3d4_latest.sif  ~/Container
cp /sw/Containers/singularity/images/delft3dfm_latest.sif  ~/Container


Shell into the container and create launcher script:
====================================================

Initial setup

1. Shell into the container
=============================
For example:

singularity shell -B /scratch/s12345:/scratch --pwd /scratch  /export/home/s12345/Container/delft3d4_latest.sif

2. Create a variable file you can source
=========================================

Once in the shell, create a variable file with contents like this:

vi /scratch/delft3denv.sh
export LD_LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LIBRARY_PATH
export PATH=/opt/delft3d_latest/lnx64/bin:$PATH
module load mpich-x86_64

3. Manual Test Run
===================
source /scratch/delft3denv.sh
d_hydro

For example:

Singularity delft3d4_latest.sif:/scratch> d_hydro
d_hydro ABORT: Improper usage.  Execute "d_hydro -?" for command-line syntax


4. Create a Launcher Script
===========================
Once you are happy that steps 1, 2 and 3 all work, you can create a launcher script.
While still inside the shell:

vi /scratch/d_hydroLauncher.sh

An example of the content is this:
>>>>>>>>>>>>
export LD_LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LIBRARY_PATH
export PATH=/opt/delft3d_latest/lnx64/bin:$PATH
module load mpich-x86_64
cd /scratch
d_hydro 
>>>>>>>>>>>>>
Make it executable:
chmod +x /scratch/d_hydroLauncher.sh
Check that it gives the expected results:

/scratch/d_hydroLauncher.sh


5. Running it as a batch job:
==============================
Exit the shell and create this pbs script anywhere in your home directory:

A: Final d_hydroLauncher.sh
===========================
more ~/scratch/d_hydroLauncher.sh
/opt/delft3d_latest/lnx64/bin/d_hydro

B: Final PBS script
====================
more pbs.01
>>>>>>>>>
#!/bin/bash 
#PBS -m abe
#PBS -M YourEmail@griffith.edu.au
#PBS -N DelftJob  
#PBS -q gworkq  
#PBS -l select=1:ncpus=1:mem=12gb,walltime=00:00:30
cd  $PBS_O_WORKDIR
DELFT_PATH="/opt/delft3d_latest/lnx64/"
##export SINGULARITY_BINDPATH="$DELFT_PATH/bin" # Append this location to the container PATH
##export SINGULARITY_BINDPATH=/scratch,/fast/tmp,/gpfs01
export SINGULARITYENV_LD_LIBRARY_PATH=$DELFT_PATH/lib:$LD_LIBRARY_PATH
export SINGULARITYENV_LIBRARY_PATH=$DELFT_PATH/lib:$LIBRARY_PATH
export SINGULARITYENV_PATH=$DELFT_PATH/bin:$PATH
singularity exec -B /scratch/s12345:/scratch  /export/home/s12345/Container/delft3d4_latest.sif "/scratch/d_hydroLauncher.sh"
##d_hydro
sleep 2
exit
>>>>>>

Submit the job and check the results.

qsub pbs.01
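
A quick way to check on the job after submission (a sketch; the job ID and output file names depend on your scheduler setup and the #PBS -N name used above):

qsub pbs.01            # prints a job ID, e.g. 123456.pbsserver
qstat -u $USER         # Q = queued, R = running, no entry = finished
# once the job ends, PBS writes DelftJob.o<jobid> and DelftJob.e<jobid> in the submit directory
cat DelftJob.o*        # inspect stdout from the run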

Second method without using the launcher script:
=================================================
#!/bin/bash 
#PBS -m abe
#PBS -V
#PBS -M YourEmail@griffith.edu.au
#PBS -N DelftJob  
#PBS -q gworkq  
#PBS -l select=1:ncpus=1:mem=12gb,walltime=00:00:30
DELFT_PATH="/opt/delft3d_latest/lnx64/"
export SINGULARITYENV_LD_LIBRARY_PATH=$DELFT_PATH/lib
export SINGULARITYENV_LIBRARY_PATH=$DELFT_PATH/lib
export SINGULARITYENV_PATH=$DELFT_PATH/bin
cd $HOME/scratch
singularity exec /export/home/s12345/Container/delft3d4_latest.sif d_hydro swaninit
sleep 2
exit



...

To create containers, we use public.docker.itc.griffith.edu.au
A simple "singularity pull" through s3proxy stopped working for Docker Hub around Oct 2020 because of the pull-rate limits Docker introduced. To avoid hitting the limits, the Griffith team blocked direct access to the Docker Hub registry through the s3proxy http proxy; pull through public.docker.itc.griffith.edu.au instead (see https://www.docker.com/blog/understanding-inner-loop-development-and-pull-rates/ for background).

...

No Format
To build an image 
singularity pull  docker://public.docker.itc.griffith.edu.au/biocontainers/blast:2.2.31

To run a container
singularity run docker://public.docker.itc.griffith.edu.au/godlovedc/lolcow

# Download the container into a permanent image file
###Old method which will not work anymore ###http_proxy=http://s3proxy.itc.griffith.edu.au:3128  singularity pull my_blast.sif docker://biocontainers/blast:2.2.31 

singularity pull my_blast.sif docker://public.docker.itc.griffith.edu.au/biocontainers/blast:2.2.31

Another example:
singularity pull lolcow_latest.sif docker://public.docker.itc.griffith.edu.au/godlovedc/lolcow

# Now run it from the image file
singularity exec my_blast.sif blastp -version
blastp: 2.2.31+
Package: blast 2.2.31, build Apr 23 2016 15:49:47
Using GPU(s) with a Container
This is useful on the n060 node, which has GPU cards.

A Docker or Singularity container built to use an NVIDIA GPU should be run with the --nv option:

singularity pull docker://public.docker.itc.griffith.edu.au/tensorflow/tensorflow:latest-gpu-jupyter

singularity run --nv  tensorflow_latest-gpu-jupyter.sif

(-B /run is needed for some images, e.g. those using Jupyter, that give an error otherwise. For those, use this syntax:
singularity run --nv -B /run tensorflow_latest-gpu-jupyter.sif)
Most Docker containers run successfully under Singularity. Some need to be run with special options, and a few cannot be used at all, because Docker is optimized for running services in isolated containers.

You may need to add back other cluster filesystems with the -B option, for example:
singularity exec --contain -B /project -B /archive 

singularity exec -B /scratch/s1234:/scratch --pwd /scratch  /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos env | more

singularity exec -B /scratch/s1234:/scratch --pwd /scratch  /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos cat /scratch/singularity_launch.03

Look at the runscript and environment
=====================================
singularity inspect -e  /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos

singularity inspect -r  /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos

singularity exec -B /scratch/s1234:/scratch --pwd /scratch  /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos /.singularity.d/runscript

Another PBS example from an actual run

Code Block
cat pbs.01
>>>>
#!/bin/bash 
#PBS -m e
#PBS -M email@griffith.edu.au
#PBS -N CEINMS256Calib
#PBS -q medium
#PBS -l select=1:ncpus=4:mem=16gb,walltime=30:00:00
cd  $PBS_O_WORKDIR
singularity exec -B /scratch/s123456:/scratch --pwd /scratch  /export/home/s123456/Container/ceinms_latest.sif "/scratch/Scripts/ceinmsLauncher.sh"
sleep 2
exit
>>>>>>

cat /scratch/s2984644/Scripts/ceinmsLauncher.sh
>>>>>>
export LD_LIBRARY_PATH=/usr/local/lib:/opensim/opensim_install/lib
export PATH=/CEINMS/install/bin:$PATH
CEINMScalibrate -S '/scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/setupCalibration_t00000000.xml'
>>>>>>
Before you submit the job, make sure it is executable. You do that with this:
chmod 700 /scratch/s2984644/Scripts/ceinmsLauncher.sh
>>>>>>
qsub pbs.01
>>>>>
Results:
+-+-+-+-+-+-+
|C|E|I|N|M|S|
+-+-+-+-+-+-+-+-+-+-+
|C|a|l|i|b|r|a|t|e|d|
+-+-+-+-+-+-+-+-+-+-+-+-+
|E|M|G|-|I|n|f|o|r|m|e|d|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|N|e|u|r|o|m|u|s|c|u|l|o|s|k|e|l|e|t|a|l|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|T|o|o|l|b|o|x|
+-+-+-+-+-+-+-+

CEINMScalibrate version 0.30.1
Copyright (C) Nov  3 2022
Claudio Pizzolato, Monica Reggiani, Massimo Sartori, David Lloyd
Software developers: Claudio Pizzolato, Monica Reggiani
readNMSmodelCfg
filesystem error: cannot make canonical path: No such file or directory [/scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/subjectCalibrated_t00000000.xml]Reading subject file: /scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/uncalibrated.xml .
Contact model found
Calibration configuration
 - Algorithm
 -- Simulated Annealing
 --- noEpsilon 4
 --- NS 15
 --- NT 5
 --- maxNoEval 200000
 --- rt 0.3
 --- T 20
 --- epsilon 1e-05
 - NMSmodel
 - CalibrationSteps
 -- Step
 Objective Function: TorqueErrorNormalised
 - Targets Type: Torque
 - Targets: hip_flexion_r knee_angle_r ankle_angle_r 
 - Weight: 1
 - Exponent: 1
 - Trials
 -- /scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/running10.xml
 -- /scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/running11.xml
 -- /scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/running7.xml


Reading subject file: /scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/uncalibrated.xml .
activeForceLength
passiveForceLength
forceVelocity
tendonForceStrain
Assuming addbrev_r pennation angle in radians: 0.114781
Assuming addlong_r pennation angle in radians: 0.13777
Assuming addmagDist_r pennation angle in radians: 0.194705
Assuming addmagIsch_r pennation angle in radians: 0.168044
Assuming addmagMid_r pennation angle in radians: 0.207308
Assuming addmagProx_r pennation angle in radians: 0.311483
<snip>
EMG: Reading emg file.../scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/../../dynamicElaborations/running10/emg.mot
Muscle excitations to muscle mapping:
addbrev_r -> addbrev_r
addlong_r -> addlong_r
addmagDist_r -> addmagDist_r
addmagIsch_r -> addmagIsch_r
addmagMid_r -> addmagMid_r
addmagProx_r -> addmagProx_r
bflh_r -> bflh_r
bfsh_r -> bfsh_r
edl_r -> edl_r
ehl_r -> ehl_r
fdl_r -> fdl_r

<snip>





>>>>


Converting a Docker image into a Singularity image

Ref: https://www.nas.nasa.gov/hecc/support/kb/converting-docker-images-to-singularity-for-use-on-pleiades_643.html

Method 1: Using an Existing Docker Image on Your Local Machine

No Format
1. Find the Docker image ID.
docker images
REPOSITORY        TAG       IMAGE ID        CREATED        SIZE
hello-world       latest    bf756fb1ae65    5 months ago   13.3kB
godlovedc/lolcow  latest    577c1fe8e6d8    2 years ago    241MB

2. Create a tarball of the Docker image. 
For the Docker image you want to port to Griffith HPC, 
for example, godlovedc/lolcow with an image ID of 577c1fe8e6d8, 
create a tarball using the docker save command:
docker save 577c1fe8e6d8 -o lolcow.tar 

3. Copy the tarball to Griffith HPC
Use scp or winscp or cyberduck etc 
scp lolcow.tar 10.250.250.3:/tmp

4. Convert the tarball to a Singularity image.
module load singularity
singularity build --sandbox lolcow docker-archive://lolcow.tar

If the tarball is not in the current working directory, specify the path, for example, /tmp:
singularity build --sandbox lolcow docker-archive:///tmp/lolcow.tar
singularity build --sandbox pytorch docker-archive:///sw/Containers/docker/pytorchnew.tar

In this example, lolcow is the directory name of the Singularity image.

5. Run the Singularity sandbox as usual for testing. Once testing is over, use it in a pbs script (see example above). 
For example: 
singularity shell lolcow
singularity exec lolcow cowsay hello
singularity run lolcow

Method 2: Using an Existing Docker Image on Docker Hub

See section earlier on how to do this.

Method 3: Using a Working Dockerfile Without a Docker Image 

ref: https://www.nas.nasa.gov/hecc/support/kb/building-an-image-using-the-singularity-build-tool_639.html


No Format
This is for advanced users.

source /usr/local/bin/s3proxy.sh
module load singularity
singularity build [--fakeroot] --sandbox lolcow lolcow.def
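
The contents of lolcow.def are not shown on this page; purely as a sketch of the definition-file format, a minimal file that pulls the base image through the Griffith registry could look like this (adjust the From: line and the sections for your own software):

Bootstrap: docker
From: public.docker.itc.griffith.edu.au/godlovedc/lolcow

%post
    # build-time setup goes here, e.g. apt-get/yum or pip installs
    echo "nothing extra to install for this example"

%environment
    export LC_ALL=C

%runscript
    echo "container built from lolcow.def"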


Running a Sandbox May Be More Efficient than Running the .sif File

Ref: https://www.nas.nasa.gov/hecc/support/kb/best-practices-for-running-singularity-on-nas-systems_659.html

During runtime, the .sif file will be converted to a sandbox behind the scenes before being started. The conversion from .sif to sandbox may take a long time or even hang, especially if you are running the container in parallel where there will be multiple conversions at the same time. Consider converting the .sif file into a sandbox explicitly ahead of time, and running the container with the sandbox instead of the .sif file in your run script:

singularity build --sandbox your_container_sandbox your_container.sif
singularity exec [options] your_container_sandbox
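
For example (a sketch, reusing the Delft3D paths from earlier on this page), the conversion can be done once on the login node and the PBS script then pointed at the sandbox directory:

# one-off, on the login node
singularity build --sandbox ~/Container/delft3d4_sandbox ~/Container/delft3d4_latest.sif
# in the PBS script, reference the sandbox directory instead of the .sif
singularity exec -B /scratch/s12345:/scratch ~/Container/delft3d4_sandbox "/scratch/d_hydroLauncher.sh"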

Ensure All Host Filesystems Needed for Read and/or Write are Mounted

When Singularity swaps the host operating system for the one inside your container, the host filesystems become inaccessible. If you need access to any of your host filesystems or directories (such as /scratch/snumber for a certain module or your /export/home/snumber for reading/writing data) you must bind-mount them with the -B or --bind command-line option, or by setting the SINGULARITY_BIND environment variable:

singularity shell -B /scratch:/scratch -B /export/home/snumber:/export/home/snumber /export/home/snumber/sw/Containers/container_sandbox
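
Equivalently (a sketch using the same paths as above), the bind points can be set once through the environment variable instead of repeating -B on every command:

export SINGULARITY_BIND="/scratch:/scratch,/export/home/snumber:/export/home/snumber"
singularity shell /export/home/snumber/sw/Containers/container_sandbox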

Know the Environment Settings of your Container and the Effect of Using the -e Option

Environment variables set in your current session after sourcing your system startup file, such as .cshrc or .bashrc, or after loading a modulefile, are exported into the container. The only exceptions are PATH and LD_LIBRARY_PATH, which are set differently inside the container compared to the host environment.

The -e or --cleanenv option of the singularity [run, exec, or shell] command lets you clean the environment before running the container. As shown in the table below, adding -e to the singularity exec command retains the variables in the left column but removes those in the right column.


Variables retained with -e enabled                    Variables removed with -e enabled
SINGULARITY_XXX                                       DISPLAY, PYTHONSTARTUP
PATH (re-defined), LD_LIBRARY_PATH (re-defined)       SSH_XXX, XXX_RSH, SYSTEMD_LESS
PWD, HOME, uid, user                                  xxxMODULExxx, MANPATH, LMFILES
LANG, TERM, SHELL, SHLVL, PS1                         HOSTxxx, MACHTYPE, OSTYPE, VENDOR
                                                      USER, USER_PATH, LOGNAME, GROUP
                                                      MAIL, EXINIT, CSHEDIT, OSCAR_HOME

There may be a difference in behavior when running your container with or without the -e option. For example, if you want to run a graphical application, such as ParaView, within the container, you should add --env DISPLAY=$DISPLAY to the singularity shell command if the -e option is used to start the container:


singularity shell -e --env DISPLAY=$DISPLAY your_container_sandbox
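
A quick way to see the effect on your own container (a sketch; substitute your sandbox or .sif name) is to compare the environment inside the container with and without -e:

singularity exec your_container_sandbox env | sort > env_default.txt
singularity exec -e your_container_sandbox env | sort > env_clean.txt
diff env_default.txt env_clean.txt | less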

Likely Places to Find Executables or Libraries Inside the Container

For many containers, the default PATH or LD_LIBRARY_PATH might not include paths of executables or libraries needed to run the application of interest. If the container includes MPI libraries, they are commonly installed inside the container in the /usr/local or /usr/local/mpi directories. Other packages are likely installed under the /opt directory of the container. If you still cannot find the executables or libraries and there is no documentation about the container, contact the provider of the container.
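
If you are not sure where a program landed inside a container, a crude but effective check (a sketch; yourprogram is a placeholder, and this assumes find is available inside the container) is:

# show the PATH the container sets by default
singularity exec your_container.sif env | grep ^PATH
# search the usual install locations for the program you need
singularity exec your_container.sif find /opt /usr/local -maxdepth 4 -iname '*yourprogram*' 2>/dev/null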

Reference

  1. http://singularity.lbl.gov/
  2. https://www.singularity-hub.org/
  3. https://singularityhub.github.io/containers/registry/singularity-hub-registry/
  4. https://singularity.lbl.gov/user-guide
  5. https://hpc.research.uts.edu.au/software_general/singularity/
  6. https://cran.r-project.org/web/views/HighPerformanceComputing.html
  7. https://support.pawsey.org.au/documentation/display/US/Running+RStudio+on+Zeus+with+Singularity
  8. https://www.katacoda.com/courses/docker
  9. https://pawseysc.github.io/containers-bioinformatics-workshop/5.build/index.html
  10. https://quay.io/
  11. https://pawseysc.github.io/containers-bioinformatics-workshop/3.pipeline/index.html
  12. https://ist.mit.edu/xwin32
  13. https://pawseysc.github.io/containers-bioinformatics-workshop/1.prep1-ssh/index.html

...