Introduction
Singularity enables users to have full control of their environment. Singularity containers can be used to package entire scientific workflows, software and libraries, and even data. This means that you don’t have to ask your cluster admin to install anything for you - you can put it in a Singularity container and run. (Ref: http://singularity.lbl.gov/)
Usage
Singularity is installed on all nodes on the new cluster. On the old cluster, usage is as follows:

module load singularity/2.4.6
OR
module load singularity/2.2

Installation images are kept here: /sw/Containers/singularity/images

singularity shell /sw/Containers/singularity/images/ubuntu-16.04.4-10GB.img

Another example:

module load singularity/2.4.6
singularity shell /sw/Containers/singularity/images/openEMS-ubuntu-16.04.img
Sample PBS script
#!/bin/bash -l
#PBS -m abe
#PBS -M YourEmail@griffith.edu.au
#PBS -V
#PBS -N TestYade
#PBS -q dljun@n060
#PBS -W group_list=deeplearning -A deeplearning
#PBS -l select=1:ncpus=1:ngpus=1:mem=32gb,walltime=30:00:00
cd $PBS_O_WORKDIR
###GPUNUM=`echo $CUDA_VISIBLE_DEVICES`
module load singularity/3.2.0
#singularity shell /sw/centos7/singularity/Containers/singularity/3.2.0/images/YadeBox.simg
##singularity shell --nv /sw/centos7/singularity/Containers/singularity/3.2.0/images/YadeBox.simg /export/home/s123456/pbs/singularity/yadeLauncher.sh
singularity exec /sw/centos7/singularity/Containers/singularity/3.2.0/images/YadeBox.simg /export/home/s123456/pbs/singularity/yadeLauncher.sh
#cat /etc/os-release
exit
sleep 2
cat yadeLauncher.sh

#! /bin/bash
#load the anaconda tensorflow environment
module load anaconda/5.3.1py3
source activate tensorflow-gpu
###echo $CUDA_VISIBLE_DEVICES
###GPUNUM=`echo $CUDA_VISIBLE_DEVICES`
#navigate to your directory
##cd /export/home/s123456/pbs/singularity/yade/
#run the code
yade /export/home/s123456/pbs/singularity/yade/DST_parametric_M1c_Layer1.py
Singularity on the new cluster
Singularity rpm is installed on all compute nodes
Here is how to use the Ubuntu container named YadeBox.simg on the new cluster, e.g.:

singularity shell /sw/singularity/Containers/singularity/3.2.0/images/YadeBox.simg

Singularity YadeBox.simg:/> cat /etc/*release*
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.6 LTS"
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
Example of using one of the containers - delft3d4
Initial Prep:
=============
Copy the images into the local container folder:

mkdir ~/Container
cp /sw/Containers/singularity/images/delft3d4_latest.sif ~/Container
cp /sw/Containers/singularity/images/delft3dfm_latest.sif ~/Container

Shell into the container and create a launcher script:
=======================================================
Initial setup

1. Shell into the container
============================
For example (replace s12345 with your own username):

singularity shell -B /scratch/s12345:/scratch --pwd /scratch /export/home/s12345/Container/delft3d4_latest.sif

A concrete example:

singularity shell -B /scratch/s2961948:/scratch --pwd /scratch /export/home/s2961948/Container/delft3d4_latest.sif

2. Create a variable file you can source
=========================================
Once in the shell, create a variable file with contents like this:

vi /scratch/delft3denv.sh

export LD_LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LIBRARY_PATH
export PATH=/opt/delft3d_latest/lnx64/bin:$PATH
module load mpich-x86_64

3. Manual test run
===================

source /scratch/delft3denv.sh
d_hydro

For example:

Singularity delft3d4_latest.sif:/scratch> d_hydro
d_hydro ABORT: Improper usage. Execute "d_hydro -?" for command-line syntax

4. Create a launcher script
============================
Once you are happy that steps 1, 2 and 3 work, you can create a launcher script.
While still inside the shell:

vi /scratch/d_hydroLauncher.sh

An example of the content is this:
>>>>>>>>>>>>
export LD_LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LIBRARY_PATH
export PATH=/opt/delft3d_latest/lnx64/bin:$PATH
module load mpich-x86_64
cd /scratch
d_hydro
>>>>>>>>>>>>>

Make it executable:

chmod +x d_hydroLauncher.sh

Check that it gives the expected results:

/scratch/d_hydroLauncher.sh

5. Running it as a batch job
==============================
Exit the shell and create this PBS script anywhere in your home directory.

A: Final d_hydroLauncher.sh
===========================
more ~/scratch/d_hydroLauncher.sh
/opt/delft3d_latest/lnx64/bin/d_hydro

B: Final PBS script
====================
more pbs.01
>>>>>>>>>
#!/bin/bash
#PBS -m abe
#PBS -M YourEmail@griffith.edu.au
#PBS -N DelftJob
#PBS -q gworkq
#PBS -l select=1:ncpus=1:mem=12gb,walltime=00:00:30
cd $PBS_O_WORKDIR
DELFT_PATH="/opt/delft3d_latest/lnx64/"
##export SINGULARITY_BINDPATH="$DELFT_PATH/bin"
# Append this location to the container PATH
export SINGULARITYENV_LD_LIBRARY_PATH=$DELFT_PATH/lib:$LD_LIBRARY_PATH
export SINGULARITYENV_LIBRARY_PATH=$DELFT_PATH/lib:$LIBRARY_PATH
export SINGULARITYENV_PATH=$DELFT_PATH/bin:$PATH
singularity exec -B /scratch/s12345:/scratch /export/home/s12345/Container/delft3d4_latest.sif "/scratch/d_hydroLauncher.sh"
##d_hydro
exit
sleep 2
>>>>>>

Submit the job and check the results:

qsub pbs.01

Second method, without using the launcher script:
=================================================
#!/bin/bash
#PBS -m abe
#PBS -V
#PBS -M YourEmail@griffith.edu.au
#PBS -N DelftJob
#PBS -q gworkq
#PBS -l select=1:ncpus=1:mem=12gb,walltime=00:00:30
DELFT_PATH="/opt/delft3d_latest/lnx64/"
export SINGULARITYENV_LD_LIBRARY_PATH=$DELFT_PATH/lib:
export SINGULARITYENV_LIBRARY_PATH=$DELFT_PATH/lib
export SINGULARITYENV_PATH=$DELFT_PATH/bin
cd $HOME/scratch
singularity exec /export/home/s12345/Container/delft3d4_latest.sif d_hydro swaninit
exit
sleep 2
PBS script: Running an mpi job with singularity
#!/bin/bash
#PBS -V
#PBS -M YourEmail@griffith.edu.au
#PBS -N DelftJobMPI
#PBS -q gworkq
#PBS -l select=1:ncpus=5:mem=12gb,walltime=00:02:00
module load mpi/mpich-3.2-x86_64 # note that this is the available module
DELFT_PATH="/opt/delft3d_latest/lnx64"
MPICH_ROOT="/usr/lib64/mpich-3.2"
##export SINGULARITY_BINDPATH="$DELFT_PATH/bin"
# Append this location to the container PATH
export SINGULARITYENV_LD_LIBRARY_PATH=$DELFT_PATH/lib:$MPICH_ROOT/lib
# export SINGULARITYENV_LIBRARY_PATH=$DELFT_PATH/lib:$MPICH_ROOT
export SINGULARITYENV_PATH=$DELFT_PATH/bin
export SINGULARITY_BINDPATH="$MPICH_ROOT"
cd $PBS_O_WORKDIR
NPROCS=$( cat $PBS_NODEFILE | wc -l )
mpiexec -n $NPROCS singularity exec $HOME/Container/delft3d4_latest.sif d_hydro swaninit
exit
sleep 2
Another example: Using MitoZ container
Ref: https://wiki.srce.hr/display/RKI/MitoZ

Initial Prep:
=============
Copy the images into the local container folder:

mkdir ~/Container
cp /sw/Containers/singularity/images/MitoZ.simg ~/Container/

Shell into the container and create a launcher script:
=======================================================
Initial setup

1. Shell into the container
============================
For example (replace s12345 with your own username):

singularity shell -B /scratch/s12345:/scratch --pwd /scratch /export/home/s12345/Container/MitoZ.simg

A concrete example:

singularity shell -B /scratch/s2981868:/scratch --pwd /scratch /export/home/s2981868/Container/MitoZ.simg

2. Determine where the app is installed
=========================================
e.g. /app/release_MitoZ_v2.2
You may need to export the PATH variable as well as LD_LIBRARY_PATH and LIBRARY_PATH.

3. Create a variable file you can source
=========================================
Once in the shell, create a variable file with contents like this, commenting out anything that is not needed:

more /scratch/Mitoenv.sh

echo "export PATH=/app/release_MitoZ_v2.2:/app/release_MitoZ_v2.2/bin:$PATH" >/scratch/Mitoenv.sh
#Other environment variables that may need to be exported are the following:
#echo "export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH" >>/scratch/Mitoenv.sh
#echo "export LIBRARY_PATH=/usr/local/lib:$LIBRARY_PATH" >>/scratch/Mitoenv.sh
#echo "export PATH=/app/release_MitoZ_v2.2:/app/release_MitoZ_v2.2/bin:$PATH" >>/scratch/Mitoenv.sh
#module load mpich-x86_64

Source this file and check that the app launches properly, e.g.:

source /scratch/Mitoenv.sh

4. Create a launcher script
============================
Once you are happy that steps 1, 2 and 3 work, you can create a launcher script.
You can do this after exiting the container shell:

vi ~/scratch/mitozLauncher.sh

An example of the content is this:
>>>>>>>>>>>>
export PATH=/app/release_MitoZ_v2.2:/app/release_MitoZ_v2.2/bin:$PATH
#Other environment variables that may need to be exported are the following:
#export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
#export LIBRARY_PATH=/usr/local/lib:$LIBRARY_PATH
#export PATH=/app/release_MitoZ_v2.2:/app/release_MitoZ_v2.2/bin:$PATH
#module load mpich-x86_64
cd /scratch
MitoZ.py
>>>>>>>>>>>>>

Make it executable:

chmod +x ~/scratch/mitozLauncher.sh

Check that it gives the expected results:

singularity exec -B /scratch/s2981868:/scratch --pwd /scratch /export/home/s2981868/Container/MitoZ.simg "/scratch/mitozLauncher.sh"

Once you are satisfied, you may add more complex command instructions to this launcher file.

5. Running it as a batch job
==============================
Outside of the container, create this PBS script anywhere in your home directory:

vi ~/scratch/pbs.01
>>>>>>>>>
#!/bin/bash
#PBS -m abe
#PBS -M YourEmail@griffith.edu.au
#PBS -N TestMitoZ
#PBS -q workq
#PBS -l select=1:ncpus=1:mem=2gb,walltime=30:00:00
cd $PBS_O_WORKDIR
singularity exec -B /scratch/s2981868:/scratch --pwd /scratch /export/home/s2981868/Container/MitoZ.simg "/scratch/mitozLauncher.sh"
exit
sleep 2
>>>>>>

Submit the job and check the results:

qsub pbs.01
Useful ways to pass environment variables
If you need to change the $PATH of your container at runtime, there are a few environment variables you can use:
- SINGULARITYENV_PREPEND_PATH=/good/stuff/at/beginning to prepend directories to the beginning of the $PATH
- SINGULARITYENV_APPEND_PATH=/good/stuff/at/end to append directories to the end of the $PATH
- SINGULARITYENV_PATH=/a/new/path to override the $PATH within the container
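For example, a minimal sketch (the image name my_app.sif and the directory /opt/myapp/bin are hypothetical placeholders, not real paths on the cluster):

# Prepend a directory that exists inside the container to the container's PATH
export SINGULARITYENV_PREPEND_PATH=/opt/myapp/bin
# Check the resulting PATH from inside the container
singularity exec my_app.sif sh -c 'echo $PATH'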
Training Notes and Videos
- Part 1: Singularity Containers
- Part 2: Customising Containers
- Part 3: Containers in the HPC environment
- Tutorial and GitHub
- Setup singularity on your workstation
- https://pawseysc.github.io/singularity-containers/32-writable-trinity/index.html
- https://pawseysc.github.io/singularity-containers/setup.html
- https://support.pawsey.org.au/documentation/display/US/Running+RStudio+on+Zeus+with+Singularity
Singularity installation notes on the gowonda cluster
Please note: The build process is not required for using pre-built containers. Building images can only be done on the user's own machine, not on the cluster.
module purge
module load misc/squashfs/4.3
module load misc/libarchive/3.3.1
cd /sw/singularity/2.4.6/src/singularity-2.4.6
./autogen.sh
./configure --prefix=/sw/singularity/2.4.6 2>&1 | tee configureLog.txt
make 2>&1 | tee makeLog.txt
make install 2>&1 | tee makeInstall.txt
module load misc/singularity/2.4.6
cd /project/mus/singularity
singularity build centos7.img centos7.def

cat centos7.def
>>>>>>>>>>>
BootStrap: yum
OSVersion: 7
MirrorURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/os/$basearch/
Include: yum

# If you want the latest updates to be installed inside the container
# during the bootstrapping process then uncomment the following line:
# UpdateURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/updates/$basearch/

%runscript
echo "This is what happens when you run the container..."

%setup
# These are only required if running on the HPC.
export SINGULARITY_ROOTFS=/project/mus/singularity
mkdir -p ${SINGULARITY_ROOTFS}/opt
mkdir -p ${SINGULARITY_ROOTFS}/scratch
mkdir -p ${SINGULARITY_ROOTFS}/shared

%post
echo "Hello from inside the container"
# yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y
yum install epel-release -y
yum -y install vim-minimal
>>>>>>>>>>>

Other options:

singularity pull shub://singularity-hub.org/truatpasteurdotfr/singularity-docker-centos7-devtoolset
Singularity usage on Griffith HPC
To create containers, we use public.docker.itc.griffith.edu.au
Simple "singularity pull" with s3proxy stopped working for docker hubs around Oct 2020. This is due to the changes (limits) docker introduced. To avoid hitting the limits, Griffith team implemented steps to stop direct access through the s3proxy http proxy to the docker hub registry (see https://www.docker.com/blog/understanding-inner-loop-development-and-pull-rates/ for notes)
To build an image:

singularity pull docker://public.docker.itc.griffith.edu.au/biocontainers/blast:2.2.31

To run a container:

singularity run docker://public.docker.itc.griffith.edu.au/godlovedc/lolcow

# Download the container into a permanent image file
###Old method which will not work anymore
###http_proxy=http://s3proxy.itc.griffith.edu.au:3128 singularity pull my_blast.sif docker://biocontainers/blast:2.2.31
singularity pull my_blast.sif docker://public.docker.itc.griffith.edu.au/biocontainers/blast:2.2.31

Another example:

singularity pull lolcow_latest.sif docker://public.docker.itc.griffith.edu.au/godlovedc/lolcow

# Now run it from the image file
singularity exec my_blast.sif blastp -version
blastp: 2.2.31+
Package: blast 2.2.31, build Apr 23 2016 15:49:47

Using GPU(s) with a container
=============================
This is useful on the n060 node with GPU cards. A Docker or Singularity container built to use an NVIDIA GPU should be run with the --nv option:

singularity pull docker://public.docker.itc.griffith.edu.au/tensorflow/tensorflow:latest-gpu-jupyter
singularity run --nv tensorflow_latest-gpu-jupyter.sif

(-B /run is needed for some images, e.g. those using jupyter, that give an error otherwise. For those, use this syntax:
singularity run --nv -B /run tensorflow_latest-gpu-jupyter.sif)

Most Docker containers run successfully under Singularity. Some need to be run with special options, and a few cannot be used at all, because Docker is optimized for running services in isolated containers. You may need to add back other cluster filesystems with the -B option:

singularity exec --contain -B /project -B /archive

singularity exec -B /scratch/s1234:/scratch --pwd /scratch /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos env | more
singularity exec -B /scratch/s1234:/scratch --pwd /scratch /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos cat /scratch/singularity_launch.03

Look at the runscript and environment:
======================================
singularity inspect -e /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos
singularity inspect -r /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos
singularity exec -B /scratch/s1234:/scratch --pwd /scratch /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos /.singularity.d/runscript
Another PBS example from an actual run
cat pbs.01
>>>>
#!/bin/bash
#PBS -m e
#PBS -M email@griffith.edu.au
#PBS -N CEINMS256Calib
#PBS -q medium
#PBS -l select=1:ncpus=4:mem=16gb,walltime=30:00:00
cd $PBS_O_WORKDIR
singularity exec -B /scratch/s123456:/scratch --pwd /scratch /export/home/s123456/Container/ceinms_latest.sif "/scratch/Scripts/ceinmsLauncher.sh"
exit
sleep 2
>>>>>>

cat /scratch/s2984644/Scripts/ceinmsLauncher.sh
>>>>>>
export LD_LIBRARY_PATH=/usr/local/lib:/opensim/opensim_install/lib
export PATH=/CEINMS/install/bin:$PATH
CEINMScalibrate -S '/scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/setupCalibration_t00000000.xml'
>>>>>>

Before you submit the job, make sure the launcher script is executable:

chmod 700 /scratch/s2984644/Scripts/ceinmsLauncher.sh

Submit the job:

qsub pbs.01

Results:
>>>>>
+-+-+-+-+-+-+
|C|E|I|N|M|S|
+-+-+-+-+-+-+-+-+-+-+
|C|a|l|i|b|r|a|t|e|d|
+-+-+-+-+-+-+-+-+-+-+-+-+
|E|M|G|-|I|n|f|o|r|m|e|d|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|N|e|u|r|o|m|u|s|c|u|l|o|s|k|e|l|e|t|a|l|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|T|o|o|l|b|o|x|
+-+-+-+-+-+-+-+

CEINMScalibrate version 0.30.1
Copyright (C) Nov 3 2022
Claudio Pizzolato, Monica Reggiani, Massimo Sartori, David Lloyd
Software developers: Claudio Pizzolato, Monica Reggiani

readNMSmodelCfg
filesystem error: cannot make canonical path: No such file or directory [/scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/subjectCalibrated_t00000000.xml]
Reading subject file: /scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/uncalibrated.xml .
Contact model found
Calibration configuration
- Algorithm
-- Simulated Annealing
--- noEpsilon 4
--- NS 15
--- NT 5
--- maxNoEval 200000
--- rt 0.3
--- T 20
--- epsilon 1e-05
- NMSmodel
- CalibrationSteps
-- Step
Objective Function: TorqueErrorNormalised
- Targets Type: Torque
- Targets: hip_flexion_r knee_angle_r ankle_angle_r
- Weight: 1
- Exponent: 1
- Trials
-- /scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/running10.xml
-- /scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/running11.xml
-- /scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/running7.xml
Reading subject file: /scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/uncalibrated.xml .
activeForceLength
passiveForceLength
forceVelocity
tendonForceStrain
Assuming addbrev_r pennation angle in radians: 0.114781
Assuming addlong_r pennation angle in radians: 0.13777
Assuming addmagDist_r pennation angle in radians: 0.194705
Assuming addmagIsch_r pennation angle in radians: 0.168044
Assuming addmagMid_r pennation angle in radians: 0.207308
Assuming addmagProx_r pennation angle in radians: 0.311483
<snip>
EMG: Reading emg file.../scratch/data/Dev02/Barefoot/ceinms/calibratedSubjects/../../dynamicElaborations/running10/emg.mot
Muscle excitations to muscle mapping:
addbrev_r -> addbrev_r
addlong_r -> addlong_r
addmagDist_r -> addmagDist_r
addmagIsch_r -> addmagIsch_r
addmagMid_r -> addmagMid_r
addmagProx_r -> addmagProx_r
bflh_r -> bflh_r
bfsh_r -> bfsh_r
edl_r -> edl_r
ehl_r -> ehl_r
fdl_r -> fdl_r
<snip>
>>>>
Converting a docker image into singularity image
Method 1: Using an Existing Docker Image on Your Local Machine
1. Find the Docker image ID.

docker images
REPOSITORY         TAG      IMAGE ID       CREATED        SIZE
hello-world        latest   bf756fb1ae65   5 months ago   13.3kB
godlovedc/lolcow   latest   577c1fe8e6d8   2 years ago    241MB

2. Create a tarball of the Docker image.

For the Docker image you want to port to Griffith HPC, for example godlovedc/lolcow with an image ID of 577c1fe8e6d8, create a tarball using the docker save command:

docker save 577c1fe8e6d8 -o lolcow.tar

3. Copy the tarball to Griffith HPC.

Use scp, WinSCP, Cyberduck, etc.:

scp lolcow.tar 10.250.250.3:/tmp

4. Convert the tarball to a Singularity image.

module load singularity
singularity build --sandbox lolcow docker-archive://lolcow.tar

If the tarball is not in the current working directory, specify the path, for example /tmp:

singularity build --sandbox lolcow docker-archive:///tmp/lolcow.tar

In this example, lolcow is the directory name of the Singularity sandbox image.

5. Run the Singularity sandbox as usual for testing. Once testing is over, use it in a PBS script (see examples above). For example:

singularity shell lolcow
singularity exec lolcow cowsay hello
singularity run lolcow
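Optionally, once the sandbox works, it can be converted into a single compressed image file; a brief sketch (the output name lolcow.sif is just an example):

# Convert the tested sandbox directory into a read-only .sif image file
singularity build lolcow.sif lolcow/
# Run it the same way as the sandbox
singularity exec lolcow.sif cowsay hello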
Method 2: Using an Existing Docker Image on Docker Hub
See the earlier section "Singularity usage on Griffith HPC" for how to do this.
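In short, the relevant commands from that section look like this (lolcow is just the example image used earlier):

module load singularity
# Pull through the Griffith registry proxy rather than docker.io directly
singularity pull lolcow_latest.sif docker://public.docker.itc.griffith.edu.au/godlovedc/lolcow
# Run from the downloaded image file
singularity run lolcow_latest.sif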
Method 3: Using a Working Dockerfile Without a Docker Image
ref: https://www.nas.nasa.gov/hecc/support/kb/building-an-image-using-the-singularity-build-tool_639.html
This is for advanced users.

source /usr/local/bin/s3proxy.sh
module load singularity
singularity build [--fakeroot] --sandbox lolcow lolcow.def
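For reference, a minimal sketch of what a definition file such as lolcow.def could contain. The package list is illustrative only, and pulling the base image through the public.docker.itc.griffith.edu.au proxy in the From: line is an assumption based on the pull examples earlier on this page:

BootStrap: docker
From: public.docker.itc.griffith.edu.au/ubuntu:16.04

%post
    # Illustrative packages only: the classic lolcow demo programs
    apt-get update && apt-get install -y fortune cowsay lolcat

%environment
    export PATH=/usr/games:$PATH

%runscript
    fortune | cowsay | lolcat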
Reference
- http://singularity.lbl.gov/
- https://www.singularity-hub.org/
- https://singularityhub.github.io/containers/registry/singularity-hub-registry/
- https://singularity.lbl.gov/user-guide
- https://hpc.research.uts.edu.au/software_general/singularity/
- https://cran.r-project.org/web/views/HighPerformanceComputing.html
- https://support.pawsey.org.au/documentation/display/US/Running+RStudio+on+Zeus+with+Singularity
- https://www.katacoda.com/courses/docker
- https://pawseysc.github.io/containers-bioinformatics-workshop/5.build/index.html
- https://quay.io/
- https://pawseysc.github.io/containers-bioinformatics-workshop/3.pipeline/index.html
- https://ist.mit.edu/xwin32
- https://pawseysc.github.io/containers-bioinformatics-workshop/1.prep1-ssh/index.html