toc
Introduction
Singularity enables users to have full control of their environment. Singularity containers can be used to package entire scientific workflows, software and libraries, and even data. This means that you don’t have to ask your cluster admin to install anything for you - you can put it in a Singularity container and run. (Ref: http://singularity.lbl.gov/)
...
No Format |
---|
Initial Prep:
=============
Copy the images into the local container folder:
mkdir ~/Container
cp /sw/Containers/singularity/images/delft3d4_latest.sif ~/Container
cp /sw/Containers/singularity/images/delft3dfm_latest.sif ~/Container
Shell into the container and create a launcher script:
====================================================
Initial setup
1. Shell into the container
=============================
For example (replace s12345 with your own snumber):
singularity shell -B /scratch/s12345:/scratch --pwd /scratch /export/home/s12345/Container/delft3d4_latest.sif
2. Create a variable file you can source
Once in the shell, create a variable file with contents like this:
vi /scratch/delft3denv.sh
export LD_LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LIBRARY_PATH
export PATH=/opt/delft3d_latest/lnx64/bin:$PATH
module load mpich-x86_64
3. Manual Test Run
===================
source /scratch/delft3denv.sh
d_hydro
For example:
Singularity delft3d4_latest.sif:/scratch> d_hydro
d_hydro ABORT: Improper usage. Execute "d_hydro -?" for command-line syntax
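The ABORT message above simply confirms that d_hydro is found on the PATH inside the container. To run an actual model, d_hydro is pointed at the model's XML configuration file. A minimal sketch, assuming your model directory sits under the bound /scratch and uses the conventional Delft3D file name config_d_hydro.xml (the directory and file names here are illustrative only):
cd /scratch/mymodel
d_hydro config_d_hydro.xml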
4. Create a Launcher Script
===========================
Once you are happy that steps 1, 2 and 3 all work, you can create a launcher script.
While still inside the shell:
vi /scratch/d_hydroLauncher.sh
An example of the content is this:
>>>>>>>>>>>>
#!/bin/bash
export LD_LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LIBRARY_PATH
export PATH=/opt/delft3d_latest/lnx64/bin:$PATH
module load mpich-x86_64
cd /scratch
d_hydro
>>>>>>>>>>>>>
Make it executable:
chmod +x /scratch/d_hydroLauncher.sh
Check that it gives the expected results:
/scratch/d_hydroLauncher.sh
5. Running it as a batch job:
==============================
Exit the shell and create this PBS script anywhere in your home directory:
A: Final d_hydroLauncher.sh
===========================
Because the PBS script below exports the PATH and library settings into the container via SINGULARITYENV_ variables, the final launcher only needs the d_hydro call itself:
more ~/scratch/d_hydroLauncher.sh
/opt/delft3d_latest/lnx64/bin/d_hydro
B: Final PBS script
====================
more pbs.01
>>>>>>>>>
#!/bin/bash
#PBS -m abe
#PBS -M YourEmail@griffith.edu.au
#PBS -N DelftJob
#PBS -q gworkq
#PBS -l select=1:ncpus=1:mem=12gb,walltime=00:00:30
cd $PBS_O_WORKDIR
DELFT_PATH="/opt/delft3d_latest/lnx64/"
##export SINGULARITY_BINDPATH="$DELFT_PATH/bin" # Append this location to the container PATH
##export SINGULARITY_BINDPATH=/scratch,/fast/tmp,/gpfs01
export SINGULARITYENV_LD_LIBRARY_PATH=$DELFT_PATH/lib:$LD_LIBRARY_PATH
export SINGULARITYENV_LIBRARY_PATH=$DELFT_PATH/lib:$LIBRARY_PATH
export SINGULARITYENV_PATH=$DELFT_PATH/bin:$PATH
singularity exec -B /scratch/s12345:/scratch /export/home/s12345/Container/delft3d4_latest.sif "/scratch/d_hydroLauncher.sh"
##d_hydro
sleep 2
exit
>>>>>>
Submit the job and check the results.
qsub pbs.01
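Once submitted, you can watch the queue and then look at the job output. A quick sketch, assuming the default PBS output file naming (the job name comes from the #PBS -N line; <jobid> is the ID returned by qsub):
qstat -u $USER
more DelftJob.o<jobid>
more DelftJob.e<jobid>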
Second method without using the launcher script:
=================================================
#!/bin/bash
#PBS -m abe
#PBS -V
#PBS -M YourEmail@griffith.edu.au
#PBS -N DelftJob
#PBS -q gworkq
#PBS -l select=1:ncpus=1:mem=12gb,walltime=00:00:30
DELFT_PATH="/opt/delft3d_latest/lnx64/"
export SINGULARITYENV_LD_LIBRARY_PATH=$DELFT_PATH/lib
export SINGULARITYENV_LIBRARY_PATH=$DELFT_PATH/lib
export SINGULARITYENV_PATH=$DELFT_PATH/bin
cd $HOME/scratch
singularity exec /export/home/s12345/Container/delft3d4_latest.sif d_hydro swaninit
sleep 2
exit
|
...
To create containers, we use public.docker.itc.griffith.edu.au
Simple "singularity pull" with s3proxy stopped working for docker hubs around Oct 2020. This is due to the changes (limits) docker introduced. To avoid hitting the limits, Griffith team implemented steps to stop direct access through the s3proxy http proxy to the docker hub registry (see https://www.docker.com/blog/understanding-inner-loop-development-and-pull-rates/ for notes)
...
No Format |
---|
1. Find the Docker image ID.
docker images
REPOSITORY          TAG      IMAGE ID       CREATED        SIZE
hello-world         latest   bf756fb1ae65   5 months ago   13.3kB
godlovedc/lolcow    latest   577c1fe8e6d8   2 years ago    241MB

2. Create a tarball of the Docker image.
For the Docker image you want to port to Griffith HPC, for example, godlovedc/lolcow with an image ID of 577c1fe8e6d8, create a tarball using the docker save command:
docker save 577c1fe8e6d8 -o lolcow.tar

3. Copy the tarball to Griffith HPC.
Use scp, WinSCP, Cyberduck, etc.:
scp lolcow.tar 10.250.250.3:/tmp

4. Convert the tarball to a Singularity image.
module load singularity
singularity build --sandbox lolcow docker-archive://lolcow.tar
If the tarball is not in the current working directory, specify the path, for example, /tmp:
singularity build --sandbox lolcow docker-archive:///tmp/lolcow.tar
singularity build --sandbox pytorch docker-archive:///sw/Containers/docker/pytorchnew.tar
In this example, lolcow is the directory name of the Singularity sandbox image.

5. Run the Singularity sandbox as usual for testing. Once testing is over, use it in a PBS script (see example above). For example:
singularity shell lolcow
singularity exec lolcow cowsay hello
singularity run lolcow |
Method 2: Using an Existing Docker Image on Docker Hub
See section earlier on how to do this.
Method 3: Using a Working Dockerfile Without a Docker Image
...
ref: https://www.nas.nasa.gov/hecc/support/kb/building-an-image-using-the-singularity-build-tool_639.html
No Format |
---|
This is for advanced users
source /usr/local/bin/s3proxy.sh
module load singularity
singularity build [--fakeroot] --sandbox lolcow lolcow.def |
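The lolcow.def file itself is not shown on this page. As a minimal sketch only (the %post commands and runscript are illustrative, not a tested recipe), a Singularity definition file that bootstraps from a Docker Hub image generally looks like this:
Bootstrap: docker
From: godlovedc/lolcow

%post
    # commands run inside the container at build time (illustrative only)
    apt-get update -y

%environment
    export LC_ALL=C

%runscript
    # what "singularity run" executes (illustrative only)
    fortune | cowsay | lolcat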
Running a Sandbox May Be More Efficient than Running the .sif File
During runtime, the .sif file will be converted to a sandbox behind the scenes before being started. The conversion from .sif to sandbox may take a long time or even hang, especially if you are running the container in parallel where there will be multiple conversions at the same time. Consider converting the .sif file into a sandbox explicitly ahead of time, and running the container with the sandbox instead of the .sif file in your run script:
singularity build --sandbox your_container_sandbox your_container.sif
singularity exec [options] your_container_sandbox
Ensure All Host Filesystems Needed for Read and/or Write are Mounted
When Singularity swaps the host operating system for the one inside your container, the host filesystems become inaccessible. If you need access to any of your host filesystems or directories (such as /scratch/snumber for a certain module or your /export/home/snumber for reading/writing data) you must bind-mount them with the -B or --bind command-line option, or by setting the SINGULARITY_BIND environment variable:
singularity shell -B /scratch:/scratch -B /export/home/snumber:/export/home/snumber /export/home/snumber/sw/Containers/container_sandbox
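The SINGULARITY_BIND variable mentioned above gives the same result as repeating -B on the command line. A small sketch using the same paths as the command above (replace snumber with your own):
export SINGULARITY_BIND="/scratch:/scratch,/export/home/snumber:/export/home/snumber"
singularity shell /export/home/snumber/sw/Containers/container_sandbox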
Know the Environment Settings of your Container and the Effect of Using the -e Option
Environment variables set in your current session after sourcing your system startup file, such as .cshrc or .bashrc, or after loading a modulefile, are exported into the container. The only exceptions are PATH and LD_LIBRARY_PATH, which are set differently inside the container compared to the host environment.
The -e or --cleanenv option of the singularity [run, exec, or shell] command lets you clean the environment before running the container. As shown in the table below, adding -e to the singularity exec command retains variables on the left column but removes those on the right column
Variables retained with -e enabled | Variables removed with -e enabled |
---|---|
SINGULARITY_XXX | DISPLAY, PYTHONSTARTUP |
PATH (re-defined), LD_LIBRARY_PATH (re-defined) | SSH_XXX, XXX_RSH, SYSTEMD_LESS |
PWD, HOME, uid, user | xxxMODULExxx, MANPATH, LMFILES |
LANG, TERM, SHELL, SHLVL, PS1 | HOSTxxx, MACHTYPE, OSTYPE, VENDOR |
 | USER, USER_PATH, LOGNAME, GROUP |
 | MAIL, EXINIT, CSHEDIT, OSCAR_HOME |
There may be a difference in behavior when running your container with or without the -e option. For example, if you want to run a graphical application, such as ParaView, within the container, you should add --env DISPLAY=$DISPLAY to the singularity shell command if the -e option is used to start the container:
singularity shell -e --env DISPLAY=$DISPLAY your_container_sandbox
Likely Places to Find Executables or Libraries Inside the Container
For many containers, the default PATH or LD_LIBRARY_PATH might not include paths of executables or libraries needed to run the application of interest. If the container includes MPI libraries, they are commonly installed inside the container in the /usr/local or /usr/local/mpi directories. Other packages are likely installed under the /opt directory of the container. If you still cannot find the executables or libraries and there is no documentation about the container, contact the provider of the container.
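One way to look around without an interactive shell is to run ls or find through singularity exec. A sketch reusing the sandbox name from the examples above (the paths and the d_hydro name are illustrations only):
singularity exec your_container_sandbox ls /opt /usr/local
singularity exec your_container_sandbox find /opt /usr/local -name "d_hydro" -type f 2>/dev/null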
Reference
- http://singularity.lbl.gov/
- https://www.singularity-hub.org/
- https://singularityhub.github.io/containers/registry/singularity-hub-registry/
- https://singularity.lbl.gov/user-guide
- https://hpc.research.uts.edu.au/software_general/singularity/
- https://cran.r-project.org/web/views/HighPerformanceComputing.html
- https://support.pawsey.org.au/documentation/display/US/Running+RStudio+on+Zeus+with+Singularity
- https://www.katacoda.com/courses/docker
- https://pawseysc.github.io/containers-bioinformatics-workshop/5.build/index.html
- https://quay.io/
- https://pawseysc.github.io/containers-bioinformatics-workshop/3.pipeline/index.html
- https://ist.mit.edu/xwin32
- https://pawseysc.github.io/containers-bioinformatics-workshop/1.prep1-ssh/index.html
...