

Introduction

Singularity enables users to have full control of their environment. Singularity containers can be used to package entire scientific workflows, software and libraries, and even data. This means that you don’t have to ask your cluster admin to install anything for you - you can put it in a Singularity container and run. (Ref: http://singularity.lbl.gov/)

Usage

Singularity is installed on all nodes on the new cluster. 
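A quick way to confirm it is available on the node you are logged into (a minimal check; the version reported will depend on the node):

singularity --version
which singularity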

On the old cluster, usage is as follows:
module load singularity/2.4.6
OR
module load singularity/2.2

Images are kept here: /sw/Containers/singularity/images

singularity shell /sw/Containers/singularity/images/ubuntu-16.04.4-10GB.img

Another example:
module load singularity/2.4.6
singularity shell /sw/Containers/singularity/images/openEMS-ubuntu-16.04.img 
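If you only need to run a single command rather than an interactive shell, singularity exec works the same way (a sketch using the same openEMS image; cat /etc/os-release is just an illustrative command):

module load singularity/2.4.6
singularity exec /sw/Containers/singularity/images/openEMS-ubuntu-16.04.img cat /etc/os-release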


Sample PBS script

#!/bin/bash -l
#PBS -m abe
#PBS -M YourEmail@griffith.edu.au
#PBS -V
#PBS -N TestYade
#PBS -q dljun@n060
#PBS -W group_list=deeplearning -A deeplearning
#PBS -l select=1:ncpus=1:ngpus=1:mem=32gb,walltime=30:00:00
cd  $PBS_O_WORKDIR
###GPUNUM=`echo $CUDA_VISIBLE_DEVICES`
module load singularity/3.2.0
#singularity shell  /sw/centos7/singularity/Containers/singularity/3.2.0/images/YadeBox.simg
##singularity shell --nv /sw/centos7/singularity/Containers/singularity/3.2.0/images/YadeBox.simg /export/home/s123456/pbs/singularity/yadeLauncher.sh
singularity exec /sw/centos7/singularity/Containers/singularity/3.2.0/images/YadeBox.simg /export/home/s123456/pbs/singularity/yadeLauncher.sh
#cat /etc/os-release
exit
sleep 2
cat yadeLauncher.sh

#! /bin/bash

#load the anaconda tensorflow environment
module load anaconda/5.3.1py3
source activate tensorflow-gpu
###echo $CUDA_VISIBLE_DEVICES
###GPUNUM=`echo $CUDA_VISIBLE_DEVICES`

#navigate to Your directory
##cd /export/home/s123456/pbs/singularity/yade/

#run the code
yade /export/home/s123456/pbs/singularity/yade/DST_parametric_M1c_Layer1.py
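To use the two files above, the usual workflow is to make the launcher executable and then submit the PBS script with qsub (the script name yade.pbs below is just a placeholder for whatever you saved the PBS script above as):

chmod +x /export/home/s123456/pbs/singularity/yadeLauncher.sh
qsub yade.pbs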


Singularity on the new cluster

The Singularity RPM is installed on all compute nodes.


Here is how to use the Ubuntu container named YadeBox.simg on the new cluster:

e.g: singularity shell /sw/singularity/Containers/singularity/3.2.0/images/YadeBox.simg

Singularity YadeBox.simg:/> cat /etc/*release*
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.6 LTS"
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
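The same container can also be used non-interactively with singularity exec (which is what the PBS example above does); for instance, the following should print the same release information without opening a shell:

singularity exec /sw/singularity/Containers/singularity/3.2.0/images/YadeBox.simg cat /etc/os-release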


Example of using one of the containers - delft3d4

Initial Prep:
=============

Copy the images into the local container folder:

mkdir ~/Container
cp /sw/Containers/singularity/images/delft3d4_latest.sif  ~/Container
cp /sw/Containers/singularity/images/delft3dfm_latest.sif  ~/Container


Shell into the container and create launcher script:
====================================================

Initial setup

1. shell into the container
=============================
For example (replace s12345 with your own username):

singularity shell -B /scratch/s12345:/scratch --pwd /scratch /export/home/s12345/Container/delft3d4_latest.sif

e.g. for user s2961948:
singularity shell -B /scratch/s2961948:/scratch --pwd /scratch /export/home/s2961948/Container/delft3d4_latest.sif

2. Create a variable file you can source

Once in the shell, create a variable file with contents like this:

vi /scratch/delft3denv.sh
export LD_LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LIBRARY_PATH
export PATH=/opt/delft3d_latest/lnx64/bin:$PATH
module load mpich-x86_64

3. Manual Test Run
===================
source /scratch/delft3denv.sh
d_hydro

For example:

Singularity delft3d4_latest.sif:/scratch> d_hydro
d_hydro ABORT: Improper usage.  Execute "d_hydro -?" for command-line syntax


4. Create a Launcher Script:
===========================
Once you are happy that steps 1, 2 and 3 work, you can create a launcher script.
While still inside the shell:

vi /scratch/d_hydroLauncher.sh

An example of the content is this:
>>>>>>>>>>>>
export LD_LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/opt/delft3d_latest/lnx64/lib:$LIBRARY_PATH
export PATH=/opt/delft3d_latest/lnx64/bin:$PATH
module load mpich-x86_64
cd /scratch
d_hydro 
>>>>>>>>>>>>>
Make it executable:
chmod +x /scratch/d_hydroLauncher.sh
Check that it gives the expected results:

/scratch/d_hydroLauncher.sh


5. Running it as a batch job:
==============================
Exit the shell and create this PBS script anywhere in your home directory:

A: Final d_hydroLauncher.sh
===========================
more ~/scratch/d_hydroLauncher.sh
/opt/delft3d_latest/lnx64/bin/d_hydro

B: Final PBS script
====================
more pbs.01
>>>>>>>>>
#!/bin/bash 
#PBS -m abe
#PBS -M YourEmail@griffith.edu.au
#PBS -N DelftJob  
#PBS -q gworkq  
#PBS -l select=1:ncpus=1:mem=12gb,walltime=00:00:30
cd  $PBS_O_WORKDIR
DELFT_PATH="/opt/delft3d_latest/lnx64/"
##export SINGULARITY_BINDPATH="$DELFT_PATH/bin" # Append this location to the container PATH
export SINGULARITYENV_LD_LIBRARY_PATH=$DELFT_PATH/lib:$LD_LIBRARY_PATH
export SINGULARITYENV_LIBRARY_PATH=$DELFT_PATH/lib:$LIBRARY_PATH
export SINGULARITYENV_PATH=$DELFT_PATH/bin:$PATH
singularity exec -B /scratch/s12345:/scratch  /export/home/s12345/Container/delft3d4_latest.sif "/scratch/d_hydroLauncher.sh"
##d_hydro
exit
sleep 2
>>>>>>

Submit the job and check the results.

qsub pbs.01
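Once submitted, you can check the job with qstat and, after it finishes, inspect the PBS output files (DelftJob.o<jobid> and DelftJob.e<jobid>) in the submission directory:

qstat -u $USER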

Second method without using the launcher script:
=================================================
#!/bin/bash 
#PBS -m abe
#PBS -V
#PBS -M YourEmail@griffith.edu.au
#PBS -N DelftJob  
#PBS -q gworkq  
#PBS -l select=1:ncpus=1:mem=12gb,walltime=00:00:30
DELFT_PATH="/opt/delft3d_latest/lnx64/"
export SINGULARITYENV_LD_LIBRARY_PATH=$DELFT_PATH/lib
export SINGULARITYENV_LIBRARY_PATH=$DELFT_PATH/lib
export SINGULARITYENV_PATH=$DELFT_PATH/bin
cd $HOME/scratch
singularity exec /export/home/s12345/Container/delft3d4_latest.sif d_hydro swaninit
exit
sleep 2



PBS script: Running an MPI job with Singularity
#!/bin/bash
#PBS -V
#PBS -M YourEmail@griffith.edu.au
#PBS -N DelftJobMPI
#PBS -q gworkq
#PBS -l select=1:ncpus=5:mem=12gb,walltime=00:02:00
module load mpi/mpich-3.2-x86_64 # note that this is the available module
DELFT_PATH="/opt/delft3d_latest/lnx64"
MPICH_ROOT="/usr/lib64/mpich-3.2"
##export SINGULARITY_BINDPATH="$DELFT_PATH/bin" # Append this location to the container PATH
export SINGULARITYENV_LD_LIBRARY_PATH=$DELFT_PATH/lib:$MPICH_ROOT/lib
# export SINGULARITYENV_LIBRARY_PATH=$DELFT_PATH/lib:$MPICH_ROOT
export SINGULARITYENV_PATH=$DELFT_PATH/bin
export SINGULARITY_BINDPATH="$MPICH_ROOT"
cd $PBS_O_WORKDIR
NPROCS=$( cat $PBS_NODEFILE | wc -l )
mpiexec -n $NPROCS singularity exec $HOME/Container/delft3d4_latest.sif d_hydro swaninit
exit
sleep 2
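The pattern above runs mpiexec on the host and singularity exec inside each MPI rank. Before running d_hydro this way, a quick sanity check is to launch something trivial across the ranks (a sketch, assuming the same module and container as above):

module load mpi/mpich-3.2-x86_64
mpiexec -n 4 singularity exec $HOME/Container/delft3d4_latest.sif hostname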


Another example: Using MitoZ container

Ref: https://wiki.srce.hr/display/RKI/MitoZ

Initial Prep:
=============

Copy the images into the local container folder:
mkdir ~/Container
cp /sw/Containers/singularity/images/MitoZ.simg ~/Container/

Shell into the container and create launcher script:
====================================================

Initial setup

1. shell into the container
=============================
For example (replace s12345 with your own username):

singularity shell -B /scratch/s12345:/scratch --pwd /scratch /export/home/s12345/Container/MitoZ.simg

e.g. for user s2981868:
singularity shell -B /scratch/s2981868:/scratch --pwd /scratch /export/home/s2981868/Container/MitoZ.simg

2. Determine where the app is installed

e.g: /app/release_MitoZ_v2.2
You may need to export the PATH variable as well as LD_LIBRARY_PATH and LIBRARY_PATH.

3. Create a variable file you can source

Once in the shell, create a variable file with contents like this, commenting out stuff that is not needed.

One way is to create it with echo redirects (or edit it with vi) and then check the contents with "more /scratch/Mitoenv.sh":

echo "export PATH=/app/release_MitoZ_v2.2:/app/release_MitoZ_v2.2/bin:$PATH" >/scratch/Mitoenv.sh
#Other environment variables that may need to be exported are the following:
#echo "export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH" >>/scratch/Mitoenv.sh
#echo "export LIBRARY_PATH=/usr/local/lib:$LIBRARY_PATH" >>/scratch/Mitoenv.sh
#module load mpich-x86_64

Source this file and check that the app launches properly, e.g.:

source /scratch/Mitoenv.sh
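A quick way to confirm the PATH change took effect (assuming MitoZ.py is the entry point, as used in the launcher below):

command -v MitoZ.py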


4. Create a Launcher Script:
===========================
Once you are happy that steps 1, 2 and 3 work, you can create a launcher script.
You can do this after exiting the container shell:

vi ~/scratch/mitozLauncher.sh

An example of the content is this:
>>>>>>>>>>>>

export PATH=/app/release_MitoZ_v2.2:/app/release_MitoZ_v2.2/bin:$PATH
#Other environmental variables that may need to be exported are the following
#export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
#export LIBRARY_PATH=/usr/local/lib:$LIBRARY_PATH
#export PATH=/app/release_MitoZ_v2.2:/app/release_MitoZ_v2.2/bin:$PATH
#module load mpich-x86_64
cd /scratch
MitoZ.py 
>>>>>>>>>>>>>
make it executable:
chmod +x ~/scratch/mitozLauncher.sh
Check if it gives the expected results:

singularity exec -B /scratch/s2981868:/scratch --pwd /scratch /export/home/s2981868/Container/MitoZ.simg /scratch/mitozLauncher.sh

Once you are satisfied, you can add more complex commands to this launcher file.

5. Running it as a batch job:
==============================
Outside the container shell, create this PBS script anywhere in your home directory:

vi ~/scratch/pbs.01

>>>>>>>>>
#!/bin/bash 
#PBS -m abe
#PBS -M YourEmail@griffith.edu.au
#PBS -N TestMitoZ
#PBS -q workq
#PBS -l select=1:ncpus=1:mem=2gb,walltime=30:00:00
cd  $PBS_O_WORKDIR
singularity exec -B /scratch/s2981868:/scratch --pwd /scratch /export/home/s2981868/Container/MitoZ.simg /scratch/mitozLauncher.sh
exit
sleep 2
>>>>>>

Submit the job and check the results.

qsub pbs.01


Useful ways to pass environmental variables


If you need to change the $PATH of your container at runtime, there are a few environment variables you can use (see the example after this list):


SINGULARITYENV_PREPEND_PATH=/good/stuff/at/beginning to prepend directories to the beginning of the $PATH
SINGULARITYENV_APPEND_PATH=/good/stuff/at/end to append directories to the end of the $PATH
SINGULARITYENV_PATH=/a/new/path to override the $PATH within the container
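For example, a minimal sketch using the my_blast.sif image pulled later on this page; /opt/extra/bin is just a placeholder directory:

export SINGULARITYENV_PREPEND_PATH=/opt/extra/bin
singularity exec my_blast.sif env | grep ^PATH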


Training Notes and Videos

  1. Part 1: Singularity Containers
  2. Part 2: Customising Containers
  3. Part 3: Containers in the HPC environment 
  4. Tutorial and GitHub
  5. Setup singularity on your workstation
  6. https://pawseysc.github.io/singularity-containers/32-writable-trinity/index.html
  7. https://pawseysc.github.io/singularity-containers/setup.html
  8. https://support.pawsey.org.au/documentation/display/US/Running+RStudio+on+Zeus+with+Singularity


Singularity installation notes on the gowonda cluster

Please note: The build process is not required for using pre-built containers. Builds can only be done on the user's own machine, not on the cluster.

module purge
module load misc/squashfs/4.3
module load misc/libarchive/3.3.1


cd /sw/singularity/2.4.6/src/singularity-2.4.6
./autogen.sh
 ./configure --prefix=/sw/singularity/2.4.6 2>&1 | tee configureLog.txt

make 2>&1 | tee makeLog.txt
make install 2>&1 | tee makeInstall.txt
module load misc/singularity/2.4.6
cd /project/mus/singularity

singularity  build centos7.img centos7.def
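Note that building from a definition file generally needs root on the build machine, so on your own workstation the same command is typically run with sudo, e.g.:

sudo singularity build centos7.img centos7.def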

cat centos7.def
>>>>>>>>>>>
BootStrap: yum
OSVersion: 7
MirrorURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/os/$basearch/
Include: yum

# If you want the latest updates to be installed inside the container
# during the bootstrapping process then uncomment the following line:
# UpdateURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/updates/$basearch/

%runscript
    echo "This is what happens when you run the container..."

%setup
    # These are only required if running on the HPC.
    export SINGULARITY_ROOTFS=/project/mus/singularity
    mkdir -p ${SINGULARITY_ROOTFS}/opt
    mkdir -p ${SINGULARITY_ROOTFS}/scratch
    mkdir -p ${SINGULARITY_ROOTFS}/shared

%post
    echo "Hello from inside the container"
    # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y
    yum install epel-release -y
    yum -y install vim-minimal
>>>>>>>>>>>



Other options:

 singularity pull shub://singularity-hub.org/truatpasteurdotfr/singularity-docker-centos7-devtoolset



Singularity usage on Griffith HPC

To create containers, we use public.docker.itc.griffith.edu.au.
A simple "singularity pull" through s3proxy stopped working for Docker Hub around Oct 2020 because of the pull-rate limits Docker introduced. To avoid hitting those limits, the Griffith team blocked direct access to the Docker Hub registry through the s3proxy HTTP proxy (see https://www.docker.com/blog/understanding-inner-loop-development-and-pull-rates/ for background).


To pull an image:
singularity pull docker://public.docker.itc.griffith.edu.au/biocontainers/blast:2.2.31

To run a container:
singularity run docker://public.docker.itc.griffith.edu.au/godlovedc/lolcow

# Download the container into a permanent image file
http_proxy=http://s3proxy.itc.griffith.edu.au:3128  singularity pull my_blast.sif docker://biocontainers/blast:2.2.31 

singularity pull my_blast.sif docker://public.docker.itc.griffith.edu.au/biocontainers/blast:2.2.31

Another example:
singularity pull lolcow_latest.sif docker://public.docker.itc.griffith.edu.au/godlovedc/lolcow

# Now run it from the image file
singularity exec my_blast.sif blastp -version
blastp: 2.2.31+
Package: blast 2.2.31, build Apr 23 2016 15:49:47


Using GPU(s) with a Container

This is useful on the n060 node with GPU cards.

A Docker or Singularity container built to use an NVIDIA GPU should be run with the --nv option:

singularity pull docker://public.docker.itc.griffith.edu.au/tensorflow/tensorflow:latest-gpu-jupyter

singularity run --nv  tensorflow_latest-gpu-jupyter.sif

Note: -B /run is needed for some images (e.g. those using jupyter) that otherwise give an error. For those, use this syntax:
singularity run --nv -B /run tensorflow_latest-gpu-jupyter.sif

Most Docker containers run successfully under Singularity. Some need to be run with special options, and a few cannot be used at all. Docker is optimized for running services in isolated containers.

When using the --contain option, you need to add back other cluster filesystems with -B, e.g.:
singularity exec --contain -B /project -B /archive ...
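A complete (hypothetical) invocation along those lines, using the my_blast.sif image from the registry section above and a simple ls as the command inside the container:

singularity exec --contain -B /project -B /archive $HOME/Container/my_blast.sif ls /project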

singularity exec -B /scratch/s1234:/scratch --pwd /scratch /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos env | more

singularity exec -B /scratch/s1234:/scratch --pwd /scratch /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos cat /scratch/singularity_launch.03

Look at the runscript and environment:
=================================
singularity inspect -e  /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos

singularity inspect -r  /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos

singularity exec -B /scratch/s1234:/scratch --pwd /scratch /export/home/s1234/Container/openfoam6-paraview54_latest.sif-centos /.singularity.d/runscript


Reference

  1. http://singularity.lbl.gov/
  2. https://www.singularity-hub.org/
  3. https://singularityhub.github.io/containers/registry/singularity-hub-registry/
  4. https://singularity.lbl.gov/user-guide
  5. https://hpc.research.uts.edu.au/software_general/singularity/
  6. https://cran.r-project.org/web/views/HighPerformanceComputing.html
  7. https://support.pawsey.org.au/documentation/display/US/Running+RStudio+on+Zeus+with+Singularity
  8. https://www.katacoda.com/courses/docker
  9. https://pawseysc.github.io/containers-bioinformatics-workshop/5.build/index.html
  10. https://quay.io/
  11. https://pawseysc.github.io/containers-bioinformatics-workshop/3.pipeline/index.html
  12. https://ist.mit.edu/xwin32
  13. https://pawseysc.github.io/containers-bioinformatics-workshop/1.prep1-ssh/index.html

