Qs 1: How do I cite or mention the cluster in papers? What is the preferred method?

...

You could try a trace.
cmake --trace . 2>&1 | tee /tmp/cmakeOut.txt

You may have to explicitly specify the library paths. For example:
cmake . -DLAPACK_LIBRARIES=/sw/library/lapack/lapack-3.6.0/3.6.0/lib64/liblapack.so -DBLAS_LIBRARIES=/sw/library/blas/CBLAS/lib/cblas_LINUX.so
 
Or:
cmake . -DLAPACK_LIBRARIES=/sw/library/lapack/lapack-3.6.0/3.6.0/lib64/liblapack.so -DBLAS_LIBRARIES=/sw/library/blas/CBLAS/lib/cblas_LINUX.so -DCMAKE_INSTALL_PREFIX=/sw/simbody/353


Qs 28: How do I check the number of CPUs my job is using

You can find out which compute node your job is running on:
qstat -1an | grep snumber
>>>>>>>>>>>>>>>>>>>
e.g.:
qstat -1an | grep s2761086
4598354.pbsserv s2761086 workq    DT_k-e_04   21795   1   1   10gb 99999 R 523:2 n010/0
Here you can see it is running on n010.
Then you can do this:
ssh nodename -t "htop"   or   ssh nodename -t "htop -u username"
e.g. ssh n010 -t "htop -u s2761086"
Press <F2> key, go to "Columns", and add PROCESSOR under "Available Columns".
The currently used CPU ID of each process will appear under "CPU" column.
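If you prefer a one-shot check over the interactive htop view, the standard ps PSR column reports the CPU core each process last ran on; a minimal sketch:

```shell
# PSR column = the CPU core each process last ran on
ps -o pid,psr,comm -u "$USER"

# For a single known process ID (here, the current shell):
ps -o psr= -p $$      # prints just the core number
```

Note that PSR is the core the process was last scheduled on, not a guarantee of where it will run next.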



Qs 29: How to customize an environment variable using modules

...

Qs 31: Multiple cores are requested and allocated by PBS, but the job runs on only one core. Why is that?

This contribution is from Nicholas Dhal and is gratefully acknowledged. Nick is an active Griffith HPC user.


>>>>>>>>>

When a job is executed, its processes are divided up amongst the requested number of cores on the HPC compute node. Each process has an associated setting called 'affinity': a mask that specifies which CPU cores the process is allowed to run on. In serial jobs this usually makes no difference, because everything runs on a single CPU anyway. However, in parallel runs with multiple cores, the affinity is sometimes set incorrectly by the scheduler, and this limits performance.
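On Linux you can inspect a process's affinity directly; a minimal sketch (run it on the compute node against your job's PID):

```shell
# Allowed-core list for the current shell, straight from the kernel;
# substitute your compute process's PID for $$ when on the node.
grep Cpus_allowed_list /proc/$$/status

# With util-linux installed, taskset shows (and can change) the same mask:
#   taskset -cp <pid>         # show the affinity list
#   taskset -cp 0-5 <pid>     # allow the process to use cores 0-5
```

If the list shows only a single core for a multi-core job, the affinity has been set too narrowly.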

...

The command 'htop' uses some CPU resources on the node, so limit your time logged into the node (no more than 10 minutes). To exit 'htop' use the key 'F10'. To exit the node use the command 'exit'.

goo.gl/koYII3


>>>>>>>>>>

Qs 32: How to check the remaining licenses on the license server

...

e.g. scp -r /export/home/s5284664/folder n061:/lscratch/s5284664/

Qs 56: NCMAS process and application

NCMAS facilities overview and who should apply
https://youtu.be/7ZZVk4HtdDY

NCMAS process and application 2021
https://youtu.be/hmV_j5GFgI0


Qs 57: What kind of storage and compute is available on Griffith HPC

...

Additionally, and in parallel, you can also apply directly when applications open:

https://my.nci.org.au/mancini/ncmas/2022/

Please note that projects are given a fixed allocation per quarter on a use-it-or-lose-it basis. Allocations cannot be carried forward or backward into other quarters. Standard disk space per project is 75GB in /scratch; if a project needs more, you will need to contact help@nci.org.au.

Students cannot be a lead CI on an NCI project; however, for the QCIF share, postdocs can be. For NCMAS, the lead CI is required to have an ARC or NHMRC grant or equivalent, which is why larger groups apply for NCMAS. A grant is not required for a project under QCIF. However, the QCIF allocations are small, around 20-50 thousand SUs per quarter. Larger allocations are only available through NCMAS.

Some applications, like Mathematica and Matlab, are licensed software. Mathematica is only available to ANU researchers on NCI. For Matlab, Griffith would need to contact NCI to set up its institutional license; at the moment this is not available, so you cannot use Matlab unless you have your own license, and even then you would need to check with NCI first whether you can use it on Gadi.

In general, allocations are given in service units (SUs). One core-hour is charged at 2 SUs, so a calculation running on 4 cores for 48 hours will be charged 4*48*2 = 384 SUs.
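The arithmetic is easy to script; a small sketch (the helper name su_cost is made up for illustration):

```shell
# Estimate SU cost: cores x walltime-hours x SU rate per core-hour
su_cost() {
    echo $(( $1 * $2 * $3 ))
}

su_cost 4 48 2    # 4 cores for 48 hours at 2 SUs/core-hour -> 384
```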

If larger disk space (e.g. 300GB) is needed, you would need to talk to NCI to increase the /scratch quota to accommodate it. If larger RAM (e.g. 400GB) is needed, you would need to make sure you run in a queue that supports that RAM request. Such queues can be charged at more than 2 SUs per core-hour, so you would need to factor that in.

But talk to NCI (help@nci.org.au) first to check whether you can use the application (e.g. Matlab) on NCI before you even consider applying for an allocation.


Qs69: How to install bioinformatics software in your home directory

...

Reference: https://kb.hlrs.de/platforms/index.php/Batch_System_PBSPro_(vulcan)#DISPLAY:_X11_applications_on_interactive_batch_jobs

The login or head node of each cluster is a resource that is shared by many users. Running a GUI job on the login node is prohibited and may adversely affect other users. X11 Forwarding is only possible for interactive jobs.

Please note that there is a performance penalty when running a GUI job on the compute nodes using the method outlined below. 

Set up X11 forwarding

To use X11 port forwarding, install the Xming X server on your Windows laptop/desktop first. Install the Xming fonts package as well.
See instructions here: https://griffith.atlassian.net/wiki/spaces/GHCD/pages/4035477/xming

...

  1. http://mobaxterm.mobatek.net/download-home-edition.html
  2. PuTTY
  3. FileZilla
  4. The Windows WSL system lets you run the Linux versions of ssh under Windows:
       wsl --install
     This gives you the command-line ssh, scp, and sftp clients.

...

To transfer from the remote host to your local machine: scp -r remotehostname:/export/home/snumber/FolderName ~/

e.g. scp -r s5323827@10.250.250.3:/export/home/s5323827/FolderName ~/
To transfer from local to remote:
scp -r FolderName s5323827@10.250.250.3:/export/home/s5323827/
You will get a PingID MFA request for this.

...

This will be illustrated using an actual question that came to our support team:

Support Request: 
================
I’ve been trying for some time to get gene regulatory analysis to run on a large dataset on Gowonda. Originally I tried doing this using Genie3, which begins running but errors out after about 5 hours due to exceeding memory, even if I run it on bigmem requesting 900GB.

There’s a new package – arboreto – that has a better version of Genie3 but also a much more computationally efficient method called grnboost2. I’ve made a new conda environment and have installed arboreto, however I can’t get grnboost2 to run. The files load fine, but it stalls at the grnboost2 step with the error ‘TypeError: descriptor '__call__' for 'type' objects doesn't apply to a 'property' object’. Help forums indicate that this may be due to an issue with the Python and Dask versions. I’ve tried making different conda environments using different version combinations, but still can’t get it to work.

Our Answer:
===========
We looked at this page: https://github.com/aertslab/pySCENIC/issues/163
and we think the first solution, using Singularity containers, is a good one. Containers are better suited when version-mismatch issues occur (e.g. needing specific versions of Python, Dask, etc.).

We then followed the link referenced in that issue.

This is a copy and paste of the section on Apptainer (formerly called Singularity):
>>>>>> Clip from the link >>>>>>>>>>>>
Singularity/Apptainer
Singularity/Apptainer images can be built from the Docker Hub image as source:
# pySCENIC CLI version.
singularity build aertslab-pyscenic-0.12.1.sif docker://aertslab/pyscenic:0.12.1
apptainer build aertslab-pyscenic-0.12.1.sif docker://aertslab/pyscenic:0.12.1

# pySCENIC CLI version + ipython kernel + scanpy.
singularity build aertslab-pyscenic-scanpy-0.12.1-1.9.1.sif docker://aertslab/pyscenic_scanpy:0.12.1_1.9.1
apptainer build aertslab-pyscenic-scanpy-0.12.1-1.9.1.sif docker://aertslab/pyscenic_scanpy:0.12.1_1.9.1
>>>>>>>>>End of the clip from the link >>>>>>>>>>

To download the container from any docker link at Griffith, we have to alter the commands slightly (also documented on the singularity page of our wiki) 
(The change is to replace docker:// with docker://public.docker.itc.griffith.edu.au/)
So the altered command becomes:
singularity build aertslab-pyscenic-0.12.1.sif docker://public.docker.itc.griffith.edu.au/aertslab/pyscenic:0.12.1
singularity build aertslab-pyscenic-scanpy-0.12.1-1.9.1.sif docker://public.docker.itc.griffith.edu.au/aertslab/pyscenic_scanpy:0.12.1_1.9.1
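Since the change is a mechanical prefix rewrite, it can be scripted; a small sketch (the helper name griffith_uri is made up for illustration):

```shell
# Prefix a docker:// URI with the Griffith registry mirror
griffith_uri() {
    echo "$1" | sed 's|^docker://|docker://public.docker.itc.griffith.edu.au/|'
}

griffith_uri docker://aertslab/pyscenic:0.12.1
# -> docker://public.docker.itc.griffith.edu.au/aertslab/pyscenic:0.12.1
```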

After sourcing the proxy file (source /usr/local/bin/s3proxy.sh), you can run the above two commands to download those containers.
We have done this for you and placed the containers in this directory: ~/Containers

To check the functionality of the container, please do this:
singularity shell -B /scratch/snumber:/scratch --pwd /scratch /export/home/snumber/Containers/aertslab-pyscenic-0.12.1.sif
>>>>>>>>>
From inside the container shell, you can run a few commands to check for functionality
e.g.:
Apptainer> cd scripts
Apptainer> python grnboost3.py
INFO:root:Initializing Dask client...
DEBUG:asyncio:Using selector: EpollSelector
<snip: the DEBUG line above is repeated several times, once per Dask worker>
INFO:root:Dask client initialized: <Client: 'tcp://127.0.0.1:41411' processes=9 threads=72, memory=251.23 GiB>
INFO:root:Loading and transposing expression data...
ERROR:root:An error occurred: [Errno 2] No such file or directory: 'FinalCounts.csv'
<snip>
exit

>>>>>>>>>>>>>
This looks OK (you just need to supply the missing input file, FinalCounts.csv).

If you are happy with this, go to the next step:
(Please note: we have put all the sample PBS and run scripts in your /scratch/snumber/scripts folder)


The next part is to create a PBS script and a run script.

mkdir /scratch/snumber/scripts
mkdir /scratch/snumber/scripts/data
cd  /scratch/snumber/scripts
You can copy all the scripts into this folder, as well as create a run script.
e.g.: cp grnboost3.py /scratch/snumber/scripts/grnboost3.py

Here is a sample PBS script to use 6 CPUs:
>>>>>>>>>>>>>>>
#!/bin/bash
#PBS -m e
#PBS -M YourEmail@griffith.edu.au
#PBS -N GRN_pearl
#PBS -q workq
#PBS -l select=1:ncpus=6:mem=66gb,walltime=30:00:00
cd $PBS_O_WORKDIR
singularity exec -B /scratch/snumber:/scratch --pwd /scratch /export/home/snumber/Containers/aertslab-pyscenic-0.12.1.sif "/scratch/scripts/run1.sh"
#singularity exec -B /scratch/snumber:/scratch --pwd /scratch /export/home/snumber/Containers/aertslab-pyscenic-scanpy-0.12.1-1.9.1.sif "/scratch/scripts/run2.sh"
exit

>>>>>>>>>>>>>>>>


Next, create a run script inside /scratch/snumber/scripts:

cat run1.sh
>>>>>>>>>>>>
cd /scratch/scripts
python /scratch/scripts/grnboost3.py

>>>>>>>>>>>>
Make it executable:
chmod +x run1.sh

Now simply submit the PBS job (assuming the PBS script above was saved as pbs.01):
cd  /scratch/snumber/scripts
qsub pbs.01

After the run has completed, check through the output and error files and fix any errors. Voilà!