Table of Contents
Qs 1: How do I cite or mention the cluster in papers? What is the preferred method?
...
No Format
You could try a trace:
cmake --trace . 2>&1 | tee /tmp/cmakeOut.txt

You may have to mention the paths explicitly. For example:
cmake . -DLAPACK_LIBRARIES=/sw/library/lapack/lapack-3.6.0/3.6.0/lib64/liblapack.so -DBLAS_LIBRARIES=/sw/library/blas/CBLAS/lib/cblas_LINUX.so

OR:
cmake . -DLAPACK_LIBRARIES=/sw/library/lapack/lapack-3.6.0/3.6.0/lib64/liblapack.so -DBLAS_LIBRARIES=/sw/library/blas/CBLAS/lib/cblas_LINUX.so -DCMAKE_INSTALL_PREFIX=/sw/simbody/353
Qs28: How do I check the number of CPUs my job is using?
No Format
You can find out on which compute node your job is running:
qstat -1an | grep snumber

e.g.: qstat -1an | grep s2761086
4598354.pbsserv s2761086 workq DT_k-e_04 21795 1 1 10gb 99999 R 523:2 n010/0

Here you can see it is running on n010. Then you can do this:
ssh nodename -t "htop"
or
ssh nodename -t "htop -u username"

e.g. ssh n010 -t "htop -u s2761086"

Press the <F2> key, go to "Columns", and add PROCESSOR under "Available Columns". The CPU ID currently used by each process will appear under the "CPU" column.
Qs29: How to customize an environment variable using modules
...
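As an illustration only, here is a minimal sketch of how a personal modulefile could set an environment variable, assuming the Environment Modules system used elsewhere on this page; the ~/privatemodules directory, the module name myenv, and the variable and paths are placeholders chosen for this example, not site defaults.

No Format
# Create a personal modulefile that sets one variable and prepends a path
mkdir -p ~/privatemodules
cat > ~/privatemodules/myenv <<'EOF'
#%Module1.0
## Hypothetical personal modulefile
setenv       MY_DATA_DIR  /export/home/s123456/data
prepend-path PATH         /export/home/s123456/bin
EOF

module use ~/privatemodules     # make your personal modulefiles visible
module load myenv               # "module unload myenv" reverses the change
echo $MY_DATA_DIR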
Qs 31: Multiple cores are requested and allocated by PBS but the job runs on only 1 core. Why is that?
This contribution is from Nicholas Dhal, an active Griffith HPC user, and is gratefully acknowledged.
...
Qs 32: How to check the remaining licenses on the license server
...
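As a generic sketch only, assuming the application's licences are managed by a FlexNet/FlexLM server and that the lmutil tool is available on the login node; the port@host value below is a placeholder, not the actual Griffith licence server.

No Format
# Query a FlexLM licence server and show how many seats are in use (placeholder server shown)
lmutil lmstat -a -c 27000@licence-server.example.edu.au | grep "Users of"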
No Format
#!/bin/bash
#PBS -N jobName
####PBS -m abe
####PBS -M YourEmail@griffith.edu.au
#PBS -q routeq
#PBS -l select=1:ncpus=1:mem=2gb,walltime=5:00:00

#======================================================#
# USER CONFIG
#======================================================#
INPUT_FILE="hello.py"
OUTPUT_FILE="$PBS_JOBNAME.out"
MODULE_NAME="python/3.7.4"
PROGRAM_NAME="python"
# Set as true if you need those /lscratch files.
COPY_SCRATCH_BACK=true

#======================================================#
# MODULE is loaded
#======================================================#
NP=$(wc -l < $PBS_NODEFILE)
source /etc/profile.d/modules.sh
module load $MODULE_NAME
cat $PBS_NODEFILE

#======================================================#
# SCRATCH directory is created on the local disks
#======================================================#
SCRDIR=/lscratch/$LOGNAME/$PBS_JOBID
if [ ! -d "$SCRDIR" ]; then
    mkdir -p $SCRDIR
fi

#======================================================#
# TRANSFER input files to the scratch directory
#======================================================#
# just copy the input file
cp -r $PBS_O_WORKDIR/$INPUT_FILE $SCRDIR
# copy everything (Option)
#cp -r $PBS_O_WORKDIR/* $SCRDIR

#======================================================#
# PROGRAM is executed with the output or log file
# directed to the working directory
#======================================================#
echo "START TO RUN WORK"
cd $SCRDIR

# Run a system-wide sequential program
##$PROGRAM_NAME < $INPUT_FILE >& $PBS_O_WORKDIR/$OUTPUT_FILE
$PROGRAM_NAME $INPUT_FILE >& $SCRDIR/$OUTPUT_FILE
###$PROGRAM_NAME $INPUT_FILE >& $PBS_O_WORKDIR/$OUTPUT_FILE

# Run an MPI program (Option)
#### For openmpi, use the following syntax ####
#module load mpi/openmpi/4.0.2
#mpiexec $PROGRAM_NAME < $INPUT_FILE >& $OUTPUT_FILE
#### For intel mpi, use the following syntax ####
#module load intel/2019up5/mpi
#mpiexec -n $NP $PROGRAM_NAME < $INPUT_FILE >& $OUTPUT_FILE
#mpirun -np $NP $PROGRAM_NAME < $INPUT_FILE >& $OUTPUT_FILE

# Run an OpenMP program (Option)
#export OMP_NUM_THREADS=$NP
#$PROGRAM_NAME < $INPUT_FILE >& $OUTPUT_FILE

sleep 60

#======================================================#
# RESULTS are migrated back to the working directory
#======================================================#
if [[ "$COPY_SCRATCH_BACK" == *true* ]]
then
    echo "COPYING SCRATCH FILES TO $PBS_O_WORKDIR"
    cp -rp $SCRDIR/* $PBS_O_WORKDIR
    if [ $? != 0 ]; then
    {
        echo "Sync ERROR: problem copying files from $SCRDIR to $PBS_O_WORKDIR;"
        echo "Contact HPC admin for a solution."
        exit 1
    }
    fi
fi

#======================================================#
# DELETING the local scratch directory
#======================================================#
cd $PBS_O_WORKDIR
if [[ "$SCRDIR" == *scratch* ]]
then
    echo "DELETING SCRATCH DIRECTORY $SCRDIR"
    rm -rf $SCRDIR
    echo "ALL DONE!"
fi

#======================================================#
# ALL DONE
#======================================================#
## End-of-job summary
echo "qstat -H $PBS_JOBID"
echo "qstat -xf $PBS_JOBID"
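Assuming the script above is saved as, say, myjob.pbs (a file name chosen here only for illustration), it can be submitted and monitored with the standard PBS commands:

No Format
qsub myjob.pbs            # submit; prints a job ID such as 4598354.pbsserv
qstat -u $USER            # list your queued and running jobs
qstat -f <jobid>          # full details of a job while it runs
qstat -xf <jobid>         # details of a finished job (as echoed at the end of the script)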
Qs 56: NCMAS process and application
NCMAS facilities overview and who should apply
https://youtu.be/7ZZVk4HtdDY
NCMAS process and application 2021
https://youtu.be/hmV_j5GFgI0
Qs 57: What kind of storage and compute is available on Griffith HPC
...
Additionally, and in parallel, you can also apply directly when the application round opens.
Please note that projects are given a fixed allocation per quarter on a use-it-or-lose-it basis. Allocations cannot be carried forward or backward into other quarters. The standard disk space per project is 75GB in /scratch; if a project needs more, you will need to contact help@nci.org.au.
Students cannot be a lead CI on an NCI project; however, for the QCIF share, postdocs can be. For NCMAS, the lead CI is required to have an ARC or NHMRC grant or equivalent, which is why larger groups apply for NCMAS. A grant is not required for a project under QCIF. However, QCIF allocations are small, around 20-50 thousand service units per quarter. Larger allocations are only available through NCMAS.
Some applications, like Mathematica and Matlab, are licensed software. Mathematica is only available to ANU researchers on NCI. For Matlab, Griffith would need to contact NCI to set up its institutional license; at the moment this is not in place, so Matlab cannot be used unless you have your own license. Even in that case, you would need to get in touch with NCI first to see whether you can use Matlab on Gadi.
In general, allocations are given in service units (SUs). One core hour is charged at 2 SUs. So if you have a calculation running on 4 cores for 48 hours, you will be charged 4*48*2 = 384 SUs for that calculation.
If more disk space (e.g. 300GB) is needed, you will need to talk to NCI about increasing the space in /scratch to accommodate this. If more RAM (e.g. 400GB) is needed, you will need to make sure you run in a queue that supports that RAM request. Such queues may be charged at more than 2 SUs per core hour, so you would need to factor that in.
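As a quick back-of-the-envelope helper (illustrative only; the default of 2 SUs per core hour is the standard rate mentioned above, and some queues charge more), the same arithmetic can be scripted:

No Format
#!/bin/bash
# su_estimate.sh - hypothetical helper, not an NCI tool
# Usage: ./su_estimate.sh <ncpus> <walltime_hours> [su_per_core_hour]
NCPUS=${1:?usage: $0 <ncpus> <walltime_hours> [su_per_core_hour]}
HOURS=${2:?usage: $0 <ncpus> <walltime_hours> [su_per_core_hour]}
RATE=${3:-2}          # default 2 SUs per core hour; override for premium queues
echo "Estimated cost: $(( NCPUS * HOURS * RATE )) SUs"

For example, ./su_estimate.sh 4 48 reports 384 SUs, matching the worked example above.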
But talk to NCI (help@nci.org.au) first to see whether you can use the application (e.g. Matlab) on NCI before you even consider applying for an allocation.
Qs69: How to install bioinformatics software in your home directory
...
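As an illustration only (assuming the python/3.7.4 module shown elsewhere on this page and a tool that is installable from PyPI; cutadapt is just an example package, substitute your own):

No Format
module load python/3.7.4

# Option 1: install into ~/.local with pip
pip install --user cutadapt
export PATH=$HOME/.local/bin:$PATH

# Option 2: keep tools in a self-contained virtual environment in your home directory
python -m venv $HOME/bioenv
source $HOME/bioenv/bin/activate
pip install cutadapt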
The login or head node of each cluster is a resource that is shared by many users. Running a GUI job on the login node is prohibited and may adversely affect other users. X11 Forwarding is only possible for interactive jobs.
Please note that there is a performance penalty when running a GUI job on the compute nodes using the method outlined below.
Set up X11 forwarding
To use X11 port forwarding, first install the Xming X Server on your Windows laptop/desktop. Install the Xming fonts package as well.
See instructions here: https://griffith.atlassian.net/wiki/spaces/GHCD/pages/4035477/xming
...
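Once Xming is running, a session might look like the following sketch. The bastion and login-node addresses are the ones used later on this page; the ssh -X flag and the PBS interactive options (-I -X) are standard, but the queue and resource values shown are placeholders, so check with the HPC team for the values appropriate to your job.

No Format
# Enable X11 forwarding on each hop (use -Y instead of -X if -X gives errors)
ssh -X s123456@gc-prd-bastion-1.itc.griffith.edu.au
ssh -X s123456@10.250.250.3

# On the login node, confirm forwarding works, then start an interactive job with X11
echo $DISPLAY                     # should show something like localhost:10.0
qsub -I -X -l select=1:ncpus=1:mem=2gb -l walltime=1:00:00
xclock                            # small test X application on the compute node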
HPC bastion servers provide Multi-Factor Authentication (MFA) as an additional layer of cybersecurity. You will need to authenticate using one of the methods Griffith supports (PingID app, YubiKeys, etc.). Unfortunately this option is no longer available to HPC users.
No Format
ssh -o ProxyCommand="ssh -W %h:%p s123456@gc-prd-bastion-1.itc.griffith.edu.au" s123456@10.250.250.3

OR

ssh -l s123456 \
    -o 'ProxyCommand ssh -l s123456 %h nc 10.250.250.3 22' \
    -o 'HostKeyAlias 10.250.250.3' \
    gc-prd-bastion-1.itc.griffith.edu.au

You may update your ~/.ssh/config file and make the command simpler:

Host hpclogin1
    Hostname 10.250.250.3
    ProxyCommand ssh s123456@gc-prd-bastion-1.itc.griffith.edu.au -W %h:%p

Now all you have to do is type the following ssh command:

ssh s123456@hpclogin1

Note 1:
=======
OpenSSH version 7.3 or above: if there are multiple jump hosts, you can list them as a comma-separated ProxyJump value and the servers will be visited in the order listed:

Host hpclogin1
    Hostname 10.250.250.3
    ProxyJump gc-prd-bastion-1.itc.griffith.edu.au,na-prd-bastion-1.itc.griffith.edu.au
    User s123456

Note 2: Multihop transfers
===========================
sftp -o ProxyCommand="ssh -W %h:%p s123456@gc-prd-bastion-1.itc.griffith.edu.au" s123456@10.250.250.3

scp -o ProxyCommand="ssh -W %h:%p user1@server1" user2@server2:/<remotePath> <localpath>

To copy a file named core.10437 from the HPC login node to the local machine:
=====================================================================
scp -o ProxyCommand="ssh -W %h:%p s123456@gc-prd-bastion-1.itc.griffith.edu.au" s123456@10.250.250.3:/export/home/s123456/core.10437 .

To copy a directory named tmp2 from the local machine to the HPC login node:
====================================================================
scp -o ProxyCommand="ssh -W %h:%p s123456@gc-prd-bastion-1.itc.griffith.edu.au" -r tmp2 s123456@10.250.250.3:/export/home/s123456/
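Note also that on OpenSSH 7.3 or newer, the ProxyCommand examples above can usually be shortened with the -J (ProxyJump) option, using the same hosts as above:

No Format
ssh -J s123456@gc-prd-bastion-1.itc.griffith.edu.au s123456@10.250.250.3
scp -o ProxyJump=s123456@gc-prd-bastion-1.itc.griffith.edu.au s123456@10.250.250.3:/export/home/s123456/core.10437 .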
...
- MobaXterm: http://mobaxterm.mobatek.net/download-home-edition.html
- PuTTY
- FileZilla
- Windows WSL (Windows Subsystem for Linux) lets you run the Linux versions of ssh under Windows. Running wsl --install should get you the command-line ssh, scp, and sftp clients (see the sketch below).
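For the WSL route, a minimal sketch (assuming Windows 10 version 2004+ or Windows 11 with administrator rights; hostnames are the same ones used elsewhere on this page, and the ProxyCommand/ProxyJump examples above apply unchanged inside WSL):

No Format
# From an administrator PowerShell or Command Prompt:
wsl --install

# Then, inside the WSL shell, the usual OpenSSH clients are available:
ssh s123456@gc-prd-bastion-1.itc.griffith.edu.au
scp somefile.txt s123456@gc-prd-bastion-1.itc.griffith.edu.au:~/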
...