Contents
...
This document is loosely modeled after “A Beginner's Guide to the Barrine Linux HPC Cluster” written by Dr. David Green (HPC Manager, UQ), the ICE Cluster User Guide written by Bryan Hughes, and the Wiki pages of the City University of New York (CUNY) HPC Center (see references for further details).
2. Gowonda Overview
2.1 Hardware
2.1.1 Hardware (2024 upgrade)
Node Names | Number of Nodes | Memory per Node | Cores per Node | Processor Type | GPU Cards
---|---|---|---|---|---
n061 (gpu node) | 1 | 500 GB | 96 | 2 x AMD EPYC 7413 24-Core Processor | 5 x NVIDIA A100 80GB PCIe
2.1.2 Hardware (2019 upgrade)
Node Names | Number of Nodes | Memory per Node | Cores per Node | Processor Type | GPU Cards
---|---|---|---|---|---
gc-prd-hpcn001 gc-prd-hpcn002 gc-prd-hpcn003 gc-prd-hpcn004 gc-prd-hpcn005 gc-prd-hpcn006 | 6 | 192 GB | 72 | 2 x Intel Xeon Gold 6140 |
n060 (gpu node) | 1 | 380 GB | 72 | Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 8 x NVIDIA V100-PCIE-32GB
2.2 Old Hardware
Node Type | Total Number of Cores | Total Amount of Memory (GB) | Compute Nodes | Cores per Node | Memory per Node (GB) | Memory per Core (GB) | Processor Type
---|---|---|---|---|---|---|---
Small Memory Nodes | 48 | 48 | 4 | 12 | 12 | 1 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Medium Memory Nodes | 108 | 216 | 9 | 12 | 24 | 2 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Large Memory Nodes | 72 | 288 | 6 | 12 | 48 | 4 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Extra Large Memory Nodes with GPU (see table below for more details about GPU) | 48 | 384 | 4 | 12 | 96 | 8 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Extra Large Memory Nodes (no GPU) | 96 | 768 | 8 | 12 | 96 | 8 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Special Nodes (no GPU) | 64 | 128 | 16 | 16 | 32 | 2 | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Special Large Memory Nodes (no GPU) | 64 | 1024 | 8 | 16 | 256 | 16 | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Special Large Memory Nodes (no GPU) | 192 | 1536 | 24 | 16 | 128 | 8 | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Please note that each of the Extra Large nodes (n020, n021, n022 and n023) has 2 NVIDIA Tesla C2050 GPU cards.
Node Type | Programming Model | Total Number of CUDA Cores | Total Amount of Memory (GB) | Compute Nodes | CUDA Cores per Node | CUDA Cards per Node | Memory per Node (GB) | Memory per Core (GB) | Processor Type
---|---|---|---|---|---|---|---|---|---
Extra Large Memory Nodes with GPU | GPU | 4 x 2 x 448 | 384 | 4 | 2 x 448 | 2 | 96 | 8 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Special Administrative Nodes (Not used for computing purposes)
Node Type | Node Name | Total Number of Cores | Total Amount of Memory (GB) | Memory per Node (GB) | Memory per Core (GB) | Processor Type
---|---|---|---|---|---|---
File Servers | n024, n025 | 24 | 96 | 48 | 4 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Test Node | testhpc | 12 | 24 | 24 | 2 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Login Node | gowonda | 12 | 48 | 48 | 4 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Admin Node | n030 (admin) | 12 | 24 | 24 | 2 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
More information about the Intel(R) Xeon(R) CPU X5650 processor can be obtained here.
In addition to the above, there is a special Windows HPC node.
Node Type | Total Number of Cores | Total Amount of Memory (GB) | Compute Nodes | Cores per Node | Memory per Node (GB) | Memory per Core (GB) | Processor Type | OS
---|---|---|---|---|---|---|---|---
Windows 2008 Large Memory Node | 12 | 48 | 1 | 12 | 48 | 4 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | Windows 2008 R2 with Windows HPC Pack
Instructions for using the Windows HPC node are given in a separate user guide.
...
2.3 Software
The operating system on gowonda is Red Hat Enterprise Linux (RHEL) 6.1, updated with the SGI Foundation 2.4 suite and the Accelerate 1.2 support package. The queuing system is PBS Pro 11. The exception is the Windows HPC node, which runs Windows 2008 R2 with the Windows HPC Pack and the Windows HPCS job scheduler. Gowonda provides the following compilers and parallel library software; much more detail on each can be found below.
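If you want to confirm these details from the login node yourself, a quick check along these lines should work (a sketch assuming a standard bash shell; the output will reflect whatever versions are currently deployed):
cat /etc/redhat-release   # shows the Red Hat release the node is running
qstat -q                  # lists the PBS Pro queues configured on the cluster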
...
The following third-party applications are currently installed or will be installed shortly. The Gowonda HPC Center staff will be happy to work with any user interested in installing additional applications, subject to meeting that application's license requirements.
Software | Version | Usage | Status
---|---|---|---
AutoDOCK | 4.2.3 | module load autodock423 autodockvina112 | Installed
Bioperl | | | TBI (To be Installed)
Blast | | | Installed
CUDA | 4.0 | | Installed
Gaussian03 | | | Installed
Gaussian09 | | | Installed
Gromacs | | | Installed
gromos | 1.0.0 | | Installed
MATLAB | 2009b, 2011a | | Installed
MrBayes | | | TBI (To be Installed)
NAMD | | module load NAMD/NAMD28b1 | Installed
numpy | 1.5.1 | | Installed
PyCogent | - | | Installed
qiime | | | TBI (To be Installed)
R | | | Installed
SciPy | 0.9.0 | | Installed
VASP | - | - | TBI (To be Installed)
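Applications whose Usage column shows a "module load" line are provided through environment modules. As a minimal sketch of how the module commands are typically used (the NAMD module name is taken from the table above; other module names may differ):
module avail                  # list all software modules available on gowonda
module load NAMD/NAMD28b1     # add NAMD to your environment
module list                   # show the modules currently loaded
module unload NAMD/NAMD28b1   # remove it again when no longer needed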
The following graphics, IO, and scientific libraries are also supported.
Software | Version | Usage | Status
---|---|---|---
Atlas | 3.9.39 | | Installed
FFTW | 3.2.2, 3.3a | | Installed
GSL | 1.09, 1.15 | | Installed
LAPACK | 3.3.0 | - | -
NETCDF | 3.6.2, 3.6.3, 4.0, 4.1.1, 4.1.2 | | Installed
3 Support
3.1 Hours of Operation
...
Users with further questions or requiring immediate assistance in the use of the systems should submit a ticket here.
Support staff may be contacted via:
Griffith Library and IT Help: 3735 5555 or x55555
Email support: Submit form
You can log cases on the service desk (category: eResearch services.HPC)
eResearch Services, Griffith University
Phone: +61 7 3735 6649 (GMT +10 hours)
Email: Submit a Question to the HPC Support Team
Web: eresearch.griffith.edu.au/eresearch-services
3.3 Service Alerts
...
In addition to the home directory, users have access to a /scratch directory. A soft link to this scratch directory is also provided in the home directory. Files in the system temporary and scratch directories are not backed up, and all files older than 15 days will be deleted from the /scratch/<snumber> directory.
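As a minimal sketch of working in scratch (assuming your login name matches your snumber and that the soft link in your home directory is called scratch; adjust the names to suit your account):
cd /scratch/$USER                        # or follow the soft link: cd ~/scratch
cp ~/input.dat .                         # stage data into scratch (input.dat is an illustrative file name)
find /scratch/$USER -type f -mtime +14   # list files 15 or more days old, i.e. eligible for deletion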
4 Access
4.1 Request an account on gowonda
Please fill out this form:
https://conf-ers.griffith.edu.au/display/GHCD/Griffith+HPC+AccountSupport+Request+Form?src=contextnavpagetreemode
A staff member from the gowonda HPC cluster team will contact you to provide you with login details.
...
To log in to the cluster, ssh to
gc-prd-hpclogin1.rcs.griffith.edu.au
You will need to be connected to the Griffith network (either on campus at Griffith or through VPN from home).
Please check VPN installation instruction here:
https://intranet.secure.griffith.edu.au/computing/remote-access/accessing-resources/virtual-private-network
ssh on Windows platform
To use X11 port forwarding, install Xming X Server for Windows first.
See instructions here
If X11 forwarding is not needed (true for most cases), do not install it.
To install an ssh client, e.g. PuTTY, please follow these instructions
ssh on Linux and Mac platforms
ssh -Y gc-prd-hpclogin1.rcs.griffith.edu.au
Once you are on the system, have a look around. Your home directory is stored in:
/exports/home/<SNumber>
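Once logged in, a quick look around might go like this (a sketch assuming a standard bash login shell):
pwd         # should print /exports/home/<SNumber>
ls -la      # the soft link to your scratch directory appears here
du -sh .    # rough total size of your home directory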
...
Warning: Do not run jobs on the login node "gc-prd-hpclogin1.rcs.griffith.edu.au". Please use it for compilation and small debugging runs only.
...
7.1 Submit Jobs
qsub <pbscriptFile>
A simple PBS script file looks like the following:
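As a rough sketch, a minimal PBS Pro script could look like this (the job name, resource requests, walltime and program name are illustrative assumptions; adjust them for your own job):
#!/bin/bash
#PBS -N myjob                       # job name (illustrative)
#PBS -l select=1:ncpus=1:mem=4gb    # request one core and 4 GB of memory (illustrative)
#PBS -l walltime=01:00:00           # one hour wall-clock limit (illustrative)
cd $PBS_O_WORKDIR                   # run from the directory the job was submitted from
./my_program                        # replace with your own executable or commands
Save the script to a file and submit it with qsub as shown above.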
...
qstat -1an
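A couple of commonly useful variants, using standard PBS options (substitute your own snumber and job ID):
qstat -u <snumber>    # list only your own jobs
qstat -f <jobID>      # show full details for a single job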
7.3 Delete Jobs
qdel <jobID>
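For example, if qstat reports a job with ID 123456.pbsserver (an illustrative ID), it can be removed with:
qdel 123456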
7.4 Check the current status of the cluster
pbsnodes
pbsjobs
...
pbsqueues
To view the current cluster status, use the elinks text browser on the login node, as shown below:
pbsnodes (elinks http://localhost:3000/nodes)
pbsjobs (elinks http://localhost:3000/jobs)
pbsqueues (elinks http://localhost:3000/queues)
(Press "Q" to quit the text-based browser.)
8 Examples
8.1 Intel MPI and Compiler with Interactive Session
...
We encourage you to acknowledge significant support you have received from the Griffith University eResearch Service & Specialised Platforms Team in your publications.
...
"We gratefully acknowledge the support of the Griffith University eResearch Services University eResearch Service & Specialised Platforms Team and the use of the High Performance Computing Cluster "Gowonda" to complete this research."
If you need to give hardware specifics so that readers can reproduce results, they are:
Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
EDR InfiniBand Interconnect
For more technical information, please check here
If you need to give the hardware specifics of the GPU node so that readers can reproduce results, they are:
Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
EDR InfiniBand Interconnect
GPU card: Tesla V100-PCIE-32GB
HPE ProLiant XL270d Gen10 Node CTO Server
Please advise us of this acknowledgment by emailing us at eresearch-services@griffith.edu.au for our record keeping.
...