...
This document is loosely modeled after “A Beginner's Guide to the Barrine Linux HPC Cluster” by Dr. David Green (HPC manager, UQ), the ICE Cluster User Guide by Bryan Hughes, and the wiki pages of the City University of New York (CUNY) HPC Center (see the references for further details).
2. Gowonda Overview
2.1 Hardware
2.1.1 Hardware (2024 upgrade)
2.1.2 Hardware (2019 upgrade)
| Node Names | Total Number | Mem per Node | Cores per Node | Processor Type | GPU Card |
|---|---|---|---|---|---|
| gc-prd-hpcn001 to gc-prd-hpcn006 | 6 | 192 GB | 72 | 2x Intel Xeon 6140 | |
| n061 (GPU node) | 1 | 500 GB | 96 | 2x AMD EPYC 7413 24-Core Processor | 5x NVIDIA A100 80GB PCIe |
| n060 (GPU node) | 1 | 380 GB | 72 | Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 8x NVIDIA V100-PCIE-32GB |
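If you need to confirm how a particular node is configured, the scheduler and the NVIDIA tools can report it directly. The commands below are a minimal sketch using standard PBS Pro and NVIDIA utilities; the node name n061 is taken from the table above.

```bash
# Ask PBS Pro what a node advertises (cores, memory, GPUs)
pbsnodes n061

# On a GPU node itself, list the installed GPUs and their memory
nvidia-smi
```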
2.2 Old Hardware
| Node Type | Total Number of Cores | Total Amount of Memory (GB) | Compute Nodes | Cores per Node | Mem per Node (GB) | Mem per Core (GB) | Processor Type |
|---|---|---|---|---|---|---|---|
| Small Memory Nodes | 48 | 48 | 4 | 12 | 12 | 1 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz |
| Medium Memory Nodes | 108 | 216 | 9 | 12 | 24 | 2 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz |
| Large Memory Nodes | 72 | 288 | 6 | 12 | 48 | 4 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz |
| Extra Large Memory Nodes with GPU (see the table below for more details about the GPUs) | 48 | 384 | 4 | 12 | 96 | 8 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz |
| Extra Large Memory Nodes (no GPU) | 96 | 768 | 8 | 12 | 96 | 8 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz |
| Special Nodes (no GPU) | 64 | 128 | 16 | 16 | 32 | 2 | Intel(R) Xeon(R) CPU E5-2670 @ 2.60GHz |
| Special Large Memory Nodes (no GPU) | 64 | 1024 | 8 | 16 | 256 | 16 | Intel(R) Xeon(R) CPU E5-2670 @ 2.60GHz |
| Special Large Memory Nodes (no GPU) | 192 | 1536 | 24 | 16 | 128 | 8 | Intel(R) Xeon(R) CPU E5-2670 @ 2.60GHz |
...
Instructions for using the Windows HPC node are given in a separate user guide.
...
3 Software
The operating system on gowonda is RedHat Enterprise Linux (RHEL) 6.1, updated with the SGI Foundation 2.4 suite and the Accelerate 1.2 support package. The queuing system is PBS Pro 11. The exception is the Windows HPC node, which runs Windows 2008 R2 with the Windows HPC Pack and the Windows HPCS job scheduler. Gowonda has the following compilers and parallel library software; much more detail on each can be found below.
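The versions above can be confirmed from a login node. The commands below are a minimal sketch; the `module avail` step assumes the compilers and libraries are exposed through environment modules, which is typical for SGI/PBS clusters but is an assumption here.

```bash
# Operating system release and scheduler version
cat /etc/redhat-release
qstat --version

# List the compiler and library builds on offer
# (assumes environment modules are used to manage the software stack)
module avail
```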
...
In addition to the home directory, users have access to a /scratch directory, and a soft link to it is provided in the home directory. Files in the system temporary and scratch directories are not backed up, and all files older than 15 days will be deleted from the /scratch/<snumber> directory.
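For example, the sketch below shows how to work in the scratch area and spot files that are about to be purged. The link name `~/scratch` is assumed for illustration; substitute your own s-number for `<snumber>`.

```bash
# Change into the scratch area via the soft link in the home directory
# (the link name "scratch" is assumed here; use /scratch/<snumber> otherwise)
cd ~/scratch

# List files not modified for more than 15 days, i.e. candidates for deletion;
# copy anything you still need back to your home directory first
find /scratch/<snumber> -mtime +15 -ls
```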
4 Access
4.1 Request an account on gowonda
Please fill out this form:
https://conf-ers.griffith.edu.au/display/GHCD/Griffith+HPC+Support+Request+Form?src=contextnavpagetreemode
A staff member from the gowonda HPC cluster team will contact you to provide your login details.
...
4.2 Login
To log in to the cluster, ssh to
...
Please check the VPN installation instructions here:
https://intranet.secure.griffith.edu.au/computing/remote-access/accessing-resources/virtual-private-network
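As a sketch, a typical login from a terminal looks like the following; `<gowonda-hostname>` stands in for the login node address given above and is not a real hostname, and `<snumber>` is your Griffith s-number.

```bash
# Log in from on campus, or off campus with the VPN connected
ssh <snumber>@<gowonda-hostname>
```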
ssh on the Windows platform
To use X11 port forwarding, install the Xming X Server for Windows first.
See the instructions here.
If X11 forwarding is not needed (true for most cases), do not install it.
To install an ssh client such as PuTTY, please follow these instructions.
ssh on the Linux and Mac platforms
...