...

This document is loosely modeled after "A Beginner's Guide to the Barrine Linux HPC Cluster" written by Dr. David Green (HPC manager, UQ), the ICE Cluster User Guide written by Bryan Hughes, and the Wiki pages of the City University of New York (CUNY) HPC Center (see references for further details).

2. Gowonda Overview

2.1 Hardware

2.1.1 Hardware (2024 upgrade)

Node            | Nodes | Mem per Node | Cores per Node | Processor Type                      | GPU
n061 (gpu node) | 1     | 1500 GB      | 96             | 2 x AMD EPYC 7413 24-Core Processor | 5 x NVIDIA A100 80GB PCIe

2.1.2 Hardware (2019 upgrade)

Node                            | Nodes | Mem per Node | Cores per Node | Processor Type                       | GPU
gc-prd-hpcn001 - gc-prd-hpcn006 | 6     | 192 GB       | 72             | 2 x Intel Xeon Gold 6140 @ 2.30GHz   | -
n060 (gpu node)                 | 1     | 1380 GB      | 72             | Intel(R) Xeon(R) Gold 6140 @ 2.30GHz | 8 x NVIDIA V100-PCIE-32GB

2.2 Old Hardware

Node Type                                                              | Nodes                        | Total Cores | Total Mem (GB) | Cores per Node | Mem per Node (GB) | Mem per Core (GB) | Processor Type
Small Memory Nodes                                                     | 4 (n001-n004)                | 48          | 48             | 12             | 12                | 1                 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Medium Memory Nodes                                                    | 9 (n005-n009,n010-n012,n019) | 108         | 216            | 12             | 24                | 2                 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Large Memory Nodes                                                     | 6 (n013-n018)                | 72          | 288            | 12             | 48                | 4                 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Extra Large Memory Nodes with GPU (see table below for GPU details)    | 4 (n020-n023)                | 48          | 384            | 12             | 96                | 8                 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz + NVIDIA Tesla C2050
Extra Large Memory Nodes (no GPU)                                      | 8 (n031-n038)                | 96          | 768            | 12             | 96                | 8                 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Special Nodes (no GPU)                                                 | 4 (n039-n042)                | 64          | 128            | 16             | 32                | 2                 | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Special Large Memory Nodes (no GPU)                                    | 4 (n044-n047)                | 64          | 1024           | 16             | 256               | 16                | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Special Large Memory Nodes (no GPU)                                    | 12 (aspen01-aspen12)         | 192         | 1536           | 16             | 128               | 8                 | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz

...

Instructions for using the Windows HPC node are given in a separate user guide.

...

2.3 Software

The operating system on gowonda is RedHat Enterprise Linux (RHEL) 6.1, updated with the SGI Foundation 2.4 suite and the Accelerate 1.2 support package. The queuing system used is PBS Pro 11. The exception is the Windows HPC node, which runs Windows 2008 R2 with the Windows HPC Pack and the Windows HPCS job scheduler. Gowonda has the following compilers and parallel library software; much more detail on each can be found below.
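Installed software on clusters of this kind is typically accessed through environment modules; the sketch below assumes gowonda follows that convention, and the module name shown is an illustrative placeholder (run "module avail" to see what is actually installed):

```shell
# List all installed software modules (compilers, MPI stacks, applications).
module avail

# Load a module before compiling or running; "intel" is a placeholder name.
module load intel

# Show which modules are currently loaded in this session.
module list
```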

...

Support staff may be contacted via:
Griffith Library and IT help: 3735 5555 or x55555
Email support: Submit form
You can also log cases on the service desk (category: eResearch services.HPC)

eResearch Services, Griffith University
Phone: +61 7 373 56649 (GMT +10 hours)
Email: Submit a Question to the HPC Support Team
Web: eresearch.griffith.edu.au/eresearch-services

3.3 Service Alerts

...

In addition to the home directory, users have access to a /scratch directory; a soft link to it is also provided in the home directory. Files in system temporary and scratch directories are not backed up, and all files older than 15 days will be deleted from the /scratch/<snumber> directory.
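To see which of your scratch files are old enough to be purged, you can list them by age first; a minimal sketch, assuming your scratch area is /scratch/<snumber> (s1234567 is a placeholder snumber, and the "results" path is illustrative):

```shell
# List files under your scratch area older than 15 days: these are
# candidates for automatic deletion.
find /scratch/s1234567 -type f -mtime +15 -ls

# Copy anything you still need back to your (backed-up) home directory.
cp -a /scratch/s1234567/results ~/results
```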


4 Access

4.1 Request an account on gowonda

Please fill out this form:
https://conf-ers.griffith.edu.au/display/GHCD/Griffith+HPC+AccountSupport+Request+Form?src=contextnavpagetreemode
A staff member from the gowonda HPC cluster team will then contact you with your login details.

...

Please check the VPN installation instructions here:

https://intranet.secure.griffith.edu.au/computing/remote-access/accessing-resources/virtual-private-network

ssh on the Windows platform

To use X11 port forwarding, first install Xming X Server for Windows (see instructions here). If X11 forwarding is not needed (true in most cases), do not install it.

To install an ssh client such as PuTTY, please follow these instructions.

ssh on the Linux and macOS platforms
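Once an ssh client is available, connecting works the same way on every platform; a minimal sketch (replace the placeholders with your own snumber and the login-node address supplied with your account details):

```shell
# Plain login session.
ssh s1234567@<gowonda-login-node>

# Add -X to enable X11 forwarding, only if you need to run GUI applications.
ssh -X s1234567@<gowonda-login-node>
```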

...

7.3 Delete Jobs

qdel <jobID>
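To find the ID of the job you want to delete, list your own jobs first; a short sketch (1234 is a made-up job ID):

```shell
# List only your jobs; the leftmost column shows each job's ID.
qstat -u $USER

# Delete the job with ID 1234, then confirm it is gone.
qdel 1234
qstat 1234
```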

7.4 Check the current status of the cluster

To view the current cluster status, use the elinks text browser on the login node:

pbsnodes   (elinks http://localhost:3000/nodes)
pbsjobs    (elinks http://localhost:3000/jobs)
pbsqueues  (elinks http://localhost:3000/queues)

(Press "Q" to quit the text-based browser.)

8 Examples

8.1 Intel MPI and Compiler with Interactive Session
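An interactive session is requested through PBS Pro's qsub -I; the sketch below uses illustrative resource amounts and placeholder module and file names, so adjust them to your own job and to what "module avail" reports:

```shell
# Request an interactive session: 1 chunk of 4 cores and 4 GB memory, 1 hour.
qsub -I -l select=1:ncpus=4:mem=4gb -l walltime=01:00:00

# Once the session starts on a compute node, load the Intel stack and
# compile and run an MPI program (module and file names are placeholders).
module load intel
mpiicc hello.c -o hello
mpirun -np 4 ./hello
```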

...

We encourage you to acknowledge significant support you have received from the Griffith University eResearch Service & Specialised Platforms Team in your publications.

...

"We gratefully acknowledge the support of the Griffith University eResearch Service & Specialised Platforms Team and the use of the High Performance Computing Cluster "Gowonda" to complete this research."

If you need to give hardware specifics for readers to reproduce results, they are:

  • Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
  • EDR InfiniBand Interconnect

For more technical information, please check here


If you need to give hardware specifics of the GPU node for readers to reproduce results, they are:

  • Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
  • EDR InfiniBand Interconnect
  • GPU card: Tesla V100-PCIE-32GB
  • HPE ProLiant XL270d Gen10 CTO server

Please advise us of this acknowledgment by emailing us at eresearch-services@griffith.edu.au for our record keeping.

...