...

This document is loosely modeled after "A Beginner's Guide to the Barrine Linux HPC Cluster" written by Dr. David Green (HPC manager, UQ), the ICE Cluster User Guide written by Bryan Hughes, and the Wiki pages of the City University of New York (CUNY) HPC Center (see references for further details).

2. Gowonda Overview

2.1 Hardware

2.1.1 Hardware (2024 upgrade)

2.1.2 Hardware (2019 upgrade)
Node Names | Number of Nodes | Mem per Node | Cores per Node | Processor Type | GPU Cards
gc-prd-hpcn001 - gc-prd-hpcn006 | 6 | 192 GB | 72 | 2x Intel Xeon 6140 | -
n060 (gpu node) | 1 | 380 GB | 72 | Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 8 x NVIDIA V100-PCIE-32GB
n061 (gpu node) | 1 | 500 GB | 96 | 2x AMD EPYC 7413 24-Core Processor | 5 x NVIDIA A100 80GB PCIe


2.2 Old Hardware

Node Type | Total Number of Cores | Total Amount of Memory (GB) | Compute Nodes | Cores per Node | Mem per Node (GB) | Memory per Core (GB) | Processor Type
Small Memory Nodes | 48 | 48 | 4 (n001-n004) | 12 | 12 | 1 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Medium Memory Nodes | 108 | 216 | 9 (n005-n009, n010-n012, n019) | 12 | 24 | 2 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Large Memory Nodes | 72 | 288 | 6 (n013-n018) | 12 | 48 | 4 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Extra Large Memory Nodes with GPU (see table below for more details about the GPUs) | 48 | 384 | 4 (n020-n023) | 12 | 96 | 8 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz, NVIDIA Tesla C2050
Extra Large Memory Nodes (no GPU) | 96 | 768 | 8 (n031-n038) | 12 | 96 | 8 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Special Nodes (no GPU) | 64 | 128 | 16 (n039-n042) | 16 | 32 | 2 | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Special Large Memory Nodes (no GPU) | 64 | 1024 | 8 (n044-n047) | 16 | 256 | 16 | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Special Large Memory Nodes (no GPU) | 192 | 1536 | 24 (aspen01-aspen12) | 16 | 128 | 8 | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz

Please note that each of the Extra Large Memory nodes (n020, n021, n022 and n023) has 2 NVIDIA Tesla C2050 GPU cards.

Node Type | Programming Model | Total Number of CUDA Cores | Total Amount of Memory (GB) | Compute Nodes | CUDA Cores per Node | CUDA Cards per Node | Mem per Node (GB) | Memory per Core (GB) | Processor Type
Extra Large Memory Nodes with GPU | GPU | 4 x 2 x 448 | 384 | 4 (n020-n023) | 2 x 448 | 2 | 96 | 8 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz, NVIDIA Tesla C2050

Special Administrative Nodes (not used for computing purposes)

Node Type | Node Name | Total Number of Cores | Total Amount of Memory (GB) | Mem per Node (GB) | Memory per Core (GB) | Processor Type
File Servers | n024, n025 | 24 | 96 | 48 | 4 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Test Node | testhpc | 12 | 24 | 24 | 2 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Login Node | gowonda | 12 | 48 | 48 | 4 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Admin Node | n030 (admin) | 12 | 24 | 24 | 2 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz

More information about the Intel(R) Xeon(R) CPU X5650 processor can be obtained here.

In addition to the above, there is a special Windows HPC node.

Node Type | Total Number of Cores | Total Amount of Memory (GB) | Compute Nodes | Cores per Node | Mem per Node (GB) | Memory per Core (GB) | Processor Type | OS
Windows 2008 Large Memory Node | 12 | 48 | 1 (n029) | 12 | 48 | 4 | Intel(R) Xeon(R) CPU X5650 @ 2.67GHz | Windows 2008 R2 with Windows HPC Pack

Instructions for using the Windows HPC node are given in a separate user guide.

...

2.3 Software

The operating system on gowonda is Red Hat Enterprise Linux (RHEL) 6.1, updated with the SGI Foundation 2.4 suite and the Accelerate 1.2 support package. The queuing system used is PBS Pro 11. The exception is the Windows HPC node, which runs Windows 2008 R2 with the Windows HPC Pack and the Windows HPCS job scheduler. Gowonda has the following compilers and parallel library software; much more detail on each can be found below.
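As a quick orientation after logging in, you can list the software made available through environment modules and the configured PBS Pro queues. This is a minimal sketch using standard Environment Modules and PBS Pro commands; the exact module and queue names on gowonda will differ.

No Format
# List all software provided via environment modules
module avail

# List the PBS Pro queues and their state
qstat -Q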

...

The following third-party applications are currently installed or will be installed shortly. The Gowonda HPC Center staff will be happy to work with any user interested in installing additional applications, subject to meeting that application's license requirements.

Software | Version | Usage | Status
AutoDOCK | 4.2.3 | module load autodock423 autodockvina112 | Installed
Bioperl |   |   | TBI (To be Installed)
Blast |   |   | Installed
CUDA | 4.0 | module load cuda/4.0 | Installed
Gaussian03 |   | module load gaussian/g03 | Installed
Gaussian09 |   | module load gaussian/g09 | Installed
Gromacs |   |   | Installed
gromos | 1.0.0 | module load gromos/1.0.0 | Installed
MATLAB | 2009b, 2011a | module load matlab/2009b, module load matlab/2011a | Installed
MrBayes |   |   | TBI (To be Installed)
NAMD |   | module load NAMD/NAMD28b1 | Installed
numpy | 1.5.1 | module load python/2.7.1 | Installed
PyCogent | - | module load python/2.7.1 | Installed
qiime |   |   | TBI (To be Installed)
R |   | module load R/2.13.0 | Installed
SciPy | 0.9.0 | module load python/2.7.1 | Installed
VASP | - | - | TBI (To be Installed)

The following graphics, I/O, and scientific libraries are also supported.

Software | Version | Usage | Status
Atlas | 3.9.39 | module load ATLAS/3.9.39 | Installed
FFTW | 3.2.2, 3.3a | module load fftw/3.3-alpha-intel | Installed
GSL | 1.09, 1.15 | module load gsl/gsl-1.15 | Installed
LAPACK | 3.3.0 | - | -
NetCDF | 3.6.2, 3.6.3, 4.0, 4.1.1, 4.1.2 | e.g. module load NetCDF/4.1.2 | Installed
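As an illustration of the module workflow for the packages above, the sketch below loads the Python module listed for numpy and SciPy and verifies that both import. It assumes the module name python/2.7.1 from the table; adjust to whatever "module avail" reports on gowonda.

No Format
# Load the Python build that provides numpy and SciPy
module load python/2.7.1

# Confirm which modules are active in the current shell
module list

# Quick import check (Python 2 syntax, matching the 2.7.1 module)
python -c "import numpy, scipy; print numpy.__version__, scipy.__version__"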

3 Support

3.1 Hours of Operation

...

Users with further questions or requiring immediate assistance in the use of the systems should send an email to:

...

Submit a ticket here.


Support staff may be contacted via:
Griffith Library and IT Help: 3735 5555 or x55555
Email support: hpc-services@gi.griffith.edu.au (Submit form)
You can log cases on the service desk (category: eResearch services.HPC)

eResearch Services, Griffith University
Phone: +61 7 3735 6649 (GMT +10 hours)
Email: hpc-services@gi.griffith.edu.au (Submit a Question to the HPC Support Team)
Web: eresearch.griffith.edu.au/eresearch-services

3.3 Service Alerts

...

In addition to the home directory, users have access to a /scratch directory. A soft link to this scratch directory is also provided in the home directory. Files on system temporary and scratch directories are not backed up. All files older than 15 days will be deleted from the /scratch/<snumber> directory.
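For example, you can keep large intermediate job files under your scratch area and check which of them are approaching the 15-day limit. A minimal sketch; replace <snumber> with your own username, as in the path above.

No Format
# Work from your scratch directory (also reachable via the soft link in your home directory)
cd /scratch/<snumber>

# List files older than 15 days, i.e. candidates for automatic deletion
find /scratch/<snumber> -type f -mtime +15 -ls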


4 Access

4.1 Request an account on gowonda

Please fill out this form:
https://conf-ers.griffith.edu.au/display/GHCD/Griffith+HPC+AccountSupport+Request+Form?src=contextnavpagetreemode
A staff member from the gowonda HPC cluster team will contact you with your login details.

...

To log in to the cluster, ssh to

No Format
gc-prd-hpclogin1.rcs.griffith.edu.au

You will need to be connected to the Griffith network (either on campus at Griffith or through VPN from home).

Please check the VPN installation instructions here:

https://intranet.secure.griffith.edu.au/computing/remote-access/accessing-resources/virtual-private-network

ssh on Windows platform

To use X11 port forwarding, install Xming X Server for Windows first.
See instructions here

If X11 forwarding is not needed (true for most cases), do not install it.

To install an ssh client, e.g. PuTTY, please follow these instructions

ssh on Linux and Mac platforms

ssh -Y gc-prd-hpclogin1.rcs.griffith.edu.au

Once you are on the system, have a look around. Your home directory is stored in:
/exports/home/<SNumber>
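A few harmless commands for that first look around (a sketch only; adjust as needed):

No Format
# Confirm where you are and what is in your home directory
pwd
ls -la

# Check the free space on the filesystem holding the current directory
df -h .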

...

No Format
 Warning
Do not run jobs on the login node "gc-prd-hpclogin1.rcs.griffith.edu.au". Please use it for compilation and small debugging runs only.

...

7.1 Submit Jobs


qsub <pbscriptFile> 


A simple PBS script file is as follows:

...
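The site's own example script is elided above. As a generic illustration only, a minimal PBS Pro script might look like the sketch below; the job name, resource request, and program name (myprogram) are placeholders, not gowonda-specific values.

No Format
#!/bin/bash
#PBS -N myjob                       # job name (placeholder)
#PBS -l select=1:ncpus=1:mem=1gb    # one core and 1 GB of memory (adjust to your needs)
#PBS -l walltime=01:00:00           # maximum run time

# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR

# Replace with your actual program
./myprogram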

7.3 Delete Jobs

No Format
qdel <jobID>
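For example, you would normally look up the job ID with qstat first and then delete it. A brief sketch using standard PBS Pro commands (the job ID shown is made up):

No Format
# List your own jobs to find the ID to delete
qstat -u <snumber>

# Delete the job by the ID printed in the first column of qstat output
qdel 12345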

7.4 Check the current status of the cluster

pbsnodes

pbsjobs

pbsqueues

No Format
To view the current cluster status, use the elinks text browser on the login node as follows:

pbsnodes   (elinks http://localhost:3000/nodes)
pbsjobs    (elinks http://localhost:3000/jobs)
pbsqueues  (elinks http://localhost:3000/queues)

(Press "q" to quit the text-based browser.)

8 Examples

8.1 Intel MPI and Compiler with Interactive Session

...
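As a rough sketch of what such an interactive session can look like (the resource request, module names, and program are illustrative placeholders; check "module avail" on gowonda for the actual Intel compiler and MPI module names):

No Format
# Request an interactive PBS session with 4 cores on one node
qsub -I -l select=1:ncpus=4:mpiprocs=4 -l walltime=01:00:00

# Once the interactive shell starts, load the Intel compiler and MPI modules
# (module names below are placeholders - use the names reported by "module avail")
module load intel-compiler intel-mpi

# Compile and run a small MPI program across the 4 requested cores
mpiicc hello_mpi.c -o hello_mpi
mpirun -np 4 ./hello_mpi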

http://confluence.rcs.griffith.edu.au:8080/display/GHPC/Windows+HPC+User+Guide

Update: This facility has been removed and is no longer available on gowonda.

10 Acknowledgements

We encourage you to acknowledge significant support you have received from the Griffith University eResearch Service & Specialised Platforms Team in your publications.

...

"We gratefully acknowledge the support of the Griffith University eResearch Services University eResearch Service & Specialised Platforms Team and the use of the High Performance Computing Cluster "Gowonda" to complete this research."

If you need to give hardware specifics so that readers can reproduce results, they are:

  • Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz

  • EDR InfiniBand Interconnect

For more technical information, please check here


If you need to give hardware specifics of the GPU node so that readers can reproduce results, they are:

  • Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz

  • EDR InfiniBand Interconnect
  • GPU card: Tesla V100-PCIE-32GB
  • HPE ProLiant XL270d Gen10 Node CTO server

Please advise us of this acknowledgment by emailing us at eresearch-services@griffith.edu.au for our record keeping.

...