(under development)

Table of Contents

Introduction

...

Using Conda Environments: create a Conda environment on the login node

No Format
# from behind VPN if off-campus or on wireless
ssh snumber@gc-prd-hpclogin1.rcs.griffith.edu.au
source /usr/local/bin/s3proxy.sh  #To gain internet access on the login node
module load anaconda3/2021.11
conda create --name snumber-tf-cpu ipykernel tensorflow pandas matplotlib
# e.g: conda create --name s123456-tf-cpu ipykernel tensorflow pandas matplotlib
exit
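
To confirm that the environment was created, you can list your Conda environments from the login node (a quick check; run it before the exit above, or in a new login session):

No Format
module load anaconda3/2021.11
conda env list   # the new environment (e.g. s123456-tf-cpu) should appear in this list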

...

If additional packages are needed after this initial install, log back into the login node and do the following:


No Format
# from behind VPN if off-campus or on wireless
ssh snumber@gc-prd-hpclogin1.rcs.griffith.edu.au
source /usr/local/bin/s3proxy.sh  #To gain internet access on the login node
module load anaconda3/2021.11
source activate <your environment>   #e.g source activate s123456-tf-cpu
conda install <another-package-1> <another-package-2>
#To see the packages in your Conda environment, run this command
conda list
conda deactivate
exit
For some packages you will need to add the conda-forge channel or even perform the installation using pip as the last step.
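
For example, a conda-forge install followed by a final pip install might look like this (a sketch; the package names in angle brackets are placeholders for whatever you actually need):

No Format
source activate <your environment>
conda install --channel conda-forge <conda-forge-package>
pip install <pip-only-package>   # do any pip installs last, inside the activated environment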

Using Custom Conda Environment

The procedure above is only useful if the base Conda environment, which includes just under three hundred packages, meets your needs. If you need custom packages, you should create a new Conda environment and include jupyter in addition to the other packages that you need. The necessary modifications are shown below:


No Format
# from behind VPN if off-campus or on wireless
ssh snumber@gc-prd-hpclogin1.rcs.griffith.edu.au
module load anaconda3/2020.11
source /usr/local/bin/s3proxy.sh 
conda create --name myenv jupyter <package-2> <package-3> 
source activate myenv
#To see the packages in your Conda environment, run this command
conda list
conda deactivate
exit


...



The packages in the base environment will not be available in your custom environment unless you explicitly list them (e.g., numpy, matplotlib, scipy).
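
For example, to make those packages available alongside Jupyter in a custom environment, list them explicitly at creation time (a sketch; substitute the packages you actually need):

No Format
conda create --name myenv jupyter numpy matplotlib scipy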

Using Widgets

No Format
# from behind VPN if off-campus or on wireless
ssh snumber@gc-prd-hpclogin1.rcs.griffith.edu.au
source /usr/local/bin/s3proxy.sh  #To gain internet access on the login node
module load anaconda3/2021.11
conda create --name widg-env --channel conda-forge matplotlib jupyterlab ipywidgets ipympl
source activate widg-env
#To see the packages in your Conda environment, run this command
conda list
conda deactivate
exit

Learn more about ipywidgets and ipympl


Usage

Do Not Run Jupyter on the Login Nodes

The login or head node of each cluster is a resource that is shared by many users. Running Jupyter on one of these nodes may adversely affect other users. Please use one of the approaches described on this page to carry out your work. 

Internet is Not Available on Compute Nodes. Jupyter sessions will have to run on the compute nodes, which do not have Internet access. This means that you will not be able to download files, clone a repo from GitHub, install packages, etc., on the compute nodes. You will need to perform these operations on the login node (e.g. gc-prd-hpclogin1.rcs.griffith.edu.au) before starting the session. You can run commands which need Internet access on the login nodes (gc-prd-hpclogin1). Any files that you download while on the login node will be available on the compute nodes.
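
For example, any downloads or clones should be done from the login node before you request a compute node (a sketch; the repository and file URLs are placeholders, and it assumes git and wget are available on the login node):

No Format
# on gc-prd-hpclogin1, before starting the Jupyter session
source /usr/local/bin/s3proxy.sh   # enable internet access on the login node
git clone <repository-url>         # e.g. a GitHub repository you want to work on
wget <url-of-a-data-file>          # any data files your notebook needs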

Running on a Compute Node via Interactive PBS

First, from the login node (gc-prd-hpclogin1), request an interactive session on a compute node. 
The command below requests 1 CPU-core with 4 GB of memory for 1 hour:


No Format
qsub -I -q workq -l select=1:ncpus=1:mem=4gb,walltime=1:00:00
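
If your notebook needs more resources, adjust the select statement accordingly, for example (a sketch; choose values that match your workload and the queue limits):

No Format
qsub -I -q workq -l select=1:ncpus=4:mem=16gb,walltime=2:00:00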


Once the node has been allocated, run the hostname command to get the name of the node (e.g. gc-prd-hpcn002).

Within the interactive session, you will be automatically transferred to an available compute node; once the job starts you should no longer be on the login node. If no node is available, you may have to wait until one becomes available.

On the compute node inside the interactive job, do the following:


No Format
module load anaconda3/2021.11

source activate myenv  #e.g source activate s123456-tf-cpu

jupyter-notebook --no-browser --port=8889 --ip=0.0.0.0
# or
jupyter-lab --no-browser --port=8889 --ip=0.0.0.0
#note the last line of the output which will be something like
http://127.0.0.1:8889/?token=61f8a2aa8ad5e469d14d6a1f59baac05a8d9577916bd7eb0
# leave the session running


Next, start a second terminal session on your local machine (e.g., laptop) and set up the tunnel as follows.
In the command below, be sure to replace gc-prd-hpcn002 with the hostname of the node that PBS assigned to you.

No Format
ssh -N -f -L 8889:gc-prd-hpcn002:8889 snumber@gc-prd-hpclogin1.rcs.griffith.edu.au
#e.g: ssh -N -f -L 8890:gc-prd-hpcn002:8890 s123456@gc-prd-hpclogin1.rcs.griffith.edu.au

On a Windows computer, open a Command Prompt terminal by searching for “Command Prompt” in the Windows search bar at the bottom left of your screen. In the command line, type ssh -N -f -L 8889:gc-prd-hpcn002:8889 snumber@gc-prd-hpclogin1.rcs.griffith.edu.au and press Enter.

Note that we selected the Linux port 8889 to connect to the notebook. If you don't specify the port, it will default to port 8888, but sometimes that port is already in use on either the remote machine or the local one (i.e., your laptop). If the port you selected is unavailable, you will get an error message, in which case you should just pick another one. It is best to keep it greater than 1024.
Consider starting with 8888 and incrementing by 1 if it fails, e.g., try 8888, 8889, 8890 and so on. If you are running on a different port, substitute your port number for 8889.
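
Before settling on a port, you can check whether it is already in use on your laptop (a quick check; lsof is available on Linux and macOS):

No Format
lsof -i tcp:8889   # no output means local port 8889 is free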

Lastly, open a web browser on your laptop/desktop and copy and paste the URL from the previous output:

No Format
http://127.0.0.1:8889/?token=61f8a2aa8ad5e469d14d6a1f59baac05a8d9577916bd7eb0
Another example:

http://127.0.0.1:8890/?token=8df8a9a79c00f0813055d48dfc79785c8ff6597cc0b1c456
# leave the session running

Choose "New" then "Python 3" to launch a new notebook. Note that Jupyter may use a port that is different from the one you specified, which is why it is important to copy and paste the URL.
When you are done, terminate the ssh tunnel on your local machine (desktop/laptop) by running lsof -i tcp:8889 to get the PID and then kill -9 <PID> (e.g., kill -9 6010).
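
For example, assuming the tunnel was created on local port 8889, the cleanup might look like this (the PID shown is only illustrative):

No Format
lsof -i tcp:8889   # find the PID of the ssh tunnel listening on local port 8889
kill -9 6010       # replace 6010 with the PID reported by lsof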


Running it as a PBS job


The second way of running Jupyter on the cluster is by submitting a job via PBS that launches Jupyter on the compute node.

In order to do this, we need a submission script like the following, called jupyter.pbs:

No Format
#!/bin/bash
#PBS -N jupyterNotebook
#PBS -m abe
#PBS -M myEmail@griffithuni.edu.au
#PBS -q workq
#PBS -l select=1:ncpus=1:mem=12gb,walltime=5:00:00

# get tunneling info
XDG_RUNTIME_DIR=""
node=$(hostname -s)
user=$(whoami)
cluster="gc-prd-hpclogin1"
port=8889
##choose your own unique port between 8000 and 9999

cd  $PBS_O_WORKDIR
# print tunneling instructions to tunnel.$PBS_JOBID.txt
JJID=`echo $PBS_JOBID|sed 's/\.gc-prd-hpcadm//g'`
echo -e "
Command to create ssh tunnel:
ssh -N -f -L ${port}:${node}:${port} ${user}@${cluster}.rcs.griffith.edu.au

Use a Browser on your local machine to go to:
localhost:${port}  (prefix w/ https:// if using password)" >tunnel.$JJID.txt
# load modules or conda environments here
module load anaconda3/2021.11
source activate snumber-tf-cpu
# Run Jupyter
jupyter-notebook --no-browser --port=${port} --ip=${node} 2>&1 | tee jupnote.$JJID.log


This job launches Jupyter on the allocated compute node and we can access it through an ssh tunnel as we did in the previous section.

First, from the head node, we submit the job to the queue:

qsub jupyter.pbs

Once the job is running, a log file will be created that is called jupnote.<jobid>.log. The log file contains information on how to connect to Jupyter, and the necessary token.
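
For example, once the job starts you can pull the connection details out of the two files that the script writes (a sketch; <jobid> stands for the numeric job ID that qsub reported):

No Format
qstat -u $USER                      # wait until the job shows as running (state R)
cat tunnel.<jobid>.txt              # the ssh tunnel command to run on your laptop
grep 'token=' jupnote.<jobid>.log   # the Jupyter URL, including the token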

In order to connect to Jupyter that is running on the compute node, we set up a tunnel on the local machine as follows:

No Format
ssh -N -f -L 8889:gc-prd-hpcn002:8889 s123456@gc-prd-hpclogin1.rcs.griffith.edu.au

...

Note that we selected the Linux port 8889 to connect to the notebook. If you don't specify the port, it will default to port 8888, but sometimes that port is already in use on either the remote machine or the local one (i.e., your laptop). If the port you selected is unavailable, you will get an error message, in which case you should just pick another one. It is best to keep it greater than 1024. Consider starting with 8888 and incrementing by 1 if it fails, e.g., try 8888, 8889, 8890 and so on. If you are running on a different port, substitute your port number for 8889.

...


Have a look at the file named tunnel.$PBS_JOBID.txt for the exact syntax, and copy and paste that command on your laptop (if running Linux or macOS). On a Windows computer, open a Command Prompt terminal by searching for “Command Prompt” in the Windows search bar at the bottom left of your screen. In the command line, type ssh -N -f -L 8889:gc-prd-hpcn002:8889 snumber@gc-prd-hpclogin1.rcs.griffith.edu.au and press Enter,

where gc-prd-hpcn002 is the name of the node that was allocated in this case.

In order to access Jupyter, have a look at the file named "jupnote.$JJID.log" and copy and paste one of these URLs. As an example:

http://127.0.0.1:8889/?token=

...

http://127.0.0.1:8890/?token=8df8a9a79c00f0813055d48dfc79785c8ff6597cc0b1c456

...



The only packages available to you are those made available by loading the anaconda3 module. If you have created your own Conda environment, you will need to activate it (e.g. source activate myenv).

Aside on ssh

Looking at the man page for ssh, the relevant flags are:

-N  Do not execute a remote command. This is useful for just forwarding ports.

-f  Requests ssh to go to background just before command execution. This is useful if ssh is
going to ask for passwords or passphrases, but the user wants it in the background.

-L  Specifies that the given port on the local (client) host is to be forwarded to the given
host and port on the remote side.
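
For reference, here is the tunnel command used earlier on this page with those three flags in place:

No Format
# forwarding-only ssh session, sent to the background after authentication:
# local port 8889 is forwarded to port 8889 on compute node gc-prd-hpcn002, via the login node
ssh -N -f -L 8889:gc-prd-hpcn002:8889 snumber@gc-prd-hpclogin1.rcs.griffith.edu.au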

Aside on Open Ports

Jupyter will automatically find an open port if you happen to specify one that is occupied. If you wish to do the scanning yourself then run the command below:

netstat -antp | grep :88 | sort
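
If you would rather have the shell find a free port for you, a small loop along these lines will report the first unused port in a range (a sketch; it assumes netstat is available on the machine where you run it):

No Format
# try ports 8888-8899 and print the first one that nothing is listening on
for p in $(seq 8888 8899); do
    if ! netstat -ant | grep -q ":${p} .*LISTEN"; then
        echo "port ${p} appears to be free"
        break
    fi
done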


onDemand Nodes

Internet access is available when running Jupyter on an OnDemand node (currently only n059.rcs.griffith.edu.au). There is no job scheduler on the OnDemand nodes. Be sure to use these nodes in a way that is fair to all users.

No Format
# from behind VPN if off-campus or on wireless
ssh -Y snumber@n059.rcs.griffith.edu.au
module load anaconda3/2020.11
source activate snumber-tf-cpu #e.g source activate s123456-tf-cpu
jupyter-notebook --no-browser --port=8889 --ip=127.0.0.1
# note the last line of the output which will be something like
http://127.0.0.1:8889/?token=61f8a2aa8ad5e469d14d6a1f59baac05a8d9577916bd7eb0
# leave the session running


Then in a new terminal on your laptop,

No Format
ssh -N -f -L localhost:8889:localhost:8889 snumber@n059.rcs.griffith.edu.au

On a Windows computer, open a Command Prompt terminal by searching for “Command Prompt” in the Windows search bar at the bottom left of your screen. In the command line, type ssh -N -f -L localhost:8889:localhost:8889 snumber@n059.rcs.griffith.edu.au and press Enter.

If the above procedure fails then try again using another port number as discussed above.


Lastly, open a web browser and copy and paste the URL from the previous output:

...

When you are done, terminate the ssh tunnel by running lsof -i tcp:8889 to get the PID and then kill -9 <PID> (e.g., kill -9 6010).



Tech support videos: 

Available on request.

/wiki/spaces/AS/pages/153550880 

Reference

1. https://jupyter.org/try
2. https://researchcomputing.princeton.edu/support/knowledge-base/jupyter
3. Jupyter on the gpu visualisation node
4. https://blogs.iu.edu/ncgas/2021/05/07/tunneling-a-jupyter-notebook-from-an-hpc/