(under development)
Introduction
Jupyter Notebook is the original web application for creating and sharing computational documents. It offers a simple, streamlined, document-centric experience: an interactive computational environment in which you can combine code execution, rich text, mathematics, plots and rich media.
We have adapted these instructions from the Princeton Research Computing HPC wiki page. We acknowledge it as the source and recommend it as the page to go to for running Jupyter notebooks on HPC.
Installation
Using Conda Environments: create a Conda environment on the login node
source /usr/local/bin/s3proxy.sh   # to gain internet access on the login node
module load anaconda3/2021.11
conda create --name snumber-tf-cpu ipykernel tensorflow pandas matplotlib
# e.g: conda create --name s123456-tf-cpu ipykernel tensorflow pandas matplotlib
exit
Note that the ipykernel package must be installed in the environment for it to be usable as a notebook kernel.
If additional packages are needed after this install, you may log into the login node and do the following:
module load anaconda3/2021.11
source activate <your environment>   # e.g. source activate s123456-tf-cpu
conda install <another-package-1> <another-package-2>
conda deactivate
exit

For some packages you will need to add the conda-forge channel, or even perform the installation using pip as the last step.
Using Widgets
conda create --name widg-env --channel conda-forge matplotlib jupyterlab ipywidgets ipympl
Usage
Do Not Run Jupyter on the Login Nodes
The login or head node of each cluster is a resource that is shared by many users. Running Jupyter on one of these nodes may adversely affect other users. Please use one of the approaches described on this page to carry out your work.
Internet is Not Available on Compute Nodes. Jupyter sessions will have to run on the compute nodes which do not have Internet access. This means that you will not be able to download files, clone a repo from GitHub, install packages, etc. You will need to perform these operations on the login node before starting the session. You can run commands which need Internet access on the login nodes (gc-prd-hpclogin1). Any files that you download while on the login node will be available on the compute nodes.
# from behind the VPN if off-campus or on wireless
ssh snumber@n059.rcs.griffith.edu.au
module load anaconda3/2021.11
source activate snumber-tf-cpu   # e.g. source activate s123456-tf-cpu
jupyter-notebook --no-browser --port=8889 --ip=127.0.0.1
# note the last line of the output, which will be something like
# http://127.0.0.1:8889/?token=61f8a2aa8ad5e469d14d6a1f59baac05a8d9577916bd7eb0
# leave the session running
Then in a new terminal on your laptop,
ssh -N -f -L localhost:8889:localhost:8889 snumber@n059.rcs.griffith.edu.au
Lastly, open a web browser and copy and paste the URL from the previous output:
http://127.0.0.1:8889/?token=61f8a2aa8ad5e469d14d6a1f59baac05a8d9577916bd7eb0
Choose "New" then "Python 3" to launch a new notebook. Note that Jupyter may use a port that is different from the one you specified. This is why it is important to copy and paste the URL.
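Since the port in the printed URL may differ from the one you requested, it can help to extract the actual port and token programmatically when scripting the connection. A minimal sketch using only the Python standard library (the token shown is the illustrative one from above):

```python
from urllib.parse import urlparse, parse_qs

# Example URL as printed on the last line of jupyter-notebook's output
url = "http://127.0.0.1:8889/?token=61f8a2aa8ad5e469d14d6a1f59baac05a8d9577916bd7eb0"

parts = urlparse(url)
token = parse_qs(parts.query)["token"][0]

print(parts.port)  # the port Jupyter actually chose; use this for the tunnel
print(token)
```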
When you are done, terminate the ssh tunnel by running lsof -i tcp:8889 to get the PID and then kill -9 <PID> (e.g., kill -9 6010).
Custom Conda Environment
The procedure above is only useful if you need just the base Conda environment, which includes just under three hundred packages. If you need custom packages, you should create a new Conda environment and include jupyter in addition to the other packages that you need. The necessary modifications are shown below:
ssh snumber@n059.rcs.griffith.edu.au
module load anaconda3/2021.11
source /usr/local/bin/s3proxy.sh
conda create --name myenv jupyter <package-2> <package-3>
conda activate myenv
jupyter-notebook --no-browser --port=8889 --ip=127.0.0.1
The packages in the base environment will not be available in your custom environment unless you explicitly list them (e.g., numpy, matplotlib, scipy).
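Before relying on a custom environment in a notebook, you can check whether a given package actually made it in by probing for it from Python. A minimal sketch using only the standard library (the function name `has_package` is ours, not part of any API):

```python
import importlib.util

def has_package(name: str) -> bool:
    """Return True if `name` is importable in the current environment."""
    return importlib.util.find_spec(name) is not None

# 'json' ships with Python itself; 'numpy' is only present if you listed it
# when creating the environment
print(has_package("json"))
print(has_package("numpy"))
```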
Running on a Compute Node via Interactive PBS
First, from the head node (gc-prd-hpclogin1), request an interactive session on a compute node. The command below requests 1 CPU-core with 4 GB of memory for 1 hour:
qsub -I -q workq -l select=1:ncpus=1:mem=4gb,walltime=1:00:00
Once the node has been allocated, run the hostname command to get the name of the node (e.g. gc-prd-hpcn002). If you are inside the job, you should no longer be on the login node.
On the compute node:
module load anaconda3/2021.11
source activate myenv #e.g source activate s123456-tf-cpu
jupyter-notebook --no-browser --port=8889 --ip=0.0.0.0
or
jupyter-lab --no-browser --port=8889 --ip=0.0.0.0
# note the last line of the output, which will be something like
# http://127.0.0.1:8889/?token=61f8a2aa8ad5e469d14d6a1f59baac05a8d9577916bd7eb0
# leave the session running
Next, start a second terminal session on your local machine (e.g., laptop) and set up the tunnel as follows:
ssh -N -f -L 8889:gc-prd-hpcn002:8889 snumber@gc-prd-hpclogin1.rcs.griffith.edu.au
e.g: ssh -N -f -L 8890:gc-prd-hpcn002:8890 s123456@gc-prd-hpclogin1.rcs.griffith.edu.au
In the command above, be sure to replace gc-prd-hpcn002 with the hostname of the node that PBS assigned to you.
Note that we selected port 8889 to connect to the notebook. If you don't specify a port, it will default to 8888, but sometimes this port may already be in use, either on the remote machine or the local one (i.e., your laptop). If the port you selected is unavailable, you will get an error message, in which case you should just pick another one. It is best to keep it greater than 1024. Consider starting with 8888 and incrementing by 1 on failure, e.g., try 8888, 8889, 8890 and so on. If you are running on a different port, substitute your port number for 8889.
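The trial-and-error above can also be automated by trying to bind a socket to each candidate port until one succeeds. A minimal sketch using only the Python standard library (run it on whichever machine will host the port, before launching Jupyter or the tunnel; the function name `find_free_port` is ours):

```python
import socket

def find_free_port(start: int = 8888, end: int = 8899) -> int:
    """Return the first port in [start, end] that nothing is bound to."""
    for port in range(start, end + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))  # fails with OSError if in use
            except OSError:
                continue  # port taken; try the next one
            return port
    raise RuntimeError(f"no free port found in {start}-{end}")

print(find_free_port())
```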
Lastly, open a web browser and copy and paste the URL from the previous output:
http://127.0.0.1:8889/?token=61f8a2aa8ad5e469d14d6a1f59baac05a8d9577916bd7eb0
Another example:
http://127.0.0.1:8890/?token=8df8a9a79c00f0813055d48dfc79785c8ff6597cc0b1c456
Choose "New" then "Python 3" to launch a new notebook. Note that Jupyter may use a port that is different from the one you specified. This is why it is important to copy and paste the URL. When you are done, terminate the ssh tunnel on your local machine by running lsof -i tcp:8889 to get the PID and then kill -9 <PID> (e.g., kill -9 6010).
Aside on ssh
Looking at the man page for ssh, the relevant flags are:
-N   Do not execute a remote command. This is useful for just forwarding ports.
-f   Requests ssh to go to the background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background.
-L   Specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side.
Aside on Open Ports
Jupyter will automatically find an open port if you happen to specify one that is occupied. If you wish to do the scanning yourself then run the command below:
netstat -antp | grep :88 | sort
OnDemand Nodes
Internet access is available when running Jupyter on an OnDemand node. There is no job scheduler on the OnDemand nodes, so be sure to use them in a way that is fair to all users.
Reference
1. https://jupyter.org/try
2. https://researchcomputing.princeton.edu/support/knowledge-base/jupyter