...
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --time=00:05:00
#SBATCH --job-name=jupyter-notebook
#SBATCH --output=jupyter-notebook-%j.log
# get tunneling info
XDG_RUNTIME_DIR=""
node=$(hostname -s)
user=$(whoami)
cluster="tigercpu"
port=8889
# print tunneling instructions to the jupyter-notebook log file
echo -e "
Command to create ssh tunnel:
ssh -N -f -L ${port}:${node}:${port} ${user}@${cluster}.princeton.edu
Use a Browser on your local machine to go to:
localhost:${port} (prefix w/ https:// if using password)
"
# load modules or conda environments here
module load anaconda3/2020.11
source activate myenv
# Run Jupyter
jupyter-notebook --no-browser --port=${port} --ip=${node}
This job launches Jupyter on the allocated compute node, and we can access it through an ssh tunnel, as we did in the previous section.
First, from the head node, we submit the job to the Slurm queue:
sbatch jupyter.slurm
where jupyter.slurm is the file containing the script above.
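To confirm that the job has started and to see which compute node it was allocated, you can query the scheduler from the head node. This is a quick check assuming the standard Slurm client tools are available there; the NODELIST column of the output shows the allocated node:
# list your jobs with the --job-name set in the script
squeue -u $(whoami) --name=jupyter-notebook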
Once the job is running, a log file called jupyter-notebook-<jobid>.log will be created. This log file contains the instructions for connecting to Jupyter as well as the necessary access token.
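For example, with <jobid> replaced by the job ID reported at submission time, you can print the log or follow it while Jupyter starts up:
# show the tunneling instructions and the Jupyter token
cat jupyter-notebook-<jobid>.log
# or watch the log as it is written
tail -f jupyter-notebook-<jobid>.log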
In order to connect to the Jupyter instance running on the compute node, we set up an ssh tunnel on the local machine as follows:
ssh -N -f -L 8889:gc-prd-hpcn002:8889 <YourNetID>@tigercpu.princeton.edu
where gc-prd-hpcn002 is the name of the compute node that was allocated in this case. The exact command for your job, with the correct node name filled in, is printed in the log file by the script above.
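Because the tunnel is started with -f, ssh keeps running in the background after the command returns. When you are finished, one way to stop it (a sketch, assuming no other ssh forward on port 8889 is running on your local machine) is:
# stop the background ssh tunnel created above
pkill -f "ssh -N -f -L 8889:"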
In order to access Jupyter, navigate to http://localhost:8889/ in your browser. Note that, following the directions on this page, the only packages available to you are those provided by loading the anaconda3 module. If you have created your own Conda environment, you will need to activate it in the job script (e.g., source activate myenv).
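If you have not created such an environment yet, a minimal one-time setup on the head node might look like the following. This is only a sketch: the environment name myenv and the jupyter package are illustrative rather than prescribed by this page.
# create a personal Conda environment containing Jupyter (one-time setup)
module load anaconda3/2020.11
conda create --name myenv jupyter -y
The job script above then activates the environment with source activate myenv before launching the notebook.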
Aside on ssh
Looking at the man page for ssh, the relevant flags are:
...