
Jupyter via Port-Forwarding

You can easily run Jupyter Notebook and Jupyter Lab instances on a dedicated partition of our CPU cluster (hpda1_jupyter). There, you can start Jupyter Lab and Notebook sessions with a maximum of 4 physical cores and 48 hours of runtime. GPU users are allowed to run Jupyter Notebooks and Labs on the GPU cluster (hpda2_compute_gpu).

Portal

For standard use cases, consider using the terrabyte Portal, which lets you launch Jupyter Labs and Notebooks from an intuitive GUI.

Login node

Please do not execute Jupyter jobs on the login node! If you violate the usage restrictions on the login node, your processes will be forcibly removed, and your account may be blocked from further access to the cluster.
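If you want a quick sanity check before launching anything, a small guard function like the one below can help. This is only a sketch: the `*login*` hostname pattern is an assumption about the naming scheme, so adjust it to the actual login node names.

```shell
#!/usr/bin/env bash
# Hypothetical guard: warn before starting Jupyter on a login node.
# ASSUMPTION: login node hostnames contain the substring "login";
# adjust the pattern to the cluster's real naming scheme.
is_login_node() {
  case "$1" in
    *login*) return 0 ;;   # looks like a login node
    *)       return 1 ;;   # anything else, e.g. a worker node
  esac
}

if is_login_node "$(hostname)"; then
  echo "You are on a login node - do NOT start Jupyter here." >&2
else
  echo "Not a login node - OK to start Jupyter."
fi
```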

Here we provide step-by-step instructions on how to set up and access a Jupyter Lab environment on our cluster nodes.

  1. Log into the login node

  2. Install the required packages into a local conda environment

    # load the miniconda module
    module add miniconda3

    # create the environment
    conda create -n myenv python=3.9

    # install ipython + Jupyter Lab
    conda install -n myenv ipython jupyterlab

    # install other packages as needed (here we just give some examples)
    conda install -n myenv numpy dask zarr
    conda install -n myenv -c conda-forge stackstac
  3. Start an interactive bash session on a worker node. Request the resources you need; the resource limits of the respective cluster partition apply.

    1. Example for the CPU-Cluster:

      # Here we request 4 physical cores (i.e. 8 hyperthreads) with 45 GB RAM for 8 hours on the Jupyter CPU cluster.
      # If --mem is not set, you will automatically get 12.5 GB per physical core.
      srun --cluster=hpda1 --partition=hpda1_jupyter --nodes=1 --ntasks-per-node=1 --cpus-per-task=8 --mem=45gb --time=08:00:00 --pty bash -i
    2. Example for the GPU-Cluster:

      # Here we request 1 GPU for 8 hours on the GPU compute cluster.
      # If --cpus-per-task and --mem are not set, you will automatically get 6 physical CPU cores and 40 GB RAM per physical core.
      srun --cluster=hpda2 --partition=hpda2_compute_gpu --nodes=1 --ntasks-per-node=1 --gres=gpu:1 --time=08:00:00 --pty bash -i
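Once the interactive shell starts, you can double-check what Slurm actually granted you. The snippet below only prints standard Slurm environment variables, which are set inside an allocation; outside a Slurm job they are simply reported as unset.

```shell
# Print the resources Slurm granted to this interactive job.
# Inside an allocation these variables are exported by Slurm;
# outside a job each one is reported as "unset".
for var in SLURM_JOB_ID SLURM_CPUS_PER_TASK SLURM_MEM_PER_NODE SLURM_JOB_NODELIST; do
  echo "$var=$(printenv "$var" || echo unset)"
done
```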
  4. On the worker node, start Jupyter Lab from your own conda environment.

    # load the miniconda module
    module add miniconda3

    # activate the conda environment
    conda activate myenv

    # run Jupyter Lab (choose any free port you like)
    # --ip=localhost, as often shown in examples, does not work on a worker node
    jupyter lab --no-browser --ip="0.0.0.0" --port 8888
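If port 8888 is already taken on the worker node, Jupyter Lab will fail to bind. One way around this (a sketch, not the only option) is to let the operating system pick a currently free port and pass it to --port; Python is available in the environment anyway.

```shell
# Ask the OS for a currently free TCP port by binding to port 0.
PORT=$(python3 -c 'import socket; s = socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
echo "free port: $PORT"
# then start Jupyter Lab with it (and forward the same port in step 5):
#   jupyter lab --no-browser --ip="0.0.0.0" --port "$PORT"
```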
  5. Access the Jupyter Lab from your local machine via SSH port forwarding (e.g. through a local terminal in MobaXterm). Replace hpdar03c02s06 with the worker node you are on and insert your terrabyte user ID:

    ssh -L 8888:hpdar03c02s06:8888 <user-id>@login.terrabyte.lrz.de
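If you forward ports regularly, you can put the tunnel into your local ~/.ssh/config instead of retyping it. This is an optional sketch: the Host alias is illustrative, and the LocalForward line assumes the same worker node and port as in the example above.

```
# ~/.ssh/config on your local machine (illustrative; adjust node and port)
Host terrabyte-jupyter
    HostName login.terrabyte.lrz.de
    User <user-id>
    LocalForward 8888 hpdar03c02s06:8888
```

With this in place, `ssh terrabyte-jupyter` opens the tunnel in one command.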
  6. From your local browser, access the Jupyter Lab via http://localhost:8888/lab. When asked for authentication, use the token that Jupyter Lab printed on the worker node at startup.