JupyterHub

JupyterHub is the best way to serve Jupyter notebooks for multiple users. It can be used in a class of students, a corporate data science group, or a scientific research group. It is a multi-user Hub that spawns, manages, and proxies multiple instances of the single-user Jupyter notebook server.

To make life easier, JupyterHub has distributions. Be sure to take a look at them before continuing with configuring the general-purpose JupyterHub system from scratch. Today, there are two main cases:

  1. If you need a simple setup for a small number of users (0-100) on a single server, take a look at The Littlest JupyterHub distribution.

  2. If you need to serve more users with a dynamic number of servers on a cloud, take a look at Zero to JupyterHub with Kubernetes.

Four subsystems make up JupyterHub:

  • a Hub (tornado process) that is the heart of JupyterHub

  • a configurable http proxy (node-http-proxy) that receives the requests from the client’s browser

  • multiple single-user Jupyter notebook servers (Python/IPython/tornado) that are monitored by Spawners

  • an authentication class that manages how users can access the system

Besides these central pieces, you can add optional configurations through a config.py file and manage users' kernels from an admin panel. A simplification of the whole system can be seen in the figure below:

JupyterHub subsystems

JupyterHub performs the following functions:

  • The Hub launches a proxy

  • The proxy forwards all requests to the Hub by default

  • The Hub handles user login and spawns single-user servers on demand

  • The Hub configures the proxy to forward URL prefixes to the single-user notebook servers

For convenient administration of the Hub, its users, and services, JupyterHub also provides a REST API.
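For illustration, here is a minimal sketch of calling that REST API from Python with the requests library, assuming the Hub API is listening at its default address and that you have already generated an API token (both values below are placeholders):

import requests

api_url = 'http://127.0.0.1:8081/hub/api'  # default Hub API address (assumed)
token = 'abc123...'                        # placeholder API token

# List the users known to the Hub
r = requests.get(api_url + '/users', headers={'Authorization': 'token ' + token})
r.raise_for_status()
for user in r.json():
    print(user['name'])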

The JupyterHub team and Project Jupyter value our community, and JupyterHub follows the Jupyter Community Guides.


Distributions

A JupyterHub distribution is tailored towards a particular set of use cases. These are generally easier to set up than setting up JupyterHub from scratch, assuming they fit your use case.

The two popular ones are The Littlest JupyterHub and Zero to JupyterHub with Kubernetes.

Installation Guide

Installation

These sections cover how to get up and running with JupyterHub, including the basics of the tools needed to deploy it and how to get it running on your own infrastructure.

Quickstart
Prerequisites

Before installing JupyterHub, you will need:

  • a Linux/Unix based system

  • Python 3.6 or greater. An understanding of using pip or conda for installing Python packages is helpful.

  • nodejs/npm. Install nodejs/npm, using your operating system’s package manager.

    • If you are using conda, the nodejs and npm dependencies will be installed for you by conda.

    • If you are using pip, install a recent version of nodejs/npm. For example, install it on Linux (Debian/Ubuntu) using:

      sudo apt-get install nodejs npm
      

      nodesource is a great resource to get more recent versions of the nodejs runtime, if your system package manager only has an old version of Node.js (e.g. 10 or older).

  • A pluggable authentication module (PAM) to use the default Authenticator. PAM is often available by default on most distributions; if this is not the case, it can be installed using the operating system’s package manager.

  • TLS certificate and key for HTTPS communication

  • Domain name

Before running the single-user notebook servers (which may be on the same system as the Hub or not), you will need:

  • Jupyter Notebook version 4 or greater

Installation

JupyterHub can be installed with pip (and the proxy with npm) or conda:

pip, npm:

python3 -m pip install jupyterhub
npm install -g configurable-http-proxy
python3 -m pip install jupyterlab notebook  # needed if running the notebook servers in the same environment

conda (one command installs jupyterhub and proxy):

conda install -c conda-forge jupyterhub  # installs jupyterhub and proxy
conda install jupyterlab notebook  # needed if running the notebook servers in the same environment

Test your installation. If installed, these commands should return the packages’ help contents:

jupyterhub -h
configurable-http-proxy -h
Start the Hub server

To start the Hub server, run the command:

jupyterhub

Visit http://localhost:8000 in your browser, and sign in with your unix credentials.

To allow multiple users to sign in to the Hub server, you must start jupyterhub as a privileged user, such as root:

sudo jupyterhub

The wiki describes how to run the server as a less privileged user. This requires additional configuration of the system.

Using Docker

Important

We highly recommend following the Zero to JupyterHub tutorial for installing JupyterHub.

Alternate installation using Docker

A ready to go docker image gives a straightforward deployment of JupyterHub.

Note

This jupyterhub/jupyterhub docker image is only an image for running the Hub service itself. It does not provide the other Jupyter components, such as Notebook installation, which are needed by the single-user servers. To run the single-user servers, which may be on the same system as the Hub or not, Jupyter Notebook version 4 or greater must be installed.

Starting JupyterHub with docker

The JupyterHub docker image can be started with the following command:

docker run -d -p 8000:8000 --name jupyterhub jupyterhub/jupyterhub jupyterhub

This command will create a container named jupyterhub that you can stop and resume with docker stop/start.

The Hub service will be listening on all interfaces at port 8000, which makes this a good choice for testing JupyterHub on your desktop or laptop.

If you want to run docker on a computer that has a public IP, then you should (as in MUST) secure it with SSL by adding SSL options to your docker configuration or by using an SSL-enabled proxy.

Mounting volumes will allow you to store data outside the docker image (host system) so it will be persistent, even when you start a new image.

The command docker exec -it jupyterhub bash will spawn a root shell in your docker container. You can use the root shell to create system users in the container. These accounts will be used for authentication in JupyterHub’s default configuration.

Installation Basics
Platform support

JupyterHub is supported on Linux/Unix based systems. To use JupyterHub, you need a Unix server (typically Linux) running somewhere that is accessible to your team on the network. The JupyterHub server can be on an internal network at your organization, or it can run on the public internet (in which case, take care with the Hub’s security).

JupyterHub officially does not support Windows. You may be able to use JupyterHub on Windows if you use a Spawner and Authenticator that work on Windows, but the JupyterHub defaults will not. Bugs reported on Windows will not be accepted, and the test suite will not run on Windows. Small patches that fix minor Windows compatibility issues (such as basic installation) may be accepted, however. For Windows-based systems, we would recommend running JupyterHub in a docker container or Linux VM.

Additional Reference: Tornado’s documentation on Windows platform support

Planning your installation

Prior to beginning installation, it’s helpful to consider some of the following:

  • deployment system (bare metal, Docker)

  • Authentication (PAM, OAuth, etc.)

  • Spawner of single-user notebook servers (Docker, Batch, etc.)

  • Services (nbgrader, etc.)

  • JupyterHub database (default SQLite; traditional RDBMS such as PostgreSQL, MySQL, or other databases supported by SQLAlchemy)

Folders and File Locations

It is recommended to put all of the files used by JupyterHub into standard UNIX filesystem locations.

  • /srv/jupyterhub for all security and runtime files

  • /etc/jupyterhub for all configuration files

  • /var/log for log files

Getting Started


This section covers how to configure and customize JupyterHub for your needs. It contains information about authentication, networking, security, and other topics that are relevant to individuals or organizations deploying their own JupyterHub.

Configuration Basics

This section contains basic information about configuring settings for a JupyterHub deployment. The Technical Reference documentation provides additional details.

This section will help you learn how to:

  • generate a default configuration file, jupyterhub_config.py

  • start with a specific configuration file

  • configure JupyterHub using command line options

  • find information and examples for some common deployments

Generate a default config file

On startup, JupyterHub will look by default for a configuration file, jupyterhub_config.py, in the current working directory.

To generate a default config file, jupyterhub_config.py:

jupyterhub --generate-config

This default jupyterhub_config.py file contains comments and guidance for all configuration variables and their default values. We recommend storing configuration files in the standard UNIX filesystem location, i.e. /etc/jupyterhub.

Start with a specific config file

You can load a specific config file and start JupyterHub using:

jupyterhub -f /path/to/jupyterhub_config.py

If you have stored your configuration file in the recommended UNIX filesystem location, /etc/jupyterhub, the following command will start JupyterHub using the configuration file:

jupyterhub -f /etc/jupyterhub/jupyterhub_config.py

The IPython documentation provides additional information on the config system that Jupyter uses.

Configure using command line options

To display all command line options that are available for configuration:

    jupyterhub --help-all

Configuration using the command line options is done when launching JupyterHub. For example, to start JupyterHub on 10.0.1.2:443 with https, you would enter:

    jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert

All configurable options may technically be set on the command line, though some are inconvenient to type. To set a particular configuration parameter, c.Class.trait, you would use the command line option, --Class.trait, when starting JupyterHub. For example, to configure the c.Spawner.notebook_dir trait from the command line, use the --Spawner.notebook_dir option:

jupyterhub --Spawner.notebook_dir='~/assignments'
Configure for various deployment environments

The default authentication and process spawning mechanisms can be replaced, and specific authenticators and spawners can be set in the configuration file. This enables JupyterHub to be used with a variety of authentication methods or process control and deployment environments. Some examples, meant as illustration, are using OAuth instead of PAM for authentication, or spawning single-user servers in Docker containers.

Run the proxy separately

This is not strictly necessary, but useful in many cases. If you use a custom proxy (e.g. Traefik), this is also not needed.

Connections to user servers go through the proxy, and not the hub itself. If the proxy stays running when the hub restarts (for maintenance, re-configuration, etc.), then user connections are not interrupted. For simplicity, by default the hub starts the proxy automatically, so if the hub restarts, the proxy restarts, and user connections are interrupted. It is easy to run the proxy separately, for information see the separate proxy page.
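If you do run the default configurable-http-proxy yourself, a minimal jupyterhub_config.py sketch (the values below are placeholders) tells the Hub not to manage the proxy process and where to reach its API:

c.ConfigurableHTTPProxy.should_start = False                # the Hub will not start/stop the proxy
c.ConfigurableHTTPProxy.api_url = 'http://127.0.0.1:8001'   # where the separately-run proxy's API listens
c.ConfigurableHTTPProxy.auth_token = 'abc123...'            # same token given to the proxy process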

Networking basics

This section will help you with basic proxy and network configuration to:

  • set the proxy’s IP address and port

  • set the proxy’s REST API URL

  • configure the Hub if the Proxy or Spawners are remote or isolated

  • set the hub_connect_ip which services will use to communicate with the hub

Set the Proxy’s IP address and port

The Proxy’s main IP address setting determines where JupyterHub is available to users. By default, JupyterHub is configured to be available on all network interfaces ('') on port 8000. Note: Use of '*' is discouraged for IP configuration; instead, use of '0.0.0.0' is preferred.

Changing the Proxy’s main IP address and port can be done with the following JupyterHub command line options:

jupyterhub --ip=192.168.1.2 --port=443

Or by placing the following lines in a configuration file, jupyterhub_config.py:

c.JupyterHub.ip = '192.168.1.2'
c.JupyterHub.port = 443

Port 443 is used in the examples since 443 is the default port for SSL/HTTPS.

Configuring only the main IP and port of JupyterHub should be sufficient for most deployments of JupyterHub. However, more customized scenarios may need additional networking details to be configured.

Note that c.JupyterHub.ip and c.JupyterHub.port are single values, not tuples or lists – JupyterHub listens to only a single IP address and port.

Set the Proxy’s REST API communication URL (optional)

The Hub service talks to the proxy via a REST API on a secondary port. By default, this REST API listens on port 8001 of localhost only. The API URL can be configured separately to override the default settings.

Set api_url

The URL to access the API, c.ConfigurableHTTPProxy.api_url, is configurable. An example entry to set the proxy’s API URL in jupyterhub_config.py is:

c.ConfigurableHTTPProxy.api_url = 'http://10.0.1.4:5432'
proxy_api_ip and proxy_api_port (Deprecated in 0.8)

If running the Proxy separate from the Hub, configure the REST API communication IP address and port by adding this to the jupyterhub_config.py file:

# ideally a private network address
c.JupyterHub.proxy_api_ip = '10.0.1.4'
c.JupyterHub.proxy_api_port = 5432

We recommend using the proxy’s api_url setting instead of the deprecated settings, proxy_api_ip and proxy_api_port.

Configure the Hub if the Proxy or Spawners are remote or isolated

The Hub service listens only on localhost (port 8081) by default. The Hub needs to be accessible from both the proxy and all Spawners. When spawning local servers, an IP address setting of localhost is fine.

If either the Proxy or (more likely) the Spawners will be remote or isolated in containers, the Hub must listen on an IP that is accessible to them:

c.JupyterHub.hub_ip = '10.0.1.4'
c.JupyterHub.hub_port = 54321

Added in 0.8: The c.JupyterHub.hub_connect_ip setting is the IP address or hostname that other services should use to connect to the Hub. A common configuration for, e.g. docker, is:

c.JupyterHub.hub_ip = '0.0.0.0'  # listen on all interfaces
c.JupyterHub.hub_connect_ip = '10.0.1.4'  # IP as seen on the docker network. Can also be a hostname.
Adjusting the hub’s URL

The hub will most commonly be running on a hostname of its own. If it is not – for example, if the hub is being reverse-proxied and being exposed at a URL such as https://proxy.example.org/jupyter/ – then you will need to tell JupyterHub the base URL of the service. In such a case, it is both necessary and sufficient to set c.JupyterHub.base_url = '/jupyter/' in the configuration.

Security settings

Important

You should not run JupyterHub without SSL encryption on a public network.

Security is the most important aspect of configuring Jupyter. Three configuration settings are the main aspects of security configuration:

  1. SSL encryption (to enable HTTPS)

  2. Cookie secret (a key for encrypting browser cookies)

  3. Proxy authentication token (used for the Hub and other services to authenticate to the Proxy)

The Hub hashes all secrets (e.g., auth tokens) before storing them in its database. A loss of control over read-access to the database should have minimal impact on your deployment; if your database has been compromised, it is still a good idea to revoke existing tokens.

Enabling SSL encryption

Since JupyterHub includes authentication and allows arbitrary code execution, you should not run it without SSL (HTTPS).

Using an SSL certificate

This will require you to obtain an official, trusted SSL certificate or create a self-signed certificate. Once you have obtained and installed a key and certificate you need to specify their locations in the jupyterhub_config.py configuration file as follows:

c.JupyterHub.ssl_key = '/path/to/my.key'
c.JupyterHub.ssl_cert = '/path/to/my.cert'

Some cert files also contain the key, in which case only the cert is needed. It is important that these files be put in a secure location on your server, where they are not readable by regular users.

If you are using a chain certificate, see also chained certificate for SSL in the JupyterHub Troubleshooting FAQ.

Using letsencrypt

It is also possible to use letsencrypt to obtain a free, trusted SSL certificate. If you run letsencrypt using the default options, the needed configuration is (replace mydomain.tld with your fully qualified domain name):

c.JupyterHub.ssl_key = '/etc/letsencrypt/live/{mydomain.tld}/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/{mydomain.tld}/fullchain.pem'

If the fully qualified domain name (FQDN) is example.com, the following would be the needed configuration:

c.JupyterHub.ssl_key = '/etc/letsencrypt/live/example.com/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/example.com/fullchain.pem'
If SSL termination happens outside of the Hub

In certain cases, for example if the hub is running behind a reverse proxy, and SSL termination is being provided by NGINX, it is reasonable to run the hub without SSL.

To achieve this, simply omit the configuration settings c.JupyterHub.ssl_key and c.JupyterHub.ssl_cert (setting them to None does not have the same effect, and is an error).

Proxy authentication token

The Hub authenticates its requests to the Proxy using a secret token that the Hub and Proxy agree upon. Note that this applies to the default ConfigurableHTTPProxy implementation. Not all proxy implementations use an auth token.

The value of this token should be a random string (for example, generated by openssl rand -hex 32). You can store it in the configuration file or in an environment variable.

Generating and storing token in the configuration file

You can set the value in the configuration file, jupyterhub_config.py:

c.ConfigurableHTTPProxy.api_token = 'abc123...' # any random string
Generating and storing as an environment variable

You can pass this value of the proxy authentication token to the Hub and Proxy using the CONFIGPROXY_AUTH_TOKEN environment variable:

export CONFIGPROXY_AUTH_TOKEN=$(openssl rand -hex 32)

This environment variable needs to be visible to the Hub and Proxy.

Default if token is not set

If you don’t set the Proxy authentication token, the Hub will generate a random key itself, which means that any time you restart the Hub you must also restart the Proxy. If the proxy is a subprocess of the Hub, this should happen automatically (this is the default configuration).

Cookies used by JupyterHub authentication

The following cookies are used by the Hub for handling user authentication.

This section was created based on this post from Discourse.

jupyterhub-hub-login

This is the login token used when visiting Hub-served pages that are protected by authentication such as the main home, the spawn form, etc. If this cookie is set, then the user is logged in.

Resetting the Hub cookie secret effectively revokes this cookie.

This cookie is restricted to the path /hub/.

jupyterhub-user-<username>

This is the cookie used for authenticating with a single-user server. It is set by the single-user server after OAuth with the Hub.

Effectively the same as jupyterhub-hub-login, but for the single-user server instead of the Hub. It contains an OAuth access token, which is checked with the Hub to authenticate the browser.

Each OAuth access token is associated with a session id (see jupyterhub-session-id section below).

To avoid hitting the Hub on every request, the authentication response is cached. To avoid a stale cache, the cache key is composed of both the token and the session id.

Resetting the Hub cookie secret effectively revokes this cookie.

This cookie is restricted to the path /user/<username>, so that only the user’s server receives it.

jupyterhub-session-id

This is a random string, meaningless in itself, and the only cookie shared by the Hub and single-user servers.

Its sole purpose is to coordinate logout of the multiple OAuth cookies.

This cookie is set to / so all endpoints can receive it, or clear it, etc.

jupyterhub-user-<username>-oauth-state

A short-lived cookie, used solely to store and validate OAuth state. It is only set while OAuth between the single-user server and the Hub is processing.

If you use your browser development tools, you should see this cookie for a very brief moment before you are logged in, with an expiration date shorter than jupyterhub-hub-login or jupyterhub-user-<username>.

This cookie should not exist after you have successfully logged in.

This cookie is restricted to the path /user/<username>, so that only the user’s server receives it.

Authentication and User Basics

The default Authenticator uses PAM to authenticate system users with their username and password. With the default Authenticator, any user with an account and password on the system will be allowed to log in.

Create a set of allowed users

You can restrict which users are allowed to log in with a set, Authenticator.allowed_users:

c.Authenticator.allowed_users = {'mal', 'zoe', 'inara', 'kaylee'}

Users in the allowed_users set are added to the Hub database when the Hub is started.

Warning

If this configuration value is not set, then all authenticated users will be allowed into your hub.

Configure admins (admin_users)

Note

As of JupyterHub 2.0, the full permissions of admin_users should not be required. Instead, you can assign roles to users or groups with only the scopes they require.

Admin users of JupyterHub, admin_users, can add and remove users from the user allowed_users set. admin_users can take actions on other users’ behalf, such as stopping and restarting their servers.

A set of initial admin users, admin_users, can be configured as follows:

c.Authenticator.admin_users = {'mal', 'zoe'}

Users in the admin set are automatically added to the user allowed_users set, if they are not already present.

Each authenticator may have different ways of determining whether a user is an administrator. By default, JupyterHub uses the PAMAuthenticator, which provides the admin_groups option and can set administrator status based on a user group. For example, we can let any user in the wheel group be an admin:

c.PAMAuthenticator.admin_groups = {'wheel'}
Give admin access to other users’ notebook servers (admin_access)

Since the default JupyterHub.admin_access setting is False, the admins do not have permission to log in to the single user notebook servers owned by other users. If JupyterHub.admin_access is set to True, then admins have permission to log in as other users on their respective machines, for debugging. As a courtesy, you should make sure your users know if admin_access is enabled.

Add or remove users from the Hub

Users can be added to and removed from the Hub via either the admin panel or the REST API. When a user is added, the user will be automatically added to the allowed_users set and database. Restarting the Hub will not require manually updating the allowed_users set in your config file, as the users will be loaded from the database.
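As a sketch of the REST API route, the following Python snippet adds and then removes a user, assuming the default Hub API address and an API token with sufficient (admin) permissions; both values are placeholders:

import requests

api_url = 'http://127.0.0.1:8081/hub/api'       # default Hub API address (assumed)
headers = {'Authorization': 'token abc123...'}  # placeholder admin API token

requests.post(api_url + '/users/newuser', headers=headers).raise_for_status()    # add a user
requests.delete(api_url + '/users/newuser', headers=headers).raise_for_status()  # remove the same user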

After starting the Hub once, it is not sufficient to remove a user from the allowed users set in your config file. You must also remove the user from the Hub’s database, either by deleting the user from JupyterHub’s admin page, or by clearing the jupyterhub.sqlite database and starting fresh.

Use LocalAuthenticator to create system users

The LocalAuthenticator is a special kind of authenticator that has the ability to manage users on the local system. When you try to add a new user to the Hub, a LocalAuthenticator will check if the user already exists. If you set the configuration value, create_system_users, to True in the configuration file, the LocalAuthenticator has the privileges to add users to the system. The setting in the config file is:

c.LocalAuthenticator.create_system_users = True

Adding a user to the Hub that doesn’t already exist on the system will result in the Hub creating that user via the system adduser command line tool. This option is typically used on hosted deployments of JupyterHub, to avoid the need to manually create all your users before launching the service. This approach is not recommended when running JupyterHub in situations where JupyterHub users map directly onto the system’s UNIX users.

Use DummyAuthenticator for testing

The DummyAuthenticator is a simple authenticator that allows for any username/password unless a global password has been set. If set, it will allow for any username as long as the correct password is provided. To set a global password, add this to the config file:

c.DummyAuthenticator.password = "some_password"
Spawners and single-user notebook servers

Since the single-user server is an instance of jupyter notebook, an entirely separate multi-process application, there are many aspects of that server that can be configured, and a lot of ways to express that configuration.

At the JupyterHub level, you can set some values on the Spawner. The simplest of these is Spawner.notebook_dir, which lets you set the root directory for a user’s server. This root notebook directory is the highest level directory users will be able to access in the notebook dashboard. In this example, the root notebook directory is set to ~/notebooks, where ~ is expanded to the user’s home directory.

c.Spawner.notebook_dir = '~/notebooks'

You can also specify extra command line arguments to the notebook server with:

c.Spawner.args = ['--debug', '--profile=PHYS131']

This could be used to set the user’s default page for the single-user server:

c.Spawner.args = ['--NotebookApp.default_url=/notebooks/Welcome.ipynb']

Since the single-user server extends the notebook server application, it still loads configuration from the jupyter_notebook_config.py config file. Each user may have one of these files in $HOME/.jupyter/. Jupyter also supports loading system-wide config files from /etc/jupyter/, which is the place to put configuration that you want to affect all of your users.
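For example, a system-wide setting mirroring the default_url example above could be placed in /etc/jupyter/jupyter_notebook_config.py (a sketch; adjust the value to your deployment):

c.NotebookApp.default_url = '/notebooks/Welcome.ipynb'  # landing page for every user's notebook server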

External services

When working with JupyterHub, a Service is defined as a process that interacts with the Hub’s REST API. A Service may perform a specific action or task. For example, shutting down individuals’ single user notebook servers that have been idle for some time is a good example of a task that could be automated by a Service. Let’s look at how the jupyterhub_idle_culler script can be used as a Service.

Real-world example to cull idle servers

JupyterHub has a REST API that can be used by external services. This document will:

  • explain some basic information about API tokens

  • clarify that API tokens can be used to authenticate to single-user servers as of version 0.8.0

  • show how the jupyterhub_idle_culler script can be:

    • used in a Hub-managed service

    • run as a standalone script

Both examples for jupyterhub_idle_culler will communicate tasks to the Hub via the REST API.

API Token basics
Create an API token

To run such an external service, an API token must be created and provided to the service.

As of version 0.6.0, the preferred way of doing this is to first generate an API token:

openssl rand -hex 32

In version 0.8.0, a TOKEN request page for generating an API token is available from the JupyterHub user interface:

Request API TOKEN page

API TOKEN success page

Pass environment variable with token to the Hub

In the case of cull_idle_servers, it is passed as the environment variable called JUPYTERHUB_API_TOKEN.

Use API tokens for services and tasks that require external access

While API tokens are often associated with a specific user, API tokens can be used by services that require external access for activities that may not correspond to a specific human, e.g. adding users during setup for a tutorial or workshop. Add a service and its API token to the JupyterHub configuration file, jupyterhub_config.py:

c.JupyterHub.services = [
    {'name': 'adding-users', 'api_token': 'super-secret-token'},
]
Restart JupyterHub

Upon restarting JupyterHub, you should see a message like below in the logs:

Adding API token for <username>
Authenticating to single-user servers using API token

In JupyterHub 0.7, there is no mechanism for token authentication to single-user servers, and only cookies can be used for authentication. 0.8 supports using JupyterHub API tokens to authenticate to single-user servers.
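As a sketch of what that enables, a script holding a valid API token can talk directly to a user's running server through the proxy, for example to check its activity status. The URL and token below are placeholders, and the token's owner must have access to that server:

import requests

base_url = 'http://localhost:8000'   # public JupyterHub URL (assumed default)
token = 'abc123...'                  # placeholder API token

r = requests.get(base_url + '/user/hortense/api/status',
                 headers={'Authorization': 'token ' + token})
r.raise_for_status()
print(r.json())   # server status, including its last activity information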

Configure the idle culler to run as a Hub-Managed Service

Install the idle culler:

pip install jupyterhub-idle-culler

In jupyterhub_config.py, add the following dictionary for the idle-culler Service to the c.JupyterHub.services list:

import sys

c.JupyterHub.services = [
    {
        'name': 'idle-culler',
        'command': [sys.executable, '-m', 'jupyterhub_idle_culler', '--timeout=3600'],
    }
]

c.JupyterHub.load_roles = [
    {
        "name": "list-and-cull", # name the role
        "services": [
            "idle-culler", # assign the service to this role
        ],
        "scopes": [
            # declare what permissions the service should have
            "list:users", # list users
            "read:users:activity", # read user last-activity
            "admin:servers", # start/stop servers
        ],
    }
]

where:

  • command indicates that the Service will be launched as a subprocess, managed by the Hub.

Changed in version 2.0: Prior to 2.0, the idle-culler required ‘admin’ permissions. It now needs the scopes:

  • list:users to access the user list endpoint

  • read:users:activity to read activity info

  • admin:servers to start/stop servers

Run cull-idle manually as a standalone script

Now you can run the script by providing it the API token; it will authenticate to the Hub’s REST API and act through it.

This will run the idle culler service manually. It can be run as a standalone script anywhere with access to the Hub, and will periodically check for idle servers and shut them down via the Hub’s REST API. In order to shut down the servers, the token given to cull-idle must have permission to list users and admin their servers.

Generate an API token and store it in the JUPYTERHUB_API_TOKEN environment variable. Run jupyterhub_idle_culler manually.

    export JUPYTERHUB_API_TOKEN='token'
    python -m jupyterhub_idle_culler [--timeout=900] [--url=http://127.0.0.1:8081/hub/api]
Frequently asked questions
Institutional FAQ

This page contains common questions from users of JupyterHub, broken down by their roles within organizations.

For all
Is it appropriate for adoption within a larger institutional context?

Yes! JupyterHub has been used at-scale for large pools of users, as well as complex and high-performance computing. For example, UC Berkeley uses JupyterHub for its Data Science Education Program courses (serving over 3,000 students). The Pangeo project uses JupyterHub to provide access to scalable cloud computing with Dask. JupyterHub is stable and customizable to the use-cases of large organizations.

I keep hearing about Jupyter Notebook, JupyterLab, and now JupyterHub. What’s the difference?

Here is a quick breakdown of these three tools:

  • The Jupyter Notebook is a document specification (the .ipynb file) that interweaves narrative text with code cells and their outputs. It is also a graphical interface that allows users to edit these documents. There are also several other graphical interfaces that allow users to edit the .ipynb format (nteract, Jupyter Lab, Google Colab, Kaggle, etc).

  • JupyterLab is a flexible and extendible user interface for interactive computing. It has several extensions that are tailored for using Jupyter Notebooks, as well as extensions for other parts of the data science stack.

  • JupyterHub is an application that manages interactive computing sessions for multiple users. It also connects them with infrastructure those users wish to access. It can provide remote access to Jupyter Notebooks and JupyterLab for many people.

For management
Briefly, what problem does JupyterHub solve for us?

JupyterHub provides a shared platform for data science and collaboration. It allows users to utilize familiar data science workflows (such as the scientific Python stack, the R tidyverse, and Jupyter Notebooks) on institutional infrastructure. It also allows administrators some control over access to resources, security, environments, and authentication.

Is JupyterHub mature? Why should we trust it?

Yes - the core JupyterHub application recently reached 1.0 status, and is considered stable and performant for most institutions. JupyterHub has also been deployed (along with other tools) to work on scalable infrastructure, large datasets, and high-performance computing.

Who else uses JupyterHub?

JupyterHub is used at a variety of institutions in academia, industry, and government research labs. It is most-commonly used by two kinds of groups:

  • Small teams (e.g., data science teams, research labs, or collaborative projects) to provide a shared resource for interactive computing, collaboration, and analytics.

  • Large teams (e.g., a department, a large class, or a large group of remote users) to provide access to organizational hardware, data, and analytics environments at scale.

Here is a sample of organizations that use JupyterHub:

  • Universities and colleges: UC Berkeley, UC San Diego, Cal Poly SLO, Harvard University, University of Chicago, University of Oslo, University of Sheffield, Université Paris Sud, University of Versailles

  • Research laboratories: NASA, NCAR, NOAA, the Large Synoptic Survey Telescope, Brookhaven National Lab, Minnesota Supercomputing Institute, ALCF, CERN, Lawrence Livermore National Laboratory

  • Online communities: Pangeo, Quantopian, mybinder.org, MathHub, Open Humans

  • Computing infrastructure providers: NERSC, San Diego Supercomputing Center, Compute Canada

  • Companies: Capital One, SANDVIK code, Globus

See the Gallery of JupyterHub deployments for a more complete list of JupyterHub deployments at institutions.

How does JupyterHub compare with hosted products, like Google Colaboratory, RStudio.cloud, or Anaconda Enterprise?

JupyterHub puts you in control of your data, infrastructure, and coding environment. In addition, it is vendor neutral, which reduces lock-in to a particular vendor or service. JupyterHub provides access to interactive computing environments in the cloud (similar to each of these services). Compared with the tools above, it is more flexible, more customizable, free, and gives administrators more control over their setup and hardware.

Because JupyterHub is an open-source, community-driven tool, it can be extended and modified to fit an institution’s needs. It plays nicely with the open source data science stack, and can serve a variety of computing environments, user interfaces, and computational hardware. It can also be deployed anywhere - on enterprise cloud infrastructure, on High-Performance-Computing machines, on local hardware, or even on a single laptop, which is not possible with most other tools for shared interactive computing.

For IT
How would I set up JupyterHub on institutional hardware?

That depends on what kind of hardware you’ve got. JupyterHub is flexible enough to be deployed on a variety of hardware, including in-room hardware, on-prem clusters, cloud infrastructure, etc.

The most common way to set up a JupyterHub is to use a JupyterHub distribution; these are pre-configured and opinionated ways to set up a JupyterHub on particular kinds of infrastructure. The two distributions that we currently suggest are:

  • Zero to JupyterHub for Kubernetes is a scalable JupyterHub deployment and guide that runs on Kubernetes. Better for larger or dynamic user groups (50-10,000) or more complex compute/data needs.

  • The Littlest JupyterHub is a lightweight JupyterHub that runs on a single machine (in the cloud or under your desk). Better for smaller user groups (4-80) or more lightweight computational resources.

Does JupyterHub run well in the cloud?

Yes - most deployments of JupyterHub are run via cloud infrastructure and on a variety of cloud providers. Depending on the distribution of JupyterHub that you’d like to use, you can also connect your JupyterHub deployment with a number of other cloud-native services so that users have access to other resources from their interactive computing sessions.

For example, if you use the Zero to JupyterHub for Kubernetes distribution, you’ll be able to utilize container-based workflows of other technologies such as the dask-kubernetes project for distributed computing.

The Z2JH Helm Chart also has some functionality built in for auto-scaling your cluster up and down as more resources are needed - allowing you to utilize the benefits of a flexible cloud-based deployment.

Is JupyterHub secure?

The short answer: yes. JupyterHub as a standalone application has been battle-tested at an institutional level for several years, and makes a number of “default” security decisions that are reasonable for most users.

The longer answer: it depends on your deployment. Because JupyterHub is very flexible, it can be used in a variety of deployment setups. This often entails connecting your JupyterHub to other infrastructure (such as a Dask Gateway service). There are many security decisions to be made in these cases, and the security of your JupyterHub deployment will often depend on these decisions.

If you are worried about security, don’t hesitate to reach out to the JupyterHub community in the Jupyter Community Forum. This community of practice has many individuals with experience running secure JupyterHub deployments.

Does JupyterHub provide computing or data infrastructure?

No - JupyterHub manages user sessions and can control computing infrastructure, but it does not provide these things itself. You are expected to run JupyterHub on your own infrastructure (local or in the cloud). Moreover, JupyterHub has no internal concept of “data”, but is designed to be able to communicate with data repositories (again, either locally or remotely) for use within interactive computing sessions.

How do I manage users?

JupyterHub offers a few options for managing your users. Upon setting up a JupyterHub, you can choose what kind of authentication you’d like to use. For example, you can have users sign up with an institutional email address, or choose a username / password when they first log-in, or offload authentication onto another service such as an organization’s OAuth.

The users of a JupyterHub are stored locally, and can be modified manually by an administrator of the JupyterHub. Moreover, the active users on a JupyterHub can be found on the administrator’s page. This page gives you the ability to stop or restart kernels, inspect user filesystems, and even take over user sessions to assist them with debugging.

How do I manage software environments?

A key benefit of JupyterHub is the ability for an administrator to define the environment(s) that users have access to. There are many ways to do this, depending on what kind of infrastructure you’re using for your JupyterHub.

For example, The Littlest JupyterHub runs on a single VM. In this case, the administrator defines an environment by installing packages to a shared folder that exists on the path of all users. The JupyterHub for Kubernetes deployment uses Docker images to define environments. You can create your own list of Docker images that users can select from, and can also control things like the amount of RAM available to users, or the types of machines that their sessions will use in the cloud.

How does JupyterHub manage computational resources?

For interactive computing sessions, JupyterHub controls computational resources via a spawner. Spawners define how a new user session is created, and are customized for particular kinds of infrastructure. For example, the KubeSpawner knows how to control a Kubernetes deployment to create new pods when users log in.

For more sophisticated computational resources (like distributed computing), JupyterHub can connect with other infrastructure tools (like Dask or Spark). This allows users to control scalable or high-performance resources from within their JupyterHub sessions. The logic of how those resources are controlled is taken care of by the non-JupyterHub application.

Can JupyterHub be used with my high-performance computing resources?

Yes - JupyterHub can provide access to many kinds of computing infrastructure. Especially when combined with other open-source schedulers such as Dask, you can manage fairly complex computing infrastructures from the interactive sessions of a JupyterHub. For example see the Dask HPC page.

How many resources do user sessions take?

This is highly configurable by the administrator. If you wish for your users to have simple data analytics environments for prototyping and light data exploring, you can restrict their memory and CPU based on the resources that you have available. If you’d like your JupyterHub to serve as a gateway to high-performance compute or data resources, you may increase the resources available on user machines, or connect them with computing infrastructures elsewhere.
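As an illustration only, per-user caps can be sketched in jupyterhub_config.py with the generic Spawner limits below; whether they are actually enforced depends on the Spawner in use (container-based spawners typically honor them), and the values are placeholders:

c.Spawner.mem_limit = '2G'   # per-user memory cap
c.Spawner.cpu_limit = 1.0    # per-user CPU cap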

Can I customize the look and feel of a JupyterHub?

JupyterHub provides some customization of the graphics displayed to users. The most common modification is to add custom branding to the JupyterHub login page, loading pages, and various elements that persist across all pages (such as headers).

For Technical Leads
Will JupyterHub “just work” with our team’s interactive computing setup?

Depending on the complexity of your setup, you’ll have different experiences with “out of the box” distributions of JupyterHub. If all of the resources you need will fit on a single VM, then The Littlest JupyterHub should get you up-and-running within a half day or so. For more complex setups, such as scalable Kubernetes clusters or access to high-performance computing and data, it will require more time and expertise with the technologies your JupyterHub will use (e.g., dev-ops knowledge with cloud computing).

In general, the base JupyterHub deployment is not the bottleneck for setup; the main effort is connecting your JupyterHub with the various services and tools that you wish to provide to your users.

How well does JupyterHub scale? What are JupyterHub’s limitations?

JupyterHub works well at both a small scale (e.g., a single VM or machine) as well as a high scale (e.g., a scalable Kubernetes cluster). It can be used for teams as small as 2, and for user bases as large as 10,000. The scalability of JupyterHub largely depends on the infrastructure on which it is deployed. JupyterHub has been designed to be lightweight and flexible, so you can tailor your JupyterHub deployment to your needs.

Is JupyterHub resilient? What happens when a machine goes down?

For JupyterHubs that are deployed in a containerized environment (e.g., Kubernetes), it is possible to configure the JupyterHub to be fairly resistant to failures in the system. For example, if JupyterHub fails, then user sessions will not be affected (though new users will not be able to log in). When a JupyterHub process is restarted, it should seamlessly connect with the user database and the system will return to normal. Again, the details of your JupyterHub deployment (e.g., whether it’s deployed on a scalable cluster) will affect the resiliency of the deployment.

What interfaces does JupyterHub support?

Out of the box, JupyterHub supports a variety of popular data science interfaces for user sessions, such as JupyterLab, Jupyter Notebooks, and RStudio. Any interface that can be served via a web address can be served with a JupyterHub (with the right setup).

Does JupyterHub make it easier for our team to collaborate?

JupyterHub provides a standardized environment and access to shared resources for your teams. This greatly reduces the cost associated with sharing analyses and content with other team members, and makes it easier to collaborate and build off of one another’s ideas. Combined with access to high-performance computing and data, JupyterHub provides a common resource to amplify your team’s ability to prototype their analyses, scale them to larger data, and then share their results with one another.

JupyterHub also provides a computational framework to share computational narratives between different levels of an organization. For example, data scientists can share Jupyter Notebooks rendered as Voilà dashboards with those who are not familiar with programming, or create publicly-available interactive analyses to allow others to interact with your work.

Can I use JupyterHub with R/RStudio or other languages and environments?

Yes, Jupyter is a polyglot project, and there are over 40 community-provided kernels for a variety of languages (the most common being Python, Julia, and R). You can also use a JupyterHub to provide access to other interfaces, such as RStudio, that provide their own access to a language kernel.

Technical Reference

This section covers more of the details of the JupyterHub architecture, as well as what happens under-the-hood when you deploy and configure your JupyterHub.

Technical Overview

The Technical Overview section gives you a high-level view of:

  • JupyterHub’s Subsystems: Hub, Proxy, Single-User Notebook Server

  • how the subsystems interact

  • the process from JupyterHub access to user login

  • JupyterHub’s default behavior

  • customizing JupyterHub

The goal of this section is to share a deeper technical understanding of JupyterHub and how it works.

The Subsystems: Hub, Proxy, Single-User Notebook Server

JupyterHub is a set of processes that together provide a single user Jupyter Notebook server for each person in a group. Three major subsystems are started by the jupyterhub command line program:

  • Hub (Python/Tornado): manages user accounts, authentication, and coordinates Single User Notebook Servers using a Spawner.

  • Proxy: the public facing part of JupyterHub that uses a dynamic proxy to route HTTP requests to the Hub and Single User Notebook Servers. configurable http proxy (node-http-proxy) is the default proxy.

  • Single-User Notebook Server (Python/Tornado): a dedicated, single-user, Jupyter Notebook server is started for each user on the system when the user logs in. The object that starts the single-user notebook servers is called a Spawner.

JupyterHub subsystems

How the Subsystems Interact

Users access JupyterHub through a web browser, by going to the IP address or the domain name of the server.

The basic principles of operation are:

  • The Hub spawns the proxy (in the default JupyterHub configuration)

  • The proxy forwards all requests to the Hub by default

  • The Hub handles login, and spawns single-user notebook servers on demand

  • The Hub configures the proxy to forward url prefixes to single-user notebook servers

The proxy is the only process that listens on a public interface. The Hub sits behind the proxy at /hub. Single-user servers sit behind the proxy at /user/[username].

Different authenticators control access to JupyterHub. The default one (PAM) uses the user accounts on the server where JupyterHub is running. If you use this, you will need to create a user account on the system for each user on your team. Using other authenticators, you can allow users to sign in with e.g. a GitHub account, or with any single-sign-on system your organization has.
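For example, a sketch of switching to GitHub sign-in, assuming the separate oauthenticator package is installed (the callback URL, client id, and secret are placeholders from your own GitHub OAuth application):

c.JupyterHub.authenticator_class = 'oauthenticator.github.GitHubOAuthenticator'
c.GitHubOAuthenticator.oauth_callback_url = 'https://hub.example.com/hub/oauth_callback'
c.GitHubOAuthenticator.client_id = 'your-client-id'
c.GitHubOAuthenticator.client_secret = 'your-client-secret'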

Next, spawners control how JupyterHub starts the individual notebook server for each user. The default spawner will start a notebook server on the same machine running under their system username. The other main option is to start each server in a separate container, often using Docker.
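A corresponding sketch for container-based spawning, assuming the separate dockerspawner package is installed (the image name is a placeholder):

c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
c.DockerSpawner.image = 'jupyter/base-notebook'   # image used for each user's container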

The Process from JupyterHub Access to User Login

When a user accesses JupyterHub, the following events take place:

  • Login data is handed to the Authenticator instance for validation

  • The Authenticator returns the username if the login information is valid

  • A single-user notebook server instance is spawned for the logged-in user

  • When the single-user notebook server starts, the proxy is notified to forward requests to /user/[username]/* to the single-user notebook server.

  • A cookie is set on /hub/, containing an encrypted token. (Prior to version 0.8, a cookie for /user/[username] was used too.)

  • The browser is redirected to /user/[username], and the request is handled by the single-user notebook server.

The single-user server identifies the user with the Hub via OAuth:

  • on request, the single-user server checks a cookie

  • if no cookie is set, redirect to the Hub for verification via OAuth

  • after verification at the Hub, the browser is redirected back to the single-user server

  • the token is verified and stored in a cookie

  • if no user is identified, the browser is redirected back to /hub/login

Default Behavior

By default, the Proxy listens on all public interfaces on port 8000. Thus you can reach JupyterHub through either:

  • http://localhost:8000

  • or any other public IP or domain pointing to your system.

In their default configuration, the other services, the Hub and Single-User Notebook Servers, all communicate with each other on localhost only.

By default, starting JupyterHub will write two files to disk in the current working directory:

  • jupyterhub.sqlite is the SQLite database containing all of the state of the Hub. This file allows the Hub to remember which users are running and where, as well as storing other information enabling you to restart parts of JupyterHub separately. It is important to note that this database contains no sensitive information other than Hub usernames.

  • jupyterhub_cookie_secret is the encryption key used for securing cookies. This file needs to persist so that a Hub server restart will avoid invalidating cookies. Conversely, deleting this file and restarting the server effectively invalidates all login cookies. The cookie secret file is discussed in the Cookie Secret section of the Security Settings document.

The location of these files can be specified via configuration settings. It is recommended that these files be stored in standard UNIX filesystem locations, such as /etc/jupyterhub for all configuration files and /srv/jupyterhub for all security and runtime files.
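A sketch of relocating these files via configuration, following the recommended locations above:

c.JupyterHub.cookie_secret_file = '/srv/jupyterhub/jupyterhub_cookie_secret'
c.JupyterHub.db_url = 'sqlite:////srv/jupyterhub/jupyterhub.sqlite'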

Customizing JupyterHub

There are two basic extension points for JupyterHub:

  • How users are authenticated by Authenticators

  • How user’s single-user notebook server processes are started by Spawners

Each is governed by a customizable class, and JupyterHub ships with basic defaults for each.

To enable custom authentication and/or spawning, subclass Authenticator or Spawner, and override the relevant methods.
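As a minimal sketch of the authentication extension point (not a real authentication scheme), a custom Authenticator only needs to implement authenticate(), returning the username on success or None on failure:

from jupyterhub.auth import Authenticator

class MyAuthenticator(Authenticator):
    """Toy example: accept any username whose password matches a fixed string."""

    async def authenticate(self, handler, data):
        # `data` contains the fields submitted on the login form
        if data.get('password') == 'not-very-secret':   # placeholder check
            return data['username']
        return None

It could then be enabled in jupyterhub_config.py with c.JupyterHub.authenticator_class = MyAuthenticator.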

JupyterHub URL scheme

This document describes how JupyterHub routes requests.

This does not include the REST API URLs.

In general, all URLs can be prefixed with c.JupyterHub.base_url to run the whole JupyterHub application on a prefix.

All authenticated handlers redirect to /hub/login to log in users prior to being redirected back to the originating page. The returned request should preserve all query parameters.

/

The top-level request is always a simple redirect to /hub/, to be handled by the default JupyterHub handler.

In general, all requests to /anything that do not start with /hub/ but are routed to the Hub, will be redirected to /hub/anything before being handled by the Hub.

/hub/

This is an authenticated URL.

This handler redirects users to the default URL of the application, which defaults to the user’s default server. That is, it redirects to /hub/spawn if the user’s server is not running, or the server itself (/user/:name) if the server is running.

This default url behavior can be customized in two ways:

First, to redirect users to the JupyterHub home page (/hub/home) instead of spawning their server, set redirect_to_server to False:

c.JupyterHub.redirect_to_server = False

This might be useful if you have a Hub where you expect users to be managing multiple server configurations and automatic spawning is not desirable.

Second, you can customize the landing page to any page you like, such as a custom service you have deployed, e.g. with course information:

c.JupyterHub.default_url = '/services/my-landing-service'
/hub/home

The Hub home page with named servers enabled

By default, the Hub home page has just one or two buttons for starting and stopping the user’s server.

If named servers are enabled, there will be some additional tools for management of named servers.

Version added: 1.0. The named server UI is new in 1.0.

/hub/login

This is the JupyterHub login page. If you have a form-based username+password login, such as the default PAMAuthenticator, this page will render the login form.

A login form

If login is handled by an external service, e.g. with OAuth, this page will have a button, declaring “Login with …” which users can click to login with the chosen service.

A login redirect button

If you want to skip the user-interaction to initiate logging in via the button, you can set

c.Authenticator.auto_login = True

This can be useful when the user is “already logged in” via some mechanism, but a handshake via redirects is necessary to complete the authentication with JupyterHub.

/hub/logout

Visiting /hub/logout clears cookies from the current browser. Note that logging out does not stop a user’s server(s) by default.

If you would like to shutdown user servers on logout, you can enable this behavior with:

c.JupyterHub.shutdown_on_logout = True

Be careful with this setting because logging out one browser does not mean the user is no longer actively using their server from another machine.

/user/:username[/:servername]

If a user’s server is running, this URL is handled by the user’s given server, not the Hub. The username is the first part and, if using named servers, the server name is the second part.

If the user’s server is not running, this will be redirected to /hub/user/:username/...

/hub/user/:username[/:servername]

This URL indicates a request for a user server that is not running (because /user/... would have been handled by the notebook server if the specified server were running).

Handling this URL is the most complicated condition in JupyterHub, because there can be many states:

  1. server is not active
     a. user matches
     b. user doesn’t match

  2. server is ready

  3. server is pending, but not ready

If the server is pending spawn, the browser will be redirected to /hub/spawn-pending/:username/:servername to see a progress page while waiting for the server to be ready.

If the server is not active at all, a page will be served with a link to /hub/spawn/:username/:servername. Following that link will launch the requested server. The HTTP status will be 503 in this case because a request has been made for a server that is not running.

If the server is ready, it is assumed that the proxy has not yet registered the route. Some checks are performed and a delay is added before redirecting back to /user/:username/:servername/.... If something is really wrong, this can result in a redirect loop.

Visiting this page will never result in triggering the spawn of servers without additional user action (i.e. clicking the link on the page).

Visiting a URL for a server that's not running

Version changed: 1.0

Prior to 1.0, this URL itself was responsible for spawning servers: it served the progress page while a spawn was pending and redirected to running servers. This was useful because it made sure that requested servers were restarted after they stopped, but it could also be harmful because unused servers would continuously be restarted if, for example, an idle JupyterLab frontend were left open pointed at it, constantly making polling requests.

Special handling of API requests

Requests to /user/:username[/:servername]/api/... are assumed to be from applications connected to stopped servers. These are failed with 503 and an informative JSON error message indicating how to spawn the server. This is meant to help applications such as JupyterLab that are connected to a server that has stopped.

Version changed: 1.0

JupyterHub 0.9 failed these API requests with status 404, but 1.0 uses 503.

/user-redirect/...

This URL is for sharing a URL that will redirect a user to a path on their own default server. This is useful when users have the same file at the same URL on their servers, and you want a single link to give to any user that will open that file on their server.

e.g. a link to /user-redirect/notebooks/Index.ipynb will send user hortense to /user/hortense/notebooks/Index.ipynb

DO NOT share links to your own server with other users. This will not work in general, unless you grant those users access to your server.

Contributions welcome: The JupyterLab “shareable link” should share this link when run with JupyterHub, but it does not. See jupyterlab-hub where this should probably be done and this issue in JupyterLab that is intended to make it possible.

Spawning
/hub/spawn[/:username[/:servername]]

Requesting /hub/spawn will spawn the default server for the current user. If username and optionally servername are specified, then the specified server for the specified user will be spawned. Once spawn has been requested, the browser is redirected to /hub/spawn-pending/....

If Spawner.options_form is used, this will render a form, and a POST request will trigger the actual spawn and redirect.

The spawn form

Version added: 1.0

1.0 adds the ability to specify username and servername. Prior to 1.0, only /hub/spawn was recognized for the default server.

Version changed: 1.0

Prior to 1.0, this page redirected back to /hub/user/:username, which was responsible for triggering spawn and rendering progress, etc.

/hub/spawn-pending[/:username[/:servername]]

The spawn pending page

Version added: 1.0. This URL is new in JupyterHub 1.0.

This page renders the progress view for the given spawn request. Once the server is ready, the browser is redirected to the running server at /user/:username/:servername/....

If this page is requested at any time after the specified server is ready, the browser will be redirected to the running server.

Requesting this page will never trigger any side effects. If the server is not running (e.g. because the spawn has failed), the spawn failure message (if applicable) will be displayed, and the page will show a link back to /hub/spawn/....

/hub/token

The token management page

On this page, users can manage their JupyterHub API tokens. They can revoke access and request new tokens for writing scripts against the JupyterHub REST API.

/hub/admin

The admin panel

Administrators can take various administrative actions from this page:

  1. add/remove users

  2. grant admin privileges

  3. start/stop user servers

  4. shutdown JupyterHub itself

Security Overview

The Security Overview section helps you learn about:

  • the design of JupyterHub with respect to web security

  • the semi-trusted user

  • the available mitigations to protect untrusted users from each other

  • the value of periodic security audits.

This overview also helps you obtain a deeper understanding of how JupyterHub works.

Semi-trusted and untrusted users

JupyterHub is designed to be a simple multi-user server for modestly sized groups of semi-trusted users. While the design reflects serving semi-trusted users, JupyterHub is not necessarily unsuitable for serving untrusted users.

Using JupyterHub with untrusted users does mean more work by the administrator. Much care is required to secure a Hub, with extra caution on protecting users from each other as the Hub is serving untrusted users.

One aspect of JupyterHub’s design simplicity for semi-trusted users is that the Hub and single-user servers are placed in a single domain, behind a proxy. If the Hub is serving untrusted users, many of the web’s cross-site protections are not applied between single-user servers and the Hub, or between single-user servers and each other, since browsers see the whole thing (proxy, Hub, and single user servers) as a single website (i.e. single domain).

Protect users from each other

To protect users from each other, a user must never be able to write arbitrary HTML and serve it to another user on the Hub’s domain. JupyterHub’s authentication setup prevents a user writing arbitrary HTML and serving it to another user because only the owner of a given single-user notebook server is allowed to view user-authored pages served by the given single-user notebook server.

To protect all users from each other, JupyterHub administrators must ensure that:

  • A user does not have permission to modify their single-user notebook server, including:

    • A user may not install new packages in the Python environment that runs their single-user server.

    • If the PATH is used to resolve the single-user executable (instead of using an absolute path), a user may not create new files in any PATH directory that precedes the directory containing jupyterhub-singleuser.

    • A user may not modify environment variables (e.g. PATH, PYTHONPATH) for their single-user server.

  • A user may not modify the configuration of the notebook server (the ~/.jupyter or JUPYTER_CONFIG_DIR directory).

If any additional services are run on the same domain as the Hub, the services must never display user-authored HTML that is neither sanitized nor sandboxed (e.g. IFramed) to any user that lacks authentication as the author of a file.

Mitigate security issues

Several approaches to mitigating these issues with configuration options provided by JupyterHub include:

Enable subdomains

JupyterHub provides the ability to run single-user servers on their own subdomains. This means the cross-origin protections between servers has the desired effect, and user servers and the Hub are protected from each other. A user’s single-user server will be at username.jupyter.mydomain.com. This also requires all user subdomains to point to the same address, which is most easily accomplished with wildcard DNS. Since this spreads the service across multiple domains, you will need wildcard SSL, as well. Unfortunately, for many institutional domains, wildcard DNS and SSL are not available. If you do plan to serve untrusted users, enabling subdomains is highly encouraged, as it resolves the cross-site issues.
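For example, a minimal sketch of enabling subdomains in jupyterhub_config.py (the hostname is a placeholder; wildcard DNS and SSL for *.jupyter.mydomain.com are assumed to be in place):

c.JupyterHub.subdomain_host = 'https://jupyter.mydomain.com'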

Disable user config

If subdomains are not available or not desirable, JupyterHub provides a configuration option Spawner.disable_user_config, which can be set to prevent the user-owned configuration files from being loaded. After implementing this option, package installation and PATHs are the other things that the admin must enforce.
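For example, in jupyterhub_config.py:

c.Spawner.disable_user_config = True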

Prevent spawners from evaluating shell configuration files

For most Spawners, PATH is not something users can influence, but care should be taken to ensure that the Spawner does not evaluate shell configuration files prior to launching the server.

Isolate packages using virtualenv

Package isolation is most easily handled by running the single-user server in a virtualenv with disabled system-site-packages. The user should not have permission to install packages into this environment.

It is important to note that the control over the environment only affects the single-user server, and not the environment(s) in which the user’s kernel(s) may run. Installing additional packages in the kernel environment does not pose additional risk to the web application’s security.

Encrypt internal connections with SSL/TLS

By default, all communication on the server, between the proxy, hub, and single-user notebooks is performed unencrypted. Setting the internal_ssl flag in jupyterhub_config.py secures the aforementioned routes. Turning this feature on does require that the enabled Spawner can use the certificates generated by the Hub (the default LocalProcessSpawner can, for instance).

It is also important to note that this encryption does not (yet) cover the zmq tcp sockets between the Notebook client and kernel. While users cannot submit arbitrary commands to another user’s kernel, they can bind to these sockets and listen. When serving untrusted users, this eavesdropping can be mitigated by setting KernelManager.transport to ipc. This applies standard Unix permissions to the communication sockets thereby restricting communication to the socket owner. The internal_ssl option will eventually extend to securing the tcp sockets as well.
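As a sketch of the two settings described above, internal_ssl is set in jupyterhub_config.py, while the kernel transport is configured on the single-user/notebook server side:

# jupyterhub_config.py
c.JupyterHub.internal_ssl = True

# single-user server configuration (e.g. jupyter_notebook_config.py)
c.KernelManager.transport = 'ipc'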

Security audits

We recommend that you do periodic reviews of your deployment’s security. It’s good practice to keep JupyterHub, configurable-http-proxy, and nodejs versions up to date.

A handy website for testing your deployment is Qualsys’ SSL analyzer tool.

Vulnerability reporting

If you believe you’ve found a security vulnerability in JupyterHub, or any Jupyter project, please report it to security@ipython.org. If you prefer to encrypt your security reports, you can use this PGP public key.

Authenticators

The Authenticator is the mechanism for authorizing users to use the Hub and single user notebook servers.

The default PAM Authenticator

JupyterHub ships with the default PAM-based Authenticator, for logging in with local user accounts via a username and password.

The OAuthenticator

Some login mechanisms, such as OAuth, don’t map onto username and password authentication, and instead use tokens. When using these mechanisms, you can override the login handlers.

You can see an example implementation of an Authenticator that uses GitHub OAuth at OAuthenticator.

JupyterHub’s OAuthenticator currently supports the following popular services:

  • Auth0

  • Bitbucket

  • CILogon

  • GitHub

  • GitLab

  • Globus

  • Google

  • MediaWiki

  • Okpy

  • OpenShift

A generic implementation, which you can use for OAuth authentication with any provider, is also available.

The Dummy Authenticator

When testing, it may be helpful to use the jupyterhub.auth.DummyAuthenticator. This allows for any username and password unless a global password has been set. Once set, any username will still be accepted but the correct password will need to be provided.

Additional Authenticators

A partial list of other authenticators is available on the JupyterHub wiki.

Technical Overview of Authentication
How the Base Authenticator works

The base authenticator uses simple username and password authentication.

The base Authenticator has one central method:

Authenticator.authenticate method
Authenticator.authenticate(handler, data)

This method is passed the Tornado RequestHandler and the POST data from JupyterHub’s login form. Unless the login form has been customized, data will have two keys:

  • username

  • password

The authenticate method’s job is simple:

  • return the username (non-empty str) of the authenticated user if authentication is successful

  • return None otherwise

Writing an Authenticator that looks up passwords in a dictionary requires only overriding this one method:

from traitlets import Dict
from jupyterhub.auth import Authenticator

class DictionaryAuthenticator(Authenticator):

    passwords = Dict(config=True,
        help="""dict of username:password for authentication"""
    )

    async def authenticate(self, handler, data):
        if self.passwords.get(data['username']) == data['password']:
            return data['username']
Normalize usernames

Since the Authenticator and Spawner both use the same username, sometimes you want to transform the name coming from the authentication service (e.g. turning email addresses into local system usernames) before adding it to the Hub. Authenticators can define normalize_username, which takes a username and returns the normalized form. The default normalization is to cast names to lowercase.
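For example, a minimal sketch of a custom normalization that turns email-style names into local usernames (EmailAuthenticator is a hypothetical class name):

from jupyterhub.auth import Authenticator

class EmailAuthenticator(Authenticator):

    def normalize_username(self, username):
        # turn 'Jane.Doe@example.org' into 'jane.doe'
        return username.split('@', 1)[0].lower()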

For simple mappings, a configurable dict Authenticator.username_map is used to turn one name into another:

c.Authenticator.username_map  = {
  'service-name': 'localname'
}

When using PAMAuthenticator, you can set c.PAMAuthenticator.pam_normalize_username = True, which will normalize usernames using PAM (basically round-tripping them: username to uid to username), which is useful in case you use some external service that allows multiple usernames mapping to the same user (such as ActiveDirectory, yes, this really happens). When pam_normalize_username is on, usernames are not normalized to lowercase.

Validate usernames

In most cases, there is a very limited set of acceptable usernames. Authenticators can define validate_username(username), which should return True for a valid username and False for an invalid one. The primary effect this has is improving error messages during user creation.

The default behavior is to use configurable Authenticator.username_pattern, which is a regular expression string for validation.

To only allow usernames that start with ‘w’:

c.Authenticator.username_pattern = r'w.*'
How to write a custom authenticator

You can use custom Authenticator subclasses to enable authentication via other mechanisms. One such example is using GitHub OAuth.

Because the username is passed from the Authenticator to the Spawner, a custom Authenticator and Spawner are often used together. For example, the Authenticator methods, Authenticator.pre_spawn_start() and Authenticator.post_spawn_stop(), are hooks that can be used to do auth-related startup (e.g. opening PAM sessions) and cleanup (e.g. closing PAM sessions).

See a list of custom Authenticators on the wiki.

If you are interested in writing a custom authenticator, you can read this tutorial.

Registering custom Authenticators via entry points

As of JupyterHub 1.0, custom authenticators can register themselves via the jupyterhub.authenticators entry point metadata. To do this, in your setup.py add:

setup(
  ...
  entry_points={
    'jupyterhub.authenticators': [
        'myservice = mypackage:MyAuthenticator',
    ],
  },
)

If you have added this metadata to your package, users can select your authenticator with the configuration:

c.JupyterHub.authenticator_class = 'myservice'

instead of the full

c.JupyterHub.authenticator_class = 'mypackage:MyAuthenticator'

previously required. Additionally, configurable attributes for your authenticator will appear in jupyterhub help output and auto-generated configuration files via jupyterhub --generate-config.

Authentication state

JupyterHub 0.8 adds the ability to persist state related to authentication, such as auth-related tokens. If such state should be persisted, .authenticate() should return a dictionary of the form:

{
  'name': username,
  'auth_state': {
    'key': 'value',
  }
}

where username is the username that has been authenticated, and auth_state is any JSON-serializable dictionary.

Because auth_state may contain sensitive information, it is encrypted before being stored in the database. To store auth_state, two conditions must be met:

  1. persisting auth state must be enabled explicitly via configuration

    c.Authenticator.enable_auth_state = True
    
  2. encryption must be enabled by the presence of JUPYTERHUB_CRYPT_KEY environment variable, which should be a hex-encoded 32-byte key. For example:

    export JUPYTERHUB_CRYPT_KEY=$(openssl rand -hex 32)
    

JupyterHub uses Fernet to encrypt auth_state. To facilitate key-rotation, JUPYTERHUB_CRYPT_KEY may be a semicolon-separated list of encryption keys. If there are multiple keys present, the first key is always used to persist any new auth_state.

Using auth_state

Typically, if auth_state is persisted it is desirable to affect the Spawner environment in some way. This may mean defining environment variables, placing certificates in the user’s home directory, etc. The Authenticator.pre_spawn_start() method can be used to pass information from authenticator state to Spawner environment:

class MyAuthenticator(Authenticator):
    async def authenticate(self, handler, data=None):
        username = await identify_user(handler, data)
        upstream_token = await token_for_user(username)
        return {
            'name': username,
            'auth_state': {
                'upstream_token': upstream_token,
            },
        }

    async def pre_spawn_start(self, user, spawner):
        """Pass upstream_token to spawner via environment variable"""
        auth_state = await user.get_auth_state()
        if not auth_state:
            # auth_state not enabled
            return
        spawner.environment['UPSTREAM_TOKEN'] = auth_state['upstream_token']
Authenticator-managed group membership

New in version 2.2.

Some identity providers may have their own concept of group membership that you would like to preserve in JupyterHub. This is now possible with Authenticator.manage_groups.

You can set the config:

c.Authenticator.manage_groups = True

to enable this behavior. The default is False for Authenticators that ship with JupyterHub, but may be True for custom Authenticators. Check your Authenticator’s documentation for manage_groups support.

If True, Authenticator.authenticate() and Authenticator.refresh_user() may include a field groups which is a list of group names the user should be a member of:

  • Membership will be added for any group in the list

  • Membership in any groups not in the list will be revoked

  • Any groups not already present in the database will be created

  • If None is returned, no changes are made to the user’s group membership

If authenticator-managed groups are enabled, all group-management via the API is disabled.
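A minimal sketch of an Authenticator that reports group membership (look_up_groups is a placeholder for whatever your identity provider exposes; the behavior is enabled with c.GroupAwareAuthenticator.manage_groups = True in config):

from jupyterhub.auth import Authenticator

class GroupAwareAuthenticator(Authenticator):

    async def authenticate(self, handler, data):
        # (credential checks omitted for brevity)
        username = data['username']
        # look_up_groups is a hypothetical call to your identity provider
        groups = await look_up_groups(username)
        return {
            'name': username,
            'groups': groups,  # e.g. ['staff', 'project-a']
        }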

pre_spawn_start and post_spawn_stop hooks

Authenticators use two hooks, Authenticator.pre_spawn_start() and Authenticator.post_spawn_stop(), to pass additional state information between the authenticator and a spawner. These hooks are typically used for auth-related startup (e.g. opening a PAM session) and auth-related cleanup (e.g. closing a PAM session).

JupyterHub as an OAuth provider

Beginning with version 0.8, JupyterHub is an OAuth provider.

Spawners

A Spawner starts each single-user notebook server. The Spawner represents an abstract interface to a process, and a custom Spawner needs to be able to take three actions:

  • start the process

  • poll whether the process is still running

  • stop the process

Examples

Custom Spawners for JupyterHub can be found on the JupyterHub wiki. Some examples include:

  • DockerSpawner for spawning user servers in Docker containers

    • dockerspawner.DockerSpawner for spawning identical Docker containers for each user

    • dockerspawner.SystemUserSpawner for spawning Docker containers with an environment and home directory for each user

    • both DockerSpawner and SystemUserSpawner also work with Docker Swarm for launching containers on remote machines

  • SudoSpawner enables JupyterHub to run without being root, by spawning an intermediate process via sudo

  • BatchSpawner for spawning remote servers using batch systems

  • YarnSpawner for spawning notebook servers in YARN containers on a Hadoop cluster

  • SSHSpawner to spawn notebooks on a remote server using SSH

Spawner control methods
Spawner.start

Spawner.start should start the single-user server for a single user. Information about the user can be retrieved from self.user, an object encapsulating the user’s name, authentication, and server info.

The return value of Spawner.start should be the (ip, port) of the running server, or a full URL as a string.

Most Spawner.start functions will look similar to this example:

async def start(self):
    self.ip = '127.0.0.1'
    self.port = random_port()
    # get environment variables,
    # several of which are required for configuring the single-user server
    env = self.get_env()
    cmd = []
    # get jupyterhub command to run,
    # typically ['jupyterhub-singleuser']
    cmd.extend(self.cmd)
    cmd.extend(self.get_args())

    await self._actually_start_server_somehow(cmd, env)
    # url may not match self.ip:self.port, but it could!
    url = self._get_connectable_url()
    return url

When Spawner.start returns, the single-user server process should actually be running, not just requested. JupyterHub can handle Spawner.start being very slow (such as PBS-style batch queues, or instantiating whole AWS instances) via relaxing the Spawner.start_timeout config value.

Note on IPs and ports

Spawner.ip and Spawner.port attributes set the bind url, which the single-user server should listen on (passed to the single-user process via the JUPYTERHUB_SERVICE_URL environment variable). The return value is the ip and port (or full url) the Hub should connect to. These are not necessarily the same, and usually won’t be in any Spawner that works with remote resources or containers.

The default for Spawner.ip, and Spawner.port is 127.0.0.1:{random}, which is appropriate for Spawners that launch local processes, where everything is on localhost and each server needs its own port. For remote or container Spawners, it will often make sense to use a different value, such as ip = '0.0.0.0' and a fixed port, e.g. 8888. The defaults can be changed in the class, preserving configuration with traitlets:

from traitlets import default
from jupyterhub.spawner import Spawner

class MySpawner(Spawner):
    @default("ip")
    def _default_ip(self):
        return '0.0.0.0'

    @default("port")
    def _default_port(self):
        return 8888

    async def start(self):
        env = self.get_env()
        cmd = []
        # get jupyterhub command to run,
        # typically ['jupyterhub-singleuser']
        cmd.extend(self.cmd)
        cmd.extend(self.get_args())

        remote_server_info = await self._actually_start_server_somehow(cmd, env)
        url = self.get_public_url_from(remote_server_info)
        return url
Exception handling

When Spawner.start raises an Exception, a message can be passed on to the user via a .jupyterhub_html_message or .jupyterhub_message attribute on the exception.

When the Exception has a .jupyterhub_html_message attribute, it will be rendered as HTML to the user.

Alternatively .jupyterhub_message is rendered as unformatted text.

If neither attribute is present, the Exception will be shown to the user as unformatted text.
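For example, a sketch of surfacing a friendly error from Spawner.start (the quota check is a hypothetical helper):

from jupyterhub.spawner import Spawner

class MySpawner(Spawner):
    async def start(self):
        if not await self._user_has_quota():  # hypothetical helper
            e = RuntimeError('quota exceeded')
            e.jupyterhub_message = (
                'You have exceeded your resource quota; contact an administrator.'
            )
            raise e
        # ... normal start logic continues here ...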

Spawner.poll

Spawner.poll should check if the spawner is still running. It should return None if it is still running, and an integer exit status otherwise.

For the local process case, Spawner.poll uses os.kill(PID, 0) to check if the local process is still running. On Windows, it uses psutil.pid_exists.
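A rough sketch of such a check for a local process, assuming the spawner stores the process ID in self.pid (as in the state example below):

import os

from jupyterhub.spawner import Spawner

class MySpawner(Spawner):
    async def poll(self):
        """Return None if the server is running, or an integer exit status."""
        if not self.pid:
            return 0
        try:
            # signal 0 does not kill; it only checks that the process exists
            os.kill(self.pid, 0)
        except ProcessLookupError:
            return 0
        return None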

Spawner.stop

Spawner.stop should stop the process. It must be a tornado coroutine, which should return when the process has finished exiting.

Spawner state

JupyterHub should be able to stop and restart without tearing down single-user notebook servers. To do this task, a Spawner may need to persist some information that can be restored later. A JSON-able dictionary of state can be used to store persisted information.

Unlike start, stop, and poll methods, the state methods must not be coroutines.

For the single-process case, the Spawner state is only the process ID of the server:

def get_state(self):
    """get the current state"""
    state = super().get_state()
    if self.pid:
        state['pid'] = self.pid
    return state

def load_state(self, state):
    """load state from the database"""
    super().load_state(state)
    if 'pid' in state:
        self.pid = state['pid']

def clear_state(self):
    """clear any state (called after shutdown)"""
    super().clear_state()
    self.pid = 0
Spawner options form

(new in 0.4)

Some deployments may want to offer options to users to influence how their servers are started. This may include cluster-based deployments, where users specify what resources should be available, or docker-based deployments where users can select from a list of base images.

This feature is enabled by setting Spawner.options_form, which is an HTML form snippet inserted unmodified into the spawn form. If the Spawner.options_form is defined, when a user tries to start their server, they will be directed to a form page, like this:

spawn-form

If Spawner.options_form is undefined, the user’s server is spawned directly, and no spawn page is rendered.

See this example for a form that allows custom CLI args for the local spawner.
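For example, a minimal options_form snippet in jupyterhub_config.py (the field name 'stack' and its values are arbitrary and only illustrative):

c.Spawner.options_form = """
<label for="stack">Select your environment:</label>
<select name="stack" size="1">
  <option value="datascience">Data science</option>
  <option value="r">R</option>
</select>
"""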

Spawner.options_from_form

Options from this form will always be a dictionary of lists of strings, e.g.:

{
  'integer': ['5'],
  'text': ['some text'],
  'select': ['a', 'b'],
}

When formdata arrives, it is passed through Spawner.options_from_form(formdata), which is a method to turn the form data into the correct structure. This method must return a dictionary, and is meant to interpret the lists-of-strings into the correct types. For example, the options_from_form for the above form would look like:

def options_from_form(self, formdata):
    options = {}
    options['integer'] = int(formdata['integer'][0]) # single integer value
    options['text'] = formdata['text'][0] # single string value
    options['select'] = formdata['select'] # list already correct
    options['notinform'] = 'extra info' # not in the form at all
    return options

which would return:

{
  'integer': 5,
  'text': 'some text',
  'select': ['a', 'b'],
  'notinform': 'extra info',
}

When Spawner.start is called, this dictionary is accessible as self.user_options.

Writing a custom spawner

If you are interested in building a custom spawner, you can read this tutorial.

Registering custom Spawners via entry points

As of JupyterHub 1.0, custom Spawners can register themselves via the jupyterhub.spawners entry point metadata. To do this, in your setup.py add:

setup(
  ...
  entry_points={
    'jupyterhub.spawners': [
        'myservice = mypackage:MySpawner',
    ],
  },
)

If you have added this metadata to your package, users can select your spawner with the configuration:

c.JupyterHub.spawner_class = 'myservice'

instead of the full

c.JupyterHub.spawner_class = 'mypackage:MySpawner'

previously required. Additionally, configurable attributes for your spawner will appear in jupyterhub help output and auto-generated configuration files via jupyterhub --generate-config.

Environment variables and command-line arguments

Spawners mainly do one thing: launch a command in an environment.

The command-line is constructed from user configuration:

  • Spawner.cmd (default: ['jupyterhub-singleuser'])

  • Spawner.args (cli args to pass to the cmd, default: empty)

where the configuration:

c.Spawner.cmd = ["my-singleuser-wrapper"]
c.Spawner.args = ["--debug", "--flag"]

would result in spawning the command:

my-singleuser-wrapper --debug --flag

The Spawner.get_args() method is how Spawner.args is accessed, and can be used by Spawners to customize/extend user-provided arguments.

Prior to 2.0, JupyterHub unconditionally added certain options if specified to the command-line, such as --ip={Spawner.ip} and --port={Spawner.port}. These have now all been moved to environment variables, and from JupyterHub 2.0, the command-line launched by JupyterHub is fully specified by overridable configuration Spawner.cmd + Spawner.args.

Most process configuration is passed via environment variables. Additional variables can be specified via the Spawner.environment configuration.
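For example, in jupyterhub_config.py (the variable names and values are placeholders):

c.Spawner.environment = {
    'HTTP_PROXY': 'http://proxy.example.com:3128',
    'MY_FEATURE_FLAG': '1',
}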

The process environment is returned by Spawner.get_env, which specifies the following environment variables:

  • JUPYTERHUB_SERVICE_URL - the bind URL where the server should launch its HTTP server (e.g. http://127.0.0.1:12345). This includes Spawner.ip and Spawner.port; new in 2.0, prior to 2.0 ip and port were passed on the command-line, and only if specified

  • JUPYTERHUB_SERVICE_PREFIX - the URL prefix the service will run on (e.g. /user/name/)

  • JUPYTERHUB_USER - the JupyterHub user’s username

  • JUPYTERHUB_SERVER_NAME - the server’s name, if using named servers (default server has an empty name)

  • JUPYTERHUB_API_URL - the full url for the JupyterHub API (http://127.0.0.1:8001/hub/api)

  • JUPYTERHUB_BASE_URL - the base url of the whole jupyterhub deployment, i.e. the bit before hub/ or user/, as set by c.JupyterHub.base_url (default: /)

  • JUPYTERHUB_API_TOKEN - the API token the server can use to make requests to the Hub. This is also the OAuth client secret.

  • JUPYTERHUB_CLIENT_ID - the OAuth client ID for authenticating visitors.

  • JUPYTERHUB_OAUTH_CALLBACK_URL - the callback URL to use in oauth, typically /user/:name/oauth_callback

Optional environment variables, depending on configuration:

  • JUPYTERHUB_SSL_[KEYFILE|CERTFILE|CLIENT_CA] - SSL configuration, when internal_ssl is enabled

  • JUPYTERHUB_ROOT_DIR - the root directory of the server (notebook directory), when Spawner.notebook_dir is defined (new in 2.0)

  • JUPYTERHUB_DEFAULT_URL - the default URL for the server (for redirects from /user/:name/), if Spawner.default_url is defined (new in 2.0, previously passed via cli)

  • JUPYTERHUB_DEBUG=1 - generic debug flag, sets maximum log level when Spawner.debug is True (new in 2.0, previously passed via cli)

  • JUPYTERHUB_DISABLE_USER_CONFIG=1 - disable loading user config, set when Spawner.disable_user_config is True (new in 2.0, previously passed via cli)

  • JUPYTERHUB_[MEM|CPU]_[LIMIT|GUARANTEE] - the values of cpu and memory limits and guarantees. These are not expected to be enforced by the process, but are made available as a hint, e.g. for resource monitoring extensions.

Spawners, resource limits, and guarantees (Optional)

Some spawners of the single-user notebook servers allow setting limits or guarantees on resources, such as CPU and memory. To provide a consistent experience for sysadmins and users, we provide a standard way to set and discover these resource limits and guarantees, such as for memory and CPU. For the limits and guarantees to be useful, the spawner must implement support for them. For example, LocalProcessSpawner, the default spawner, does not support limits and guarantees. One of the spawners that supports limits and guarantees is the systemdspawner.

Memory Limits & Guarantees

c.Spawner.mem_limit: A limit specifies the maximum amount of memory that may be allocated, though there is no promise that the maximum amount will be available. In supported spawners, you can set c.Spawner.mem_limit to limit the total amount of memory that a single-user notebook server can allocate. Attempting to use more memory than this limit will cause errors. The single-user notebook server can discover its own memory limit by looking at the environment variable MEM_LIMIT, which is specified in absolute bytes.

c.Spawner.mem_guarantee: Sometimes, a guarantee of a minimum amount of memory is desirable. In this case, you can set c.Spawner.mem_guarantee to provide a guarantee that at minimum this much memory will always be available for the single-user notebook server to use. The environment variable MEM_GUARANTEE will also be set in the single-user notebook server.

The spawner’s underlying system or cluster is responsible for enforcing these limits and providing these guarantees. If these values are set to None, no limits or guarantees are provided, and no environment values are set.

CPU Limits & Guarantees

c.Spawner.cpu_limit: In supported spawners, you can set c.Spawner.cpu_limit to limit the total number of cpu-cores that a single-user notebook server can use. These can be fractional - 0.5 means 50% of one CPU core, 4.0 is 4 cpu-cores, etc. This value is also set in the single-user notebook server’s environment variable CPU_LIMIT. The limit does not claim that you will be able to use all the CPU up to your limit as other higher priority applications might be taking up CPU.

c.Spawner.cpu_guarantee: You can set c.Spawner.cpu_guarantee to provide a guarantee for CPU usage. The environment variable CPU_GUARANTEE will be set in the single-user notebook server when a guarantee is being provided.

The spawner’s underlying system or cluster is responsible for enforcing these limits and providing these guarantees. If these values are set to None, no limits or guarantees are provided, and no environment values are set.
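For example, with a spawner that supports them, limits and guarantees might be configured in jupyterhub_config.py as follows (the values are placeholders):

c.Spawner.mem_limit = '2G'         # sets MEM_LIMIT for the single-user server
c.Spawner.mem_guarantee = '512M'   # sets MEM_GUARANTEE
c.Spawner.cpu_limit = 2.0          # sets CPU_LIMIT (2 cpu-cores)
c.Spawner.cpu_guarantee = 0.5      # sets CPU_GUARANTEE (half a core)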

Encryption

Communication between the Proxy, Hub, and Notebook can be secured by turning on internal_ssl in jupyterhub_config.py. For a custom spawner to utilize these certs, there are two methods of interest on the base Spawner class: .create_certs and .move_certs.

The first method, .create_certs, will sign a key-cert pair using an internally trusted authority for notebooks. During this process, .create_certs can apply IP and DNS name information to the cert via an alt_names keyword argument. This is used for certificate authentication (verification). Without proper verification, the Notebook will be unable to communicate with the Hub, and vice versa, when internal_ssl is enabled. For example, in a deployment using DockerSpawner, where containers are started with IPs from the Docker subnet pool, the Spawner would need to choose a container IP prior to starting and pass it to .create_certs.

In general though, this method will not need to be changed and the default ip/dns (localhost) info will suffice.

When .create_certs is run, it will create the certs in a default, central location specified by c.JupyterHub.internal_certs_location. For Spawners that need access to these certs elsewhere (i.e. on another host altogether), the .move_certs method can be overridden to move the certs appropriately. Again, using DockerSpawner as an example, this would entail moving certs to a directory that will get mounted into the container this spawner starts.
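A rough sketch of overriding .move_certs, assuming it receives a dict of certificate paths (keyfile, certfile, cafile) and should return the paths the single-user server will actually use (the destination directory is a placeholder):

import os
import shutil

from jupyterhub.spawner import Spawner

class MyContainerSpawner(Spawner):
    async def move_certs(self, paths):
        # copy certs to a directory that will be mounted into the container
        dest = f'/srv/certs/{self.user.name}'  # hypothetical shared location
        os.makedirs(dest, exist_ok=True)
        return {key: shutil.copy(src, dest) for key, src in paths.items()}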

Services
Definition of a Service

When working with JupyterHub, a Service is defined as a process that interacts with the Hub’s REST API. A Service may perform a specific action or task. For example, the following tasks can each be a unique Service:

  • shutting down individuals’ single user notebook servers that have been idle for some time

  • registering additional web servers which should use the Hub’s authentication and be served behind the Hub’s proxy.

Two key features help define a Service:

  • Is the Service managed by JupyterHub?

  • Does the Service have a web server that should be added to the proxy’s table?

Currently, these characteristics distinguish two types of Services:

  • A Hub-Managed Service which is managed by JupyterHub

  • An Externally-Managed Service which runs its own web server and communicates operation instructions via the Hub’s API.

Properties of a Service

A Service may have the following properties:

  • name: str - the name of the service

  • admin: bool (default - false) - whether the service should have administrative privileges

  • url: str (default - None) - The URL where the service is/should be. If a url is specified for where the Service runs its own web server, the service will be added to the proxy at /services/:name

  • api_token: str (default - None) - For Externally-Managed Services you need to specify an API token to perform API requests to the Hub

If a service is also to be managed by the Hub, it has a few extra options:

  • command: (str/Popen list) - Command for JupyterHub to spawn the Service.

    • Only use this if the Service should be a subprocess.

    • If command is not specified, the Service is assumed to be managed externally.

    • If a command is specified for launching the Service, the Service will be started and managed by the Hub.

  • environment: dict - additional environment variables for the Service.

  • user: str - the name of a system user to manage the Service. If unspecified, run as the same user as the Hub.

Hub-Managed Services

A Hub-Managed Service is started by the Hub, and the Hub is responsible for the Service’s actions. A Hub-Managed Service can only be a local subprocess of the Hub. The Hub will take care of starting the process and restarting it if it stops.

While Hub-Managed Services share some similarities with notebook Spawners, there are no plans for Hub-Managed Services to support the same spawning abstractions as a notebook Spawner.

If you wish to run a Service in a Docker container or other deployment environments, the Service can be registered as an Externally-Managed Service, as described below.

Launching a Hub-Managed Service

A Hub-Managed Service is characterized by its specified command for launching the Service. For example, a ‘cull idle’ notebook server task configured as a Hub-Managed Service would include:

  • the Service name,

  • admin permissions, and

  • the command to launch the Service which will cull idle servers after a timeout interval

This example would be configured as follows in jupyterhub_config.py:

import sys

c.JupyterHub.load_roles = [
    {
        "name": "idle-culler",
        "scopes": [
            "read:users:activity", # read user last_activity
            "servers", # start and stop servers
            # 'admin:users' # needed if culling idle users as well
        ],
        # assign the role to the idle-culler service
        "services": ["idle-culler"],
    }
]

c.JupyterHub.services = [
    {
        'name': 'idle-culler',
        'command': [sys.executable, '-m', 'jupyterhub_idle_culler', '--timeout=3600']
    }
]

A Hub-Managed Service may also be configured with additional optional parameters, which describe the environment needed to start the Service process:

  • environment: dict - additional environment variables for the Service.

  • user: str - name of the user to run the server if different from the Hub. Requires Hub to be root.

  • cwd: path - the directory in which to run the Service, if different from the Hub directory.

The Hub will pass the following environment variables to launch the Service:

JUPYTERHUB_SERVICE_NAME:   The name of the service
JUPYTERHUB_API_TOKEN:      API token assigned to the service
JUPYTERHUB_API_URL:        URL for the JupyterHub API (default, http://127.0.0.1:8080/hub/api)
JUPYTERHUB_BASE_URL:       Base URL of the Hub (https://mydomain[:port]/)
JUPYTERHUB_SERVICE_PREFIX: URL path prefix of this service (/services/:service-name/)
JUPYTERHUB_SERVICE_URL:    Local URL where the service is expected to be listening.
                           Only for proxied web services.
JUPYTERHUB_OAUTH_SCOPES:   JSON-serialized list of scopes to use for allowing access to the service.

For the previous ‘cull idle’ Service example, these environment variables would be passed to the Service when the Hub starts the ‘cull idle’ Service:

JUPYTERHUB_SERVICE_NAME: 'idle-culler'
JUPYTERHUB_API_TOKEN: API token assigned to the service
JUPYTERHUB_API_URL: http://127.0.0.1:8080/hub/api
JUPYTERHUB_BASE_URL: https://mydomain[:port]
JUPYTERHUB_SERVICE_PREFIX: /services/idle-culler/

See the GitHub repo for additional information about the jupyterhub_idle_culler.

Externally-Managed Services

You may prefer to use your own service management tools, such as Docker or systemd, to manage a JupyterHub Service. These Externally-Managed Services, unlike Hub-Managed Services, are not subprocesses of the Hub. You must tell JupyterHub which API token the Externally-Managed Service is using to perform its API requests. Each Externally-Managed Service will need a unique API token, because the Hub authenticates each API request and the API token is used to identify the originating Service or user.

A configuration example of an Externally-Managed Service with admin access and running its own web server is:

c.JupyterHub.services = [
    {
        'name': 'my-web-service',
        'url': 'https://10.0.1.1:1984',
        # any secret >8 characters, you'll use api_token to
        # authenticate api requests to the hub from your service
        'api_token': 'super-secret',
    }
]

In this case, the url field will be passed along to the Service as JUPYTERHUB_SERVICE_URL.

Writing your own Services

When writing your own services, you have a few decisions to make (in addition to what your service does!):

  1. Does my service need a public URL?

  2. Do I want JupyterHub to start/stop the service?

  3. Does my service need to authenticate users?

When a Service is managed by JupyterHub, the Hub will pass the necessary information to the Service via the environment variables described above. A flexible Service, whether managed by the Hub or not, can make use of these same environment variables.

When you run a service that has a url, it will be accessible under a /services/ prefix, such as https://myhub.horse/services/my-service/. For your service to route proxied requests properly, it must take JUPYTERHUB_SERVICE_PREFIX into account when routing requests. For example, a web service would normally service its root handler at '/', but the proxied service would need to serve JUPYTERHUB_SERVICE_PREFIX.

Note that JUPYTERHUB_SERVICE_PREFIX will contain a trailing slash. This must be taken into consideration when creating the service routes. If you include an extra slash you might get unexpected behavior. For example if your service has a /foo endpoint, the route would be JUPYTERHUB_SERVICE_PREFIX + foo, and /foo/bar would be JUPYTERHUB_SERVICE_PREFIX + foo/bar.
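A small illustration of building routes from the prefix (the service name in the comments is a placeholder):

import os

# always ends with a trailing slash, e.g. '/services/my-service/'
prefix = os.environ.get('JUPYTERHUB_SERVICE_PREFIX', '/')

foo_route = prefix + 'foo'        # -> /services/my-service/foo
bar_route = prefix + 'foo/bar'    # -> /services/my-service/foo/bar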

Hub Authentication and Services

JupyterHub provides some utilities for using the Hub’s authentication mechanism to govern access to your service.

Requests to all JupyterHub services are made with OAuth tokens. These can either be requests with a token in the Authorization header, or url parameter ?token=..., or browser requests which must complete the OAuth authorization code flow, which results in a token that should be persisted for future requests (persistence is up to the service, but an encrypted cookie confined to the service path is appropriate, and provided by default).

Changed in version 2.0: The shared jupyterhub-services cookie is removed. OAuth must be used to authenticate browser requests with services.

JupyterHub includes a reference implementation of Hub authentication that can be used by services. You may go beyond this reference implementation and create custom hub-authenticating clients and services. We describe the process below.

The reference, or base, implementation is the HubAuth class, which implements the API requests to the Hub that resolve a token to a User model.

There are two levels of authentication with the Hub:

  • HubAuth - the most basic authentication, for services that should only accept API requests authorized with a token.

  • HubOAuth - For services that should use oauth to authenticate with the Hub. This should be used for any service that serves pages that should be visited with a browser.

To use HubAuth, you must set the .api_token, either programmatically when constructing the class, or via the JUPYTERHUB_API_TOKEN environment variable.

Most of the logic for authentication implementation is found in the HubAuth.user_for_token() method, which makes a request of the Hub and returns:

  • None, if no user could be identified, or

  • a dict of the following form:

    {
      "name": "username",
      "groups": ["list", "of", "groups"],
      "scopes": [
          "access:users:servers!server=username/",
      ],
    }
    

You are then free to use the returned user information to take appropriate action.

HubAuth also caches the Hub’s response for a number of seconds, configurable by the cookie_cache_max_age setting (default: five minutes).
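A minimal sketch of using HubAuth to resolve a token taken from an incoming request (via the Authorization header or ?token= parameter):

import os

from jupyterhub.services.auth import HubAuth

auth = HubAuth(api_token=os.environ['JUPYTERHUB_API_TOKEN'], cache_max_age=60)

def identify(token):
    """Return the Hub's user model for this token, or None if it is not valid."""
    return auth.user_for_token(token)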

If your service would like to make further requests on behalf of users, it should use the token issued by this OAuth process. If you are using tornado, you can access the token authenticating the current request with HubAuth.get_token().

Changed in version 2.2: HubAuth.get_token() adds support for retrieving tokens stored in tornado cookies after completion of OAuth. Previously, it only retrieved tokens from URL parameters or the Authorization header. Passing get_token(handler, in_cookie=False) preserves this behavior.

Flask Example

For example, you have a Flask service that returns information about a user. JupyterHub’s HubAuth class can be used to authenticate requests to the Flask service. See the service-whoami-flask example in the JupyterHub GitHub repo for more details.

#!/usr/bin/env python3
"""
whoami service authentication with the Hub
"""
import json
import os
import secrets
from functools import wraps

from flask import Flask
from flask import make_response
from flask import redirect
from flask import request
from flask import Response
from flask import session

from jupyterhub.services.auth import HubOAuth


prefix = os.environ.get('JUPYTERHUB_SERVICE_PREFIX', '/')

auth = HubOAuth(api_token=os.environ['JUPYTERHUB_API_TOKEN'], cache_max_age=60)

app = Flask(__name__)
# encryption key for session cookies
app.secret_key = secrets.token_bytes(32)


def authenticated(f):
    """Decorator for authenticating with the Hub via OAuth"""

    @wraps(f)
    def decorated(*args, **kwargs):
        token = session.get("token")

        if token:
            user = auth.user_for_token(token)
        else:
            user = None

        if user:
            return f(user, *args, **kwargs)
        else:
            # redirect to login url on failed auth
            state = auth.generate_state(next_url=request.path)
            response = make_response(redirect(auth.login_url + '&state=%s' % state))
            response.set_cookie(auth.state_cookie_name, state)
            return response

    return decorated


@app.route(prefix)
@authenticated
def whoami(user):
    return Response(
        json.dumps(user, indent=1, sort_keys=True), mimetype='application/json'
    )


@app.route(prefix + 'oauth_callback')
def oauth_callback():
    code = request.args.get('code', None)
    if code is None:
        return 403

    # validate state field
    arg_state = request.args.get('state', None)
    cookie_state = request.cookies.get(auth.state_cookie_name)
    if arg_state is None or arg_state != cookie_state:
        # state doesn't match
        return 403

    token = auth.token_for_code(code)
    # store token in session cookie
    session["token"] = token
    next_url = auth.get_next_url(cookie_state) or prefix
    response = make_response(redirect(next_url))
    return response
Authenticating tornado services with JupyterHub

Since most Jupyter services are written with tornado, we include a mixin class, HubOAuthenticated, for quickly authenticating your own tornado services with JupyterHub.

Tornado’s authenticated() decorator calls a Handler’s get_current_user() method to identify the user. Mixing in HubAuthenticated defines get_current_user() to use HubAuth. If you want to configure the HubAuth instance beyond the default, you’ll want to define an initialize() method, such as:

class MyHandler(HubOAuthenticated, web.RequestHandler):

    def initialize(self, hub_auth):
        self.hub_auth = hub_auth

    @web.authenticated
    def get(self):
        ...

The HubAuth class will automatically load the desired configuration from the Service environment variables.

Changed in version 2.0: Access scopes are used to govern access to services. Prior to 2.0, sets of users and groups could be used to grant access by defining .hub_groups or .hub_users on the authenticated handler. These are ignored if the 2.0 .hub_scopes is defined.

Implementing your own Authentication with JupyterHub

If you don’t want to use the reference implementation (e.g. you find the implementation a poor fit for your Flask app), you can implement authentication via the Hub yourself. JupyterHub is a standard OAuth2 provider, so you can use any OAuth 2 client implementation appropriate for your toolkit. See the FastAPI example for an example of using JupyterHub as an OAuth provider with FastAPI, without using any code imported from JupyterHub.

On completion of OAuth, you will have an access token for JupyterHub, which can be used to identify the user and the permissions (scopes) the user has authorized for your service.

You will only get to this stage if the user has the required access:services!service=$service-name scope.

To retrieve the user model for the token, make a request to GET /hub/api/user with the token in the Authorization header. For example, using flask:

#!/usr/bin/env python3
"""
whoami service authentication with the Hub
"""
import json
import os
import secrets
from functools import wraps

from flask import Flask
from flask import make_response
from flask import redirect
from flask import request
from flask import Response
from flask import session

from jupyterhub.services.auth import HubOAuth


prefix = os.environ.get('JUPYTERHUB_SERVICE_PREFIX', '/')

auth = HubOAuth(api_token=os.environ['JUPYTERHUB_API_TOKEN'], cache_max_age=60)

app = Flask(__name__)
# encryption key for session cookies
app.secret_key = secrets.token_bytes(32)


def authenticated(f):
    """Decorator for authenticating with the Hub via OAuth"""

    @wraps(f)
    def decorated(*args, **kwargs):
        token = session.get("token")

        if token:
            user = auth.user_for_token(token)
        else:
            user = None

        if user:
            return f(user, *args, **kwargs)
        else:
            # redirect to login url on failed auth
            state = auth.generate_state(next_url=request.path)
            response = make_response(redirect(auth.login_url + '&state=%s' % state))
            response.set_cookie(auth.state_cookie_name, state)
            return response

    return decorated


@app.route(prefix)
@authenticated
def whoami(user):
    return Response(
        json.dumps(user, indent=1, sort_keys=True), mimetype='application/json'
    )


@app.route(prefix + 'oauth_callback')
def oauth_callback():
    code = request.args.get('code', None)
    if code is None:
        return 403

    # validate state field
    arg_state = request.args.get('state', None)
    cookie_state = request.cookies.get(auth.state_cookie_name)
    if arg_state is None or arg_state != cookie_state:
        # state doesn't match
        return 403

    token = auth.token_for_code(code)
    # store token in session cookie
    session["token"] = token
    next_url = auth.get_next_url(cookie_state) or prefix
    response = make_response(redirect(next_url))
    return response

We recommend looking at the HubOAuth class implementation for reference, and taking note of the following process:

  1. retrieve the token from the request.

  2. Make an API request GET /hub/api/user, with the token in the Authorization header.

    For example, with requests:

    r = requests.get(
        "http://127.0.0.1:8081/hub/api/user",
        headers = {
            'Authorization' : f'token {api_token}',
        },
    )
    r.raise_for_status()
    user = r.json()
    
  3. On success, the reply will be a JSON model describing the user:

    {
      "name": "inara",
      # groups may be omitted, depending on permissions
      "groups": ["serenity", "guild"],
      # scopes is new in JupyterHub 2.0
      "scopes": [
        "access:services",
        "read:users:name",
        "read:users!user=inara",
        "..."
      ]
    }
    

The scopes field can be used to manage access. Note: a user must have access to the service in order to complete OAuth with it for the first time, but individual permissions may be revoked at any later point without revoking the token, in which case the scopes field in this model should be checked on each access. The default required scopes for access are available from hub_auth.oauth_scopes or $JUPYTERHUB_OAUTH_SCOPES.

An example of using an Externally-Managed Service and authentication is in the nbviewer README section on securing the notebook viewer, and an example of its configuration is found here. nbviewer can also be run as a Hub-Managed Service, as described in the same nbviewer README section on securing the notebook viewer.

Writing a custom Proxy implementation

JupyterHub 0.8 introduced the ability to write a custom implementation of the proxy. This enables deployments with different needs than the default proxy, configurable-http-proxy (CHP). CHP is a single-process nodejs proxy that the Hub manages by default as a subprocess (it can be run externally, as well, and typically is in production deployments).

The upside to CHP, and why we use it by default, is that it’s easy to install and run (if you have nodejs, you are set!). The downsides are that it’s a single process and does not support any persistence of the routing table. So if the proxy process dies, your whole JupyterHub instance is inaccessible until the Hub notices, restarts the proxy, and restores the routing table. For deployments that want to avoid such a single point of failure, or leverage existing proxy infrastructure in their chosen deployment (such as Kubernetes ingress objects), the Proxy API provides a way to do that.

In general, for a proxy to be usable by JupyterHub, it must:

  1. support websockets without prior knowledge of the URL where websockets may occur

  2. support trie-based routing (i.e. allow different routes on /foo and /foo/bar and route based on specificity)

  3. adding or removing a route should not cause existing connections to drop

Optionally, if the JupyterHub deployment is to use host-based routing, the Proxy must additionally support routing based on the Host of the request.

Subclassing Proxy

To start, any Proxy implementation should subclass the base Proxy class, as is done with custom Spawners and Authenticators.

from jupyterhub.proxy import Proxy

class MyProxy(Proxy):
    """My Proxy implementation"""
    ...
Starting and stopping the proxy

If your proxy should be launched when the Hub starts, you must define how to start and stop your proxy:

class MyProxy(Proxy):
    ...
    async def start(self):
        """Start the proxy"""

    async def stop(self):
        """Stop the proxy"""

These methods may be coroutines.

c.Proxy.should_start is a configurable flag that determines whether the Hub should call these methods when the Hub itself starts and stops.

Encryption

When using internal_ssl to encrypt traffic behind the proxy, at minimum, your Proxy will need client ssl certificates which the Hub must be made aware of. These can be generated with the command jupyterhub --generate-certs which will write them to the internal_certs_location in folders named proxy_api and proxy_client. Alternatively, these can be provided to the hub via the jupyterhub_config.py file by providing a dict of named paths to the external_authorities option. The hub will include all certificates provided in that dict in the trust bundle utilized by all internal components.

Purely external proxies

Probably most custom proxies will be externally managed, such as Kubernetes ingress-based implementations. In this case, you do not need to define start and stop. To disable the methods, you can define should_start = False at the class level:

class MyProxy(Proxy):
    should_start = False
Routes

At its most basic, a Proxy implementation defines a mechanism to add, remove, and retrieve routes. A proxy that implements these three methods is complete. Each of these methods may be a coroutine.

Definition: routespec

A routespec, which will appear in these methods, is a string describing a route to be proxied, such as /user/name/. A routespec will:

  1. always end with /

  2. always start with / if it is a path-based route /proxy/path/

  3. precede the leading / with a host for host-based routing, e.g. host.tld/proxy/path/

Adding a route

When adding a route, JupyterHub may pass a JSON-serializable dict as a data argument that should be attached to the proxy route. When that route is retrieved, the data argument should be returned as well. If your proxy implementation doesn’t support storing data attached to routes, then your Python wrapper may have to handle storing the data piece itself, e.g in a simple file or database.

async def add_route(self, routespec, target, data):
    """Proxy `routespec` to `target`.

    Store `data` associated with the routespec
    for retrieval later.
    """

Adding a route for a user looks like this:

await proxy.add_route('/user/pgeorgiou/', 'http://127.0.0.1:1227',
                {'user': 'pgeorgiou'})
Removing routes

delete_route() is given a routespec to delete. If there is no such route, delete_route should still succeed, but a warning may be issued.

async def delete_route(self, routespec):
    """Delete the route"""
Retrieving routes

For retrieval, you only need to implement a single method that retrieves all routes. The return value for this function should be a dictionary, keyed by routespec, of dicts whose keys are the same three arguments passed to add_route (routespec, target, data).

async def get_all_routes(self):
    """Return all routes, keyed by routespec"""
{
  '/proxy/path/': {
    'routespec': '/proxy/path/',
    'target': 'http://...',
    'data': {},
  },
}
Note on activity tracking

JupyterHub can track activity of users, for use in services such as culling idle servers. As of JupyterHub 0.8, this activity tracking is the responsibility of the proxy. If your proxy implementation can track activity to endpoints, it may add a last_activity key to the data of routes retrieved in .get_all_routes(). If present, the value of last_activity should be an ISO8601 UTC date string:

{
  '/user/pgeorgiou/': {
    'routespec': '/user/pgeorgiou/',
    'target': 'http://127.0.0.1:1227',
    'data': {
      'user': 'pgeorgiou',
      'last_activity': '2017-10-03T10:33:49.570Z',
    },
  },
}

If the proxy does not track activity, then only activity to the Hub itself is tracked, and services such as cull-idle will not work.

Now that notebook-5.0 tracks activity internally, we can retrieve activity information from the single-user servers instead, removing the need to track activity in the proxy. But this is not yet implemented in JupyterHub 0.8.0.

Registering custom Proxies via entry points

As of JupyterHub 1.0, custom proxy implementations can register themselves via the jupyterhub.proxies entry point metadata. To do this, in your setup.py add:

setup(
  ...
  entry_points={
    'jupyterhub.proxies': [
        'mything = mypackage:MyProxy',
    ],
  },
)

If you have added this metadata to your package, users can select your proxy with the configuration:

c.JupyterHub.proxy_class = 'mything'

instead of the full

c.JupyterHub.proxy_class = 'mypackage:MyProxy'

previously required. Additionally, configurable attributes for your proxy will appear in jupyterhub help output and auto-generated configuration files via jupyterhub --generate-config.

Index of proxies

A list of the proxies that are currently available for JupyterHub (that we know about).

  1. jupyterhub/configurable-http-proxy The default proxy, which uses node-http-proxy

  2. jupyterhub/traefik-proxy A proxy implementation that configures the Traefik proxy server for JupyterHub

  3. AbdealiJK/configurable-http-proxy A pure Python implementation of configurable-http-proxy

Running proxy separately from the hub
Background

The thing which users directly connect to is the proxy, by default configurable-http-proxy. The proxy either redirects users to the hub (for login and managing servers), or to their own single-user servers. Thus, as long as the proxy stays running, access to existing servers continues, even if the hub itself restarts or goes down.

When you first configure the hub, you may not even realize this, because the proxy is automatically managed by the hub. This is fine for getting started, and even for most use, but every time you restart the hub, all user connections are also interrupted. It’s also simple to run the proxy as a service separate from the hub, so that you are free to reconfigure and restart the hub while only interrupting users whose servers are currently starting.

The default JupyterHub proxy is configurable-http-proxy, and that page has some docs. If you are using a different proxy, such as Traefik, these instructions are probably not relevant to you.

Configuration options

c.JupyterHub.cleanup_servers = False should be set, which tells the hub to not stop servers when the hub restarts (this is useful even if you don’t run the proxy separately).

c.ConfigurableHTTPProxy.should_start = False should be set, which tells the hub that the proxy should not be started (because you start it yourself).

c.ConfigurableHTTPProxy.auth_token should be set to a token for authenticating communication with the proxy (the same token you pass to the proxy via the CONFIGPROXY_AUTH_TOKEN environment variable).

c.ConfigurableHTTPProxy.api_url = 'http://localhost:8001' should be set to the URL which the hub uses to connect to the proxy’s API.
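
Putting these options together, a minimal jupyterhub_config.py sketch might look like the following. The localhost addresses match the example proxy command line in the next section, and reading the token from the environment assumes the Hub process also has CONFIGPROXY_AUTH_TOKEN set.

import os

# don't stop servers or the proxy when the Hub restarts; the proxy runs as its own service
c.JupyterHub.cleanup_servers = False
c.ConfigurableHTTPProxy.should_start = False
# the same token that is passed to the proxy via CONFIGPROXY_AUTH_TOKEN
c.ConfigurableHTTPProxy.auth_token = os.environ["CONFIGPROXY_AUTH_TOKEN"]
c.ConfigurableHTTPProxy.api_url = 'http://localhost:8001'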

Proxy configuration

You need to configure a service to start the proxy. An example command line for this is configurable-http-proxy --ip=127.0.0.1 --port=8000 --api-ip=127.0.0.1 --api-port=8001 --default-target=http://localhost:8081 --error-target=http://localhost:8081/hub/error. (Details of how to do this are out of scope for this tutorial; for example, it might be a systemd service or run within another Docker container.) The proxy has no configuration files; all configuration is via the command line and environment variables.

--api-ip and --api-port (which tell the proxy where its API should listen) should match the hub’s ConfigurableHTTPProxy.api_url.

--ip, --port, and other options configure the user connections to the proxy.

--default-target and --error-target should point to the hub, and are used when users first navigate to the proxy.

You must define the environment variable CONFIGPROXY_AUTH_TOKEN to match the token given to c.ConfigurableHTTPProxy.auth_token.

You should check the configurable-http-proxy options to see what other options are needed, for example SSL options. Note that these options are configured in the hub if the hub is starting the proxy; when running the proxy separately, you need to move them to the proxy’s command line.

Docker image

You can use the jupyterhub/configurable-http-proxy Docker image to run the proxy.

Using JupyterHub’s REST API

This section will give you information on:

  • what you can do with the API

  • how to create an API token

  • how to add API tokens to the config files

  • how to make an API request programmatically using the requests library

  • where to learn more about JupyterHub’s API

What you can do with the API

Using the JupyterHub REST API, you can perform actions on the Hub, such as:

  • checking which users are active

  • adding or removing users

  • stopping or starting single user notebook servers

  • authenticating services

  • communicating with an individual Jupyter server’s REST API

A REST API provides a standard way for users to get and send information to the Hub.

Create an API token

To send requests using the JupyterHub API, you must pass an API token with the request.

The preferred way of generating an API token is:

openssl rand -hex 32

This openssl command generates a potential token that can then be added to JupyterHub using the api_tokens configuration setting in jupyterhub_config.py.
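
For example, a sketch of registering the generated token for a user in jupyterhub_config.py (replace the placeholders with your token and username):

c.JupyterHub.api_tokens = {
    '<generated-token>': '<username>',
}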

Alternatively, use the jupyterhub token command to generate a token for a specific hub user by passing the ‘username’:

jupyterhub token <username>

This command generates a random string to use as a token and registers it for the given user with the Hub’s database.

In version 0.8.0, a token request page for generating an API token is available from the JupyterHub user interface:

Request API token page

API token success page

Assigning permissions to a token

Prior to JupyterHub 2.0, there were two levels of permissions:

  1. user, and

  2. admin

where a token would always have full permissions to do whatever its owner could do.

In JupyterHub 2.0, specific permissions are now defined as ‘scopes’, and can be assigned both at the user/service level, and at the individual token level.

This allows e.g. a user with full admin permissions to request a token with limited permissions.
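
For example, a token with limited permissions can be requested via the tokens API. The sketch below is illustrative: the note text and the read:users scope are placeholders, and the request itself must be authorized with an existing token.

import requests

api_url = 'http://127.0.0.1:8081/hub/api'
token = '<an-existing-api-token>'

r = requests.post(
    api_url + '/users/<username>/tokens',
    headers={'Authorization': f'token {token}'},
    json={
        'note': 'limited automation token',
        'scopes': ['read:users'],  # only allow reading user models
    },
)
r.raise_for_status()
new_token = r.json()['token']  # the token value is only returned at creation time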

Updating to admin services

The api_tokens configuration has been softly deprecated since the introduction of services. We have no plans to remove it, but deployments are encouraged to use service configuration instead.

If you have been using api_tokens to create an admin user and a token for that user to perform some automations, the services mechanism may be a better fit. If you have the following configuration:

c.JupyterHub.admin_users = {"service-admin",}
c.JupyterHub.api_tokens = {
    "secret-token": "service-admin",
}

This can be updated to create a service, with the following configuration:

c.JupyterHub.services = [
    {
        # give the token a name
        "name": "service-admin",
        "api_token": "secret-token",
        # "admin": True, # if using JupyterHub 1.x
    },
]

# roles are new in JupyterHub 2.0
# prior to 2.0, only 'admin': True or False
# was available

c.JupyterHub.load_roles = [
    {
        "name": "service-role",
        "scopes": [
            # specify the permissions the token should have
            "admin:users",
        ],
        "services": [
            # assign the service the above permissions
            "service-admin",
        ],
    }
]

The token will have the permissions listed in the role (see the scopes reference for a list of available permissions), but there will no longer be a user account created to house it. The main noticeable difference is that there will be no notebook server associated with the account, and the service will not show up in the various user list pages and APIs.

Make an API request

To authenticate your requests, pass the API token in the request’s Authorization header.

Use requests

Using the popular Python requests library, here’s example code to make an API request for the users of a JupyterHub deployment. An API GET request is made, and the request sends an API token for authorization. The response contains information about the users:

import requests

api_url = 'http://127.0.0.1:8081/hub/api'
token = '<your-api-token>'  # the API token created above

r = requests.get(api_url + '/users',
    headers={
        'Authorization': f'token {token}',
    }
)

r.raise_for_status()
users = r.json()

This example provides a slightly more complicated request, yet the process is very similar:

import requests

api_url = 'http://127.0.0.1:8081/hub/api'

data = {'name': 'mygroup', 'users': ['user1', 'user2']}

r = requests.post(api_url + '/groups/formgrade-data301/users',
    headers={
        'Authorization': f'token {token}',
    },
    json=data,
)
r.raise_for_status()
r.json()

The same API token can also authorize access to the Jupyter Notebook REST API provided by notebook servers managed by JupyterHub, if it has the necessary access:users:servers scope.
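
A minimal sketch, assuming the public proxy listens on http://127.0.0.1:8000 (the default) and the target user is named myname; /api/status is a standard Jupyter server endpoint:

import requests

token = '<api-token-with-access-to-the-server>'

r = requests.get(
    'http://127.0.0.1:8000/user/myname/api/status',
    headers={'Authorization': f'token {token}'},
)
r.raise_for_status()
print(r.json())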

Paginating API requests

New in version 2.0.

Pagination is available through the offset and limit query parameters on list endpoints, which can be used to return ideally sized windows of results. Here’s example code demonstrating pagination on the GET /users endpoint to fetch the first 20 records.

import requests

api_url = 'http://127.0.0.1:8081/hub/api'
token = '<your-api-token>'  # the API token created above

r = requests.get(
    api_url + '/users?offset=0&limit=20',
    headers={
        "Accept": "application/jupyterhub-pagination+json",
        "Authorization": f"token {token}",
    },
)
r.raise_for_status()
r.json()

For backward-compatibility, the default structure of list responses is unchanged. However, this lacks pagination information (e.g. is there a next page), so if you have enough users that they won’t fit in the first response, it is a good idea to opt-in to the new paginated list format. There is a new schema for list responses which include pagination information. You can request this by including the header:

Accept: application/jupyterhub-pagination+json

with your request, in which case a response will look like:

{
  "items": [
    {
      "name": "username",
      "kind": "user",
      ...
    },
  ],
  "_pagination": {
    "offset": 0,
    "limit": 20,
    "total": 50,
    "next": {
      "offset": 20,
      "limit": 20,
      "url": "http://127.0.0.1:8081/hub/api/users?limit=20&offset=20"
    }
  }
}

where the list results (same as pre-2.0) will be in items, and pagination info will be in _pagination. The next field will include the offset, limit, and URL for requesting the next page. next will be null if there is no next page.

Pagination is governed by two configuration options:

  • JupyterHub.api_page_default_limit - the page size, if limit is unspecified in the request and the new pagination API is requested (default: 50)

  • JupyterHub.api_page_max_limit - the maximum page size a request can ask for (default: 200)

Pagination is enabled on the GET /users, GET /groups, and GET /proxy REST endpoints.
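
To retrieve every page, follow the next links until they are null. A sketch using the paginated response format described above:

import requests


def fetch_all_users(api_url, token):
    """Collect all user models by following pagination 'next' links."""
    headers = {
        'Accept': 'application/jupyterhub-pagination+json',
        'Authorization': f'token {token}',
    }
    users = []
    url = api_url + '/users?limit=20'
    while url:
        r = requests.get(url, headers=headers)
        r.raise_for_status()
        page = r.json()
        users.extend(page['items'])
        next_page = page['_pagination']['next']
        url = next_page['url'] if next_page else None
    return users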

Enabling users to spawn multiple named-servers via the API

With JupyterHub version 0.8, support for multiple servers per user has landed. Prior to that, each user could only launch a single default server via the API like this:

curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/server"

With the named-server functionality, it’s now possible to launch more than one specifically named server for a given user. This could be used, for instance, to launch each server based on a different image.

First you must enable named-servers by including the following setting in the jupyterhub_config.py file.

c.JupyterHub.allow_named_servers = True

If using the zero-to-jupyterhub-k8s set-up to run JupyterHub, then instead of editing the jupyterhub_config.py file directly, you could pass the following as part of the config.yaml file, as per the tutorial:

hub:
  extraConfig: |
    c.JupyterHub.allow_named_servers = True

With that setting in place, a new named-server is activated like this:

curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverA>"
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverB>"

The same servers can be stopped by substituting DELETE for POST above.

Some caveats for using named-servers

For named-servers via the API to work, the spawner used to launch these servers needs to be able to handle multiple servers per user and ensure uniqueness of names, particularly if servers are spawned via Docker containers or Kubernetes pods.

Learn more about the API

You can see the full JupyterHub REST API for details.

JupyterHub REST API

Below is an interactive view of JupyterHub’s OpenAPI specification.

Starting servers with the JupyterHub API

JupyterHub’s REST API allows launching servers on behalf of users without ever interacting with the JupyterHub UI. This lets you build services that launch Jupyter-based applications for users without relying on the JupyterHub UI at all, enabling a variety of user/launch/lifecycle patterns not natively supported by JupyterHub, without needing to develop all the server management features of JupyterHub Spawners and/or Authenticators. BinderHub is an example of such an application.

This document provides an example of working with the JupyterHub API to manage servers for users. In particular, we will cover how to:

  1. check status of servers

  2. start servers

  3. wait for servers to be ready

  4. communicate with servers

  5. stop servers

Checking server status

Requesting information about a user returns a model that includes a servers field, which is a dictionary.

GET /hub/api/users/:username

Required scope: read:servers

{
  "admin": false,
  "groups": [],
  "pending": null,
  "server": null,
  "name": "test-1",
  "kind": "user",
  "last_activity": "2021-08-03T18:12:46.026411Z",
  "created": "2021-08-03T18:09:59.767600Z",
  "roles": ["user"],
  "servers": {}
}

If the servers dict is empty, the user has no running servers. The keys of the servers dict are server names as strings. Many JupyterHub deployments only use the ‘default’ server, which has the empty string '' for a name. In this case, the servers dict will always have either zero or one elements.

This is the servers dict when the user’s default server is fully running and ready:

  "servers": {
    "": {
      "name": "",
      "last_activity": "2021-08-03T18:48:35.934000Z",
      "started": "2021-08-03T18:48:29.093885Z",
      "pending": null,
      "ready": true,
      "url": "/user/test-1/",
      "user_options": {},
      "progress_url": "/hub/api/users/test-1/server/progress"
    }
  }

Key properties of a server:

name

the server’s name. Always the same as the key in servers

ready

boolean. If true, the server can be expected to respond to requests at url.

pending

null or a string indicating a transitional state (such as start or stop). Will always be null if ready is true, and will always be a string if ready is false.

url

The server’s url (just the path, e.g. /user/:name/:servername/) where the server can be accessed if ready is true.

progress_url

The API url path (starting with /hub/api) where the progress API can be used to wait for the server to be ready. See below for more details on the progress API.

last_activity

ISO8601 timestamp indicating when activity was last observed on the server

started

ISO8601 timestamp indicating when the server was last started

We’ve seen the servers model with no servers and with one ready server. Here is what it looks like immediately after requesting a server launch, while the server is not ready yet:

  "servers": {
    "": {
      "name": "",
      "last_activity": "2021-08-03T18:48:29.093885Z",
      "started": "2021-08-03T18:48:29.093885Z",
      "pending": "spawn",
      "ready": false,
      "url": "/user/test-1/",
      "user_options": {},
      "progress_url": "/hub/api/users/test-1/server/progress"
    }
  }

Note that ready is false and pending is spawn. This means that the server is not ready (attempting to access it may not work) because it isn’t finished spawning yet. We’ll get more into that below in waiting for a server.

Starting servers

To start a server, make the request

POST /hub/api/users/:username/servers/[:servername]

Required scope: servers

(omit servername for the default server)

Assuming the request was valid, there are two possible responses:

201 Created

This status code means the launch completed and the server is ready. It should be available at the server’s URL immediately.

202 Accepted

This is the more likely response, and means that the server has begun launching, but isn’t immediately ready. The server has pending: 'spawn' at this point.

Aside: how quickly JupyterHub responds with 202 Accepted is governed by the slow_spawn_timeout tornado setting.

Waiting for a server

If you are starting a server via the API, there’s a good chance you want to know when it’s ready. There are two ways to do this:

  1. Polling the server model

  2. the progress API

Polling the server model

The simplest way to check if a server is ready is to request the user model.

If:

  1. the server name is in the user’s servers model, and

  2. servers['servername']['ready'] is true

then the server is ready and can be accessed at its url.

A Python example, checking if a server is ready:

import requests


def server_ready(hub_url, user, token, server_name=""):
    r = requests.get(
        f"{hub_url}/hub/api/users/{user}/servers/{server_name}",
        headers={"Authorization": f"token {token}"},
    )
    r.raise_for_status()
    user_model = r.json()
    servers = user_model.get("servers", {})
    if server_name not in servers:
        return False

    server = servers[server_name]
    if server['ready']:
        print(f"Server {user}/{server_name} ready at {server['url']}")
        return True
    else:
        print(f"Server {user}/{server_name} not ready, pending {server['pending']}")
        return False

You can keep making this check until ready is true.

Progress API

The most efficient way to wait for a server to start is the progress API.

The progress URL is available in the server model under progress_url, and has the form /hub/api/users/:user/servers/:servername/progress.

The default server’s progress can be accessed at :user/servers//progress or :user/server/progress.

GET /hub/api/users/:user/servers/:servername/progress

Required scope: read:servers

This is an EventStream API. In an event stream, messages are streamed and delivered on lines of the form:

data: {"progress": 10, "message": "...", ...}

where the part of the line after data: contains a JSON-serialized dictionary. Lines that do not start with data: should be ignored.

progress events have the form:

{
    "progress": 0-100,
    "message": "",
    "ready": True, # or False
}
progress

integer, 0-100

message

string message describing progress stages

ready

present and true only for the last event when the server is ready

url

only present if ready is true; will be the server’s url

The progress API can be used even with fully ready servers. If the server is ready, there will only be one event, which looks like:

{
  "progress": 100,
  "ready": true,
  "message": "Server ready at /user/test-1/",
  "html_message": "Server ready at <a href=\"/user/test-1/\">/user/test-1/</a>",
  "url": "/user/test-1/"
}

where ready and url are the same as in the server model (ready will always be true).

A typical complete stream from the event-stream API:


data: {"progress": 0, "message": "Server requested"}

data: {"progress": 50, "message": "Spawning server..."}

data: {"progress": 100, "ready": true, "message": "Server ready at /user/test-user/", "html_message": "Server ready at <a href=\"/user/test-user/\">/user/test-user/</a>", "url": "/user/test-user/"}

Here is a Python example for consuming an event stream:

import json


def event_stream(session, url):
    """Generator yielding events from a JSON event stream

    For use with the server progress API
    """
    r = session.get(url, stream=True)
    r.raise_for_status()
    for line in r.iter_lines():
        line = line.decode('utf8', 'replace')
        # event lines all start with `data:`
        # all other lines should be ignored (they will be empty)
        if line.startswith('data:'):
            yield json.loads(line.split(':', 1)[1])
Stopping servers

Servers can be stopped with a DELETE request:

DELETE /hub/api/users/:user/servers/[:servername]

Required scope: servers

Like start, delete may not complete immediately. The DELETE request has two possible response codes:

204 Deleted

This status code means the delete completed and the server is fully stopped. It will now be absent from the user servers model.

202 Accepted

Like start, 202 means your request was accepted, but is not yet complete. The server has pending: 'stop' at this point.

Unlike start, there is no progress API for stop. To wait for stop to finish, you must poll the user model and wait for the server to disappear from the user servers model.

def stop_server(session, hub_url, user, server_name=""):
    """Stop a server via the JupyterHub API

    Returns when the server has finished stopping
    """
    # step 1: get user status
    user_url = f"{hub_url}/hub/api/users/{user}"
    server_url = f"{user_url}/servers/{server_name}"
    log_name = f"{user}/{server_name}".rstrip("/")

    log.info(f"Stopping server {log_name}")
    r = session.delete(server_url)
    if r.status_code == 404:
        log.info(f"Server {log_name} already stopped")

    r.raise_for_status()
    if r.status_code == 204:
        log.info(f"Server {log_name} stopped")
        return

    # else: 202, stop requested, but not complete
    # wait for stop to finish
    log.info(f"Server {log_name} stopping...")

    # wait for server to be done stopping
    while True:
        r = session.get(user_url)
        r.raise_for_status()
        user_model = r.json()
        if server_name not in user_model.get("servers", {}):
            log.info(f"Server {log_name} stopped")
            return
        server = user_model["servers"][server_name]
        if not server['pending']:
            raise ValueError(f"Waiting for {log_name}, but no longer pending.")
        log.info(f"Server {log_name} pending: {server['pending']}")
        # wait to poll again
        time.sleep(1)
Communicating with servers

JupyterHub tokens with the access:servers scope can be used to communicate with servers themselves. This can be the same token you used to launch your service.

Note

Access scopes are new in JupyterHub 2.0. To access servers in JupyterHub 1.x, a token must be owned by the same user as the server, or be an admin token if admin_access is enabled.

The URL returned in a server model is a URL path suffix, e.g. /user/:name/, to append to the JupyterHub base URL.

For instance, {hub_url}{server_url}, where hub_url would be e.g. http://127.0.0.1:8000 by default, and server_url /user/myname, for a full url of http://127.0.0.1:8000/user/myname.

Python example

The JupyterHub repo includes a complete example in examples/server-api tying all this together.

To summarize the steps:

  1. get user info from /hub/api/users/:name

  2. the server model includes a ready state to tell you if it’s ready

  3. if it’s not ready, you can follow up with progress_url to wait for it

  4. if it is ready, you can use the url field to link directly to the running server

The example demonstrates starting and stopping servers via the JupyterHub API, including waiting for them to start via the progress API, as well as waiting for them to stop via polling the user model.

import json
import logging
import time

# the complete example in the repo also creates a requests.Session passed in as `session`
log = logging.getLogger(__name__)


def event_stream(session, url):
    """Generator yielding events from a JSON event stream

    For use with the server progress API
    """
    r = session.get(url, stream=True)
    r.raise_for_status()
    for line in r.iter_lines():
        line = line.decode('utf8', 'replace')
        # event lines all start with `data:`
        # all other lines should be ignored (they will be empty)
        if line.startswith('data:'):
            yield json.loads(line.split(':', 1)[1])


def start_server(session, hub_url, user, server_name=""):
    """Start a server for a jupyterhub user

    Returns the full URL for accessing the server
    """
    user_url = f"{hub_url}/hub/api/users/{user}"
    log_name = f"{user}/{server_name}".rstrip("/")

    # step 1: get user status
    r = session.get(user_url)
    r.raise_for_status()
    user_model = r.json()

    # if server is not 'active', request launch
    if server_name not in user_model.get('servers', {}):
        log.info(f"Starting server {log_name}")
        r = session.post(f"{user_url}/servers/{server_name}")
        r.raise_for_status()
        if r.status_code == 201:
            log.info(f"Server {log_name} is launched and ready")
        elif r.status_code == 202:
            log.info(f"Server {log_name} is launching...")
        else:
            log.warning(f"Unexpected status: {r.status_code}")
        r = session.get(user_url)
        r.raise_for_status()
        user_model = r.json()

    # report server status
    server = user_model['servers'][server_name]
    if server['pending']:
        status = f"pending {server['pending']}"
    elif server['ready']:
        status = "ready"
    else:
        # shouldn't be possible!
        raise ValueError(f"Unexpected server state: {server}")

    log.info(f"Server {log_name} is {status}")

    # wait for server to be ready using progress API
    progress_url = user_model['servers'][server_name]['progress_url']
    for event in event_stream(session, f"{hub_url}{progress_url}"):
        log.info(f"Progress {event['progress']}%: {event['message']}")
        if event.get("ready"):
            server_url = event['url']
            break
    else:
        # server never ready
        raise ValueError(f"{log_name} never started!")

    # at this point, we know the server is ready and waiting to receive requests
    # return the full URL where the server can be accessed
    return f"{hub_url}{server_url}"


def stop_server(session, hub_url, user, server_name=""):
    """Stop a server via the JupyterHub API

    Returns when the server has finished stopping
    """
    # step 1: get user status
    user_url = f"{hub_url}/hub/api/users/{user}"
    server_url = f"{user_url}/servers/{server_name}"
    log_name = f"{user}/{server_name}".rstrip("/")

    log.info(f"Stopping server {log_name}")
    r = session.delete(server_url)
    if r.status_code == 404:
        log.info(f"Server {log_name} already stopped")

    r.raise_for_status()
    if r.status_code == 204:
        log.info(f"Server {log_name} stopped")
        return

    # else: 202, stop requested, but not complete
    # wait for stop to finish
    log.info(f"Server {log_name} stopping...")

    # wait for server to be done stopping
    while True:
        r = session.get(user_url)
        r.raise_for_status()
        user_model = r.json()
        if server_name not in user_model.get("servers", {}):
            log.info(f"Server {log_name} stopped")
            return
        server = user_model["servers"][server_name]
        if not server['pending']:
            raise ValueError(f"Waiting for {log_name}, but no longer pending.")
        log.info(f"Server {log_name} pending: {server['pending']}")
        # wait to poll again
        time.sleep(1)


Monitoring

This section covers details on monitoring the state of your JupyterHub installation.

JupyterHub exposes the /metrics endpoint, which returns text describing its current operational state in a format that Prometheus understands.

Prometheus is a separate open source tool that can be configured to repeatedly poll JupyterHub’s /metrics endpoint to parse and save its current state.

By doing so, Prometheus can describe JupyterHub’s evolving state over time. This state can then be queried from Prometheus, which exposes its underlying storage to those allowed to access it, and presented in dashboards by a tool like Grafana.
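
You can also fetch the endpoint yourself to see what Prometheus would scrape. A sketch, assuming the Hub listens on 127.0.0.1:8081 as in the API examples above; depending on configuration, the endpoint may require an authenticated request:

import requests

r = requests.get('http://127.0.0.1:8081/hub/metrics')
r.raise_for_status()
print(r.text)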

List of Prometheus Metrics

  • jupyterhub_check_routes_duration_seconds (histogram): time taken to validate all routes in the proxy

  • jupyterhub_hub_startup_duration_seconds (histogram): time taken for the Hub to start

  • jupyterhub_init_spawners_duration_seconds (histogram): time taken for spawners to initialize

  • jupyterhub_proxy_add_duration_seconds (histogram): duration for adding user routes to the proxy

  • jupyterhub_proxy_delete_duration_seconds (histogram): duration for deleting user routes from the proxy

  • jupyterhub_proxy_poll_duration_seconds (histogram): duration for polling all routes from the proxy

  • jupyterhub_request_duration_seconds (histogram): request duration for all HTTP requests

  • jupyterhub_running_servers (gauge): the number of user servers currently running

  • jupyterhub_server_poll_duration_seconds (histogram): time taken to poll whether a server is running

  • jupyterhub_server_spawn_duration_seconds (histogram): time taken for the server spawning operation

  • jupyterhub_server_stop_seconds (histogram): time taken for the server stopping operation

  • jupyterhub_total_users (gauge): total number of users

The Hub’s Database

JupyterHub uses a database to store information about users, services, and other data needed for operating the Hub.

Default SQLite database

The default database for JupyterHub is a SQLite database. We have chosen SQLite as JupyterHub’s default for its lightweight simplicity in certain uses such as testing, small deployments and workshops.

For production systems, SQLite has some disadvantages when used with JupyterHub:

  • upgrade-db may not work, and you may need to start with a fresh database

  • downgrade-db will not work if you want to rollback to an earlier version, so backup the jupyterhub.sqlite file before upgrading

The sqlite documentation provides a helpful page about when to use SQLite and where traditional RDBMS may be a better choice.

Using an RDBMS (PostgreSQL, MySQL)

When running a long term deployment or a production system, we recommend using a traditional RDBMS database, such as PostgreSQL or MySQL, that supports the SQL ALTER TABLE statement.

Notes and Tips
SQLite

The SQLite database should not be used on NFS. SQLite uses reader/writer locks to control access to the database, and this locking mechanism might not work correctly if the database file is kept on an NFS filesystem, because fcntl() file locking is broken on many NFS implementations. Therefore, avoid putting SQLite database files on NFS; it does not cope well with multiple processes trying to access the file at the same time.

PostgreSQL

We recommend using PostgreSQL for production if you are unsure whether to use MySQL or PostgreSQL or if you do not have a strong preference. There is additional configuration required for MySQL that is not needed for PostgreSQL.

MySQL / MariaDB
  • You should use the pymysql sqlalchemy provider (the other one, MySQLdb, isn’t available for py3).

  • You also need to set pool_recycle to some value (typically 60-300), which depends on your MySQL setup. This is necessary since MySQL kills connections server-side if they’ve been idle for a while, and the connection from the hub will be idle for longer than most connections. This behavior will lead to frustrating ‘the connection has gone away’ errors from sqlalchemy if pool_recycle is not set; see the sketch after this list.

  • If you use utf8mb4 collation with MySQL earlier than 5.7.7 or MariaDB earlier than 10.2.1 you may get an 1709, Index column size too large error. To fix this you need to set innodb_large_prefix to enabled and innodb_file_format to Barracuda to allow for the index sizes jupyterhub uses. row_format will be set to DYNAMIC as long as those options are set correctly. Later versions of MariaDB and MySQL should set these values by default, as well as have a default DYNAMIC row_format and pose no trouble to users.
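
A jupyterhub_config.py sketch for MySQL with pymysql and pool_recycle; the host, database name, and credentials are placeholders, and db_kwargs is passed through to SQLAlchemy's create_engine:

c.JupyterHub.db_url = 'mysql+pymysql://<user>:<password>@<host>:3306/jupyterhub'
# recycle idle connections before MySQL closes them server-side
c.JupyterHub.db_kwargs = {'pool_recycle': 300}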

Working with templates and UI

The pages of the JupyterHub application are generated from Jinja templates. These allow the header, for example, to be defined once and incorporated into all pages. By providing your own templates, you can have complete control over JupyterHub’s appearance.

Custom Templates

JupyterHub will look for custom templates in all of the paths in the JupyterHub.template_paths configuration option, falling back on the default templates if no custom template with that name is found. This fallback behavior is new in version 0.9; previous versions searched only those paths explicitly included in template_paths. You may override as many or as few templates as you desire.
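
For example, a sketch pointing JupyterHub at a custom template directory (the path is illustrative); any file there whose name matches a default template, such as page.html, will be used instead of the default:

c.JupyterHub.template_paths = ['/etc/jupyterhub/templates']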

Extending Templates

Jinja provides a mechanism to extend templates. A base template can define a block, and child templates can replace or supplement the material in the block. The JupyterHub templates make extensive use of blocks, which allows you to customize parts of the interface easily.

In general, a child template can extend a base template, page.html, by beginning with:

{% extends "page.html" %}

This works, unless you are trying to extend the default template for the same file name. Starting in version 0.9, you may refer to the base file with a templates/ prefix. Thus, if you are writing a custom page.html, start the file with this block:

{% extends "templates/page.html" %}

By defining blocks with same name as in the base template, child templates can replace those sections with custom content. The content from the base template can be included with the {{ super() }} directive.

Example

To add an additional message to the spawn-pending page, below the existing text about the server starting up, place this content in a file named spawn_pending.html in a directory included in the JupyterHub.template_paths configuration option.

{% extends "templates/spawn_pending.html" %} {% block message %} {{ super() }}
<p>Patience is a virtue.</p>
{% endblock %}
Page Announcements

To add announcements to be displayed on a page, you have two options:

  • Extend the page templates as described above

  • Use configuration variables

Announcement Configuration Variables

If you set the configuration variable JupyterHub.template_vars = {'announcement': 'some_text'}, the given some_text will be placed at the top of all pages. The variables announcement_login, announcement_spawn, announcement_home, and announcement_logout are more specific and only show up on their respective pages (overriding the global announcement variable). Note that changing these variables requires a restart, unlike direct template extension.
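
For example, a sketch setting a global announcement and a login-page-specific one (the messages are placeholders):

c.JupyterHub.template_vars = {
    'announcement': 'Scheduled maintenance this Saturday',
    'announcement_login': 'Log in with your organization account',
}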

You can get the same effect by extending templates, which allows you to update the messages without restarting. Set c.JupyterHub.template_paths as mentioned above, and then create a template (for example, login.html) with:

{% extends "templates/login.html" %} {% set announcement = 'some message' %}

Extending page.html puts the message on all pages, but note that an extension of page.html takes precedence over an extension of a specific page (unlike the variable-based approach above).

Deploying JupyterHub in “API only mode”

As a service for deploying and managing Jupyter servers for users, JupyterHub exposes this functionality primarily via a REST API. For convenience, JupyterHub also ships with a basic web UI built using that REST API. The basic web UI enables users to click a button to quickly start and stop their servers, and it lets admins perform some basic user and server management tasks.

The REST API has always provided additional functionality beyond what is available in the basic web UI. Similarly, we avoid implementing UI functionality that is not also available via the API. With JupyterHub 2.0, the basic web UI will always be composed using the REST API. In other words, no UI pages should rely on information not available via the REST API. Previously, some functionality, such as paginated requests, was only available via the admin UI pages.

Limited UI customization via templates

The JupyterHub UI is customizable via extensible HTML templates, but the scope of what can be customized is limited. Adding content and messages to existing pages is well supported, but changing the page flow or which pages are available is beyond the scope of what can be customized.

Rich UI customization with REST API based apps

Increasingly, JupyterHub is used purely as an API for managing Jupyter servers for other Jupyter-based applications that might want to present a different user experience. If you want a fully customized user experience, you can now disable the Hub UI and use your own pages together with the JupyterHub REST API to build your own web application to serve your users, relying on the Hub only as an API for managing users and servers.

One example of such an application is BinderHub, which powers https://mybinder.org, and motivates many of these changes.

BinderHub is distinct from a traditional JupyterHub deployment because it uses temporary users created for each launch. Instead of presenting a login page, users are presented with a form to specify what environment they would like to launch:

Binder launch form

When a launch is requested:

  1. an image is built, if necessary

  2. a temporary user is created,

  3. a server is launched for that user, and

  4. when running, users are redirected to an already running server with an auth token in the URL

  5. after the session is over, the user is deleted

This means that a lot of JupyterHub’s UI flow doesn’t make sense:

  • there is no way for users to login

  • the human user doesn’t map onto a JupyterHub User in a meaningful way

  • when a server isn’t running, there isn’t a ‘restart your server’ action available because the user has been deleted

  • users do not have any access to any Hub functionality, so presenting pages for those features would be confusing

BinderHub is one of the motivating use cases for JupyterHub supporting being used only via its API. We’ll use BinderHub here as an example of various configuration options.

Disabling Hub UI

c.JupyterHub.hub_routespec is a configuration option to specify which URL prefix should be routed to the Hub. The default is / which means that the Hub will receive all requests not already specified to be routed somewhere else.

There are three values that are most logical for hub_routespec:

  • / - this is the default, and used in most deployments. It is also the only option prior to JupyterHub 1.4.

  • /hub/ - this serves only Hub pages, both UI and API

  • /hub/api - this serves only the Hub API, so all Hub UI is disabled, aside from the OAuth confirmation page, if used.

If you choose a hub routespec other than /, the main JupyterHub feature you will lose is the automatic handling of requests for /user/:username when the requested server is not running.

JupyterHub’s handling of this request shows this page, telling you that the server is not running, with a button to launch it again:

screenshot of hub page for server not running

If you set hub_routespec to something other than /, it is likely that you also want to register another destination for / to handle requests to not-running servers. If you don’t, you will see a default 404 page from the proxy:

screenshot of CHP default 404

For mybinder.org, the default “start my server” page doesn’t make sense, because when a server is gone, there is no restart action. Instead, we provide hints about how to get back to a link to start a new server:

screenshot of mybinder.org 404

To achieve this, mybinder.org registers a route for / that points to a custom endpoint running nginx, which serves only this static HTML error page. This is set with:

c.Proxy.extra_routes = {
    "/": "http://custom-404-endpoint/",
}

You may want to use an alternate behavior, such as redirecting to a landing page, or taking some other action based on the requested page.

If you use c.JupyterHub.hub_routespec = "/hub/", then all the Hub pages will be available, and only this default-page-404 issue will come up.

If you use c.JupyterHub.hub_routespec = "/hub/api/", then only the Hub API will be available, and all UI will be up to you. mybinder.org takes this last option, because none of the Hub UI pages really make sense. Binder users don’t have any reason to know or care that JupyterHub happens to be an implementation detail of how their environment is managed. Seeing Hub error pages and messages in that situation is more likely to be confusing than helpful.

New in version 1.4: c.JupyterHub.hub_routespec and c.Proxy.extra_routes are new in JupyterHub 1.4.

Eventlogging and Telemetry

JupyterHub can be configured to record structured events from a running server using Jupyter’s Telemetry System. The types of events that JupyterHub emits are defined by JSON schemas listed at the bottom of this page.

How to emit events

Event logging is handled by the EventLog object, which leverages Python’s standard logging library to emit, filter, and collect event data.

To begin recording events, you’ll need to set two configurations:

  1. handlers: tells the EventLog where to route your events. This trait is a list of Python logging handlers that route events to their destinations.

  2. allowed_schemas: tells the EventLog which events should be recorded. No events are emitted by default; all recorded events must be listed here.

Here’s a basic example:

import logging

c.EventLog.handlers = [
    logging.FileHandler('event.log'),
]

c.EventLog.allowed_schemas = [
    'hub.jupyter.org/server-action'
]

The output is a file, "event.log", with events recorded as JSON data.

Event schemas
JupyterHub server events

hub.jupyter.org/server-action

Record actions on user servers made via JupyterHub.

JupyterHub can perform various actions on user servers via direct interaction from users, or via the API. This event is recorded whenever either of those happen.

Limitations:

  1. This does not record all server starts / stops, only those explicitly performed by JupyterHub. For example, a user’s server can go down because the node it was running on dies. That will not cause an event to be recorded, since it was not initiated by JupyterHub. In practice this happens often, so this is not a complete record.

  2. Events are only recorded when an action succeeds.

type: object

properties:

  • action

Action performed by JupyterHub. This is a required field.

Possible values (enum):

  1. start: a user’s server was successfully started

  2. stop: a user’s server was successfully stopped

  • username

Name of the user whose server this action was performed on.

This is the normalized name used by JupyterHub itself, which is derived from the authentication provider used but might not be the same as the name used in the authentication provider.

type: string

  • servername

Name of the server this action was performed on.

JupyterHub supports each user having multiple servers with arbitrary names, and this field specifies the name of the server.

The ‘default’ server is denoted by the empty string.

type: string

Configuring user environments

Deploying JupyterHub means you are providing Jupyter notebook environments for multiple users. Often, this includes a desire to configure the user environment in some way.

Since the jupyterhub-singleuser server extends the standard Jupyter notebook server, most configuration and documentation that applies to Jupyter Notebook applies to the single-user environments. Configuration of user environments typically does not occur through JupyterHub itself, but rather through system-wide configuration of Jupyter, which is inherited by jupyterhub-singleuser.

Tip: When searching for configuration tips for JupyterHub user environments, try removing JupyterHub from your search because there are a lot more people out there configuring Jupyter than JupyterHub and the configuration is the same.

This section will focus on user environments, including:

  • Installing packages

  • Configuring Jupyter and IPython

  • Installing kernelspecs

  • Using containers vs. multi-user hosts

Installing packages

To make packages available to users, you generally will install packages system-wide or in a shared environment.

This installation location should always be in the same environment that jupyterhub-singleuser itself is installed in, and must be readable and executable by your users. If you want users to be able to install additional packages, it must also be writable by your users.

If you are using a standard system Python install, you would use:

sudo python3 -m pip install numpy

to install the numpy package in the default system Python 3 environment (typically /usr/local).

You may also use conda to install packages. If you do, you should make sure that the conda environment has appropriate permissions for users to be able to run Python code in the env.

Configuring Jupyter and IPython

Jupyter and IPython have their own configuration systems.

As a JupyterHub administrator, you will typically want to install and configure environments for all JupyterHub users. For example, you may wish for each student in a class to have the same user environment configuration.

Jupyter and IPython support “system-wide” locations for configuration, which is the logical place to put global configuration that you want to affect all users. It’s generally more efficient to configure user environments “system-wide”, and it’s a good idea to avoid creating files in users’ home directories.

The typical locations for these config files are:

  • system-wide in /etc/{jupyter|ipython}

  • env-wide (environment wide) in {sys.prefix}/etc/{jupyter|ipython}.

Example: Enable an extension system-wide

For example, to enable the cython IPython extension for all of your users, create the file /etc/ipython/ipython_config.py:

c.InteractiveShellApp.extensions.append("cython")
Example: Enable a Jupyter notebook configuration setting for all users

Note

These examples configure the Jupyter ServerApp, which is used by JupyterLab, the default in JupyterHub 2.0.

If you are using the classic Jupyter Notebook server, the same things should work, with the following substitutions:

  • Where you see jupyter_server_config, use jupyter_notebook_config

  • Where you see ServerApp, use NotebookApp

To enable Jupyter notebook’s internal idle-shutdown behavior (requires notebook ≥ 5.4), set the following in the /etc/jupyter/jupyter_server_config.py file:

# shutdown the server after no activity for an hour
c.ServerApp.shutdown_no_activity_timeout = 60 * 60
# shutdown kernels after no activity for 20 minutes
c.MappingKernelManager.cull_idle_timeout = 20 * 60
# check for idle kernels every two minutes
c.MappingKernelManager.cull_interval = 2 * 60
Installing kernelspecs

You may have multiple Jupyter kernels installed and want to make sure that they are available to all of your users. This means installing kernelspecs either system-wide (e.g. in /usr/local/) or in the sys.prefix of JupyterHub itself.

Jupyter kernelspec installation is system wide by default, but some kernels may default to installing kernelspecs in your home directory. These will need to be moved system-wide to ensure that they are accessible.

You can see where your kernelspecs are with:

jupyter kernelspec list
Example: Installing kernels system-wide

Assuming I have a Python 2 and Python 3 environment that I want to make sure are available, I can install their specs system-wide (in /usr/local) with:

/path/to/python3 -m ipykernel install --prefix=/usr/local
/path/to/python2 -m ipykernel install --prefix=/usr/local
Multi-user hosts vs. Containers

There are two broad categories of user environments that depend on what Spawner you choose:

  • Multi-user hosts (shared system)

  • Container-based

How you configure user environments for each category can differ a bit depending on what Spawner you are using.

The first category is a shared system (multi-user host) where each user has a JupyterHub account and a home directory as well as being a real system user. In this example, shared configuration and installation must be in a ‘system-wide’ location, such as /etc/ or /usr/local or a custom prefix such as /opt/conda.

When JupyterHub uses container-based Spawners (e.g. KubeSpawner or DockerSpawner), the ‘system-wide’ environment is really the container image which you are using for users.

In both cases, you want to avoid putting configuration in user home directories because users can change those configuration settings. Also, home directories typically persist once they are created, so they are difficult for admins to update later.

Named servers

By default, in a JupyterHub deployment each user has exactly one server.

JupyterHub can, however, have multiple servers per user. This is most useful in deployments where users can configure the environment in which their server will start (e.g. resource requests on an HPC cluster), so that a given user can have multiple configurations running at the same time, without having to stop and restart their one server.

To allow named servers:

c.JupyterHub.allow_named_servers = True

Named servers were implemented in the REST API in JupyterHub 0.8, and JupyterHub 1.0 introduces UI for managing named servers via the user home page:

named servers on the home page

as well as the admin page:

named servers on the admin page

Named servers can be accessed, created, started, stopped, and deleted from these pages. Activity tracking is now per-server as well.

The number of named servers per user can be limited by setting

c.JupyterHub.named_server_limit_per_user = 5
Switching back to classic notebook

By default the single-user server launches JupyterLab, which is based on Jupyter Server. This is the default server when running JupyterHub ≥ 2.0. You can switch to using the legacy Jupyter Notebook server by setting the JUPYTERHUB_SINGLEUSER_APP environment variable (in the single-user environment) to:

export JUPYTERHUB_SINGLEUSER_APP='notebook.notebookapp.NotebookApp'

Changed in version 2.0: JupyterLab, which is based on Jupyter Server, is now the default single-user UI when available, replacing the legacy Jupyter Notebook server. JupyterHub prior to 2.0 launched the legacy notebook server (jupyter notebook), and JupyterLab could be selected by specifying

# jupyterhub_config.py
c.Spawner.cmd = ["jupyter-labhub"]

or for an otherwise customized Jupyter Server app, set the environment variable:

export JUPYTERHUB_SINGLEUSER_APP='jupyter_server.serverapp.ServerApp'
Configuration examples

The following sections provide examples, including configuration files and tips, for the following:

  • Configuring GitHub OAuth

  • Using reverse proxy (nginx and Apache)

  • Run JupyterHub without root privileges using sudo

Configure GitHub OAuth

In this example, we show a configuration file for a fairly standard JupyterHub deployment with the following assumptions:

  • Running JupyterHub on a single cloud server

  • Using SSL on the standard HTTPS port 443

  • Using GitHub OAuth (using oauthenticator) for login

  • Using the default spawner (to configure other spawners, uncomment and edit spawner_class as well as follow the instructions for your desired spawner)

  • Users exist locally on the server

  • Users’ notebooks to be served from ~/assignments to allow users to browse for notebooks within other users’ home directories

  • You want the landing page for each user to be a Welcome.ipynb notebook in their assignments directory.

  • All runtime files are put into /srv/jupyterhub and log files in /var/log.

The jupyterhub_config.py file would have these settings:

# jupyterhub_config.py file
c = get_config()

import os
pjoin = os.path.join

runtime_dir = os.path.join('/srv/jupyterhub')
ssl_dir = pjoin(runtime_dir, 'ssl')
if not os.path.exists(ssl_dir):
    os.makedirs(ssl_dir)

# Allows multiple single-server per user
c.JupyterHub.allow_named_servers = True

# https on :443
c.JupyterHub.port = 443
c.JupyterHub.ssl_key = pjoin(ssl_dir, 'ssl.key')
c.JupyterHub.ssl_cert = pjoin(ssl_dir, 'ssl.cert')

# put the JupyterHub cookie secret and state db
# in /var/run/jupyterhub
c.JupyterHub.cookie_secret_file = pjoin(runtime_dir, 'cookie_secret')
c.JupyterHub.db_url = pjoin(runtime_dir, 'jupyterhub.sqlite')
# or `--db=/path/to/jupyterhub.sqlite` on the command-line

# use GitHub OAuthenticator for local users
c.JupyterHub.authenticator_class = 'oauthenticator.LocalGitHubOAuthenticator'
c.GitHubOAuthenticator.oauth_callback_url = os.environ['OAUTH_CALLBACK_URL']

# create system users that don't exist yet
c.LocalAuthenticator.create_system_users = True

# specify users and admin
c.Authenticator.allowed_users = {'rgbkrk', 'minrk', 'jhamrick'}
c.Authenticator.admin_users = {'jhamrick', 'rgbkrk'}

# uses the default spawner
# To use a different spawner, uncomment `spawner_class` and set to desired
# spawner (e.g. SudoSpawner). Follow instructions for desired spawner
# configuration.
# c.JupyterHub.spawner_class = 'sudospawner.SudoSpawner'

# start single-user notebook servers in ~/assignments,
# with ~/assignments/Welcome.ipynb as the default landing page
# this config could also be put in
# /etc/jupyter/jupyter_notebook_config.py
c.Spawner.notebook_dir = '~/assignments'
c.Spawner.args = ['--NotebookApp.default_url=/notebooks/Welcome.ipynb']

Using the GitHub Authenticator requires a few additional environment variables to be set prior to launching JupyterHub:

export GITHUB_CLIENT_ID=github_id
export GITHUB_CLIENT_SECRET=github_secret
export OAUTH_CALLBACK_URL=https://example.com/hub/oauth_callback
export CONFIGPROXY_AUTH_TOKEN=super-secret
# append log output to log file /var/log/jupyterhub.log
jupyterhub -f /etc/jupyterhub/jupyterhub_config.py &>> /var/log/jupyterhub.log
Using a reverse proxy

In the following example, we show configuration files for a JupyterHub server running locally on port 8000 but accessible from the outside on the standard SSL port 443. This could be useful if the JupyterHub server machine is also hosting other domains or content on 443. The goal in this example is to satisfy the following:

  • JupyterHub is running on a server, accessed only via HUB.DOMAIN.TLD:443

  • On the same machine, NO_HUB.DOMAIN.TLD strictly serves different content, also on port 443

  • nginx or apache is used as the public access point (which means that only nginx/apache will bind to 443)

  • After testing, the server in question should be able to score at least an A on the Qualys SSL Labs SSL Server Test

Let’s start out with needed JupyterHub configuration in jupyterhub_config.py:

# Force the proxy to only listen to connections to 127.0.0.1 (on port 8000)
c.JupyterHub.bind_url = 'http://127.0.0.1:8000'

(For JupyterHub < 0.9, use c.JupyterHub.ip = '127.0.0.1'.)

For high-quality SSL configuration, we also generate Diffie-Hellman parameters. This can take a few minutes:

openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096
nginx

This nginx config file is fairly standard fare, except for the two location blocks within the main section for HUB.DOMAIN.TLD. To create a new site for jupyterhub in your nginx config, make a new file in sites-enabled, e.g. /etc/nginx/sites-enabled/jupyterhub.conf:

# top-level http config for websocket headers
# If Upgrade is defined, Connection = upgrade
# If Upgrade is empty, Connection = close
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# HTTP server to redirect all 80 traffic to SSL/HTTPS
server {
    listen 80;
    server_name HUB.DOMAIN.TLD;

    # Tell all requests to port 80 to be 302 redirected to HTTPS
    return 302 https://$host$request_uri;
}

# HTTPS server to handle JupyterHub
server {
    listen 443;
    ssl on;

    server_name HUB.DOMAIN.TLD;

    ssl_certificate /etc/letsencrypt/live/HUB.DOMAIN.TLD/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/HUB.DOMAIN.TLD/privkey.pem;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security max-age=15768000;

    # Managing literal requests to the JupyterHub front end
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # websocket headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Scheme $scheme;

        proxy_buffering off;
    }

    # Managing requests to verify letsencrypt host
    location ~ /.well-known {
        allow all;
    }
}

If nginx is not running on port 443, substitute $http_host for $host on the lines setting the Host header.

nginx will now be the front-facing element of JupyterHub on 443, which means it is also free to bind other servers, like NO_HUB.DOMAIN.TLD, to the same port on the same machine and network interface. In fact, one can use the same server blocks as above for NO_HUB and add a line for the root directory of the site as well as the applicable location block:

server {
    listen 80;
    server_name NO_HUB.DOMAIN.TLD;

    # Tell all requests to port 80 to be 302 redirected to HTTPS
    return 302 https://$host$request_uri;
}

server {
    listen 443 ssl;

    # INSERT OTHER SSL PARAMETERS HERE AS ABOVE
    # SSL cert may differ

    # Set the appropriate root directory
    root /var/www/html;

    # Set URI handling
    location / {
        try_files $uri $uri/ =404;
    }

    # Managing requests to verify letsencrypt host
    location ~ /.well-known {
        allow all;
    }

}

Now restart nginx, restart JupyterHub, and enjoy accessing https://HUB.DOMAIN.TLD while serving other content securely on https://NO_HUB.DOMAIN.TLD.

SELinux permissions for nginx

On distributions with SELinux enabled (e.g. Fedora), one may encounter permission errors when the nginx service is started.

We need to allow nginx to perform network relay and connect to the jupyterhub port. The following commands do that:

semanage port -a -t http_port_t -p tcp 8000
setsebool -P httpd_can_network_relay 1
setsebool -P httpd_can_network_connect 1

Replace 8000 with the port the JupyterHub server is running on.

Apache

As with nginx above, you can use Apache as the reverse proxy. First, enable the Apache modules that will be needed:

a2enmod ssl rewrite proxy headers proxy_http proxy_wstunnel

Our Apache configuration is equivalent to the nginx configuration above:

  • Redirect HTTP to HTTPS

  • Good SSL Configuration

  • Support for websockets on any proxied URL

  • JupyterHub is running locally at http://127.0.0.1:8000

# redirect HTTP to HTTPS
Listen 80
<VirtualHost HUB.DOMAIN.TLD:80>
  ServerName HUB.DOMAIN.TLD
  Redirect / https://HUB.DOMAIN.TLD/
</VirtualHost>

Listen 443
<VirtualHost HUB.DOMAIN.TLD:443>

  ServerName HUB.DOMAIN.TLD

  # enable HTTP/2, if available
  Protocols h2 http/1.1

  # HTTP Strict Transport Security (mod_headers is required) (63072000 seconds)
  Header always set Strict-Transport-Security "max-age=63072000"

  # configure SSL
  SSLEngine on
  SSLCertificateFile /etc/letsencrypt/live/HUB.DOMAIN.TLD/fullchain.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/HUB.DOMAIN.TLD/privkey.pem
  SSLOpenSSLConfCmd DHParameters /etc/ssl/certs/dhparam.pem

  # intermediate configuration from ssl-config.mozilla.org (2022-03-03)
  # Please note that this configuration might be outdated - please update it accordingly using https://ssl-config.mozilla.org/
  SSLProtocol             all -SSLv3 -TLSv1 -TLSv1.1
  SSLCipherSuite          ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
  SSLHonorCipherOrder     off
  SSLSessionTickets       off

  # Use RewriteEngine to handle websocket connection upgrades
  RewriteEngine On
  RewriteCond %{HTTP:Connection} Upgrade [NC]
  RewriteCond %{HTTP:Upgrade} websocket [NC]
  RewriteRule /(.*) ws://127.0.0.1:8000/$1 [P,L]

  <Location "/">
    # preserve Host header to avoid cross-origin problems
    ProxyPreserveHost on
    # proxy to JupyterHub
    ProxyPass         http://127.0.0.1:8000/
    ProxyPassReverse  http://127.0.0.1:8000/
    RequestHeader     set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
  </Location>
</VirtualHost>

If you need to run JupyterHub under a URL prefix such as /jhub/, use the following configuration:

  • JupyterHub running locally at http://127.0.0.1:8000/jhub/ or other location

httpd.conf amendments:

 RewriteRule /jhub/(.*) ws://127.0.0.1:8000/jhub/$1 [NE,P,L]
 RewriteRule /jhub/(.*) http://127.0.0.1:8000/jhub/$1 [NE,P,L]

 ProxyPass /jhub/ http://127.0.0.1:8000/jhub/
 ProxyPassReverse /jhub/  http://127.0.0.1:8000/jhub/

jupyterhub_config.py amendments:

 # The public facing URL of the whole JupyterHub application.
 # This is the address on which the proxy will bind. Sets protocol, ip, base_url
 c.JupyterHub.bind_url = 'http://127.0.0.1:8000/jhub/'
Run JupyterHub without root privileges using sudo

Note: Setting up sudo permissions involves many pieces of system configuration. It is quite easy to get wrong and very difficult to debug. Only do this if you are very sure you must.

Overview

There are many Authenticators and Spawners available for JupyterHub. Some, such as DockerSpawner or OAuthenticator, do not need any elevated permissions. This document describes how to get the full default behavior of JupyterHub while running notebook servers as real system users on a shared system without running the Hub itself as root.

Since JupyterHub needs to spawn processes as other users, the simplest way is to run it as root, spawning user servers with setuid. But this isn’t especially safe, because you have a process running on the public web as root.

A more prudent way to run the server while preserving functionality is to create a dedicated user with sudo access restricted to launching and monitoring single-user servers.

Create a user

To do this, first create a user that will run the Hub:

sudo useradd rhea

This user shouldn't have a login shell or password (which is possible with the -r flag).

Set up sudospawner

Next, you will need sudospawner to enable monitoring the single-user servers with sudo:

sudo python3 -m pip install sudospawner

Now we have to configure sudo to allow the Hub user (rhea) to launch the sudospawner script on behalf of our hub users (here zoe and wash). We want to confine these permissions to only what we really need.

Edit /etc/sudoers

To do this we add to /etc/sudoers (use visudo for safe editing of sudoers):

  • specify the list of users JUPYTER_USERS for whom rhea can spawn servers

  • set the command JUPYTER_CMD that rhea can execute on behalf of users

  • give rhea permission to run JUPYTER_CMD on behalf of JUPYTER_USERS without entering a password

For example:

# comma-separated list of users that can spawn single-user servers
# this should include all of your Hub users
Runas_Alias JUPYTER_USERS = rhea, zoe, wash

# the command(s) the Hub can run on behalf of the above users without needing a password
# the exact path may differ, depending on how sudospawner was installed
Cmnd_Alias JUPYTER_CMD = /usr/local/bin/sudospawner

# actually give the Hub user permission to run the above command on behalf
# of the above users without prompting for a password
rhea ALL=(JUPYTER_USERS) NOPASSWD:JUPYTER_CMD

It might be useful to modify secure_path in /etc/sudoers so that the commands the Hub needs are on the path.

As an alternative to adding every user to the /etc/sudoers file, you can use a group in the last line above, instead of JUPYTER_USERS:

rhea ALL=(%jupyterhub) NOPASSWD:JUPYTER_CMD

If the jupyterhub group exists, there will be no need to edit /etc/sudoers again. A new user will gain access to the application when added to the group:

$ adduser -G jupyterhub newuser
Test sudo setup

Test that the new user doesn’t need to enter a password to run the sudospawner command.

This should prompt for your password to switch to rhea, but not prompt for any password for the second switch. It should show some help output about logging options:

$ sudo -u rhea sudo -n -u $USER /usr/local/bin/sudospawner --help
Usage: /usr/local/bin/sudospawner [OPTIONS]

Options:

--help          show this help information
...

And this should fail:

$ sudo -u rhea sudo -n -u $USER echo 'fail'
sudo: a password is required
Enable PAM for non-root

By default, PAM authentication is used by JupyterHub. To use PAM, the process may need to be able to read the shadow password database.

Shadow group (Linux)

Note: On Fedora-based distributions there is no clear way to configure the PAM database to allow sufficient access for authenticating with the target user’s password from JupyterHub. As a workaround, we recommend using an alternative authentication method.

First, check the group and permissions of /etc/shadow:

$ ls -l /etc/shadow
-rw-r-----  1 root shadow   2197 Jul 21 13:41 shadow

If there’s already a shadow group, you are set. If its permissions are more like:

$ ls -l /etc/shadow
-rw-------  1 root wheel   2197 Jul 21 13:41 shadow

Then you may want to add a shadow group, and make the shadow file group-readable:

$ sudo groupadd shadow
$ sudo chgrp shadow /etc/shadow
$ sudo chmod g+r /etc/shadow

We want our new user to be able to read the shadow passwords, so add it to the shadow group:

$ sudo usermod -a -G shadow rhea

If you want JupyterHub to serve pages on a restricted port (such as port 80 for HTTP), then you will need to give node permission to do so:

sudo setcap 'cap_net_bind_service=+ep' /usr/bin/node

However, you may want to further understand the consequences of this.

You may also be interested in limiting the amount of CPU any process can use on your server. cpulimit is a useful tool that is available in many Linux distributions’ package repositories. It can be used to keep any user’s process from consuming too many CPU cycles. You can configure it according to these instructions.

Shadow group (FreeBSD)

NOTE: This has not been tested and may not work as expected.

$ ls -l /etc/spwd.db /etc/master.passwd
-rw-------  1 root  wheel   2516 Aug 22 13:35 /etc/master.passwd
-rw-------  1 root  wheel  40960 Aug 22 13:35 /etc/spwd.db

Add a shadow group if there isn’t one, and make the shadow file group-readable:

$ sudo pw group add shadow
$ sudo chgrp shadow /etc/spwd.db
$ sudo chmod g+r /etc/spwd.db
$ sudo chgrp shadow /etc/master.passwd
$ sudo chmod g+r /etc/master.passwd

We want our new user to be able to read the shadow passwords, so add it to the shadow group:

$ sudo pw user mod rhea -G shadow
Test that PAM works

We can verify that PAM is working with:

$ sudo -u rhea python3 -c "import pamela, getpass; print(pamela.authenticate('$USER', getpass.getpass()))"
Password: [enter your unix password]
Make a directory for JupyterHub

JupyterHub stores its state in a database, so it needs write access to a directory. The simplest way to deal with this is to make a directory owned by your Hub user, and use that as the CWD when launching the server.

$ sudo mkdir /etc/jupyterhub
$ sudo chown rhea /etc/jupyterhub
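If you would rather not rely on the current working directory, the same state locations can be pinned explicitly in the config file. A minimal sketch, using illustrative paths under /etc/jupyterhub:

# sketch: keep Hub state under /etc/jupyterhub regardless of the launch CWD
c.JupyterHub.db_url = 'sqlite:////etc/jupyterhub/jupyterhub.sqlite'
c.JupyterHub.cookie_secret_file = '/etc/jupyterhub/jupyterhub_cookie_secret'
c.JupyterHub.pid_file = '/etc/jupyterhub/jupyterhub.pid'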
Start jupyterhub

Finally, start the server as our newly configured user, rhea:

$ cd /etc/jupyterhub
$ sudo -u rhea jupyterhub --JupyterHub.spawner_class=sudospawner.SudoSpawner

And try logging in.
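Equivalently, the spawner class can be selected in the config file rather than on the command line. A minimal sketch, assuming sudospawner is installed as above:

# sketch: equivalent to the --JupyterHub.spawner_class command-line flag
c.JupyterHub.spawner_class = 'sudospawner.SudoSpawner'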

Troubleshooting: SELinux

If you still get a generic Permission denied PermissionError, it’s possible SELinux is blocking you.
Here’s how you can make a module to allow this. First, put this in a file named sudo_exec_selinux.te:

module sudo_exec_selinux 1.1;

require {
        type unconfined_t;
        type sudo_exec_t;
        class file { read entrypoint };
}

#============= unconfined_t ==============
allow unconfined_t sudo_exec_t:file entrypoint;

Then run all of these commands as root:

$ checkmodule -M -m -o sudo_exec_selinux.mod sudo_exec_selinux.te
$ semodule_package -o sudo_exec_selinux.pp -m sudo_exec_selinux.mod
$ semodule -i sudo_exec_selinux.pp
Troubleshooting: PAM session errors

If PAM authentication doesn’t work and you see errors for login:session-auth, or similar, consider updating to a more recent version of JupyterHub and disabling the opening of PAM sessions with c.PAMAuthenticator.open_sessions=False.
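A minimal sketch of that setting in jupyterhub_config.py:

# sketch: disable the opening of PAM sessions if they trigger session-auth errors
c.PAMAuthenticator.open_sessions = False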

Configuration Reference

Important

Make sure the version of JupyterHub for this documentation matches your installation version, as the output of this command may change between versions.

JupyterHub configuration

As explained in the Configuration Basics section, a jupyterhub_config.py file can be automatically generated via

jupyterhub --generate-config

The following contains the output of that command for reference.

# Configuration file for jupyterhub.

#------------------------------------------------------------------------------
# Application(SingletonConfigurable) configuration
#------------------------------------------------------------------------------
## This is an application.

## The date format used by logging formatters for %(asctime)s
#  Default: '%Y-%m-%d %H:%M:%S'
# c.Application.log_datefmt = '%Y-%m-%d %H:%M:%S'

## The Logging format template
#  Default: '[%(name)s]%(highlevel)s %(message)s'
# c.Application.log_format = '[%(name)s]%(highlevel)s %(message)s'

## Set the log level by value or name.
#  Choices: any of [0, 10, 20, 30, 40, 50, 'DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']
#  Default: 30
# c.Application.log_level = 30

## Instead of starting the Application, dump configuration to stdout
#  Default: False
# c.Application.show_config = False

## Instead of starting the Application, dump configuration to stdout (as JSON)
#  Default: False
# c.Application.show_config_json = False

#------------------------------------------------------------------------------
# JupyterHub(Application) configuration
#------------------------------------------------------------------------------
## An Application for starting a Multi-User Jupyter Notebook server.

## Maximum number of concurrent servers that can be active at a time.
#  
#  Setting this can limit the total resources your users can consume.
#  
#  An active server is any server that's not fully stopped. It is considered
#  active from the time it has been requested until the time that it has
#  completely stopped.
#  
#  If this many user servers are active, users will not be able to launch new
#  servers until a server is shutdown. Spawn requests will be rejected with a 429
#  error asking them to try again.
#  
#  If set to 0, no limit is enforced.
#  Default: 0
# c.JupyterHub.active_server_limit = 0

## Duration (in seconds) to determine the number of active users.
#  Default: 1800
# c.JupyterHub.active_user_window = 1800

## Resolution (in seconds) for updating activity
#  
#  If activity is registered that is less than activity_resolution seconds more
#  recent than the current value, the new value will be ignored.
#  
#  This avoids too many writes to the Hub database.
#  Default: 30
# c.JupyterHub.activity_resolution = 30

## Grant admin users permission to access single-user servers.
#  
#          Users should be properly informed if this is enabled.
#  Default: False
# c.JupyterHub.admin_access = False

## DEPRECATED since version 0.7.2, use Authenticator.admin_users instead.
#  Default: set()
# c.JupyterHub.admin_users = set()

## Allow named single-user servers per user
#  Default: False
# c.JupyterHub.allow_named_servers = False

## Answer yes to any questions (e.g. confirm overwrite)
#  Default: False
# c.JupyterHub.answer_yes = False

## The default amount of records returned by a paginated endpoint
#  Default: 50
# c.JupyterHub.api_page_default_limit = 50

## The maximum amount of records that can be returned at once
#  Default: 200
# c.JupyterHub.api_page_max_limit = 200

## PENDING DEPRECATION: consider using services
#  
#          Dict of token:username to be loaded into the database.
#  
#          Allows ahead-of-time generation of API tokens for use by externally managed services,
#          which authenticate as JupyterHub users.
#  
#          Consider using services for general services that talk to the
#  JupyterHub API.
#  Default: {}
# c.JupyterHub.api_tokens = {}

## Authentication for prometheus metrics
#  Default: True
# c.JupyterHub.authenticate_prometheus = True

## Class for authenticating users.
#  
#          This should be a subclass of :class:`jupyterhub.auth.Authenticator`
#  
#          with an :meth:`authenticate` method that:
#  
#          - is a coroutine (asyncio or tornado)
#          - returns username on success, None on failure
#          - takes two arguments: (handler, data),
#            where `handler` is the calling web.RequestHandler,
#            and `data` is the POST form data from the login page.
#  
#          .. versionchanged:: 1.0
#              authenticators may be registered via entry points,
#              e.g. `c.JupyterHub.authenticator_class = 'pam'`
#  
#  Currently installed: 
#    - default: jupyterhub.auth.PAMAuthenticator
#    - dummy: jupyterhub.auth.DummyAuthenticator
#    - null: jupyterhub.auth.NullAuthenticator
#    - pam: jupyterhub.auth.PAMAuthenticator
#  Default: 'jupyterhub.auth.PAMAuthenticator'
# c.JupyterHub.authenticator_class = 'jupyterhub.auth.PAMAuthenticator'

## The base URL of the entire application.
#  
#          Add this to the beginning of all JupyterHub URLs.
#          Use base_url to run JupyterHub within an existing website.
#  
#          .. deprecated: 0.9
#              Use JupyterHub.bind_url
#  Default: '/'
# c.JupyterHub.base_url = '/'

## The public facing URL of the whole JupyterHub application.
#  
#          This is the address on which the proxy will bind.
#          Sets protocol, ip, base_url
#  Default: 'http://:8000'
# c.JupyterHub.bind_url = 'http://:8000'

## Whether to shutdown the proxy when the Hub shuts down.
#  
#          Disable if you want to be able to teardown the Hub while leaving the
#  proxy running.
#  
#          Only valid if the proxy was started by the Hub process.
#  
#          If both this and cleanup_servers are False, sending SIGINT to the Hub will
#          only shutdown the Hub, leaving everything else running.
#  
#          The Hub should be able to resume from database state.
#  Default: True
# c.JupyterHub.cleanup_proxy = True

## Whether to shutdown single-user servers when the Hub shuts down.
#  
#          Disable if you want to be able to teardown the Hub while leaving the
#  single-user servers running.
#  
#          If both this and cleanup_proxy are False, sending SIGINT to the Hub will
#          only shutdown the Hub, leaving everything else running.
#  
#          The Hub should be able to resume from database state.
#  Default: True
# c.JupyterHub.cleanup_servers = True

## Maximum number of concurrent users that can be spawning at a time.
#  
#  Spawning lots of servers at the same time can cause performance problems for
#  the Hub or the underlying spawning system. Set this limit to prevent bursts of
#  logins from attempting to spawn too many servers at the same time.
#  
#  This does not limit the number of total running servers. See
#  active_server_limit for that.
#  
#  If more than this many users attempt to spawn at a time, their requests will
#  be rejected with a 429 error asking them to try again. Users will have to wait
#  for some of the spawning services to finish starting before they can start
#  their own.
#  
#  If set to 0, no limit is enforced.
#  Default: 100
# c.JupyterHub.concurrent_spawn_limit = 100

## The config file to load
#  Default: 'jupyterhub_config.py'
# c.JupyterHub.config_file = 'jupyterhub_config.py'

## DEPRECATED: does nothing
#  Default: False
# c.JupyterHub.confirm_no_ssl = False

## Number of days for a login cookie to be valid.
#          Default is two weeks.
#  Default: 14
# c.JupyterHub.cookie_max_age_days = 14

## The cookie secret to use to encrypt cookies.
#  
#          Loaded from the JPY_COOKIE_SECRET env variable by default.
#  
#          Should be exactly 256 bits (32 bytes).
#  Default: traitlets.Undefined
# c.JupyterHub.cookie_secret = traitlets.Undefined

## File in which to store the cookie secret.
#  Default: 'jupyterhub_cookie_secret'
# c.JupyterHub.cookie_secret_file = 'jupyterhub_cookie_secret'

## The location of jupyterhub data files (e.g. /usr/local/share/jupyterhub)
#  Default: '$HOME/checkouts/readthedocs.org/user_builds/jupyterhub/checkouts/2.2.2/share/jupyterhub'
# c.JupyterHub.data_files_path = '/home/docs/checkouts/readthedocs.org/user_builds/jupyterhub/checkouts/2.2.2/share/jupyterhub'

## Include any kwargs to pass to the database connection.
#          See sqlalchemy.create_engine for details.
#  Default: {}
# c.JupyterHub.db_kwargs = {}

## url for the database. e.g. `sqlite:///jupyterhub.sqlite`
#  Default: 'sqlite:///jupyterhub.sqlite'
# c.JupyterHub.db_url = 'sqlite:///jupyterhub.sqlite'

## log all database transactions. This has A LOT of output
#  Default: False
# c.JupyterHub.debug_db = False

## DEPRECATED since version 0.8: Use ConfigurableHTTPProxy.debug
#  Default: False
# c.JupyterHub.debug_proxy = False

## If named servers are enabled, default name of server to spawn or open, e.g. by
#  user-redirect.
#  Default: ''
# c.JupyterHub.default_server_name = ''

## The default URL for users when they arrive (e.g. when user directs to "/")
#  
#  By default, redirects users to their own server.
#  
#  Can be a Unicode string (e.g. '/hub/home') or a callable based on the handler
#  object:
#  
#  ::
#  
#      def default_url_fn(handler):
#          user = handler.current_user
#          if user and user.admin:
#              return '/hub/admin'
#          return '/hub/home'
#  
#      c.JupyterHub.default_url = default_url_fn
#  Default: traitlets.Undefined
# c.JupyterHub.default_url = traitlets.Undefined

## Dict authority:dict(files). Specify the key, cert, and/or
#          ca file for an authority. This is useful for externally managed
#          proxies that wish to use internal_ssl.
#  
#          The files dict has this format (you must specify at least a cert)::
#  
#              {
#                  'key': '/path/to/key.key',
#                  'cert': '/path/to/cert.crt',
#                  'ca': '/path/to/ca.crt'
#              }
#  
#          The authorities you can override: 'hub-ca', 'notebooks-ca',
#          'proxy-api-ca', 'proxy-client-ca', and 'services-ca'.
#  
#          Use with internal_ssl
#  Default: {}
# c.JupyterHub.external_ssl_authorities = {}

## Register extra tornado Handlers for jupyterhub.
#  
#  Should be of the form ``("<regex>", Handler)``
#  
#  The Hub prefix will be added, so `/my-page` will be served at `/hub/my-page`.
#  Default: []
# c.JupyterHub.extra_handlers = []

## DEPRECATED: use output redirection instead, e.g.
#  
#  jupyterhub &>> /var/log/jupyterhub.log
#  Default: ''
# c.JupyterHub.extra_log_file = ''

## Extra log handlers to set on JupyterHub logger
#  Default: []
# c.JupyterHub.extra_log_handlers = []

## Alternate header to use as the Host (e.g., X-Forwarded-Host)
#          when determining whether a request is cross-origin
#  
#          This may be useful when JupyterHub is running behind a proxy that rewrites
#          the Host header.
#  Default: ''
# c.JupyterHub.forwarded_host_header = ''

## Generate certs used for internal ssl
#  Default: False
# c.JupyterHub.generate_certs = False

## Generate default config file
#  Default: False
# c.JupyterHub.generate_config = False

## The URL on which the Hub will listen. This is a private URL for internal
#  communication. Typically set in combination with hub_connect_url. If a unix
#  socket, hub_connect_url **must** also be set.
#  
#  For example:
#  
#      "http://127.0.0.1:8081"
#      "unix+http://%2Fsrv%2Fjupyterhub%2Fjupyterhub.sock"
#  
#  .. versionadded:: 0.9
#  Default: ''
# c.JupyterHub.hub_bind_url = ''

## The ip or hostname for proxies and spawners to use
#          for connecting to the Hub.
#  
#          Use when the bind address (`hub_ip`) is 0.0.0.0, :: or otherwise different
#          from the connect address.
#  
#          Default: when `hub_ip` is 0.0.0.0 or ::, use `socket.gethostname()`,
#  otherwise use `hub_ip`.
#  
#          Note: Some spawners or proxy implementations might not support hostnames. Check your
#          spawner or proxy documentation to see if they have extra requirements.
#  
#          .. versionadded:: 0.8
#  Default: ''
# c.JupyterHub.hub_connect_ip = ''

## DEPRECATED
#  
#  Use hub_connect_url
#  
#  .. versionadded:: 0.8
#  
#  .. deprecated:: 0.9
#      Use hub_connect_url
#  Default: 0
# c.JupyterHub.hub_connect_port = 0

## The URL for connecting to the Hub. Spawners, services, and the proxy will use
#  this URL to talk to the Hub.
#  
#  Only needs to be specified if the default hub URL is not connectable (e.g.
#  using a unix+http:// bind url).
#  
#  .. seealso::
#      JupyterHub.hub_connect_ip
#      JupyterHub.hub_bind_url
#  
#  .. versionadded:: 0.9
#  Default: ''
# c.JupyterHub.hub_connect_url = ''

## The ip address for the Hub process to *bind* to.
#  
#          By default, the hub listens on localhost only. This address must be accessible from
#          the proxy and user servers. You may need to set this to a public ip or '' for all
#          interfaces if the proxy or user servers are in containers or on a different host.
#  
#          See `hub_connect_ip` for cases where the bind and connect address should differ,
#          or `hub_bind_url` for setting the full bind URL.
#  Default: '127.0.0.1'
# c.JupyterHub.hub_ip = '127.0.0.1'

## The internal port for the Hub process.
#  
#          This is the internal port of the hub itself. It should never be accessed directly.
#          See JupyterHub.port for the public port to use when accessing jupyterhub.
#          It is rare that this port should be set except in cases of port conflict.
#  
#          See also `hub_ip` for the ip and `hub_bind_url` for setting the full
#  bind URL.
#  Default: 8081
# c.JupyterHub.hub_port = 8081

## The routing prefix for the Hub itself.
#  
#  Override to send only a subset of traffic to the Hub. Default is to use the
#  Hub as the default route for all requests.
#  
#  This is necessary for normal jupyterhub operation, as the Hub must receive
#  requests for e.g. `/user/:name` when the user's server is not running.
#  
#  However, some deployments using only the JupyterHub API may want to handle
#  these events themselves, in which case they can register their own default
#  target with the proxy and set e.g. `hub_routespec = /hub/` to serve only the
#  hub's own pages, or even `/hub/api/` for api-only operation.
#  
#  Note: hub_routespec must include the base_url, if any.
#  
#  .. versionadded:: 1.4
#  Default: '/'
# c.JupyterHub.hub_routespec = '/'

## Trigger implicit spawns after this many seconds.
#  
#          When a user visits a URL for a server that's not running,
#          they are shown a page indicating that the requested server
#          is not running with a button to spawn the server.
#  
#          Setting this to a positive value will redirect the user
#          after this many seconds, effectively clicking this button
#          automatically for the users,
#          automatically beginning the spawn process.
#  
#          Warning: this can result in errors and surprising behavior
#          when sharing access URLs to actual servers,
#          since the wrong server is likely to be started.
#  Default: 0
# c.JupyterHub.implicit_spawn_seconds = 0

## Timeout (in seconds) to wait for spawners to initialize
#  
#  Checking if spawners are healthy can take a long time if many spawners are
#  active at hub start time.
#  
#  If it takes longer than this timeout to check, init_spawner will be left to
#  complete in the background and the http server is allowed to start.
#  
#  A timeout of -1 means wait forever, which can mean a slow startup of the Hub
#  but ensures that the Hub is fully consistent by the time it starts responding
#  to requests. This matches the behavior of jupyterhub 1.0.
#  
#  .. versionadded: 1.1.0
#  Default: 10
# c.JupyterHub.init_spawners_timeout = 10

## The location to store certificates automatically created by
#          JupyterHub.
#  
#          Use with internal_ssl
#  Default: 'internal-ssl'
# c.JupyterHub.internal_certs_location = 'internal-ssl'

## Enable SSL for all internal communication
#  
#          This enables end-to-end encryption between all JupyterHub components.
#          JupyterHub will automatically create the necessary certificate
#          authority and sign notebook certificates as they're created.
#  Default: False
# c.JupyterHub.internal_ssl = False

## The public facing ip of the whole JupyterHub application
#          (specifically referred to as the proxy).
#  
#          This is the address on which the proxy will listen. The default is to
#          listen on all interfaces. This is the only address through which JupyterHub
#          should be accessed by users.
#  
#          .. deprecated: 0.9
#              Use JupyterHub.bind_url
#  Default: ''
# c.JupyterHub.ip = ''

## Supply extra arguments that will be passed to Jinja environment.
#  Default: {}
# c.JupyterHub.jinja_environment_options = {}

## Interval (in seconds) at which to update last-activity timestamps.
#  Default: 300
# c.JupyterHub.last_activity_interval = 300

## Dict of 'group': ['usernames'] to load at startup.
#  
#          This strictly *adds* groups and users to groups.
#  
#          Loading one set of groups, then starting JupyterHub again with a different
#          set will not remove users or groups from previous launches.
#          That must be done through the API.
#  Default: {}
# c.JupyterHub.load_groups = {}

## List of predefined role dictionaries to load at startup.
#  
#          For instance::
#  
#              load_roles = [
#                              {
#                                  'name': 'teacher',
#                                  'description': 'Access to users' information and group membership',
#                                  'scopes': ['users', 'groups'],
#                                  'users': ['cyclops', 'gandalf'],
#                                  'services': [],
#                                  'groups': []
#                              }
#                          ]
#  
#          All keys apart from 'name' are optional.
#          See all the available scopes in the JupyterHub REST API documentation.
#  
#          Default roles are defined in roles.py.
#  Default: []
# c.JupyterHub.load_roles = []

## The date format used by logging formatters for %(asctime)s
#  See also: Application.log_datefmt
# c.JupyterHub.log_datefmt = '%Y-%m-%d %H:%M:%S'

## The Logging format template
#  See also: Application.log_format
# c.JupyterHub.log_format = '[%(name)s]%(highlevel)s %(message)s'

## Set the log level by value or name.
#  See also: Application.log_level
# c.JupyterHub.log_level = 30

## Specify path to a logo image to override the Jupyter logo in the banner.
#  Default: ''
# c.JupyterHub.logo_file = ''

## Maximum number of concurrent named servers that can be created by a user at a
#  time.
#  
#  Setting this can limit the total resources a user can consume.
#  
#  If set to 0, no limit is enforced.
#  Default: 0
# c.JupyterHub.named_server_limit_per_user = 0

## Expiry (in seconds) of OAuth access tokens.
#  
#          The default is to expire when the cookie storing them expires,
#          according to `cookie_max_age_days` config.
#  
#          These are the tokens stored in cookies when you visit
#          a single-user server or service.
#          When they expire, you must re-authenticate with the Hub,
#          even if your Hub authentication is still valid.
#          If your Hub authentication is valid,
#          logging in may be a transparent redirect as you refresh the page.
#  
#          This does not affect JupyterHub API tokens in general,
#          which do not expire by default.
#          Only tokens issued during the oauth flow
#          accessing services and single-user servers are affected.
#  
#          .. versionadded:: 1.4
#              OAuth token expires_in was not previously configurable.
#          .. versionchanged:: 1.4
#              Default now uses cookie_max_age_days so that oauth tokens
#              which are generally stored in cookies,
#              expire when the cookies storing them expire.
#              Previously, it was one hour.
#  Default: 0
# c.JupyterHub.oauth_token_expires_in = 0

## File to write PID
#          Useful for daemonizing JupyterHub.
#  Default: ''
# c.JupyterHub.pid_file = ''

## The public facing port of the proxy.
#  
#          This is the port on which the proxy will listen.
#          This is the only port through which JupyterHub
#          should be accessed by users.
#  
#          .. deprecated: 0.9
#              Use JupyterHub.bind_url
#  Default: 8000
# c.JupyterHub.port = 8000

## DEPRECATED since version 0.8 : Use ConfigurableHTTPProxy.api_url
#  Default: ''
# c.JupyterHub.proxy_api_ip = ''

## DEPRECATED since version 0.8 : Use ConfigurableHTTPProxy.api_url
#  Default: 0
# c.JupyterHub.proxy_api_port = 0

## DEPRECATED since version 0.8: Use ConfigurableHTTPProxy.auth_token
#  Default: ''
# c.JupyterHub.proxy_auth_token = ''

## DEPRECATED since version 0.8: Use ConfigurableHTTPProxy.check_running_interval
#  Default: 5
# c.JupyterHub.proxy_check_interval = 5

## The class to use for configuring the JupyterHub proxy.
#  
#          Should be a subclass of :class:`jupyterhub.proxy.Proxy`.
#  
#          .. versionchanged:: 1.0
#              proxies may be registered via entry points,
#              e.g. `c.JupyterHub.proxy_class = 'traefik'`
#  
#  Currently installed: 
#    - configurable-http-proxy: jupyterhub.proxy.ConfigurableHTTPProxy
#    - default: jupyterhub.proxy.ConfigurableHTTPProxy
#  Default: 'jupyterhub.proxy.ConfigurableHTTPProxy'
# c.JupyterHub.proxy_class = 'jupyterhub.proxy.ConfigurableHTTPProxy'

## DEPRECATED since version 0.8. Use ConfigurableHTTPProxy.command
#  Default: []
# c.JupyterHub.proxy_cmd = []

## Recreate all certificates used within JupyterHub on restart.
#  
#          Note: enabling this feature requires restarting all notebook servers.
#  
#          Use with internal_ssl
#  Default: False
# c.JupyterHub.recreate_internal_certs = False

## Redirect user to server (if running), instead of control panel.
#  Default: True
# c.JupyterHub.redirect_to_server = True

## Purge and reset the database.
#  Default: False
# c.JupyterHub.reset_db = False

## Interval (in seconds) at which to check connectivity of services with web
#  endpoints.
#  Default: 60
# c.JupyterHub.service_check_interval = 60

## Dict of token:servicename to be loaded into the database.
#  
#          Allows ahead-of-time generation of API tokens for use by externally
#  managed services.
#  Default: {}
# c.JupyterHub.service_tokens = {}

## List of service specification dictionaries.
#  
#          A service
#  
#          For instance::
#  
#              services = [
#                  {
#                      'name': 'cull_idle',
#                      'command': ['/path/to/cull_idle_servers.py'],
#                  },
#                  {
#                      'name': 'formgrader',
#                      'url': 'http://127.0.0.1:1234',
#                      'api_token': 'super-secret',
#                      'environment':
#                  }
#              ]
#  Default: []
# c.JupyterHub.services = []

## Instead of starting the Application, dump configuration to stdout
#  See also: Application.show_config
# c.JupyterHub.show_config = False

## Instead of starting the Application, dump configuration to stdout (as JSON)
#  See also: Application.show_config_json
# c.JupyterHub.show_config_json = False

## Shuts down all user servers on logout
#  Default: False
# c.JupyterHub.shutdown_on_logout = False

## The class to use for spawning single-user servers.
#  
#          Should be a subclass of :class:`jupyterhub.spawner.Spawner`.
#  
#          .. versionchanged:: 1.0
#              spawners may be registered via entry points,
#              e.g. `c.JupyterHub.spawner_class = 'localprocess'`
#  
#  Currently installed: 
#    - default: jupyterhub.spawner.LocalProcessSpawner
#    - localprocess: jupyterhub.spawner.LocalProcessSpawner
#    - simple: jupyterhub.spawner.SimpleLocalProcessSpawner
#  Default: 'jupyterhub.spawner.LocalProcessSpawner'
# c.JupyterHub.spawner_class = 'jupyterhub.spawner.LocalProcessSpawner'

## Path to SSL certificate file for the public facing interface of the proxy
#  
#          When setting this, you should also set ssl_key
#  Default: ''
# c.JupyterHub.ssl_cert = ''

## Path to SSL key file for the public facing interface of the proxy
#  
#          When setting this, you should also set ssl_cert
#  Default: ''
# c.JupyterHub.ssl_key = ''

## Host to send statsd metrics to. An empty string (the default) disables sending
#  metrics.
#  Default: ''
# c.JupyterHub.statsd_host = ''

## Port on which to send statsd metrics about the hub
#  Default: 8125
# c.JupyterHub.statsd_port = 8125

## Prefix to use for all metrics sent by jupyterhub to statsd
#  Default: 'jupyterhub'
# c.JupyterHub.statsd_prefix = 'jupyterhub'

## Run single-user servers on subdomains of this host.
#  
#          This should be the full `https://hub.domain.tld[:port]`.
#  
#          Provides additional cross-site protections for javascript served by
#  single-user servers.
#  
#          Requires `<username>.hub.domain.tld` to resolve to the same host as
#  `hub.domain.tld`.
#  
#          In general, this is most easily achieved with wildcard DNS.
#  
#          When using SSL (i.e. always) this also requires a wildcard SSL
#  certificate.
#  Default: ''
# c.JupyterHub.subdomain_host = ''

## Paths to search for jinja templates, before using the default templates.
#  Default: []
# c.JupyterHub.template_paths = []

## Extra variables to be passed into jinja templates
#  Default: {}
# c.JupyterHub.template_vars = {}

## Extra settings overrides to pass to the tornado application.
#  Default: {}
# c.JupyterHub.tornado_settings = {}

## Trust user-provided tokens (via JupyterHub.service_tokens)
#          to have good entropy.
#  
#          If you are not inserting additional tokens via configuration file,
#          this flag has no effect.
#  
#          In JupyterHub 0.8, internally generated tokens do not
#          pass through additional hashing because the hashing is costly
#          and does not increase the entropy of already-good UUIDs.
#  
#          User-provided tokens, on the other hand, are not trusted to have good entropy by default,
#          and are passed through many rounds of hashing to stretch the entropy of the key
#          (i.e. user-provided tokens are treated as passwords instead of random keys).
#          These keys are more costly to check.
#  
#          If your inserted tokens are generated by a good-quality mechanism,
#          e.g. `openssl rand -hex 32`, then you can set this flag to True
#          to reduce the cost of checking authentication tokens.
#  Default: False
# c.JupyterHub.trust_user_provided_tokens = False

## Names to include in the subject alternative name.
#  
#          These names will be used for server name verification. This is useful
#          if JupyterHub is being run behind a reverse proxy or services using ssl
#          are on different hosts.
#  
#          Use with internal_ssl
#  Default: []
# c.JupyterHub.trusted_alt_names = []

## Downstream proxy IP addresses to trust.
#  
#          This sets the list of IP addresses that are trusted and skipped when processing
#          the `X-Forwarded-For` header. For example, if an external proxy is used for TLS
#          termination, its IP address should be added to this list to ensure the correct
#          client IP addresses are recorded in the logs instead of the proxy server's IP
#          address.
#  Default: []
# c.JupyterHub.trusted_downstream_ips = []

## Upgrade the database automatically on start.
#  
#          Only safe if database is regularly backed up.
#          Only SQLite databases will be backed up to a local file automatically.
#  Default: False
# c.JupyterHub.upgrade_db = False

## Return 503 rather than 424 when request comes in for a non-running server.
#  
#  Prior to JupyterHub 2.0, we returned a 503 when any request came in for a user
#  server that was currently not running. By default, JupyterHub 2.0 will return
#  a 424 - this makes operational metric dashboards more useful.
#  
#  JupyterLab < 3.2 expected the 503 to know if the user server is no longer
#  running, and prompted the user to start their server. Set this config to true
#  to retain the old behavior, so JupyterLab < 3.2 can continue to show the
#  appropriate UI when the user server is stopped.
#  
#  This option will be removed in a future release.
#  Default: False
# c.JupyterHub.use_legacy_stopped_server_status_code = False

## Callable to affect behavior of /user-redirect/
#  
#  Receives 4 parameters: 1. path - URL path that was provided after /user-
#  redirect/ 2. request - A Tornado HTTPServerRequest representing the current
#  request. 3. user - The currently authenticated user. 4. base_url - The
#  base_url of the current hub, for relative redirects
#  
#  It should return the new URL to redirect to, or None to preserve current
#  behavior.
#  Default: None
# c.JupyterHub.user_redirect_hook = None

#------------------------------------------------------------------------------
# Spawner(LoggingConfigurable) configuration
#------------------------------------------------------------------------------
## Base class for spawning single-user notebook servers.
#  
#      Subclass this, and override the following methods:
#  
#      - load_state
#      - get_state
#      - start
#      - stop
#      - poll
#  
#      As JupyterHub supports multiple users, an instance of the Spawner subclass
#      is created for each user. If there are 20 JupyterHub users, there will be 20
#      instances of the subclass.

## Extra arguments to be passed to the single-user server.
#  
#  Some spawners allow shell-style expansion here, allowing you to use
#  environment variables here. Most, including the default, do not. Consult the
#  documentation for your spawner to verify!
#  Default: []
# c.Spawner.args = []

## An optional hook function that you can implement to pass `auth_state` to the
#  spawner after it has been initialized but before it starts. The `auth_state`
#  dictionary may be set by the `.authenticate()` method of the authenticator.
#  This hook enables you to pass some or all of that information to your spawner.
#  
#  Example::
#  
#      def userdata_hook(spawner, auth_state):
#          spawner.userdata = auth_state["userdata"]
#  
#      c.Spawner.auth_state_hook = userdata_hook
#  Default: None
# c.Spawner.auth_state_hook = None

## The command used for starting the single-user server.
#  
#  Provide either a string or a list containing the path to the startup script
#  command. Extra arguments, other than this path, should be provided via `args`.
#  
#  This is usually set if you want to start the single-user server in a different
#  python environment (with virtualenv/conda) than JupyterHub itself.
#  
#  Some spawners allow shell-style expansion here, allowing you to use
#  environment variables. Most, including the default, do not. Consult the
#  documentation for your spawner to verify!
#  Default: ['jupyterhub-singleuser']
# c.Spawner.cmd = ['jupyterhub-singleuser']

## Maximum number of consecutive failures to allow before shutting down
#  JupyterHub.
#  
#  This helps JupyterHub recover from a certain class of problem preventing
#  launch in contexts where the Hub is automatically restarted (e.g. systemd,
#  docker, kubernetes).
#  
#  A limit of 0 means no limit and consecutive failures will not be tracked.
#  Default: 0
# c.Spawner.consecutive_failure_limit = 0

## Minimum number of cpu-cores a single-user notebook server is guaranteed to
#  have available.
#  
#  If this value is set to 0.5, allows use of 50% of one CPU. If this value is
#  set to 2, allows use of up to 2 CPUs.
#  
#  **This is a configuration setting. Your spawner must implement support for the
#  limit to work.** The default spawner, `LocalProcessSpawner`, does **not**
#  implement this support. A custom spawner **must** add support for this setting
#  for it to be enforced.
#  Default: None
# c.Spawner.cpu_guarantee = None

## Maximum number of cpu-cores a single-user notebook server is allowed to use.
#  
#  If this value is set to 0.5, allows use of 50% of one CPU. If this value is
#  set to 2, allows use of up to 2 CPUs.
#  
#  The single-user notebook server will never be scheduled by the kernel to use
#  more cpu-cores than this. There is no guarantee that it can access this many
#  cpu-cores.
#  
#  **This is a configuration setting. Your spawner must implement support for the
#  limit to work.** The default spawner, `LocalProcessSpawner`, does **not**
#  implement this support. A custom spawner **must** add support for this setting
#  for it to be enforced.
#  Default: None
# c.Spawner.cpu_limit = None

## Enable debug-logging of the single-user server
#  Default: False
# c.Spawner.debug = False

## The URL the single-user server should start in.
#  
#  `{username}` will be expanded to the user's username
#  
#  Example uses:
#  
#  - You can set `notebook_dir` to `/` and `default_url` to `/tree/home/{username}` to allow people to
#    navigate the whole filesystem from their notebook server, but still start in their home directory.
#  - Start with `/notebooks` instead of `/tree` if `default_url` points to a notebook instead of a directory.
#  - You can set this to `/lab` to have JupyterLab start by default, rather than Jupyter Notebook.
#  Default: ''
# c.Spawner.default_url = ''

## Disable per-user configuration of single-user servers.
#  
#  When starting the user's single-user server, any config file found in the
#  user's $HOME directory will be ignored.
#  
#  Note: a user could circumvent this if the user modifies their Python
#  environment, such as when they have their own conda environments / virtualenvs
#  / containers.
#  Default: False
# c.Spawner.disable_user_config = False

## List of environment variables for the single-user server to inherit from the
#  JupyterHub process.
#  
#  This list is used to ensure that sensitive information in the JupyterHub
#  process's environment (such as `CONFIGPROXY_AUTH_TOKEN`) is not passed to the
#  single-user server's process.
#  Default: ['PATH', 'PYTHONPATH', 'CONDA_ROOT', 'CONDA_DEFAULT_ENV', 'VIRTUAL_ENV', 'LANG', 'LC_ALL', 'JUPYTERHUB_SINGLEUSER_APP']
# c.Spawner.env_keep = ['PATH', 'PYTHONPATH', 'CONDA_ROOT', 'CONDA_DEFAULT_ENV', 'VIRTUAL_ENV', 'LANG', 'LC_ALL', 'JUPYTERHUB_SINGLEUSER_APP']

## Extra environment variables to set for the single-user server's process.
#  
#  Environment variables that end up in the single-user server's process come from 3 sources:
#    - This `environment` configurable
#    - The JupyterHub process' environment variables that are listed in `env_keep`
#    - Variables to establish contact between the single-user notebook and the hub (such as JUPYTERHUB_API_TOKEN)
#  
#  The `environment` configurable should be set by JupyterHub administrators to
#  add installation specific environment variables. It is a dict where the key is
#  the name of the environment variable, and the value can be a string or a
#  callable. If it is a callable, it will be called with one parameter (the
#  spawner instance), and should return a string fairly quickly (no blocking
#  operations please!).
#  
#  Note that the spawner class' interface is not guaranteed to be exactly same
#  across upgrades, so if you are using the callable take care to verify it
#  continues to work after upgrades!
#  
#  .. versionchanged:: 1.2
#      environment from this configuration has highest priority,
#      allowing override of 'default' env variables,
#      such as JUPYTERHUB_API_URL.
#  Default: {}
# c.Spawner.environment = {}

## Timeout (in seconds) before giving up on a spawned HTTP server
#  
#  Once a server has successfully been spawned, this is the amount of time we
#  wait before assuming that the server is unable to accept connections.
#  Default: 30
# c.Spawner.http_timeout = 30

## The URL the single-user server should connect to the Hub.
#  
#  If the Hub URL set in your JupyterHub config is not reachable from spawned
#  notebooks, you can set a different URL with this config.
#  
#  Is None if you don't need to change the URL.
#  Default: None
# c.Spawner.hub_connect_url = None

## The IP address (or hostname) the single-user server should listen on.
#  
#  Usually either '127.0.0.1' (default) or '0.0.0.0'.
#  
#  The JupyterHub proxy implementation should be able to send packets to this
#  interface.
#  
#  Subclasses which launch remotely or in containers should override the default
#  to '0.0.0.0'.
#  
#  .. versionchanged:: 2.0
#      Default changed to '127.0.0.1', from ''.
#      In most cases, this does not result in a change in behavior,
#      as '' was interpreted as 'unspecified',
#      which used the subprocesses' own default, itself usually '127.0.0.1'.
#  Default: '127.0.0.1'
# c.Spawner.ip = '127.0.0.1'

## Minimum number of bytes a single-user notebook server is guaranteed to have
#  available.
#  
#  Allows the following suffixes:
#    - K -> Kilobytes
#    - M -> Megabytes
#    - G -> Gigabytes
#    - T -> Terabytes
#  
#  **This is a configuration setting. Your spawner must implement support for the
#  limit to work.** The default spawner, `LocalProcessSpawner`, does **not**
#  implement this support. A custom spawner **must** add support for this setting
#  for it to be enforced.
#  Default: None
# c.Spawner.mem_guarantee = None

## Maximum number of bytes a single-user notebook server is allowed to use.
#  
#  Allows the following suffixes:
#    - K -> Kilobytes
#    - M -> Megabytes
#    - G -> Gigabytes
#    - T -> Terabytes
#  
#  If the single user server tries to allocate more memory than this, it will
#  fail. There is no guarantee that the single-user notebook server will be able
#  to allocate this much memory - only that it can not allocate more than this.
#  
#  **This is a configuration setting. Your spawner must implement support for the
#  limit to work.** The default spawner, `LocalProcessSpawner`, does **not**
#  implement this support. A custom spawner **must** add support for this setting
#  for it to be enforced.
#  Default: None
# c.Spawner.mem_limit = None

## Path to the notebook directory for the single-user server.
#  
#  The user sees a file listing of this directory when the notebook interface is
#  started. The current interface does not easily allow browsing beyond the
#  subdirectories in this directory's tree.
#  
#  `~` will be expanded to the home directory of the user, and {username} will be
#  replaced with the name of the user.
#  
#  Note that this does *not* prevent users from accessing files outside of this
#  path! They can do so with many other means.
#  Default: ''
# c.Spawner.notebook_dir = ''

## Allowed roles for oauth tokens.
#  
#          This sets the maximum and default roles
#          assigned to oauth tokens issued by a single-user server's
#          oauth client (i.e. tokens stored in browsers after authenticating with the server),
#          defining what actions the server can take on behalf of logged-in users.
#  
#          Default is an empty list, meaning minimal permissions to identify users,
#          no actions can be taken on their behalf.
#  Default: traitlets.Undefined
# c.Spawner.oauth_roles = traitlets.Undefined

## An HTML form for options a user can specify on launching their server.
#  
#  The surrounding `<form>` element and the submit button are already provided.
#  
#  For example:
#  
#  .. code:: html
#  
#      Set your key:
#      <input name="key" val="default_key"></input>
#      <br>
#      Choose a letter:
#      <select name="letter" multiple="true">
#        <option value="A">The letter A</option>
#        <option value="B">The letter B</option>
#      </select>
#  
#  The data from this form submission will be passed on to your spawner in
#  `self.user_options`
#  
#  Instead of a form snippet string, this could also be a callable that takes as
#  one parameter the current spawner instance and returns a string. The callable
#  will be called asynchronously if it returns a future, rather than a str. Note
#  that the interface of the spawner class is not deemed stable across versions,
#  so using this functionality might cause your JupyterHub upgrades to break.
#  Default: traitlets.Undefined
# c.Spawner.options_form = traitlets.Undefined

## Interpret HTTP form data
#  
#  Form data will always arrive as a dict of lists of strings. Override this
#  function to understand single-values, numbers, etc.
#  
#  This should coerce form data into the structure expected by self.user_options,
#  which must be a dict, and should be JSON-serializeable, though it can contain
#  bytes in addition to standard JSON data types.
#  
#  This method should not have any side effects. Any handling of `user_options`
#  should be done in `.start()` to ensure consistent behavior across servers
#  spawned via the API and form submission page.
#  
#  Instances will receive this data on self.user_options, after passing through
#  this function, prior to `Spawner.start`.
#  
#  .. versionchanged:: 1.0
#      user_options are persisted in the JupyterHub database to be reused
#      on subsequent spawns if no options are given.
#      user_options is serialized to JSON as part of this persistence
#      (with additional support for bytes in case of uploaded file data),
#      and any non-bytes non-jsonable values will be replaced with None
#      if the user_options are re-used.
#  Default: traitlets.Undefined
# c.Spawner.options_from_form = traitlets.Undefined

## Interval (in seconds) on which to poll the spawner for single-user server's
#  status.
#  
#  At every poll interval, each spawner's `.poll` method is called, which checks
#  if the single-user server is still running. If it isn't running, then
#  JupyterHub modifies its own state accordingly and removes appropriate routes
#  from the configurable proxy.
#  Default: 30
# c.Spawner.poll_interval = 30

## The port for single-user servers to listen on.
#  
#  Defaults to `0`, which uses a randomly allocated port number each time.
#  
#  If set to a non-zero value, all Spawners will use the same port, which only
#  makes sense if each server is on a different address, e.g. in containers.
#  
#  New in version 0.7.
#  Default: 0
# c.Spawner.port = 0

## An optional hook function that you can implement to do work after the spawner
#  stops.
#  
#  This can be set independent of any concrete spawner implementation.
#  Default: None
# c.Spawner.post_stop_hook = None

## An optional hook function that you can implement to do some bootstrapping work
#  before the spawner starts. For example, create a directory for your user or
#  load initial content.
#  
#  This can be set independent of any concrete spawner implementation.
#  
#  This may be a coroutine.
#  
#  Example::
#  
#      from subprocess import check_call
#      def my_hook(spawner):
#          username = spawner.user.name
#          check_call(['./examples/bootstrap-script/bootstrap.sh', username])
#  
#      c.Spawner.pre_spawn_hook = my_hook
#  Default: None
# c.Spawner.pre_spawn_hook = None

## List of SSL alt names
#  
#          May be set in config if all spawners should have the same value(s),
#          or set at runtime by Spawner that know their names.
#  Default: []
# c.Spawner.ssl_alt_names = []

## Whether to include DNS:localhost, IP:127.0.0.1 in alt names
#  Default: True
# c.Spawner.ssl_alt_names_include_local = True

## Timeout (in seconds) before giving up on starting of single-user server.
#  
#  This is the timeout for start to return, not the timeout for the server to
#  respond. Callers of spawner.start will assume that startup has failed if it
#  takes longer than this. start should return when the server process is started
#  and its location is known.
#  Default: 60
# c.Spawner.start_timeout = 60

#------------------------------------------------------------------------------
# Authenticator(LoggingConfigurable) configuration
#------------------------------------------------------------------------------
## Base class for implementing an authentication provider for JupyterHub

## Set of users that will have admin rights on this JupyterHub.
#  
#  Note: As of JupyterHub 2.0, full admin rights should not be required, and more
#  precise permissions can be managed via roles.
#  
#  Admin users have extra privileges:
#   - Use the admin panel to see list of users logged in
#   - Add / remove users in some authenticators
#   - Restart / halt the hub
#   - Start / stop users' single-user servers
#   - Can access each individual users' single-user server (if configured)
#  
#  Admin access should be treated the same way root access is.
#  
#  Defaults to an empty set, in which case no user has admin access.
#  Default: set()
# c.Authenticator.admin_users = set()

## Set of usernames that are allowed to log in.
#  
#  Use this with supported authenticators to restrict which users can log in.
#  This is an additional list that further restricts users, beyond whatever
#  restrictions the authenticator has in place. Any user in this list is granted
#  the 'user' role on hub startup.
#  
#  If empty, does not perform any additional restriction.
#  
#  .. versionchanged:: 1.2
#      `Authenticator.whitelist` renamed to `allowed_users`
#  Default: set()
# c.Authenticator.allowed_users = set()
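
## Example (illustrative sketch; usernames are hypothetical): allow three users
#  to log in and grant one of them the admin rights described above.
#  
#      c.Authenticator.allowed_users = {'alice', 'bob', 'carol'}
#      c.Authenticator.admin_users = {'alice'}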

## The max age (in seconds) of authentication info
#          before forcing a refresh of user auth info.
#  
#          Refreshing auth info allows, e.g. requesting/re-validating auth
#  tokens.
#  
#          See :meth:`.refresh_user` for what happens when user auth info is refreshed
#          (nothing by default).
#  Default: 300
# c.Authenticator.auth_refresh_age = 300

## Automatically begin the login process
#  
#          rather than starting with a "Login with..." link at `/hub/login`
#  
#          To work, `.login_url()` must give a URL other than the default `/hub/login`,
#          such as an oauth handler or another automatic login handler,
#          registered with `.get_handlers()`.
#  
#          .. versionadded:: 0.8
#  Default: False
# c.Authenticator.auto_login = False

## Automatically begin login process for OAuth2 authorization requests
#  
#  When another application is using JupyterHub as OAuth2 provider, it sends
#  users to `/hub/api/oauth2/authorize`. If the user isn't logged in already, and
#  auto_login is not set, the user will be dumped on the hub's home page, without
#  any context on what to do next.
#  
#  Setting this to true will automatically redirect users to login if they aren't
#  logged in *only* on the `/hub/api/oauth2/authorize` endpoint.
#  
#  .. versionadded:: 1.5
#  Default: False
# c.Authenticator.auto_login_oauth2_authorize = False

## Set of usernames that are not allowed to log in.
#  
#  Use this with supported authenticators to restrict which users can not log in.
#  This is an additional block list that further restricts users, beyond whatever
#  restrictions the authenticator has in place.
#  
#  If empty, does not perform any additional restriction.
#  
#  .. versionadded:: 0.9
#  
#  .. versionchanged:: 1.2
#      `Authenticator.blacklist` renamed to `blocked_users`
#  Default: set()
# c.Authenticator.blocked_users = set()

## Delete any users from the database that do not pass validation
#  
#          When JupyterHub starts, `.add_user` will be called
#          on each user in the database to verify that all users are still valid.
#  
#          If `delete_invalid_users` is True,
#          any users that do not pass validation will be deleted from the database.
#          Use this if users might be deleted from an external system,
#          such as local user accounts.
#  
#          If False (default), invalid users remain in the Hub's database
#          and a warning will be issued.
#          This is the default to avoid data loss due to config changes.
#  Default: False
# c.Authenticator.delete_invalid_users = False

## Enable persisting auth_state (if available).
#  
#          auth_state will be encrypted and stored in the Hub's database.
#          This can include things like authentication tokens, etc.
#          to be passed to Spawners as environment variables.
#  
#          Encrypting auth_state requires the cryptography package.
#  
#          Additionally, the JUPYTERHUB_CRYPT_KEY environment variable must
#          contain one (or more, separated by ;) 32B encryption keys.
#          These can be either base64 or hex-encoded.
#  
#          If encryption is unavailable, auth_state cannot be persisted.
#  
#          New in JupyterHub 0.8
#  Default: False
# c.Authenticator.enable_auth_state = False
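
## Example (illustrative sketch): persist auth_state. This assumes the
#  `cryptography` package is installed and that a 32-byte key (e.g. from
#  `openssl rand -hex 32`) has been exported as JUPYTERHUB_CRYPT_KEY in the
#  Hub's environment before startup.
#  
#      c.Authenticator.enable_auth_state = True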

## Let authenticator manage user groups
#  
#          If True, Authenticator.authenticate and/or .refresh_user
#          may return a list of group names in the 'groups' field,
#          which will be assigned to the user.
#  
#          All group-assignment APIs are disabled if this is True.
#  Default: False
# c.Authenticator.manage_groups = False

## An optional hook function that you can implement to do some bootstrapping work
#  during authentication. For example, loading user account details from an
#  external system.
#  
#  This function is called after the user has passed all authentication checks
#  and is ready to successfully authenticate. This function must return the
#  authentication dict regardless of changes to it.
#  
#  This may be a coroutine.
#  
#  .. versionadded:: 1.0
#  
#  Example::
#  
#      import os, pwd
#      def my_hook(authenticator, handler, authentication):
#          user_data = pwd.getpwnam(authentication['name'])
#          spawn_data = {
#              'pw_data': user_data,
#              'gid_list': os.getgrouplist(authentication['name'], user_data.pw_gid)
#          }
#  
#          if authentication['auth_state'] is None:
#              authentication['auth_state'] = {}
#          authentication['auth_state']['spawn_data'] = spawn_data
#  
#          return authentication
#  
#      c.Authenticator.post_auth_hook = my_hook
#  Default: None
# c.Authenticator.post_auth_hook = None

## Force refresh of auth prior to spawn.
#  
#          This forces :meth:`.refresh_user` to be called prior to launching
#          a server, to ensure that auth state is up-to-date.
#  
#          This can be important when e.g. auth tokens that may have expired
#          are passed to the spawner via environment variables from auth_state.
#  
#          If refresh_user cannot refresh the user auth data,
#          launch will fail until the user logs in again.
#  Default: False
# c.Authenticator.refresh_pre_spawn = False

## Dictionary mapping authenticator usernames to JupyterHub users.
#  
#          Primarily used to normalize OAuth user names to local users.
#  Default: {}
# c.Authenticator.username_map = {}
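
## Example (illustrative sketch; the usernames are hypothetical): map an
#  upstream OAuth identity onto a local account name.
#  
#      c.Authenticator.username_map = {'alice@example.com': 'alice'}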

## Regular expression pattern that all valid usernames must match.
#  
#  If a username does not match the pattern specified here, authentication will
#  not be attempted.
#  
#  If not set, allow any username.
#  Default: ''
# c.Authenticator.username_pattern = ''
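
## Example (illustrative sketch of one possible policy): only attempt
#  authentication for usernames of 3-32 lowercase letters, digits, `_` or `-`,
#  starting with a letter.
#  
#      c.Authenticator.username_pattern = r'^[a-z][a-z0-9_-]{2,31}$'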

## Deprecated, use `Authenticator.allowed_users`
#  Default: set()
# c.Authenticator.whitelist = set()

#------------------------------------------------------------------------------
# CryptKeeper(SingletonConfigurable) configuration
#------------------------------------------------------------------------------
## Encapsulate encryption configuration
#  
#      Use via the encryption_config singleton below.

#  Default: []
# c.CryptKeeper.keys = []

## The number of threads to allocate for encryption
#  Default: 2
# c.CryptKeeper.n_threads = 2

JupyterHub help command output

This section contains the output of the command jupyterhub --help-all.

Start a multi-user Jupyter Notebook server

    Spawns a configurable-http-proxy and multi-user Hub,
    which authenticates users and spawns single-user Notebook servers
    on behalf of users.

Subcommands
===========
Subcommands are launched as `jupyterhub cmd [args]`. For information on using
subcommand 'cmd', do: `jupyterhub cmd -h`.

token
    Generate an API token for a user
upgrade-db
    Upgrade your JupyterHub state database to the current version.

Options
=======
The options below are convenience aliases to configurable class-options,
as listed in the "Equivalent to" description-line of the aliases.
To see all configurable class-options for some <cmd>, use:
    <cmd> --help-all

--debug
    set log level to logging.DEBUG (maximize logging output)
    Equivalent to: [--Application.log_level=10]
--show-config
    Show the application's configuration (human-readable format)
    Equivalent to: [--Application.show_config=True]
--show-config-json
    Show the application's configuration (json format)
    Equivalent to: [--Application.show_config_json=True]
--generate-config
    generate default config file
    Equivalent to: [--JupyterHub.generate_config=True]
--generate-certs
    generate certificates used for internal ssl
    Equivalent to: [--JupyterHub.generate_certs=True]
--no-db
    disable persisting state database to disk
    Equivalent to: [--JupyterHub.db_url=sqlite:///:memory:]
--upgrade-db
    Automatically upgrade the database if needed on startup.

            Only safe if the database has been backed up.
            Only SQLite database files will be backed up automatically.
    Equivalent to: [--JupyterHub.upgrade_db=True]
--no-ssl
    [DEPRECATED in 0.7: does nothing]
    Equivalent to: [--JupyterHub.confirm_no_ssl=True]
--base-url=<URLPrefix>
    The base URL of the entire application.
            Add this to the beginning of all JupyterHub URLs.
            Use base_url to run JupyterHub within an existing website.
            .. deprecated:: 0.9
                Use JupyterHub.bind_url
    Default: '/'
    Equivalent to: [--JupyterHub.base_url]
-y=<Bool>
    Answer yes to any questions (e.g. confirm overwrite)
    Default: False
    Equivalent to: [--JupyterHub.answer_yes]
--ssl-key=<Unicode>
    Path to SSL key file for the public facing interface of the proxy
            When setting this, you should also set ssl_cert
    Default: ''
    Equivalent to: [--JupyterHub.ssl_key]
--ssl-cert=<Unicode>
    Path to SSL certificate file for the public facing interface of the proxy
            When setting this, you should also set ssl_key
    Default: ''
    Equivalent to: [--JupyterHub.ssl_cert]
--url=<Unicode>
    The public facing URL of the whole JupyterHub application.
            This is the address on which the proxy will bind.
            Sets protocol, ip, base_url
    Default: 'http://:8000'
    Equivalent to: [--JupyterHub.bind_url]
--ip=<Unicode>
    The public facing ip of the whole JupyterHub application
            (specifically referred to as the proxy).
            This is the address on which the proxy will listen. The default is to
            listen on all interfaces. This is the only address through which JupyterHub
            should be accessed by users.
            .. deprecated:: 0.9
                Use JupyterHub.bind_url
    Default: ''
    Equivalent to: [--JupyterHub.ip]
--port=<Int>
    The public facing port of the proxy.
            This is the port on which the proxy will listen.
            This is the only port through which JupyterHub
            should be accessed by users.
            .. deprecated:: 0.9
                Use JupyterHub.bind_url
    Default: 8000
    Equivalent to: [--JupyterHub.port]
--pid-file=<Unicode>
    File to write PID
            Useful for daemonizing JupyterHub.
    Default: ''
    Equivalent to: [--JupyterHub.pid_file]
--log-file=<Unicode>
    DEPRECATED: use output redirection instead, e.g.
    jupyterhub &>> /var/log/jupyterhub.log
    Default: ''
    Equivalent to: [--JupyterHub.extra_log_file]
--log-level=<Enum>
    Set the log level by value or name.
    Choices: any of [0, 10, 20, 30, 40, 50, 'DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']
    Default: 30
    Equivalent to: [--Application.log_level]
-f=<Unicode>
    The config file to load
    Default: 'jupyterhub_config.py'
    Equivalent to: [--JupyterHub.config_file]
--config=<Unicode>
    The config file to load
    Default: 'jupyterhub_config.py'
    Equivalent to: [--JupyterHub.config_file]
--db=<Unicode>
    url for the database. e.g. `sqlite:///jupyterhub.sqlite`
    Default: 'sqlite:///jupyterhub.sqlite'
    Equivalent to: [--JupyterHub.db_url]

Class options
=============
The command-line option below sets the respective configurable class-parameter:
    --Class.parameter=value
This line is evaluated in Python, so simple expressions are allowed.
For instance, to set `C.a=[0,1,2]`, you may type this:
    --C.a='range(3)'

Application(SingletonConfigurable) options
------------------------------------------
--Application.log_datefmt=<Unicode>
    The date format used by logging formatters for %(asctime)s
    Default: '%Y-%m-%d %H:%M:%S'
--Application.log_format=<Unicode>
    The Logging format template
    Default: '[%(name)s]%(highlevel)s %(message)s'
--Application.log_level=<Enum>
    Set the log level by value or name.
    Choices: any of [0, 10, 20, 30, 40, 50, 'DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']
    Default: 30
--Application.show_config=<Bool>
    Instead of starting the Application, dump configuration to stdout
    Default: False
--Application.show_config_json=<Bool>
    Instead of starting the Application, dump configuration to stdout (as JSON)
    Default: False

JupyterHub(Application) options
-------------------------------
--JupyterHub.active_server_limit=<Int>
    Maximum number of concurrent servers that can be active at a time.
    Setting this can limit the total resources your users can consume.
    An active server is any server that's not fully stopped. It is considered
    active from the time it has been requested until the time that it has
    completely stopped.
    If this many user servers are active, users will not be able to launch new
    servers until a server is shut down. Spawn requests will be rejected with a
    429 error asking them to try again.
    If set to 0, no limit is enforced.
    Default: 0
--JupyterHub.active_user_window=<Int>
    Duration (in seconds) to determine the number of active users.
    Default: 1800
--JupyterHub.activity_resolution=<Int>
    Resolution (in seconds) for updating activity
    If activity is registered that is less than activity_resolution seconds more
    recent than the current value, the new value will be ignored.
    This avoids too many writes to the Hub database.
    Default: 30
--JupyterHub.admin_access=<Bool>
    Grant admin users permission to access single-user servers.
            Users should be properly informed if this is enabled.
    Default: False
--JupyterHub.admin_users=<set-item-1>...
    DEPRECATED since version 0.7.2, use Authenticator.admin_users instead.
    Default: set()
--JupyterHub.allow_named_servers=<Bool>
    Allow named single-user servers per user
    Default: False
--JupyterHub.answer_yes=<Bool>
    Answer yes to any questions (e.g. confirm overwrite)
    Default: False
--JupyterHub.api_page_default_limit=<Int>
    The default number of records returned by a paginated endpoint
    Default: 50
--JupyterHub.api_page_max_limit=<Int>
    The maximum number of records that can be returned at once
    Default: 200
--JupyterHub.api_tokens=<key-1>=<value-1>...
    PENDING DEPRECATION: consider using services
            Dict of token:username to be loaded into the database.
            Allows ahead-of-time generation of API tokens for use by externally managed services,
            which authenticate as JupyterHub users.
            Consider using services for general services that talk to the
    JupyterHub API.
    Default: {}
--JupyterHub.authenticate_prometheus=<Bool>
    Authentication for prometheus metrics
    Default: True
--JupyterHub.authenticator_class=<EntryPointType>
    Class for authenticating users.
            This should be a subclass of :class:`jupyterhub.auth.Authenticator`
            with an :meth:`authenticate` method that:
            - is a coroutine (asyncio or tornado)
            - returns username on success, None on failure
            - takes two arguments: (handler, data),
              where `handler` is the calling web.RequestHandler,
              and `data` is the POST form data from the login page.
            .. versionchanged:: 1.0
                authenticators may be registered via entry points,
                e.g. `c.JupyterHub.authenticator_class = 'pam'`
    Currently installed: 
      - default: jupyterhub.auth.PAMAuthenticator
      - dummy: jupyterhub.auth.DummyAuthenticator
      - null: jupyterhub.auth.NullAuthenticator
      - pam: jupyterhub.auth.PAMAuthenticator
    Default: 'jupyterhub.auth.PAMAuthenticator'
--JupyterHub.base_url=<URLPrefix>
    The base URL of the entire application.
            Add this to the beginning of all JupyterHub URLs.
            Use base_url to run JupyterHub within an existing website.
            .. deprecated:: 0.9
                Use JupyterHub.bind_url
    Default: '/'
--JupyterHub.bind_url=<Unicode>
    The public facing URL of the whole JupyterHub application.
            This is the address on which the proxy will bind.
            Sets protocol, ip, base_url
    Default: 'http://:8000'
--JupyterHub.cleanup_proxy=<Bool>
    Whether to shutdown the proxy when the Hub shuts down.
            Disable if you want to be able to teardown the Hub while leaving the
    proxy running.
            Only valid if the proxy was started by the Hub process.
            If both this and cleanup_servers are False, sending SIGINT to the Hub will
            only shutdown the Hub, leaving everything else running.
            The Hub should be able to resume from database state.
    Default: True
--JupyterHub.cleanup_servers=<Bool>
    Whether to shutdown single-user servers when the Hub shuts down.
            Disable if you want to be able to teardown the Hub while leaving the
    single-user servers running.
            If both this and cleanup_proxy are False, sending SIGINT to the Hub will
            only shutdown the Hub, leaving everything else running.
            The Hub should be able to resume from database state.
    Default: True
--JupyterHub.concurrent_spawn_limit=<Int>
    Maximum number of concurrent users that can be spawning at a time.
    Spawning lots of servers at the same time can cause performance problems for
    the Hub or the underlying spawning system. Set this limit to prevent bursts
    of logins from attempting to spawn too many servers at the same time.
    This does not limit the number of total running servers. See
    active_server_limit for that.
    If more than this many users attempt to spawn at a time, their requests will
    be rejected with a 429 error asking them to try again. Users will have to
    wait for some of the spawning servers to finish starting before they can
    start their own.
    If set to 0, no limit is enforced.
    Default: 100
--JupyterHub.config_file=<Unicode>
    The config file to load
    Default: 'jupyterhub_config.py'
--JupyterHub.confirm_no_ssl=<Bool>
    DEPRECATED: does nothing
    Default: False
--JupyterHub.cookie_max_age_days=<Float>
    Number of days for a login cookie to be valid.
            Default is two weeks.
    Default: 14
--JupyterHub.cookie_secret=<Union>
    The cookie secret to use to encrypt cookies.
            Loaded from the JPY_COOKIE_SECRET env variable by default.
            Should be exactly 256 bits (32 bytes).
    Default: traitlets.Undefined
--JupyterHub.cookie_secret_file=<Unicode>
    File in which to store the cookie secret.
    Default: 'jupyterhub_cookie_secret'
--JupyterHub.data_files_path=<Unicode>
    The location of jupyterhub data files (e.g. /usr/local/share/jupyterhub)
    Default: '$HOME/checkouts/readthedocs.org/user_builds/jupyterhub/...
--JupyterHub.db_kwargs=<key-1>=<value-1>...
    Include any kwargs to pass to the database connection.
            See sqlalchemy.create_engine for details.
    Default: {}
--JupyterHub.db_url=<Unicode>
    url for the database. e.g. `sqlite:///jupyterhub.sqlite`
    Default: 'sqlite:///jupyterhub.sqlite'
--JupyterHub.debug_db=<Bool>
    log all database transactions. This has A LOT of output
    Default: False
--JupyterHub.debug_proxy=<Bool>
    DEPRECATED since version 0.8: Use ConfigurableHTTPProxy.debug
    Default: False
--JupyterHub.default_server_name=<Unicode>
    If named servers are enabled, default name of server to spawn or open, e.g.
    by user-redirect.
    Default: ''
--JupyterHub.default_url=<Union>
    The default URL for users when they arrive (e.g. when user directs to "/")
    By default, redirects users to their own server.
    Can be a Unicode string (e.g. '/hub/home') or a callable based on the
    handler object:
    ::
        def default_url_fn(handler):
            user = handler.current_user
            if user and user.admin:
                return '/hub/admin'
            return '/hub/home'
        c.JupyterHub.default_url = default_url_fn
    Default: traitlets.Undefined
--JupyterHub.external_ssl_authorities=<key-1>=<value-1>...
    Dict authority:dict(files). Specify the key, cert, and/or
            ca file for an authority. This is useful for externally managed
            proxies that wish to use internal_ssl.
            The files dict has this format (you must specify at least a cert)::
                {
                    'key': '/path/to/key.key',
                    'cert': '/path/to/cert.crt',
                    'ca': '/path/to/ca.crt'
                }
            The authorities you can override: 'hub-ca', 'notebooks-ca',
            'proxy-api-ca', 'proxy-client-ca', and 'services-ca'.
            Use with internal_ssl
    Default: {}
--JupyterHub.extra_handlers=<list-item-1>...
    Register extra tornado Handlers for jupyterhub.
    Should be of the form ``("<regex>", Handler)``
    The Hub prefix will be added, so `/my-page` will be served at
    `/hub/my-page`.
    Default: []
--JupyterHub.extra_log_file=<Unicode>
    DEPRECATED: use output redirection instead, e.g.
    jupyterhub &>> /var/log/jupyterhub.log
    Default: ''
--JupyterHub.extra_log_handlers=<list-item-1>...
    Extra log handlers to set on JupyterHub logger
    Default: []
--JupyterHub.forwarded_host_header=<Unicode>
    Alternate header to use as the Host (e.g., X-Forwarded-Host)
            when determining whether a request is cross-origin
            This may be useful when JupyterHub is running behind a proxy that rewrites
            the Host header.
    Default: ''
--JupyterHub.generate_certs=<Bool>
    Generate certs used for internal ssl
    Default: False
--JupyterHub.generate_config=<Bool>
    Generate default config file
    Default: False
--JupyterHub.hub_bind_url=<Unicode>
    The URL on which the Hub will listen. This is a private URL for internal
    communication. Typically set in combination with hub_connect_url. If a unix
    socket, hub_connect_url **must** also be set.
    For example:
        "http://127.0.0.1:8081"
        "unix+http://%2Fsrv%2Fjupyterhub%2Fjupyterhub.sock"
    .. versionadded:: 0.9
    Default: ''
--JupyterHub.hub_connect_ip=<Unicode>
    The ip or hostname for proxies and spawners to use
            for connecting to the Hub.
            Use when the bind address (`hub_ip`) is 0.0.0.0, :: or otherwise different
            from the connect address.
            Default: when `hub_ip` is 0.0.0.0 or ::, use `socket.gethostname()`,
    otherwise use `hub_ip`.
            Note: Some spawners or proxy implementations might not support hostnames. Check your
            spawner or proxy documentation to see if they have extra requirements.
            .. versionadded:: 0.8
    Default: ''
--JupyterHub.hub_connect_port=<Int>
    DEPRECATED
    Use hub_connect_url
    .. versionadded:: 0.8
    .. deprecated:: 0.9
        Use hub_connect_url
    Default: 0
--JupyterHub.hub_connect_url=<Unicode>
    The URL for connecting to the Hub. Spawners, services, and the proxy will
    use this URL to talk to the Hub.
    Only needs to be specified if the default hub URL is not connectable (e.g.
    using a unix+http:// bind url).
    .. seealso::
        JupyterHub.hub_connect_ip
        JupyterHub.hub_bind_url
    .. versionadded:: 0.9
    Default: ''
--JupyterHub.hub_ip=<Unicode>
    The ip address for the Hub process to *bind* to.
            By default, the hub listens on localhost only. This address must be accessible from
            the proxy and user servers. You may need to set this to a public ip or '' for all
            interfaces if the proxy or user servers are in containers or on a different host.
            See `hub_connect_ip` for cases where the bind and connect address should differ,
            or `hub_bind_url` for setting the full bind URL.
    Default: '127.0.0.1'
--JupyterHub.hub_port=<Int>
    The internal port for the Hub process.
            This is the internal port of the hub itself. It should never be accessed directly.
            See JupyterHub.port for the public port to use when accessing jupyterhub.
            It is rare that this port should be set except in cases of port conflict.
            See also `hub_ip` for the ip and `hub_bind_url` for setting the full
    bind URL.
    Default: 8081
--JupyterHub.hub_routespec=<Unicode>
    The routing prefix for the Hub itself.
    Override to send only a subset of traffic to the Hub. Default is to use the
    Hub as the default route for all requests.
    This is necessary for normal jupyterhub operation, as the Hub must receive
    requests for e.g. `/user/:name` when the user's server is not running.
    However, some deployments using only the JupyterHub API may want to handle
    these events themselves, in which case they can register their own default
    target with the proxy and set e.g. `hub_routespec = /hub/` to serve only the
    hub's own pages, or even `/hub/api/` for api-only operation.
    Note: hub_routespec must include the base_url, if any.
    .. versionadded:: 1.4
    Default: '/'
--JupyterHub.implicit_spawn_seconds=<Float>
    Trigger implicit spawns after this many seconds.
            When a user visits a URL for a server that's not running,
            they are shown a page indicating that the requested server
            is not running with a button to spawn the server.
            Setting this to a positive value will redirect the user
            after this many seconds, effectively clicking this button
            automatically for the users,
            automatically beginning the spawn process.
            Warning: this can result in errors and surprising behavior
            when sharing access URLs to actual servers,
            since the wrong server is likely to be started.
    Default: 0
--JupyterHub.init_spawners_timeout=<Int>
    Timeout (in seconds) to wait for spawners to initialize
    Checking if spawners are healthy can take a long time if many spawners are
    active at hub start time.
    If it takes longer than this timeout to check, init_spawner will be left to
    complete in the background and the http server is allowed to start.
    A timeout of -1 means wait forever, which can mean a slow startup of the Hub
    but ensures that the Hub is fully consistent by the time it starts
    responding to requests. This matches the behavior of jupyterhub 1.0.
    .. versionadded:: 1.1.0
    Default: 10
--JupyterHub.internal_certs_location=<Unicode>
    The location to store certificates automatically created by
            JupyterHub.
            Use with internal_ssl
    Default: 'internal-ssl'
--JupyterHub.internal_ssl=<Bool>
    Enable SSL for all internal communication
            This enables end-to-end encryption between all JupyterHub components.
            JupyterHub will automatically create the necessary certificate
            authority and sign notebook certificates as they're created.
    Default: False
--JupyterHub.ip=<Unicode>
    The public facing ip of the whole JupyterHub application
            (specifically referred to as the proxy).
            This is the address on which the proxy will listen. The default is to
            listen on all interfaces. This is the only address through which JupyterHub
            should be accessed by users.
            .. deprecated:: 0.9
                Use JupyterHub.bind_url
    Default: ''
--JupyterHub.jinja_environment_options=<key-1>=<value-1>...
    Supply extra arguments that will be passed to Jinja environment.
    Default: {}
--JupyterHub.last_activity_interval=<Int>
    Interval (in seconds) at which to update last-activity timestamps.
    Default: 300
--JupyterHub.load_groups=<key-1>=<value-1>...
    Dict of 'group': ['usernames'] to load at startup.
            This strictly *adds* groups and users to groups.
            Loading one set of groups, then starting JupyterHub again with a different
            set will not remove users or groups from previous launches.
            That must be done through the API.
    Default: {}
--JupyterHub.load_roles=<list-item-1>...
    List of predefined role dictionaries to load at startup.
            For instance::
                load_roles = [
                                {
                                    'name': 'teacher',
                                    'description': "Access to users' information and group membership",
                                    'scopes': ['users', 'groups'],
                                    'users': ['cyclops', 'gandalf'],
                                    'services': [],
                                    'groups': []
                                }
                            ]
            All keys apart from 'name' are optional.
            See all the available scopes in the JupyterHub REST API documentation.
            Default roles are defined in roles.py.
    Default: []
--JupyterHub.log_datefmt=<Unicode>
    The date format used by logging formatters for %(asctime)s
    Default: '%Y-%m-%d %H:%M:%S'
--JupyterHub.log_format=<Unicode>
    The Logging format template
    Default: '[%(name)s]%(highlevel)s %(message)s'
--JupyterHub.log_level=<Enum>
    Set the log level by value or name.
    Choices: any of [0, 10, 20, 30, 40, 50, 'DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL']
    Default: 30
--JupyterHub.logo_file=<Unicode>
    Specify path to a logo image to override the Jupyter logo in the banner.
    Default: ''
--JupyterHub.named_server_limit_per_user=<Int>
    Maximum number of concurrent named servers that can be created by a user at
    a time.
    Setting this can limit the total resources a user can consume.
    If set to 0, no limit is enforced.
    Default: 0
--JupyterHub.oauth_token_expires_in=<Int>
    Expiry (in seconds) of OAuth access tokens.
            The default is to expire when the cookie storing them expires,
            according to `cookie_max_age_days` config.
            These are the tokens stored in cookies when you visit
            a single-user server or service.
            When they expire, you must re-authenticate with the Hub,
            even if your Hub authentication is still valid.
            If your Hub authentication is valid,
            logging in may be a transparent redirect as you refresh the page.
            This does not affect JupyterHub API tokens in general,
            which do not expire by default.
            Only tokens issued during the oauth flow
            accessing services and single-user servers are affected.
            .. versionadded:: 1.4
                OAuth token expires_in was not previously configurable.
            .. versionchanged:: 1.4
                Default now uses cookie_max_age_days so that oauth tokens
                which are generally stored in cookies,
                expire when the cookies storing them expire.
                Previously, it was one hour.
    Default: 0
--JupyterHub.pid_file=<Unicode>
    File to write PID
            Useful for daemonizing JupyterHub.
    Default: ''
--JupyterHub.port=<Int>
    The public facing port of the proxy.
            This is the port on which the proxy will listen.
            This is the only port through which JupyterHub
            should be accessed by users.
            .. deprecated:: 0.9
                Use JupyterHub.bind_url
    Default: 8000
--JupyterHub.proxy_api_ip=<Unicode>
    DEPRECATED since version 0.8 : Use ConfigurableHTTPProxy.api_url
    Default: ''
--JupyterHub.proxy_api_port=<Int>
    DEPRECATED since version 0.8 : Use ConfigurableHTTPProxy.api_url
    Default: 0
--JupyterHub.proxy_auth_token=<Unicode>
    DEPRECATED since version 0.8: Use ConfigurableHTTPProxy.auth_token
    Default: ''
--JupyterHub.proxy_check_interval=<Int>
    DEPRECATED since version 0.8: Use
    ConfigurableHTTPProxy.check_running_interval
    Default: 5
--JupyterHub.proxy_class=<EntryPointType>
    The class to use for configuring the JupyterHub proxy.
            Should be a subclass of :class:`jupyterhub.proxy.Proxy`.
            .. versionchanged:: 1.0
                proxies may be registered via entry points,
                e.g. `c.JupyterHub.proxy_class = 'traefik'`
    Currently installed: 
      - configurable-http-proxy: jupyterhub.proxy.ConfigurableHTTPProxy
      - default: jupyterhub.proxy.ConfigurableHTTPProxy
    Default: 'jupyterhub.proxy.ConfigurableHTTPProxy'
--JupyterHub.proxy_cmd=<command-item-1>...
    DEPRECATED since version 0.8. Use ConfigurableHTTPProxy.command
    Default: []
--JupyterHub.recreate_internal_certs=<Bool>
    Recreate all certificates used within JupyterHub on restart.
            Note: enabling this feature requires restarting all notebook
    servers.
            Use with internal_ssl
    Default: False
--JupyterHub.redirect_to_server=<Bool>
    Redirect user to server (if running), instead of control panel.
    Default: True
--JupyterHub.reset_db=<Bool>
    Purge and reset the database.
    Default: False
--JupyterHub.service_check_interval=<Int>
    Interval (in seconds) at which to check connectivity of services with web
    endpoints.
    Default: 60
--JupyterHub.service_tokens=<key-1>=<value-1>...
    Dict of token:servicename to be loaded into the database.
            Allows ahead-of-time generation of API tokens for use by externally
    managed services.
    Default: {}
--JupyterHub.services=<list-item-1>...
    List of service specification dictionaries.
            For instance::
                services = [
                    {
                        'name': 'cull_idle',
                        'command': ['/path/to/cull_idle_servers.py'],
                    },
                    {
                        'name': 'formgrader',
                        'url': 'http://127.0.0.1:1234',
                        'api_token': 'super-secret',
                        'environment': {},
                    }
                ]
    Default: []
--JupyterHub.show_config=<Bool>
    Instead of starting the Application, dump configuration to stdout
    Default: False
--JupyterHub.show_config_json=<Bool>
    Instead of starting the Application, dump configuration to stdout (as JSON)
    Default: False
--JupyterHub.shutdown_on_logout=<Bool>
    Shuts down all user servers on logout
    Default: False
--JupyterHub.spawner_class=<EntryPointType>
    The class to use for spawning single-user servers.
            Should be a subclass of :class:`jupyterhub.spawner.Spawner`.
            .. versionchanged:: 1.0
                spawners may be registered via entry points,
                e.g. `c.JupyterHub.spawner_class = 'localprocess'`
    Currently installed: 
      - default: jupyterhub.spawner.LocalProcessSpawner
      - localprocess: jupyterhub.spawner.LocalProcessSpawner
      - simple: jupyterhub.spawner.SimpleLocalProcessSpawner
    Default: 'jupyterhub.spawner.LocalProcessSpawner'
--JupyterHub.ssl_cert=<Unicode>
    Path to SSL certificate file for the public facing interface of the proxy
            When setting this, you should also set ssl_key
    Default: ''
--JupyterHub.ssl_key=<Unicode>
    Path to SSL key file for the public facing interface of the proxy
            When setting this, you should also set ssl_cert
    Default: ''
--JupyterHub.statsd_host=<Unicode>
    Host to send statsd metrics to. An empty string (the default) disables
    sending metrics.
    Default: ''
--JupyterHub.statsd_port=<Int>
    Port on which to send statsd metrics about the hub
    Default: 8125
--JupyterHub.statsd_prefix=<Unicode>
    Prefix to use for all metrics sent by jupyterhub to statsd
    Default: 'jupyterhub'
--JupyterHub.subdomain_host=<Unicode>
    Run single-user servers on subdomains of this host.
            This should be the full `https://hub.domain.tld[:port]`.
            Provides additional cross-site protections for javascript served by
    single-user servers.
            Requires `<username>.hub.domain.tld` to resolve to the same host as
    `hub.domain.tld`.
            In general, this is most easily achieved with wildcard DNS.
            When using SSL (i.e. always) this also requires a wildcard SSL
    certificate.
    Default: ''
--JupyterHub.template_paths=<list-item-1>...
    Paths to search for jinja templates, before using the default templates.
    Default: []
--JupyterHub.template_vars=<key-1>=<value-1>...
    Extra variables to be passed into jinja templates
    Default: {}
--JupyterHub.tornado_settings=<key-1>=<value-1>...
    Extra settings overrides to pass to the tornado application.
    Default: {}
--JupyterHub.trust_user_provided_tokens=<Bool>
    Trust user-provided tokens (via JupyterHub.service_tokens)
            to have good entropy.
            If you are not inserting additional tokens via configuration file,
            this flag has no effect.
            In JupyterHub 0.8, internally generated tokens do not
            pass through additional hashing because the hashing is costly
            and does not increase the entropy of already-good UUIDs.
            User-provided tokens, on the other hand, are not trusted to have good entropy by default,
            and are passed through many rounds of hashing to stretch the entropy of the key
            (i.e. user-provided tokens are treated as passwords instead of random keys).
            These keys are more costly to check.
            If your inserted tokens are generated by a good-quality mechanism,
            e.g. `openssl rand -hex 32`, then you can set this flag to True
            to reduce the cost of checking authentication tokens.
    Default: False
--JupyterHub.trusted_alt_names=<list-item-1>...
    Names to include in the subject alternative name.
            These names will be used for server name verification. This is useful
            if JupyterHub is being run behind a reverse proxy or services using ssl
            are on different hosts.
            Use with internal_ssl
    Default: []
--JupyterHub.trusted_downstream_ips=<list-item-1>...
    Downstream proxy IP addresses to trust.
            This sets the list of IP addresses that are trusted and skipped when processing
            the `X-Forwarded-For` header. For example, if an external proxy is used for TLS
            termination, its IP address should be added to this list to ensure the correct
            client IP addresses are recorded in the logs instead of the proxy server's IP
            address.
    Default: []
--JupyterHub.upgrade_db=<Bool>
    Upgrade the database automatically on start.
            Only safe if database is regularly backed up.
            Only SQLite databases will be backed up to a local file automatically.
    Default: False
--JupyterHub.use_legacy_stopped_server_status_code=<Bool>
    Return 503 rather than 424 when request comes in for a non-running server.
    Prior to JupyterHub 2.0, we returned a 503 when any request came in for a
    user server that was currently not running. By default, JupyterHub 2.0 will
    return a 424 - this makes operational metric dashboards more useful.
    JupyterLab < 3.2 expected the 503 to know if the user server is no longer
    running, and prompted the user to start their server. Set this config to
    true to retain the old behavior, so JupyterLab < 3.2 can continue to show
    the appropriate UI when the user server is stopped.
    This option will be removed in a future release.
    Default: False
--JupyterHub.user_redirect_hook=<Callable>
    Callable to affect behavior of /user-redirect/
    Receives 4 parameters:
      1. path - URL path that was provided after /user-redirect/
      2. request - A Tornado HTTPServerRequest representing the current request
      3. user - The currently authenticated user
      4. base_url - The base_url of the current hub, for relative redirects
    It should return the new URL to redirect to, or None to preserve current
    behavior.
    Default: None

Spawner(LoggingConfigurable) options
------------------------------------
--Spawner.args=<list-item-1>...
    Extra arguments to be passed to the single-user server.
    Some spawners allow shell-style expansion here, allowing you to use
    environment variables. Most, including the default, do not. Consult the
    documentation for your spawner to verify!
    Default: []
--Spawner.auth_state_hook=<Any>
    An optional hook function that you can implement to pass `auth_state` to the
    spawner after it has been initialized but before it starts. The `auth_state`
    dictionary may be set by the `.authenticate()` method of the authenticator.
    This hook enables you to pass some or all of that information to your
    spawner.
    Example::
        def userdata_hook(spawner, auth_state):
            spawner.userdata = auth_state["userdata"]
        c.Spawner.auth_state_hook = userdata_hook
    Default: None
--Spawner.cmd=<command-item-1>...
    The command used for starting the single-user server.
    Provide either a string or a list containing the path to the startup script
    command. Extra arguments, other than this path, should be provided via
    `args`.
    This is usually set if you want to start the single-user server in a
    different python environment (with virtualenv/conda) than JupyterHub itself.
    Some spawners allow shell-style expansion here, allowing you to use
    environment variables. Most, including the default, do not. Consult the
    documentation for your spawner to verify!
    Default: ['jupyterhub-singleuser']
--Spawner.consecutive_failure_limit=<Int>
    Maximum number of consecutive failures to allow before shutting down
    JupyterHub.
    This helps JupyterHub recover from a certain class of problem preventing
    launch in contexts where the Hub is automatically restarted (e.g. systemd,
    docker, kubernetes).
    A limit of 0 means no limit and consecutive failures will not be tracked.
    Default: 0
--Spawner.cpu_guarantee=<Float>
    Minimum number of cpu-cores a single-user notebook server is guaranteed to
    have available.
    If this value is set to 0.5, allows use of 50% of one CPU. If this value is
    set to 2, allows use of up to 2 CPUs.
    **This is a configuration setting. Your spawner must implement support for
    the limit to work.** The default spawner, `LocalProcessSpawner`, does
    **not** implement this support. A custom spawner **must** add support for
    this setting for it to be enforced.
    Default: None
--Spawner.cpu_limit=<Float>
    Maximum number of cpu-cores a single-user notebook server is allowed to use.
    If this value is set to 0.5, allows use of 50% of one CPU. If this value is
    set to 2, allows use of up to 2 CPUs.
    The single-user notebook server will never be scheduled by the kernel to use
    more cpu-cores than this. There is no guarantee that it can access this many
    cpu-cores.
    **This is a configuration setting. Your spawner must implement support for
    the limit to work.** The default spawner, `LocalProcessSpawner`, does
    **not** implement this support. A custom spawner **must** add support for
    this setting for it to be enforced.
    Default: None
--Spawner.debug=<Bool>
    Enable debug-logging of the single-user server
    Default: False
--Spawner.default_url=<Unicode>
    The URL the single-user server should start in.
    `{username}` will be expanded to the user's username
    Example uses:
    - You can set `notebook_dir` to `/` and `default_url` to `/tree/home/{username}` to allow people to
      navigate the whole filesystem from their notebook server, but still start in their home directory.
    - Start with `/notebooks` instead of `/tree` if `default_url` points to a notebook instead of a directory.
    - You can set this to `/lab` to have JupyterLab start by default, rather than Jupyter Notebook.
    Default: ''
--Spawner.disable_user_config=<Bool>
    Disable per-user configuration of single-user servers.
    When starting the user's single-user server, any config file found in the
    user's $HOME directory will be ignored.
    Note: a user could circumvent this if the user modifies their Python
    environment, such as when they have their own conda environments /
    virtualenvs / containers.
    Default: False
--Spawner.env_keep=<list-item-1>...
    List of environment variables for the single-user server to inherit from the
    JupyterHub process.
    This list is used to ensure that sensitive information in the JupyterHub
    process's environment (such as `CONFIGPROXY_AUTH_TOKEN`) is not passed to
    the single-user server's process.
    Default: ['PATH', 'PYTHONPATH', 'CONDA_ROOT', 'CONDA_DEFAULT_ENV', 'VI...
--Spawner.environment=<key-1>=<value-1>...
    Extra environment variables to set for the single-user server's process.
    Environment variables that end up in the single-user server's process come from 3 sources:
      - This `environment` configurable
      - The JupyterHub process' environment variables that are listed in `env_keep`
      - Variables to establish contact between the single-user notebook and the hub (such as JUPYTERHUB_API_TOKEN)
    The `environment` configurable should be set by JupyterHub administrators to
    add installation specific environment variables. It is a dict where the key
    is the name of the environment variable, and the value can be a string or a
    callable. If it is a callable, it will be called with one parameter (the
    spawner instance), and should return a string fairly quickly (no blocking
    operations please!).
    Note that the spawner class' interface is not guaranteed to be exactly the same
    across upgrades, so if you are using the callable take care to verify it
    continues to work after upgrades!
    .. versionchanged:: 1.2
        environment from this configuration has highest priority,
        allowing override of 'default' env variables,
        such as JUPYTERHUB_API_URL.
    Default: {}
--Spawner.http_timeout=<Int>
    Timeout (in seconds) before giving up on a spawned HTTP server
    Once a server has successfully been spawned, this is the amount of time we
    wait before assuming that the server is unable to accept connections.
    Default: 30
--Spawner.hub_connect_url=<Unicode>
    The URL the single-user server should use to connect to the Hub.
    If the Hub URL set in your JupyterHub config is not reachable from spawned
    notebooks, you can set a different URL with this option.
    Leave it as None if you don't need to change the URL.
    Default: None
--Spawner.ip=<Unicode>
    The IP address (or hostname) the single-user server should listen on.
    Usually either '127.0.0.1' (default) or '0.0.0.0'.
    The JupyterHub proxy implementation should be able to send packets to this
    interface.
    Subclasses which launch remotely or in containers should override the
    default to '0.0.0.0'.
    .. versionchanged:: 2.0
        Default changed to '127.0.0.1', from ''.
        In most cases, this does not result in a change in behavior,
        as '' was interpreted as 'unspecified',
        which used the subprocesses' own default, itself usually '127.0.0.1'.
    Default: '127.0.0.1'
--Spawner.mem_guarantee=<ByteSpecification>
    Minimum number of bytes a single-user notebook server is guaranteed to have
    available.
    Allows the following suffixes:
      - K -> Kilobytes
      - M -> Megabytes
      - G -> Gigabytes
      - T -> Terabytes
    **This is a configuration setting. Your spawner must implement support for
    the limit to work.** The default spawner, `LocalProcessSpawner`, does
    **not** implement this support. A custom spawner **must** add support for
    this setting for it to be enforced.
    Default: None
--Spawner.mem_limit=<ByteSpecification>
    Maximum number of bytes a single-user notebook server is allowed to use.
    Allows the following suffixes:
      - K -> Kilobytes
      - M -> Megabytes
      - G -> Gigabytes
      - T -> Terabytes
    If the single user server tries to allocate more memory than this, it will
    fail. There is no guarantee that the single-user notebook server will be
    able to allocate this much memory - only that it can not allocate more than
    this.
    **This is a configuration setting. Your spawner must implement support for
    the limit to work.** The default spawner, `LocalProcessSpawner`, does
    **not** implement this support. A custom spawner **must** add support for
    this setting for it to be enforced.
    Default: None
--Spawner.notebook_dir=<Unicode>
    Path to the notebook directory for the single-user server.
    The user sees a file listing of this directory when the notebook interface
    is started. The current interface does not easily allow browsing beyond the
    subdirectories in this directory's tree.
    `~` will be expanded to the home directory of the user, and {username} will
    be replaced with the name of the user.
    Note that this does *not* prevent users from accessing files outside of this
    path! They can do so with many other means.
    Default: ''
--Spawner.oauth_roles=<Union>
    Allowed roles for oauth tokens.
            This sets the maximum and default roles
            assigned to oauth tokens issued by a single-user server's
            oauth client (i.e. tokens stored in browsers after authenticating with the server),
            defining what actions the server can take on behalf of logged-in users.
            Default is an empty list, meaning minimal permissions to identify users,
            no actions can be taken on their behalf.
    Default: traitlets.Undefined
--Spawner.options_form=<Union>
    An HTML form for options a user can specify on launching their server.
    The surrounding `<form>` element and the submit button are already provided.
    For example:
    .. code:: html
        Set your key:
        <input name="key" val="default_key"></input>
        <br>
        Choose a letter:
        <select name="letter" multiple="true">
          <option value="A">The letter A</option>
          <option value="B">The letter B</option>
        </select>
    The data from this form submission will be passed on to your spawner in
    `self.user_options`
    Instead of a form snippet string, this could also be a callable that takes
    as one parameter the current spawner instance and returns a string. The
    callable will be called asynchronously if it returns a future, rather than a
    str. Note that the interface of the spawner class is not deemed stable
    across versions, so using this functionality might cause your JupyterHub
    upgrades to break.
    Default: traitlets.Undefined
--Spawner.options_from_form=<Callable>
    Interpret HTTP form data
    Form data will always arrive as a dict of lists of strings. Override this
    function to understand single-values, numbers, etc.
    This should coerce form data into the structure expected by
    self.user_options, which must be a dict, and should be JSON-serializable,
    though it can contain bytes in addition to standard JSON data types.
    This method should not have any side effects. Any handling of `user_options`
    should be done in `.start()` to ensure consistent behavior across servers
    spawned via the API and form submission page.
    Instances will receive this data on self.user_options, after passing through
    this function, prior to `Spawner.start`.
    .. versionchanged:: 1.0
        user_options are persisted in the JupyterHub database to be reused
        on subsequent spawns if no options are given.
        user_options is serialized to JSON as part of this persistence
        (with additional support for bytes in case of uploaded file data),
        and any non-bytes non-jsonable values will be replaced with None
        if the user_options are re-used.
    Default: traitlets.Undefined
--Spawner.poll_interval=<Int>
    Interval (in seconds) on which to poll the spawner for single-user server's
    status.
    At every poll interval, each spawner's `.poll` method is called, which
    checks if the single-user server is still running. If it isn't running, then
    JupyterHub modifies its own state accordingly and removes appropriate routes
    from the configurable proxy.
    Default: 30
--Spawner.port=<Int>
    The port for single-user servers to listen on.
    Defaults to `0`, which uses a randomly allocated port number each time.
    If set to a non-zero value, all Spawners will use the same port, which only
    makes sense if each server is on a different address, e.g. in containers.
    New in version 0.7.
    Default: 0
--Spawner.post_stop_hook=<Any>
    An optional hook function that you can implement to do work after the
    spawner stops.
    This can be set independent of any concrete spawner implementation.
    Default: None
--Spawner.pre_spawn_hook=<Any>
    An optional hook function that you can implement to do some bootstrapping
    work before the spawner starts. For example, create a directory for your
    user or load initial content.
    This can be set independent of any concrete spawner implementation.
    This may be a coroutine.
    Example::
        from subprocess import check_call
        def my_hook(spawner):
            username = spawner.user.name
            check_call(['./examples/bootstrap-script/bootstrap.sh', username])
        c.Spawner.pre_spawn_hook = my_hook
    Default: None
--Spawner.ssl_alt_names=<list-item-1>...
    List of SSL alt names
            May be set in config if all spawners should have the same value(s),
            or set at runtime by Spawners that know their names.
    Default: []
--Spawner.ssl_alt_names_include_local=<Bool>
    Whether to include DNS:localhost, IP:127.0.0.1 in alt names
    Default: True
--Spawner.start_timeout=<Int>
    Timeout (in seconds) before giving up on starting of single-user server.
    This is the timeout for start to return, not the timeout for the server to
    respond. Callers of spawner.start will assume that startup has failed if it
    takes longer than this. start should return when the server process is
    started and its location is known.
    Default: 60

Authenticator(LoggingConfigurable) options
------------------------------------------
--Authenticator.admin_users=<set-item-1>...
    Set of users that will have admin rights on this JupyterHub.
    Note: As of JupyterHub 2.0, full admin rights should not be required, and
    more precise permissions can be managed via roles.
    Admin users have extra privileges:
     - Use the admin panel to see list of users logged in
     - Add / remove users in some authenticators
     - Restart / halt the hub
     - Start / stop users' single-user servers
     - Can access each individual users' single-user server (if configured)
    Admin access should be treated the same way root access is.
    Defaults to an empty set, in which case no user has admin access.
    Default: set()
--Authenticator.allowed_users=<set-item-1>...
    Set of usernames that are allowed to log in.
    Use this with supported authenticators to restrict which users can log in.
    This is an additional list that further restricts users, beyond whatever
    restrictions the authenticator has in place. Any user in this list is
    granted the 'user' role on hub startup.
    If empty, does not perform any additional restriction.
    .. versionchanged:: 1.2
        `Authenticator.whitelist` renamed to `allowed_users`
    Default: set()
--Authenticator.auth_refresh_age=<Int>
    The max age (in seconds) of authentication info
            before forcing a refresh of user auth info.
            Refreshing auth info allows, e.g. requesting/re-validating auth
    tokens.
            See :meth:`.refresh_user` for what happens when user auth info is refreshed
            (nothing by default).
    Default: 300
--Authenticator.auto_login=<Bool>
    Automatically begin the login process
            rather than starting with a "Login with..." link at `/hub/login`
            To work, `.login_url()` must give a URL other than the default `/hub/login`,
            such as an oauth handler or another automatic login handler,
            registered with `.get_handlers()`.
            .. versionadded:: 0.8
    Default: False
--Authenticator.auto_login_oauth2_authorize=<Bool>
    Automatically begin login process for OAuth2 authorization requests
    When another application is using JupyterHub as OAuth2 provider, it sends
    users to `/hub/api/oauth2/authorize`. If the user isn't logged in already,
    and auto_login is not set, the user will be dumped on the hub's home page,
    without any context on what to do next.
    Setting this to true will automatically redirect users to login if they
    aren't logged in *only* on the `/hub/api/oauth2/authorize` endpoint.
    .. versionadded:: 1.5
    Default: False
--Authenticator.blocked_users=<set-item-1>...
    Set of usernames that are not allowed to log in.
    Use this with supported authenticators to block specific users from logging
    in. This is an additional block list that further restricts users, beyond
    whatever restrictions the authenticator has in place.
    If empty, does not perform any additional restriction.
    .. versionadded:: 0.9
    .. versionchanged:: 1.2
        `Authenticator.blacklist` renamed to `blocked_users`
    Default: set()
--Authenticator.delete_invalid_users=<Bool>
    Delete any users from the database that do not pass validation
            When JupyterHub starts, `.add_user` will be called
            on each user in the database to verify that all users are still valid.
            If `delete_invalid_users` is True,
            any users that do not pass validation will be deleted from the database.
            Use this if users might be deleted from an external system,
            such as local user accounts.
            If False (default), invalid users remain in the Hub's database
            and a warning will be issued.
            This is the default to avoid data loss due to config changes.
    Default: False
--Authenticator.enable_auth_state=<Bool>
    Enable persisting auth_state (if available).
            auth_state will be encrypted and stored in the Hub's database.
            This can include things like authentication tokens, etc.
            to be passed to Spawners as environment variables.
            Encrypting auth_state requires the cryptography package.
            Additionally, the JUPYTERHUB_CRYPT_KEY environment variable must
            contain one (or more, separated by ;) 32B encryption keys.
            These can be either base64 or hex-encoded.
            If encryption is unavailable, auth_state cannot be persisted.
            New in JupyterHub 0.8
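    A minimal sketch of enabling this (assuming openssl is available to generate
    a key, exported in the environment that launches the Hub)::
        # in the shell, before starting the Hub:
        #   export JUPYTERHUB_CRYPT_KEY=$(openssl rand -hex 32)
        # then, in jupyterhub_config.py:
        c.Authenticator.enable_auth_state = True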
    Default: False
--Authenticator.manage_groups=<Bool>
    Let authenticator manage user groups
            If True, Authenticator.authenticate and/or .refresh_user
            may return a list of group names in the 'groups' field,
            which will be assigned to the user.
            All group-assignment APIs are disabled if this is True.
    Default: False
--Authenticator.post_auth_hook=<Any>
    An optional hook function that you can implement to do some bootstrapping
    work during authentication. For example, loading user account details from
    an external system.
    This function is called after the user has passed all authentication checks
    and is ready to successfully authenticate. This function must return the
    authentication dict regardless of changes to it.
    This may be a coroutine.
    .. versionadded:: 1.0
    Example::
        import os, pwd
        def my_hook(authenticator, handler, authentication):
            user_data = pwd.getpwnam(authentication['name'])
            spawn_data = {
                'pw_data': user_data,
                'gid_list': os.getgrouplist(authentication['name'], user_data.pw_gid)
            }
            if authentication['auth_state'] is None:
                authentication['auth_state'] = {}
            authentication['auth_state']['spawn_data'] = spawn_data
            return authentication
        c.Authenticator.post_auth_hook = my_hook
    Default: None
--Authenticator.refresh_pre_spawn=<Bool>
    Force refresh of auth prior to spawn.
            This forces :meth:`.refresh_user` to be called prior to launching
            a server, to ensure that auth state is up-to-date.
            This can be important when e.g. auth tokens that may have expired
            are passed to the spawner via environment variables from auth_state.
            If refresh_user cannot refresh the user auth data,
            launch will fail until the user logs in again.
    Default: False
--Authenticator.username_map=<key-1>=<value-1>...
    Dictionary mapping authenticator usernames to JupyterHub users.
            Primarily used to normalize OAuth user names to local users.
    Default: {}
--Authenticator.username_pattern=<Unicode>
    Regular expression pattern that all valid usernames must match.
    If a username does not match the pattern specified here, authentication will
    not be attempted.
    If not set, allow any username.
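    Example (a minimal sketch; the specific pattern is only an illustration)::
        # only allow lowercase alphanumeric usernames, 2-32 characters long
        c.Authenticator.username_pattern = r'^[a-z][a-z0-9]{1,31}$'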
    Default: ''
--Authenticator.whitelist=<set-item-1>...
    Deprecated, use `Authenticator.allowed_users`
    Default: set()

CryptKeeper(SingletonConfigurable) options
------------------------------------------
--CryptKeeper.keys=<list-item-1>...
    Default: []
--CryptKeeper.n_threads=<Int>
    The number of threads to allocate for encryption
    Default: 2

Examples
--------

    generate default config file:

            jupyterhub --generate-config -f /etc/jupyterhub/jupyterhub_config.py

        spawn the server on 10.0.1.2:443 with https:

            jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert

JupyterHub and OAuth

JupyterHub uses OAuth 2 internally as a mechanism for authenticating users. As such, JupyterHub itself always functions as an OAuth provider. More on what that means below.

Additionally, JupyterHub is often deployed with oauthenticator, where an external identity provider, such as GitHub or KeyCloak, is used to authenticate users. When this is the case, there are two nested oauth flows: an internal oauth flow where JupyterHub is the provider, and an external oauth flow, where JupyterHub is a client.

This means that when you are using JupyterHub, there is always at least one and often two layers of OAuth involved in a user logging in and accessing their server.

Some relevant points:

  • Single-user servers never need to communicate with or be aware of the upstream provider configured in your Authenticator. As far as they are concerned, only JupyterHub is an OAuth provider, and how users authenticate with the Hub itself is irrelevant.

  • When talking to a single-user server, there are almost always two tokens: a token issued to the server itself to communicate with the Hub API, and a second per-user token in the browser to represent the completed login process and authorized permissions. More on this later.

Key OAuth terms

Here are some key definitions to keep in mind when we are talking about OAuth. You can also read more detail here.

  • provider: the entity responsible for managing identity and authorization, always a web server. JupyterHub is always an oauth provider for JupyterHub’s components. When OAuthenticator is used, an external service, such as GitHub or KeyCloak, is also an oauth provider.

  • client: an entity that requests OAuth tokens on a user’s behalf, generally a web server of some kind. OAuth clients are services that delegate authentication and/or authorization to an OAuth provider. JupyterHub services or single-user servers are OAuth clients of the JupyterHub provider. When OAuthenticator is used, JupyterHub is itself also an OAuth client for the external oauth provider, e.g. GitHub.

  • browser: a user’s web browser, which makes requests and stores things like cookies.

  • token: the secret value used to represent a user’s authorization. This is the final product of the OAuth process.

  • code: a short-lived temporary secret that the client exchanges for a token at the conclusion of oauth, in what’s generally called the “oauth callback handler.”

One oauth flow

OAuth flow is what we call the sequence of HTTP requests involved in authenticating a user and issuing a token, ultimately used for authorized access to a service or single-user server.

A single oauth flow generally goes like this:

OAuth request and redirect
  1. A browser makes an HTTP request to an oauth client.

  2. There are no credentials, so the client redirects the browser to an “authorize” page on the oauth provider with some extra information:

    • the oauth client id of the client itself

    • the redirect uri to be redirected back to after completion

    • the scopes requested, which the user should be presented with to confirm. This is the “X would like to be able to Y on your behalf. Allow this?” page you see on all the “Login with …” pages around the Internet.

  3. During this authorize step, the browser must be authenticated with the provider. This is often already stored in a cookie, but if not the provider webapp must begin its own authentication process before serving the authorization page. This may even begin another oauth flow!

  4. After the user tells the provider that they want to proceed with the authorization, the provider records this authorization in a short-lived record called an oauth code.

  5. Finally, the oauth provider redirects the browser back to the oauth client’s “redirect uri” (or “oauth callback uri”), with the oauth code in a url parameter.

That’s the end of the requests made between the browser and the provider.

State after redirect

At this point:

  • The browser is authenticated with the provider

  • The user’s authorized permissions are recorded in an oauth code

  • The provider knows that the given oauth client’s requested permissions have been granted, but the client doesn’t know this yet.

  • All requests so far have been made directly by the browser. No requests have originated at the client or provider.

OAuth Client Handles Callback Request

Now we get to finish the OAuth process. Let’s dig into what the oauth client does when it handles the oauth callback request.

  • The OAuth client receives the code and makes an API request to the provider to exchange the code for a real token. This is the first direct request between the OAuth client and the provider.

  • Once the token is retrieved, the client usually makes a second API request to the provider to retrieve information about the owner of the token (the user). This is the step where behavior diverges for different OAuth providers. Up to this point, all oauth providers are the same, following the oauth specification. However, oauth does not define a standard for exchanging tokens for information about their owner or permissions (OpenID Connect does that), so this step may be different for each OAuth provider.

  • Finally, the oauth client stores its own record that the user is authorized in a cookie. This could be the token itself, or any other appropriate representation of successful authentication.

  • Last of all, now that credentials have been established, the browser can be redirected to the original URL where it started, to try the request again. If the client wasn’t able to keep track of the original URL all this time (not always easy!), you might end up back at a default landing page instead of where you started the login process. This is frustrating!

😮‍💨 phew.

So that’s one OAuth process.

Full sequence of OAuth in JupyterHub

Let’s go through the above oauth process in JupyterHub, with specific examples of each HTTP request and what information is contained. For bonus points, we are using the double-oauth example of JupyterHub configured with GitHubOAuthenticator.

To disambiguate, we will call the OAuth process where JupyterHub is the provider “internal oauth,” and the one with JupyterHub as a client “external oauth.”

Our starting point:

  • a user’s single-user server is running. Let’s call them danez

  • jupyterhub is running with GitHub as an oauth provider (this means two full instances of oauth),

  • Danez has a fresh browser session with no cookies yet

First request:

  • browser->single-user server running JupyterLab or Jupyter Classic

  • GET /user/danez/notebooks/mynotebook.ipynb

  • no credentials, so single-user server (as an oauth client) starts internal oauth process with JupyterHub (the provider)

  • response: 302 redirect -> /hub/api/oauth2/authorize with:

    • client-id=jupyterhub-user-danez

    • redirect-uri=/user/danez/oauth_callback (we’ll come back later!)

Second request, following redirect:

  • browser->jupyterhub

  • GET /hub/api/oauth2/authorize

  • no credentials, so jupyterhub starts external oauth process with GitHub

  • response: 302 redirect -> https://github.com/login/oauth/authorize with:

    • client-id=jupyterhub-client-uuid

    • redirect-uri=/hub/oauth_callback (we’ll come back later!)

pause This is where JupyterHub configuration comes into play. Recall, in this case JupyterHub is using:

c.JupyterHub.authenticator_class = 'github'

That means authenticating a request to the Hub itself starts a second, external oauth process with GitHub as a provider. This external oauth process is optional, though. If you were using the default username+password PAMAuthenticator, this redirect would have been to /hub/login instead, to present the user with a login form.

Third request, following redirect:

  • browser->GitHub

  • GET https://github.com/login/oauth/authorize

Here, GitHub prompts for login and asks for confirmation of authorization (more redirects if you aren’t logged in to GitHub yet, but ultimately back to this /authorize URL).

After successful authorization (either by looking up a pre-existing authorization, or recording it via form submission) GitHub issues an oauth code and redirects to /hub/oauth_callback?code=github-code

Next request:

  • browser->JupyterHub

  • GET /hub/oauth_callback?code=github-code

Inside the callback handler, JupyterHub makes two API requests:

The first:

  • JupyterHub->GitHub

  • POST https://github.com/login/oauth/access_token

  • request made with oauth code from url parameter

  • response includes an access token

The second:

  • JupyterHub->GitHub

  • GET https://api.github.com/user

  • request made with access token in the Authorization header

  • response is the user model, including username, email, etc.

Now the external oauth callback request completes with:

  • set cookie on /hub/ path, recording jupyterhub authentication so we don’t need to do external oauth with GitHub again for a while

  • redirect -> /hub/api/oauth2/authorize

🎉 At this point, we have completed our first OAuth flow! 🎉

Now, we get our first repeated request:

  • browser->jupyterhub

  • GET /hub/api/oauth2/authorize

  • this time with credentials, so jupyterhub either

    1. serves the internal authorization confirmation page, or

    2. automatically accepts authorization (shortcut taken when a user is visiting their own server)

  • redirect -> /user/danez/oauth_callback?code=jupyterhub-code

Here, we start the same oauth callback process as before, but at Danez’s single-user server for the internal oauth.

  • browser->single-user server

  • GET /user/danez/oauth_callback


Inside the internal oauth callback handler, Danez’s server makes two API requests to JupyterHub:

The first:

  • single-user server->JupyterHub

  • POST /hub/api/oauth2/token

  • request made with oauth code from url parameter

  • response includes an API token

The second:

  • single-user server->JupyterHub

  • GET /hub/api/user

  • request made with token in the Authorization header

  • response is the user model, including username, groups, etc.

Finally completing GET /user/danez/oauth_callback:

  • response sets cookie, storing encrypted access token

  • finally redirects back to the original /user/danez/notebooks/mynotebook.ipynb

Final request:

  • browser -> single-user server

  • GET /user/danez/notebooks/mynotebook.ipynb

  • encrypted jupyterhub token in cookie

To authenticate this request, the single token stored in the encrypted cookie is passed to the Hub for verification:

  • single-user server -> Hub

  • GET /hub/api/user

  • browser’s token in Authorization header

  • response: user model with name, groups, etc.

If the user model matches who should be allowed (e.g. Danez), then the request is allowed. See Scopes in JupyterHub for how JupyterHub uses scopes to determine authorized access to servers and services.
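
You can reproduce this verification step by hand. The sketch below is illustrative: it assumes the Hub API is reachable at the default http://127.0.0.1:8081, and $TOKEN stands in for any valid JupyterHub API token:

curl -H "Authorization: token $TOKEN" http://127.0.0.1:8081/hub/api/user
# the response is the JSON user model, e.g. {"name": "danez", "groups": [...], ...}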

the end

Token caches and expiry

Because tokens represent information from an external source, they can become ‘stale’, or the information they represent may no longer be accurate. For example, a user’s GitHub account may no longer be authorized to use JupyterHub; that change should ultimately propagate to revoking access and forcing the user to log in again.

To handle this, OAuth tokens and the various places they are stored can expire, which should have the same effect as no credentials, and trigger the authorization process again.

In JupyterHub’s internal oauth, we have these layers of information that can go stale:

  • The oauth client has a cache of Hub responses for tokens, so it doesn’t need to make API requests to the Hub for every request it receives. This cache has an expiry of five minutes by default, and is governed by the configuration HubAuth.cache_max_age in the single-user server.

  • The internal oauth token is stored in a cookie, which has its own expiry (default: 14 days), governed by JupyterHub.cookie_max_age_days.

  • The internal oauth token can also itself expire, which is by default the same as the cookie expiry, since it makes sense for the token itself and the place it is stored to expire at the same time. This is governed by JupyterHub.cookie_max_age_days by default, or can be overridden by JupyterHub.oauth_token_expires_in (a configuration sketch follows this list).
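
A minimal sketch of tuning these expiry layers in jupyterhub_config.py (the values are arbitrary examples, not recommendations):

# shorten the cookie (and therefore the default token) lifetime to 7 days
c.JupyterHub.cookie_max_age_days = 7
# expire the internal oauth token itself after one hour, independent of the cookie
c.JupyterHub.oauth_token_expires_in = 3600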

That’s all for internal auth storage, but the information from the external authentication provider (could be PAM or GitHub OAuth, etc.) can also expire. Authenticator configuration governs when JupyterHub needs to ask again, triggering the external login process anew before letting a user proceed.

  • jupyterhub-hub-login cookie stores that a browser is authenticated with the Hub. This expires according to JupyterHub.cookie_max_age_days configuration, with a default of 14 days. The jupyterhub-hub-login cookie is encrypted with JupyterHub.cookie_secret configuration.

  • Authenticator.refresh_user() is a method to refresh a user’s auth info. By default, it does nothing, but it can return an updated user model if a user’s information has changed, or force a full login process again if needed (see the sketch after this list).

  • Authenticator.auth_refresh_age configuration governs how often refresh_user() will be called to check if a user must login again (default: 300 seconds).

  • Authenticator.refresh_pre_spawn configuration governs whether refresh_user() should be called prior to spawning a server, to force fresh auth info when a server is launched (default: False). This can be useful when Authenticators pass access tokens to spawner environments, to ensure they aren’t getting a stale token that’s about to expire.
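
Below is a minimal, hypothetical sketch of a custom Authenticator that implements refresh_user(), paired with the refresh settings above. The check_token_still_valid() helper is an assumption standing in for whatever check your identity provider offers; it is not part of JupyterHub:

from jupyterhub.auth import PAMAuthenticator

async def check_token_still_valid(token):
    # hypothetical placeholder: a real deployment would call the upstream
    # identity provider here; this stub treats any non-empty token as valid
    return bool(token)

class MyAuthenticator(PAMAuthenticator):
    async def refresh_user(self, user, handler=None):
        auth_state = await user.get_auth_state()
        token = auth_state.get('access_token') if auth_state else None
        if await check_token_still_valid(token):
            return True   # auth info is still fresh; keep the current user model
        return False      # force the user through the full login flow again

c.JupyterHub.authenticator_class = MyAuthenticator
c.Authenticator.auth_refresh_age = 300    # re-check at most every 5 minutes
c.Authenticator.refresh_pre_spawn = True  # also re-check right before spawning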

So what happens when these things expire or get stale?

  • If the HubAuth token response cache expires, when a request is made with a token, the Hub is asked for the latest information about the token. This usually has no visible effect, since it is just refreshing a cache. If it turns out that the token itself has expired or been revoked, the request will be denied.

  • If the token has expired, but is still in the cookie: when the token response cache expires, the next time the server asks the hub about the token, no user will be identified and the internal oauth process begins again.

  • If the token cookie expires, the next browser request will be made with no credentials, and the internal oauth process will begin again. This will usually have the form of a transparent redirect browsers won’t notice. However, if this occurs on an API request in a long-lived page visit such as a JupyterLab session, the API request may fail and require a page refresh to get renewed credentials.

  • If the JupyterHub cookie expires, the next time the browser makes a request to the Hub, the Hub’s authorization process must begin again (e.g. login with GitHub). Hub cookie expiry on its own does not mean that a user can no longer access their single-user server!

  • If credentials from the upstream provider (e.g. GitHub) become stale or outdated, these will not be refreshed until/unless refresh_user is called and refresh_user() on the given Authenticator is implemented to perform such a check. At this point, few Authenticators implement refresh_user to support this feature. If your Authenticator does not or cannot implement refresh_user, the only way to force a check is to reset the JupyterHub.cookie_secret encryption key, which invalidates the jupyterhub-hub-login cookie for all users.

Logging out

Logging out of JupyterHub means clearing and revoking many of these credentials:

  • The jupyterhub-hub-login cookie is revoked, meaning the next request to the Hub itself will require a new login.

  • The token stored in the jupyterhub-user-username cookie for the single-user server will be revoked, based on its association with jupyterhub-session-id, but the cookie itself cannot be cleared at this point

  • The shared jupyterhub-session-id is cleared, which ensures that the HubAuth token response cache will not be used, and the next request with the expired token will ask the Hub, which will inform the single-user server that the token has expired

Extra bits
A tale of two tokens

TODO: discuss API token issued to server at startup ($JUPYTERHUB_API_TOKEN) and oauth-issued token in the cookie, and some details of how JupyterLab currently deals with that. They are different, and JupyterLab should be making requests using the token from the cookie, not the token from the server, but that is not currently the case.

Redirect loops

In general, an authenticated web endpoint has this behavior, based on the authentication/authorization state of the browser:

  • If authorized, allow the request to happen

  • If authenticated (I know who you are) but not authorized (you are not allowed), fail with a 403 permission denied error

  • If not authenticated, start a redirect process to establish authorization, which should end in a redirect back to the original URL to try again. This is why problems in authentication result in redirect loops! If the second request fails to detect the authentication that should have been established during the redirect, it will start the authentication redirect process over again, and keep redirecting in a loop until the browser balks.

Administrators guide

Administrator’s Guide

This guide covers best-practices, tips, common questions and operations, as well as other information relevant to running your own JupyterHub over time.

Troubleshooting

When troubleshooting, you may see unexpected behaviors or receive an error message. This section provides links for identifying the cause of the problem and how to resolve it.

Behavior

  • JupyterHub proxy fails to start

  • sudospawner fails to run

  • What is the default behavior when none of the lists (admin, allowed, allowed groups) are set?

  • JupyterHub Docker container not accessible at localhost

Errors

  • 500 error after spawning my single-user server

How do I…?

  • Use a chained SSL certificate

  • Install JupyterHub without a network connection

  • I want access to the whole filesystem, but still default users to their home directory

  • How do I increase the number of pySpark executors on YARN?

  • How do I use JupyterLab’s prerelease version with JupyterHub?

  • How do I set up JupyterHub for a workshop (when users are not known ahead of time)?

  • How do I set up rotating daily logs?

  • Toree integration with HDFS rack awareness script

  • Where do I find Docker images and Dockerfiles related to JupyterHub?

Troubleshooting commands

Behavior
JupyterHub proxy fails to start

If you have tried to start the JupyterHub proxy and it fails to start:

  • check if the JupyterHub IP configuration setting is c.JupyterHub.ip = '*'; if it is, try c.JupyterHub.ip = ''

  • Try starting with jupyterhub --ip=0.0.0.0

Note: If this occurs on Ubuntu/Debian, check that you are using a recent version of Node.js. Some versions of Ubuntu/Debian come with a version of node that is very old, and it is necessary to update node.

sudospawner fails to run

If the sudospawner script is not found in the path, sudospawner will not run. To avoid this, specify sudospawner’s absolute path. For example, start jupyterhub with:

jupyterhub --SudoSpawner.sudospawner_path='/absolute/path/to/sudospawner'

or add:

c.SudoSpawner.sudospawner_path = '/absolute/path/to/sudospawner'

to the config file, jupyterhub_config.py.

What is the default behavior when none of the lists (admin, allowed, allowed groups) are set?

When nothing is given for these lists, there will be no admins, and all users who can authenticate on the system (i.e. all the unix users on the server with a password) will be allowed to start a server. The allowed username set lets you limit this to a particular set of users, and admin_users lets you specify who among them may use the admin interface (not necessary, unless you need to do things like inspect other users’ servers, or modify the user list at runtime).
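
A minimal jupyterhub_config.py sketch of narrowing this down (the usernames are placeholders):

# only these users may log in at all
c.Authenticator.allowed_users = {'alice', 'bob', 'carol'}
# of those, only alice may use the admin interface
c.Authenticator.admin_users = {'alice'}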

JupyterHub Docker container not accessible at localhost

Even though the command to start your Docker container exposes port 8000 (docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub), it is possible that the IP address itself is not accessible/visible. As a result, when you try http://localhost:8000 in your browser, you are unable to connect even though the container is running properly. One workaround is to explicitly tell JupyterHub to start at 0.0.0.0, which is visible to everyone. Try this command: docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub --ip 0.0.0.0 --port 8000

How can I kill ports from JupyterHub managed services that have been orphaned?

I started JupyterHub + nbgrader on the same host without containers. When I try to restart JupyterHub + nbgrader with this configuration, errors appear that the service accounts cannot start because the ports are being used.

How can I kill the processes that are using these ports?

Run the following command:

sudo kill -9 $(sudo lsof -t -i:<service_port>)

Where <service_port> is the port used by the nbgrader course service. This configuration is specified in jupyterhub_config.py.

Why am I getting a Spawn failed error message?

After successfully logging in to JupyterHub with a compatible authenticator, I get a ‘Spawn failed’ error message in the browser. The JupyterHub logs contain an error like KeyError: "getpwnam(): name not found: <my_user_name>".

This issue occurs when the authenticator requires a local system user to exist. In these cases, you need to use a spawner that does not require an existing system user account, such as DockerSpawner or KubeSpawner.

How can I run JupyterHub with sudo but use my current env vars and virtualenv location?

When launching JupyterHub with sudo jupyterhub I get import errors and my environment variables don’t work.

When launching services with sudo ... the shell won’t have the same environment variables or PATHs in place. The most direct way to solve this issue is to use the full path to your python environment and add environment variables. For example:

sudo MY_ENV=abc123 \
  /home/foo/venv/bin/python3 \
  /srv/jupyterhub/jupyterhub
How can I view the logs for JupyterHub or the user’s Notebook servers when using the DockerSpawner?

Use docker logs <container> where <container> is the container name defined within docker-compose.yml. For example, to view the logs of the JupyterHub container use:

docker logs hub

By default, the user’s notebook server is named jupyter-<username> where username is the user’s username within JupyterHub’s db. So if you wanted to see the logs for user foo you would use:

docker logs jupyter-foo

You can also tail logs to view them in real time using the -f option:

docker logs -f hub
Errors
500 error after spawning my single-user server

You receive a 500 error when accessing the URL /user/<your_name>/.... This is often seen when your single-user server cannot verify your user cookie with the Hub.

There are two likely reasons for this:

  1. The single-user server cannot connect to the Hub’s API (networking configuration problems)

  2. The single-user server cannot authenticate its requests (invalid token)

Symptoms

The main symptom is a failure to load any page served by the single-user server, met with a 500 error. This is typically the first page at /user/<your_name> after logging in or clicking “Start my server”. When a single-user notebook server receives a request, the notebook server makes an API request to the Hub to check if the cookie corresponds to the right user. This request is logged.

If everything is working, the response logged will be similar to this:

200 GET /hub/api/authorizations/cookie/jupyterhub-token-name/[secret] (@10.0.1.4) 6.10ms

You should see a similar 200 message, as above, in the Hub log when you first visit your single-user notebook server. If you don’t see this message in the log, it may mean that your single-user notebook server isn’t connecting to your Hub.

If you see 403 (forbidden) like this, it’s likely a token problem:

403 GET /hub/api/authorizations/cookie/jupyterhub-token-name/[secret] (@10.0.1.4) 4.14ms

Check the logs of the single-user notebook server, which may have more detailed information on the cause.

Causes and resolutions
No authorization request

If you make an API request and it is not received by the server, you likely have a network configuration issue. Often, this happens when the Hub is only listening on 127.0.0.1 (default) and the single-user servers are not on the same ‘machine’ (can be physically remote, or in a docker container or VM). The fix for this case is to make sure that c.JupyterHub.hub_ip is an address that all single-user servers can connect to, e.g.:

c.JupyterHub.hub_ip = '10.0.0.1'
Proxy settings (403 GET)

When your whole JupyterHub sits behind an organization-wide proxy (not a reverse proxy like NGINX as part of your setup, and not the configurable-http-proxy), the environment variables HTTP_PROXY, HTTPS_PROXY, http_proxy and https_proxy might be set. This confuses the jupyterhub-singleuser servers: when connecting to the Hub for authorization, they connect via the proxy instead of directly to the Hub on localhost. The proxy might deny the request (403 GET). This makes the single-user server think it has an invalid auth token. To circumvent this, add <hub_url>,<hub_ip>,localhost,127.0.0.1 to the environment variables NO_PROXY and no_proxy.
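
How you set these variables depends on how the single-user servers are launched; a minimal sketch, assuming the Hub is reachable at 10.0.0.1 and you control the spawn environment:

# in the environment that launches the single-user servers:
#   export NO_PROXY="localhost,127.0.0.1,10.0.0.1"
#   export no_proxy="localhost,127.0.0.1,10.0.0.1"
# or equivalently, in jupyterhub_config.py:
c.Spawner.environment = {
    'NO_PROXY': 'localhost,127.0.0.1,10.0.0.1',
    'no_proxy': 'localhost,127.0.0.1,10.0.0.1',
}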

Launching Jupyter Notebooks to run as an externally managed JupyterHub service with the jupyterhub-singleuser command returns a JUPYTERHUB_API_TOKEN error

JupyterHub services allow processes to interact with JupyterHub’s REST API. Example use-cases include:

  • Secure Testing: provide a canonical Jupyter Notebook for testing production data to reduce the number of entry points into production systems.

  • Grading Assignments: provide access to shared Jupyter Notebooks that may be used for management tasks such as grading assignments.

  • Private Dashboards: share dashboards with certain group members.

If possible, try to run the Jupyter Notebook as an externally managed service with one of the provided jupyter/docker-stacks.

Standard JupyterHub installations include a jupyterhub-singleuser command which is built from the jupyterhub.singleuser:main method. The jupyterhub-singleuser command is the default command when JupyterHub launches single-user Jupyter Notebooks. One of the goals of this command is to make sure the version of JupyterHub installed within the Jupyter Notebook coincides with the version of the JupyterHub server itself.

If you launch a Jupyter Notebook with the jupyterhub-singleuser command directly from the command line the Jupyter Notebook won’t have access to the JUPYTERHUB_API_TOKEN and will return:

    JUPYTERHUB_API_TOKEN env is required to run jupyterhub-singleuser.
    Did you launch it manually?

If you plan on testing jupyterhub-singleuser independently from JupyterHub, then you can set the api token environment variable. For example, if you were to run the single-user Jupyter Notebook on the host, then:

export JUPYTERHUB_API_TOKEN=my_secret_token
jupyterhub-singleuser

With a docker container, pass in the environment variable with the run command:

docker run -d \
  -p 8888:8888 \
  -e JUPYTERHUB_API_TOKEN=my_secret_token \
  jupyter/datascience-notebook:latest

This example demonstrates how to combine the use of the jupyterhub-singleuser environment variables when launching a Notebook as an externally managed service.

How do I…?
Use a chained SSL certificate

Some certificate providers, e.g. Entrust, may provide you with a chained certificate that contains multiple files. If you are using a chained certificate, you will need to concatenate the individual files by appending the chain cert and root cert to your host cert:

cat your_host.crt chain.crt root.crt > your_host-chained.crt

You would then set in your jupyterhub_config.py file the ssl_key and ssl_cert as follows:

c.JupyterHub.ssl_cert = 'your_host-chained.crt'
c.JupyterHub.ssl_key = 'your_host.key'
Example

Your certificate provider gives you the following files: example_host.crt, Entrust_L1Kroot.txt and Entrust_Root.txt.

Concatenate the files appending the chain cert and root cert to your host cert:

cat example_host.crt Entrust_L1Kroot.txt Entrust_Root.txt > example_host-chained.crt

You would then use the example_host-chained.crt as the value for JupyterHub’s ssl_cert. You may pass this value as a command line option when starting JupyterHub or more conveniently set the ssl_cert variable in JupyterHub’s configuration file, jupyterhub_config.py. In jupyterhub_config.py, set:

c.JupyterHub.ssl_cert = '/path/to/example_host-chained.crt'
c.JupyterHub.ssl_key = '/path/to/example_host.key'

where ssl_cert points to the chained certificate file example_host-chained.crt and ssl_key points to your private key example_host.key.

Then restart JupyterHub.

See also Enabling SSL encryption.

Install JupyterHub without a network connection

Both conda and pip can be used without a network connection. You can make your own repository (directory) of conda packages and/or wheels, and then install from there instead of the internet.

For instance, you can build JupyterHub wheels with pip and package configurable-http-proxy with npmbox:

python3 -m pip wheel jupyterhub
npmbox configurable-http-proxy
I want access to the whole filesystem, but still default users to their home directory

Setting the following in jupyterhub_config.py will configure access to the entire filesystem and set the default to the user’s home directory.

c.Spawner.notebook_dir = '/'
c.Spawner.default_url = '/home/%U' # %U will be replaced with the username
How do I increase the number of pySpark executors on YARN?

From the command line, pySpark executors can be configured using a command similar to this one:

pyspark --total-executor-cores 2 --executor-memory 1G

Cloudera documentation for configuring spark on YARN applications provides additional information. The pySpark configuration documentation is also helpful for programmatic configuration examples.

How do I use JupyterLab’s prerelease version with JupyterHub?

While JupyterLab is still under active development, we have had users ask about how to try out JupyterLab with JupyterHub.

You need to install and enable the JupyterLab extension system-wide, then you can change the default URL to /lab.

For instance:

python3 -m pip install jupyterlab
jupyter serverextension enable --py jupyterlab --sys-prefix

The important thing is that jupyterlab is installed and enabled in the single-user notebook server environment. For system users, this means system-wide, as indicated above. For Docker containers, it means inside the single-user docker image, etc.

In jupyterhub_config.py, configure the Spawner to tell the single-user notebook servers to default to JupyterLab:

c.Spawner.default_url = '/lab'
How do I set up JupyterHub for a workshop (when users are not known ahead of time)?
  1. Set up JupyterHub using OAuthenticator for GitHub authentication

  2. Configure the admin list so that workshop leaders have administrator privileges.

Users will need a GitHub account to login and be authenticated by the Hub.
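
A minimal jupyterhub_config.py sketch of such a setup, assuming the oauthenticator package is installed and you have registered an OAuth application with GitHub (the client id, secret, callback URL, and usernames are placeholders):

c.JupyterHub.authenticator_class = 'github'
c.GitHubOAuthenticator.oauth_callback_url = 'https://your-hub.example.com/hub/oauth_callback'
c.GitHubOAuthenticator.client_id = 'your-github-client-id'
c.GitHubOAuthenticator.client_secret = 'your-github-client-secret'
# workshop leaders (GitHub usernames) get admin rights
c.Authenticator.admin_users = {'leader1', 'leader2'}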

How do I set up rotating daily logs?

You can do this with logrotate, or pipe to logger to use syslog instead of directly to a file.

For example, with this logrotate config file:

/var/log/jupyterhub.log {
  copytruncate
  daily
}

and run this daily by putting a script in /etc/cron.daily/:

logrotate /path/to/above-config

Or use syslog:

jupyterhub | logger -t jupyterhub
Troubleshooting commands

The following commands provide additional detail about installed packages, versions, and system information that may be helpful when troubleshooting a JupyterHub deployment. The commands are:

  • System and deployment information

jupyter troubleshooting
  • Kernel information

jupyter kernelspec list
  • Debug logs when running JupyterHub

jupyterhub --debug
Toree integration with HDFS rack awareness script

The Apache Toree kernel will raise an issue when running with JupyterHub if the standard HDFS rack awareness script is used. This will materialize in the logs as a repeated WARN:

16/11/29 16:24:20 WARN ScriptBasedMapping: Exception running /etc/hadoop/conf/topology_script.py some.ip.address
ExitCodeException exitCode=1:   File "/etc/hadoop/conf/topology_script.py", line 63
    print rack
             ^
SyntaxError: Missing parentheses in call to 'print'

    at `org.apache.hadoop.util.Shell.runCommand(Shell.java:576)`

In order to resolve this issue, there are two potential options.

  1. Update HDFS core-site.xml, so the parameter "net.topology.script.file.name" points to a custom script (e.g. /etc/hadoop/conf/custom_topology_script.py). Copy the original script and change the first line to point to a Python 2 installation (e.g. /usr/bin/python).

  2. In spark-env.sh add a Python 2 installation to your path (e.g. export PATH=/opt/anaconda2/bin:$PATH).

Upgrading JupyterHub

JupyterHub offers easy upgrade pathways between minor versions. This document describes how to do these upgrades.

If you are using a JupyterHub distribution, you should consult the distribution’s documentation on how to upgrade. This document is if you have set up your own JupyterHub without using a distribution.

It is long because it is pretty detailed! Most likely, upgrading JupyterHub is painless, quick and with minimal user interruption.

Read the Changelog

The changelog contains information on what has changed with the new JupyterHub release, and any deprecation warnings. Read these notes to familiarize yourself with the coming changes. There might be new releases of authenticators & spawners you are using, so read the changelogs for those too!

Notify your users

If you are using the default configuration where configurable-http-proxy is managed by JupyterHub, your users will see service disruption during the upgrade process. You should notify them, and pick a time to do the upgrade where they will be least disrupted.

If you are using a different proxy, or running configurable-http-proxy independent of JupyterHub, your users will be able to continue using notebook servers they had already launched, but will not be able to launch new servers nor sign in.

Backup database & config

Before doing an upgrade, it is critical to back up:

  1. Your JupyterHub database (sqlite by default, or MySQL / Postgres if you used those). If you are using sqlite (the default), you should back up the jupyterhub.sqlite file (see the sketch after this list).

  2. Your jupyterhub_config.py file.

  3. Your users’ home directories. This is unlikely to be affected directly by a JupyterHub upgrade, but we recommend a backup since user data is critical.
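
A minimal sketch of backing up the first two items, assuming the default sqlite database and config live in /srv/jupyterhub:

cd /srv/jupyterhub
cp jupyterhub.sqlite jupyterhub.sqlite.backup-$(date +%Y%m%d)
cp jupyterhub_config.py jupyterhub_config.py.backup-$(date +%Y%m%d)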

Shutdown JupyterHub

Shut down the JupyterHub process. How you do this varies depending on how you have set up JupyterHub to run. Most likely, it is managed by a process supervisor of some sort (systemd, supervisord, or even docker). Use the supervisor-specific command to stop the JupyterHub process.
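
For example, a minimal sketch assuming the process is managed by systemd under a unit named jupyterhub, or by supervisord with a program of the same name:

sudo systemctl stop jupyterhub
# or, with supervisord:
sudo supervisorctl stop jupyterhub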

Upgrade JupyterHub packages

There are two environments where the jupyterhub package is installed:

  1. The hub environment, which is where the JupyterHub server process runs. This is started with the jupyterhub command, and is what people generally think of as JupyterHub.

  2. The notebook user environments. This is where the user notebook servers are launched from, and is probably custom to your own installation. This could be just one environment (different from the hub environment) that is shared by all users, one environment per user, or the same environment as the hub environment. The hub launches the jupyterhub-singleuser command in this environment, which in turn starts the notebook server.

You need to make sure the version of the jupyterhub package matches in both these environments. If you installed jupyterhub with pip, you can upgrade it with:

python3 -m pip install --upgrade jupyterhub==<version>

Where <version> is the version of JupyterHub you are upgrading to.

If you used conda to install jupyterhub, you should upgrade it with:

conda install -c conda-forge jupyterhub==<version>

Where <version> is the version of JupyterHub you are upgrading to.

You should also check for new releases of the authenticator & spawner you are using. You might wish to upgrade those packages too along with JupyterHub, or upgrade them separately.

Upgrade JupyterHub database

Once new packages are installed, you need to upgrade the JupyterHub database. From the hub environment, in the same directory as your jupyterhub_config.py file, you should run:

jupyterhub upgrade-db

This should find the location of your database, and run necessary upgrades for it.

SQLite database disadvantages

SQLite has some disadvantages when it comes to upgrading JupyterHub. These are:

  • upgrade-db may not work, and you may need to delete your database and start with a fresh one.

  • downgrade-db will not work if you want to roll back to an earlier version, so back up the jupyterhub.sqlite file before upgrading

What happens if I delete my database?

Losing the Hub database is often not a big deal. Information that resides only in the Hub database includes:

  • active login tokens (user cookies, service tokens)

  • users added via JupyterHub UI, instead of config files

  • info about running servers

If the following conditions are true, you should be fine clearing the Hub database and starting over:

  • users are specified in the config file, or log in using an external authentication provider (Google, GitHub, LDAP, etc)

  • user servers are stopped during upgrade

  • you don’t mind users having to log in again after the upgrade

Start JupyterHub

Once the database upgrade is completed, start the jupyterhub process again.

  1. Log-in and start the server to make sure things work as expected.

  2. Check the logs for any errors or deprecation warnings. You might have to update your jupyterhub_config.py file to deal with any deprecated options.

Congratulations, your JupyterHub has been upgraded!

Common log messages emitted by JupyterHub

When debugging errors and outages, looking at the logs emitted by JupyterHub is very helpful. This document describes some common log messages and what they mean.

Failing suspected API request to not-running server
Example

Your logs might be littered with lines that look slightly scary:

[W 2022-03-10 17:25:19.774 JupyterHub base:1349] Failing suspected API request to not-running server: /hub/user/<user-name>/api/metrics/v1
Most likely cause

This likely means that the user’s server has stopped running while they still have a browser tab open. For example, you might have 3 tabs open and shut your server down via one of them. Or you closed your laptop, your server was culled for inactivity, and then you reopened your laptop! The client-side code (JupyterLab, Classic Notebook, etc.) does not yet know that the server is dead, and continues to make some API requests. JupyterHub’s architecture means that the proxy routes all requests that don’t go to a running user server to the hub process itself. The hub process then explicitly returns a failure response, so the client knows that the server is not running anymore. This is used by JupyterLab to tell you your server is not running anymore, and to offer you the option to restart it.

Most commonly, you’ll see this in reference to the /api/metrics/v1 URL, used by jupyter-resource-usage.

Actions you can take

This log message is benign, and there is usually no action for you to take.

Changelog

For detailed changes from the prior release, click on the version number, and its link will bring up a GitHub listing of changes. Use git log on the command line for details.

Unreleased
2.2
2.2.2 2022-03-14

2.2.2 fixes a small regression in 2.2.1.

(full changelog)

Bugs fixed
Continuous integration improvements
Contributors to this release

(GitHub contributors page for this release)

@consideRatio | @manics | @minrk | @NarekA

2.2.1 2022-03-11

2.2.1 fixes a few small regressions in 2.2.0.

(full changelog)

Bugs fixed
Maintenance and upkeep improvements
Documentation
Contributors to this release

(GitHub contributors page for this release)

@choldgraf | @consideRatio | @minrk | @NarekA | @yuvipanda

2.2.0 2022-03-07

JupyterHub 2.2.0 is a small release. The main new feature is the ability of Authenticators to manage group membership, e.g. when the identity provider has its own concept of groups that should be preserved in JupyterHub.

The links to access user servers from the admin page have been restored.

(full changelog)

New features added
Enhancements made
Bugs fixed
Documentation improvements
Behavior Changes
Contributors to this release

(GitHub contributors page for this release)

@blink1073 | @clkao | @consideRatio | @cqzlxl | @dependabot | @dtaniwaki | @fcollonval | @GeorgianaElena | @github-actions | @kshitija08 | @ktaletsk | @manics | @minrk | @NarekA | @pre-commit-ci | @rajat404 | @rcthomas | @ryogesh | @rzo1 | @satra | @thomafred | @tmtabor | @tobi45 | @ykazakov

2.1
2.1.1 2022-01-25

2.1.1 is a tiny bugfix release, fixing an issue where admins did not receive the new read:metrics permission.

(full changelog)

Bugs fixed
Contributors to this release

(GitHub contributors page for this release)

@consideRatio | @dependabot | @manics | @minrk

2.1.0 2022-01-21

2.1.0 is a small bugfix release, resolving regressions in 2.0 and further refinements. In particular, the authenticated prometheus metrics endpoint did not work in 2.0 because it lacked a scope. To access the authenticated metrics endpoint with a token, upgrade to 2.1 and make sure the token/owner has the read:metrics scope.

Custom error messages for failed spawns are now handled more consistently on the spawn-progress API and the spawn-failed HTML page. Previously, spawn-progress did not relay the custom message provided by exception.jupyterhub_message, and full HTML messages in exception.jupyterhub_html_message can now be displayed in both contexts.

The long-deprecated, inconsistent behavior when users visited a URL for another user’s server, where they could sometimes be redirected back to their own server, has been removed in favor of consistent behavior based on the user’s permissions. To share a URL that will take any user to their own server, use https://my.hub/hub/user-redirect/path/....

(full changelog)

Enhancements made
  • relay custom messages in exception.jupyterhub_message in progress API #3764 (@minrk)

  • Add the capability to inform a connection to Alembic Migration Script #3762 (@DougTrajano)

Bugs fixed
  • Fix loading Spawner.user_options from db #3773 (@IgorBerman)

  • Add missing read:metrics scope for authenticated metrics endpoint #3770 (@minrk)

  • apply scope checks to some admin-or-self situations #3763 (@minrk)

Maintenance and upkeep improvements
  • DOCS: Add github metadata for edit button #3775 (@minrk)

Documentation improvements
  • Improve documentation about spawner exception handling #3765 (@twalcari)

Contributors to this release

(GitHub contributors page for this release)

@consideRatio | @dependabot | @DougTrajano | @IgorBerman | @minrk | @twalcari | @welcome

2.0
2.0.2 2022-01-10

2.0.2 fixes a regression in 2.0.1 causing false positives rejecting valid requests as cross-origin, mostly when JupyterHub is behind additional proxies.

(full changelog)

Bugs fixed
  • use outermost proxied entry when looking up browser protocol #3757 (@minrk)

Maintenance and upkeep improvements
Documentation improvements
Contributors to this release

(GitHub contributors page for this release)

@choldgraf | @consideRatio | @github-actions | @jakob-keller | @manics | @meeseeksmachine | @minrk | @pre-commit-ci | @welcome

2.0.1

(full changelog)

2.0.1 is a bugfix release, with some additional small improvements, especially in the new RBAC handling and admin page.

Several issues are fixed where users might not have the default ‘user’ role as expected.

Enhancements made
Bugs fixed
  • initialize new admin users with default roles #3735 (@minrk)

  • Fix missing f-string modifier #3733 (@manics)

  • accept token auth on /hub/user/... #3731 (@minrk)

  • simplify default role assignment #3720 (@minrk)

  • fix Spawner.oauth_roles config #3717 (@minrk)

  • Fix error message about Authenticator.pre_spawn_start #3716 (@minrk)

  • admin: Pass Base Url #3715 (@naatebarber)

  • Grant role after user creation during config load #3714 (@a3626a)

  • Avoid clearing user role membership when defining custom user scopes #3708 (@minrk)

  • cors: handle mismatched implicit/explicit ports in host header #3701 (@minrk)

Maintenance and upkeep improvements
  • clarify role argument in grant/strip_role #3727 (@minrk)

  • check for db clients before requesting install #3719 (@minrk)

  • run jsx tests in their own job #3698 (@minrk)

Documentation improvements
Contributors to this release

(GitHub contributors page for this release)

@a3626a | @betatim | @consideRatio | @github-actions | @kylewm | @manics | @minrk | @naatebarber | @pre-commit-ci | @sgaist | @welcome

2.0.0

JupyterHub 2.0 is a big release!

The most significant change is the addition of roles and scopes to the JupyterHub permissions model, allowing more fine-grained access control. Read more about it in the docs.

In particular, the ‘admin’ level of permissions should not be needed anymore, and you can now grant users and services only the permissions they need, not more. We encourage you to review permissions, especially any service or user with admin: true and consider assigning only the necessary roles and scopes.

JupyterHub 2.0 requires an update to the database schema, so make sure to read the upgrade documentation and backup your database before upgrading.

stop all servers before upgrading

Upgrading JupyterHub to 2.0 revokes all tokens issued before the upgrade, which means that single-user servers started before the upgrade will become inaccessible after the upgrade until they have been stopped and started again. To avoid this, it is best to shutdown all servers prior to the upgrade.

Other major changes that may require updates to your deployment, depending on what features you use:

  • List endpoints now support pagination, and have a max page size, which means API consumers must be updated to make paginated requests if you have a lot of users and/or groups.

  • Spawners have stopped specifying any command-line options to spawners by default. Previously, --ip and --port could be specified on the command-line. From 2.0 forward, JupyterHub will only communicate options to Spawners via environment variables, and the command to be launched is configured exclusively via Spawner.cmd and Spawner.args.

Other new features:

  • new Admin page, written in React. With RBAC, it should now be fully possible to implement a custom admin panel as a service via the REST API.

  • JupyterLab is the default UI for single-user servers, if available in the user environment. See more info in the docs about switching back to the classic notebook, if you are not ready to switch to JupyterLab.

  • NullAuthenticator is now bundled with JupyterHub, so you no longer need to install the nullauthenticator package to disable login, you can set c.JupyterHub.authenticator_class = 'null'.

  • Support jupyterhub --show-config option to see your current jupyterhub configuration.

  • Add expiration date dropdown to Token page

and major bug fixes:

  • Improve database rollback recovery on broken connections

and other changes:

  • Requests to a not-running server (e.g. visiting /user/someuser/) will return an HTTP 424 error instead of 503, making it easier to monitor for real deployment problems. JupyterLab in the user environment should be at least version 3.1.16 to recognize this error code as a stopped server. You can temporarily opt-in to the older behavior (e.g. if older JupyterLab is required) by setting c.JupyterHub.use_legacy_stopped_server_status_code = True.

Plus lots of little fixes along the way.

2.0.0 - 2021-12-01

(full changelog)

New features added
Enhancements made
Bugs fixed
  • Hub: only accept tokens in API requests #3686 (@minrk)

  • Forward-port fixes from 1.5.0 security release #3679 (@minrk)

  • raise 404 on admin attempt to spawn nonexistent user #3653 (@minrk)

  • new user token returns 200 instead of 201 #3646 (@joegasewicz)

  • Added base_url to path for jupyterhub-session-id cookie #3625 (@albertmichaelj)

  • Fix wrong name of auth_state_hook in the exception log #3569 (@dolfinus)

  • Stop injecting statsd parameters into the configurable HTTP proxy #3568 (@paccorsi)

  • explicit DB rollback for 500 errors #3566 (@nsshah1288)

  • don’t omit server model if it’s empty #3564 (@minrk)

  • ensure admin requests for missing users 404 #3563 (@minrk)

  • Avoid zombie processes in case of using LocalProcessSpawner #3543 (@dolfinus)

  • Fix regression where external services api_token became required #3531 (@consideRatio)

  • Fix allow_all check when only allow_admin is set #3526 (@dolfinus)

  • Bug: save_bearer_token (provider.py) passes a float value to the expires_at field (int) #3484 (@weisdd)

Maintenance and upkeep improvements
Documentation improvements
Contributors to this release

(GitHub contributors page for this release)

@0mar | @AbdealiJK | @albertmichaelj | @betatim | @bollwyvl | @choldgraf | @consideRatio | @cslocum | @danlester | @davidbrochart | @dependabot | @diurnalist | @dolfinus | @echarles | @edgarcosta | @ellisonbg | @eruditehassan | @icankeep | @IvanaH8 | @joegasewicz | @manics | @meeseeksmachine | @minrk | @mriedem | @naatebarber | @nsshah1288 | @octavd | @OrnithOrtion | @paccorsi | @panruipr | @pre-commit-ci | @rpwagner | @sgibson91 | @support | @twalcari | @VaishnaviHire | @warwing | @weisdd | @welcome | @willingc | @ykazakov | @yuvipanda

1.5

JupyterHub 1.5 is a security release, fixing a vulnerability ghsa-cw7p-q79f-m2v7 where JupyterLab users with multiple tabs open could fail to logout completely, leaving their browser with valid credentials until they logout again.

A few fully backward-compatible features have been backported from 2.0.

1.5.0 2021-11-04

(full changelog)

New features added
  • Backport #3636 to 1.4.x (opt-in support for JupyterHub.use_legacy_stopped_server_status_code) #3639 (@yuvipanda)

  • Backport PR #3552 on branch 1.4.x (Add expiration date dropdown to Token page) #3580 (@meeseeksmachine)

  • Backport PR #3488 on branch 1.4.x (Support auto login when used as a OAuth2 provider) #3579 (@meeseeksmachine)

Maintenance and upkeep improvements
Documentation improvements
  • use_legacy_stopped_server_status_code: use 1.* language #3676 (@manics)

Contributors to this release

(GitHub contributors page for this release)

@choldgraf | @consideRatio | @manics | @meeseeksmachine | @minrk | @support | @welcome | @yuvipanda

1.4

JupyterHub 1.4 is a small release, with several enhancements, bug fixes, and new configuration options.

There are no database schema changes requiring migration from 1.3 to 1.4.

1.4 is also the first version to start publishing docker images for arm64.

Notably, OAuth tokens stored in user cookies, used for accessing single-user servers and hub-authenticated services, have changed their expiration from one hour to the expiry of the cookie in which they are stored (default: two weeks). This is now also configurable via JupyterHub.oauth_token_expires_in.

The result is that it should be much less likely for auth tokens stored in cookies to expire during the lifetime of a server.

1.4.2 2021-06-15

1.4.2 is a small bugfix release for 1.4.

(full changelog)

Bugs fixed
  • Fix regression where external services api_token became required #3531 (@consideRatio)

  • Bug: save_bearer_token (provider.py) passes a float value to the expires_at field (int) #3484 (@weisdd)

Maintenance and upkeep improvements
Documentation improvements
Contributors to this release

(GitHub contributors page for this release)

@consideRatio | @davidbrochart | @icankeep | @minrk | @weisdd

1.4.1 2021-05-12

1.4.1 is a small bugfix release for 1.4.

(full changelog)

Enhancements made
Bugs fixed
  • define Spawner.delete_forever on base Spawner #3454 (@minrk)

  • patch base handlers from both jupyter_server and notebook #3437 (@minrk)

Maintenance and upkeep improvements
Documentation improvements
Contributors to this release

(GitHub contributors page for this release)

@0mar | @betatim | @consideRatio | @danlester | @davidbrochart | @IvanaH8 | @manics | @minrk | @naatebarber | @OrnithOrtion | @support | @welcome

1.4.0 2021-04-19

(full changelog)

New features added
Enhancements made
Bugs fixed
  • always start redirect count at 1 when redirecting /hub/user/:name -> /user/:name #3377 (@minrk)

  • Always raise on failed token creation #3370 (@minrk)

  • make_singleuser_app: patch-in HubAuthenticatedHandler at lower priority #3347 (@minrk)

  • Fix pagination with named servers #3335 (@rcthomas)

Maintenance and upkeep improvements
Documentation improvements
Contributors to this release

(GitHub contributors page for this release)

@00Kai0 | @8rV1n | @akhilputhiry | @alexal | @analytically | @andreamazzoni | @andrewisplinghoff | @BertR | @betatim | @bitnik | @bollwyvl | @carluri | @Carreau | @consideRatio | @davidedelvento | @dhirschfeld | @dmpe | @dsblank | @dtaniwaki | @echarles | @elgalu | @eran-pinhas | @gaebor | @GeorgianaElena | @gsemet | @gweis | @hynek2001 | @ianabc | @ibre5041 | @IvanaH8 | @jhegedus42 | @jhermann | @jiajunjie | @jtlz2 | @kafonek | @katsar0v | @kinow | @krinsman | @laurensdv | @lits789 | @m-alekseev | @mabbasi90 | @manics | @manniche | @maxshowarth | @mdivk | @meeseeksmachine | @minrk | @mogthesprog | @mriedem | @nsshah1288 | @olifre | @PandaWhoCodes | @pawsaw | @phozzy | @playermanny2 | @rabsr | @randy3k | @rawrgulmuffins | @rcthomas | @rebeca-maia | @rebenkoy | @rkdarst | @robnagler | @ronaldpetty | @ryanlovett | @ryogesh | @sbailey-auro | @sigurdurb | @SivaAccionLabs | @sougou | @stv0g | @sudi007 | @support | @tathagata | @timgates42 | @trallard | @vlizanae | @welcome | @whitespaceninja | @whlteXbread | @willingc | @yuvipanda | @Zsailer

1.3

JupyterHub 1.3 is a small feature release. Highlights include:

  • Require Python >=3.6 (jupyterhub 1.2 is the last release to support 3.5)

  • Add a ?state= filter for getting user list, allowing much quicker responses when retrieving a small fraction of users. state can be active, inactive, or ready.

  • prometheus metrics now include a jupyterhub_ prefix, so deployments may need to update their grafana charts to match.

  • page templates can now be async!

1.3.0

(full changelog)

Enhancements made
  • allow services to call /api/user to identify themselves #3293 (@minrk)

  • Add optional user agreement to login screen #3264 (@tlvu)

  • [Metrics] Add prefix to prometheus metrics to group all jupyterhub metrics #3243 (@agp8x)

  • Allow options_from_form to be configurable #3225 (@cbanek)

  • add ?state= filter for GET /users #3177 (@minrk)

  • Enable async support in jinja2 templates #3176 (@yuvipanda)

Bugs fixed
Maintenance and upkeep improvements
Documentation improvements
Contributors to this release

(GitHub contributors page for this release)

@0mar | @agp8x | @alexweav | @belfhi | @betatim | @cbanek | @cmd-ntrf | @coffeebenzene | @consideRatio | @danlester | @fcollonval | @GeorgianaElena | @ianabc | @IvanaH8 | @manics | @meeseeksmachine | @mhwasil | @minrk | @mriedem | @mxjeff | @olifre | @rcthomas | @rgbkrk | @rkdarst | @Sangarshanan | @slemonide | @support | @tlvu | @welcome | @yuvipanda

1.2
1.2.2 2020-11-27

(full changelog)

Enhancements made
  • Standardize “Sign in” capitalization on the login page #3252 (@cmd-ntrf)

Bugs fixed
  • Fix RootHandler when default_url is a callable #3265 (@danlester)

  • Only preserve params when ?next= is unspecified #3261 (@minrk)

  • [Windows] Improve robustness when detecting and closing existing proxy processes #3237 (@alexweav)

Maintenance and upkeep improvements
Documentation improvements
  • Update services-basics.md to use jupyterhub_idle_culler #3257 (@manics)

Contributors to this release

(GitHub contributors page for this release)

@alexweav | @belfhi | @betatim | @cmd-ntrf | @consideRatio | @danlester | @fcollonval | @GeorgianaElena | @ianabc | @IvanaH8 | @manics | @meeseeksmachine | @minrk | @mriedem | @olifre | @rcthomas | @rgbkrk | @rkdarst | @slemonide | @support | @welcome | @yuvipanda

1.2.1 2020-10-30

(full changelog)

Bugs fixed
  • JupyterHub services’ oauth_no_confirm configuration regression in 1.2.0 #3234 (@bitnik)

Contributors to this release

(GitHub contributors page for this release)

@bitnik

1.2.0 2020-10-29

JupyterHub 1.2 is an incremental release with lots of small improvements. It is unlikely that users will have to change much to upgrade, but lots of new things are possible and/or better!

There are no database schema changes requiring migration from 1.1 to 1.2.

Highlights:

  • Deprecate black/whitelist configuration fields in favor of more inclusive blocked/allowed language. For example: c.Authenticator.allowed_users = {'user', ...}

  • More configuration of page templates and service display

  • Pagination of the admin page improving performance with large numbers of users

  • Improved control of user redirect

  • Support for jupyter-server-based single-user servers, such as Voilà and latest JupyterLab.

  • Lots more improvements to documentation, HTML pages, and customizations

(full changelog)

Enhancements made
Bugs fixed
  • Fix #2284 must be sent from authorization page #3219 (@elgalu)

  • avoid specifying default_value=None in Command traits #3208 (@minrk)

  • Prevent OverflowErrors in exponential_backoff() #3204 (@kreuzert)

  • update prometheus metrics for server spawn when it fails with exception #3150 (@yhal-nesi)

  • jupyterhub/utils: Load system default CA certificates in make_ssl_context #3140 (@chancez)

  • admin page sorts on spawner last_activity instead of user last_activity #3137 (@lydian)

  • Fix the services dropdown on the admin page #3132 (@pabepadu)

  • Don’t log a warning when slow_spawn_timeout is disabled #3127 (@mriedem)

  • app.py: Work around incompatibility between Tornado 6 and asyncio proactor event loop in python 3.8 on Windows #3123 (@alexweav)

  • jupyterhub/user: clear spawner state after post_stop_hook #3121 (@rkdarst)

  • fix for stopping named server deleting default server and tests #3109 (@kxiao-fn)

  • Hide hamburger button menu in mobile/responsive mode and fix other minor issues #3103 (@kinow)

  • Rename Authenticator.white/blacklist to allowed/blocked #3090 (@minrk)

  • Include the query string parameters when redirecting to a new URL #3089 (@kinow)

  • Make delete_invalid_users configurable #3087 (@fcollonval)

  • Ensure client dependencies build before wheel #3082 (@diurnalist)

  • make Spawner.environment config highest priority #3081 (@minrk)

  • Changing start my server button link to spawn url once server is stopped #3042 (@rabsr)

  • Fix CSS on admin page version listing #3035 (@vilhelmen)

  • Fix user_row endblock in admin template #3015 (@jtpio)

  • Fix --generate-config bug when specifying a filename #2907 (@consideRatio)

  • Handle the protocol when ssl is enabled and log the right URL #2773 (@kinow)

Maintenance and upkeep improvements
Documentation improvements
Contributors to this release

(GitHub contributors page for this release)

@0nebody | @1kastner | @ahkui | @alexdriedger | @alexweav | @AlJohri | @Analect | @analytically | @aneagoe | @AngelOnFira | @barrachri | @basvandervlies | @betatim | @bigbosst | @blink1073 | @Cadair | @Carreau | @cbjuan | @ceocoder | @chancez | @choldgraf | @Chrisjw42 | @cmd-ntrf | @consideRatio | @danlester | @diurnalist | @Dmitry1987 | @dsblank | @dylex | @echarles | @elgalu | @fcollonval | @gatoniel | @GeorgianaElena | @hnykda | @itssimon | @jgwerner | @JohnPaton | @joshmeek | @jtpio | @kinow | @kreuzert | @kxiao-fn | @lesiano | @limimiking | @lydian | @mabbasi90 | @maluhoss | @manics | @matteoipri | @mbmilligan | @meeseeksmachine | @mhwasil | @minrk | @mriedem | @nscozzaro | @pabepadu | @possiblyMikeB | @psyvision | @rabsr | @rainwoodman | @rajat404 | @rcthomas | @reneluria | @rgbkrk | @rkdarst | @rkevin-arch | @romainx | @ryanlovett | @ryogesh | @sdague | @snickell | @SonakshiGrover | @ssanderson | @stefanvangastel | @steinad | @stephen-a2z | @stevegore | @stv0g | @subgero | @sudi007 | @summerswallow | @support | @synchronizing | @thuvh | @tritemio | @twalcari | @vchandvankar | @vilhelmen | @vlizanae | @weimin | @welcome | @willingc | @xlotlu | @yhal-nesi | @ynnelson | @yuvipanda | @zonca | @Zsailer

1.1
1.1.0 2020-01-17

1.1 is a release with lots of accumulated fixes and improvements, especially in performance, metrics, and customization. There are no database changes in 1.1, so no database upgrade is required when upgrading from 1.0 to 1.1.

Of particular interest to deployments with automatic health checking and/or large numbers of users is that the slow startup time introduced in 1.0 by additional spawner validation can now be mitigated by JupyterHub.init_spawners_timeout, allowing the Hub to become responsive before the spawners may have finished validating.

Several new Prometheus metrics are added (and others fixed!) to measure sources of common performance issues, such as proxy interactions and startup.

1.1 also begins adoption of the Jupyter telemetry project in JupyterHub. See the Jupyter Telemetry docs for more info. The only events so far are starting and stopping servers, but more will be added in future releases.

There are many more fixes and improvements listed below. Thanks to everyone who has contributed to this release!

New
  • LocalProcessSpawner should work on windows by using psutil.pid_exists #2882 (@ociule)

  • trigger auth_state_hook prior to options form, add auth_state to template namespace #2881 (@minrk)

  • Added guide ‘install jupyterlab the hard way’ #2110 #2842 (@mangecoeur)

  • Add prometheus metric to measure hub startup time #2799 (@rajat404)

  • Add Spawner.auth_state_hook #2555 (@rcthomas)

  • Link services from jupyterhub pages #2763 (@rcthomas)

  • JupyterHub.user_redirect_hook is added to allow admins to customize /user-redirect/ behavior #2790 (@yuvipanda)

  • Add prometheus metric to measure proxy route poll times #2798 (@rajat404)

  • PROXY_DELETE_DURATION_SECONDS prometheus metric is added, to measure proxy route deletion times #2788 (@rajat404)

  • Service.oauth_no_confirm is added, it is useful for admin-managed services that are considered part of the Hub and shouldn’t need to prompt the user for access #2767 (@minrk)

  • JupyterHub.default_server_name is added to make the default server be a named server with provided name #2735 (@krinsman)

  • JupyterHub.init_spawners_timeout is introduced to combat slow startups on large JupyterHub deployments #2721 (@minrk)

  • A uids configuration option is added for local authenticators, to consistently assign UNIX ids to users across installations #2687 (@rgerkin)

  • JupyterHub.activity_resolution is introduced with a default value of 30s improving performance by not updating the database with user activity too often #2605 (@minrk)

  • HubAuth’s SSL configuration can now be set through environment variables #2588 (@cmd-ntrf)

  • Expose spawner.user_options in REST API. #2755 (@danielballan)

  • add block for scripts included in head #2828 (@bitnik)

  • Instrument JupyterHub to record events with jupyter_telemetry [Part II] #2698 (@Zsailer)

  • Make announcements visible without custom HTML #2570 (@consideRatio)

  • Display server version on admin page #2776 (@vilhelmen)

Fixes
  • Bugfix: pam_normalize_username didn’t return username #2876 (@rkdarst)

  • Cleanup if spawner stop fails #2849 (@gabber12)

  • Fix an issue occurring with the default spawner and internal_ssl enabled #2785 (@rpwagner)

  • Fix named servers to not be spawnable unless activated #2772 (@bitnik)

  • JupyterHub now awaits proxy availability before accepting web requests #2750 (@minrk)

  • Fix a no longer valid assumption that MySQL and MariaDB need to have innodb_file_format and innodb_large_prefix configured #2712 (@chicocvenancio)

  • Login/Logout button now updates to Login on logout #2705 (@aar0nTw)

  • Fix handling of exceptions within pre_spawn_start hooks #2684 (@GeorgianaElena)

  • Fix an issue where a user could end up spawning a default server instead of a named server as intended #2682 (@rcthomas)

  • /hub/admin now redirects to login if unauthenticated #2670 (@GeorgianaElena)

  • Fix spawning of users with names containing characters that needs to be escaped #2648 (@nicorikken)

  • Fix TOTAL_USERS prometheus metric #2637 (@GeorgianaElena)

  • Fix RUNNING_SERVERS prometheus metric #2629 (@GeorgianaElena)

  • Fix faulty redirects to 404 that could occur with the use of named servers #2594 (@vilhelmen)

  • JupyterHub API spec is now a valid OpenAPI spec #2590 (@sbrunk)

  • Use of --help or --version previously could output unrelated errors #2584 (@minrk)

  • No longer crash on startup in Windows #2560 (@adelcast)

  • Escape usernames in the frontend #2640 (@nicorikken)

Maintenance
1.0
1.0.0 2019-05-03

JupyterHub 1.0 is a major milestone for JupyterHub. Huge thanks to the many people who have contributed to this release, whether it was through discussion, testing, documentation, or development.

Major new features
  • Support TLS encryption and authentication of all internal communication. Spawners must implement .move_certs method to make certificates available to the notebook server if it is not local to the Hub.

  • There is now full UI support for managing named servers. With named servers, each jupyterhub user may have access to more than one named server. For example, a professor may access a server named research and another named teaching.

    named servers on the home page

  • Authenticators can now expire and refresh authentication data by implementing Authenticator.refresh_user(user). This allows things like OAuth data and access tokens to be refreshed. When used together with Authenticator.refresh_pre_spawn = True, auth refresh can be forced prior to Spawn, allowing the Authenticator to require that authentication data is fresh immediately before the user’s server is launched.

New features
  • allow custom spawners, authenticators, and proxies to register themselves via ‘entry points’, enabling more convenient configuration such as:

    c.JupyterHub.authenticator_class = 'github'
    c.JupyterHub.spawner_class = 'docker'
    c.JupyterHub.proxy_class = 'traefik_etcd'
    
  • Spawners are passed the tornado Handler object that requested their spawn (as self.handler), so they can do things like make decisions based on query arguments in the request.

  • SimpleSpawner and DummyAuthenticator, which are useful for testing, have been merged into JupyterHub itself:

    # For testing purposes only. Should not be used in production.
    c.JupyterHub.authenticator_class = 'dummy'
    c.JupyterHub.spawner_class = 'simple'
    

    These classes are not appropriate for production use. Only testing.

  • Add health check endpoint at /hub/health

  • Several prometheus metrics have been added (thanks to Outreachy applicants!)

  • A new API for registering user activity. To prepare for the addition of alternate proxy implementations, responsibility for tracking activity is taken away from the proxy and moved to the notebook server (which already has activity tracking features). Activity is now tracked by pushing it to the Hub from user servers instead of polling the proxy API.

  • Dynamic options_form callables may now return an empty string which will result in no options form being rendered.

  • Spawner.user_options is persisted to the database to be re-used, so that a server spawned once via the form can be re-spawned via the API with the same options.

  • Added c.PAMAuthenticator.pam_normalize_username option for round-tripping usernames through PAM to retrieve the normalized form.

  • Added c.JupyterHub.named_server_limit_per_user configuration to limit the number of named servers each user can have. The default is 0, for no limit.

  • API requests to HubAuthenticated services (e.g. single-user servers) may pass a token in the Authorization header, matching authentication with the Hub API itself.

  • Added Authenticator.is_admin(handler, authentication) method and Authenticator.admin_groups configuration for automatically determining that a member of a group should be considered an admin.

  • New c.Authenticator.post_auth_hook configuration that can be any callable of the form async def hook(authenticator, handler, authentication=None). This hook may transform the return value of Authenticator.authenticate() and return a new authentication dictionary, e.g. specifying admin privileges, group membership, or custom allowed/blocked logic. It is called after existing normalization and allowed-username checking (see the sketch after this list).

  • Spawner.options_from_form may now be async

  • Added JupyterHub.shutdown_on_logout option to trigger shutdown of a user’s servers when they log out.

  • When Spawner.start raises an Exception, a message can be passed on to the user if the exception has a .jupyterhub_message attribute.
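
As a minimal sketch of the post_auth_hook callable form described above (the username check and admin rule are purely illustrative, not part of JupyterHub):

    async def my_post_auth_hook(authenticator, handler, authentication=None):
        # `authentication` is the dict returned by Authenticator.authenticate()
        if authentication and authentication.get("name") == "ada":
            authentication["admin"] = True  # e.g. grant admin rights to one user
        return authentication

    c.Authenticator.post_auth_hook = my_post_auth_hook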

Changes
  • Authentication methods such as check_whitelist should now take an additional authentication argument that will be a dictionary (default: None) of authentication data, as returned by Authenticator.authenticate():

    def check_whitelist(self, username, authentication=None):
        ...
    

    authentication should have a default value of None for backward-compatibility with jupyterhub < 1.0.

  • Prometheus metrics page is now authenticated. Any authenticated user may see the prometheus metrics. To disable prometheus authentication, set JupyterHub.authenticate_prometheus = False.

  • Visits to /user/:name no longer trigger an implicit launch of the user’s server. Instead, a page is shown indicating that the server is not running with a link to request the spawn.

  • API requests to /user/:name for a not-running server will have status 503 instead of 404.

  • OAuth includes a confirmation page when attempting to visit another user’s server, so that users can choose to cancel authentication with the single-user server. Confirmation is still skipped when accessing your own server.

Fixed
  • Various fixes to improve Windows compatibility (default Authenticator and Spawner still do not support Windows, but other Spawners may)

  • Fixed compatibility with Oracle db

  • Fewer redirects following a visit to the default / url

  • Error when progress is requested before progress is ready

  • Error when API requests are made to a not-running server without authentication

  • Avoid logging database password on connect if password is specified in JupyterHub.db_url.

Development changes

There have been several changes to the development process that shouldn’t generally affect users of JupyterHub, but may affect contributors. In general, see CONTRIBUTING.md for contribution info or ask if you have questions.

  • JupyterHub has adopted black as a code autoformatter and pre-commit as a tool for automatically running code formatting on commit. This is meant to make it easier to contribute to JupyterHub, so let us know if it’s having the opposite effect.

  • JupyterHub has switched its test suite to using pytest-asyncio from pytest-tornado.

  • OAuth is now implemented internally using oauthlib instead of python-oauth2. This should have no effect on behavior.

0.9
0.9.6 2019-04-01

JupyterHub 0.9.6 is a security release.

  • Fixes an Open Redirect vulnerability (CVE-2019-10255).

JupyterHub 0.9.5 included a partial fix for this issue.

0.9.4 2018-09-24

JupyterHub 0.9.4 is a small bugfix release.

  • Fixes an issue that required all running user servers to be restarted when performing an upgrade from 0.8 to 0.9.

  • Fixes content-type for API endpoints back to application/json. It was text/html in 0.9.0-0.9.3.

0.9.3 2018-09-12

JupyterHub 0.9.3 contains small bugfixes and improvements

  • Fix token page and model handling of expires_at. This field was missing from the REST API model for tokens and could cause the token page to not render

  • Add keep-alive to progress event stream to avoid proxies dropping the connection due to inactivity

  • Documentation and example improvements

  • Disable quit button when using notebook 5.6

  • Prototype new feature (may change prior to 1.0): pass requesting Handler to Spawners during start, accessible as self.handler

0.9.2 2018-08-10

JupyterHub 0.9.2 contains small bugfixes and improvements.

  • Documentation and example improvements

  • Add Spawner.consecutive_failure_limit config for aborting the Hub if too many spawns fail in a row.

  • Fix for handling SIGTERM when run with asyncio (tornado 5)

  • Windows compatibility fixes

0.9.1 2018-07-04

JupyterHub 0.9.1 contains a number of small bugfixes on top of 0.9.

  • Use a PID file for the proxy to decrease the likelihood that a leftover proxy process will prevent JupyterHub from restarting

  • c.LocalProcessSpawner.shell_cmd is now configurable

  • API requests to stopped servers (requests to the hub for /user/:name/api/...) fail with 404 rather than triggering a restart of the server

  • Compatibility fix for notebook 5.6.0 which will introduce further security checks for local connections

  • Managed services always use localhost to talk to the Hub if the Hub is listening on all interfaces

  • When using a URL prefix, the Hub route will be JupyterHub.base_url instead of unconditionally /

  • additional fixes and improvements

0.9.0 2018-06-15

JupyterHub 0.9 is a major upgrade of JupyterHub. There are several changes to the database schema, so make sure to backup your database and run:

jupyterhub upgrade-db

after upgrading jupyterhub.

The biggest change for 0.9 is the switch to asyncio coroutines everywhere instead of tornado coroutines. Custom Spawners and Authenticators are still free to use tornado coroutines for async methods, as they will continue to work. As part of this upgrade, JupyterHub 0.9 drops support for Python < 3.5 and tornado < 5.0.

Changed
  • Require Python >= 3.5

  • Require tornado >= 5.0

  • Use asyncio coroutines throughout

  • Set status 409 for conflicting actions instead of 400, e.g. creating users or groups that already exist.

  • timestamps in REST API continue to be UTC, but now include ‘Z’ suffix to identify them as such.

  • REST API User model always includes servers dict, not just when named servers are enabled.

  • server info is no longer available to oauth identification endpoints, only user info and group membership.

  • User.last_activity may be None if a user has not been seen, rather than starting with the user creation time which is now separately stored as User.created.

  • static resources are now found in $PREFIX/share/jupyterhub instead of share/jupyter/hub for improved consistency.

  • Deprecate .extra_log_file config. Use pipe redirection instead:

    jupyterhub &>> /var/log/jupyterhub.log
    
  • Add JupyterHub.bind_url config for setting the full bind URL of the proxy. Sets ip, port, base_url all at once.

  • Add JupyterHub.hub_bind_url for setting the full host+port of the Hub. hub_bind_url supports unix domain sockets, e.g. unix+http://%2Fsrv%2Fjupyterhub.sock

  • Deprecate JupyterHub.hub_connect_port config in favor of JupyterHub.hub_connect_url. hub_connect_ip is not deprecated and can still be used in the common case where only the ip address of the hub differs from the bind ip.

Added
  • Spawners can define a .progress method which should be an async generator. The generator should yield events of the form:

    {
      "message": "some-state-message",
      "progress": 50,
    }
    

    These messages will be shown with a progress bar on the spawn-pending page. The async_generator package can be used to make async generators compatible with Python 3.5.

  • track activity of individual API tokens

  • new REST API for managing API tokens at /hub/api/user/tokens[/token-id]

  • allow viewing/revoking tokens via token page

  • User creation time is available in the REST API as User.created

  • Server start time is stored as Server.started

  • Spawner.start may return a URL for connecting to a notebook instead of (ip, port). This enables Spawners to launch servers that setup their own HTTPS.

  • Optimize database performance by disabling sqlalchemy expire_on_commit by default.

  • Add python -m jupyterhub.dbutil shell entrypoint for quickly launching an IPython session connected to your JupyterHub database.

  • Include User.auth_state in user model on single-user REST endpoints for admins only.

  • Include Server.state in server model on REST endpoints for admins only.

  • Add Authenticator.blacklist for blocking users instead of allowing.

  • Pass c.JupyterHub.tornado_settings['cookie_options'] down to Spawners so that cookie options (e.g. expires_days) can be set globally for the whole application.

  • SIGINFO (ctrl-t) handler showing the current status of all running threads, coroutines, and CPU/memory/FD consumption.

  • Add async Spawner.get_options_form alternative to .options_form, so it can be a coroutine.

  • Add JupyterHub.redirect_to_server config to govern whether users should be sent to their server on login or the JupyterHub home page.

  • html page templates can be more easily customized and extended.

  • Allow registering external OAuth clients for using the Hub as an OAuth provider.

  • Add basic prometheus metrics at /hub/metrics endpoint.

  • Add session-id cookie, enabling immediate revocation of login tokens.

  • Authenticators may specify that users are admins by specifying the admin key when returning the user model as a dict.

  • Added “Start All” button to admin page for launching all user servers at once.

  • Services have an info field which is a dictionary. This is accessible via the REST API.

  • JupyterHub.extra_handlers allows defining additional tornado RequestHandlers attached to the Hub.

  • API tokens may now expire. Expiry is available in the REST model as expires_at, and settable when creating API tokens by specifying expires_in.

Fixed
  • Remove green from theme to improve accessibility

  • Fix error when proxy deletion fails due to route already being deleted

  • clear ?redirects from URL on successful launch

  • disable send2trash by default, which is rarely desirable for jupyterhub

  • Put PAM calls in a thread so they don’t block the main application in cases where PAM is slow (e.g. LDAP).

  • Remove implicit spawn from login handler, instead relying on subsequent request for /user/:name to trigger spawn.

  • Fixed several inconsistencies for initial redirects, depending on whether server is running or not and whether the user is logged in or not.

  • Admin requests for /user/:name (when admin-access is enabled) launch the right server if it’s not running instead of redirecting to their own.

  • Major performance improvement starting up JupyterHub with many users, especially when most are inactive.

  • Various fixes in race conditions and performance improvements with the default proxy.

  • Fixes for CORS headers

  • Stop setting .form-control on spawner form inputs unconditionally.

  • Better recovery from database errors and database connection issues without having to restart the Hub.

  • Fix handling of ~ character in usernames.

  • Fix jupyterhub startup when getpass.getuser() would fail, e.g. due to missing entry in passwd file in containers.

0.8
0.8.1 2017-11-07

JupyterHub 0.8.1 is a collection of bugfixes and small improvements on 0.8.

Added
  • Run tornado with AsyncIO by default

  • Add jupyterhub --upgrade-db flag for automatically upgrading the database as part of startup. This is useful for cases where manually running jupyterhub upgrade-db as a separate step is unwieldy.

  • Avoid creating backups of the database when no changes are to be made by jupyterhub upgrade-db.

Fixed
  • Add some further validation to usernames - / is not allowed in usernames.

  • Fix empty logout page when using auto_login

  • Fix autofill of username field in default login form.

  • Fix listing of users on the admin page who have not yet started their server.

  • Fix ever-growing traceback when re-raising Exceptions from spawn failures.

  • Remove use of deprecated bower for javascript client dependencies.

0.8.0 2017-10-03

JupyterHub 0.8 is a big release!

Perhaps the biggest change is the use of OAuth to negotiate authentication between the Hub and single-user services. Due to this change, it is important that the single-user server and Hub are both running the same version of JupyterHub. If you are using containers (e.g. via DockerSpawner or KubeSpawner), this means upgrading jupyterhub in your user images at the same time as the Hub. In most cases, a

pip install jupyterhub==version

in your Dockerfile is sufficient.

Added
  • JupyterHub now defines a Proxy API for custom proxy implementations other than the default. The defaults are unchanged, but configuration of the proxy is now done on the ConfigurableHTTPProxy class instead of the top-level JupyterHub. TODO: docs for writing a custom proxy.

  • Single-user servers and services (anything that uses HubAuth) can now accept token-authenticated requests via the Authorization header.

  • Authenticators can now store state in the Hub’s database. To do so, the authenticate method should return a dict of the form

    {
        'username': 'name',
        'state': {}
    }
    

    This data will be encrypted and requires the JUPYTERHUB_CRYPT_KEY environment variable to be set and the Authenticator.enable_auth_state flag to be True. If these are not set, auth_state returned by the Authenticator will not be stored.

  • There is preliminary support for multiple (named) servers per user in the REST API. Named servers can be created via API requests, but there is currently no UI for managing them.

  • Add LocalProcessSpawner.popen_kwargs and LocalProcessSpawner.shell_cmd for customizing how user server processes are launched.

  • Add Authenticator.auto_login flag for skipping the “Login with…” page explicitly.

  • Add JupyterHub.hub_connect_ip configuration for the ip that should be used when connecting to the Hub. This is promoting (and deprecating) DockerSpawner.hub_ip_connect for use by all Spawners.

  • Add Spawner.pre_spawn_hook(spawner) hook for customizing pre-spawn events.

  • Add JupyterHub.active_server_limit and JupyterHub.concurrent_spawn_limit for limiting the total number of running user servers and the number of pending spawns, respectively.

Changed
  • more arguments to spawners are now passed via environment variables (.get_env()) rather than CLI arguments (.get_args())

  • internally generated tokens no longer get extra hash rounds, significantly speeding up authentication. The hash rounds were deemed unnecessary because the tokens were already generated with high entropy.

  • JUPYTERHUB_API_TOKEN env is available at all times, rather than being removed during single-user start. The token is now accessible to kernel processes, enabling user kernels to make authenticated API requests to Hub-authenticated services.

  • Cookie secrets should be 32B hex instead of large base64 secrets.

  • pycurl is used by default, if available.

Fixed

So many things fixed!

  • Collisions are checked when users are renamed

  • Fix bug where OAuth authenticators could not logout users due to being redirected right back through the login process.

  • If there are errors loading your config files, JupyterHub will refuse to start with an informative error. Previously, the bad config would be ignored and JupyterHub would launch with default configuration.

  • Raise 403 error on unauthorized user rather than redirect to login, which could cause redirect loop.

  • Set httponly on cookies because it’s prudent.

  • Improve support for MySQL as the database backend

  • Many race conditions and performance problems under heavy load have been fixed.

  • Fix alembic tagging of database schema versions.

Removed
  • End support for Python 3.3

0.7
0.7.2 - 2017-01-09
Added
  • Support service environment variables and defaults in jupyterhub-singleuser for easier deployment of notebook servers as a Service.

  • Add --group parameter for deploying jupyterhub-singleuser as a Service with group authentication.

  • Include URL parameters when redirecting through /user-redirect/

Fixed
  • Fix group authentication for HubAuthenticated services

0.7.1 - 2017-01-02
Added
  • Spawner.will_resume for signaling that a single-user server is paused instead of stopped. This is needed for cases like DockerSpawner.remove_containers = False, where the first API token is re-used for subsequent spawns.

  • Warning on startup about single-character usernames, caused by common set('string') typo in config.

Fixed
  • Removed spurious warning about empty next_url, which is AOK.

0.7.0 - 2016-12-2
Added
  • Implement Services API #705

  • Add /api/ and /api/info endpoints #675

  • Add documentation for JupyterLab, pySpark configuration, troubleshooting, and more.

  • Add logging of error if adding users already in database. #689

  • Add HubAuth class for authenticating with JupyterHub. This class can be used by any application, even outside tornado.

  • Add user groups.

  • Add /hub/user-redirect/... URL for redirecting users to a file on their own server.

Changed
  • Always install with setuptools but not eggs (effectively require pip install .) #722

  • Updated formatting of changelog. #711

  • The single-user server is provided by the JupyterHub package, so single-user servers now depend on JupyterHub.

Fixed
  • Fix docker repository location #719

  • Fix swagger spec conformance and timestamp type in API spec

  • Various redirect-loop-causing bugs have been fixed.

Removed
  • Deprecate --no-ssl command line option. It has no meaning and warns if used. #789

  • Deprecate %U username substitution in favor of {username}. #748

  • Removed deprecated SwarmSpawner link. #699

0.6
0.6.1 - 2016-05-04

Bugfixes on 0.6:

  • statsd is an optional dependency, only needed if in use

  • Notice more quickly when servers have crashed

  • Better error pages for proxy errors

  • Add Stop All button to admin panel for stopping all servers at once

0.6.0 - 2016-04-25
  • JupyterHub has moved to a new jupyterhub namespace on GitHub and Docker. What was jupyter/jupyterhub is now jupyterhub/jupyterhub, etc.

  • jupyterhub/jupyterhub image on DockerHub no longer loads the jupyterhub_config.py in an ONBUILD step. A new jupyterhub/jupyterhub-onbuild image does this

  • Add statsd support, via c.JupyterHub.statsd_{host,port,prefix}

  • Update to traitlets 4.1 @default, @observe APIs for traits

  • Allow disabling PAM sessions via c.PAMAuthenticator.open_sessions = False. This may be needed on SELinux-enabled systems, where our PAM session logic often does not work properly

  • Add Spawner.environment configurable, for defining extra environment variables to load for single-user servers

  • JupyterHub API tokens can be pregenerated and loaded via JupyterHub.api_tokens, a dict of token: username.

  • JupyterHub API tokens can be requested via the REST API, with a POST request to /api/authorizations/token. This can only be used if the Authenticator has a username and password.

  • Various fixes for user URLs and redirects

0.5 - 2016-03-07
  • Single-user server must be run with Jupyter Notebook ≥ 4.0

  • Require --no-ssl confirmation to allow the Hub to be run without SSL (e.g. behind SSL termination in nginx)

  • Add lengths to text fields for MySQL support

  • Add Spawner.disable_user_config for preventing user-owned configuration from modifying single-user servers.

  • Fixes for MySQL support.

  • Add ability to run each user’s server on its own subdomain. Requires wildcard DNS and wildcard SSL to be feasible. Enable subdomains by setting JupyterHub.subdomain_host = 'https://jupyterhub.domain.tld[:port]'.

  • Use 127.0.0.1 for local communication instead of localhost, avoiding issues with DNS on some systems.

  • Fix race that could add users to proxy prematurely if spawning is slow.

0.4
0.4.1 - 2016-02-03

Fix removal of /login page in 0.4.0, breaking some OAuth providers.

0.4.0 - 2016-02-01
  • Add Spawner.user_options_form for specifying an HTML form to present to users, allowing users to influence the spawning of their own servers.

  • Add Authenticator.pre_spawn_start and Authenticator.post_spawn_stop hooks, so that Authenticators can do setup or teardown (e.g. passing credentials to Spawner, mounting data sources, etc.). These methods are typically used with custom Authenticator+Spawner pairs.

  • 0.4 will be the last JupyterHub release to support single-user servers running IPython 3 instead of Notebook ≥ 4.0.

0.3 - 2015-11-04
  • No longer make the user starting the Hub an admin

  • start PAM sessions on login

  • hooks for Authenticators to fire before spawners start and after they stop, allowing deeper interaction between Spawner/Authenticator pairs.

  • login redirect fixes

0.2 - 2015-07-12
  • Based on standalone traitlets instead of IPython.utils.traitlets

  • multiple users in admin panel

  • Fixes for usernames that require escaping

0.1 - 2015-03-07

First preview release

API Reference

JupyterHub API

Release: 2.2.2

Date: Mar 14, 2022

JupyterHub also provides a REST API for administration of the Hub and users. The documentation on Using JupyterHub’s REST API provides information on:

  • what you can do with the API

  • creating an API token

  • adding API tokens to the config files

  • making an API request programmatically using the requests library (see the sketch after this list)

  • learning more about JupyterHub’s API
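
As a minimal sketch of such a programmatic request (the Hub address and token value below are placeholders, not real credentials), listing users with the requests library could look like:

import requests

api_url = "http://127.0.0.1:8081/hub/api"  # assumption: Hub API at its default local address
token = "<your-api-token>"                 # an API token created via the token page or config

r = requests.get(
    f"{api_url}/users",
    headers={"Authorization": f"token {token}"},
)
r.raise_for_status()
print(r.json())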

JupyterHub API Reference:

Application configuration
Module: jupyterhub.app

The multi-user notebook application

JupyterHub
class jupyterhub.app.JupyterHub(**kwargs)

An Application for starting a Multi-User Jupyter Notebook server.

active_server_limit c.JupyterHub.active_server_limit = Int(0)

Maximum number of concurrent servers that can be active at a time.

Setting this can limit the total resources your users can consume.

An active server is any server that’s not fully stopped. It is considered active from the time it has been requested until the time that it has completely stopped.

If this many user servers are active, users will not be able to launch new servers until a server is shutdown. Spawn requests will be rejected with a 429 error asking them to try again.

If set to 0, no limit is enforced.

active_user_window c.JupyterHub.active_user_window = Int(1800)

Duration (in seconds) to determine the number of active users.

activity_resolution c.JupyterHub.activity_resolution = Int(30)

Resolution (in seconds) for updating activity

If activity is registered that is less than activity_resolution seconds more recent than the current value, the new value will be ignored.

This avoids too many writes to the Hub database.

admin_access c.JupyterHub.admin_access = Bool(False)

Grant admin users permission to access single-user servers.

Users should be properly informed if this is enabled.

admin_users c.JupyterHub.admin_users = Set()

DEPRECATED since version 0.7.2, use Authenticator.admin_users instead.

allow_named_servers c.JupyterHub.allow_named_servers = Bool(False)

Allow named single-user servers per user

answer_yes c.JupyterHub.answer_yes = Bool(False)

Answer yes to any questions (e.g. confirm overwrite)

api_page_default_limit c.JupyterHub.api_page_default_limit = Int(50)

The default number of records returned by a paginated endpoint

api_page_max_limit c.JupyterHub.api_page_max_limit = Int(200)

The maximum number of records that can be returned at once

api_tokens c.JupyterHub.api_tokens = Dict()

PENDING DEPRECATION: consider using services

Dict of token:username to be loaded into the database.

Allows ahead-of-time generation of API tokens for use by externally managed services, which authenticate as JupyterHub users.

Consider using services for general services that talk to the JupyterHub API.

authenticate_prometheus c.JupyterHub.authenticate_prometheus = Bool(True)

Authentication for prometheus metrics

authenticator_class c.JupyterHub.authenticator_class = EntryPointType(<class 'jupyterhub.auth.PAMAuthenticator'>)

Class for authenticating users.

This should be a subclass of jupyterhub.auth.Authenticator

with an authenticate() method that:

  • is a coroutine (asyncio or tornado)

  • returns username on success, None on failure

  • takes two arguments: (handler, data), where handler is the calling web.RequestHandler, and data is the POST form data from the login page.

Changed in version 1.0: authenticators may be registered via entry points, e.g. c.JupyterHub.authenticator_class = 'pam'

Currently installed:
  • default: jupyterhub.auth.PAMAuthenticator

  • dummy: jupyterhub.auth.DummyAuthenticator

  • null: jupyterhub.auth.NullAuthenticator

  • pam: jupyterhub.auth.PAMAuthenticator
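
A minimal sketch of this contract in jupyterhub_config.py (the class name and password check are illustrative only, not a real authenticator):

from jupyterhub.auth import Authenticator

class DemoAuthenticator(Authenticator):
    async def authenticate(self, handler, data):
        # data is the POST form data from the login page
        if data.get("password") == "demo-only":
            return data.get("username")  # username on success
        return None  # None on failure

c.JupyterHub.authenticator_class = DemoAuthenticator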

base_url c.JupyterHub.base_url = URLPrefix('/')

The base URL of the entire application.

Add this to the beginning of all JupyterHub URLs. Use base_url to run JupyterHub within an existing website.

bind_url c.JupyterHub.bind_url = Unicode('http://:8000')

The public facing URL of the whole JupyterHub application.

This is the address on which the proxy will bind. Sets protocol, ip, base_url
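
For example, a sketch that serves the whole application on all interfaces, port 8000, under a /jhub prefix (values are illustrative):

c.JupyterHub.bind_url = "http://0.0.0.0:8000/jhub"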

cleanup_proxy c.JupyterHub.cleanup_proxy = Bool(True)

Whether to shutdown the proxy when the Hub shuts down.

Disable if you want to be able to teardown the Hub while leaving the proxy running.

Only valid if the proxy was started by the Hub process.

If both this and cleanup_servers are False, sending SIGINT to the Hub will only shutdown the Hub, leaving everything else running.

The Hub should be able to resume from database state.

cleanup_servers c.JupyterHub.cleanup_servers = Bool(True)

Whether to shutdown single-user servers when the Hub shuts down.

Disable if you want to be able to teardown the Hub while leaving the single-user servers running.

If both this and cleanup_proxy are False, sending SIGINT to the Hub will only shutdown the Hub, leaving everything else running.

The Hub should be able to resume from database state.

concurrent_spawn_limit c.JupyterHub.concurrent_spawn_limit = Int(100)

Maximum number of concurrent users that can be spawning at a time.

Spawning lots of servers at the same time can cause performance problems for the Hub or the underlying spawning system. Set this limit to prevent bursts of logins from attempting to spawn too many servers at the same time.

This does not limit the number of total running servers. See active_server_limit for that.

If more than this many users attempt to spawn at a time, their requests will be rejected with a 429 error asking them to try again. Users will have to wait for some of the spawning services to finish starting before they can start their own.

If set to 0, no limit is enforced.

config_file c.JupyterHub.config_file = Unicode('jupyterhub_config.py')

The config file to load

confirm_no_ssl c.JupyterHub.confirm_no_ssl = Bool(False)

DEPRECATED: does nothing

cookie_max_age_days c.JupyterHub.cookie_max_age_days = Float(14)

Number of days for a login cookie to be valid. Default is two weeks.

cookie_secret c.JupyterHub.cookie_secret = Union()

The cookie secret to use to encrypt cookies.

Loaded from the JPY_COOKIE_SECRET env variable by default.

Should be exactly 256 bits (32 bytes).

cookie_secret_file c.JupyterHub.cookie_secret_file = Unicode('jupyterhub_cookie_secret')

File in which to store the cookie secret.

data_files_path c.JupyterHub.data_files_path = Unicode('/home/docs/checkouts/readthedocs.org/user_builds/jupyterhub/checkouts/2.2.2/share/jupyterhub')

The location of jupyterhub data files (e.g. /usr/local/share/jupyterhub)

db_kwargs c.JupyterHub.db_kwargs = Dict()

Include any kwargs to pass to the database connection. See sqlalchemy.create_engine for details.

db_url c.JupyterHub.db_url = Unicode('sqlite:///jupyterhub.sqlite')

url for the database. e.g. sqlite:///jupyterhub.sqlite

debug_db c.JupyterHub.debug_db = Bool(False)

log all database transactions. This has A LOT of output

debug_proxy c.JupyterHub.debug_proxy = Bool(False)

DEPRECATED since version 0.8: Use ConfigurableHTTPProxy.debug

default_server_name c.JupyterHub.default_server_name = Unicode('')

If named servers are enabled, default name of server to spawn or open, e.g. by user-redirect.

default_url c.JupyterHub.default_url = Union()

The default URL for users when they arrive (e.g. when user directs to “/”)

By default, redirects users to their own server.

Can be a Unicode string (e.g. ‘/hub/home’) or a callable based on the handler object:

def default_url_fn(handler):
    user = handler.current_user
    if user and user.admin:
        return '/hub/admin'
    return '/hub/home'

c.JupyterHub.default_url = default_url_fn

external_ssl_authorities c.JupyterHub.external_ssl_authorities = Dict()

Dict authority:dict(files). Specify the key, cert, and/or ca file for an authority. This is useful for externally managed proxies that wish to use internal_ssl.

The files dict has this format (you must specify at least a cert):

{
    'key': '/path/to/key.key',
    'cert': '/path/to/cert.crt',
    'ca': '/path/to/ca.crt'
}

The authorities you can override: ‘hub-ca’, ‘notebooks-ca’, ‘proxy-api-ca’, ‘proxy-client-ca’, and ‘services-ca’.

Use with internal_ssl

extra_handlers c.JupyterHub.extra_handlers = List()

Register extra tornado Handlers for jupyterhub.

Should be of the form ("<regex>", Handler)

The Hub prefix will be added, so /my-page will be served at /hub/my-page.
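
A minimal sketch using a hypothetical handler (served at /hub/ping once the Hub prefix is added):

from tornado.web import RequestHandler

class PingHandler(RequestHandler):
    def get(self):
        self.write("pong")

c.JupyterHub.extra_handlers = [(r"/ping", PingHandler)]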

extra_log_file c.JupyterHub.extra_log_file = Unicode('')

DEPRECATED: use output redirection instead, e.g.

jupyterhub &>> /var/log/jupyterhub.log

extra_log_handlers c.JupyterHub.extra_log_handlers = List()

Extra log handlers to set on JupyterHub logger

forwarded_host_header c.JupyterHub.forwarded_host_header = Unicode('')

Alternate header to use as the Host (e.g., X-Forwarded-Host) when determining whether a request is cross-origin

This may be useful when JupyterHub is running behind a proxy that rewrites the Host header.

generate_certs c.JupyterHub.generate_certs = Bool(False)

Generate certs used for internal ssl

generate_config c.JupyterHub.generate_config = Bool(False)

Generate default config file

hub_bind_url c.JupyterHub.hub_bind_url = Unicode('')

The URL on which the Hub will listen. This is a private URL for internal communication. Typically set in combination with hub_connect_url. If a unix socket, hub_connect_url must also be set.

For example:

http://127.0.0.1:8081

unix+http://%2Fsrv%2Fjupyterhub%2Fjupyterhub.sock

New in version 0.9.
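
A sketch of the typical combination (addresses are illustrative): bind the Hub on all interfaces of an internal network and advertise a connectable address to spawners and the proxy:

c.JupyterHub.hub_bind_url = "http://0.0.0.0:8081"
c.JupyterHub.hub_connect_url = "http://hub.internal.example.com:8081"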

hub_connect_ip c.JupyterHub.hub_connect_ip = Unicode('')

The ip or hostname for proxies and spawners to use for connecting to the Hub.

Use when the bind address (hub_ip) is 0.0.0.0, :: or otherwise different from the connect address.

Default: when hub_ip is 0.0.0.0 or ::, use socket.gethostname(), otherwise use hub_ip.

Note: Some spawners or proxy implementations might not support hostnames. Check your spawner or proxy documentation to see if they have extra requirements.

New in version 0.8.

hub_connect_port c.JupyterHub.hub_connect_port = Int(0)

DEPRECATED

Use hub_connect_url

New in version 0.8.

Deprecated since version 0.9: Use hub_connect_url

hub_connect_url c.JupyterHub.hub_connect_url = Unicode('')

The URL for connecting to the Hub. Spawners, services, and the proxy will use this URL to talk to the Hub.

Only needs to be specified if the default hub URL is not connectable (e.g. using a unix+http:// bind url).

See also

JupyterHub.hub_connect_ip JupyterHub.hub_bind_url

New in version 0.9.

hub_ip c.JupyterHub.hub_ip = Unicode('127.0.0.1')

The ip address for the Hub process to bind to.

By default, the hub listens on localhost only. This address must be accessible from the proxy and user servers. You may need to set this to a public ip or ‘’ for all interfaces if the proxy or user servers are in containers or on a different host.

See hub_connect_ip for cases where the bind and connect address should differ, or hub_bind_url for setting the full bind URL.

hub_port c.JupyterHub.hub_port = Int(8081)

The internal port for the Hub process.

This is the internal port of the hub itself. It should never be accessed directly. See JupyterHub.port for the public port to use when accessing jupyterhub. It is rare that this port should be set except in cases of port conflict.

See also hub_ip for the ip and hub_bind_url for setting the full bind URL.

hub_routespec c.JupyterHub.hub_routespec = Unicode('/')

The routing prefix for the Hub itself.

Override to send only a subset of traffic to the Hub. Default is to use the Hub as the default route for all requests.

This is necessary for normal jupyterhub operation, as the Hub must receive requests for e.g. /user/:name when the user’s server is not running.

However, some deployments using only the JupyterHub API may want to handle these events themselves, in which case they can register their own default target with the proxy and set e.g. hub_routespec = /hub/ to serve only the hub’s own pages, or even /hub/api/ for api-only operation.

Note: hub_routespec must include the base_url, if any.

New in version 1.4.
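
For example, a sketch of routing only the Hub's own pages through the proxy, as described above (assumes the default base_url of /):

c.JupyterHub.hub_routespec = "/hub/"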

implicit_spawn_seconds c.JupyterHub.implicit_spawn_seconds = Float(0)

Trigger implicit spawns after this many seconds.

When a user visits a URL for a server that’s not running, they are shown a page indicating that the requested server is not running with a button to spawn the server.

Setting this to a positive value will redirect the user after this many seconds, effectively clicking this button automatically for the users, automatically beginning the spawn process.

Warning: this can result in errors and surprising behavior when sharing access URLs to actual servers, since the wrong server is likely to be started.

init_spawners_timeout c.JupyterHub.init_spawners_timeout = Int(10)

Timeout (in seconds) to wait for spawners to initialize

Checking if spawners are healthy can take a long time if many spawners are active at hub start time.

If it takes longer than this timeout to check, init_spawner will be left to complete in the background and the http server is allowed to start.

A timeout of -1 means wait forever, which can mean a slow startup of the Hub but ensures that the Hub is fully consistent by the time it starts responding to requests. This matches the behavior of jupyterhub 1.0.

internal_certs_location c.JupyterHub.internal_certs_location = Unicode('internal-ssl')

The location to store certificates automatically created by JupyterHub.

Use with internal_ssl

internal_ssl c.JupyterHub.internal_ssl = Bool(False)

Enable SSL for all internal communication

This enables end-to-end encryption between all JupyterHub components. JupyterHub will automatically create the necessary certificate authority and sign notebook certificates as they’re created.

ip c.JupyterHub.ip = Unicode('')

The public facing ip of the whole JupyterHub application (specifically referred to as the proxy).

This is the address on which the proxy will listen. The default is to listen on all interfaces. This is the only address through which JupyterHub should be accessed by users.

jinja_environment_options c.JupyterHub.jinja_environment_options = Dict()

Supply extra arguments that will be passed to Jinja environment.

last_activity_interval c.JupyterHub.last_activity_interval = Int(300)

Interval (in seconds) at which to update last-activity timestamps.

load_groups c.JupyterHub.load_groups = Dict()

Dict of ‘group’: [‘usernames’] to load at startup.

This strictly adds groups and users to groups.

Loading one set of groups, then starting JupyterHub again with a different set will not remove users or groups from previous launches. That must be done through the API.
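
A minimal sketch with illustrative group and user names:

c.JupyterHub.load_groups = {
    "instructors": ["ada", "grace"],
    "students": ["alan", "edsger"],
}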

load_roles c.JupyterHub.load_roles = List()

List of predefined role dictionaries to load at startup.

For instance:

load_roles = [
                {
                    'name': 'teacher',
                    'description': "Access to users' information and group membership",
                    'scopes': ['users', 'groups'],
                    'users': ['cyclops', 'gandalf'],
                    'services': [],
                    'groups': []
                }
            ]

All keys apart from ‘name’ are optional. See all the available scopes in the JupyterHub REST API documentation.

Default roles are defined in roles.py.

log_datefmt c.JupyterHub.log_datefmt = Unicode('%Y-%m-%d %H:%M:%S')

The date format used by logging formatters for %(asctime)s

log_format c.JupyterHub.log_format = Unicode('[%(name)s]%(highlevel)s %(message)s')

The Logging format template

log_level c.JupyterHub.log_level = Enum(30)

Set the log level by value or name.

logo_file c.JupyterHub.logo_file = Unicode('')

Specify path to a logo image to override the Jupyter logo in the banner.

named_server_limit_per_user c.JupyterHub.named_server_limit_per_user = Int(0)

Maximum number of concurrent named servers that can be created by a user at a time.

Setting this can limit the total resources a user can consume.

If set to 0, no limit is enforced.

oauth_token_expires_in c.JupyterHub.oauth_token_expires_in = Int(0)

Expiry (in seconds) of OAuth access tokens.

The default is to expire when the cookie storing them expires, according to cookie_max_age_days config.

These are the tokens stored in cookies when you visit a single-user server or service. When they expire, you must re-authenticate with the Hub, even if your Hub authentication is still valid. If your Hub authentication is valid, logging in may be a transparent redirect as you refresh the page.

This does not affect JupyterHub API tokens in general, which do not expire by default. Only tokens issued during the oauth flow accessing services and single-user servers are affected.

New in version 1.4: OAuth token expires_in was not previously configurable.

Changed in version 1.4: Default now uses cookie_max_age_days so that oauth tokens which are generally stored in cookies, expire when the cookies storing them expire. Previously, it was one hour.
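
For example, a sketch restoring the pre-1.4 behavior of expiring oauth tokens after one hour instead of following cookie_max_age_days:

c.JupyterHub.oauth_token_expires_in = 3600  # seconds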

pid_file c.JupyterHub.pid_file = Unicode('')

File to write the PID to. Useful for daemonizing JupyterHub.

port c.JupyterHub.port = Int(8000)

The public facing port of the proxy.

This is the port on which the proxy will listen. This is the only port through which JupyterHub should be accessed by users.

proxy_api_ip c.JupyterHub.proxy_api_ip = Unicode('')

DEPRECATED since version 0.8 : Use ConfigurableHTTPProxy.api_url

proxy_api_port c.JupyterHub.proxy_api_port = Int(0)

DEPRECATED since version 0.8 : Use ConfigurableHTTPProxy.api_url

proxy_auth_token c.JupyterHub.proxy_auth_token = Unicode('')

DEPRECATED since version 0.8: Use ConfigurableHTTPProxy.auth_token

proxy_check_interval c.JupyterHub.proxy_check_interval = Int(5)

DEPRECATED since version 0.8: Use ConfigurableHTTPProxy.check_running_interval

proxy_class c.JupyterHub.proxy_class = EntryPointType(<class 'jupyterhub.proxy.ConfigurableHTTPProxy'>)

The class to use for configuring the JupyterHub proxy.

Should be a subclass of jupyterhub.proxy.Proxy.

Changed in version 1.0: proxies may be registered via entry points, e.g. c.JupyterHub.proxy_class = 'traefik'

Currently installed:
  • configurable-http-proxy: jupyterhub.proxy.ConfigurableHTTPProxy

  • default: jupyterhub.proxy.ConfigurableHTTPProxy

proxy_cmd c.JupyterHub.proxy_cmd = Command()

DEPRECATED since version 0.8. Use ConfigurableHTTPProxy.command

recreate_internal_certs c.JupyterHub.recreate_internal_certs = Bool(False)

Recreate all certificates used within JupyterHub on restart.

Note: enabling this feature requires restarting all notebook servers.

Use with internal_ssl

redirect_to_server c.JupyterHub.redirect_to_server = Bool(True)

Redirect user to server (if running), instead of control panel.

reset_db c.JupyterHub.reset_db = Bool(False)

Purge and reset the database.

service_check_interval c.JupyterHub.service_check_interval = Int(60)

Interval (in seconds) at which to check connectivity of services with web endpoints.

service_tokens c.JupyterHub.service_tokens = Dict()

Dict of token:servicename to be loaded into the database.

Allows ahead-of-time generation of API tokens for use by externally managed services.

services c.JupyterHub.services = List()

List of service specification dictionaries.

Each service specification has a name, and may include additional keys such as command, url, api_token, and environment.

For instance:

services = [
    {
        'name': 'cull_idle',
        'command': ['/path/to/cull_idle_servers.py'],
    },
    {
        'name': 'formgrader',
        'url': 'http://127.0.0.1:1234',
        'api_token': 'super-secret',
        'environment': {},
    }
]

show_config c.JupyterHub.show_config = Bool(False)

Instead of starting the Application, dump configuration to stdout

show_config_json c.JupyterHub.show_config_json = Bool(False)

Instead of starting the Application, dump configuration to stdout (as JSON)

shutdown_on_logout c.JupyterHub.shutdown_on_logout = Bool(False)

Shuts down all user servers on logout

spawner_class c.JupyterHub.spawner_class = EntryPointType(<class 'jupyterhub.spawner.LocalProcessSpawner'>)

The class to use for spawning single-user servers.

Should be a subclass of jupyterhub.spawner.Spawner.

Changed in version 1.0: spawners may be registered via entry points, e.g. c.JupyterHub.spawner_class = 'localprocess'

Currently installed:
  • default: jupyterhub.spawner.LocalProcessSpawner

  • localprocess: jupyterhub.spawner.LocalProcessSpawner

  • simple: jupyterhub.spawner.SimpleLocalProcessSpawner

ssl_cert c.JupyterHub.ssl_cert = Unicode('')

Path to SSL certificate file for the public facing interface of the proxy

When setting this, you should also set ssl_key

ssl_key c.JupyterHub.ssl_key = Unicode('')

Path to SSL key file for the public facing interface of the proxy

When setting this, you should also set ssl_cert
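
For example (hypothetical paths):

c.JupyterHub.ssl_cert = '/etc/jupyterhub/ssl/hub.example.com.crt'
c.JupyterHub.ssl_key = '/etc/jupyterhub/ssl/hub.example.com.key'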

statsd_host c.JupyterHub.statsd_host = Unicode('')

Host to send statsd metrics to. An empty string (the default) disables sending metrics.

statsd_port c.JupyterHub.statsd_port = Int(8125)

Port on which to send statsd metrics about the hub

statsd_prefix c.JupyterHub.statsd_prefix = Unicode('jupyterhub')

Prefix to use for all metrics sent by jupyterhub to statsd

subdomain_host c.JupyterHub.subdomain_host = Unicode('')

Run single-user servers on subdomains of this host.

This should be the full https://hub.domain.tld[:port].

Provides additional cross-site protections for javascript served by single-user servers.

Requires <username>.hub.domain.tld to resolve to the same host as hub.domain.tld.

In general, this is most easily achieved with wildcard DNS.

When using SSL (i.e. always) this also requires a wildcard SSL certificate.
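
For example (hypothetical domain; requires wildcard DNS and a wildcard certificate for *.hub.example.com):

c.JupyterHub.subdomain_host = 'https://hub.example.com'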

template_paths c.JupyterHub.template_paths = List()

Paths to search for jinja templates, before using the default templates.

template_vars c.JupyterHub.template_vars = Dict()

Extra variables to be passed into jinja templates
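
A sketch combining both template settings (the path and variable name are hypothetical):

c.JupyterHub.template_paths = ['/srv/jupyterhub/templates']
c.JupyterHub.template_vars = {'announcement': 'Maintenance window on Friday'}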

tornado_settings c.JupyterHub.tornado_settings = Dict()

Extra settings overrides to pass to the tornado application.

trust_user_provided_tokens c.JupyterHub.trust_user_provided_tokens = Bool(False)

Trust user-provided tokens (via JupyterHub.service_tokens) to have good entropy.

If you are not inserting additional tokens via configuration file, this flag has no effect.

In JupyterHub 0.8, internally generated tokens do not pass through additional hashing because the hashing is costly and does not increase the entropy of already-good UUIDs.

User-provided tokens, on the other hand, are not trusted to have good entropy by default, and are passed through many rounds of hashing to stretch the entropy of the key (i.e. user-provided tokens are treated as passwords instead of random keys). These keys are more costly to check.

If your inserted tokens are generated by a good-quality mechanism, e.g. openssl rand -hex 32, then you can set this flag to True to reduce the cost of checking authentication tokens.

trusted_alt_names c.JupyterHub.trusted_alt_names = List()

Names to include in the subject alternative name.

These names will be used for server name verification. This is useful if JupyterHub is being run behind a reverse proxy or services using ssl are on different hosts.

Use with internal_ssl

trusted_downstream_ips c.JupyterHub.trusted_downstream_ips = List()

Downstream proxy IP addresses to trust.

This sets the list of IP addresses that are trusted and skipped when processing the X-Forwarded-For header. For example, if an external proxy is used for TLS termination, its IP address should be added to this list to ensure the correct client IP addresses are recorded in the logs instead of the proxy server’s IP address.
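
For example, if TLS is terminated by an external proxy at a known address (hypothetical IP):

c.JupyterHub.trusted_downstream_ips = ['10.0.1.4']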

upgrade_db c.JupyterHub.upgrade_db = Bool(False)

Upgrade the database automatically on start.

Only safe if database is regularly backed up. Only SQLite databases will be backed up to a local file automatically.

use_legacy_stopped_server_status_code c.JupyterHub.use_legacy_stopped_server_status_code = Bool(False)

Return 503 rather than 424 when request comes in for a non-running server.

Prior to JupyterHub 2.0, we returned a 503 when any request came in for a user server that was currently not running. By default, JupyterHub 2.0 will return a 424 - this makes operational metric dashboards more useful.

JupyterLab < 3.2 expected the 503 to know if the user server is no longer running, and prompted the user to start their server. Set this config to true to retain the old behavior, so JupyterLab < 3.2 can continue to show the appropriate UI when the user server is stopped.

This option will be removed in a future release.

user_redirect_hook c.JupyterHub.user_redirect_hook = Callable(None)

Callable to affect behavior of /user-redirect/

Receives 4 parameters:
  • path – the URL path that was provided after /user-redirect/

  • request – a Tornado HTTPServerRequest representing the current request

  • user – the currently authenticated user

  • base_url – the base_url of the current hub, for relative redirects

It should return the new URL to redirect to, or None to preserve current behavior.
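
A minimal sketch of such a callable (the 'workshop' shortcut is hypothetical):

def user_redirect_hook(path, request, user, base_url):
    # send a hypothetical 'workshop' shortcut to a fixed notebook on the user's own server
    if path == 'workshop':
        return f'/user/{user.name}/notebooks/workshop.ipynb'
    # returning None preserves the default /user-redirect/ behavior
    return None

c.JupyterHub.user_redirect_hook = user_redirect_hook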

Authenticators
Module: jupyterhub.auth

Base Authenticator class and the default PAM Authenticator

Authenticator
class jupyterhub.auth.Authenticator(**kwargs)

Base class for implementing an authentication provider for JupyterHub

add_user(user)

Hook called when a user is added to JupyterHub

This is called:
  • When a user first authenticates

  • When the hub restarts, for all users.

This method may be a coroutine.

By default, this just adds the user to the allowed_users set.

Subclasses may do more extensive things, such as adding actual unix users, but they should call super to ensure the allowed_users set is updated.

Note that this should be idempotent, since it is called whenever the hub restarts for all users.

Parameters

user (User) – The User wrapper object

admin_users c.Authenticator.admin_users = Set()

Set of users that will have admin rights on this JupyterHub.

Note: As of JupyterHub 2.0, full admin rights should not be required, and more precise permissions can be managed via roles.

Admin users have extra privileges:
  • Use the admin panel to see list of users logged in

  • Add / remove users in some authenticators

  • Restart / halt the hub

  • Start / stop users’ single-user servers

  • Can access each individual users’ single-user server (if configured)

Admin access should be treated the same way root access is.

Defaults to an empty set, in which case no user has admin access.

allowed_users c.Authenticator.allowed_users = Set()

Set of usernames that are allowed to log in.

Use this with supported authenticators to restrict which users can log in. This is an additional list that further restricts users, beyond whatever restrictions the authenticator has in place. Any user in this list is granted the ‘user’ role on hub startup.

If empty, does not perform any additional restriction.

Changed in version 1.2: Authenticator.whitelist renamed to allowed_users
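
For example (hypothetical usernames):

c.Authenticator.allowed_users = {'cyclops', 'gandalf', 'river'}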

auth_refresh_age c.Authenticator.auth_refresh_age = Int(300)

The max age (in seconds) of authentication info before forcing a refresh of user auth info.

Refreshing auth info allows, e.g. requesting/re-validating auth tokens.

See refresh_user() for what happens when user auth info is refreshed (nothing by default).

async authenticate(handler, data)

Authenticate a user with login form data

This must be a coroutine.

It must return the username on successful authentication, and return None on failed authentication.

Checking allowed_users/blocked_users is handled separately by the caller.

Changed in version 0.8: Allow authenticate to return a dict containing auth_state.

Parameters
  • handler (tornado.web.RequestHandler) – the current request handler

  • data (dict) – The formdata of the login form. The default form has ‘username’ and ‘password’ fields.

Returns

The username of the authenticated user, or None if Authentication failed.

The Authenticator may return a dict instead, which MUST have a key name holding the username, and MAY have additional keys:

  • auth_state, a dictionary of auth state that will be persisted;

  • admin, the admin setting value for the user

  • groups, the list of group names the user should be a member of, if Authenticator.manage_groups is True.

Return type

user (str or dict or None)
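
A minimal sketch of a custom Authenticator overriding authenticate (the credential store below is a placeholder, not a real authentication scheme):

from jupyterhub.auth import Authenticator

class MyAuthenticator(Authenticator):
    # placeholder credential store; a real authenticator would query an external system
    allowed_logins = {'cyclops': 'not-a-real-secret'}

    async def authenticate(self, handler, data):
        # data is the login form dict, with 'username' and 'password' fields by default
        username = data['username']
        if self.allowed_logins.get(username) == data['password']:
            return username
            # a dict may be returned instead, e.g.:
            # return {'name': username, 'auth_state': {'token': '...'}}
        return None  # None means authentication failed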

auto_login c.Authenticator.auto_login = Bool(False)

Automatically begin the login process

rather than starting with a “Login with…” link at /hub/login

To work, .login_url() must give a URL other than the default /hub/login, such as an oauth handler or another automatic login handler, registered with .get_handlers().

New in version 0.8.

auto_login_oauth2_authorize c.Authenticator.auto_login_oauth2_authorize = Bool(False)

Automatically begin login process for OAuth2 authorization requests

When another application is using JupyterHub as OAuth2 provider, it sends users to /hub/api/oauth2/authorize. If the user isn’t logged in already, and auto_login is not set, the user will be dumped on the hub’s home page, without any context on what to do next.

Setting this to true will automatically redirect users to login if they aren’t logged in only on the /hub/api/oauth2/authorize endpoint.

New in version 1.5.

blocked_users c.Authenticator.blocked_users = Set()

Set of usernames that are not allowed to log in.

Use this with supported authenticators to block specific users from logging in. This is an additional block list that further restricts users, beyond whatever restrictions the authenticator has in place.

If empty, does not perform any additional restriction.

Changed in version 1.2: Authenticator.blacklist renamed to blocked_users

check_allowed(username, authentication=None)

Check if a username is allowed to authenticate based on configuration

Return True if username is allowed, False otherwise. No allowed_users set means any username is allowed.

Names are normalized before being checked against the allowed set.

Changed in version 1.0: Signature updated to accept authentication data and any future changes

Changed in version 1.2: Renamed check_whitelist to check_allowed

check_blocked_users(username, authentication=None)

Check if a username is blocked to authenticate based on Authenticator.blocked configuration

Return True if username is allowed, False otherwise. No block list means any username is allowed.

Names are normalized before being checked against the block list.

Changed in version 1.0: Signature updated to accept authentication data as second argument

Changed in version 1.2: Renamed check_blacklist to check_blocked_users

delete_invalid_users c.Authenticator.delete_invalid_users = Bool(False)

Delete any users from the database that do not pass validation

When JupyterHub starts, .add_user will be called on each user in the database to verify that all users are still valid.

If delete_invalid_users is True, any users that do not pass validation will be deleted from the database. Use this if users might be deleted from an external system, such as local user accounts.

If False (default), invalid users remain in the Hub’s database and a warning will be issued. This is the default to avoid data loss due to config changes.

delete_user(user)

Hook called when a user is deleted

Removes the user from the allowed_users set. Subclasses should call super to ensure the allowed_users set is updated.

Parameters

user (User) – The User wrapper object

enable_auth_state c.Authenticator.enable_auth_state = Bool(False)

Enable persisting auth_state (if available).

auth_state will be encrypted and stored in the Hub’s database. This can include things like authentication tokens, etc. to be passed to Spawners as environment variables.

Encrypting auth_state requires the cryptography package.

Additionally, the JUPYTERHUB_CRYPT_KEY environment variable must contain one (or more, separated by ;) 32B encryption keys. These can be either base64 or hex-encoded.

If encryption is unavailable, auth_state cannot be persisted.

New in JupyterHub 0.8
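
A sketch of enabling it (requires the cryptography package; the key would typically be generated with e.g. openssl rand -hex 32 and provided via the environment):

# in the environment of the Hub process:
#   export JUPYTERHUB_CRYPT_KEY=$(openssl rand -hex 32)
c.Authenticator.enable_auth_state = True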

async get_authenticated_user(handler, data)

Authenticate the user who is attempting to log in

Returns user dict if successful, None otherwise.

This calls authenticate, which should be overridden in subclasses, normalizes the username if any normalization should be done, and then validates the name in the allowed set.

This is the outer API for authenticating a user. Subclasses should not override this method.

The various stages can be overridden separately:
  • authenticate turns formdata into a username

  • normalize_username normalizes the username

  • check_allowed checks against the allowed usernames

Changed in version 0.8: return dict instead of username

get_custom_html(base_url)

Get custom HTML for the authenticator.

get_handlers(app)

Return any custom handlers the authenticator needs to register

Used in conjugation with login_url and logout_url.

Parameters

app (JupyterHub Application) – the application object, in case it needs to be accessed for info.

Returns

list of ('/url', Handler) tuples passed to tornado. The Hub prefix is added to any URLs.

Return type

handlers (list)

is_admin(handler, authentication)

Authentication helper to determine a user’s admin status.

Parameters
  • handler (tornado.web.RequestHandler) – the current request handler

  • authentication – The authentication dict generated by authenticate.

Returns

The admin status of the user, or None if it could not be determined or should not change.

Return type

admin_status (Bool or None)

login_url(base_url)

Override this when registering a custom login handler

Generally used by authenticators that do not use simple form-based authentication.

The subclass overriding this is responsible for making sure there is a handler available to handle the URL returned from this method, using the get_handlers method.

Parameters

base_url (str) – the base URL of the Hub (e.g. /hub/)

Returns

The login URL, e.g. ‘/hub/login’

Return type

str

logout_url(base_url)

Override when registering a custom logout handler

The subclass overriding this is responsible for making sure there is a handler available to handle the URL returned from this method, using the get_handlers method.

Parameters

base_url (str) – the base URL of the Hub (e.g. /hub/)

Returns

The logout URL, e.g. ‘/hub/logout’

Return type

str

manage_groups c.Authenticator.manage_groups = Bool(False)

Let authenticator manage user groups

If True, Authenticator.authenticate and/or .refresh_user may return a list of group names in the ‘groups’ field, which will be assigned to the user.

All group-assignment APIs are disabled if this is True.

normalize_username(username)

Normalize the given username and return it

Override in subclasses if usernames need different normalization rules.

The default attempts to lowercase the username and apply username_map if it is set.

post_auth_hook c.Authenticator.post_auth_hook = Any(None)

An optional hook function that you can implement to do some bootstrapping work during authentication. For example, loading user account details from an external system.

This function is called after the user has passed all authentication checks and is ready to successfully authenticate. This function must return the authentication dict regardless of changes to it.

This may be a coroutine.

Example:

import os, pwd
def my_hook(authenticator, handler, authentication):
    user_data = pwd.getpwnam(authentication['name'])
    spawn_data = {
        'pw_data': user_data,
        'gid_list': os.getgrouplist(authentication['name'], user_data.pw_gid)
    }

    if authentication['auth_state'] is None:
        authentication['auth_state'] = {}
    authentication['auth_state']['spawn_data'] = spawn_data

    return authentication

c.Authenticator.post_auth_hook = my_hook

post_spawn_stop(user, spawner)

Hook called after stopping a user container

Can be used to do auth-related cleanup, e.g. closing PAM sessions.

pre_spawn_start(user, spawner)

Hook called before spawning a user’s server

Can be used to do auth-related startup, e.g. opening PAM sessions.

refresh_pre_spawn c.Authenticator.refresh_pre_spawn = Bool(False)

Force refresh of auth prior to spawn.

This forces refresh_user() to be called prior to launching a server, to ensure that auth state is up-to-date.

This can be important when e.g. auth tokens that may have expired are passed to the spawner via environment variables from auth_state.

If refresh_user cannot refresh the user auth data, launch will fail until the user logs in again.

async refresh_user(user, handler=None)

Refresh auth data for a given user

Allows refreshing or invalidating auth data.

Only override if your authenticator needs to refresh its data about users once in a while.

Parameters
  • user (User) – the user whose auth info should be refreshed

  • handler (tornado.web.RequestHandler or None) – the current request handler, if any

Returns

Return True if auth data for the user is up-to-date and no updates are required.

Return False if the user’s auth data has expired, and they should be required to login again.

Return a dict of auth data if some values should be updated. This dict should have the same structure as that returned by authenticate() when it returns a dict. Any fields present will refresh the value for the user. Any fields not present will be left unchanged. This can include updating .admin or .auth_state fields.

Return type

auth_data (bool or dict)

async run_post_auth_hook(handler, authentication)

Run the post_auth_hook if defined

Parameters
  • handler (tornado.web.RequestHandler) – the current request handler

  • authentication (dict) – User authentication data dictionary. Contains the username (‘name’), admin status (‘admin’), and auth state dictionary (‘auth_state’).

Returns

The hook must always return the authentication dict

Return type

Authentication (dict)

username_map c.Authenticator.username_map = Dict()

Dictionary mapping authenticator usernames to JupyterHub users.

Primarily used to normalize OAuth user names to local users.

username_pattern c.Authenticator.username_pattern = Unicode('')

Regular expression pattern that all valid usernames must match.

If a username does not match the pattern specified here, authentication will not be attempted.

If not set, allow any username.
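
For example, a hypothetical pattern allowing only short lowercase names:

c.Authenticator.username_pattern = r'^[a-z][a-z0-9_]{2,31}$'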

validate_username(username)

Validate a normalized username

Return True if username is valid, False otherwise.

whitelist c.Authenticator.whitelist = Set()

Deprecated, use Authenticator.allowed_users

LocalAuthenticator
class jupyterhub.auth.LocalAuthenticator(**kwargs)

Base class for Authenticators that work with local Linux/UNIX users

Checks for local users, and can attempt to create them if they don't exist.

add_system_user(user)

Create a new local UNIX user on the system.

Tested to work on FreeBSD and Linux, at least.

async add_user(user)

Hook called whenever a new user is added

If self.create_system_users is set, an attempt will be made to create the local system user if it doesn't already exist.

add_user_cmd c.LocalAuthenticator.add_user_cmd = Command()

The command to use for creating users as a list of strings

For each element in the list, the string USERNAME will be replaced with the user’s username. The username will also be appended as the final argument.

For Linux, the default value is:

['adduser', '-q', '--gecos', '""', '--disabled-password']

To specify a custom home directory, set this to:

['adduser', '-q', '--gecos', '""', '--home', '/customhome/USERNAME', '--disabled-password']

This will run the command:

adduser -q --gecos "" --home /customhome/river --disabled-password river

when the user 'river' is created.

admin_users c.LocalAuthenticator.admin_users = Set()

Set of users that will have admin rights on this JupyterHub.

Note: As of JupyterHub 2.0, full admin rights should not be required, and more precise permissions can be managed via roles.

Admin users have extra privileges:
  • Use the admin panel to see list of users logged in

  • Add / remove users in some authenticators

  • Restart / halt the hub

  • Start / stop users’ single-user servers

  • Can access each individual users’ single-user server (if configured)

Admin access should be treated the same way root access is.

Defaults to an empty set, in which case no user has admin access.

allowed_groups c.LocalAuthenticator.allowed_groups = Set()

Allow login from all users in these UNIX groups.

If set, allowed username set is ignored.
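
For example (hypothetical group name):

c.LocalAuthenticator.allowed_groups = {'jupyterhub-users'}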

allowed_users c.LocalAuthenticator.allowed_users = Set()

Set of usernames that are allowed to log in.

Use this with supported authenticators to restrict which users can log in. This is an additional list that further restricts users, beyond whatever restrictions the authenticator has in place. Any user in this list is granted the ‘user’ role on hub startup.

If empty, does not perform any additional restriction.

Changed in version 1.2: Authenticator.whitelist renamed to allowed_users

auth_refresh_age c.LocalAuthenticator.auth_refresh_age = Int(300)

The max age (in seconds) of authentication info before forcing a refresh of user auth info.

Refreshing auth info allows, e.g. requesting/re-validating auth tokens.

See refresh_user() for what happens when user auth info is refreshed (nothing by default).

auto_login c.LocalAuthenticator.auto_login = Bool(False)

Automatically begin the login process

rather than starting with a “Login with…” link at /hub/login

To work, .login_url() must give a URL other than the default /hub/login, such as an oauth handler or another automatic login handler, registered with .get_handlers().

New in version 0.8.

auto_login_oauth2_authorize c.LocalAuthenticator.auto_login_oauth2_authorize = Bool(False)

Automatically begin login process for OAuth2 authorization requests

When another application is using JupyterHub as OAuth2 provider, it sends users to /hub/api/oauth2/authorize. If the user isn’t logged in already, and auto_login is not set, the user will be dumped on the hub’s home page, without any context on what to do next.

Setting this to true will automatically redirect users to login if they aren’t logged in only on the /hub/api/oauth2/authorize endpoint.

New in version 1.5.

blocked_users c.LocalAuthenticator.blocked_users = Set()

Set of usernames that are not allowed to log in.

Use this with supported authenticators to block specific users from logging in. This is an additional block list that further restricts users, beyond whatever restrictions the authenticator has in place.

If empty, does not perform any additional restriction.

Changed in version 1.2: Authenticator.blacklist renamed to blocked_users

check_allowed(username, authentication=None)

Check if a username is allowed to authenticate based on configuration

Return True if username is allowed, False otherwise. No allowed_users set means any username is allowed.

Names are normalized before being checked against the allowed set.

Changed in version 1.0: Signature updated to accept authentication data and any future changes

Changed in version 1.2: Renamed check_whitelist to check_allowed

check_allowed_groups(username, authentication=None)

If allowed_groups is configured, check if authenticating user is part of group.

create_system_users c.LocalAuthenticator.create_system_users = Bool(False)

If set to True, will attempt to create local system users if they do not exist already.

Supports Linux and BSD variants only.

delete_invalid_users c.LocalAuthenticator.delete_invalid_users = Bool(False)

Delete any users from the database that do not pass validation

When JupyterHub starts, .add_user will be called on each user in the database to verify that all users are still valid.

If delete_invalid_users is True, any users that do not pass validation will be deleted from the database. Use this if users might be deleted from an external system, such as local user accounts.

If False (default), invalid users remain in the Hub’s database and a warning will be issued. This is the default to avoid data loss due to config changes.

enable_auth_state c.LocalAuthenticator.enable_auth_state = Bool(False)

Enable persisting auth_state (if available).

auth_state will be encrypted and stored in the Hub’s database. This can include things like authentication tokens, etc. to be passed to Spawners as environment variables.

Encrypting auth_state requires the cryptography package.

Additionally, the JUPYTERHUB_CRYPT_KEY environment variable must contain one (or more, separated by ;) 32B encryption keys. These can be either base64 or hex-encoded.

If encryption is unavailable, auth_state cannot be persisted.

New in JupyterHub 0.8

group_whitelist c.LocalAuthenticator.group_whitelist = Set()

DEPRECATED: use allowed_groups

manage_groups c.LocalAuthenticator.manage_groups = Bool(False)

Let authenticator manage user groups

If True, Authenticator.authenticate and/or .refresh_user may return a list of group names in the ‘groups’ field, which will be assigned to the user.

All group-assignment APIs are disabled if this is True.

post_auth_hook c.LocalAuthenticator.post_auth_hook = Any(None)

An optional hook function that you can implement to do some bootstrapping work during authentication. For example, loading user account details from an external system.

This function is called after the user has passed all authentication checks and is ready to successfully authenticate. This function must return the authentication dict regardless of changes to it.

This may be a coroutine.

Example:

import os, pwd
def my_hook(authenticator, handler, authentication):
    user_data = pwd.getpwnam(authentication['name'])
    spawn_data = {
        'pw_data': user_data,
        'gid_list': os.getgrouplist(authentication['name'], user_data.pw_gid)
    }

    if authentication['auth_state'] is None:
        authentication['auth_state'] = {}
    authentication['auth_state']['spawn_data'] = spawn_data

    return authentication

c.Authenticator.post_auth_hook = my_hook

refresh_pre_spawn c.LocalAuthenticator.refresh_pre_spawn = Bool(False)

Force refresh of auth prior to spawn.

This forces refresh_user() to be called prior to launching a server, to ensure that auth state is up-to-date.

This can be important when e.g. auth tokens that may have expired are passed to the spawner via environment variables from auth_state.

If refresh_user cannot refresh the user auth data, launch will fail until the user logs in again.

system_user_exists(user)

Check if the user exists on the system

uids c.LocalAuthenticator.uids = Dict()

Dictionary of uids to use at user creation time. This helps ensure that users created from the database get the same uid each time they are created in temporary deployments or containers.
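
For example (hypothetical usernames and uids):

c.LocalAuthenticator.uids = {'river': 1001, 'jayne': 1002}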

username_map c.LocalAuthenticator.username_map = Dict()

Dictionary mapping authenticator usernames to JupyterHub users.

Primarily used to normalize OAuth user names to local users.

username_pattern c.LocalAuthenticator.username_pattern = Unicode('')

Regular expression pattern that all valid usernames must match.

If a username does not match the pattern specified here, authentication will not be attempted.

If not set, allow any username.

whitelist c.LocalAuthenticator.whitelist = Set()

Deprecated, use Authenticator.allowed_users

PAMAuthenticator
class jupyterhub.auth.PAMAuthenticator(**kwargs)

Authenticate local UNIX users with PAM

add_user_cmd c.PAMAuthenticator.add_user_cmd = Command()

The command to use for creating users as a list of strings

For each element in the list, the string USERNAME will be replaced with the user’s username. The username will also be appended as the final argument.

For Linux, the default value is:

['adduser', '-q', '--gecos', '""', '--disabled-password']

To specify a custom home directory, set this to:

['adduser', '-q', '--gecos', '""', '--home', '/customhome/USERNAME', '--disabled-password']

This will run the command:

adduser -q --gecos "" --home /customhome/river --disabled-password river

when the user 'river' is created.

admin_groups c.PAMAuthenticator.admin_groups = Set()

Authoritative list of user groups that determine admin access. Users not in these groups can still be granted admin status through admin_users.

allowed/blocked rules still apply.

Note: As of JupyterHub 2.0, full admin rights should not be required, and more precise permissions can be managed via roles.
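
For example (hypothetical group name):

c.PAMAuthenticator.admin_groups = {'hub-admins'}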

admin_users c.PAMAuthenticator.admin_users = Set()

Set of users that will have admin rights on this JupyterHub.

Note: As of JupyterHub 2.0, full admin rights should not be required, and more precise permissions can be managed via roles.

Admin users have extra privileges:
  • Use the admin panel to see list of users logged in

  • Add / remove users in some authenticators

  • Restart / halt the hub

  • Start / stop users’ single-user servers

  • Can access each individual users’ single-user server (if configured)

Admin access should be treated the same way root access is.

Defaults to an empty set, in which case no user has admin access.

allowed_groups c.PAMAuthenticator.allowed_groups = Set()

Allow login from all users in these UNIX groups.

If set, allowed username set is ignored.

allowed_users c.PAMAuthenticator.allowed_users = Set()

Set of usernames that are allowed to log in.

Use this with supported authenticators to restrict which users can log in. This is an additional list that further restricts users, beyond whatever restrictions the authenticator has in place. Any user in this list is granted the ‘user’ role on hub startup.

If empty, does not perform any additional restriction.

Changed in version 1.2: Authenticator.whitelist renamed to allowed_users

auth_refresh_age c.PAMAuthenticator.auth_refresh_age = Int(300)

The max age (in seconds) of authentication info before forcing a refresh of user auth info.

Refreshing auth info allows, e.g. requesting/re-validating auth tokens.

See refresh_user() for what happens when user auth info is refreshed (nothing by default).

auto_login c.PAMAuthenticator.auto_login = Bool(False)

Automatically begin the login process

rather than starting with a “Login with…” link at /hub/login

To work, .login_url() must give a URL other than the default /hub/login, such as an oauth handler or another automatic login handler, registered with .get_handlers().

New in version 0.8.

auto_login_oauth2_authorize c.PAMAuthenticator.auto_login_oauth2_authorize = Bool(False)

Automatically begin login process for OAuth2 authorization requests

When another application is using JupyterHub as OAuth2 provider, it sends users to /hub/api/oauth2/authorize. If the user isn’t logged in already, and auto_login is not set, the user will be dumped on the hub’s home page, without any context on what to do next.

Setting this to true will automatically redirect users to login if they aren’t logged in only on the /hub/api/oauth2/authorize endpoint.

New in version 1.5.

blocked_users c.PAMAuthenticator.blocked_users = Set()

Set of usernames that are not allowed to log in.

Use this with supported authenticators to block specific users from logging in. This is an additional block list that further restricts users, beyond whatever restrictions the authenticator has in place.

If empty, does not perform any additional restriction.

Changed in version 1.2: Authenticator.blacklist renamed to blocked_users

check_account c.PAMAuthenticator.check_account = Bool(True)

Whether to check the user’s account status via PAM during authentication.

The PAM account stack performs non-authentication based account management. It is typically used to restrict/permit access to a service and this step is needed to access the host’s user access control.

Disabling this can be dangerous as authenticated but unauthorized users may be granted access and, therefore, arbitrary execution on the system.

create_system_users c.PAMAuthenticator.create_system_users = Bool(False)

If set to True, will attempt to create local system users if they do not exist already.

Supports Linux and BSD variants only.

delete_invalid_users c.PAMAuthenticator.delete_invalid_users = Bool(False)

Delete any users from the database that do not pass validation

When JupyterHub starts, .add_user will be called on each user in the database to verify that all users are still valid.

If delete_invalid_users is True, any users that do not pass validation will be deleted from the database. Use this if users might be deleted from an external system, such as local user accounts.

If False (default), invalid users remain in the Hub’s database and a warning will be issued. This is the default to avoid data loss due to config changes.

enable_auth_state c.PAMAuthenticator.enable_auth_state = Bool(False)

Enable persisting auth_state (if available).

auth_state will be encrypted and stored in the Hub’s database. This can include things like authentication tokens, etc. to be passed to Spawners as environment variables.

Encrypting auth_state requires the cryptography package.

Additionally, the JUPYTERHUB_CRYPT_KEY environment variable must contain one (or more, separated by ;) 32B encryption keys. These can be either base64 or hex-encoded.

If encryption is unavailable, auth_state cannot be persisted.

New in JupyterHub 0.8

encoding c.PAMAuthenticator.encoding = Unicode('utf8')

The text encoding to use when communicating with PAM

group_whitelist c.PAMAuthenticator.group_whitelist = Set()

DEPRECATED: use allowed_groups

manage_groups c.PAMAuthenticator.manage_groups = Bool(False)

Let authenticator manage user groups

If True, Authenticator.authenticate and/or .refresh_user may return a list of group names in the ‘groups’ field, which will be assigned to the user.

All group-assignment APIs are disabled if this is True.

open_sessions c.PAMAuthenticator.open_sessions = Bool(False)

Whether to open a new PAM session when spawners are started.

This may trigger things like mounting shared filesystems, loading credentials, etc. depending on system configuration.

The lifecycle of PAM sessions is not correct, so many PAM session configurations will not work.

If any errors are encountered when opening/closing PAM sessions, this is automatically set to False.

Changed in version 2.2: Due to longstanding problems in the session lifecycle, this is now disabled by default. You may opt-in to opening sessions by setting this to True.

pam_normalize_username c.PAMAuthenticator.pam_normalize_username = Bool(False)

Round-trip the username via PAM lookups to make sure it is unique

PAM can accept multiple usernames that map to the same user, for example DOMAIN\username in some cases. To prevent this, convert the username into a uid, then back into a username, to normalize it.

post_auth_hook c.PAMAuthenticator.post_auth_hook = Any(None)

An optional hook function that you can implement to do some bootstrapping work during authentication. For example, loading user account details from an external system.

This function is called after the user has passed all authentication checks and is ready to successfully authenticate. This function must return the authentication dict regardless of changes to it.

This may be a coroutine.

Example:

import os, pwd
def my_hook(authenticator, handler, authentication):
    user_data = pwd.getpwnam(authentication['name'])
    spawn_data = {
        'pw_data': user_data,
        'gid_list': os.getgrouplist(authentication['name'], user_data.pw_gid)
    }

    if authentication['auth_state'] is None:
        authentication['auth_state'] = {}
    authentication['auth_state']['spawn_data'] = spawn_data

    return authentication

c.Authenticator.post_auth_hook = my_hook

refresh_pre_spawn c.PAMAuthenticator.refresh_pre_spawn = Bool(False)

Force refresh of auth prior to spawn.

This forces refresh_user() to be called prior to launching a server, to ensure that auth state is up-to-date.

This can be important when e.g. auth tokens that may have expired are passed to the spawner via environment variables from auth_state.

If refresh_user cannot refresh the user auth data, launch will fail until the user logs in again.

service c.PAMAuthenticator.service = Unicode('login')

The name of the PAM service to use for authentication

uids c.PAMAuthenticator.uids = Dict()

Dictionary of uids to use at user creation time. This helps ensure that users created from the database get the same uid each time they are created in temporary deployments or containers.

username_map c.PAMAuthenticator.username_map = Dict()

Dictionary mapping authenticator usernames to JupyterHub users.

Primarily used to normalize OAuth user names to local users.

username_pattern c.PAMAuthenticator.username_pattern = Unicode('')

Regular expression pattern that all valid usernames must match.

If a username does not match the pattern specified here, authentication will not be attempted.

If not set, allow any username.

whitelist c.PAMAuthenticator.whitelist = Set()

Deprecated, use Authenticator.allowed_users

DummyAuthenticator
class jupyterhub.auth.DummyAuthenticator(**kwargs)

Dummy Authenticator for testing

By default, any username + password is allowed. If a non-empty password is set, any username will be allowed if it logs in with that password.

New in version 1.0.

admin_users c.DummyAuthenticator.admin_users = Set()

Set of users that will have admin rights on this JupyterHub.

Note: As of JupyterHub 2.0, full admin rights should not be required, and more precise permissions can be managed via roles.

Admin users have extra privileges:
  • Use the admin panel to see list of users logged in

  • Add / remove users in some authenticators

  • Restart / halt the hub

  • Start / stop users’ single-user servers

  • Can access each individual users’ single-user server (if configured)

Admin access should be treated the same way root access is.

Defaults to an empty set, in which case no user has admin access.

allowed_users c.DummyAuthenticator.allowed_users = Set()

Set of usernames that are allowed to log in.

Use this with supported authenticators to restrict which users can log in. This is an additional list that further restricts users, beyond whatever restrictions the authenticator has in place. Any user in this list is granted the ‘user’ role on hub startup.

If empty, does not perform any additional restriction.

Changed in version 1.2: Authenticator.whitelist renamed to allowed_users

auth_refresh_age c.DummyAuthenticator.auth_refresh_age = Int(300)

The max age (in seconds) of authentication info before forcing a refresh of user auth info.

Refreshing auth info allows, e.g. requesting/re-validating auth tokens.

See refresh_user() for what happens when user auth info is refreshed (nothing by default).

auto_login c.DummyAuthenticator.auto_login = Bool(False)

Automatically begin the login process

rather than starting with a “Login with…” link at /hub/login

To work, .login_url() must give a URL other than the default /hub/login, such as an oauth handler or another automatic login handler, registered with .get_handlers().

New in version 0.8.

auto_login_oauth2_authorize c.DummyAuthenticator.auto_login_oauth2_authorize = Bool(False)

Automatically begin login process for OAuth2 authorization requests

When another application is using JupyterHub as OAuth2 provider, it sends users to /hub/api/oauth2/authorize. If the user isn’t logged in already, and auto_login is not set, the user will be dumped on the hub’s home page, without any context on what to do next.

Setting this to true will automatically redirect users to login if they aren’t logged in only on the /hub/api/oauth2/authorize endpoint.

New in version 1.5.

blocked_users c.DummyAuthenticator.blocked_users = Set()

Set of usernames that are not allowed to log in.

Use this with supported authenticators to block specific users from logging in. This is an additional block list that further restricts users, beyond whatever restrictions the authenticator has in place.

If empty, does not perform any additional restriction.

Changed in version 1.2: Authenticator.blacklist renamed to blocked_users

delete_invalid_users c.DummyAuthenticator.delete_invalid_users = Bool(False)

Delete any users from the database that do not pass validation

When JupyterHub starts, .add_user will be called on each user in the database to verify that all users are still valid.

If delete_invalid_users is True, any users that do not pass validation will be deleted from the database. Use this if users might be deleted from an external system, such as local user accounts.

If False (default), invalid users remain in the Hub’s database and a warning will be issued. This is the default to avoid data loss due to config changes.

enable_auth_state c.DummyAuthenticator.enable_auth_state = Bool(False)

Enable persisting auth_state (if available).

auth_state will be encrypted and stored in the Hub’s database. This can include things like authentication tokens, etc. to be passed to Spawners as environment variables.

Encrypting auth_state requires the cryptography package.

Additionally, the JUPYTERHUB_CRYPT_KEY environment variable must contain one (or more, separated by ;) 32B encryption keys. These can be either base64 or hex-encoded.

If encryption is unavailable, auth_state cannot be persisted.

New in JupyterHub 0.8

manage_groups c.DummyAuthenticator.manage_groups = Bool(False)

Let authenticator manage user groups

If True, Authenticator.authenticate and/or .refresh_user may return a list of group names in the ‘groups’ field, which will be assigned to the user.

All group-assignment APIs are disabled if this is True.

password c.DummyAuthenticator.password = Unicode('')

Set a global password for all users wanting to log in.

This allows users with any username to log in with the same static password.
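
A sketch of using the dummy authenticator for local testing (never in production; the password is arbitrary):

c.JupyterHub.authenticator_class = 'jupyterhub.auth.DummyAuthenticator'
c.DummyAuthenticator.password = 'a-shared-test-password'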

post_auth_hook c.DummyAuthenticator.post_auth_hook = Any(None)

An optional hook function that you can implement to do some bootstrapping work during authentication. For example, loading user account details from an external system.

This function is called after the user has passed all authentication checks and is ready to successfully authenticate. This function must return the authentication dict regardless of changes to it.

This may be a coroutine.

Example:

import os, pwd
def my_hook(authenticator, handler, authentication):
    user_data = pwd.getpwnam(authentication['name'])
    spawn_data = {
        'pw_data': user_data,
        'gid_list': os.getgrouplist(authentication['name'], user_data.pw_gid)
    }

    if authentication['auth_state'] is None:
        authentication['auth_state'] = {}
    authentication['auth_state']['spawn_data'] = spawn_data

    return authentication

c.Authenticator.post_auth_hook = my_hook

refresh_pre_spawn c.DummyAuthenticator.refresh_pre_spawn = Bool(False)

Force refresh of auth prior to spawn.

This forces refresh_user() to be called prior to launching a server, to ensure that auth state is up-to-date.

This can be important when e.g. auth tokens that may have expired are passed to the spawner via environment variables from auth_state.

If refresh_user cannot refresh the user auth data, launch will fail until the user logs in again.

username_map c.DummyAuthenticator.username_map = Dict()

Dictionary mapping authenticator usernames to JupyterHub users.

Primarily used to normalize OAuth user names to local users.

username_pattern c.DummyAuthenticator.username_pattern = Unicode('')

Regular expression pattern that all valid usernames must match.

If a username does not match the pattern specified here, authentication will not be attempted.

If not set, allow any username.

whitelist c.DummyAuthenticator.whitelist = Set()

Deprecated, use Authenticator.allowed_users

Spawners
Module: jupyterhub.spawner

Contains base Spawner class & default implementation

Spawner
class jupyterhub.spawner.Spawner(**kwargs)

Base class for spawning single-user notebook servers.

Subclass this, and override the following methods:

  • load_state

  • get_state

  • start

  • stop

  • poll

As JupyterHub supports multiple users, an instance of the Spawner subclass is created for each user. If there are 20 JupyterHub users, there will be 20 instances of the subclass.

args c.Spawner.args = List()

Extra arguments to be passed to the single-user server.

Some spawners allow shell-style expansion here, allowing you to use environment variables. Most, including the default, do not. Consult the documentation for your spawner to verify!

auth_state_hook c.Spawner.auth_state_hook = Any(None)

An optional hook function that you can implement to pass auth_state to the spawner after it has been initialized but before it starts. The auth_state dictionary may be set by the .authenticate() method of the authenticator. This hook enables you to pass some or all of that information to your spawner.

Example:

def userdata_hook(spawner, auth_state):
    spawner.userdata = auth_state["userdata"]

c.Spawner.auth_state_hook = userdata_hook

cmd c.Spawner.cmd = Command()

The command used for starting the single-user server.

Provide either a string or a list containing the path to the startup script command. Extra arguments, other than this path, should be provided via args.

This is usually set if you want to start the single-user server in a different python environment (with virtualenv/conda) than JupyterHub itself.

Some spawners allow shell-style expansion here, allowing you to use environment variables. Most, including the default, do not. Consult the documentation for your spawner to verify!

consecutive_failure_limit c.Spawner.consecutive_failure_limit = Int(0)

Maximum number of consecutive failures to allow before shutting down JupyterHub.

This helps JupyterHub recover from a certain class of problem preventing launch in contexts where the Hub is automatically restarted (e.g. systemd, docker, kubernetes).

A limit of 0 means no limit and consecutive failures will not be tracked.

cpu_guarantee c.Spawner.cpu_guarantee = Float(None)

Minimum number of cpu-cores a single-user notebook server is guaranteed to have available.

If this value is set to 0.5, allows use of 50% of one CPU. If this value is set to 2, allows use of up to 2 CPUs.

This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner, LocalProcessSpawner, does not implement this support. A custom spawner must add support for this setting for it to be enforced.

cpu_limit c.Spawner.cpu_limit = Float(None)

Maximum number of cpu-cores a single-user notebook server is allowed to use.

If this value is set to 0.5, allows use of 50% of one CPU. If this value is set to 2, allows use of up to 2 CPUs.

The single-user notebook server will never be scheduled by the kernel to use more cpu-cores than this. There is no guarantee that it can access this many cpu-cores.

This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner, LocalProcessSpawner, does not implement this support. A custom spawner must add support for this setting for it to be enforced.

async create_certs()

Create and set ownership for the certs to be used for internal ssl

Keyword Arguments
  • alt_names (list) – a list of alternative names to identify the server by; see https://en.wikipedia.org/wiki/Subject_Alternative_Name

  • override – override the default_names with the provided alt_names

Returns

Path to cert files and CA

Return type

dict

This method creates certs for use with the singleuser notebook. It enables SSL and ensures that the notebook can perform bi-directional SSL auth with the hub (verification based on CA).

If the singleuser host has a name or ip other than localhost, an appropriate alternative name(s) must be passed for ssl verification by the hub to work. For example, for Jupyter hosts with an IP of 10.10.10.10 or DNS name of jupyter.example.com, this would be:

alt_names=["IP:10.10.10.10"]
alt_names=["DNS:jupyter.example.com"]

respectively. The list can contain both the IP and DNS names to refer to the host by either IP or DNS name (note the default_names below).

debug c.Spawner.debug = Bool(False)

Enable debug-logging of the single-user server

default_url c.Spawner.default_url = Unicode('')

The URL the single-user server should start in.

{username} will be expanded to the user’s username

Example uses:

  • You can set notebook_dir to / and default_url to /tree/home/{username} to allow people to navigate the whole filesystem from their notebook server, but still start in their home directory.

  • Start with /notebooks instead of /tree if default_url points to a notebook instead of a directory.

  • You can set this to /lab to have JupyterLab start by default, rather than Jupyter Notebook.
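
The first and third use cases above as config sketches:

c.Spawner.notebook_dir = '/'
c.Spawner.default_url = '/tree/home/{username}'

or, to start JupyterLab by default:

c.Spawner.default_url = '/lab'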

disable_user_config c.Spawner.disable_user_config = Bool(False)

Disable per-user configuration of single-user servers.

When starting the user’s single-user server, any config file found in the user’s $HOME directory will be ignored.

Note: a user could circumvent this if the user modifies their Python environment, such as when they have their own conda environments / virtualenvs / containers.

env_keep c.Spawner.env_keep = List()

List of environment variables for the single-user server to inherit from the JupyterHub process.

This list is used to ensure that sensitive information in the JupyterHub process’s environment (such as CONFIGPROXY_AUTH_TOKEN) is not passed to the single-user server’s process.

environment c.Spawner.environment = Dict()

Extra environment variables to set for the single-user server’s process.

Environment variables that end up in the single-user server’s process come from 3 sources:
  • This environment configurable

  • The JupyterHub process’ environment variables that are listed in env_keep

  • Variables to establish contact between the single-user notebook and the hub (such as JUPYTERHUB_API_TOKEN)

The environment configurable should be set by JupyterHub administrators to add installation specific environment variables. It is a dict where the key is the name of the environment variable, and the value can be a string or a callable. If it is a callable, it will be called with one parameter (the spawner instance), and should return a string fairly quickly (no blocking operations please!).

Note that the spawner class’ interface is not guaranteed to be exactly same across upgrades, so if you are using the callable take care to verify it continues to work after upgrades!

Changed in version 1.2: environment from this configuration has highest priority, allowing override of ‘default’ env variables, such as JUPYTERHUB_API_URL.
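
A sketch mixing a static value and a callable (the variable names are hypothetical):

c.Spawner.environment = {
    'ORG_NAME': 'example-org',                     # plain string value
    'NB_USER': lambda spawner: spawner.user.name,  # callable, receives the spawner instance
}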

format_string(s)

Render a Python format string

Uses Spawner.template_namespace() to populate format namespace.

Parameters

s (str) – Python format-string to be formatted.

Returns

Formatted string, rendered

Return type

str

get_args()

Return the arguments to be passed after self.cmd

Doesn’t expect shell expansion to happen.

Changed in version 2.0: Prior to 2.0, JupyterHub passed some options such as ip, port, and default_url to the command-line. JupyterHub 2.0 no longer builds any CLI args other than Spawner.cmd and Spawner.args. All values that come from jupyterhub itself will be passed via environment variables.

get_env()

Return the environment dict to use for the Spawner.

This applies things like env_keep, anything defined in Spawner.environment, and adds the API token to the env.

When overriding in subclasses, subclasses must call super().get_env(), extend the returned dict and return it.

Use this to access the env in Spawner.start to allow extension in subclasses.

get_state()

Save state of spawner into database.

A black box of extra state for custom spawners. The returned value of this is passed to load_state.

Subclasses should call super().get_state(), augment the state returned from there, and return that state.

Returns

state – a JSONable dict of state

Return type

dict

http_timeout c.Spawner.http_timeout = Int(30)

Timeout (in seconds) before giving up on a spawned HTTP server

Once a server has successfully been spawned, this is the amount of time we wait before assuming that the server is unable to accept connections.

hub_connect_url c.Spawner.hub_connect_url = Unicode(None)

The URL the single-user server should connect to the Hub.

If the Hub URL set in your JupyterHub config is not reachable from spawned notebooks, you can set a different URL with this config.

Leave this set to None if you don't need to change the URL.

ip c.Spawner.ip = Unicode('127.0.0.1')

The IP address (or hostname) the single-user server should listen on.

Usually either ‘127.0.0.1’ (default) or ‘0.0.0.0’.

The JupyterHub proxy implementation should be able to send packets to this interface.

Subclasses which launch remotely or in containers should override the default to ‘0.0.0.0’.

Changed in version 2.0: Default changed to ‘127.0.0.1’, from ‘’. In most cases, this does not result in a change in behavior, as ‘’ was interpreted as ‘unspecified’, which used the subprocesses’ own default, itself usually ‘127.0.0.1’.

mem_guarantee c.Spawner.mem_guarantee = ByteSpecification(None)

Minimum number of bytes a single-user notebook server is guaranteed to have available.

Allows the following suffixes:
  • K -> Kilobytes

  • M -> Megabytes

  • G -> Gigabytes

  • T -> Terabytes

This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner, LocalProcessSpawner, does not implement this support. A custom spawner must add support for this setting for it to be enforced.

mem_limit c.Spawner.mem_limit = ByteSpecification(None)

Maximum number of bytes a single-user notebook server is allowed to use.

Allows the following suffixes:
  • K -> Kilobytes

  • M -> Megabytes

  • G -> Gigabytes

  • T -> Terabytes

If the single user server tries to allocate more memory than this, it will fail. There is no guarantee that the single-user notebook server will be able to allocate this much memory - only that it can not allocate more than this.

This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner, LocalProcessSpawner, does not implement this support. A custom spawner must add support for this setting for it to be enforced.
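
For example (remember that the default LocalProcessSpawner enforces neither setting):

c.Spawner.mem_limit = '2G'
c.Spawner.cpu_limit = 1.5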

async move_certs(paths)

Takes certificate paths and makes them available to the notebook server

Parameters

paths (dict) – a list of paths for key, cert, and CA. These paths will be resolvable and readable by the Hub process, but not necessarily by the notebook server.

Returns

a list (potentially altered) of paths for key, cert, and CA.

These paths should be resolvable and readable by the notebook server to be launched.

Return type

dict

.move_certs is called after certs for the singleuser notebook have been created by create_certs.

By default, certs are created in a standard, central location defined by internal_certs_location. For a local, single-host deployment of JupyterHub, this should suffice. If, however, singleuser notebooks are spawned on other hosts, .move_certs should be overridden to move these files appropriately. This could mean using scp to copy them to another host, moving them to a volume mounted in a docker container, or exporting them as a secret in kubernetes.
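
A spawner that launches servers on another host might override it roughly like this (a sketch; copy_to_server_host is a hypothetical helper for your deployment):

from jupyterhub.spawner import Spawner

class MyRemoteSpawner(Spawner):
    async def move_certs(self, paths):
        # copy the key, cert, and CA to wherever the single-user server runs,
        # and return the paths as the notebook server will see them
        remote_paths = {}
        for name, local_path in paths.items():
            remote_paths[name] = copy_to_server_host(local_path)  # hypothetical helper
        return remote_paths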

notebook_dir c.Spawner.notebook_dir = Unicode('')

Path to the notebook directory for the single-user server.

The user sees a file listing of this directory when the notebook interface is started. The current interface does not easily allow browsing beyond the subdirectories in this directory’s tree.

~ will be expanded to the home directory of the user, and {username} will be replaced with the name of the user.

Note that this does not prevent users from accessing files outside of this path! They can do so with many other means.
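
For example, to start each user in a notebooks subdirectory of their home directory:

c.Spawner.notebook_dir = '~/notebooks'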

oauth_roles c.Spawner.oauth_roles = Union()

Allowed roles for oauth tokens.

This sets the maximum and default roles assigned to oauth tokens issued by a single-user server’s oauth client (i.e. tokens stored in browsers after authenticating with the server), defining what actions the server can take on behalf of logged-in users.

Default is an empty list, meaning minimal permissions to identify users, no actions can be taken on their behalf.

options_form c.Spawner.options_form = Union()

An HTML form for options a user can specify on launching their server.

The surrounding <form> element and the submit button are already provided.

For example:

Set your key:
<input name="key" val="default_key"></input>
<br>
Choose a letter:
<select name="letter" multiple="true">
  <option value="A">The letter A</option>
  <option value="B">The letter B</option>
</select>

The data from this form submission will be passed on to your spawner in self.user_options

Instead of a form snippet string, this could also be a callable that takes as one parameter the current spawner instance and returns a string. The callable will be called asynchronously if it returns a future, rather than a str. Note that the interface of the spawner class is not deemed stable across versions, so using this functionality might cause your JupyterHub upgrades to break.

options_from_form c.Spawner.options_from_form = Callable()

Interpret HTTP form data

Form data will always arrive as a dict of lists of strings. Override this function to understand single-values, numbers, etc.

This should coerce form data into the structure expected by self.user_options, which must be a dict, and should be JSON-serializeable, though it can contain bytes in addition to standard JSON data types.

This method should not have any side effects. Any handling of user_options should be done in .start() to ensure consistent behavior across servers spawned via the API and form submission page.

Instances will receive this data on self.user_options, after passing through this function, prior to Spawner.start.

Changed in version 1.0: user_options are persisted in the JupyterHub database to be reused on subsequent spawns if no options are given. user_options is serialized to JSON as part of this persistence (with additional support for bytes in case of uploaded file data), and any non-bytes non-jsonable values will be replaced with None if the user_options are re-used.
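
For example, a parser matching the form snippet above might look like this (a sketch; the field names follow that example):

def parse_form(formdata):
    # formdata is a dict of lists of strings, e.g. {'key': ['abc'], 'letter': ['A', 'B']}
    options = {}
    options['key'] = formdata.get('key', [''])[0]
    options['letters'] = formdata.get('letter', [])
    return options

c.Spawner.options_from_form = parse_form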

async poll()

Check if the single-user process is running

Returns

None if single-user process is running. Integer exit status (0 if unknown), if it is not running.

State transitions, behavior, and return response:

  • If the Spawner has not been initialized (neither loaded state, nor called start), it should behave as if it is not running (status=0).

  • If the Spawner has not finished starting, it should behave as if it is running (status=None).

Design assumptions about when poll may be called:

  • On Hub launch: poll may be called before start when state is loaded on Hub launch. poll should return exit status 0 (unknown) if the Spawner has not been initialized via load_state or start.

  • If .start() is async: poll may be called during any yielded portions of the start process. poll should return None when start is yielded, indicating that the start process has not yet completed.
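
A minimal sketch of a poll implementation following these rules (server_is_running is a hypothetical check against your backend):

from jupyterhub.spawner import Spawner

class MySpawner(Spawner):
    # self.server_id is an illustrative handle set by start() and load_state()
    async def poll(self):
        if not getattr(self, 'server_id', None):
            return 0     # never started / not initialized: behave as if not running
        if server_is_running(self.server_id):  # hypothetical backend check
            return None  # still running
        return 0         # stopped; exit status unknown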

poll_interval c.Spawner.poll_interval = Int(30)

Interval (in seconds) on which to poll the spawner for single-user server’s status.

At every poll interval, each spawner’s .poll method is called, which checks if the single-user server is still running. If it isn’t running, then JupyterHub modifies its own state accordingly and removes appropriate routes from the configurable proxy.

port c.Spawner.port = Int(0)

The port for single-user servers to listen on.

Defaults to 0, which uses a randomly allocated port number each time.

If set to a non-zero value, all Spawners will use the same port, which only makes sense if each server is on a different address, e.g. in containers.

New in version 0.7.

post_stop_hook c.Spawner.post_stop_hook = Any(None)

An optional hook function that you can implement to do work after the spawner stops.

This can be set independent of any concrete spawner implementation.
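
For example (a sketch mirroring the pre_spawn_hook example below):

def my_post_stop_hook(spawner):
    # called after the spawner stops; spawner.user.name identifies the user
    spawner.log.info("Server for %s has stopped", spawner.user.name)

c.Spawner.post_stop_hook = my_post_stop_hook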

pre_spawn_hook c.Spawner.pre_spawn_hook = Any(None)

An optional hook function that you can implement to do some bootstrapping work before the spawner starts. For example, create a directory for your user or load initial content.

This can be set independent of any concrete spawner implementation.

This may be a coroutine.

Example:

from subprocess import check_call
def my_hook(spawner):
    username = spawner.user.name
    check_call(['./examples/bootstrap-script/bootstrap.sh', username])

c.Spawner.pre_spawn_hook = my_hook
ssl_alt_names c.Spawner.ssl_alt_names = List()

List of SSL alt names

May be set in config if all spawners should have the same value(s), or set at runtime by Spawners that know their names.

ssl_alt_names_include_local c.Spawner.ssl_alt_names_include_local = Bool(True)

Whether to include DNS:localhost, IP:127.0.0.1 in alt names

async start()

Start the single-user server

Returns

the (ip, port) where the Hub can connect to the server.

Return type

(str, int)

Changed in version 0.7: Return ip, port instead of setting on self.user.server directly.

start_timeout c.Spawner.start_timeout = Int(60)

Timeout (in seconds) before giving up on starting of single-user server.

This is the timeout for start to return, not the timeout for the server to respond. Callers of spawner.start will assume that startup has failed if it takes longer than this. start should return when the server process is started and its location is known.

async stop(now=False)

Stop the single-user server

If now is False (default), shutdown the server as gracefully as possible, e.g. starting with SIGINT, then SIGTERM, then SIGKILL. If now is True, terminate the server immediately.

The coroutine should return when the single-user server process is no longer running.

Must be a coroutine.

template_namespace()

Return the template namespace for format-string formatting.

Currently used on default_url and notebook_dir.

Subclasses may add items to the available namespace.

The default implementation includes:

{
  'username': user.name,
  'base_url': users_base_url,
}
Returns

namespace for string formatting.

Return type

ns (dict)

LocalProcessSpawner
class jupyterhub.spawner.LocalProcessSpawner(**kwargs)

A Spawner that uses subprocess.Popen to start single-user servers as local processes.

Requires local UNIX users matching the authenticated users to exist. Does not work on Windows.

This is the default spawner for JupyterHub.

Note: This spawner does not implement CPU / memory guarantees and limits.

args c.LocalProcessSpawner.args = List()

Extra arguments to be passed to the single-user server.

Some spawners allow shell-style expansion here, allowing you to use environment variables here. Most, including the default, do not. Consult the documentation for your spawner to verify!

auth_state_hook c.LocalProcessSpawner.auth_state_hook = Any(None)

An optional hook function that you can implement to pass auth_state to the spawner after it has been initialized but before it starts. The auth_state dictionary may be set by the .authenticate() method of the authenticator. This hook enables you to pass some or all of that information to your spawner.

Example:

def userdata_hook(spawner, auth_state):
    spawner.userdata = auth_state["userdata"]

c.Spawner.auth_state_hook = userdata_hook
cmd c.LocalProcessSpawner.cmd = Command()

The command used for starting the single-user server.

Provide either a string or a list containing the path to the startup script command. Extra arguments, other than this path, should be provided via args.

This is usually set if you want to start the single-user server in a different python environment (with virtualenv/conda) than JupyterHub itself.

Some spawners allow shell-style expansion here, allowing you to use environment variables. Most, including the default, do not. Consult the documentation for your spawner to verify!

consecutive_failure_limit c.LocalProcessSpawner.consecutive_failure_limit = Int(0)

Maximum number of consecutive failures to allow before shutting down JupyterHub.

This helps JupyterHub recover from a certain class of problem preventing launch in contexts where the Hub is automatically restarted (e.g. systemd, docker, kubernetes).

A limit of 0 means no limit and consecutive failures will not be tracked.

cpu_guarantee c.LocalProcessSpawner.cpu_guarantee = Float(None)

Minimum number of cpu-cores a single-user notebook server is guaranteed to have available.

If this value is set to 0.5, allows use of 50% of one CPU. If this value is set to 2, allows use of up to 2 CPUs.

This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner, LocalProcessSpawner, does not implement this support. A custom spawner must add support for this setting for it to be enforced.

cpu_limit c.LocalProcessSpawner.cpu_limit = Float(None)

Maximum number of cpu-cores a single-user notebook server is allowed to use.

If this value is set to 0.5, allows use of 50% of one CPU. If this value is set to 2, allows use of up to 2 CPUs.

The single-user notebook server will never be scheduled by the kernel to use more cpu-cores than this. There is no guarantee that it can access this many cpu-cores.

This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner, LocalProcessSpawner, does not implement this support. A custom spawner must add support for this setting for it to be enforced.

debug c.LocalProcessSpawner.debug = Bool(False)

Enable debug-logging of the single-user server

default_url c.LocalProcessSpawner.default_url = Unicode('')

The URL the single-user server should start in.

{username} will be expanded to the user’s username

Example uses:

  • You can set notebook_dir to / and default_url to /tree/home/{username} to allow people to navigate the whole filesystem from their notebook server, but still start in their home directory.

  • Start with /notebooks instead of /tree if default_url points to a notebook instead of a directory.

  • You can set this to /lab to have JupyterLab start by default, rather than Jupyter Notebook.
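
For example, to implement the first and last example uses above:

# browse the whole filesystem, but start in the user's home directory
c.Spawner.notebook_dir = '/'
c.Spawner.default_url = '/tree/home/{username}'

# or simply start JupyterLab by default
# c.Spawner.default_url = '/lab'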

disable_user_config c.LocalProcessSpawner.disable_user_config = Bool(False)

Disable per-user configuration of single-user servers.

When starting the user’s single-user server, any config file found in the user’s $HOME directory will be ignored.

Note: a user could circumvent this if the user modifies their Python environment, such as when they have their own conda environments / virtualenvs / containers.

env_keep c.LocalProcessSpawner.env_keep = List()

List of environment variables for the single-user server to inherit from the JupyterHub process.

This list is used to ensure that sensitive information in the JupyterHub process’s environment (such as CONFIGPROXY_AUTH_TOKEN) is not passed to the single-user server’s process.

environment c.LocalProcessSpawner.environment = Dict()

Extra environment variables to set for the single-user server’s process.

Environment variables that end up in the single-user server’s process come from 3 sources:
  • This environment configurable

  • The JupyterHub process’ environment variables that are listed in env_keep

  • Variables to establish contact between the single-user notebook and the hub (such as JUPYTERHUB_API_TOKEN)

The environment configurable should be set by JupyterHub administrators to add installation specific environment variables. It is a dict where the key is the name of the environment variable, and the value can be a string or a callable. If it is a callable, it will be called with one parameter (the spawner instance), and should return a string fairly quickly (no blocking operations please!).

Note that the spawner class’ interface is not guaranteed to be exactly same across upgrades, so if you are using the callable take care to verify it continues to work after upgrades!

Changed in version 1.2: environment from this configuration has highest priority, allowing override of ‘default’ env variables, such as JUPYTERHUB_API_URL.
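
For example, mixing a static value and a callable (a sketch; the variable names are illustrative):

def data_dir_for(spawner):
    # called with the spawner instance; must return a string quickly
    return '/data/' + spawner.user.name

c.Spawner.environment = {
    'MY_STATIC_SETTING': 'value',
    'MY_DATA_DIR': data_dir_for,
}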

http_timeout c.LocalProcessSpawner.http_timeout = Int(30)

Timeout (in seconds) before giving up on a spawned HTTP server

Once a server has successfully been spawned, this is the amount of time we wait before assuming that the server is unable to accept connections.

hub_connect_url c.LocalProcessSpawner.hub_connect_url = Unicode(None)

The URL the single-user server should use to connect to the Hub.

If the Hub URL set in your JupyterHub config is not reachable from spawned notebooks, you can set a different URL with this config.

Leave this as None if you do not need to change the URL.

interrupt_timeout c.LocalProcessSpawner.interrupt_timeout = Int(10)

Seconds to wait for single-user server process to halt after SIGINT.

If the process has not exited cleanly after this many seconds, a SIGTERM is sent.

ip c.LocalProcessSpawner.ip = Unicode('127.0.0.1')

The IP address (or hostname) the single-user server should listen on.

Usually either ‘127.0.0.1’ (default) or ‘0.0.0.0’.

The JupyterHub proxy implementation should be able to send packets to this interface.

Subclasses which launch remotely or in containers should override the default to ‘0.0.0.0’.

Changed in version 2.0: Default changed to ‘127.0.0.1’, from ‘’. In most cases, this does not result in a change in behavior, as ‘’ was interpreted as ‘unspecified’, which used the subprocesses’ own default, itself usually ‘127.0.0.1’.

kill_timeout c.LocalProcessSpawner.kill_timeout = Int(5)

Seconds to wait for process to halt after SIGKILL before giving up.

If the process does not exit cleanly after this many seconds of SIGKILL, it becomes a zombie process. The hub process will log a warning and then give up.

mem_guarantee c.LocalProcessSpawner.mem_guarantee = ByteSpecification(None)

Minimum number of bytes a single-user notebook server is guaranteed to have available.

Allows the following suffixes:
  • K -> Kilobytes

  • M -> Megabytes

  • G -> Gigabytes

  • T -> Terabytes

This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner, LocalProcessSpawner, does not implement this support. A custom spawner must add support for this setting for it to be enforced.

mem_limit c.LocalProcessSpawner.mem_limit = ByteSpecification(None)

Maximum number of bytes a single-user notebook server is allowed to use.

Allows the following suffixes:
  • K -> Kilobytes

  • M -> Megabytes

  • G -> Gigabytes

  • T -> Terabytes

If the single user server tries to allocate more memory than this, it will fail. There is no guarantee that the single-user notebook server will be able to allocate this much memory - only that it can not allocate more than this.

This is a configuration setting. Your spawner must implement support for the limit to work. The default spawner, LocalProcessSpawner, does not implement this support. A custom spawner must add support for this setting for it to be enforced.

notebook_dir c.LocalProcessSpawner.notebook_dir = Unicode('')

Path to the notebook directory for the single-user server.

The user sees a file listing of this directory when the notebook interface is started. The current interface does not easily allow browsing beyond the subdirectories in this directory’s tree.

~ will be expanded to the home directory of the user, and {username} will be replaced with the name of the user.

Note that this does not prevent users from accessing files outside of this path! They can do so with many other means.

oauth_roles c.LocalProcessSpawner.oauth_roles = Union()

Allowed roles for oauth tokens.

This sets the maximum and default roles assigned to oauth tokens issued by a single-user server’s oauth client (i.e. tokens stored in browsers after authenticating with the server), defining what actions the server can take on behalf of logged-in users.

Default is an empty list, meaning minimal permissions to identify users, no actions can be taken on their behalf.

options_form c.LocalProcessSpawner.options_form = Union()

An HTML form for options a user can specify on launching their server.

The surrounding <form> element and the submit button are already provided.

For example:

Set your key:
<input name="key" val="default_key"></input>
<br>
Choose a letter:
<select name="letter" multiple="true">
  <option value="A">The letter A</option>
  <option value="B">The letter B</option>
</select>

The data from this form submission will be passed on to your spawner in self.user_options

Instead of a form snippet string, this could also be a callable that takes as one parameter the current spawner instance and returns a string. The callable will be called asynchronously if it returns a future, rather than a str. Note that the interface of the spawner class is not deemed stable across versions, so using this functionality might cause your JupyterHub upgrades to break.

options_from_form c.LocalProcessSpawner.options_from_form = Callable()

Interpret HTTP form data

Form data will always arrive as a dict of lists of strings. Override this function to understand single-values, numbers, etc.

This should coerce form data into the structure expected by self.user_options, which must be a dict, and should be JSON-serializeable, though it can contain bytes in addition to standard JSON data types.

This method should not have any side effects. Any handling of user_options should be done in .start() to ensure consistent behavior across servers spawned via the API and form submission page.

Instances will receive this data on self.user_options, after passing through this function, prior to Spawner.start.

Changed in version 1.0: user_options are persisted in the JupyterHub database to be reused on subsequent spawns if no options are given. user_options is serialized to JSON as part of this persistence (with additional support for bytes in case of uploaded file data), and any non-bytes non-jsonable values will be replaced with None if the user_options are re-used.

poll_interval c.LocalProcessSpawner.poll_interval = Int(30)

Interval (in seconds) on which to poll the spawner for single-user server’s status.

At every poll interval, each spawner’s .poll method is called, which checks if the single-user server is still running. If it isn’t running, then JupyterHub modifies its own state accordingly and removes appropriate routes from the configurable proxy.

popen_kwargs c.LocalProcessSpawner.popen_kwargs = Dict()

Extra keyword arguments to pass to Popen

when spawning single-user servers.

For example:

popen_kwargs = dict(shell=True)
port c.LocalProcessSpawner.port = Int(0)

The port for single-user servers to listen on.

Defaults to 0, which uses a randomly allocated port number each time.

If set to a non-zero value, all Spawners will use the same port, which only makes sense if each server is on a different address, e.g. in containers.

New in version 0.7.

post_stop_hook c.LocalProcessSpawner.post_stop_hook = Any(None)

An optional hook function that you can implement to do work after the spawner stops.

This can be set independent of any concrete spawner implementation.

pre_spawn_hook c.LocalProcessSpawner.pre_spawn_hook = Any(None)

An optional hook function that you can implement to do some bootstrapping work before the spawner starts. For example, create a directory for your user or load initial content.

This can be set independent of any concrete spawner implementation.

This may be a coroutine.

Example:

from subprocess import check_call
def my_hook(spawner):
    username = spawner.user.name
    check_call(['./examples/bootstrap-script/bootstrap.sh', username])

c.Spawner.pre_spawn_hook = my_hook
shell_cmd c.LocalProcessSpawner.shell_cmd = Command()

Specify a shell command to launch.

The single-user command will be appended to this list, so it should end with -c (for bash) or the equivalent.

For example:

c.LocalProcessSpawner.shell_cmd = ['bash', '-l', '-c']

to launch with a bash login shell, which would set up the user’s own complete environment.

Warning

Using shell_cmd gives users control over PATH, etc., which could change what the jupyterhub-singleuser launch command does. Only use this for trusted users.

ssl_alt_names c.LocalProcessSpawner.ssl_alt_names = List()

List of SSL alt names

May be set in config if all spawners should have the same value(s), or set at runtime by Spawners that know their names.

ssl_alt_names_include_local c.LocalProcessSpawner.ssl_alt_names_include_local = Bool(True)

Whether to include DNS:localhost, IP:127.0.0.1 in alt names

start_timeout c.LocalProcessSpawner.start_timeout = Int(60)

Timeout (in seconds) before giving up on starting of single-user server.

This is the timeout for start to return, not the timeout for the server to respond. Callers of spawner.start will assume that startup has failed if it takes longer than this. start should return when the server process is started and its location is known.

term_timeout c.LocalProcessSpawner.term_timeout = Int(5)

Seconds to wait for single-user server process to halt after SIGTERM.

If the process does not exit cleanly after this many seconds of SIGTERM, a SIGKILL is sent.

Proxies
Module: jupyterhub.proxy

API for JupyterHub’s proxy.

Custom proxy implementations can subclass Proxy and register in JupyterHub config:

from mymodule import MyProxy
c.JupyterHub.proxy_class = MyProxy

Route Specification:

  • A routespec is a URL prefix ([host]/path/), e.g. ‘host.tld/path/’ for host-based routing or ‘/path/’ for default routing.

  • Route paths should be normalized to always start and end with ‘/’

Proxy
class jupyterhub.proxy.Proxy(**kwargs)

Base class for configurable proxies that JupyterHub can use.

A proxy implementation should subclass this and must define the following methods:

  • get_all_routes() returns a dictionary of all JupyterHub-related routes

  • add_route() adds a route

  • delete_route() deletes a route

In addition to these, the following method(s) may need to be implemented:

  • start() start the proxy, if it should be launched by the Hub instead of externally managed. If the proxy is externally managed, it should set should_start to False.

  • stop() stop the proxy. Only used if start() is also used.

And the following method(s) are optional, but can be provided:

  • get_route() gets a single route. There is a default implementation that extracts data from get_all_routes(), but implementations may choose to provide a more efficient implementation of fetching a single route.
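
A minimal skeleton of a custom proxy might look like this (a sketch only; how routes are stored and whether the Hub starts the proxy depend on your deployment):

from jupyterhub.proxy import Proxy

class MyProxy(Proxy):
    should_start = False  # in this sketch, the proxy is managed externally

    async def add_route(self, routespec, target, data):
        ...  # tell the proxy to forward routespec to target, remembering data

    async def delete_route(self, routespec):
        ...  # remove the route from the proxy

    async def get_all_routes(self):
        # return {routespec: {'routespec': ..., 'target': ..., 'data': ...}}
        return {}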

async add_all_services(service_dict)

Update the proxy table from the database.

Used when loading up a new proxy.

async add_all_users(user_dict)

Update the proxy table from the database.

Used when loading up a new proxy.

add_hub_route(hub)

Add the default route for the Hub

async add_route(routespec, target, data)

Add a route to the proxy.

Subclasses must define this method

Parameters
  • routespec (str) – A URL prefix ([host]/path/) for which this route will be matched, e.g. host.name/path/

  • target (str) – A full URL that will be the target of this route.

  • data (dict) – A JSONable dict that will be associated with this route, and will be returned when retrieving information about this route.

Will raise an appropriate Exception (FIXME: find what?) if the route could not be added.

The proxy implementation should also have a way to associate the fact that a route came from JupyterHub.

async add_service(service, client=None)

Add a service’s server to the proxy table.

async add_user(user, server_name='', client=None)

Add a user’s server to the proxy table.

async check_routes(user_dict, service_dict, routes=None)

Check that all users are properly routed on the proxy.

async delete_route(routespec)

Delete a route with a given routespec if it exists.

Subclasses must define this method

async delete_service(service, client=None)

Remove a service’s server from the proxy table.

async delete_user(user, server_name='')

Remove a user’s server from the proxy table.

extra_routes c.Proxy.extra_routes = Dict()

Additional routes to be maintained in the proxy.

A dictionary with a route specification as key, and a URL as target. The hub will ensure this route is present in the proxy.

If the hub is running in host based mode (with JupyterHub.subdomain_host set), the routespec must have a domain component (example.com/my-url/). If the hub is not running in host based mode, the routespec must not have a domain component (/my-url/).

Helpful when the hub is running in API-only mode.
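
For example (the target URL is illustrative):

c.Proxy.extra_routes = {
    '/my-url/': 'http://127.0.0.1:9000',
}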

async get_all_routes()

Fetch and return all the routes associated by JupyterHub from the proxy.

Subclasses must define this method

Should return a dictionary of routes, where the keys are routespecs and each value is a dict of the form:

{
  'routespec': the route specification ([host]/path/)
  'target': the target host URL (proto://host) for this route
  'data': the attached data dict for this route (as specified in add_route)
}
async get_route(routespec)

Return the route info for a given routespec.

Parameters

routespec (str) – A URI that was used to add this route, e.g. host.tld/path/

Returns

dict with the following keys:

'routespec': The normalized route specification passed in to add_route
    ([host]/path/)
'target': The target host for this route (proto://host)
'data': The arbitrary data dict that was passed in by JupyterHub when adding this
        route.

None: if there are no routes matching the given routespec

Return type

result (dict)

should_start c.Proxy.should_start = Bool(True)

Should the Hub start the proxy

If True, the Hub will start the proxy and stop it. Set to False if the proxy is managed externally, such as by systemd, docker, or another service manager.

start()

Start the proxy.

Will be called during startup if should_start is True.

Subclasses must define this method if the proxy is to be started by the Hub

stop()

Stop the proxy.

Will be called during teardown if should_start is True.

Subclasses must define this method if the proxy is to be started by the Hub

validate_routespec(routespec)

Validate a routespec

  • Checks host value vs host-based routing.

  • Ensures trailing slash on path.

ConfigurableHTTPProxy
class jupyterhub.proxy.ConfigurableHTTPProxy(**kwargs)

Proxy implementation for the default configurable-http-proxy.

This is the default proxy implementation for running the nodejs proxy configurable-http-proxy.

If the proxy should not be run as a subprocess of the Hub, (e.g. in a separate container), set:

c.ConfigurableHTTPProxy.should_start = False
api_url c.ConfigurableHTTPProxy.api_url = Unicode('')

The ip (or hostname) of the proxy’s API endpoint

auth_token c.ConfigurableHTTPProxy.auth_token = Unicode('')

The Proxy auth token

Loaded from the CONFIGPROXY_AUTH_TOKEN env variable by default.
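
For an externally managed configurable-http-proxy, a typical configuration sketch is (values are illustrative):

c.ConfigurableHTTPProxy.should_start = False
c.ConfigurableHTTPProxy.auth_token = 'super-secret-token'   # must match the proxy's token
c.ConfigurableHTTPProxy.api_url = 'http://proxy-host:8001'  # the proxy's API endpoint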

check_running_interval c.ConfigurableHTTPProxy.check_running_interval = Int(5)

Interval (in seconds) at which to check if the proxy is running.

command c.ConfigurableHTTPProxy.command = Command()

The command to start the proxy

concurrency c.ConfigurableHTTPProxy.concurrency = Int(10)

The number of requests allowed to be concurrently outstanding to the proxy

Limiting this number avoids potential timeout errors by sending too many requests to update the proxy at once

debug c.ConfigurableHTTPProxy.debug = Bool(False)

Add debug-level logging to the Proxy.

extra_routes c.ConfigurableHTTPProxy.extra_routes = Dict()

Additional routes to be maintained in the proxy.

A dictionary with a route specification as key, and a URL as target. The hub will ensure this route is present in the proxy.

If the hub is running in host based mode (with JupyterHub.subdomain_host set), the routespec must have a domain component (example.com/my-url/). If the hub is not running in host based mode, the routespec must not have a domain component (/my-url/).

Helpful when the hub is running in API-only mode.

pid_file c.ConfigurableHTTPProxy.pid_file = Unicode('jupyterhub-proxy.pid')

File in which to write the PID of the proxy process.

should_start c.ConfigurableHTTPProxy.should_start = Bool(True)

Should the Hub start the proxy

If True, the Hub will start the proxy and stop it. Set to False if the proxy is managed externally, such as by systemd, docker, or another service manager.

Users
Module: jupyterhub.user
UserDict
class jupyterhub.user.UserDict(db_factory, settings)

Like defaultdict, but for users

Users can be retrieved by:

  • integer database id

  • orm.User object

  • username str

A User wrapper object is always returned.

This dict contains at least all active users, but not necessarily all users in the database.

Checking key in userdict returns whether an item is already in the cache, not whether it is in the database.

Changed in version 1.2: 'username' in userdict pattern is now supported

add(orm_user)

Add a user to the UserDict

count_active_users()

Count the number of user servers that are active/pending/ready

Returns dict with counts of active/pending/ready servers

delete(key)

Delete a user from the cache and the database

get(key, default=None)

Retrieve a User object if it can be found, else default

Lookup can be by User object, id, or name

Changed in version 1.2: get() accesses the database instead of just the cache by integer id, so is equivalent to catching KeyErrors on attempted lookup.

User
class jupyterhub.user.User(orm_user, settings=None, db=None)

High-level wrapper around an orm.User object

name

The user’s name

server

The user’s Server data object if running, None otherwise. Has ip, port attributes.

spawner

The user’s Spawner instance.

property escaped_name

My name, escaped for use in URLs, cookies, etc.

Services
Module: jupyterhub.services.service

A service is a process that talks to JupyterHub.

Types of services:
Managed:
  • managed by JupyterHub (always subprocess, no custom Spawners)

  • always a long-running process

  • managed services are restarted automatically if they exit unexpectedly

Unmanaged:
  • managed by external service (docker, systemd, etc.)

  • do not need to be long-running processes, or processes at all

URL: needs a route added to the proxy.
  • Public route will always be /services/service-name

  • url specified in config

  • if port is 0, Hub will select a port

API access:
  • admin: tokens will have admin-access to the API

  • not admin: tokens will only have non-admin access (not much they can do other than defer to Hub for auth)

An externally managed service running on a URL:

{
    'name': 'my-service',
    'url': 'https://host:8888',
    'admin': True,
    'api_token': 'super-secret',
}

A hub-managed service with no URL:

{
    'name': 'cull-idle',
    'command': ['python', '/path/to/cull-idle'],
    'admin': True,
}
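
Either kind of service is registered with the Hub via the services configuration, e.g. combining the two examples above:

c.JupyterHub.services = [
    {
        'name': 'my-service',
        'url': 'https://host:8888',
        'admin': True,
        'api_token': 'super-secret',
    },
    {
        'name': 'cull-idle',
        'command': ['python', '/path/to/cull-idle'],
        'admin': True,
    },
]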
Service
class jupyterhub.services.service.Service(**kwargs)

An object wrapping a service specification for Hub API consumers.

A service has inputs:

  • name: str

    the name of the service

  • admin: bool(False)

    whether the service should have administrative privileges

  • url: str (None)

    The URL where the service is/should be. If specified, the service will be added to the proxy at /services/:name

  • oauth_no_confirm: bool(False)

    Whether this service should be allowed to complete oauth with logged-in users without prompting for confirmation.

If a service is to be managed by the Hub, it has a few extra options:

  • command: (str/Popen list)

    Command for JupyterHub to spawn the service. Only use this if the service should be a subprocess. If command is not specified, the service is assumed to be managed externally.

  • environment: dict

    Additional environment variables for the service.

  • user: str

    The name of a system user to become. If unspecified, run as the same user as the Hub.

property kind

The name of the kind of service as a string

  • ‘managed’ for managed services

  • ‘external’ for external services

property managed

Am I managed by the Hub?

Services Authentication
Module: jupyterhub.services.auth

Authenticating services with JupyterHub.

Tokens are sent to the Hub for verification. The Hub replies with a JSON model describing the authenticated user.

This contains two levels of authentication:

  • HubOAuth - Use OAuth 2 to authenticate browsers with the Hub. This should be used for any service that should respond to browser requests (i.e. most services).

  • HubAuth - token-only authentication, for a service that only needs to handle token-authenticated API requests

The Auth classes (HubAuth, HubOAuth) can be used in any application, even outside tornado. They contain reference implementations of talking to the Hub API to resolve a token to a user.

The Authenticated classes (HubAuthenticated, HubOAuthenticated) are mixins for tornado handlers that should authenticate with the Hub.

If you are using OAuth, you will also need to register an oauth callback handler to complete the oauth process. A tornado implementation is provided in HubOAuthCallbackHandler.

HubAuth
class jupyterhub.services.auth.HubAuth(**kwargs)

A class for authenticating with JupyterHub

This can be used by any application.

Use this base class only for direct, token-authenticated applications (web APIs). For applications that support direct visits from browsers, use HubOAuth to enable OAuth redirect-based authentication.

If using tornado, use via HubAuthenticated mixin. If using manually, use the .user_for_token(token_value) method to identify the user owning a given token.

The following config must be set:

  • api_token (token for authenticating with JupyterHub API), fetched from the JUPYTERHUB_API_TOKEN env by default.

The following config MAY be set:

  • api_url: the base URL of the Hub’s internal API, fetched from JUPYTERHUB_API_URL by default.

  • cookie_cache_max_age: the number of seconds responses from the Hub should be cached.

  • login_url (the public /hub/login URL of the Hub).
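
Outside of tornado, a minimal sketch of manual token verification (the environment variable is read by default, but is shown explicitly here):

import os
from jupyterhub.services.auth import HubAuth

auth = HubAuth(api_token=os.environ['JUPYTERHUB_API_TOKEN'], cache_max_age=60)

def identify(token):
    # returns the user (or service) model dict, or None if the token is not valid
    return auth.user_for_token(token)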

api_token c.HubAuth.api_token = Unicode('')

API key for accessing Hub API.

Default: $JUPYTERHUB_API_TOKEN

Loaded from services configuration in jupyterhub_config. Will be auto-generated for hub-managed services.

api_url c.HubAuth.api_url = Unicode('http://127.0.0.1:8081/hub/api')

The base API URL of the Hub.

Typically http://hub-ip:hub-port/hub/api Default: $JUPYTERHUB_API_URL

base_url c.HubAuth.base_url = Unicode('/')

The base URL prefix of this application

e.g. /services/service-name/ or /user/name/

Default: get from JUPYTERHUB_SERVICE_PREFIX

cache_max_age c.HubAuth.cache_max_age = Int(300)

The maximum time (in seconds) to cache the Hub’s responses for authentication.

A larger value reduces load on the Hub and occasional response lag. A smaller value reduces propagation time of changes on the Hub (rare).

Default: 300 (five minutes)

certfile c.HubAuth.certfile = Unicode('')

The ssl cert to use for requests

Use with keyfile

check_scopes(required_scopes, user)

Check whether the user has required scope(s)

client_ca c.HubAuth.client_ca = Unicode('')

The ssl certificate authority to use to verify requests

Use with keyfile and certfile

cookie_options c.HubAuth.cookie_options = Dict()

Additional options to pass when setting cookies.

Can include things like expires_days=None for session-expiry or secure=True if served on HTTPS and default HTTPS discovery fails (e.g. behind some proxies).

get_session_id(handler)

Get the jupyterhub session id

from the jupyterhub-session-id cookie.

get_token(handler, in_cookie=True)

Get the token authenticating a request

Changed in version 2.2: in_cookie added. Previously, only URL params and header were considered. Pass in_cookie=False to preserve that behavior.

  • in URL parameters: ?token=<token>

  • in header: Authorization: token <token>

  • in cookie (stored after oauth), if in_cookie is True

get_user(handler)

Get the Hub user for a given tornado handler.

Checks cookie with the Hub to identify the current user.

Parameters

handler (tornado.web.RequestHandler) – the current request handler

Returns

The user model, if a user is identified, None if authentication fails.

The ‘name’ field contains the user’s name.

Return type

user_model (dict)

hub_host c.HubAuth.hub_host = Unicode('')

The public host of JupyterHub

Only used if JupyterHub is spreading servers across subdomains.

hub_prefix c.HubAuth.hub_prefix = Unicode('/hub/')

The URL prefix for the Hub itself.

Typically /hub/ Default: $JUPYTERHUB_BASE_URL

keyfile c.HubAuth.keyfile = Unicode('')

The ssl key to use for requests

Use with certfile

login_url c.HubAuth.login_url = Unicode('/hub/login')

The login URL to use

Typically /hub/login

oauth_scopes c.HubAuth.oauth_scopes = Set()

OAuth scopes to use for allowing access.

Get from $JUPYTERHUB_OAUTH_SCOPES by default.

Deprecated and removed. Use HubOAuth to authenticate browsers.

user_for_token(token, use_cache=True, session_id='')

Ask the Hub to identify the user for a given token.

Parameters
  • token (str) – the token

  • use_cache (bool) – Specify use_cache=False to skip cached cookie values (default: True)

Returns

The user model, if a user is identified, None if authentication fails.

The ‘name’ field contains the user’s name.

Return type

user_model (dict)

HubOAuth
class jupyterhub.services.auth.HubOAuth(**kwargs)

HubAuth using OAuth for login instead of cookies set by the Hub.

Use this class if you want users to be able to visit your service with a browser. They will be authenticated via OAuth with the Hub.

api_token c.HubOAuth.api_token = Unicode('')

API key for accessing Hub API.

Default: $JUPYTERHUB_API_TOKEN

Loaded from services configuration in jupyterhub_config. Will be auto-generated for hub-managed services.

api_url c.HubOAuth.api_url = Unicode('http://127.0.0.1:8081/hub/api')

The base API URL of the Hub.

Typically http://hub-ip:hub-port/hub/api Default: $JUPYTERHUB_API_URL

base_url c.HubOAuth.base_url = Unicode('/')

The base URL prefix of this application

e.g. /services/service-name/ or /user/name/

Default: get from JUPYTERHUB_SERVICE_PREFIX

cache_max_age c.HubOAuth.cache_max_age = Int(300)

The maximum time (in seconds) to cache the Hub’s responses for authentication.

A larger value reduces load on the Hub and occasional response lag. A smaller value reduces propagation time of changes on the Hub (rare).

Default: 300 (five minutes)

certfile c.HubOAuth.certfile = Unicode('')

The ssl cert to use for requests

Use with keyfile

clear_cookie(handler)

Clear the OAuth cookie

client_ca c.HubOAuth.client_ca = Unicode('')

The ssl certificate authority to use to verify requests

Use with keyfile and certfile

property cookie_name

Use OAuth client_id for cookie name

because we don’t want to use the same cookie name across OAuth clients.

cookie_options c.HubOAuth.cookie_options = Dict()

Additional options to pass when setting cookies.

Can include things like expires_days=None for session-expiry or secure=True if served on HTTPS and default HTTPS discovery fails (e.g. behind some proxies).

generate_state(next_url=None, **extra_state)

Generate a state string, given a next_url redirect target

Parameters

next_url (str) – The URL of the page to redirect to on successful login.

Returns

state – the base64-encoded state string

Return type

str

get_next_url(b64_state='')

Get the next_url for redirection, given an encoded OAuth state

get_state_cookie_name(b64_state='')

Get the cookie name for oauth state, given an encoded OAuth state

Cookie name is stored in the state itself because the cookie name is randomized to deal with races between concurrent oauth sequences.

hub_host c.HubOAuth.hub_host = Unicode('')

The public host of JupyterHub

Only used if JupyterHub is spreading servers across subdomains.

hub_prefix c.HubOAuth.hub_prefix = Unicode('/hub/')

The URL prefix for the Hub itself.

Typically /hub/ Default: $JUPYTERHUB_BASE_URL

keyfile c.HubOAuth.keyfile = Unicode('')

The ssl key to use for requests

Use with certfile

login_url c.HubOAuth.login_url = Unicode('/hub/login')

The login URL to use

Typically /hub/login

oauth_authorization_url c.HubOAuth.oauth_authorization_url = Unicode('/hub/api/oauth2/authorize')

The URL to redirect to when starting the OAuth process

oauth_client_id c.HubOAuth.oauth_client_id = Unicode('')

The OAuth client ID for this application.

Use JUPYTERHUB_CLIENT_ID by default.

oauth_redirect_uri c.HubOAuth.oauth_redirect_uri = Unicode('')

OAuth redirect URI

Should generally be /base_url/oauth_callback

oauth_scopes c.HubOAuth.oauth_scopes = Set()

OAuth scopes to use for allowing access.

Get from $JUPYTERHUB_OAUTH_SCOPES by default.

oauth_token_url c.HubOAuth.oauth_token_url = Unicode('')

The URL for requesting an OAuth token from JupyterHub

set_cookie(handler, access_token)

Set a cookie recording OAuth result

set_state_cookie(handler, next_url='')

Generate an OAuth state and store it in a cookie

Parameters
  • handler (RequestHandler) – A tornado RequestHandler

  • next_url (str) – The page to redirect to on successful login

Returns

state – The OAuth state that has been stored in the cookie (url safe, base64-encoded)

Return type

str

property state_cookie_name

The cookie name for storing OAuth state

This cookie is only live for the duration of the OAuth handshake.

token_for_code(code)

Get token for OAuth temporary code

This is the last step of OAuth login. Should be called in OAuth Callback handler.

Parameters

code (str) – oauth code for finishing OAuth login

Returns

JupyterHub API Token

Return type

token (str)

HubAuthenticated
class jupyterhub.services.auth.HubAuthenticated

Mixin for tornado handlers that are authenticated with JupyterHub

A handler that mixes this in must have the following attributes/properties:

  • .hub_auth: A HubAuth instance

  • .hub_scopes: A set of JupyterHub 2.0 OAuth scopes to allow. The default comes from .hub_auth.oauth_scopes, which in turn is set by $JUPYTERHUB_OAUTH_SCOPES. Default values include ‘access:services’ and ‘access:services!service={service_name}’ for services, and ‘access:servers’, ‘access:servers!user={user}’, and ‘access:servers!server={user}/{server_name}’ for single-user servers.

If hub_scopes is not used (e.g. JupyterHub 1.x), these additional properties can be used:

  • .allow_admin: If True, allow any admin user. Default: False.

  • .hub_users: A set of usernames to allow. If left unspecified or None, username will not be checked.

  • .hub_groups: A set of group names to allow. If left unspecified or None, groups will not be checked.

Examples:

class MyHandler(HubAuthenticated, web.RequestHandler):
    def initialize(self, hub_auth):
        self.hub_auth = hub_auth

    @web.authenticated
    def get(self):
        ...
property allow_all

Property indicating that any successfully identified user or service should be allowed.

check_hub_user(model)

Check whether Hub-authenticated user or service should be allowed.

Returns the input if the user should be allowed, None otherwise.

Override for custom logic in authenticating users.

Parameters

user_model (dict) – the user or service model returned from HubAuth

Returns

The user model if the user should be allowed, None otherwise.

Return type

user_model (dict)

get_current_user()

Tornado’s authentication method

Returns

The user model, if a user is identified, None if authentication fails.

Return type

user_model (dict)

get_login_url()

Return the Hub’s login URL

hub_auth_class

alias of jupyterhub.services.auth.HubAuth

property hub_scopes

Set of allowed scopes (use hub_auth.oauth_scopes by default)

HubOAuthenticated
class jupyterhub.services.auth.HubOAuthenticated

Simple subclass of HubAuthenticated using OAuth instead of old shared cookies

HubOAuthCallbackHandler
class jupyterhub.services.auth.HubOAuthCallbackHandler(application: tornado.web.Application, request: tornado.httputil.HTTPServerRequest, **kwargs: Any)

OAuth Callback handler

Finishes the OAuth flow, setting a cookie to record the user’s info.

Should be registered at SERVICE_PREFIX/oauth_callback
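
A minimal sketch of wiring this up in a tornado service (following the pattern of JupyterHub's whoami service examples; the handler name is illustrative):

import os
from tornado import web
from jupyterhub.services.auth import HubOAuthCallbackHandler, HubOAuthenticated

prefix = os.environ.get('JUPYTERHUB_SERVICE_PREFIX', '/')

class WhoAmIHandler(HubOAuthenticated, web.RequestHandler):
    @web.authenticated
    def get(self):
        self.write(self.current_user)  # the user model resolved by the Hub

app = web.Application(
    [
        (prefix + 'oauth_callback', HubOAuthCallbackHandler),
        (prefix, WhoAmIHandler),
    ],
    cookie_secret=os.urandom(32),  # needed for the signed cookies used by HubOAuth
)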

RBAC Reference

JupyterHub RBAC

Role Based Access Control (RBAC) in JupyterHub serves to provide fine-grained control of access to JupyterHub’s API resources.

RBAC is new in JupyterHub 2.0.

Motivation

The JupyterHub API requires authorization to access its endpoints. This ensures that an arbitrary user, or even an unauthenticated third party, is not allowed to perform restricted actions. For instance, prior to the adoption of RBAC, creating or deleting users required admin rights.

The prior system is functional, but lacks flexibility. If your Hub serves a number of users in different groups, you might want to delegate permissions to other users or automate certain processes. Prior to RBAC, appointing a ‘group-only admin’ or a bot that culls idle servers required granting full admin rights to all actions. This poses a risk of the user or service intentionally or unintentionally accessing and modifying any data within the Hub, and violates the principle of least privilege.

To remedy situations like this, JupyterHub is transitioning to an RBAC system. By equipping users, groups and services with roles that supply them with a collection of permissions (scopes), administrators are able to fine-tune which parties are granted access to which resources.

Definitions

Scopes are specific permissions used to evaluate API requests. For example: the API endpoint users/servers, which enables starting or stopping user servers, is guarded by the scope servers.

Scopes are not directly assigned to requesters. Rather, when a client performs an API call, their access will be evaluated based on their assigned roles.

Roles are collections of scopes that specify the level of what a client is allowed to do. For example, a group administrator may be granted permission to control the servers of group members, but not to create, modify or delete group members themselves. Within the RBAC framework, this is achieved by assigning a role to the administrator that covers exactly those privileges.

Technical Overview
Roles

JupyterHub provides four roles that are available by default:

Default roles

  • user role provides a default user scope self that grants access to the user’s own resources.

  • admin role contains all available scopes and grants full rights to all actions. This role cannot be edited.

  • token role provides a default token scope all that resolves to the same permissions as the owner of the token has.

  • server role allows for posting activity of “itself” only.

These roles cannot be deleted.

These default roles have a default collection of scopes, but you can define the scopes associated with each role (excluding admin) to suit your needs, as seen below.

The user, admin, and token roles by default all preserve the permissions prior to RBAC. Only the server role is changed from pre-2.0, to reduce its permissions to activity-only instead of the default of a full access token.

Additional custom roles can also be defined (see Defining Roles). Roles can be assigned to the following entities:

  • Users

  • Services

  • Groups

  • Tokens

An entity can have zero, one, or multiple roles, and there are no restrictions on which roles can be assigned to which entity. Roles can be added to or removed from entities at any time.

Users
When a new user gets created, they are assigned their default role user. Additionally, if the user is created with admin privileges (via c.Authenticator.admin_users in jupyterhub_config.py or admin: true via API), they will also be granted the admin role. If an existing user’s admin status changes via API or jupyterhub_config.py, their default role will be updated accordingly (after the next startup in the latter case).

Services
Services do not have a default role. Services without roles have no access to the guarded API endpoints, so most services require a role assignment in order to function.

Groups
A group does not require any role, and has no roles by default. If a user is a member of a group, they automatically inherit any of the group’s permissions (see Resolving roles and scopes for more details). This is useful for assigning a set of common permissions to several users.

Tokens
A token’s permissions are evaluated based on its owning entity. Since a token is always issued for a user or service, it can never have more permissions than its owner. If no specific role is requested for a new token, the token is assigned the token role.

Defining Roles

Roles can be defined or modified in the configuration file as a list of dictionaries. An example:

# in jupyterhub_config.py

c.JupyterHub.load_roles = [{
   'name': 'server-rights',
   'description': 'Allows parties to start and stop user servers',
   'scopes': ['servers'],
   'users': ['alice', 'bob'],
   'services': ['idle-culler'],
   'groups': ['admin-group'],}
]

The role server-rights now allows the starting and stopping of servers by any of the following:

  • users alice and bob

  • the service idle-culler

  • any member of the admin-group.

Attention

Tokens cannot be assigned roles through role definition but may be assigned specific roles when requested via API (see Requesting API token with specific roles).

Another example:

# in jupyterhub_config.py

c.JupyterHub.load_roles = [
 {
   'description': 'Read-only user models',
   'name': 'reader',
   'scopes': ['read:users'],
   'services': ['external'],
   'users': ['maria', 'joe']
 }
]

The role reader allows users maria and joe and service external to read (but not modify) any user’s model.

Requirements

In a role definition, the name field is required, while all other fields are optional.
Role names must:

  • be 3 - 255 characters

  • use ascii lowercase, numbers, ‘unreserved’ URL punctuation -_.~

  • start with a letter

  • end with a letter or number.

users, services, and groups only accept objects that already exist in the database or are defined previously in the file. It is not possible to implicitly add a new user to the database by defining a new role.

If no scopes are defined for a new role, JupyterHub will raise a warning. Providing non-existent scopes will result in an error.

If a role with a certain name already exists in the database, its definition and scopes will be overwritten. This holds true for all roles except the admin role, which cannot be overwritten; an error will be raised if you try to do so. The permissions of all role bearers present in the definition will change accordingly.

Overriding default roles

Role definitions can include those of the “default” roles listed above (admin excluded), if the default scopes associated with those roles do not suit your deployment. For example, to specify what permissions the $JUPYTERHUB_API_TOKEN issued to all single-user servers has, define the server role.

To restore the JupyterHub 1.x behavior of servers being able to do anything their owners can do, use the scope inherit (for ‘inheriting’ the owner’s permissions):

c.JupyterHub.load_roles = [
 {
   'name': 'server',
   'scopes': ['inherit'],
 }
]

or, better yet, identify the specific scopes you want server environments to have access to.

If you don’t want to get too detailed, one option is the self scope, which will have no effect on non-admin users, but will restrict the token issued to admin user servers to only have access to their own resources, instead of being able to take actions on behalf of all other users.

c.JupyterHub.load_roles = [
 {
   'name': 'server',
   'scopes': ['self'],
 }
]
Removing roles

Only the entities present in the role definition in jupyterhub_config.py remain the role bearers. If a user, service or group is removed from the role definition, they will lose the role on the next startup.

Once a role is loaded, it remains in the database until it is removed from jupyterhub_config.py and the Hub is restarted. At that point, all previously defined role bearers will lose the role and its associated permissions. Default roles, even if previously redefined through the config file and then removed, will not be deleted from the database.

Scopes in JupyterHub

A scope has a syntax-based design that reveals which resources it provides access to. Resources are objects with a type, associated data, relationships to other resources, and a set of methods that operate on them (see RESTful API documentation for more information).

<resource> in the RBAC scope design refers to the resource name in the JupyterHub’s API endpoints in most cases. For instance, <resource> equal to users corresponds to JupyterHub’s API endpoints beginning with /users.

Scope conventions
  • <resource>
    The top-level <resource> scopes, such as users or groups, grant read, write, and list permissions to the resource itself as well as its sub-resources. For example, the scope users:activity is included in the scope users.

  • read:<resource>
    Limits permissions to read-only operations on single resources.

  • list:<resource>
    Read-only access to listing endpoints. Use read:<resource>:<subresource> to control what fields are returned.

  • admin:<resource>
    Grants additional permissions such as create/delete on the corresponding resource in addition to read and write permissions.

  • access:<resource>
    Grants access permissions to the <resource> via API or browser.

  • <resource>:<subresource>
    The vertically filtered scopes provide access to a subset of the information granted by the <resource> scope. E.g., the scope users:activity only provides permission to post user activity.

  • <resource>!<object>=<objectname>
    Horizontal filtering is implemented by the !<object>=<objectname> scope structure. A resource (or sub-resource) can be filtered based on user, server, group or service name. For instance, <resource>!user=charlie limits access to only return resources of user charlie.
    Only one filter per scope is allowed, but filters for the same scope have an additive effect; a larger filter can be used by supplying the scope multiple times with different filters.

By adding a scope to an existing role, all role bearers will gain the associated permissions.
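
For example, a role whose bearers may only read a single user's servers can use a horizontally filtered scope (a sketch; the role, user, and service names are illustrative):

c.JupyterHub.load_roles = [
    {
        'name': 'charlie-server-reader',
        'scopes': ['read:servers!user=charlie'],
        'services': ['monitor'],
    }
]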

Metascopes

Metascopes do not follow the general scope syntax. Instead, a metascope resolves to a set of scopes, which can refer to different resources, based on their owning entity. In JupyterHub, there are currently two metascopes:

  1. default user scope self, and

  2. default token scope all.

Default user scope

Access to the user’s own resources and subresources is covered by metascope self. This metascope includes the user’s model, activity, servers and tokens. For example, self for a user named “gerard” includes:

  • users!user=gerard where the users scope provides access to the full user model and activity. The filter restricts this access to the user’s own resources.

  • servers!user=gerard which grants the user access to their own servers without being able to create/delete any.

  • tokens!user=gerard which allows the user to access, request and delete their own tokens.

  • access:servers!user=gerard which allows the user to access their own servers via API or browser.

The self scope is only valid for user entities. In other cases (e.g., for services) it resolves to an empty set of scopes.

Default token scope

The token metascope all resolves to the same scopes that the token owner holds at request time. For example, if a token owner has roles containing the scopes read:groups and read:users, the all scope resolves to the set of scopes {read:groups, read:users}.

If the token owner has the default user role, the all scope resolves to self, which is subsequently expanded to include all the user-specific scopes (or an empty set in the case of services).

If the token owner is a member of any group with roles, the group scopes will also be included in resolving the all scope.

Horizontal filtering

Horizontal filtering, also called resource filtering, is the concept of reducing the payload of an API call to cover only the subset of the resources that the client’s scopes provide access to. Requested resources are filtered based on the filter of the corresponding scope. For instance, if a service requests a user list (guarded with scope read:users) with a role that only contains the scopes read:users!user=hannah and read:users!user=ivan, the returned list of user models will be the intersection of all users and the collection {hannah, ivan}. In case this intersection is empty, the API call returns an HTTP 404 error, regardless of whether any users exist outside of the client’s scope filter collection.

In case a user resource is being accessed, any scopes with group filters will be expanded to filters for each user in those groups.

!user filter

The !user filter is a special horizontal filter that strictly refers to the “owner only” scopes, where the owner is a user entity. The filter resolves internally into !user=<ownerusername>, ensuring that only the owner’s resources may be accessed through the associated scopes.

For example, the server role assigned by default to server tokens contains access:servers!user and users:activity!user scopes. This allows the token to access and post activity of only the servers owned by the token owner.

The filter can be applied to any scope.

Vertical filtering

Vertical filtering, also called attribute filtering, is the concept of reducing the payload of an API call to cover only the attributes of the resources that the client’s scopes provide access to. This occurs when the client’s scopes are subscopes of the API endpoint that is called. For instance, if a client requests a user list with the only scope being read:users:groups, the returned list of user models will contain only a list of groups per user. In case the client has multiple subscopes, the call returns the union of the data the client has access to.

The payload of an API call can be filtered both horizontally and vertically simultaneously. For instance, performing an API call to the endpoint /users/ with the scope users:name!user=juliette returns a payload of [{name: 'juliette'}] (provided that this name is present in the database).
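
As a minimal sketch of what such a call might look like from a client, assuming the Hub API is reachable at http://localhost:8000/hub/api and the client’s token carries only the scope users:name!user=juliette (the URL and token value below are placeholders):

import requests

hub_api = "http://localhost:8000/hub/api"  # placeholder Hub API URL
token = "abc123"                           # placeholder API token

r = requests.get(
    f"{hub_api}/users",
    headers={"Authorization": f"token {token}"},
)
r.raise_for_status()
# With only users:name!user=juliette, the response is filtered both
# horizontally (only juliette) and vertically (only the name attribute),
# e.g. [{"name": "juliette"}]
print(r.json())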

Available scopes

The table below lists all available scopes and illustrates their hierarchy. Indented scopes indicate subscopes of the scope(s) above them.

There are four exceptions to the general scope conventions:

  • read:users:name is a subscope of both read:users and read:servers.
    The read:servers scope requires access to the user name (the server owner) because named servers are distinguished internally in the form !server=username/servername.

  • read:users:activity is a subscope of both read:users and users:activity.
    Posting activity via the users:activity scope, which is not included in the users scope, needs to check the last valid activity of the user.

  • read:roles:users is a subscope of both read:roles and admin:users.
    Admin privileges to the users resource include the information about user roles.

  • read:roles:groups is a subscope of both read:roles and admin:groups.
    Similar to the read:roles:users above.

Table 1. Available scopes and their hierarchy

(no_scope)
  Identify the owner of the requesting entity.

self
  The user’s own resources (metascope for users, resolves to (no_scope) for services)

inherit
  Everything that the token-owning entity can access (metascope for tokens)

admin:users
  Read, write, create and delete users and their authentication state, not including their servers or tokens.

   admin:auth_state
     Read a user’s authentication state.

   users
     Read and write permissions to user models (excluding servers, tokens and authentication state).

      read:users
        Read user models (excluding servers, tokens and authentication state).

         read:users:name
           Read names of users.

         read:users:groups
           Read users’ group membership.

         read:users:activity
           Read time of last user activity.

      list:users
        List users, including at least their names.

         read:users:name
           Read names of users.

      users:activity
        Update time of last user activity.

         read:users:activity
           Read time of last user activity.

   read:roles:users
     Read user role assignments.

   delete:users
     Delete users.

read:roles
  Read role assignments.

   read:roles:users
     Read user role assignments.

   read:roles:services
     Read service role assignments.

   read:roles:groups
     Read group role assignments.

admin:servers
  Read, start, stop, create and delete user servers and their state.

   admin:server_state
     Read and write users’ server state.

   servers
     Start and stop user servers.

      read:servers
        Read users’ names and their server models (excluding the server state).

         read:users:name
           Read names of users.

      delete:servers
        Stop and delete users’ servers.

tokens
  Read, write, create and delete user tokens.

   read:tokens
     Read user tokens.

admin:groups
  Read and write group information, create and delete groups.

   groups
     Read and write group information, including adding/removing users to/from groups.

      read:groups
        Read group models.

         read:groups:name
           Read group names.

      list:groups
        List groups, including at least their names.

         read:groups:name
           Read group names.

   read:roles:groups
     Read group role assignments.

   delete:groups
     Delete groups.

list:services
  List services, including at least their names.

   read:services:name
     Read service names.

read:services
  Read service models.

   read:services:name
     Read service names.

read:hub
  Read detailed information about the Hub.

access:servers
  Access user servers via API or browser.

access:services
  Access services via API or browser.

proxy
  Read information about the proxy’s routing table, sync the Hub with the proxy and notify the Hub about a new proxy.

shutdown
  Shut down the Hub.

read:metrics
  Read Prometheus metrics.

Caution

Note that only horizontal filtering can be added to scopes to customize them.
The metascopes self and all, as well as the <resource>, <resource>:<subresource>, read:<resource>, admin:<resource>, and access:<resource> scopes, are predefined and cannot be changed otherwise.

Scopes and APIs

The scopes are also listed in the JupyterHub REST API documentation. Each API endpoint has a list of scopes which can be used to access the API; if no scopes are listed, the API is not authenticated and can be accessed without any permissions (i.e., no scopes).

The scopes listed for each API endpoint reflect the “lowest” permissions required to gain any access to the corresponding API. For example, posting a user’s activity (POST /users/:name/activity) needs the users:activity scope. If the users scope is passed during the request, access will be granted because the required scope is a subscope of users. If, on the other hand, the read:users:activity scope is passed, access will be denied.
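
For example, a client holding a token whose role includes the users:activity scope (or the broader users scope) could post activity roughly as sketched below; the URL, token, username, and timestamp are placeholders:

from datetime import datetime, timezone

import requests

hub_api = "http://localhost:8000/hub/api"  # placeholder Hub API URL
token = "abc123"                           # placeholder token with the users:activity scope

r = requests.post(
    f"{hub_api}/users/juliette/activity",
    headers={"Authorization": f"token {token}"},
    json={"last_activity": datetime.now(timezone.utc).isoformat()},
)
# Succeeds with users:activity (or users); a token carrying only
# read:users:activity would be denied access.
r.raise_for_status()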

Use Cases

To determine which scopes a role should have, one can follow these steps:

  1. Determine what actions the role holder should have/have not access to

  2. Match the actions against the JupyterHub’s APIs

  3. Check which scopes are required to access the APIs

  4. Combine scopes and subscopes if applicable

  5. Customize the scopes with filters if needed

  6. Define the role with required scopes and assign to users/services/groups/tokens

Below, different use cases are presented on how to use the RBAC framework.

Service to cull idle servers

Finding and shutting down idle servers can save a lot of computational resources. We can make use of jupyterhub-idle-culler to manage this for us. Below follows a short tutorial on how to add a cull-idle service in the RBAC system.

  1. Install the cull-idle server script with pip install jupyterhub-idle-culler.

  2. Define a new service idle-culler and a new role for this service:

    # in jupyterhub_config.py
    import sys
    
    c.JupyterHub.services = [
        {
            "name": "idle-culler",
            "command": [
                sys.executable, "-m",
                "jupyterhub_idle_culler",
                "--timeout=3600"
            ],
        }
    ]
    
    c.JupyterHub.load_roles = [
        {
            "name": "idle-culler",
            "description": "Culls idle servers",
            "scopes": ["read:users:name", "read:users:activity", "servers"],
            "services": ["idle-culler"],
        }
    ]
    

    Important

    Note that in the RBAC system the admin field in the idle-culler service definition is omitted. Instead, the idle-culler role provides the service with only the permissions it needs.

    If the optional actions of deleting the idle servers and/or removing inactive users are desired, change the following scopes in the idle-culler role definition (a combined sketch follows after this list):

    • servers to admin:servers for deleting servers

    • read:users:name, read:users:activity to admin:users for deleting users.

  3. Restart JupyterHub to complete the process.
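
As referenced in the note above, a sketch of the idle-culler role with both optional delete permissions enabled might look like the following (this simply swaps in the broader scopes; adjust to your needs):

# in jupyterhub_config.py (sketch with the optional delete permissions)

c.JupyterHub.load_roles = [
    {
        "name": "idle-culler",
        "description": "Culls idle servers and removes inactive users",
        # admin:servers allows deleting servers; admin:users allows deleting users
        "scopes": ["admin:users", "admin:servers"],
        "services": ["idle-culler"],
    }
]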

API launcher

A service capable of creating/removing users and launching multiple servers should have access to:

  1. POST and DELETE /users

  2. POST and DELETE /users/:name/server or /users/:name/servers/:server_name

  3. Creating/deleting servers

The scopes required to access these API endpoints are:

  1. admin:users

  2. servers

  3. admin:servers

From the above, the role definition is:

# in jupyterhub_config.py

c.JupyterHub.load_roles = [
    {
        "name": "api-launcher",
        "description": "Manages servers",
        "scopes": ["admin:users", "admin:servers"],
        "services": [<service_name>]
    }
]

If needed, the scopes can be modified to limit the permissions to, e.g., a particular group with the !group=groupname filter.
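
For example, restricting the launcher to a hypothetical group named project-alpha might look like this (the group and service names are placeholders):

# in jupyterhub_config.py (sketch; group and service names are hypothetical)

c.JupyterHub.load_roles = [
    {
        "name": "api-launcher",
        "description": "Manages users and servers in group project-alpha",
        "scopes": [
            "admin:users!group=project-alpha",
            "admin:servers!group=project-alpha",
        ],
        "services": ["launcher"],
    }
]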

Group admin roles

Roles can be used to specify different group member privileges.

For example, a group of students class-A may have a role allowing all group members to access information about their group. Teacher johan, who is a student of class-A but a teacher of another group of students, class-B, can have an additional role permitting him to access information about class-B students as well as start/stop their servers.

The roles can then be defined as follows:

# in jupyterhub_config.py

c.JupyterHub.load_groups = {
    'class-A': ['johan', 'student1', 'student2'],
    'class-B': ['student3', 'student4']
}

c.JupyterHub.load_roles = [
    {
        'name': 'class-A-student',
        'description': 'Grants access to information about the group',
        'scopes': ['read:groups!group=class-A'],
        'groups': ['class-A']
    },
    {
        'name': 'class-B-student',
        'description': 'Grants access to information about the group',
        'scopes': ['read:groups!group=class-B'],
        'groups': ['class-B']
    },
    {
        'name': 'teacher',
        'description': 'Allows for accessing information about teacher group members and starting/stopping their servers',
        'scopes': [ 'read:users!group=class-B', 'servers!group=class-B'],
        'users': ['johan']
    }
]

In the above example, johan has privileges inherited from class-A-student role and the teacher role on top of those.

Note

The scope filters (!group=) limit the privileges only to the particular groups. johan can access the servers and information of class-B group members only.

Technical Implementation

Roles are stored in the database, where they are associated with users, services, etc., and can be added or modified as explained in the Defining Roles section. Users, services, groups, and tokens can gain, change, and lose roles. This is currently achieved via jupyterhub_config.py (see Defining Roles) and will be made available via the API in the future. The latter will allow for changing a token’s role, and thereby its permissions, without the need to issue a new token.

Roles and scopes utilities can be found in the roles.py and scopes.py modules. Scope variables take on five different formats, which is reflected throughout the utilities via specific nomenclature (illustrated in the snippet after this list):

Scope variable nomenclature

  • scopes
    List of scopes with abbreviations (used in role definitions). E.g., ["users:activity!user"].

  • expanded scopes
    Set of expanded scopes without abbreviations (i.e., resolved metascopes, filters and subscopes). E.g., {"users:activity!user=charlie", "read:users:activity!user=charlie"}.

  • parsed scopes
    Dictionary (JSON-like) format of expanded scopes. E.g., {"users:activity": {"user": ["charlie"]}, "read:users:activity": {"user": ["charlie"]}}.

  • intersection
    Set of expanded scopes representing the intersection of two expanded scope sets.

  • identify scopes
    Set of expanded scopes needed for identify (whoami) endpoints.
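
To make the nomenclature concrete, here are the same example values written out as Python literals (the user name charlie is purely illustrative):

# illustration of the scope variable formats (hypothetical values)
scopes = ["users:activity!user"]           # abbreviated, as used in role definitions
expanded_scopes = {                        # metascopes, filters and subscopes resolved
    "users:activity!user=charlie",
    "read:users:activity!user=charlie",
}
parsed_scopes = {                          # JSON-like form used for access evaluation
    "users:activity": {"user": ["charlie"]},
    "read:users:activity": {"user": ["charlie"]},
}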

Resolving roles and scopes

Resolving roles refers to determining which roles a user, service, token, or group has, extracting the list of scopes from each role and combining them into a single set of scopes.

Resolving scopes involves expanding scopes into all their possible subscopes (expanded scopes), parsing them into the format used for access evaluation (parsed scopes) and, if applicable, comparing two sets of scopes (intersection). All procedures take into account the scope hierarchy, vertical and horizontal filtering, limited or elevated permissions (read:<resource> or admin:<resource>, respectively), and metascopes.

Roles and scopes are resolved on several occasions, for example when requesting an API token with specific roles or making an API request. The following sections provide more details.

Requesting API token with specific roles

API tokens grant access to JupyterHub’s APIs. The RBAC framework allows for requesting tokens with specific existing roles. To date, it is only possible to add roles to a token through the POST /users/:name/tokens API, where the roles can be specified in the token parameters body (see JupyterHub REST API).
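
A rough sketch of such a request with the requests library follows; the URL, token value, username, and note are placeholders, and the requester must already hold every permission the requested role grants:

import requests

hub_api = "http://localhost:8000/hub/api"  # placeholder Hub API URL
token = "abc123"                           # placeholder token of the requesting user

r = requests.post(
    f"{hub_api}/users/gerard/tokens",
    headers={"Authorization": f"token {token}"},
    # request the new token with a specific existing role
    json={"roles": ["server"], "note": "token for automation"},
)
r.raise_for_status()
new_token = r.json()["token"]  # the new API token, returned only once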

RBAC adds several steps into the token issue flow.

If no roles are requested, the token is issued with the default token role (providing the requester is allowed to create the token).

If the token is requested with any roles, the permissions of the requesting entity are checked against the requested permissions to ensure the token would not grant its owner additional privileges.

If, due to modifications of roles or entities, at API request time a token has any scopes that its owner does not, those scopes are removed. The API request is resolved without additional errors using the scopes intersection, but the Hub logs a warning (see Figure 2).

Resolving a token’s roles (yellow box in Figure 1) corresponds to resolving all the token’s owner roles (including the roles associated with their groups) and the token’s requested roles into a set of scopes. The two sets are compared (Resolve the scopes box in orange in Figure 1), taking into account the scope hierarchy but, solely for role assignment, omitting any horizontal filter comparison. If the token’s scopes are a subset of the token owner’s scopes, the token is issued with the requested roles; if not, JupyterHub will raise an error.

Figure 1 below illustrates the steps involved. The orange rectangles highlight where in the process the roles and scopes are resolved.

[Image: _images/rbac-token-request-chart.png]

Figure 1. Resolving roles and scopes during API token request

Making an API request

With the RBAC framework, each authenticated JupyterHub API request is guarded by a scope decorator that specifies which scopes are required to gain access to the API.

When an API request is performed, the requesting API token’s roles are again resolved (yellow box in Figure 2) to ensure the token does not grant more permissions than its owner has at request time (e.g., due to changing/losing roles). If the owner’s roles do not include some of the token’s scopes, only the intersection of the token’s and owner’s scopes will be used. For example, using a token with the scope users whose owner’s role scope is read:users:name will result in only the read:users:name scope being passed on. In the case of no intersection, an empty set of scopes will be used.

The passed scopes are compared to the scopes required to access the API as follows:

  • if the API scopes are present within the set of passed scopes, the access is granted and the API returns its “full” response

  • if that is not the case, another check determines whether subscopes of the required API scopes can be found in the passed scope set:

    • if found, the RBAC framework employs the filtering procedures to refine the API response to access only resource attributes corresponding to the passed scopes. For example, providing a scope read:users:activity!group=class-C for the GET /users API will return a list of user models from group class-C containing only the last_activity attribute for each user model

    • if not found, the access to API is denied

Figure 2 illustrates this process, highlighting in orange the steps where role and scope resolution, as well as filtering, occur.

[Image: _images/rbac-api-request-chart.png]

Figure 2. Resolving roles and scopes when an API request is made

Upgrading JupyterHub with RBAC framework

The RBAC framework requires a different database setup than previous JupyterHub versions because it eliminates the distinction between OAuth and API tokens (see OAuth vs API tokens for more details). This requires merging the two previously separate database tables into one. As a result, all existing tokens created before the upgrade no longer comply with the new database version and must be replaced.

This is achieved by the Hub deleting all existing tokens during the database upgrade and recreating, with the updated structure, the tokens loaded via the jupyterhub_config.py file. However, any manually issued or stored tokens are not recreated automatically and must be re-issued manually after the upgrade.

No other database records are affected.

Upgrade steps
  1. All running servers must be stopped before proceeding with the upgrade.

  2. To upgrade the Hub, follow the Upgrading JupyterHub instructions.

    Attention

    We advise against defining any new roles in the jupyterhub_config.py file right after the upgrade is completed and JupyterHub is restarted for the first time. This preserves the ‘current’ state of the Hub. You can define and assign new roles on any subsequent startup.

  3. After restarting the Hub, re-issue all tokens that were previously issued manually (i.e., not through the jupyterhub_config.py file).

When JupyterHub is restarted for the first time after the upgrade, all users, services and tokens stored in the database or re-loaded through the configuration file will be assigned their default role. Any entities added after that will be assigned their default role only if no other specific role is requested for them.

Changing the permissions after the upgrade

Once all the upgrade steps above are completed, the RBAC framework will be available for utilization. You can define new roles, modify default roles (apart from admin) and assign them to entities as described in the Defining Roles section.

We recommend the following procedure to start with RBAC:

  1. Identify the admin users and services that you would like to grant only the permissions they need through new roles.

  2. Strip these users and services of their admin status via API or UI. This will change their roles from admin to user.

    Note

    Stripping entities of their roles is currently available only via jupyterhub_config.py (see Removing roles).

  3. Define new roles that you would like to start using with appropriate scopes and assign them to these entities in jupyterhub_config.py.

  4. Restart the JupyterHub for the new roles to take effect.

OAuth vs API tokens
Before RBAC

Previous JupyterHub versions utilized two types of tokens: OAuth tokens and API tokens.

OAuth token is issued by the Hub to a single-user server when the user logs in. The token is stored in the browser cookie and is used to identify the user who owns the server during the OAuth flow. This token by default expires when the cookie reaches its expiry time of 2 weeks (or after 1 hour in JupyterHub versions < 1.3.0).

API token is issued by the Hub to a single-user server when launched and is used to communicate with the Hub’s APIs such as posting activity or completing the OAuth flow. This token has no expiry by default.

API tokens can also be issued to users via API (/hub/token or POST /users/:username/tokens) and services via jupyterhub_config.py to perform API requests.

With RBAC

The RBAC framework allows for granting tokens different levels of permissions via scopes attached to roles. The ‘identify only’ purpose of the separate OAuth tokens is no longer required. API tokens can be used for every action, including login and authentication, for which an API token with no role (i.e., no scope in Available scopes) is used.

OAuth tokens are therefore dropped from the Hub upgraded with the RBAC framework.

Contributing

We want you to contribute to JupyterHub in ways that are most exciting & useful to you. We value documentation, testing, bug reporting & code equally, and are glad to have your contributions in whatever form you wish :)

Our Code of Conduct (reporting guidelines) helps keep our community welcoming to as many people as possible.

Community communication channels

We use Discourse (https://discourse.jupyter.org) for online discussion. Everyone in the Jupyter community is welcome to bring ideas and questions there. In addition, we use Gitter for online, real-time text chat, a place for more ephemeral discussions. The primary Gitter channel for JupyterHub is jupyterhub/jupyterhub. Gitter isn’t archived or searchable, so we recommend going to Discourse first to make sure that discussions are most useful and accessible to the community. Remember that our community is distributed across the world in various timezones, so be patient if you do not get an answer immediately!

GitHub issues are used for most long-form project discussions, bug reports and feature requests. Issues related to a specific authenticator or spawner should be directed to the appropriate repository for that authenticator or spawner. If you are using a specific JupyterHub distribution (such as Zero to JupyterHub on Kubernetes or The Littlest JupyterHub), you should open issues directly in its repository. If you cannot find a repository to open your issue in, do not worry! Create it in the main JupyterHub repository and our community will help you figure it out.

A mailing list for all of Project Jupyter exists, along with one for teaching with Jupyter.

Setting up a development install
System requirements

JupyterHub can only run on macOS or Linux operating systems. If you are using Windows, we recommend using VirtualBox or a similar system to run Ubuntu Linux for development.

Install Python

JupyterHub is written in the Python programming language and requires that you have at least version 3.5 installed locally. If you haven’t installed Python before, the recommended way to install it is to use miniconda. Remember to get the ‘Python 3’ version, and not the ‘Python 2’ version!

Install nodejs

configurable-http-proxy, the default proxy implementation for JupyterHub, is written in JavaScript and runs on Node.js. If you have not installed nodejs before, we recommend installing it in the miniconda environment you set up for Python. You can do so with conda install nodejs.

Install git

JupyterHub uses git & GitHub for development & collaboration. You need to install git to work on JupyterHub. We also recommend getting a free account on GitHub.com.

Setting up a development install

When developing JupyterHub, you need to make changes to the code & see their effects quickly. You need to do a developer install to make that happen.

Note

This guide does not attempt to dictate how development environments should be isolated since that is a personal preference and can be achieved in many ways, for example tox, conda, docker, etc. See this forum thread for a more detailed discussion.

  1. Clone the JupyterHub git repository to your computer.

    git clone https://github.com/jupyterhub/jupyterhub
    cd jupyterhub
    
  2. Make sure the python and npm you installed are available to you on the command line.

    python -V
    

    This should return a version number greater than or equal to 3.5.

    npm -v
    

    This should return a version number greater than or equal to 5.0.

  3. Install configurable-http-proxy. This is required to run JupyterHub.

    npm install -g configurable-http-proxy
    

    If you get an error that says Error: EACCES: permission denied, you might need to prefix the command with sudo. If you do not have access to sudo, you may instead run the following commands:

    npm install configurable-http-proxy
    export PATH=$PATH:$(pwd)/node_modules/.bin
    

    The second line needs to be run every time you open a new terminal.

  4. Install the python packages required for JupyterHub development.

    python3 -m pip install -r dev-requirements.txt
    python3 -m pip install -r requirements.txt
    
  5. Setup a database.

    The default database engine is sqlite, so if you are just trying to get up and running quickly for local development, no extra setup is needed since sqlite support is available via Python. See The Hub’s Database for details on other supported databases.

  6. Install the development version of JupyterHub. This lets you edit JupyterHub code in a text editor & restart the JupyterHub process to see your code changes immediately.

    python3 -m pip install --editable .
    
  7. You are now ready to start JupyterHub!

    jupyterhub
    
  8. You can now access JupyterHub from your browser at http://localhost:8000.

Happy developing!

Using DummyAuthenticator & SimpleLocalProcessSpawner

To simplify testing of JupyterHub, it’s helpful to use DummyAuthenticator instead of the default JupyterHub authenticator and SimpleLocalProcessSpawner instead of the default spawner.

There is a sample configuration file that does this in testing/jupyterhub_config.py. To launch jupyterhub with this configuration:

jupyterhub -f testing/jupyterhub_config.py

The default JupyterHub authenticator & spawner require your system to have user accounts for each user you want to log in to JupyterHub as.

DummyAuthenticator allows you to log in with any username & password, while SimpleLocalProcessSpawner allows you to start servers without having to create a unix user for each JupyterHub user. Together, these make it much easier to test JupyterHub.
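
A minimal sketch of such a configuration (not the actual contents of testing/jupyterhub_config.py) might look like:

# jupyterhub_config.py (minimal sketch for local testing only)
from jupyterhub.auth import DummyAuthenticator
from jupyterhub.spawner import SimpleLocalProcessSpawner

c.JupyterHub.authenticator_class = DummyAuthenticator
c.JupyterHub.spawner_class = SimpleLocalProcessSpawner

# optionally require a shared password for all test users
c.DummyAuthenticator.password = "a-shared-test-password"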

Tip: If you are working on parts of JupyterHub that are common to all authenticators & spawners, we recommend using both DummyAuthenticator & SimpleLocalProcessSpawner. If you are working on just authenticator-related parts, use only SimpleLocalProcessSpawner. Similarly, if you are working on just spawner-related parts, use only DummyAuthenticator.

Troubleshooting

This section lists common ways setting up your development environment may fail, and how to fix them. Please add to the list if you encounter yet another way it can fail!

lessc not found

If the python3 -m pip install --editable . command fails and complains about lessc being unavailable, you may need to explicitly install some additional JavaScript dependencies:

npm install

This will fetch client-side JavaScript dependencies necessary to compile CSS.

You may also need to manually update JavaScript and CSS after some development updates, with:

python3 setup.py js    # fetch updated client-side js
python3 setup.py css   # recompile CSS from LESS sources
Contributing Documentation

Documentation is often more important than code. This page helps you get set up on how to contribute documentation to JupyterHub.

Building documentation locally

We use sphinx to build our documentation. It takes our documentation source files (written in markdown or reStructuredText & stored under the docs/source directory) and converts them into various formats for people to read. To make sure the documentation you write or change renders correctly, it is good practice to test it locally.

  1. Make sure you have successfully completed Setting up a development install.

  2. Install the packages required to build the docs.

    python3 -m pip install -r docs/requirements.txt
    
  3. Build the html version of the docs. This is the most commonly used output format, so verifying it renders as you expect is usually good enough.

    cd docs
    make html
    

    This step will display any syntax or formatting errors in the documentation, along with the filename / line number in which they occurred. Fix them, and re-run the make html command to re-render the documentation.

  4. View the rendered documentation by opening build/html/index.html in a web browser.

    Tip

    On macOS, you can open a file from the terminal with open <path-to-file>. On Linux, you can do the same with xdg-open <path-to-file>.

Documentation conventions

This section lists various conventions we use in our documentation. This is a living document that grows over time, so feel free to add to it / change it!

Our documentation does not yet fully conform to these conventions, so help in making it so would be appreciated!

pip invocation

There are many ways to invoke a pip command; we recommend the following approach:

python3 -m pip

This invokes pip explicitly using the python3 binary that you are currently using. This is the recommended way to invoke pip in our documentation, since it is least likely to cause problems where python3 and pip come from different environments.

For more information on how to invoke pip commands, see the pip documentation.

Testing JupyterHub

Unit tests help validate that JupyterHub works the way we think it does, and continues to do so when changes occur. They also help communicate precisely what we expect our code to do.

JupyterHub uses pytest for all our tests. You can find them under the jupyterhub/tests directory in the git repository.

Running the tests
  1. Make sure you have completed Setting up a development install. You should be able to start jupyterhub from the command line & access it from your web browser. This ensures that the dev environment is properly set up for tests to run.

  2. You can run all tests in JupyterHub

    pytest -v jupyterhub/tests
    

    This should display progress as it runs all the tests, printing information about any test failures as they occur.

    If you wish to confirm test coverage, run the tests with the --cov flag:

    pytest -v --cov=jupyterhub jupyterhub/tests
    
  3. You can also run tests in just a specific file:

    pytest -v jupyterhub/tests/<test-file-name>
    
  4. To run a specific test only, you can do:

    pytest -v jupyterhub/tests/<test-file-name>::<test-name>
    

    This runs the test with function name <test-name> defined in <test-file-name>. This is very useful when you are iteratively developing a single test.

    For example, to run the test test_shutdown in the file test_api.py, you would run:

    pytest -v jupyterhub/tests/test_api.py::test_shutdown
    
Troubleshooting Test Failures
All the tests are failing

Make sure you have completed all the steps in Setting up a development install successfully, and can launch jupyterhub from the terminal.

The JupyterHub roadmap

This roadmap collects “next steps” for JupyterHub. It is about creating a shared understanding of the project’s vision and direction amongst the community of users, contributors, and maintainers. The goal is to communicate priorities and upcoming release plans. It is not aimed at limiting contributions to what is listed here.

Using the roadmap
Sharing Feedback on the Roadmap

All of the community is encouraged to provide feedback and share new ideas. Please do so by submitting an issue. If you want to have an informal conversation first, use one of the other communication channels. After submitting the issue, others from the community will probably respond with questions or comments to clarify the issue. The maintainers will help identify what a good next step is for the issue.

What do we mean by “next step”?

When submitting an issue, think about what “next step” category best describes your issue:

  • now, a concrete/actionable step that is ready for someone to start work on. These might be items that link to an issue, or more abstract items like “decrease typos and dead links in the documentation”

  • soon, a less concrete/actionable step that is going to happen soon; discussions around the topic are coming close to an end, at which point it can move into the “now” category

  • later, abstract ideas or tasks that need a lot of discussion or experimentation to shape the idea so that it can be executed. This category can also contain concrete/actionable steps that have been postponed on purpose (steps that could be in “now”, but the decision was taken to work on them later)

Reviewing and Updating the Roadmap

The roadmap will get updated as time passes (next review by 1st December) based on discussions and ideas captured as issues. This means this list should not be exhaustive; it should only represent the “top of the stack” of ideas. It should not function as a wish list, collection of feature requests or todo list. For those, please create a new issue.

The roadmap should give the reader an idea of what is happening next, what needs input and discussion before it can happen and what has been postponed.

The roadmap proper
Project vision

JupyterHub is a dependable tool used by humans that reduces the complexity of creating the environment in which a piece of software can be executed.

Now

These “Now” items are considered active areas of focus for the project:

  • HubShare - a sharing service for use with JupyterHub.

    • Users should be able to:

      • Push a project to other users.

      • Get a checkout of a project from other users.

      • Push updates to a published project.

      • Pull updates from a published project.

      • Manage conflicts/merges by simply picking a version (our/theirs)

      • Get a checkout of a project from the internet. These steps are completely different from saving notebooks/files.

      • Have directories that are managed by git completely separately from our stuff.

      • Look at pushed content that they have access to without an explicit pull.

      • Define and manage teams of users.

        • Adding/removing a user to/from a team gives/removes them access to all projects that team has access to.

      • Build other services, such as static HTML publishing and dashboarding on top of these things.

Soon

These “Soon” items are under discussion. Once an item reaches the point of an actionable plan, the item will be moved to the “Now” section. Typically, these will be moved at a future review of the roadmap.

  • resource monitoring and management:

    • (prometheus?) API for resource monitoring

    • tracking activity on single-user servers instead of the proxy

    • notes and activity tracking per API token

Later

The “Later” items are things that are at the back of the project’s mind. At this time there is no active plan for an item. The project would like to find the resources and time to discuss these ideas.

  • real-time collaboration

    • Enter into real-time collaboration mode for a project that starts a shared execution context.

    • Once the single-user notebook package supports realtime collaboration, implement sharing mechanism integrated into the Hub.

Reporting security issues in Jupyter or JupyterHub

If you find a security vulnerability in Jupyter or JupyterHub, whether it is a failure of the security model described in Security Overview or a failure in implementation, please report it to security@ipython.org.

If you prefer to encrypt your security reports, you can use this PGP public key.

About JupyterHub

About

JupyterHub is an open source project and community. It is a part of the Jupyter Project. JupyterHub is an open and inclusive community, and invites contributions from anyone. This section covers information about our community, as well as ways that you can connect and get involved.

Contributors

Project Jupyter thanks the following people for their help and contribution on JupyterHub:

  • adelcast

  • Analect

  • anderbubble

  • anikitml

  • ankitksharma

  • apetresc

  • athornton

  • barrachri

  • BerserkerTroll

  • betatim

  • Carreau

  • cfournie

  • charnpreetsingh

  • chicovenancio

  • cikao

  • ckald

  • cmoscardi

  • consideRatio

  • cqzlxl

  • CRegenschein

  • cwaldbieser

  • danielballen

  • danoventa

  • daradib

  • darky2004

  • datapolitan

  • dblockow-d2dcrc

  • DeepHorizons

  • DerekHeldtWerle

  • dhirschfeld

  • dietmarw

  • dingc3

  • dmartzol

  • DominicFollettSmith

  • dsblank

  • dtaniwaki

  • echarles

  • ellisonbg

  • emmanuel

  • evanlinde

  • Fokko

  • fperez

  • franga2000

  • GladysNalvarte

  • glenak1911

  • gweis

  • iamed18

  • jamescurtin

  • JamiesHQ

  • JasonJWilliamsNY

  • jbweston

  • jdavidheiser

  • jencabral

  • jhamrick

  • jkinkead

  • johnkpark

  • josephtate

  • jzf2101

  • karfai

  • kinuax

  • KrishnaPG

  • kroq-gar78

  • ksolan

  • mbmilligan

  • mgeplf

  • minrk

  • mistercrunch

  • Mistobaan

  • mpacer

  • mwmarkland

  • ndly

  • nthiery

  • nxg

  • ObiWahn

  • ozancaglayan

  • paccorsi

  • parente

  • PeterDaveHello

  • peterruppel

  • phill84

  • pjamason

  • prasadkatti

  • rafael-ladislau

  • rcthomas

  • rgbkrk

  • rkdarst

  • robnagler

  • rschroll

  • ryanlovett

  • sangramga

  • Scrypy

  • schon

  • shreddd

  • Siecje

  • smiller5678

  • spoorthyv

  • ssanderson

  • summerswallow

  • syutbai

  • takluyver

  • temogen

  • ThomasMChen

  • Thoralf Gutierrez

  • timfreund

  • TimShawver

  • tklever

  • Todd-Z-Li

  • toobaz

  • tsaeger

  • tschaume

  • vilhelmen

  • whitead

  • willingc

  • YannBrrd

  • yuvipanda

  • zoltan-fedor

  • zonca

Questions? Suggestions?