Configuration Reference¶
The JupyterHub Helm chart is configurable by values in your config.yaml. In this way, you can extend user resources, build off of different Docker images, manage security and authentication, and more.
Below is a description of many but not all of the configurable values for the Helm chart. To see all configurable options, inspect their default values defined here.
For more guided information about some specific things you can do with modifications to the helm chart, see the Customization Guide.
scheduling¶
Objects for customizing the scheduling of various pods on the nodes and related labels.
scheduling.corePods¶
These settings influence the core pods like the hub, proxy and user-scheduler pods.
scheduling.corePods.nodeAffinity¶
Where should pods be scheduled? Perhaps nodes with a certain label should be preferred, or even required?
scheduling.corePods.nodeAffinity.matchNodePurpose¶
Decide whether core pods should ignore, prefer, or require scheduling on nodes with this label:
hub.jupyter.org/node-purpose=core
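For example, a config.yaml snippet that requires core pods to be scheduled on nodes carrying this label might look like the following (require is one of the three options named above; ignore and prefer are the alternatives):

```yaml
scheduling:
  corePods:
    nodeAffinity:
      # one of: ignore, prefer, require
      matchNodePurpose: require
```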
scheduling.podPriority¶
Pod Priority is used to allow real users to evict placeholder pods, which in turn triggers a scale-up by a cluster autoscaler. So, enabling this option will only make sense if the following conditions are met:
- Your Kubernetes cluster is at least version 1.11
- A cluster autoscaler is installed
- The user-placeholder pods are configured with a priority equal to or higher than the cluster autoscaler’s priority cutoff
- Normal user pods have a higher priority than the user-placeholder pods
Note that if the default priority cutoff is not configured on the cluster autoscaler, it will currently default to 0, and in the future this is meant to be lowered. If your cloud provider is installing the cluster autoscaler for you, they may also configure this specifically.
Recommended settings for a cluster autoscaler…
… with a priority cutoff of -10 (GKE):
podPriority:
  enabled: true
  globalDefault: false
  defaultPriority: 0
  userPlaceholderPriority: -10
… with a priority cutoff of 0:
podPriority:
  enabled: true
  globalDefault: true
  defaultPriority: 10
  userPlaceholderPriority: 0
scheduling.podPriority.userPlaceholderPriority¶
The actual value for the user-placeholder pods’ priority.
scheduling.podPriority.globalDefault¶
Warning! This will influence all pods in the cluster.
The priority a pod usually gets is 0, but this can be overridden with a PriorityClass resource if it is declared to be the global default. This configuration option allows for the creation of such a global default.
scheduling.podPriority.defaultPriority¶
The actual value for the default pod priority.
scheduling.podPriority.enabled¶
scheduling.userPlaceholder¶
User placeholders simulate users but will, thanks to PodPriority, be evicted by the cluster autoscaler if a real user shows up. In this way, placeholders allow you to create headroom for real users and reduce the risk of a user having to wait for a node to be added. Be sure to use the continuous image puller along with placeholders, so the images are also available when real users arrive.
To test your setup efficiently, you can adjust the amount of user placeholders with the following command:
# Configure to have 3 user placeholders
kubectl scale sts/user-placeholder --replicas=3
scheduling.userPlaceholder.replicas¶
How many placeholder pods would you like to have?
scheduling.userPlaceholder.enabled¶
scheduling.userPlaceholder.resources¶
Unless specified here, the placeholder pods will request the same resources specified for the real singleuser pods.
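As a sketch, enabling placeholders with explicit resource requests could look like this in config.yaml (the replica count and resource values are illustrative, not recommendations):

```yaml
scheduling:
  userPlaceholder:
    enabled: true
    replicas: 4
    # if omitted, placeholders request the same resources as real user pods
    resources:
      requests:
        memory: 1G
        cpu: 0.5
```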
scheduling.userPods¶
These settings influence the user pods like the user-placeholder, user-dummy and actual user pods named like jupyter-someusername.
scheduling.userPods.nodeAffinity¶
Where should pods be scheduled? Perhaps nodes with a certain label should be preferred, or even required?
scheduling.userPods.nodeAffinity.matchNodePurpose¶
Decide whether user pods should ignore, prefer, or require scheduling on nodes with this label:
hub.jupyter.org/node-purpose=user
scheduling.userScheduler¶
The user scheduler makes sure that user pods are packed tightly onto nodes; this is useful for autoscaling of user node pools.
scheduling.userScheduler.replicas¶
You can have multiple schedulers to share the workload or improve availability on node failure.
scheduling.userScheduler.image¶
The image containing the kube-scheduler binary.
scheduling.userScheduler.image.name¶
scheduling.userScheduler.image.tag¶
scheduling.userScheduler.enabled¶
Enables the user scheduler.
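A minimal snippet enabling the user scheduler, with two replicas for availability on node failure:

```yaml
scheduling:
  userScheduler:
    enabled: true
    replicas: 2
```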
ingress¶
ingress.hosts¶
List of hosts to route requests to the proxy.
ingress.annotations¶
Annotations to apply to the Ingress.
See the Kubernetes documentation for more details about annotations.
ingress.pathSuffix¶
Suffix added to the Ingress’s routing path pattern.
Specify * if your ingress matches paths by glob pattern.
ingress.tls¶
TLS configurations for Ingress.
See the Kubernetes documentation for more details about Ingress TLS.
ingress.enabled¶
Enable the creation of a Kubernetes Ingress to proxy-public service.
See [Advanced Topics — Zero to JupyterHub with Kubernetes documentation](https://zero-to-jupyterhub.readthedocs.io/en/stable/advanced.html#ingress) for more details.
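Tying the ingress options above together, a sketch of an Ingress configuration in config.yaml could look like the following (the hostname, annotation, and secret name are placeholders for your own values):

```yaml
ingress:
  enabled: true
  hosts:
    - hub.example.com
  annotations:
    # hypothetical annotation: depends on your ingress controller
    kubernetes.io/ingress.class: nginx
  tls:
    - hosts:
        - hub.example.com
      secretName: hub-example-tls  # placeholder secret name
```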
singleuser¶
Options for customizing the environment that is provided to the users after they log in.
singleuser.schedulerStrategy¶
Deprecated and no longer does anything. Use the user-scheduler instead in order to accomplish a good packing of the user pods.
singleuser.cpu¶
Set CPU limits & guarantees that are enforced for each user. See: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
singleuser.cpu.guarantee¶
singleuser.cpu.limit¶
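For instance, to guarantee each user half a CPU while capping them at two CPUs (illustrative values only):

```yaml
singleuser:
  cpu:
    guarantee: 0.5
    limit: 2
```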
singleuser.imagePullSecret¶
Creates an image pull secret for you and makes the user pods utilize it, allowing them to pull images from private image registries.
Using this configuration option automates the following steps that are normally required to pull from private image registries.
# you won't need to run this manually...
kubectl create secret docker-registry singleuser-image-credentials \
--docker-server=<REGISTRY> \
--docker-username=<USERNAME> \
--docker-email=<EMAIL> \
--docker-password=<PASSWORD>
# you won't need to specify this manually...
spec:
imagePullSecrets:
- name: singleuser-image-credentials
To find out what to use for the username and password fields when accessing a gcr.io registry from a Kubernetes cluster not associated with the same Google Cloud credentials, look into this guide and read the notes about the password.
singleuser.imagePullSecret.username¶
Name of the user you want to use to connect to your private registry. For external gcr.io, you will use the _json_key.
Examples:
- alexmorreale
- alex@pfc.com
- _json_key
singleuser.imagePullSecret.registry¶
Name of the private registry you want to create a credential set for. It will default to Docker Hub’s image registry.
Examples:
- https://index.docker.io/v1/
- quay.io
- eu.gcr.io
- alexmorreale.privatereg.net
singleuser.imagePullSecret.enabled¶
Enable the creation of a Kubernetes Secret containing credentials to access an image registry. By enabling this, user pods and image puller pods will also be configured to use these credentials when they pull their container images.
singleuser.imagePullSecret.password¶
Password of the user you want to use to connect to your private registry.
Examples:
- plaintextpassword
- abc123SECRETzyx098
For gcr.io registries the password will be a big JSON blob for a Google Cloud service account; it should look something like below.
password: |-
  {
    "type": "service_account",
    "project_id": "jupyter-se",
    "private_key_id": "f2ba09118a8d3123b3321bd9a7d6d0d9dc6fdb85",
    ...
  }
Learn more in this guide.
singleuser.memory¶
Set Memory limits & guarantees that are enforced for each user.
See the Kubernetes docs for more info.
singleuser.memory.guarantee¶
Note that this field is referred to as requests by the Kubernetes API.
singleuser.memory.limit¶
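For example, to guarantee each user 1 GB of memory with a hard limit of 2 GB (illustrative values):

```yaml
singleuser:
  memory:
    guarantee: 1G
    limit: 2G
```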
singleuser.extraTolerations¶
Tolerations allow a pod to be scheduled on nodes with taints. These are additional tolerations beyond the default ones for user pods and core pods, hub.jupyter.org/dedicated=user:NoSchedule or hub.jupyter.org/dedicated=core:NoSchedule. Note that a duplicate set of tolerations exists where / is replaced with _, as Google Cloud does not yet support the / character in tolerations.
See the Kubernetes docs for more info.
Pass this field an array of Toleration objects.
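A sketch of an extra toleration, assuming a hypothetical taint example.com/gpu=true:NoSchedule applied to some of your nodes:

```yaml
singleuser:
  extraTolerations:
    - key: example.com/gpu   # hypothetical taint key
      operator: Equal
      value: "true"
      effect: NoSchedule
```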
singleuser.extraNodeAffinity¶
Affinities describe where pods prefer or require to be scheduled. They may prefer or require a node with a certain label (node affinity), or require to be scheduled in proximity to, or away from, another pod (pod affinity and pod anti-affinity).
See the Kubernetes docs for more info.
singleuser.extraNodeAffinity.preferred¶
Pass this field an array of PreferredSchedulingTerm objects.
singleuser.extraNodeAffinity.required¶
Pass this field an array of NodeSelectorTerm objects.
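For example, requiring user pods to land on nodes with a hypothetical label disktype=ssd could be expressed as a NodeSelectorTerm like:

```yaml
singleuser:
  extraNodeAffinity:
    required:
      - matchExpressions:
          - key: disktype     # hypothetical node label
            operator: In
            values:
              - ssd
```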
singleuser.extraPodAntiAffinity¶
See the description of singleuser.extraNodeAffinity.
singleuser.extraPodAntiAffinity.preferred¶
Pass this field an array of WeightedPodAffinityTerm objects.
singleuser.extraPodAntiAffinity.required¶
Pass this field an array of PodAffinityTerm objects.
singleuser.image¶
Set custom image name / tag used for spawned users.
This image is used to launch the pod for each user.
singleuser.image.name¶
Name of the image, without the tag.
Examples:
- yuvipanda/wikimedia-hub-user
- gcr.io/my-project/my-user-image
singleuser.image.pullPolicy¶
Set the imagePullPolicy on the singleuser pods that are spun up by the hub.
See the Kubernetes docs for more info.
singleuser.image.tag¶
The tag of the image to use. This is the value after the : in your full image name.
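Putting name and tag together, using one of the example names above with an illustrative tag:

```yaml
singleuser:
  image:
    name: gcr.io/my-project/my-user-image
    tag: v1.0   # illustrative tag
```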
singleuser.extraPodAffinity¶
See the description of singleuser.extraNodeAffinity.
singleuser.extraPodAffinity.preferred¶
Pass this field an array of WeightedPodAffinityTerm objects.
singleuser.extraPodAffinity.required¶
Pass this field an array of PodAffinityTerm objects.
hub¶
hub.imagePullPolicy¶
Set the imagePullPolicy on the hub pod.
See the Kubernetes docs for more info on what the values mean.
hub.db¶
hub.db.pvc¶
Customize the Persistent Volume Claim used when hub.db.type is sqlite-pvc.
hub.db.pvc.storage¶
Size of disk to request for the database disk.
hub.db.pvc.selector¶
Label selectors to set for the PVC containing the sqlite database.
Useful when you are using a specific PV, and want to bind to that and only that.
See the Kubernetes documentation for more details about using a label selector for what PV to bind to.
hub.db.pvc.annotations¶
Annotations to apply to the PVC containing the sqlite database.
See the Kubernetes documentation for more details about annotations.
hub.db.type¶
Type of database backend to use for the hub database.
The Hub requires a persistent database to function, and this lets you specify where it should be stored.
The various options are:
sqlite-pvc
Use an sqlite database kept on a persistent volume attached to the hub. By default, this disk is created by the cloud provider using dynamic provisioning configured by a storage class. You can customize how this disk is created / attached by setting various properties under hub.db.pvc. This is the default setting, and should work well for most cloud provider deployments.
sqlite-memory
Use an in-memory sqlite database. This should only be used for testing, since the database is erased whenever the hub pod restarts - causing the hub to lose all memory of users who had logged in before. When using this for testing, make sure you delete all other objects that the hub has created (such as user pods, user PVCs, etc) every time the hub restarts. Otherwise you might run into errors about duplicate resources.
mysql
Use an externally hosted mysql database. You have to specify an sqlalchemy connection string for the mysql database you want to connect to in hub.db.url if using this option. The general format of the connection string is:
mysql+pymysql://<db-username>:<db-password>@<db-hostname>:<db-port>/<db-name>
The user specified in the connection string must have the rights to create tables in the database specified. Note that if you use this, you must also set hub.cookieSecret.
postgres
Use an externally hosted postgres database. You have to specify an sqlalchemy connection string for the postgres database you want to connect to in hub.db.url if using this option. The general format of the connection string is:
postgres+psycopg2://<db-username>:<db-password>@<db-hostname>:<db-port>/<db-name>
The user specified in the connection string must have the rights to create tables in the database specified. Note that if you use this, you must also set hub.cookieSecret.
hub.db.url¶
Connection string when hub.db.type is mysql or postgres.
See documentation for hub.db.type for more details on the format of this property.
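As a sketch, pointing the hub at an external postgres database (the hostname and database name are placeholders; remember that using an external database also requires hub.cookieSecret to be set):

```yaml
hub:
  db:
    type: postgres
    # placeholder connection string
    url: postgres+psycopg2://hubuser:<db-password>@db.example.com:5432/jupyterhub
```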
hub.db.password¶
Password for the database when hub.db.type is mysql or postgres.
hub.uid¶
The UID the hub process should be running as.
Use this only if you are building your own image & know that a user with this uid exists inside the hub container! Advanced feature, handle with care!
Defaults to 1000, which is the uid of the jovyan user that is present in the default hub image.
hub.fsGid¶
The gid the hub process should be using when touching any volumes mounted.
Use this only if you are building your own image & know that a group with this gid exists inside the hub container! Advanced feature, handle with care!
Defaults to 1000, which is the gid of the jovyan user that is present in the default hub image.
hub.imagePullSecret¶
Creates an image pull secret for you and makes the hub pod utilize it, allowing it to pull images from private image registries.
Using this configuration option automates the following steps that are normally required to pull from private image registries.
# you won't need to run this manually...
kubectl create secret docker-registry hub-image-credentials \
--docker-server=<REGISTRY> \
--docker-username=<USERNAME> \
--docker-email=<EMAIL> \
--docker-password=<PASSWORD>
# you won't need to specify this manually...
spec:
imagePullSecrets:
- name: hub-image-credentials
To find out what to use for the username and password fields when accessing a gcr.io registry from a Kubernetes cluster not associated with the same Google Cloud credentials, look into this guide and read the notes about the password.
hub.imagePullSecret.username¶
Name of the user you want to use to connect to your private registry. For external gcr.io, you will use the _json_key.
Examples:
- alexmorreale
- alex@pfc.com
- _json_key
hub.imagePullSecret.registry¶
Name of the private registry you want to create a credential set for. It will default to Docker Hub’s image registry.
Examples:
- https://index.docker.io/v1/
- quay.io
- eu.gcr.io
- alexmorreale.privatereg.net
hub.imagePullSecret.enabled¶
Enable the creation of a Kubernetes Secret containing credentials to access an image registry. By enabling this, the hub pod will also be configured to use these credentials when it pulls its container image.
hub.imagePullSecret.password¶
Password of the user you want to use to connect to your private registry.
Examples:
- plaintextpassword
- abc123SECRETzyx098
For gcr.io registries the password will be a big JSON blob for a Google Cloud service account; it should look something like below.
password: |-
  {
    "type": "service_account",
    "project_id": "jupyter-se",
    "private_key_id": "f2ba09118a8d3123b3321bd9a7d6d0d9dc6fdb85",
    ...
  }
Learn more in this guide.
hub.cookieSecret¶
A 32-byte cryptographically secure randomly generated string used to sign values of secure cookies set by the hub. If unset, JupyterHub will generate one on startup and save it in the file jupyterhub_cookie_secret in the /srv/jupyterhub directory of the hub container. A value set here will make JupyterHub overwrite any previous file.
You do not need to set this at all if you are using the default configuration for storing databases - sqlite on a persistent volume (with hub.db.type set to the default sqlite-pvc). If you are using an external database, then you must set this value explicitly - or your users will keep getting logged out each time the hub pod restarts.
Changing this value will cause all user logins to be invalidated. If this secret leaks, immediately change it to something else, or user data can be compromised.
# to generate a value, run
openssl rand -hex 32
hub.extraEnv¶
Extra environment variables that should be set for the hub pod.
A list of EnvVar objects.
These are usually used in two circumstances:
- Passing parameters to some custom code specified with extraConfig
- Passing parameters to an authenticator or spawner that can be directly customized by environment variables (rarer)
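For example, a hypothetical variable that custom code in extraConfig (or an authenticator) could read from its environment:

```yaml
hub:
  extraEnv:
    - name: MY_HUB_SETTING   # hypothetical variable name
      value: "some-value"
```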
hub.service¶
Object to configure the service the JupyterHub will be exposed on by the Kubernetes server.
hub.service.ports¶
Object to configure the ports the hub service will be deployed on.
hub.service.ports.nodePort¶
The nodePort to deploy the hub service on.
hub.service.annotations¶
Kubernetes annotations to apply to the hub service.
hub.service.type¶
The Kubernetes ServiceType to be used.
The default type is ClusterIP.
See the Kubernetes docs
to learn more about service types.
hub.service.loadBalancerIP¶
The public IP address the hub service should be exposed on.
This sets the IP address that should be used by the LoadBalancer for exposing the hub service. Set this if you want the hub service to be provided with a fixed external IP address instead of a dynamically acquired one. Useful to ensure a stable IP address to access the hub with, for example if you have reserved an IP address in your network to communicate with the JupyterHub.
To be provided like:
hub:
  service:
    loadBalancerIP: xxx.xxx.xxx.xxx
hub.extraConfig¶
Arbitrary extra Python-based configuration that should be in jupyterhub_config.py.
This is the escape hatch - if you want to configure JupyterHub to do something specific that is not present here as an option, you can write the raw Python to do it here.
extraConfig is a dict, so there can be multiple configuration snippets under different names. The configuration sections are run in alphabetical order.
Non-exhaustive examples of things you can do here:
- Subclass authenticator / spawner to do a custom thing
- Dynamically launch different images for different sets of users
- Inject an auth token from GitHub authenticator into user pod
- Anything else you can think of!
Since this is usually a multi-line string, you want to format it using YAML’s | operator.
For example:
hub:
  extraConfig:
    myConfig.py: |
      c.JupyterHub.something = 'something'
      c.Spawner.somethingelse = 'something else'
No validation of this Python is performed! If you make a mistake here, it will probably manifest as either the hub pod going into Error or CrashLoopBackoff states, or in some special cases, the hub running but… just doing very random things. Be careful!
hub.image¶
Set custom image name / tag for the hub pod.
Use this to customize which hub image is used. Note that you must use a version of the hub image that was bundled with this particular version of the helm-chart - using other images might not work.
hub.image.name¶
Name of the image, without the tag.
# example names
yuvipanda/wikimedia-hub
gcr.io/my-project/my-hub
hub.image.tag¶
The tag of the image to pull. This is the value after the : in your full image name.
# example tags
v1.11.1
zhy270a
proxy¶
proxy.service¶
Object to configure the service the JupyterHub’s proxy will be exposed on by the Kubernetes server.
proxy.service.annotations¶
Annotations to apply to the service that is exposing the proxy.
See the Kubernetes documentation for more details about annotations.
proxy.service.labels¶
Extra labels to add to the proxy service.
See the Kubernetes docs to learn more about labels.
proxy.service.nodePorts¶
Object to set NodePorts to expose the service on for http and https.
See the Kubernetes documentation for more details about NodePorts.
proxy.service.nodePorts.http¶
The HTTP port the proxy-public service should be exposed on.
proxy.service.nodePorts.https¶
The HTTPS port the proxy-public service should be exposed on.
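Combining the two, a NodePort setup for the proxy might be sketched as follows (port numbers are illustrative and must fall within your cluster's NodePort range):

```yaml
proxy:
  service:
    type: NodePort
    nodePorts:
      http: 30080
      https: 30443
```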
proxy.service.type¶
See hub.service.type.
proxy.service.loadBalancerIP¶
See hub.service.loadBalancerIP.
proxy.secretToken¶
A 32-byte cryptographically secure randomly generated string used to secure communications between the hub and the configurable-http-proxy.
# to generate a value, run
openssl rand -hex 32
Changing this value will cause the proxy and hub pods to restart. It is good security practice to rotate these values over time. If this secret leaks, immediately change it to something else, or user data can be compromised.
proxy.https¶
Object for customizing the settings for HTTPS used by the JupyterHub’s proxy. For more information on configuring HTTPS for your JupyterHub, see the HTTPS section in our security guide.
proxy.https.manual¶
Object for providing own certificates for manual HTTPS configuration. To be provided when setting https.type to manual.
See Set up manual HTTPS.
proxy.https.manual.cert¶
The certificate to be used for HTTPS. To be provided in the form of
cert: |
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----
proxy.https.manual.key¶
The RSA private key to be used for HTTPS. To be provided in the form of
key: |
  -----BEGIN RSA PRIVATE KEY-----
  ...
  -----END RSA PRIVATE KEY-----
proxy.https.secret¶
Secret to be provided when setting https.type to secret.
proxy.https.secret.name¶
Name of the secret.
proxy.https.secret.key¶
Path to the private key to be used for HTTPS.
Example: 'tls.key'
proxy.https.secret.crt¶
Path to the certificate to be used for HTTPS.
Example: 'tls.crt'
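A sketch that pulls the certificate from an existing Kubernetes TLS secret (the secret name is a placeholder; tls.crt and tls.key are the conventional key names in a kubernetes.io/tls secret):

```yaml
proxy:
  https:
    enabled: true
    type: secret
    secret:
      name: example-tls   # placeholder secret name
      key: tls.key
      crt: tls.crt
```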
proxy.https.enabled¶
Indicator to set whether HTTPS should be enabled or not on the proxy. Defaults to true if the https object is provided.
proxy.https.letsencrypt¶
proxy.https.letsencrypt.contactEmail¶
The contact email to be used for automatically provisioned HTTPS certificates by Let’s Encrypt. For more information see Set up automatic HTTPS. Required for automatic HTTPS.
proxy.https.type¶
The type of HTTPS encryption that is used.
Decides on which ports and network policies are used for communication via HTTPS. Setting this to secret sets the type to manual HTTPS with a secret that has to be provided in the https.secret object.
Defaults to letsencrypt.
proxy.https.hosts¶
Your domain in list form. Required for automatic HTTPS. See Set up automatic HTTPS. To be provided like:
hosts:
  - <your-domain-name>
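Putting the automatic-HTTPS options together, a minimal Let's Encrypt sketch (the domain and email are placeholders for your own values):

```yaml
proxy:
  https:
    enabled: true
    type: letsencrypt
    letsencrypt:
      contactEmail: admin@example.com   # placeholder
    hosts:
      - hub.example.com                 # placeholder
```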
auth¶
auth.state¶
auth.state.enabled¶
Enable persisting auth_state (if available). See: http://jupyterhub.readthedocs.io/en/latest/api/auth.html
auth.state.cryptoKey¶
auth_state will be encrypted and stored in the Hub’s database. This can include things like authentication tokens, etc., to be passed to Spawners as environment variables. Encrypting auth_state requires the cryptography package. This field must contain one (or more, separated by ;) 32-byte encryption keys, which can be either base64 or hex-encoded. The JUPYTERHUB_CRYPT_KEY environment variable for the hub pod is set using this entry.
# to generate a value, run
openssl rand -hex 32
If encryption is unavailable, auth_state cannot be persisted.
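For example, enabling auth_state persistence (the key shown is a placeholder; generate your own with the openssl command above):

```yaml
auth:
  state:
    enabled: true
    cryptoKey: "<output of: openssl rand -hex 32>"
```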
custom¶
Additional values to pass to the Hub.
JupyterHub will not itself look at these,
but you can read values in your own custom config via hub.extraConfig
.
For example:
custom:
  myHost: "https://example.horse"
hub:
  extraConfig:
    myConfig.py: |
      c.MyAuthenticator.host = get_config("custom.myHost")