Compare commits


20 Commits

Author SHA1 Message Date
Min RK
69bb34b943 release 1.2.2 2020-11-27 14:44:42 +01:00
Min RK
728fbc68e0 Merge pull request #3285 from meeseeksmachine/auto-backport-of-pr-3284-on-1.2.x
Backport PR #3284 on branch 1.2.x (Changelog for 1.2.2)
2020-11-27 14:41:45 +01:00
Min RK
0dad9a3f39 Backport PR #3284: Changelog for 1.2.2 2020-11-27 13:41:32 +00:00
Min RK
41f291c0c9 Merge pull request #3282 from meeseeksmachine/auto-backport-of-pr-3257-on-1.2.x
Backport PR #3257 on branch 1.2.x (Update services-basics.md to use jupyterhub_idle_culler)
2020-11-27 10:05:01 +01:00
Min RK
9a5b11d5e1 Merge pull request #3283 from meeseeksmachine/auto-backport-of-pr-3250-on-1.2.x
Backport PR #3250 on branch 1.2.x (remove push-branch conditions for CI)
2020-11-27 10:04:11 +01:00
Erik Sundell
b47159b31e Backport PR #3250: remove push-branch conditions for CI 2020-11-27 09:03:54 +00:00
Erik Sundell
bbe377b70a Backport PR #3257: Update services-basics.md to use jupyterhub_idle_culler 2020-11-27 08:59:11 +00:00
Min RK
374a3a7b36 Merge pull request #3273 from meeseeksmachine/auto-backport-of-pr-3237-on-1.2.x
Backport PR #3237 on branch 1.2.x ([proxy.py] Improve robustness when detecting and closing existing proxy processes)
2020-11-26 10:01:46 +01:00
Min RK
32c493e5ab Merge pull request #3272 from meeseeksmachine/auto-backport-of-pr-3252-on-1.2.x
Backport PR #3252 on branch 1.2.x (Standardize "Sign in" capitalization on the login page)
2020-11-20 10:34:41 +01:00
Min RK
edfd363758 Merge pull request #3271 from meeseeksmachine/auto-backport-of-pr-3265-on-1.2.x
Backport PR #3265 on branch 1.2.x (Fix RootHandler when default_url is a callable)
2020-11-20 10:34:31 +01:00
Min RK
d72a5ca3e4 Merge pull request #3274 from meeseeksmachine/auto-backport-of-pr-3255-on-1.2.x
Backport PR #3255 on branch 1.2.x (Environment marker on pamela)
2020-11-20 10:34:22 +01:00
Min RK
3a6309a570 Backport PR #3255: Environment marker on pamela 2020-11-20 09:17:45 +00:00
Min RK
588407200f Backport PR #3237: [proxy.py] Improve robustness when detecting and closing existing proxy processes 2020-11-20 09:17:08 +00:00
Min RK
5cc36a6809 Backport PR #3252: Standardize "Sign in" capitalization on the login page 2020-11-20 09:16:57 +00:00
Min RK
5733eb76c2 Backport PR #3265: Fix RootHandler when default_url is a callable 2020-11-20 09:15:46 +00:00
Min RK
d9719e3538 Merge pull request #3269 from meeseeksmachine/auto-backport-of-pr-3261-on-1.2.x
Backport PR #3261 on branch 1.2.x (Only preserve params when ?next= is unspecified)
2020-11-20 10:10:43 +01:00
Min RK
7c91fbea93 Merge pull request #3270 from meeseeksmachine/auto-backport-of-pr-3246-on-1.2.x
Backport PR #3246 on branch 1.2.x (Migrate from travis to GitHub actions)
2020-11-20 10:10:30 +01:00
Min RK
5076745085 back to dev 2020-11-20 09:54:14 +01:00
Min RK
39eea2f053 Backport PR #3246: Migrate from travis to GitHub actions 2020-11-20 08:51:59 +00:00
Min RK
998f5d7b6c Backport PR #3261: Only preserve params when ?next= is unspecified 2020-11-20 08:48:02 +00:00
445 changed files with 13470 additions and 65617 deletions

.circleci/config.yml

@@ -0,0 +1,33 @@
# Python CircleCI 2.0 configuration file
# Updating CircleCI configuration from v1 to v2
# Check https://circleci.com/docs/2.0/language-python/ for more details
#
version: 2
jobs:
build:
machine: true
steps:
- checkout
- run:
name: build images
command: |
docker build -t jupyterhub/jupyterhub .
docker build -t jupyterhub/jupyterhub-onbuild onbuild
docker build -t jupyterhub/jupyterhub:alpine -f dockerfiles/Dockerfile.alpine .
docker build -t jupyterhub/singleuser singleuser
- run:
name: smoke test jupyterhub
command: |
docker run --rm -it jupyterhub/jupyterhub jupyterhub --help
- run:
name: verify static files
command: |
docker run --rm -it -v $PWD/dockerfiles:/io jupyterhub/jupyterhub python3 /io/test.py
# Tell CircleCI to use this workflow when it builds the site
workflows:
version: 2
default:
jobs:
- build

.dockerignore

@@ -5,5 +5,6 @@ jupyterhub.sqlite
jupyterhub_config.py
node_modules
docs
.git
dist
build

.flake8

@@ -3,9 +3,14 @@
# E: style errors
# W: style warnings
# C: complexity
# D: docstring warnings (unused pydocstyle extension)
# F401: module imported but unused
# F403: import *
# F811: redefinition of unused `name` from line `N`
# F841: local variable assigned but never used
ignore = E, C, W, D, F841
# E402: module level import not at top of file
# I100: Import statements are in the wrong order
# I101: Imported names are in the wrong order. Should be
ignore = E, C, W, F401, F403, F811, F841, E402, I100, I101, D400
builtins = c, get_config
exclude =
.cache,

.github/dependabot.yaml

@@ -1,64 +0,0 @@
# dependabot.yaml reference: https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
#
# Notes:
# - Status and logs from dependabot are provided at
# https://github.com/jupyterhub/jupyterhub/network/updates.
#
version: 2
updates:
# Maintain dependencies in our GitHub Workflows
- package-ecosystem: github-actions
directory: /
labels: [ci]
schedule:
interval: monthly
time: "05:00"
timezone: Etc/UTC
- package-ecosystem: npm
directory: /
groups:
# one big pull request for minor bumps
npm-minor:
patterns:
- "*"
update-types:
- minor
- patch
schedule:
interval: monthly
- package-ecosystem: npm
directory: /jsx
groups:
# one big pull request for minor bumps
jsx-minor:
patterns:
- "*"
update-types:
- minor
- patch
# group major bumps of react-related dependencies
jsx-react:
patterns:
- "react*"
- "redux*"
- "*react"
- "recompose"
update-types:
- major
# group major bumps of webpack-related dependencies
jsx-webpack:
patterns:
- "*webpack*"
- "@babel/*"
- "*-loader"
update-types:
- major
# group major bumps of jest-related dependencies
jsx-jest:
patterns:
- "*jest*"
- "*test*"
update-types:
- major
schedule:
interval: monthly

.github/workflows/release.yml

@@ -1,87 +0,0 @@
# This is a GitHub workflow defining a set of jobs with a set of steps.
# ref: https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions
#
# Test build release artifacts (PyPI package) and publish them on
# pushed git tags.
#
name: Release
on:
pull_request:
paths-ignore:
- "docs/**"
- "**.md"
- "**.rst"
- ".github/workflows/*"
- "!.github/workflows/release.yml"
push:
paths-ignore:
- "docs/**"
- "**.md"
- "**.rst"
- ".github/workflows/*"
- "!.github/workflows/release.yml"
branches-ignore:
- "dependabot/**"
- "pre-commit-ci-update-config"
tags:
- "**"
workflow_dispatch:
permissions:
contents: read
jobs:
build-release:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v5
- uses: actions/setup-python@v6
with:
python-version: "3.11"
cache: pip
- uses: actions/setup-node@v5
with:
node-version: "20"
- name: install build requirements
run: |
npm install -g yarn
pip install --upgrade pip
pip install build
pip freeze
- name: build release
run: |
python -m build --sdist --wheel .
ls -l dist
- name: verify sdist
run: |
./ci/check_sdist.py dist/jupyterhub-*.tar.gz
- name: verify data-files are installed where they are found
run: |
pip install dist/*.whl
./ci/check_installed_data.py
- name: verify sdist can be installed without npm/yarn
run: |
docker run --rm -v $PWD/dist:/dist:ro docker.io/library/python:3.9-slim-bullseye bash -c 'pip install /dist/jupyterhub-*.tar.gz'
# ref: https://github.com/actions/upload-artifact#readme
- uses: actions/upload-artifact@v4
with:
name: jupyterhub-${{ github.sha }}
path: "dist/*"
if-no-files-found: error
- name: Publish to PyPI
if: startsWith(github.ref, 'refs/tags/')
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
run: |
pip install twine
twine upload --skip-existing dist/*

.github/workflows/support-bot.yml

@@ -1,31 +0,0 @@
# https://github.com/dessant/support-requests
name: "Support Requests"
on:
issues:
types: [labeled, unlabeled, reopened]
permissions:
issues: write
jobs:
action:
runs-on: ubuntu-latest
steps:
- uses: dessant/support-requests@v4
with:
github-token: ${{ github.token }}
support-label: "support"
issue-comment: |
Hi there @{issue-author} :wave:!
I closed this issue because it was labelled as a support question.
Please help us organize discussion by posting this on the http://discourse.jupyter.org/ forum.
Our goal is to sustain a positive experience for both users and developers. We use GitHub issues for specific discussions related to changing a repository's content, and let the forum be where we can more generally help and inspire each other.
Thank you for being an active member of our community! :heart:
close-issue: true
lock-issue: false
issue-lock-reason: "off-topic"

.github/workflows/test-docs.yml

@@ -1,120 +0,0 @@
# This is a GitHub workflow defining a set of jobs with a set of steps.
# ref: https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions
#
# This workflow validates the REST API definition and runs the pytest tests in
# the docs/ folder. This workflow does not build the documentation. That is
# instead tested via ReadTheDocs (https://readthedocs.org/projects/jupyterhub/).
#
name: Test docs
# The tests defined in docs/ are currently influenced by changes to _version.py
# and scopes.py.
on:
pull_request:
paths:
- "docs/**"
- "jupyterhub/_version.py"
- "jupyterhub/scopes.py"
- ".github/workflows/test-docs.yml"
push:
paths:
- "docs/**"
- "jupyterhub/_version.py"
- "jupyterhub/scopes.py"
- ".github/workflows/test-docs.yml"
branches-ignore:
- "dependabot/**"
- "pre-commit-ci-update-config"
tags:
- "**"
workflow_dispatch:
permissions:
contents: read
env:
# Without this, UTF-8 content may be interpreted as ASCII and cause errors.
LANG: C.UTF-8
PYTEST_ADDOPTS: "--verbose --color=yes"
jobs:
validate-rest-api-definition:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v5
- uses: actions/setup-node@v5
with:
node-version: "20"
cache: npm
- name: Validate REST API definition
run: |
npx @redocly/cli lint
test-docs:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v5
with:
# make rediraffecheckdiff requires git history to compare current
# commit with the main branch and previous releases.
fetch-depth: 0
- uses: actions/setup-python@v6
with:
python-version: "3.11"
cache: pip
cache-dependency-path: |
requirements.txt
docs/requirements.txt
- name: Install requirements
run: |
pip install -e . -r docs/requirements.txt pytest
- name: pytest docs/
run: |
pytest docs/
# readthedocs doesn't halt on warnings,
# so raise any warnings here
- name: build docs
run: |
cd docs
make html
# Output broken and permanently redirected links in a readable format
- name: check links
uses: manics/action-sphinx-linkcheck-summary@main
with:
docs-dir: docs
build-dir: docs/_build
# make rediraffecheckdiff compares files for different changesets
# these diff targets aren't always available
# - compare with base ref (usually 'main', always on 'origin') for pull requests
# - only compare with tags when running against jupyterhub/jupyterhub
# to avoid errors on forks, which often lack tags
- name: check redirects for this PR
if: github.event_name == 'pull_request'
run: |
cd docs
export REDIRAFFE_BRANCH=origin/${{ github.base_ref }}
make rediraffecheckdiff
# this should check currently published 'stable' links for redirects
- name: check redirects since last release
if: github.repository == 'jupyterhub/jupyterhub'
run: |
cd docs
export REDIRAFFE_BRANCH=$(git describe --tags --abbrev=0)
make rediraffecheckdiff
# longer-term redirect check (fixed version) for older links
- name: check redirects since 3.0.0
if: github.repository == 'jupyterhub/jupyterhub'
run: |
cd docs
export REDIRAFFE_BRANCH=3.0.0
make rediraffecheckdiff

.github/workflows/test-jsx.yml

@@ -1,48 +0,0 @@
# This is a GitHub workflow defining a set of jobs with a set of steps.
# ref: https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions
#
name: Test jsx (admin-react.js)
on:
pull_request:
paths:
- "jsx/**"
- ".github/workflows/test-jsx.yml"
push:
paths:
- "jsx/**"
- ".github/workflows/test-jsx.yml"
branches-ignore:
- "dependabot/**"
- "pre-commit-ci-update-config"
tags:
- "**"
workflow_dispatch:
permissions:
contents: read
jobs:
# The ./jsx folder contains React-based source code files that compile
# to share/jupyterhub/static/js/admin-react.js. The ./jsx folder also
# has tests that this job is meant to run with `npm test`,
# according to the documentation in jsx/README.md.
test-jsx-admin-react:
runs-on: ubuntu-22.04
timeout-minutes: 5
steps:
- uses: actions/checkout@v5
- uses: actions/setup-node@v5
with:
node-version: "20"
- name: install jsx
run: |
cd jsx
npm ci
- name: test
run: |
cd jsx
npm test

.github/workflows/test.yml

@@ -1,43 +1,60 @@
# This is a GitHub workflow defining a set of jobs with a set of steps.
# ref: https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions
# ref: https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions
#
name: Test
name: Run tests
# Trigger the workflow on all PRs, but only on pushed tags or commits to the
# main/master branch, to avoid duplicate runs triggered by branches pushed
# to a GitHub fork.
on:
pull_request:
paths-ignore:
- "docs/**"
- "**.md"
- "**.rst"
- ".github/workflows/*"
- "!.github/workflows/test.yml"
push:
paths-ignore:
- "docs/**"
- "**.md"
- "**.rst"
- ".github/workflows/*"
- "!.github/workflows/test.yml"
branches-ignore:
- "dependabot/**"
- "pre-commit-ci-update-config"
tags:
- "**"
workflow_dispatch:
defaults:
run:
# Declare that bash is used by default in this workflow's "run" steps.
#
# NOTE: bash will by default run with:
# --noprofile: Ignore ~/.profile etc.
# --norc: Ignore ~/.bashrc etc.
# -e: Exit directly on errors
# -o pipefail: Don't mask errors from a command piped into another command
shell: bash
env:
# Without this, UTF-8 content may be interpreted as ASCII and cause errors.
LANG: C.UTF-8
SQLALCHEMY_WARN_20: "1"
permissions:
contents: read
jobs:
# Run "pre-commit run --all-files"
pre-commit:
runs-on: ubuntu-20.04
timeout-minutes: 2
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.8
# ref: https://github.com/pre-commit/action
- uses: pre-commit/action@v2.0.0
- name: Help message if pre-commit fails
if: ${{ failure() }}
run: |
echo "You can install pre-commit hooks to automatically run formatting"
echo "on each commit with:"
echo " pre-commit install"
echo "or you can run by hand on staged files with"
echo " pre-commit run"
echo "or after-the-fact on already committed files with"
echo " pre-commit run --all-files"
# Run "pytest jupyterhub/tests" in various configurations
pytest:
runs-on: ubuntu-22.04
timeout-minutes: 15
runs-on: ubuntu-20.04
timeout-minutes: 10
strategy:
# Keep running even if one variation of the job fails
@@ -56,57 +73,27 @@ jobs:
# Tests everything when JupyterHub works against a dedicated mysql or
# postgresql server.
#
# legacy_notebook:
# jupyter_server:
# Tests everything when the user instances are started with
# the legacy notebook server instead of jupyter_server.
#
# ssl:
# Tests everything using internal SSL connections instead of
# unencrypted HTTP
# jupyter_server instead of notebook.
#
# main_dependencies:
# Tests everything when we use the latest available dependencies
# from: traitlets.
# Tests everything when we use the latest available dependencies
# from: traitlets.
#
# NOTE: Since only the values of these parameters are presented in the
# GitHub UI when the workflow runs, we avoid using true/false as
# values, instead duplicating the name to signal true.
# Python versions available at:
# https://github.com/actions/python-versions/blob/HEAD/versions-manifest.json
include:
- python: "3.8"
oldest_dependencies: oldest_dependencies
legacy_notebook: legacy_notebook
- python: "3.8"
jupyter_server: "1.*"
subset: singleuser
- python: "3.9"
- python: "3.6"
subdomain: subdomain
- python: "3.7"
db: mysql
- python: "3.10"
- python: "3.8"
db: postgres
- python: "3.12"
subdomain: subdomain
serverextension: serverextension
- python: "3.11"
ssl: ssl
serverextension: serverextension
- python: "3.11"
jupyverse: jupyverse
subset: singleuser
- python: "3.11"
subdomain: subdomain
noextension: noextension
subset: singleuser
- python: "3.11"
ssl: ssl
noextension: noextension
subset: singleuser
- python: "3.11"
browser: browser
- python: "3.11"
subdomain: subdomain
browser: browser
- python: "3.12"
- python: "3.8"
jupyter_server: jupyter_server
- python: "3.9"
main_dependencies: main_dependencies
steps:
@@ -120,10 +107,7 @@ jobs:
fi
if [ "${{ matrix.db }}" == "mysql" ]; then
echo "MYSQL_HOST=127.0.0.1" >> $GITHUB_ENV
echo "JUPYTERHUB_TEST_DB_URL=mysql+mysqldb://root@127.0.0.1:3306/jupyterhub" >> $GITHUB_ENV
fi
if [ "${{ matrix.ssl }}" == "ssl" ]; then
echo "SSL_ENABLED=1" >> $GITHUB_ENV
echo "JUPYTERHUB_TEST_DB_URL=mysql+mysqlconnector://root@127.0.0.1:3306/jupyterhub" >> $GITHUB_ENV
fi
if [ "${{ matrix.db }}" == "postgres" ]; then
echo "PGHOST=127.0.0.1" >> $GITHUB_ENV
@@ -131,77 +115,47 @@ jobs:
echo "PGPASSWORD=hub[test/:?" >> $GITHUB_ENV
echo "JUPYTERHUB_TEST_DB_URL=postgresql://test_user:hub%5Btest%2F%3A%3F@127.0.0.1:5432/jupyterhub" >> $GITHUB_ENV
fi
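# The PGPASSWORD value is percent-encoded when embedded in
# JUPYTERHUB_TEST_DB_URL above; in Python, for example:
#   urllib.parse.quote("hub[test/:?", safe="") == "hub%5Btest%2F%3A%3F"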
if [ "${{ matrix.serverextension }}" != "" ]; then
echo "JUPYTERHUB_SINGLEUSER_EXTENSION=1" >> $GITHUB_ENV
elif [ "${{ matrix.noextension }}" != "" ]; then
echo "JUPYTERHUB_SINGLEUSER_EXTENSION=0" >> $GITHUB_ENV
if [ "${{ matrix.jupyter_server }}" != "" ]; then
echo "JUPYTERHUB_SINGLEUSER_APP=jupyterhub.tests.mockserverapp.MockServerApp" >> $GITHUB_ENV
fi
if [ "${{ matrix.jupyverse }}" != "" ]; then
echo "JUPYTERHUB_SINGLEUSER_APP=jupyverse" >> $GITHUB_ENV
fi
- uses: actions/checkout@v5
# NOTE: actions/setup-node@v5 makes use of a cache within the GitHub base
- uses: actions/checkout@v2
# NOTE: actions/setup-node@v1 makes use of a cache within the GitHub base
# environment and setup in a fraction of a second.
- name: Install Node
uses: actions/setup-node@v5
- name: Install Node v14
uses: actions/setup-node@v1
with:
node-version: "20"
- name: Install Javascript dependencies
node-version: "14"
- name: Install Node dependencies
run: |
npm install
npm install -g configurable-http-proxy yarn
npm install -g configurable-http-proxy
npm list
# NOTE: actions/setup-python@v6 makes use of a cache within the GitHub base
# NOTE: actions/setup-python@v2 makes use of a cache within the GitHub base
# environment and setup in a fraction of a second.
- name: Install Python ${{ matrix.python }}
uses: actions/setup-python@v6
uses: actions/setup-python@v2
with:
python-version: "${{ matrix.python }}"
cache: pip
cache-dependency-path: |
pyproject.toml
requirements.txt
ci/oldest-dependencies/requirements.old
python-version: ${{ matrix.python }}
- name: Install Python dependencies
run: |
pip install --upgrade pip
if [ "${{ matrix.oldest_dependencies }}" != "" ]; then
# frozen env with oldest dependencies
# make sure our `>=` pins really do express our minimum supported versions
pip install -r ci/oldest-dependencies/requirements.old -e .
else
pip install --pre -e ".[test]" "pycurl; python_version >= '3.10'"
fi
pip install --upgrade . -r dev-requirements.txt
if [ "${{ matrix.main_dependencies }}" != "" ]; then
# Tests are broken:
# https://github.com/jupyterhub/jupyterhub/issues/4418
# pip install git+https://github.com/ipython/traitlets#egg=traitlets --force
pip install --upgrade --pre sqlalchemy
fi
if [ "${{ matrix.legacy_notebook }}" != "" ]; then
pip uninstall jupyter_server --yes
pip install 'notebook<7'
pip install git+https://github.com/ipython/traitlets#egg=traitlets --force
fi
if [ "${{ matrix.jupyter_server }}" != "" ]; then
pip install "jupyter_server==${{ matrix.jupyter_server }}"
fi
if [ "${{ matrix.jupyverse }}" != "" ]; then
pip install "jupyverse[jupyterlab,auth-jupyterhub]"
pip install -e .
pip uninstall notebook --yes
pip install jupyter_server
fi
if [ "${{ matrix.db }}" == "mysql" ]; then
pip install mysqlclient
pip install mysql-connector-python
fi
if [ "${{ matrix.db }}" == "postgres" ]; then
pip install psycopg2-binary
fi
if [ "${{ matrix.serverextension }}" != "" ]; then
pip install 'jupyter-server>=2'
fi
pip freeze
@@ -230,32 +184,23 @@ jobs:
if: ${{ matrix.db }}
run: |
if [ "${{ matrix.db }}" == "mysql" ]; then
if [[ -z "$(which mysql)" ]]; then
sudo apt-get update
sudo apt-get install -y mysql-client
fi
DB=mysql bash ci/docker-db.sh
DB=mysql bash ci/init-db.sh
fi
if [ "${{ matrix.db }}" == "postgres" ]; then
if [[ -z "$(which psql)" ]]; then
sudo apt-get update
sudo apt-get install -y postgresql-client
fi
DB=postgres bash ci/docker-db.sh
DB=postgres bash ci/init-db.sh
fi
- name: Configure browser tests
if: matrix.browser
run: echo "PYTEST_ADDOPTS=$PYTEST_ADDOPTS -m browser" >> "${GITHUB_ENV}"
- name: Ensure browsers are installed for playwright
if: matrix.browser
run: python -m playwright install --with-deps firefox
- name: Run pytest
# FIXME: --color=yes explicitly set because:
# https://github.com/actions/runner/issues/241
run: |
pytest -k "${{ matrix.subset }}" --maxfail=2 --cov=jupyterhub jupyterhub/tests
- uses: codecov/codecov-action@v5
pytest -v --maxfail=2 --color=yes --cov=jupyterhub jupyterhub/tests
- name: Submit codecov report
run: |
codecov

.gitignore

@@ -7,24 +7,17 @@ node_modules
dist
docs/_build
docs/build
docs/source/reference/metrics.md
docs/source/_static/rest-api
.ipynb_checkpoints
.virtual_documents
jsx/build/
# ignore config file at the top-level of the repo
# but not sub-dirs
/jupyterhub_config.py
jupyterhub_cookie_secret
jupyterhub.sqlite
jupyterhub.sqlite*
package-lock.json
share/jupyterhub/static/components
share/jupyterhub/static/css/style.css
share/jupyterhub/static/css/style.css.map
share/jupyterhub/static/css/style.min.css
share/jupyterhub/static/css/style.min.css.map
share/jupyterhub/static/js/admin-react.js*
*.egg-info
MANIFEST
.coverage
@@ -35,8 +28,3 @@ htmlcov
.pytest_cache
pip-wheel-metadata
docs/source/reference/metrics.rst
oldest-requirements.txt
jupyterhub-proxy.pid
examples/server-api/service-token
*.hot-update*

.pre-commit-config.yaml

@@ -1,86 +1,19 @@
# pre-commit is a tool to perform a predefined set of tasks manually and/or
# automatically before git commits are made.
#
# Config reference: https://pre-commit.com/#pre-commit-configyaml---top-level
#
# Common tasks
#
# - Run on all files: pre-commit run --all-files
# - Register git hooks: pre-commit install --install-hooks
#
ci:
# pre-commit.ci will open PRs updating our hooks once a month
autoupdate_schedule: monthly
repos:
# autoformat and lint Python code
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.12.11
hooks:
- id: ruff
types_or:
- python
- jupyter
args: ["--fix", "--show-fixes"]
- id: ruff-format
types_or:
- python
- jupyter
# Autoformat: markdown, yaml, javascript (see the file .prettierignore)
- repo: https://github.com/rbubley/mirrors-prettier
rev: v3.6.2
hooks:
- id: prettier
exclude: .*/templates/.*|docs/source/_static/rest-api.yml|docs/source/rbac/scope-table.md
# autoformat HTML templates
- repo: https://github.com/djlint/djLint
rev: v1.36.4
hooks:
- id: djlint-reformat-jinja
files: ".*templates/.*.html"
types_or: ["html"]
exclude: redoc.html
- id: djlint-jinja
files: ".*templates/.*.html"
types_or: ["html"]
# Autoformat and linting, misc. details
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v6.0.0
hooks:
- id: end-of-file-fixer
exclude: share/jupyterhub/static/js/admin-react.js
- id: requirements-txt-fixer
- id: check-case-conflict
- id: check-executables-have-shebangs
# source docs: rest-api.yml and scope-table.md are autogenerated
- repo: local
hooks:
- id: update-api-and-scope-docs
name: Update rest-api.yml and scope-table.md based on scopes.py
language: python
additional_dependencies: ["pytablewriter", "ruamel.yaml"]
entry: python docs/source/rbac/generate-scope-table.py
args:
- --update
files: jupyterhub/scopes.py
pass_filenames: false
# run eslint in the jsx directory
# need to pass through 'jsx:install-run' hook in
# top-level package.json to ensure dependencies are installed
# eslint pre-commit hook doesn't really work with eslint 9,
# so use `npm run lint:fix`
- id: jsx-eslint
name: eslint in jsx/
entry: npm run jsx:install-run lint:fix
pass_filenames: false
language: node
files: "jsx/.*"
# can't run on pre-commit; hangs, for some reason
stages:
- manual
- repo: https://github.com/asottile/reorder_python_imports
rev: v1.9.0
hooks:
- id: reorder-python-imports
- repo: https://github.com/psf/black
rev: 19.10b0
hooks:
- id: black
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.4.0
hooks:
- id: end-of-file-fixer
- id: check-json
- id: check-yaml
- id: check-case-conflict
- id: check-executables-have-shebangs
- id: requirements-txt-fixer
- id: flake8

.prettierignore

@@ -1,4 +0,0 @@
share/jupyterhub/templates/
share/jupyterhub/static/js/admin-react.js
jupyterhub/singleuser/templates/
docs/source/_templates/

.readthedocs.yaml

@@ -1,25 +0,0 @@
# Configuration on how ReadTheDocs (RTD) builds our documentation
# ref: https://readthedocs.org/projects/jupyterhub/
# ref: https://docs.readthedocs.io/en/stable/config-file/v2.html
#
version: 2
sphinx:
configuration: docs/source/conf.py
build:
os: ubuntu-24.04
tools:
python: "3.13"
python:
install:
- path: .
- requirements: docs/requirements.txt
formats:
# Adding htmlzip enables a Downloads section in the rendered website's RTD
# menu where the html build can be downloaded. This doesn't require any
# additional configuration in docs/source/conf.py.
#
- htmlzip

CHECKLIST-Release.md

@@ -0,0 +1,26 @@
# Release checklist
- [ ] Upgrade Docs prior to Release
- [ ] Change log
- [ ] New features documented
- [ ] Update the contributor list - thank you page
- [ ] Upgrade and test Reference Deployments
- [ ] Release software
- [ ] Make sure 0 issues in milestone
- [ ] Follow release process steps
- [ ] Send builds to PyPI (Warehouse) and Conda Forge
- [ ] Blog post and/or release note
- [ ] Notify users of release
- [ ] Email Jupyter and Jupyter In Education mailing lists
- [ ] Tweet (optional)
- [ ] Increment the version number for the next release
- [ ] Update roadmap

CODE_OF_CONDUCT.md

@@ -1 +1 @@
Please refer to [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md).
Please refer to [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md).

CONTRIBUTING.md

@@ -1,40 +1,137 @@
# Contributing to JupyterHub
Welcome! As a [Jupyter](https://jupyter.org) project,
you can follow the [Jupyter contributor guide](https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html).
you can follow the [Jupyter contributor guide](https://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html).
Make sure to also follow [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md)
Make sure to also follow [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md)
for a friendly and welcoming collaborative environment.
Please see our documentation on
## Setting up a development environment
- [Setting up a development install](https://jupyterhub.readthedocs.io/en/latest/contributing/setup.html)
- [Testing JupyterHub and linting code](https://jupyterhub.readthedocs.io/en/latest/contributing/tests.html)
<!--
https://jupyterhub.readthedocs.io/en/stable/contributing/setup.html
contains a lot of the same information. Should we merge the docs and
just have this page link to that one?
-->
If you need some help, feel free to ask on [Gitter](https://gitter.im/jupyterhub/jupyterhub) or [Discourse](https://discourse.jupyter.org/).
JupyterHub requires Python >= 3.5 and nodejs.
## Our Copyright Policy
As a Python project, a development install of JupyterHub follows standard practices for the basics (steps 1-2).
Jupyter uses a shared copyright model. Each contributor maintains copyright
over their contributions to Jupyter. But, it is important to note that these
contributions are typically only changes to the repositories. Thus, the Jupyter
source code, in its entirety is not the copyright of any single person or
institution. Instead, it is the collective copyright of the entire Jupyter
Development Team. If individual contributors want to maintain a record of what
changes/contributions they have specific copyright on, they should indicate
their copyright in the commit message of the change, when they commit the
change to one of the Jupyter repositories.
With this in mind, the following banner should be used in any source code file
to indicate the copyright and license terms:
1. clone the repo
```bash
git clone https://github.com/jupyterhub/jupyterhub
```
2. do a development install with pip
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
```bash
cd jupyterhub
python3 -m pip install --editable .
```
3. install the development requirements,
which include things like testing tools
### About the Jupyter Development Team
```bash
python3 -m pip install -r dev-requirements.txt
```
4. install configurable-http-proxy with npm:
The Jupyter Development Team is the set of all contributors to the Jupyter project.
This includes all of the Jupyter subprojects.
```bash
npm install -g configurable-http-proxy
```
5. set up pre-commit hooks for automatic code formatting, etc.
The team that coordinates the JupyterHub subproject can be found here:
https://compass.hub.jupyter.org/page/governance.html
```bash
pre-commit install
```
You can also invoke the pre-commit hook manually at any time with
```bash
pre-commit run
```
## Contributing
JupyterHub has adopted automatic code formatting so you shouldn't
need to worry too much about your code style.
As long as your code is valid,
the pre-commit hook should take care of how it should look.
You can invoke the pre-commit hook by hand at any time with:
```bash
pre-commit run
```
which should run any autoformatting on your code
and tell you about any errors it couldn't fix automatically.
You may also install [black integration](https://github.com/psf/black#editor-integration)
into your text editor to format code automatically.
If you have already committed files before setting up the pre-commit
hook with `pre-commit install`, you can fix everything up using
`pre-commit run --all-files`. You need to make the fixing commit
yourself after that.
## Testing
It's a good idea to write tests that exercise any new features,
or that trigger any bugs you have fixed, to catch regressions.
You can run the tests with:
```bash
pytest -v
```
in the repo directory. If you want to just run certain tests,
check out the [pytest docs](https://pytest.readthedocs.io/en/latest/usage.html)
for how pytest can be called.
For instance, to test only spawner-related things in the REST API:
```bash
pytest -v -k spawn jupyterhub/tests/test_api.py
```
The tests live in `jupyterhub/tests` and are organized roughly into:
1. `test_api.py` tests the REST API
2. `test_pages.py` tests loading the HTML pages
and other collections of tests for different components.
When writing a new test, there is usually an existing test of
similar functionality to use as a model; related tests should
be added nearby.
The fixtures live in `jupyterhub/tests/conftest.py`. There are
fixtures that can be used for JupyterHub components, such as:
- `app`: an instance of JupyterHub with mocked parts
- `auth_state_enabled`: enables persisting auth_state (like authentication tokens)
- `db`: a sqlite in-memory DB session
- `io_loop`: a Tornado event loop
- `event_loop`: a new asyncio event loop
- `user`: creates a new temporary user
- `admin_user`: creates a new temporary admin user
- single user servers
- `cleanup_after`: allows cleanup of single user servers between tests
- mocked service
- `MockServiceSpawner`: a spawner that mocks services for testing with a short poll interval
- `mockservice`: mocked service with no external service url
- `mockservice_url`: mocked service with a url to test external services
And fixtures to add functionality or spawning behavior:
- `admin_access`: grants admin access
- `no_patience`: sets slow-spawning timeouts to zero
- `slow_spawn`: enables the SlowSpawner (a spawner that takes a few seconds to start)
- `never_spawn`: enables the NeverSpawner (a spawner that will never start)
- `bad_spawn`: enables the BadSpawner (a spawner that fails immediately)
- `slow_bad_spawn`: enables the SlowBadSpawner (a spawner that fails after a short delay)
To read more about fixtures check out the
[pytest docs](https://docs.pytest.org/en/latest/fixture.html)
for how to use the existing fixtures, and how to create new ones.
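For illustration, here is a minimal sketch of a test built on these fixtures. The `api_request` helper comes from `jupyterhub/tests/utils`; the test itself is hypothetical, not one of the existing tests:

```python
from jupyterhub.tests.utils import api_request


async def test_user_appears_in_list(app, user):
    # `app` is the mocked JupyterHub instance, `user` a temporary user
    r = await api_request(app, 'users')
    r.raise_for_status()
    # the REST API lists every user, including the temporary one
    assert user.name in {u['name'] for u in r.json()}
```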
When in doubt, feel free to [ask](https://gitter.im/jupyterhub/jupyterhub).

COPYING.md

@@ -0,0 +1,59 @@
# The Jupyter multi-user notebook server licensing terms
Jupyter multi-user notebook server is licensed under the terms of the Modified BSD License
(also known as New or Revised or 3-Clause BSD), as follows:
- Copyright (c) 2014-, Jupyter Development Team
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
Neither the name of the Jupyter Development Team nor the names of its
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
## About the Jupyter Development Team
The Jupyter Development Team is the set of all contributors to the Jupyter project.
This includes all of the Jupyter subprojects.
The core team that coordinates development on GitHub can be found here:
https://github.com/jupyter/.
## Our Copyright Policy
Jupyter uses a shared copyright model. Each contributor maintains copyright
over their contributions to Jupyter. But, it is important to note that these
contributions are typically only changes to the repositories. Thus, the Jupyter
source code, in its entirety is not the copyright of any single person or
institution. Instead, it is the collective copyright of the entire Jupyter
Development Team. If individual contributors want to maintain a record of what
changes/contributions they have specific copyright on, they should indicate
their copyright in the commit message of the change, when they commit the
change to one of the Jupyter repositories.
With this in mind, the following banner should be used in any source code file
to indicate the copyright and license terms:
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.

Dockerfile

@@ -0,0 +1,101 @@
# An incomplete base Docker image for running JupyterHub
#
# Add your configuration to create a complete derivative Docker image.
#
# Include your configuration settings by starting with one of two options:
#
# Option 1:
#
# FROM jupyterhub/jupyterhub:latest
#
# And put your configuration file jupyterhub_config.py in /srv/jupyterhub/jupyterhub_config.py.
#
# Option 2:
#
# Or you can create your jupyterhub config and database on the host machine, and mount it with:
#
# docker run -v $PWD:/srv/jupyterhub -t jupyterhub/jupyterhub
#
# NOTE
# If you base on jupyterhub/jupyterhub-onbuild
# your jupyterhub_config.py will be added automatically
# from your docker directory.
ARG BASE_IMAGE=ubuntu:focal-20200729@sha256:6f2fb2f9fb5582f8b587837afd6ea8f37d8d1d9e41168c90f410a6ef15fa8ce5
FROM $BASE_IMAGE AS builder
USER root
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update \
&& apt-get install -yq --no-install-recommends \
build-essential \
ca-certificates \
locales \
python3-dev \
python3-pip \
python3-pycurl \
nodejs \
npm \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN python3 -m pip install --upgrade setuptools pip wheel
# copy everything except what's in .dockerignore; it's a
# compromise between needing to rebuild and maintaining
# what needs to be part of the build
COPY . /src/jupyterhub/
WORKDIR /src/jupyterhub
# Build client component packages (they will be copied into ./share and
# packaged with the built wheel.)
RUN python3 setup.py bdist_wheel
RUN python3 -m pip wheel --wheel-dir wheelhouse dist/*.whl
FROM $BASE_IMAGE
USER root
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get install -yq --no-install-recommends \
ca-certificates \
curl \
gnupg \
locales \
python3-pip \
python3-pycurl \
nodejs \
npm \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ENV SHELL=/bin/bash \
LC_ALL=en_US.UTF-8 \
LANG=en_US.UTF-8 \
LANGUAGE=en_US.UTF-8
RUN locale-gen $LC_ALL
# always make sure pip is up to date!
RUN python3 -m pip install --no-cache --upgrade setuptools pip
RUN npm install -g configurable-http-proxy@^4.2.0 \
&& rm -rf ~/.npm
# install the wheels we built in the first stage
COPY --from=builder /src/jupyterhub/wheelhouse /tmp/wheelhouse
RUN python3 -m pip install --no-cache /tmp/wheelhouse/*
RUN mkdir -p /srv/jupyterhub/
WORKDIR /srv/jupyterhub/
EXPOSE 8000
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
LABEL org.jupyter.service="jupyterhub"
CMD ["jupyterhub"]

LICENSE

@@ -1,11 +0,0 @@
Copyright 2014-, Jupyter Development Team
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

MANIFEST.in

@@ -1,13 +1,24 @@
# using setuptools-scm means we only need to handle _non-tracked files here_
include README.md
include COPYING.md
include setupegg.py
include bower-lite
include package.json
include package-lock.json
include *requirements.txt
include Dockerfile
# include untracked js/css artifacts, components
graft onbuild
graft jupyterhub
graft scripts
graft share
graft singleuser
graft ci
# prune some large unused files from components.
# these patterns affect source distributions (sdists)
# we have stricter exclusions from installation in setup.py:get_data_files
# Documentation
graft docs
prune docs/node_modules
# prune some large unused files from components
prune share/jupyterhub/static/components/bootstrap/dist/css
exclude share/jupyterhub/static/components/bootstrap/dist/fonts/*.svg
prune share/jupyterhub/static/components/font-awesome/css
@@ -17,3 +28,11 @@ prune share/jupyterhub/static/components/jquery/external
prune share/jupyterhub/static/components/jquery/src
prune share/jupyterhub/static/components/moment/lang
prune share/jupyterhub/static/components/moment/min
# Patterns to exclude from any directory
global-exclude *~
global-exclude *.pyc
global-exclude *.pyo
global-exclude .git
global-exclude .ipynb_checkpoints
global-exclude .bower.json

README.md

@@ -6,27 +6,29 @@
**[License](#license)** |
**[Help and Resources](#help-and-resources)**
---
# [JupyterHub](https://github.com/jupyterhub/jupyterhub)
[![Latest PyPI version](https://img.shields.io/pypi/v/jupyterhub?logo=pypi)](https://pypi.python.org/pypi/jupyterhub)
[![Latest conda-forge version](https://img.shields.io/conda/vn/conda-forge/jupyterhub?logo=conda-forge)](https://anaconda.org/conda-forge/jupyterhub)
[![Latest conda-forge version](https://img.shields.io/conda/vn/conda-forge/jupyterhub?logo=conda-forge)](https://www.npmjs.com/package/jupyterhub)
[![Documentation build status](https://img.shields.io/readthedocs/jupyterhub?logo=read-the-docs)](https://jupyterhub.readthedocs.org/en/latest/)
[![GitHub Workflow Status - Test](https://img.shields.io/github/workflow/status/jupyterhub/jupyterhub/Test?logo=github&label=tests)](https://github.com/jupyterhub/jupyterhub/actions)
[![Test coverage of code](https://codecov.io/gh/jupyterhub/jupyterhub/branch/main/graph/badge.svg)](https://codecov.io/gh/jupyterhub/jupyterhub)
[![TravisCI build status](https://img.shields.io/travis/com/jupyterhub/jupyterhub?logo=travis)](https://travis-ci.com/jupyterhub/jupyterhub)
[![DockerHub build status](https://img.shields.io/docker/build/jupyterhub/jupyterhub?logo=docker&label=build)](https://hub.docker.com/r/jupyterhub/jupyterhub/tags)
[![CircleCI build status](https://img.shields.io/circleci/build/github/jupyterhub/jupyterhub?logo=circleci)](https://circleci.com/gh/jupyterhub/jupyterhub)<!-- CircleCI Token: b5b65862eb2617b9a8d39e79340b0a6b816da8cc -->
[![Test coverage of code](https://codecov.io/gh/jupyterhub/jupyterhub/branch/master/graph/badge.svg)](https://codecov.io/gh/jupyterhub/jupyterhub)
[![GitHub](https://img.shields.io/badge/issue_tracking-github-blue?logo=github)](https://github.com/jupyterhub/jupyterhub/issues)
[![Discourse](https://img.shields.io/badge/help_forum-discourse-blue?logo=discourse)](https://discourse.jupyter.org/c/jupyterhub)
[![Gitter](https://img.shields.io/badge/social_chat-gitter-blue?logo=gitter)](https://gitter.im/jupyterhub/jupyterhub)
With [JupyterHub](https://jupyterhub.readthedocs.io) you can create a
**multi-user Hub** that spawns, manages, and proxies multiple instances of the
**multi-user Hub** which spawns, manages, and proxies multiple instances of the
single-user [Jupyter notebook](https://jupyter-notebook.readthedocs.io)
server.
[Project Jupyter](https://jupyter.org) created JupyterHub to support many
users. The Hub can offer notebook servers to a class of students, a corporate
data science workgroup, a scientific research project, or a high-performance
data science workgroup, a scientific research project, or a high performance
computing group.
## Technical overview
@@ -40,29 +42,37 @@ Three main actors make up JupyterHub:
Basic principles for operation are:
- Hub launches a proxy.
- The Proxy forwards all requests to Hub by default.
- Hub handles login and spawns single-user servers on demand.
- Hub configures proxy to forward URL prefixes to the single-user notebook
- Proxy forwards all requests to Hub by default.
- Hub handles login, and spawns single-user servers on demand.
- Hub configures proxy to forward url prefixes to the single-user notebook
servers.
JupyterHub also provides a
[REST API][]
[REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default)
for administration of the Hub and its users.
[rest api]: https://jupyterhub.readthedocs.io/en/latest/reference/rest-api.html
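For example, a quick sketch of a REST API call, assuming a hub on localhost and an API token with admin access in `$TOKEN`:

```bash
# list the hub's users via the REST API
curl -H "Authorization: token $TOKEN" http://localhost:8000/hub/api/users
```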
## Installation
### Check prerequisites
- A Linux/Unix based system
- [Python](https://www.python.org/downloads/) 3.8 or greater
- [Python](https://www.python.org/downloads/) 3.5 or greater
- [nodejs/npm](https://www.npmjs.com/)
- If you are using **`conda`**, the nodejs and npm dependencies will be installed for
* If you are using **`conda`**, the nodejs and npm dependencies will be installed for
you by conda.
- If you are using **`pip`**, install a recent version (at least 12.0) of
* If you are using **`pip`**, install a recent version of
[nodejs/npm](https://docs.npmjs.com/getting-started/installing-node).
For example, install it on Linux (Debian/Ubuntu) using:
```
sudo apt-get install npm nodejs-legacy
```
The `nodejs-legacy` package installs the `node` executable and is currently
required for npm to work on Debian/Ubuntu.
- If using the default PAM Authenticator, a [pluggable authentication module (PAM)](https://en.wikipedia.org/wiki/Pluggable_authentication_module).
- TLS certificate and key for HTTPS communication
@@ -78,11 +88,12 @@ To install JupyterHub along with its dependencies including nodejs/npm:
conda install -c conda-forge jupyterhub
```
If you plan to run notebook servers locally, install JupyterLab or Jupyter notebook:
If you plan to run notebook servers locally, install the Jupyter notebook
or JupyterLab:
```bash
conda install jupyterlab
conda install notebook
conda install jupyterlab
```
#### Using `pip`
@@ -91,13 +102,13 @@ JupyterHub can be installed with `pip`, and the proxy with `npm`:
```bash
npm install -g configurable-http-proxy
python3 -m pip install jupyterhub
python3 -m pip install jupyterhub
```
If you plan to run notebook servers locally, you will need to install
[JupyterLab or Jupyter notebook](https://jupyter.readthedocs.io/en/latest/install.html):
If you plan to run notebook servers locally, you will need to install the
[Jupyter notebook](https://jupyter.readthedocs.io/en/latest/install.html)
package:
python3 -m pip install --upgrade jupyterlab
python3 -m pip install --upgrade notebook
### Run the Hub server
@@ -106,17 +117,18 @@ To start the Hub server, run the command:
jupyterhub
Visit `http://localhost:8000` in your browser, and sign in with your system username and password.
Visit `https://localhost:8000` in your browser, and sign in with your unix
PAM credentials.
_Note_: To allow multiple users to sign in to the server, you will need to
run the `jupyterhub` command as a _privileged user_, such as root.
The [documentation](https://jupyterhub.readthedocs.io/en/latest/howto/configuration/config-sudo.html)
describes how to run the server as a _less privileged user_, which requires
*Note*: To allow multiple users to sign into the server, you will need to
run the `jupyterhub` command as a *privileged user*, such as root.
The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges)
describes how to run the server as a *less privileged user*, which requires
more configuration of the system.
## Configuration
The [Getting Started](https://jupyterhub.readthedocs.io/en/latest/tutorial/index.html#getting-started) section of the
The [Getting Started](https://jupyterhub.readthedocs.io/en/latest/getting-started/index.html) section of the
documentation explains the common steps in setting up JupyterHub.
The [**JupyterHub tutorial**](https://github.com/jupyterhub/jupyterhub-tutorial)
@@ -130,7 +142,7 @@ To generate a default config file with settings and descriptions:
### Start the Hub
To start the Hub on a specific url and port `10.0.1.2:443` with **https**:
To start the Hub on a specific url and port ``10.0.1.2:443`` with **https**:
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
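The same settings can be expressed in the generated config file instead of CLI flags; a minimal sketch:

```python
# jupyterhub_config.py: equivalent to the command line above
c.JupyterHub.ip = '10.0.1.2'
c.JupyterHub.port = 443
c.JupyterHub.ssl_key = 'my_ssl.key'
c.JupyterHub.ssl_cert = 'my_ssl.cert'
```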
@@ -158,10 +170,10 @@ To start the Hub on a specific url and port `10.0.1.2:443` with **https**:
## Docker
A starter [**docker image for JupyterHub**](https://quay.io/repository/jupyterhub/jupyterhub)
A starter [**docker image for JupyterHub**](https://hub.docker.com/r/jupyterhub/jupyterhub/)
gives a baseline deployment of JupyterHub using Docker.
**Important:** This `quay.io/jupyterhub/jupyterhub` image contains only the Hub itself,
**Important:** This `jupyterhub/jupyterhub` image contains only the Hub itself,
with no configuration. In general, one needs to make a derivative image, with
at least a `jupyterhub_config.py` setting up an Authenticator and/or a Spawner.
To run the single-user servers, which may be on the same system as the Hub or
@@ -169,7 +181,7 @@ not, Jupyter Notebook version 4 or greater must be installed.
The JupyterHub docker image can be started with the following command:
docker run -p 8000:8000 -d --name jupyterhub quay.io/jupyterhub/jupyterhub jupyterhub
docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub
This command will create a container named `jupyterhub` that you can
**stop and resume** with `docker stop/start`.
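You can also mount a host directory so the hub's database and cookie secret survive container restarts (see the note on mounting volumes below); a sketch, with an illustrative host path:

```bash
docker run -p 8000:8000 -d --name jupyterhub \
  -v /srv/jupyterhub:/srv/jupyterhub \
  jupyterhub/jupyterhub jupyterhub
```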
@@ -179,7 +191,7 @@ this a good choice for **testing JupyterHub on your desktop or laptop**.
If you want to run docker on a computer that has a public IP then you should
(as in MUST) **secure it with ssl** by adding ssl options to your docker
configuration or by using an ssl enabled proxy.
configuration or by using a ssl enabled proxy.
[Mounting volumes](https://docs.docker.com/engine/admin/volumes/volumes/) will
allow you to **store data outside the docker image (host system) so it will be persistent**, even when you start
@@ -192,7 +204,7 @@ These accounts will be used for authentication in JupyterHub's default configura
## Contributing
If you would like to contribute to the project, please read our
[contributor documentation](https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html)
[contributor documentation](http://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html)
and the [`CONTRIBUTING.md`](CONTRIBUTING.md). The `CONTRIBUTING.md` file
explains how to set up a development installation, how to run the test suite,
and how to contribute to documentation.
@@ -219,18 +231,19 @@ docker container or Linux VM.
We use a shared copyright model that enables all contributors to maintain the
copyright on their contributions.
All code is licensed under the terms of the [revised BSD license](./LICENSE).
All code is licensed under the terms of the revised BSD license.
## Help and resources
We encourage you to ask questions and share ideas on the [Jupyter community forum](https://discourse.jupyter.org/).
You can also talk with us on our JupyterHub [Gitter](https://gitter.im/jupyterhub/jupyterhub) channel.
We encourage you to ask questions on the [Jupyter mailing list](https://groups.google.com/forum/#!forum/jupyter).
To participate in development discussions or get help, talk with us on
our JupyterHub [Gitter](https://gitter.im/jupyterhub/jupyterhub) channel.
- [Reporting Issues](https://github.com/jupyterhub/jupyterhub/issues)
- [JupyterHub tutorial](https://github.com/jupyterhub/jupyterhub-tutorial)
- [Documentation for JupyterHub](https://jupyterhub.readthedocs.io/en/latest/)
- [Documentation for JupyterHub's REST API][rest api]
- [Documentation for Project Jupyter](http://jupyter.readthedocs.io/en/latest/index.html)
- [Documentation for JupyterHub](https://jupyterhub.readthedocs.io/en/latest/) | [PDF (latest)](https://media.readthedocs.org/pdf/jupyterhub/latest/jupyterhub.pdf) | [PDF (stable)](https://media.readthedocs.org/pdf/jupyterhub/stable/jupyterhub.pdf)
- [Documentation for JupyterHub's REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default)
- [Documentation for Project Jupyter](http://jupyter.readthedocs.io/en/latest/index.html) | [PDF](https://media.readthedocs.org/pdf/jupyter/latest/jupyter.pdf)
- [Project Jupyter website](https://jupyter.org)
- [Project Jupyter community](https://jupyter.org/community)

RELEASE.md

@@ -1,55 +0,0 @@
# How to make a release
`jupyterhub` is a package available on [PyPI][] and [conda-forge][].
These are instructions on how to make a release.
## Pre-requisites
- Push rights to [jupyterhub/jupyterhub][]
- Push rights to [conda-forge/jupyterhub-feedstock][]
## Steps to make a release
1. Create a PR updating `docs/source/changelog.md` with [github-activity][] and
continue only when it's merged.
```shell
pip install github-activity
github-activity --heading-level=3 jupyterhub/jupyterhub
```
1. Checkout main and make sure it is up to date.
```shell
git checkout main
git fetch origin main
git reset --hard origin/main
```
1. Update the version, make commits, and push a git tag with `tbump`.
```shell
pip install tbump
tbump --dry-run ${VERSION}
tbump ${VERSION}
```
Following this, the [CI system][] will build and publish a release.
1. Reset the version back to dev, e.g. `2.1.0.dev` after releasing `2.0.0`
```shell
tbump --no-tag ${NEXT_VERSION}.dev
```
1. Following the release to PyPI, an automated PR should arrive to
[conda-forge/jupyterhub-feedstock][] with instructions.
[pypi]: https://pypi.org/project/jupyterhub/
[conda-forge]: https://anaconda.org/conda-forge/jupyterhub
[jupyterhub/jupyterhub]: https://github.com/jupyterhub/jupyterhub
[conda-forge/jupyterhub-feedstock]: https://github.com/conda-forge/jupyterhub-feedstock
[github-activity]: https://github.com/executablebooks/github-activity
[ci system]: https://github.com/jupyterhub/jupyterhub/actions/workflows/release.yml

SECURITY.md

@@ -1,5 +0,0 @@
# Reporting a Vulnerability
If you believe you've found a security vulnerability in a Jupyter
project, please report it!
See the [security documentation](https://jupyterhub.readthedocs.org/en/latest/contributing/security.html) for how.

bower-lite

@@ -7,7 +7,6 @@ bower-lite
Since Bower's on its way out,
stage frontend dependencies from node_modules into components
"""
import json
import os
import shutil
@@ -30,5 +29,5 @@ dependencies = package_json['dependencies']
for dep in dependencies:
src = join(node_modules, dep)
dest = join(components, dep)
print(f"{src} -> {dest}")
print("%s -> %s" % (src, dest))
shutil.copytree(src, dest)

ci/check_installed_data.py

@@ -1,36 +0,0 @@
#!/usr/bin/env python
# Check that installed package contains everything we expect
from pathlib import Path
import jupyterhub
from jupyterhub._data import DATA_FILES_PATH
print("Checking jupyterhub._data", end=" ")
print(f"DATA_FILES_PATH={DATA_FILES_PATH}", end=" ")
DATA_FILES_PATH = Path(DATA_FILES_PATH)
assert DATA_FILES_PATH.is_dir(), DATA_FILES_PATH
for subpath in (
"templates/spawn.html",
"static/css/style.min.css",
"static/components/jquery/dist/jquery.js",
"static/js/admin-react.js",
):
path = DATA_FILES_PATH / subpath
assert path.is_file(), path
print("OK")
print("Checking package_data", end=" ")
jupyterhub_path = Path(jupyterhub.__file__).parent.resolve()
for subpath in (
"alembic.ini",
"alembic/versions/833da8570507_rbac.py",
"event-schemas/server-actions/v1.yaml",
"singleuser/templates/page.html",
):
path = jupyterhub_path / subpath
assert path.is_file(), path
print("OK")

ci/check_sdist.py

@@ -1,27 +0,0 @@
#!/usr/bin/env python
# Check that sdist contains everything we expect
import sys
import tarfile
expected_files = [
"docs/requirements.txt",
"jsx/package.json",
"package.json",
"README.md",
]
assert len(sys.argv) == 2, "Expected one file"
print(f"Checking {sys.argv[1]}")
tar = tarfile.open(name=sys.argv[1], mode="r:gz")
try:
# Remove leading jupyterhub-VERSION/
filelist = {f.partition('/')[2] for f in tar.getnames()}
finally:
tar.close()
for e in expected_files:
assert e in filelist, f"{e} not found"
print("OK")

ci/docker-db.sh

@@ -22,7 +22,7 @@ if [[ "$DB" == "mysql" ]]; then
# ref server: https://hub.docker.com/_/mysql/
# ref client: https://dev.mysql.com/doc/refman/5.7/en/setting-environment-variables.html
#
DOCKER_RUN_ARGS="-p 3306:3306 --env MYSQL_ALLOW_EMPTY_PASSWORD=1 mysql:8.0"
DOCKER_RUN_ARGS="-p 3306:3306 --env MYSQL_ALLOW_EMPTY_PASSWORD=1 mysql:5.7"
READINESS_CHECK="mysql --user root --execute \q"
elif [[ "$DB" == "postgres" ]]; then
# Environment variables can influence both the postgresql server in the
@@ -36,7 +36,7 @@ elif [[ "$DB" == "postgres" ]]; then
# used by the postgresql client psql, so we configure the user based on how
# we want to connect.
#
DOCKER_RUN_ARGS="-p 5432:5432 --env "POSTGRES_USER=${PGUSER}" --env "POSTGRES_PASSWORD=${PGPASSWORD}" postgres:15.1"
DOCKER_RUN_ARGS="-p 5432:5432 --env "POSTGRES_USER=${PGUSER}" --env "POSTGRES_PASSWORD=${PGPASSWORD}" postgres:9.5"
READINESS_CHECK="psql --command \q"
else
echo '$DB must be mysql or postgres'


@@ -19,9 +19,8 @@ else
fi
# Configure a set of databases in the database server for upgrade tests
# this list must be in sync with versions in test_db.py:test_upgrade
set -x
for SUFFIX in '' _upgrade_110 _upgrade_122 _upgrade_130 _upgrade_150 _upgrade_211 _upgrade_311; do
for SUFFIX in '' _upgrade_072 _upgrade_081 _upgrade_094; do
$SQL_CLIENT "DROP DATABASE jupyterhub${SUFFIX};" 2>/dev/null || true
$SQL_CLIENT "CREATE DATABASE jupyterhub${SUFFIX} ${EXTRA_CREATE_DATABASE_ARGS:-};"
done


@@ -1,13 +0,0 @@
alembic==1.4
async_generator==1.9
certipy==0.1.2
importlib_metadata==3.6; python_version < '3.10'
jinja2==2.11.0
jupyter_telemetry==0.1.0
oauthlib==3.0
pamela==1.1.0; sys_platform != 'win32'
prometheus_client==0.5.0
psutil==5.6.5; sys_platform == 'win32'
SQLAlchemy==1.4.1
tornado==5.1
traitlets==4.3.2


@@ -1,20 +0,0 @@
# oldest-dependencies.txt is autogenerated.
# recreate with:
# cat requirements.txt | grep '>=' | sed -e 's@>=@==@g' > ci/legacy-env/oldest-dependencies.txt
-r ./oldest-dependencies.txt
# then `pip-compile` with Python 3.8
# below are additional pins to make this a working test env
# these are extracted from jupyterhub[test]
beautifulsoup4
coverage
playwright
pytest
pytest-cov
pytest-asyncio==0.17.*
requests-mock
virtualenv
# and any additional pins to make this a working test env
# e.g. pinning down a transitive dependency
notebook==6.*
markupsafe==2.0.*


@@ -1,285 +0,0 @@
#
# This file is autogenerated by pip-compile with Python 3.8
# by the following command:
#
# pip-compile --output-file=requirements.old
#
alembic==1.4.0
# via -r ./oldest-dependencies.txt
appnope==0.1.3
# via
# ipykernel
# ipython
argon2-cffi==23.1.0
# via notebook
argon2-cffi-bindings==21.2.0
# via argon2-cffi
async-generator==1.9
# via -r ./oldest-dependencies.txt
attrs==23.1.0
# via
# jsonschema
# referencing
backcall==0.2.0
# via ipython
beautifulsoup4==4.12.2
# via -r requirements.in
bleach==6.0.0
# via nbconvert
certifi==2023.7.22
# via requests
certipy==0.1.2
# via -r ./oldest-dependencies.txt
cffi==1.15.1
# via
# argon2-cffi-bindings
# cryptography
charset-normalizer==3.2.0
# via requests
coverage[toml]==7.3.1
# via
# -r requirements.in
# pytest-cov
cryptography==41.0.4
# via pyopenssl
debugpy==1.8.0
# via ipykernel
decorator==5.1.1
# via
# ipython
# traitlets
defusedxml==0.7.1
# via nbconvert
distlib==0.3.7
# via virtualenv
entrypoints==0.4
# via
# jupyter-client
# nbconvert
exceptiongroup==1.1.3
# via pytest
fastjsonschema==2.18.0
# via nbformat
filelock==3.12.4
# via virtualenv
greenlet==2.0.2
# via
# playwright
# sqlalchemy
idna==3.4
# via requests
importlib-metadata==3.6.0 ; python_version < "3.10"
# via -r ./oldest-dependencies.txt
importlib-resources==6.1.0
# via
# jsonschema
# jsonschema-specifications
iniconfig==2.0.0
# via pytest
ipykernel==6.4.2
# via notebook
ipython==7.34.0
# via ipykernel
ipython-genutils==0.2.0
# via
# ipykernel
# notebook
# traitlets
jedi==0.19.0
# via ipython
jinja2==2.11.0
# via
# -r ./oldest-dependencies.txt
# nbconvert
# notebook
jsonschema==4.19.1
# via
# jupyter-telemetry
# nbformat
jsonschema-specifications==2023.7.1
# via jsonschema
jupyter-client==7.2.0
# via
# ipykernel
# nbclient
# notebook
jupyter-core==5.0.0
# via
# jupyter-client
# nbconvert
# nbformat
# notebook
jupyter-telemetry==0.1.0
# via -r ./oldest-dependencies.txt
jupyterlab-pygments==0.2.2
# via nbconvert
mako==1.2.4
# via alembic
markupsafe==2.0.1
# via
# -r requirements.in
# jinja2
# mako
matplotlib-inline==0.1.6
# via
# ipykernel
# ipython
mistune==0.8.4
# via nbconvert
nbclient==0.5.11
# via nbconvert
nbconvert==6.0.7
# via notebook
nbformat==5.3.0
# via
# nbclient
# nbconvert
# notebook
nest-asyncio==1.5.8
# via
# jupyter-client
# nbclient
notebook==6.1.6
# via -r requirements.in
oauthlib==3.0.0
# via -r ./oldest-dependencies.txt
packaging==23.1
# via pytest
pamela==1.1.0 ; sys_platform != "win32"
# via -r ./oldest-dependencies.txt
pandocfilters==1.5.0
# via nbconvert
parso==0.8.3
# via jedi
pexpect==4.8.0
# via ipython
pickleshare==0.7.5
# via ipython
pkgutil-resolve-name==1.3.10
# via jsonschema
platformdirs==3.10.0
# via
# jupyter-core
# virtualenv
playwright==1.38.0
# via -r requirements.in
pluggy==1.3.0
# via pytest
prometheus-client==0.5.0
# via
# -r ./oldest-dependencies.txt
# notebook
prompt-toolkit==3.0.39
# via ipython
ptyprocess==0.7.0
# via
# pexpect
# terminado
pycparser==2.21
# via cffi
pyee==9.0.4
# via playwright
pygments==2.16.1
# via
# ipython
# nbconvert
pyopenssl==23.2.0
# via certipy
pytest==7.4.2
# via
# -r requirements.in
# pytest-asyncio
# pytest-cov
pytest-asyncio==0.17.2
# via -r requirements.in
pytest-cov==4.1.0
# via -r requirements.in
python-dateutil==2.8.2
# via
# alembic
# jupyter-client
python-editor==1.0.4
# via alembic
python-json-logger==2.0.7
# via jupyter-telemetry
pyzmq==25.1.1
# via
# jupyter-client
# notebook
referencing==0.30.2
# via
# jsonschema
# jsonschema-specifications
requests==2.31.0
# via requests-mock
requests-mock==1.11.0
# via -r requirements.in
rpds-py==0.10.3
# via
# jsonschema
# referencing
ruamel-yaml==0.17.32
# via jupyter-telemetry
ruamel-yaml-clib==0.2.7
# via ruamel-yaml
send2trash==1.8.2
# via notebook
six==1.16.0
# via
# bleach
# python-dateutil
# requests-mock
# traitlets
soupsieve==2.5
# via beautifulsoup4
sqlalchemy==1.4.1
# via
# -r ./oldest-dependencies.txt
# alembic
terminado==0.13.3
# via notebook
testpath==0.6.0
# via nbconvert
tomli==2.0.1
# via
# coverage
# pytest
tornado==5.1
# via
# -r ./oldest-dependencies.txt
# ipykernel
# jupyter-client
# notebook
# terminado
traitlets==4.3.2
# via
# -r ./oldest-dependencies.txt
# ipykernel
# ipython
# jupyter-client
# jupyter-core
# jupyter-telemetry
# matplotlib-inline
# nbclient
# nbconvert
# nbformat
# notebook
typing-extensions==4.8.0
# via
# playwright
# pyee
urllib3==2.0.5
# via requests
virtualenv==20.24.5
# via -r requirements.in
wcwidth==0.2.6
# via prompt-toolkit
webencodings==0.5.1
# via bleach
zipp==3.17.0
# via
# importlib-metadata
# importlib-resources
# The following packages are considered to be unsafe in a requirements file:
# setuptools

16
demo-image/Dockerfile Normal file

@@ -0,0 +1,16 @@
# Demo JupyterHub Docker image
#
# This should only be used for demo or testing and not as a base image to build on.
#
# It includes the notebook package and it uses the DummyAuthenticator and the SimpleLocalProcessSpawner.
ARG BASE_IMAGE=jupyterhub/jupyterhub-onbuild
FROM ${BASE_IMAGE}
# Install the notebook package
RUN python3 -m pip install notebook
# Create a demo user
RUN useradd --create-home demo
RUN chown demo .
USER demo

25
demo-image/README.md Normal file

@@ -0,0 +1,25 @@
## Demo Dockerfile
This is a demo JupyterHub Docker image to help you get a quick overview of what
JupyterHub is and how it works.
It uses the SimpleLocalProcessSpawner to spawn new user servers and
DummyAuthenticator for authentication.
The DummyAuthenticator allows you to log in with any username & password and the
SimpleLocalProcessSpawner allows starting servers without having to create a
local user for each JupyterHub user.
### Important!
This should only be used for demo or testing purposes!
It shouldn't be used as a base image to build on.
### Try it
1. `cd` to the root of your jupyterhub repo.
2. Build the demo image with `docker build -t jupyterhub-demo demo-image`.
3. Run the demo image with `docker run -d -p 8000:8000 jupyterhub-demo`.
4. Visit http://localhost:8000 and log in with any username and password (see the consolidated commands below).
5. Happy demo-ing :tada:!
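The same steps as a single shell sketch (assuming you run from the root of your jupyterhub checkout; the `jupyterhub-demo` tag is arbitrary):

```shell
# build the demo image from the demo-image directory
docker build -t jupyterhub-demo demo-image
# run it, exposing JupyterHub on port 8000
docker run -d -p 8000:8000 jupyterhub-demo
# then visit http://localhost:8000 and log in with any username/password
```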


@@ -0,0 +1,7 @@
# Configuration file for jupyterhub-demo
c = get_config()
# Use DummyAuthenticator and SimpleSpawner
c.JupyterHub.spawner_class = "simple"
c.JupyterHub.authenticator_class = "dummy"

20
dev-requirements.txt Normal file

@@ -0,0 +1,20 @@
-r requirements.txt
# temporary pin of attrs for jsonschema 0.3.0a1
# seems to be a pip bug
attrs>=17.4.0
beautifulsoup4
codecov
coverage
cryptography
html5lib # needed for beautifulsoup
mock
notebook
pre-commit
pytest-asyncio
pytest-cov
pytest>=3.3
requests-mock
# blacklist urllib3 releases affected by https://github.com/urllib3/urllib3/issues/1683
# I *think* this should only affect testing, not production
urllib3!=1.25.4,!=1.25.5
virtualenv


@@ -0,0 +1,9 @@
FROM python:3.6.3-alpine3.6
ARG JUPYTERHUB_VERSION=0.8.1
RUN pip3 install --no-cache jupyterhub==${JUPYTERHUB_VERSION}
ENV LANG=en_US.UTF-8
USER nobody
CMD ["jupyterhub"]

20
dockerfiles/README.md Normal file

@@ -0,0 +1,20 @@
## What is Dockerfile.alpine
Dockerfile.alpine contains the base image for JupyterHub. It does not work independently, but only as part of a full JupyterHub cluster.
## How to use it?
You will need:
1. A running configurable-http-proxy, whose API is accessible.
2. A jupyterhub_config file.
3. Authentication and any other libraries required by that jupyterhub_config file.
## Steps to test it outside a cluster
* start configurable-http-proxy in another container
* specify CONFIGPROXY_AUTH_TOKEN env in both containers
* put both containers on the same network (e.g. docker network create jupyterhub; docker run ... --net jupyterhub)
* tell jupyterhub where CHP is (e.g. c.ConfigurableHTTPProxy.api_url = 'http://chp:8001')
* tell jupyterhub not to start the proxy itself (c.ConfigurableHTTPProxy.should_start = False)
* Use the dummy authenticator for ease of testing; update the following in the jupyterhub_config file (see the consolidated sketch below):
- c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
- c.DummyAuthenticator.password = "your strong password"
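A minimal sketch of these steps (the network and container names, the token value, and the config path are placeholder assumptions; the proxy image is the upstream `jupyterhub/configurable-http-proxy`):

```shell
# shared network so the hub can reach the proxy by name
docker network create jupyterhub

# start configurable-http-proxy with a known auth token
docker run -d --net jupyterhub --name chp \
  -e CONFIGPROXY_AUTH_TOKEN=my-secret-token \
  jupyterhub/configurable-http-proxy --api-ip 0.0.0.0

# start jupyterhub with the same token and a config that points at the
# proxy (c.ConfigurableHTTPProxy.api_url = 'http://chp:8001',
# c.ConfigurableHTTPProxy.should_start = False)
docker run -d --net jupyterhub --name hub \
  -e CONFIGPROXY_AUTH_TOKEN=my-secret-token \
  -v "$PWD/jupyterhub_config.py:/srv/jupyterhub_config.py" \
  jupyterhub/jupyterhub:alpine \
  jupyterhub -f /srv/jupyterhub_config.py
```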

9
dockerfiles/test.py Normal file

@@ -0,0 +1,9 @@
import os
from jupyterhub._data import DATA_FILES_PATH
print(f"DATA_FILES_PATH={DATA_FILES_PATH}")
for sub_path in ("templates", "static/components", "static/css/style.min.css"):
path = os.path.join(DATA_FILES_PATH, sub_path)
assert os.path.exists(path), path


@@ -1,58 +1,212 @@
# Makefile for Sphinx documentation generated by sphinx-quickstart
# ----------------------------------------------------------------------------
# Makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?= --color -W --keep-going
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = _build
# You can set these variables from the command line.
SPHINXOPTS = "-W"
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS)
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
@echo " spelling to run spell check on documentation"
@echo " metrics to generate documentation for metrics by inspecting the source code"
.PHONY: help Makefile metrics scopes
clean:
rm -rf $(BUILDDIR)/*
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option.
#
# Several sphinx-build commands can be used through this, for example:
#
# - make clean
# - make linkcheck
# - make spelling
#
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS)
node_modules: package.json
npm install && touch node_modules
rest-api: source/_static/rest-api/index.html
# Manually added targets - related to code generation
# ----------------------------------------------------------------------------
source/_static/rest-api/index.html: rest-api.yml node_modules
npm run rest-api
# For local development:
# - builds the html
# - NOTE: If the pre-requisites for the html target is updated, also update the
# Read The Docs section in docs/source/conf.py.
#
html: metrics
$(SPHINXBUILD) -b html "$(SOURCEDIR)" "$(BUILDDIR)/html" $(SPHINXOPTS)
metrics: source/reference/metrics.rst
source/reference/metrics.rst: generate-metrics.py
python3 generate-metrics.py
html: rest-api metrics
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
metrics: source/reference/metrics.md
source/reference/metrics.md:
python3 generate-metrics.py
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
# Manually added targets - related to development
# ----------------------------------------------------------------------------
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
# For local development:
# - requires sphinx-autobuild, see
# https://sphinxcontrib-spelling.readthedocs.io/en/latest/
# - builds and rebuilds html on changes to source, but does not re-generate
# metrics files
# - starts a livereload enabled webserver and opens up a browser
devenv: html
sphinx-autobuild -b html --open-browser "$(SOURCEDIR)" "$(BUILDDIR)/html"
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/JupyterHub.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/JupyterHub.qhc"
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/JupyterHub"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/JupyterHub"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
spelling:
$(SPHINXBUILD) -b spelling $(ALLSPHINXOPTS) $(BUILDDIR)/spelling
@echo
@echo "Spell check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/spelling/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."


@@ -1,6 +1,8 @@
import os
from os.path import join
from pytablewriter import MarkdownTableWriter
from pytablewriter import RstSimpleTableWriter
from pytablewriter.style import Style
import jupyterhub.metrics
@@ -10,11 +12,12 @@ HERE = os.path.abspath(os.path.dirname(__file__))
class Generator:
@classmethod
def create_writer(cls, table_name, headers, values):
writer = MarkdownTableWriter()
writer = RstSimpleTableWriter()
writer.table_name = table_name
writer.headers = headers
writer.value_matrix = values
writer.margin = 1
[writer.set_style(header, Style(align="center")) for header in headers]
return writer
def _parse_metrics(self):
@@ -31,17 +34,18 @@ class Generator:
if not os.path.exists(generated_directory):
os.makedirs(generated_directory)
filename = f"{generated_directory}/metrics.md"
filename = f"{generated_directory}/metrics.rst"
table_name = ""
headers = ["Type", "Name", "Description"]
values = self._parse_metrics()
writer = self.create_writer(table_name, headers, values)
title = "List of Prometheus Metrics"
underline = "============================"
content = f"{title}\n{underline}\n{writer.dumps()}"
with open(filename, 'w') as f:
f.write("# List of Prometheus Metrics\n\n")
f.write(writer.dumps())
f.write("\n")
print(f"Generated {filename}")
f.write(content)
print(f"Generated {filename}.")
def main():


@@ -1,49 +1,263 @@
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=--color -W --keep-going
)
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=_build
set BUILDDIR=build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source
set I18NSPHINXOPTS=%SPHINXOPTS% source
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "devenv" goto devenv
goto default
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
echo. coverage to run coverage check of the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
:default
%SPHINXBUILD% >NUL 2>NUL
REM Check if sphinx-build is available and fallback to Python version if any
%SPHINXBUILD% 1>NUL 2>NUL
if errorlevel 9009 goto sphinx_python
goto sphinx_ok
:sphinx_python
set SPHINXBUILD=python -m sphinx.__init__
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Open and read README.md!
exit /b 1
)
%SPHINXBUILD% -M %1 "%SOURCEDIR%" "%BUILDDIR%" %SPHINXOPTS%
goto end
:help
%SPHINXBUILD% -M help "%SOURCEDIR%" "%BUILDDIR%" %SPHINXOPTS%
goto end
:devenv
sphinx-autobuild >NUL 2>NUL
if errorlevel 9009 (
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.The 'sphinx-autobuild' command was not found. Open and read README.md!
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
sphinx-autobuild -b html --open-browser "%SOURCEDIR%" "%BUILDDIR%/html"
goto end
:sphinx_ok
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\JupyterHub.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\JupyterHub.ghc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "coverage" (
%SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage
if errorlevel 1 exit /b 1
echo.
echo.Testing of coverage in the sources finished, look at the ^
results in %BUILDDIR%/coverage/python.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:end
popd

14
docs/package.json Normal file

@@ -0,0 +1,14 @@
{
"name": "jupyterhub-docs-build",
"version": "0.8.0",
"description": "build JupyterHub swagger docs",
"scripts": {
"rest-api": "bootprint openapi ./rest-api.yml source/_static/rest-api"
},
"author": "",
"license": "BSD-3-Clause",
"devDependencies": {
"bootprint": "^1.0.0",
"bootprint-openapi": "^1.0.0"
}
}


@@ -1,15 +1,12 @@
# docs also require jupyterhub itself to be installed
# don't depend on it here, as that often results in a duplicate
# installation of jupyterhub that's already installed
autodoc-traits
intersphinx-registry
jupyterhub-sphinx-theme
myst-parser>=0.19
pre-commit
-r ../requirements.txt
alabaster_jupyterhub
# Temporary fix of #3021. Revert back to released autodoc-traits when
# 0.1.0 released.
https://github.com/jupyterhub/autodoc-traits/archive/75885ee24636efbfebfceed1043459715049cd84.zip
pydata-sphinx-theme
pytablewriter>=0.56
ruamel.yaml
sphinx>=4
recommonmark>=0.6
sphinx-copybutton
sphinx-jsonschema
sphinxext-opengraph
sphinxext-rediraffe
sphinx>=1.7

881
docs/rest-api.yml Normal file

@@ -0,0 +1,881 @@
# see me at: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#/default
swagger: '2.0'
info:
title: JupyterHub
description: The REST API for JupyterHub
version: 1.2.0dev
license:
name: BSD-3-Clause
schemes:
[http, https]
securityDefinitions:
token:
type: apiKey
name: Authorization
in: header
security:
- token: []
basePath: /hub/api
produces:
- application/json
consumes:
- application/json
paths:
/:
get:
summary: Get JupyterHub version
description: |
This endpoint is not authenticated, so that clients and users
can identify the JupyterHub version before setting up authentication.
responses:
'200':
description: The JupyterHub version
schema:
type: object
properties:
version:
type: string
description: The version of JupyterHub itself
/info:
get:
summary: Get detailed info about JupyterHub
description: |
Detailed JupyterHub information, including Python version,
JupyterHub's version and executable path,
and which Authenticator and Spawner are active.
responses:
'200':
description: Detailed JupyterHub info
schema:
type: object
properties:
version:
type: string
description: The version of JupyterHub itself
python:
type: string
description: The Python version, as returned by sys.version
sys_executable:
type: string
description: The path to sys.executable running JupyterHub
authenticator:
type: object
properties:
class:
type: string
description: The Python class currently active for JupyterHub Authentication
version:
type: string
description: The version of the currently active Authenticator
spawner:
type: object
properties:
class:
type: string
description: The Python class currently active for spawning single-user notebook servers
version:
type: string
description: The version of the currently active Spawner
/users:
get:
summary: List users
responses:
'200':
description: The Hub's user list
schema:
type: array
items:
$ref: '#/definitions/User'
post:
summary: Create multiple users
parameters:
- name: body
in: body
required: true
schema:
type: object
properties:
usernames:
type: array
description: list of usernames to create on the Hub
items:
type: string
admin:
description: whether the created users should be admins
type: boolean
responses:
'201':
description: The users have been created
schema:
type: array
description: The created users
items:
$ref: '#/definitions/User'
/users/{name}:
get:
summary: Get a user by name
parameters:
- name: name
description: username
in: path
required: true
type: string
responses:
'200':
description: The User model
schema:
$ref: '#/definitions/User'
post:
summary: Create a single user
parameters:
- name: name
description: username
in: path
required: true
type: string
responses:
'201':
description: The user has been created
schema:
$ref: '#/definitions/User'
patch:
summary: Modify a user
description: Change a user's name or admin status
parameters:
- name: name
description: username
in: path
required: true
type: string
- name: body
in: body
required: true
description: Updated user info. At least one key to be updated (name or admin) is required.
schema:
type: object
properties:
name:
type: string
description: the new name (optional, if another key is updated i.e. admin)
admin:
type: boolean
description: update admin (optional, if another key is updated i.e. name)
responses:
'200':
description: The updated user info
schema:
$ref: '#/definitions/User'
delete:
summary: Delete a user
parameters:
- name: name
description: username
in: path
required: true
type: string
responses:
'204':
description: The user has been deleted
/users/{name}/activity:
post:
summary:
Notify Hub of activity for a given user.
description:
Notify the Hub of activity by the user,
e.g. accessing a service or (more likely)
actively using a server.
parameters:
- name: name
description: username
in: path
required: true
type: string
- name: body
in: body
schema:
type: object
properties:
last_activity:
type: string
format: date-time
description: |
Timestamp of last-seen activity for this user.
Only needed if this is not activity associated
with using a given server.
servers:
description: |
Register activity for specific servers by name.
The keys of this dict are the names of servers.
The default server has an empty name ('').
type: object
properties:
'<server name>':
description: |
Activity for a single server.
type: object
required:
- last_activity
properties:
last_activity:
type: string
format: date-time
description: |
Timestamp of last-seen activity on this server.
example:
last_activity: '2019-02-06T12:54:14Z'
servers:
'':
last_activity: '2019-02-06T12:54:14Z'
gpu:
last_activity: '2019-02-06T12:54:14Z'
responses:
'401':
$ref: '#/responses/Unauthorized'
'404':
description: No such user
/users/{name}/server:
post:
summary: Start a user's single-user notebook server
parameters:
- name: name
description: username
in: path
required: true
type: string
- name: options
description: |
Spawn options can be passed as a JSON body
when spawning via the API instead of spawn form.
The structure of the options
will depend on the Spawner's configuration.
The body itself will be available as `user_options` for the
Spawner.
in: body
required: false
schema:
type: object
responses:
'201':
description: The user's notebook server has started
'202':
description: The user's notebook server has not yet started, but has been requested
delete:
summary: Stop a user's server
parameters:
- name: name
description: username
in: path
required: true
type: string
responses:
'204':
description: The user's notebook server has stopped
'202':
description: The user's notebook server has not yet stopped as it is taking a while to stop
/users/{name}/servers/{server_name}:
post:
summary: Start a user's single-user named-server notebook server
parameters:
- name: name
description: username
in: path
required: true
type: string
- name: server_name
description: |
name given to a named-server.
Note that, depending on your JupyterHub infrastructure, there are character size limitations on `server_name`. The default spawner with a K8s pod will not allow Jupyter Notebooks to be spawned with a name that contains more than 253 characters (keep in mind that the pod will be spawned with extra characters to identify the user and hub).
in: path
required: true
type: string
- name: options
description: |
Spawn options can be passed as a JSON body
when spawning via the API instead of spawn form.
The structure of the options
will depend on the Spawner's configuration.
in: body
required: false
schema:
type: object
responses:
'201':
description: The user's notebook named-server has started
'202':
description: The user's notebook named-server has not yet started, but has been requested
delete:
summary: Stop a user's named-server
parameters:
- name: name
description: username
in: path
required: true
type: string
- name: server_name
description: name given to a named-server
in: path
required: true
type: string
- name: body
in: body
required: false
schema:
type: object
properties:
remove:
type: boolean
description: |
Whether to fully remove the server, rather than just stop it.
Removing a server deletes things like the state of the stopped server.
Default: false.
responses:
'204':
description: The user's notebook named-server has stopped
'202':
description: The user's notebook named-server has not yet stopped as it is taking a while to stop
/users/{name}/tokens:
parameters:
- name: name
description: username
in: path
required: true
type: string
get:
summary: List tokens for the user
responses:
'200':
description: The list of tokens
schema:
type: array
items:
$ref: '#/definitions/Token'
'401':
$ref: '#/responses/Unauthorized'
'404':
description: No such user
post:
summary: Create a new token for the user
parameters:
- name: token_params
in: body
required: false
schema:
type: object
properties:
expires_in:
type: number
description: lifetime (in seconds) after which the requested token will expire.
note:
type: string
description: A note attached to the token for future bookkeeping
responses:
'201':
description: The newly created token
schema:
$ref: '#/definitions/Token'
'400':
description: Body must be a JSON dict or empty
/users/{name}/tokens/{token_id}:
parameters:
- name: name
description: username
in: path
required: true
type: string
- name: token_id
in: path
required: true
type: string
get:
summary: Get the model for a token by id
responses:
'200':
description: The info for the new token
schema:
$ref: '#/definitions/Token'
delete:
summary: Delete (revoke) a token by id
responses:
'204':
description: The token has been deleted
/user:
get:
summary: Return authenticated user's model
responses:
'200':
description: The authenticated user's model is returned.
schema:
$ref: '#/definitions/User'
/groups:
get:
summary: List groups
responses:
'200':
description: The list of groups
schema:
type: array
items:
$ref: '#/definitions/Group'
/groups/{name}:
get:
summary: Get a group by name
parameters:
- name: name
description: group name
in: path
required: true
type: string
responses:
'200':
description: The group model
schema:
$ref: '#/definitions/Group'
post:
summary: Create a group
parameters:
- name: name
description: group name
in: path
required: true
type: string
responses:
'201':
description: The group has been created
schema:
$ref: '#/definitions/Group'
delete:
summary: Delete a group
parameters:
- name: name
description: group name
in: path
required: true
type: string
responses:
'204':
description: The group has been deleted
/groups/{name}/users:
post:
summary: Add users to a group
parameters:
- name: name
description: group name
in: path
required: true
type: string
- name: body
in: body
required: true
description: The users to add to the group
schema:
type: object
properties:
users:
type: array
description: List of usernames to add to the group
items:
type: string
responses:
'200':
description: The users have been added to the group
schema:
$ref: '#/definitions/Group'
delete:
summary: Remove users from a group
parameters:
- name: name
description: group name
in: path
required: true
type: string
- name: body
in: body
required: true
description: The users to remove from the group
schema:
type: object
properties:
users:
type: array
description: List of usernames to remove from the group
items:
type: string
responses:
'200':
description: The users have been removed from the group
/services:
get:
summary: List services
responses:
'200':
description: The service list
schema:
type: array
items:
$ref: '#/definitions/Service'
/services/{name}:
get:
summary: Get a service by name
parameters:
- name: name
description: service name
in: path
required: true
type: string
responses:
'200':
description: The Service model
schema:
$ref: '#/definitions/Service'
/proxy:
get:
summary: Get the proxy's routing table
description: A convenience alias for getting the routing table directly from the proxy
responses:
'200':
description: Routing table
schema:
type: object
description: configurable-http-proxy routing table (see configurable-http-proxy docs for details)
post:
summary: Force the Hub to sync with the proxy
responses:
'200':
description: Success
patch:
summary: Notify the Hub about a new proxy
description: Notifies the Hub of a new proxy to use.
parameters:
- name: body
in: body
required: true
description: Any values that have changed for the new proxy. All keys are optional.
schema:
type: object
properties:
ip:
type: string
description: IP address of the new proxy
port:
type: string
description: Port of the new proxy
protocol:
type: string
description: Protocol of new proxy, if changed
auth_token:
type: string
description: CONFIGPROXY_AUTH_TOKEN for the new proxy
responses:
'200':
description: Success
/authorizations/token:
post:
summary: Request a new API token
description: |
Request a new API token to use with the JupyterHub REST API.
If not already authenticated, username and password can be sent
in the JSON request body.
Logging in via this method is only available when the active Authenticator
accepts passwords (e.g. not OAuth).
parameters:
- name: credentials
in: body
schema:
type: object
properties:
username:
type: string
password:
type: string
responses:
'200':
description: The new API token
schema:
type: object
properties:
token:
type: string
description: The new API token.
'403':
description: The user can not be authenticated.
/authorizations/token/{token}:
get:
summary: Identify a user or service from an API token
parameters:
- name: token
in: path
required: true
type: string
responses:
'200':
description: The user or service identified by the API token
'404':
description: A user or service is not found.
/authorizations/cookie/{cookie_name}/{cookie_value}:
get:
summary: Identify a user from a cookie
description: Used by single-user notebook servers to hand off cookie authentication to the Hub
parameters:
- name: cookie_name
in: path
required: true
type: string
- name: cookie_value
in: path
required: true
type: string
responses:
'200':
description: The user identified by the cookie
schema:
$ref: '#/definitions/User'
'404':
description: A user is not found.
/oauth2/authorize:
get:
summary: 'OAuth 2.0 authorize endpoint'
description: |
Redirect users to this URL to begin the OAuth process.
It is not an API endpoint.
parameters:
- name: client_id
description: The client id
in: query
required: true
type: string
- name: response_type
description: The response type (always 'code')
in: query
required: true
type: string
- name: state
description: A state string
in: query
required: false
type: string
- name: redirect_uri
description: The redirect url
in: query
required: true
type: string
responses:
'200':
description: Success
'400':
description: OAuth2Error
/oauth2/token:
post:
summary: Request an OAuth2 token
description: |
Request an OAuth2 token from an authorization code.
This request completes the OAuth process.
consumes:
- application/x-www-form-urlencoded
parameters:
- name: client_id
description: The client id
in: formData
required: true
type: string
- name: client_secret
description: The client secret
in: formData
required: true
type: string
- name: grant_type
description: The grant type (always 'authorization_code')
in: formData
required: true
type: string
- name: code
description: The code provided by the authorization redirect
in: formData
required: true
type: string
- name: redirect_uri
description: The redirect url
in: formData
required: true
type: string
responses:
'200':
description: JSON response including the token
schema:
type: object
properties:
access_token:
type: string
description: The new API token for the user
token_type:
type: string
description: Will always be 'Bearer'
/shutdown:
post:
summary: Shutdown the Hub
parameters:
- name: body
in: body
schema:
type: object
properties:
proxy:
type: boolean
description: Whether the proxy should be shutdown as well (default from Hub config)
servers:
type: boolean
description: Whether users' notebook servers should be shutdown as well (default from Hub config)
responses:
'202':
description: Shutdown successful
'400':
description: Unexpected value for proxy or servers
# Descriptions of common responses
responses:
NotFound:
description: The specified resource was not found
Unauthorized:
description: Authentication/Authorization error
definitions:
User:
type: object
properties:
name:
type: string
description: The user's name
admin:
type: boolean
description: Whether the user is an admin
groups:
type: array
description: The names of groups where this user is a member
items:
type: string
server:
type: string
description: The user's notebook server's base URL, if running; null if not.
pending:
type: string
enum: ["spawn", "stop", null]
description: The currently pending action, if any
last_activity:
type: string
format: date-time
description: Timestamp of last-seen activity from the user
servers:
type: array
description: The active servers for this user.
items:
$ref: '#/definitions/Server'
Server:
type: object
properties:
name:
type: string
description: The server's name. The user's default server has an empty name ('')
ready:
type: boolean
description: |
Whether the server is ready for traffic.
Will always be false when any transition is pending.
pending:
type: string
enum: ["spawn", "stop", null]
description: |
The currently pending action, if any.
A server is not ready if an action is pending.
url:
type: string
description: |
The URL where the server can be accessed
(typically /user/:name/:server.name/).
progress_url:
type: string
description: |
The URL for an event-stream to retrieve events during a spawn.
started:
type: string
format: date-time
description: UTC timestamp when the server was last started.
last_activity:
type: string
format: date-time
description: UTC timestamp last-seen activity on this server.
state:
type: object
description: Arbitrary internal state from this server's spawner. Only available on the hub's users list or get-user-by-name method, and only if a hub admin. None otherwise.
user_options:
type: object
description: User specified options for the user's spawned instance of a single-user server.
Group:
type: object
properties:
name:
type: string
description: The group's name
users:
type: array
description: The names of users who are members of this group
items:
type: string
Service:
type: object
properties:
name:
type: string
description: The service's name
admin:
type: boolean
description: Whether the service is an admin
url:
type: string
description: The internal url where the service is running
prefix:
type: string
description: The proxied URL prefix to the service's url
pid:
type: number
description: The PID of the service process (if managed)
command:
type: array
description: The command used to start the service (if managed)
items:
type: string
info:
type: object
description: |
Additional information a deployment can attach to a service.
JupyterHub does not use this field.
Token:
type: object
properties:
token:
type: string
description: The token itself. Only present in responses to requests for a new token.
id:
type: string
description: The id of the API token. Used for modifying or deleting the token.
user:
type: string
description: The user that owns a token (undefined if owned by a service)
service:
type: string
description: The service that owns the token (undefined if owned by a user)
note:
type: string
description: A note about the token, typically describing what it was created for.
created:
type: string
format: date-time
description: Timestamp when this token was created
expires_at:
type: string
format: date-time
description: Timestamp when this token expires. Null if there is no expiry.
last_activity:
type: string
format: date-time
description: |
Timestamp of last-seen activity using this token.
Can be null if token has never been used.
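As a usage sketch of the spec above: every authenticated request sends an API token in the `Authorization` header, against the `/hub/api` base path (the hub URL and token here are placeholders):

```shell
# list users on a hub running at localhost:8000
curl -H "Authorization: token <your-api-token>" \
  http://localhost:8000/hub/api/users
```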


@@ -1,10 +1,4 @@
/* Added to avoid logo being too squeezed */
.navbar-brand {
height: 4rem !important;
}
/* hide redundant funky-formatted swagger-ui version */
.swagger-ui .info .title small {
display: none !important;
}
/* Added to avoid logo being too squeezed */
.navbar-brand {
height: 4rem !important;
}

File diff suppressed because it is too large


@@ -1,2 +0,0 @@
{%- set _meta = meta | default({}) %}
{%- extends _meta.page_template | default('!page.html') %}


@@ -1,32 +0,0 @@
{# djlint: off #}
{%- extends "!layout.html" %}
{# not sure why, but theme CSS prevents scrolling within redoc content
# If this were fixed, we could keep the navbar and footer
#}
{% block css %}
{% endblock css %}
{% block docs_navbar %}
{% endblock docs_navbar %}
{% block footer %}
{% endblock footer %}
{%- block body_tag -%}<body>{%- endblock body_tag %}
{%- block extrahead %}
{{ super() }}
<link href="{{ pathto('_static/redoc-fonts.css', 1) }}" rel="stylesheet" />
<script src="{{ pathto('_static/redoc.js', 1) }}"></script>
{%- endblock extrahead %}
{%- block content %}
<redoc id="redoc-spec"></redoc>
<script>
if (location.protocol === "file:") {
document.body.innerText = "Rendered API specification doesn't work with file: protocol. Use sphinx-autobuild to do local builds of the docs, served over HTTP."
} else {
Redoc.init(
"{{ pathto('_static/rest-api.yml', 1) }}",
{{ meta.redoc_options | default ({}) }},
document.getElementById("redoc-spec"),
);
}
</script>
{%- endblock content %}
{# djlint: on #}


@@ -0,0 +1,159 @@
.. _admin/upgrading:
====================
Upgrading JupyterHub
====================
JupyterHub offers easy upgrade pathways between minor versions. This
document describes how to do these upgrades.
If you are using :ref:`a JupyterHub distribution <index/distributions>`, you
should consult the distribution's documentation on how to upgrade. This
document is if you have set up your own JupyterHub without using a
distribution.
It is long because it is pretty detailed! Most likely, though, upgrading
JupyterHub is painless and quick, with minimal user interruption.
Read the Changelog
==================
The `changelog <../changelog.html>`_ contains information on what has
changed with the new JupyterHub release, and any deprecation warnings.
Read these notes to familiarize yourself with the coming changes. There
might be new releases of authenticators & spawners you are using, so
read the changelogs for those too!
Notify your users
=================
If you are using the default configuration where ``configurable-http-proxy``
is managed by JupyterHub, your users will see service disruption during
the upgrade process. You should notify them, and pick a time to do the
upgrade where they will be least disrupted.
If you are using a different proxy, or running ``configurable-http-proxy``
independent of JupyterHub, your users will be able to continue using notebook
servers they had already launched, but will not be able to launch new servers
or sign in.
Backup database & config
========================
Before doing an upgrade, it is critical to back up:
#. Your JupyterHub database (sqlite by default, or MySQL / Postgres
if you used those). If you are using sqlite (the default), you
should back up the ``jupyterhub.sqlite`` file (see the sketch after
this list).
#. Your ``jupyterhub_config.py`` file.
#. Your user's home directories. This is unlikely to be affected directly by
a JupyterHub upgrade, but we recommend a backup since user data is very
critical.
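For a default sqlite deployment, the first two backups can be as simple as
the following sketch (destination paths are placeholders; adapt them to your
own backup scheme):

.. code-block:: bash

   cp jupyterhub.sqlite jupyterhub.sqlite.backup
   cp jupyterhub_config.py jupyterhub_config.py.backup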
Shutdown JupyterHub
===================
Shut down the JupyterHub process. How you do this varies depending on how
you have set up JupyterHub to run. Most likely, it is managed by a process
supervisor of some sort (``systemd``, ``supervisord``, or even ``docker``).
Use the supervisor-specific command to stop the JupyterHub process.
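For example, with ``systemd`` this might look like the following (the unit
name ``jupyterhub`` is an assumption about your setup):

.. code-block:: bash

   sudo systemctl stop jupyterhub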
Upgrade JupyterHub packages
===========================
There are two environments where the ``jupyterhub`` package is installed:
#. The *hub environment*, which is where the JupyterHub server process
runs. This is started with the ``jupyterhub`` command, and is what
people generally think of as JupyterHub.
#. The *notebook user environments*. This is where the user notebook
servers are launched from, and is probably custom to your own
installation. This could be just one environment (different from the
hub environment) that is shared by all users, one environment
per user, or the same environment as the hub environment. The hub
launches the ``jupyterhub-singleuser`` command in this environment,
which in turn starts the notebook server.
You need to make sure the version of the ``jupyterhub`` package matches
in both these environments. If you installed ``jupyterhub`` with pip,
you can upgrade it with:
.. code-block:: bash
python3 -m pip install --upgrade jupyterhub==<version>
Where ``<version>`` is the version of JupyterHub you are upgrading to.
If you used ``conda`` to install ``jupyterhub``, you should upgrade it
with:
.. code-block:: bash
conda install -c conda-forge jupyterhub==<version>
Where ``<version>`` is the version of JupyterHub you are upgrading to.
You should also check for new releases of the authenticator & spawner you
are using. You might wish to upgrade those packages along with JupyterHub,
or upgrade them separately.
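One way to confirm that the versions match after upgrading (a sketch; run
each command inside the corresponding environment):

.. code-block:: bash

   # in the hub environment
   jupyterhub --version
   # in each notebook user environment
   jupyterhub-singleuser --version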
Upgrade JupyterHub database
===========================
Once new packages are installed, you need to upgrade the JupyterHub
database. From the hub environment, in the same directory as your
``jupyterhub_config.py`` file, you should run:
.. code-block:: bash
jupyterhub upgrade-db
This should find the location of your database, and run the necessary upgrades
for it.
SQLite database disadvantages
-----------------------------
SQLite has some disadvantages when it comes to upgrading JupyterHub. These
are:
- ``upgrade-db`` may not work, and you may need to delete your database
and start with a fresh one.
- ``downgrade-db`` **will not** work if you want to roll back to an
earlier version, so back up the ``jupyterhub.sqlite`` file before
upgrading.
What happens if I delete my database?
-------------------------------------
Losing the Hub database is often not a big deal. Information that
resides only in the Hub database includes:
- active login tokens (user cookies, service tokens)
- users added via JupyterHub UI, instead of config files
- info about running servers
If the following conditions are true, you should be fine clearing the
Hub database and starting over:
- users are specified in the config file, or log in using an external
authentication provider (Google, GitHub, LDAP, etc.)
- user servers are stopped during the upgrade
- you don't mind making users log in again after the upgrade
Start JupyterHub
================
Once the database upgrade is completed, start the ``jupyterhub``
process again.
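With ``systemd``, for example (same assumed unit name as above):

.. code-block:: bash

   sudo systemctl start jupyterhub
   # follow the logs to catch errors or deprecation warnings
   journalctl -u jupyterhub -f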
#. Log in and start a server to make sure things work as
expected.
#. Check the logs for any errors or deprecation warnings. You
might have to update your ``jupyterhub_config.py`` file to
deal with any deprecated options.
Congratulations, your JupyterHub has been upgraded!

15
docs/source/api/app.rst Normal file

@@ -0,0 +1,15 @@
=========================
Application configuration
=========================
Module: :mod:`jupyterhub.app`
=============================
.. automodule:: jupyterhub.app
.. currentmodule:: jupyterhub.app
:class:`JupyterHub`
-------------------
.. autoconfigurable:: JupyterHub

32
docs/source/api/auth.rst Normal file

@@ -0,0 +1,32 @@
==============
Authenticators
==============
Module: :mod:`jupyterhub.auth`
==============================
.. automodule:: jupyterhub.auth
.. currentmodule:: jupyterhub.auth
:class:`Authenticator`
----------------------
.. autoconfigurable:: Authenticator
:members:
:class:`LocalAuthenticator`
---------------------------
.. autoconfigurable:: LocalAuthenticator
:members:
:class:`PAMAuthenticator`
-------------------------
.. autoconfigurable:: PAMAuthenticator
:class:`DummyAuthenticator`
---------------------------
.. autoconfigurable:: DummyAuthenticator

38
docs/source/api/index.rst Normal file

@@ -0,0 +1,38 @@
.. _api-index:
##############
JupyterHub API
##############
:Release: |release|
:Date: |today|
JupyterHub also provides a REST API for administration of the Hub and users.
The documentation on `Using JupyterHub's REST API <../reference/rest.html>`_ provides
information on:
- what you can do with the API
- creating an API token
- adding API tokens to the config files
- making an API request programmatically using the requests library
- learning more about JupyterHub's API
The same JupyterHub API spec, as found here, is available in an interactive form
`here (on swagger's petstore) <http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default>`__.
The `OpenAPI Initiative`_ (formerly known as Swagger™) is a project used to describe
and document RESTful APIs.
JupyterHub API Reference:
.. toctree::
app
auth
spawner
proxy
user
service
services.auth
.. _OpenAPI Initiative: https://www.openapis.org/

22
docs/source/api/proxy.rst Normal file
View File

@@ -0,0 +1,22 @@
=======
Proxies
=======
Module: :mod:`jupyterhub.proxy`
===============================
.. automodule:: jupyterhub.proxy
.. currentmodule:: jupyterhub.proxy
:class:`Proxy`
--------------
.. autoconfigurable:: Proxy
:members:
:class:`ConfigurableHTTPProxy`
------------------------------
.. autoconfigurable:: ConfigurableHTTPProxy
:members: debug, auth_token, check_running_interval, api_url, command

View File

@@ -0,0 +1,16 @@
========
Services
========
Module: :mod:`jupyterhub.services.service`
==========================================
.. automodule:: jupyterhub.services.service
.. currentmodule:: jupyterhub.services.service
:class:`Service`
----------------
.. autoconfigurable:: Service
:members: name, admin, url, api_token, managed, kind, command, cwd, environment, user, oauth_client_id, server, prefix, proxy_spec

View File

@@ -0,0 +1,40 @@
=======================
Services Authentication
=======================
Module: :mod:`jupyterhub.services.auth`
=======================================
.. automodule:: jupyterhub.services.auth
.. currentmodule:: jupyterhub.services.auth
:class:`HubAuth`
----------------
.. autoconfigurable:: HubAuth
:members:
:class:`HubOAuth`
-----------------
.. autoconfigurable:: HubOAuth
:members:
:class:`HubAuthenticated`
-------------------------
.. autoclass:: HubAuthenticated
:members:
:class:`HubOAuthenticated`
--------------------------
.. autoclass:: HubOAuthenticated
:class:`HubOAuthCallbackHandler`
--------------------------------
.. autoclass:: HubOAuthCallbackHandler

View File

@@ -0,0 +1,21 @@
========
Spawners
========
Module: :mod:`jupyterhub.spawner`
=================================
.. automodule:: jupyterhub.spawner
.. currentmodule:: jupyterhub.spawner
:class:`Spawner`
----------------
.. autoconfigurable:: Spawner
:members: options_from_form, poll, start, stop, get_args, get_env, get_state, template_namespace, format_string, create_certs, move_certs
:class:`LocalProcessSpawner`
----------------------------
.. autoconfigurable:: LocalProcessSpawner

36
docs/source/api/user.rst Normal file
View File

@@ -0,0 +1,36 @@
=====
Users
=====
Module: :mod:`jupyterhub.user`
==============================
.. automodule:: jupyterhub.user
.. currentmodule:: jupyterhub.user
:class:`UserDict`
-----------------
.. autoclass:: UserDict
:members:
:class:`User`
-------------
.. autoclass:: User
:members: escaped_name
.. attribute:: name
The user's name
.. attribute:: server
The user's Server data object if running, None otherwise.
Has ``ip``, ``port`` attributes.
.. attribute:: spawner
The user's :class:`~.Spawner` instance.

936
docs/source/changelog.md Normal file

File diff suppressed because one or more lines are too long

View File

@@ -1,93 +1,71 @@
# Configuration file for Sphinx to build our documentation to HTML.
# -*- coding: utf-8 -*-
#
# Configuration reference: https://www.sphinx-doc.org/en/master/usage/configuration.html
#
import contextlib
import datetime
import io
import os
import re
import subprocess
from pathlib import Path
from urllib.request import urlretrieve
import sys
from docutils import nodes
from intersphinx_registry import get_intersphinx_mapping
from ruamel.yaml import YAML
from sphinx.directives.other import SphinxDirective
from sphinx.util import logging
# Set paths
sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# Minimal Sphinx version
needs_sphinx = '1.4'
# Sphinx extension modules
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.napoleon',
'autodoc_traits',
'sphinx_copybutton',
'sphinx-jsonschema',
'recommonmark',
]
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'JupyterHub'
copyright = u'2016, Project Jupyter team'
author = u'Project Jupyter team'
# Autopopulate version
from os.path import dirname
docs = dirname(dirname(__file__))
root = dirname(docs)
sys.path.insert(0, root)
import jupyterhub
# The short X.Y version.
version = '%i.%i' % jupyterhub.version_info[:2]
# The full version, including alpha/beta/rc tags.
release = jupyterhub.__version__
language = None
exclude_patterns = []
pygments_style = 'sphinx'
todo_include_todos = False
# Set the default role so we can use `foo` instead of ``foo``
default_role = 'literal'
# -- Source -------------------------------------------------------------
import recommonmark
from recommonmark.transform import AutoStructify
# -- Config -------------------------------------------------------------
from jupyterhub.app import JupyterHub
from docutils import nodes
from sphinx.directives.other import SphinxDirective
from contextlib import redirect_stdout
from io import StringIO
logger = logging.getLogger(__name__)
# -- Project information -----------------------------------------------------
# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
#
project = "JupyterHub"
author = "Project Jupyter Contributors"
copyright = f"{datetime.date.today().year}, {author}"
# -- General Sphinx configuration --------------------------------------------
# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
#
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.intersphinx",
"sphinx.ext.napoleon",
"autodoc_traits",
"sphinx_copybutton",
"sphinx-jsonschema",
"sphinxext.opengraph",
"sphinxext.rediraffe",
"jupyterhub_sphinx_theme",
"myst_parser",
]
root_doc = "index"
source_suffix = [".md"]
# default_role lets us use `foo` instead of ``foo`` in rST
default_role = "literal"
docs = Path(__file__).parent.parent.absolute()
docs_source = docs / "source"
rest_api_yaml = docs_source / "_static" / "rest-api.yml"
# -- MyST configuration ------------------------------------------------------
# ref: https://myst-parser.readthedocs.io/en/latest/configuration.html
#
myst_heading_anchors = 2
myst_enable_extensions = [
# available extensions: https://myst-parser.readthedocs.io/en/latest/syntax/optional.html
"attrs_inline",
"colon_fence",
"deflist",
"fieldlist",
"substitution",
]
myst_substitutions = {
# date example: Dec 07, 2022
"date": datetime.date.today().strftime("%b %d, %Y").title(),
"node_min": "12",
"python_min": "3.8",
"version": jupyterhub.__version__,
}
# -- Custom directives to generate documentation -----------------------------
# ref: https://myst-parser.readthedocs.io/en/latest/syntax/roles-and-directives.html
#
# We define custom directives to help us generate documentation using Python on
# demand when referenced from our documentation files.
#
# Create a temp instance of JupyterHub for use by two separate directive classes
# to get the output from using the "--generate-config" and "--help-all" CLI
# flags respectively.
#
# create a temp instance of JupyterHub just to get the output of the generate-config
# and help --all commands.
jupyterhub_app = JupyterHub()
@@ -104,8 +82,8 @@ class ConfigDirective(SphinxDirective):
# The generated configuration file for this version
generated_config = jupyterhub_app.generate_config_file()
# post-process output
home_dir = os.environ["HOME"]
generated_config = generated_config.replace(home_dir, "$HOME", 1)
home_dir = os.environ['HOME']
generated_config = generated_config.replace(home_dir, '$HOME', 1)
par = nodes.literal_block(text=generated_config)
return [par]
@@ -121,246 +99,135 @@ class HelpAllDirective(SphinxDirective):
def run(self):
# The output of the help command for this version
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
jupyterhub_app.print_help("--help-all")
buffer = StringIO()
with redirect_stdout(buffer):
jupyterhub_app.print_help('--help-all')
all_help = buffer.getvalue()
# post-process output
home_dir = os.environ["HOME"]
all_help = all_help.replace(home_dir, "$HOME", 1)
home_dir = os.environ['HOME']
all_help = all_help.replace(home_dir, '$HOME', 1)
par = nodes.literal_block(text=all_help)
return [par]
class RestAPILinksDirective(SphinxDirective):
"""Directive to populate link targets for the REST API
The resulting nodes resolve xref targets,
but are not actually rendered in the final result
which is handled by a custom template.
"""
has_content = False
required_arguments = 0
optional_arguments = 0
final_argument_whitespace = False
option_spec = {}
def run(self):
targets = []
yaml = YAML(typ="safe")
with rest_api_yaml.open() as f:
api = yaml.load(f)
for path, path_spec in api["paths"].items():
for method, operation in path_spec.items():
operation_id = operation.get("operationId")
if not operation_id:
logger.warning(f"No operation id for {method} {path}")
continue
# 'id' is the id on the page (must match redoc anchor)
# 'name' is the name of the ref for use in our documents
target = nodes.target(
ids=[f"operation/{operation_id}"],
names=[f"rest-api-{operation_id}"],
)
targets.append(target)
self.state.document.note_explicit_target(target, target)
return targets
templates_path = ["_templates"]
def stage_redoc_js(app, exception):
"""Download redoc.js to our static files"""
if app.builder.name != "html":
logger.info(f"Skipping redoc download for builder: {app.builder.name}")
return
out_static = Path(app.builder.outdir) / "_static"
redoc_version = "2.1.3"
redoc_url = (
f"https://cdn.redoc.ly/redoc/v{redoc_version}/bundles/redoc.standalone.js"
)
dest = out_static / "redoc.js"
if not dest.exists():
logger.info(f"Downloading {redoc_url} -> {dest}")
urlretrieve(redoc_url, dest)
# stage fonts for redoc from google fonts
fonts_css_url = "https://fonts.googleapis.com/css?family=Montserrat:300,400,700|Roboto:300,400,700"
fonts_css_file = out_static / "redoc-fonts.css"
fonts_dir = out_static / "fonts"
fonts_dir.mkdir(exist_ok=True)
if not fonts_css_file.exists():
logger.info(f"Downloading {fonts_css_url} -> {fonts_css_file}")
urlretrieve(fonts_css_url, fonts_css_file)
# For each font external font URL,
# download the font and rewrite to a local URL
# The downloaded TTF fonts have license info in their metadata
with open(fonts_css_file) as f:
fonts_css = f.read()
fonts_css_changed = False
for font_url in re.findall(r'url\((https?[^\)]+)\)', fonts_css):
fonts_css_changed = True
filename = font_url.rpartition("/")[-1]
dest = fonts_dir / filename
local_url = str(dest.relative_to(fonts_css_file.parent))
fonts_css = fonts_css.replace(font_url, local_url)
if not dest.exists():
logger.info(f"Downloading {font_url} -> {dest}")
urlretrieve(font_url, dest)
if fonts_css_changed:
# rewrite font css with local URLs
with open(fonts_css_file, "w") as f:
logger.info(f"Rewriting URLs in {fonts_css_file}")
f.write(fonts_css)
def setup(app):
app.connect("build-finished", stage_redoc_js)
app.add_css_file("custom.css")
app.add_directive("jupyterhub-generate-config", ConfigDirective)
app.add_directive("jupyterhub-help-all", HelpAllDirective)
app.add_directive("jupyterhub-rest-api-links", RestAPILinksDirective)
app.add_css_file("https://docs.jupyter.org/en/latest/_static/jupyter.css")
app.add_config_value('recommonmark_config', {'enable_eval_rst': True}, True)
app.add_css_file('custom.css')
app.add_transform(AutoStructify)
app.add_directive('jupyterhub-generate-config', ConfigDirective)
app.add_directive('jupyterhub-help-all', HelpAllDirective)
# -- Read The Docs -----------------------------------------------------------
#
# Since RTD runs sphinx-build directly without running "make html", we run the
# pre-requisite steps for "make html" from here if needed.
#
if os.environ.get("READTHEDOCS"):
subprocess.check_call(["make", "metrics", "scopes"], cwd=str(docs))
source_suffix = ['.rst', '.md']
# source_encoding = 'utf-8-sig'
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages.
html_theme = 'pydata_sphinx_theme'
html_logo = '_static/images/logo/logo.png'
html_favicon = '_static/images/logo/favicon.ico'
# Paths that contain custom static files (such as style sheets)
html_static_path = ['_static']
htmlhelp_basename = 'JupyterHubdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# 'papersize': 'letterpaper',
# 'pointsize': '10pt',
# 'preamble': '',
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(
master_doc,
'JupyterHub.tex',
u'JupyterHub Documentation',
u'Project Jupyter team',
'manual',
)
]
# latex_logo = None
# latex_use_parts = False
# latex_show_pagerefs = False
# latex_show_urls = False
# latex_appendices = []
# latex_domain_indices = True
# -- Spell checking ----------------------------------------------------------
# ref: https://sphinxcontrib-spelling.readthedocs.io/en/latest/customize.html#configuration-options
#
# The "sphinxcontrib.spelling" extension is optionally enabled if its available.
#
# -- manual page output -------------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [(master_doc, 'jupyterhub', u'JupyterHub Documentation', [author], 1)]
# man_show_urls = False
# -- Texinfo output -----------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(
master_doc,
'JupyterHub',
u'JupyterHub Documentation',
author,
'JupyterHub',
'One line description of project.',
'Miscellaneous',
)
]
# texinfo_appendices = []
# texinfo_domain_indices = True
# texinfo_show_urls = 'footnote'
# texinfo_no_detailmenu = False
# -- Epub output --------------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = project
epub_author = author
epub_publisher = author
epub_copyright = copyright
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# -- Intersphinx ----------------------------------------------------------
intersphinx_mapping = {'https://docs.python.org/3/': None}
# -- Read The Docs --------------------------------------------------------
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
if on_rtd:
# readthedocs.org uses their theme by default, so no need to specify it
# build both metrics and rest-api, since RTD doesn't run make
from subprocess import check_call as sh
sh(['make', 'metrics', 'rest-api'], cwd=docs)
# -- Spell checking -------------------------------------------------------
try:
import sphinxcontrib.spelling # noqa
import sphinxcontrib.spelling
except ImportError:
pass
else:
extensions.append("sphinxcontrib.spelling")
spelling_word_list_filename = "spelling_wordlist.txt"
# -- Options for HTML output -------------------------------------------------
# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
#
html_logo = "_static/images/logo/logo.png"
html_favicon = "_static/images/logo/favicon.ico"
html_static_path = ["_static"]
html_theme = "jupyterhub_sphinx_theme"
html_theme_options = {
"announcement": "🚀 Join us in San Diego · JupyterCon 2025 · Nov 4-5 · <a href=\"https://events.linuxfoundation.org/jupytercon/program/schedule/?ajs_aid=53afb00d-be65-4a99-9112-28cdaac99463\">SCHEDULE</a> · <a href=\"https://events.linuxfoundation.org/jupytercon/register/?ajs_aid=53afb00d-be65-4a99-9112-28cdaac99463\">REGISTER NOW</a>",
"header_links_before_dropdown": 6,
"icon_links": [
{
"name": "GitHub",
"url": "https://github.com/jupyterhub/jupyterhub",
"icon": "fa-brands fa-github",
},
],
"use_edit_page_button": True,
"navbar_align": "left",
}
html_context = {
"github_user": "jupyterhub",
"github_repo": "jupyterhub",
"github_version": "main",
"doc_path": "docs/source",
}
# -- Options for linkcheck builder -------------------------------------------
# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-the-linkcheck-builder
#
linkcheck_ignore = [
r"(.*)github\.com(.*)#", # javascript based anchors
r"(.*)/#%21(.*)/(.*)", # /#!forum/jupyter - encoded anchor edge case
r"https?://(.*\.)?example\.(org|com)(/.*)?", # example links
r"https://github.com/[^/]*$", # too many github usernames / searches in changelog
"https://github.com/jupyterhub/jupyterhub/pull/", # too many PRs in changelog
"https://github.com/jupyterhub/jupyterhub/compare/", # too many comparisons in changelog
"https://schema.jupyter.org/jupyterhub/.*", # schemas are not published yet
r"https?://(localhost|127.0.0.1).*", # ignore localhost references in auto-links
r"https://linux.die.net/.*", # linux.die.net seems to block requests from CI with 403 sometimes
# don't check links to unpublished advisories
r"https://github.com/jupyterhub/jupyterhub/security/advisories/.*",
# Occasionally blocks CI checks with 403
r"https://www\.mysql\.com",
r"https://www\.npmjs\.com",
# Occasionally blocks CI checks with SSL error
r"https://mediaspace\.msu\.edu/.*",
]
linkcheck_anchors_ignore = [
"/#!",
"/#%21",
]
# -- Intersphinx -------------------------------------------------------------
# ref: https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html#configuration
#
intersphinx_mapping = get_intersphinx_mapping(
packages={
"python",
"tornado",
"jupyter-server",
"nbgitpuller",
}
)
# -- Options for the opengraph extension -------------------------------------
# ref: https://github.com/wpilibsuite/sphinxext-opengraph#options
#
# ogp_site_url is set automatically by RTD
ogp_image = "_static/logo.png"
ogp_use_first_image = True
# -- Options for the rediraffe extension -------------------------------------
# ref: https://github.com/wpilibsuite/sphinxext-rediraffe#readme
#
# This extension helps us relocate content without breaking links. If a
# document is moved internally, a redirect link should be configured as below to
# help us not break links.
#
# The workflow for adding redirects can be as follows:
# 1. Change "rediraffe_branch" below to point to the commit/ branch you
# want to base off the changes.
# 2. Option 1: run "make rediraffecheckdiff"
# a. Analyze the output of this command.
# b. Manually add the redirect entries to the "redirects.txt" file.
# Option 2: run "make rediraffewritediff"
# a. rediraffe will then automatically add the obvious redirects to redirects.txt.
# b. Analyze the output of the command for broken links.
# c. Check the "redirects.txt" file for any files that were moved/ renamed but are not listed.
# d. Manually add the redirects that have been missed by the automatic builder to "redirects.txt".
# Option 3: Do not use the commands above and, instead, do everything manually - by taking
# note of the files you have moved or renamed and adding them to the "redirects.txt" file.
#
# If you are basing changes off another branch/ commit, always change back
# rediraffe_branch to main before pushing your changes upstream.
#
rediraffe_branch = os.environ.get("REDIRAFFE_BRANCH", "main")
rediraffe_redirects = "redirects.txt"
# allow 80% match for autogenerated redirects
rediraffe_auto_redirect_perc = 80
# rediraffe_redirects = {
# "old-file": "new-folder/new-file-name",
# }
spelling_word_list_filename = 'spelling_wordlist.txt'

View File

@@ -1,42 +0,0 @@
(contributing:community)=
# Community communication channels
```{note}
Our community is distributed across the world in various timezones, so please be patient if you do not get a response immediately!
```
We use different channels of communication for different purposes. Whichever one you use will depend on what kind of communication you want to engage in.
## Discourse (recommended)
```{note}
[Discourse] is open source.
```
We use the [Jupyter instance of Discourse] for online discussions and support questions.
It is the best place to ask questions if you are a first-time contributor to the JupyterHub project, and everyone is welcome to bring ideas and questions there.
We recommend using the [Jupyter instance of Discourse] first, as all past and current discussions on it are archived and searchable, so they remain useful and accessible to the whole community.
## Zulip
```{note}
[Zulip] is open source.
```
We use [Jupyter instance of Zulip] for online, real-time text chat; a place for more ephemeral discussions. When you're not on [Jupyter instance of Discourse], you can stop at [Jupyter instance of Zulip] to have other discussions on the fly.
## GitHub Issues
[GitHub issues](https://docs.github.com/en/issues/tracking-your-work-with-issues/about-issues) are used for most long-form project discussions, bug reports and feature requests.
- Issues related to a specific authenticator or spawner should be opened in the appropriate repository for the authenticator or spawner.
- If you are using a specific JupyterHub distribution (such as [Zero to JupyterHub on Kubernetes](https://github.com/jupyterhub/zero-to-jupyterhub-k8s) or [The Littlest JupyterHub](https://github.com/jupyterhub/the-littlest-jupyterhub/)), you should open issues directly in their repository.
- If you cannot find a repository to open your issue in, do not worry! Open the issue in the [main JupyterHub repository](https://github.com/jupyterhub/jupyterhub/) and our community will help you figure it out.
[Discourse]: https://www.discourse.org/
[Jupyter instance of Discourse]: https://discourse.jupyter.org
[Jupyter instance of Zulip]: https://jupyter.zulipchat.com/
[Zulip]: https://zulip.com/

View File

@@ -0,0 +1,30 @@
.. _contributing/community:
================================
Community communication channels
================================
We use `Discourse <https://discourse.jupyter.org>`_ for online discussion.
Everyone in the Jupyter community is welcome to bring ideas and questions there.
In addition, we use `Gitter <https://gitter.im>`_ for online, real-time text chat,
a place for more ephemeral discussions.
The primary Gitter channel for JupyterHub is `jupyterhub/jupyterhub <https://gitter.im/jupyterhub/jupyterhub>`_.
Gitter isn't archived or searchable, so we recommend going to Discourse first
to make sure that discussions are most useful and accessible to the community.
Remember that our community is distributed across the world in various
timezones, so be patient if you do not get an answer immediately!
GitHub issues are used for most long-form project discussions, bug reports
and feature requests. Issues related to a specific authenticator or
spawner should be directed to the appropriate repository for the
authenticator or spawner. If you are using a specific JupyterHub
distribution (such as `Zero to JupyterHub on Kubernetes <http://github.com/jupyterhub/zero-to-jupyterhub-k8s>`_
or `The Littlest JupyterHub <http://github.com/jupyterhub/the-littlest-jupyterhub/>`_),
you should open issues directly in their repository. If you cannot
find a repository to open your issue in, do not worry! Create it in the `main
JupyterHub repository <https://github.com/jupyterhub/jupyterhub/>`_ and our
community will help you figure it out.
A `mailing list <https://groups.google.com/forum/#!forum/jupyter>`_ for all
of Project Jupyter exists, along with one for `teaching with Jupyter
<https://groups.google.com/forum/#!forum/jupyter-education>`_.

View File

@@ -1,69 +0,0 @@
(contributing:docs)=
# Contributing Documentation
Documentation is often more important than code. This page helps
you get set up on how to contribute to JupyterHub's documentation.
We use [Sphinx](https://www.sphinx-doc.org) to build our documentation. It takes
our documentation source files (written in [Markedly Structured Text (MyST)](https://mystmd.org/) and
stored under the `docs/source` directory) and converts it into various
formats for people to read.
## Building documentation locally
To make sure the documentation you write or
change renders correctly, it is good practice to test it locally.
```{note}
You will need Python and Git installed. Installation details are available at {ref}`contributing:setup`.
```
1. Install the packages required to build the docs.
```bash
python3 -m pip install -r docs/requirements.txt
python3 -m pip install sphinx-autobuild
```
2. Build the HTML version of the docs. This is the most commonly used
output format, so verifying it renders correctly is usually good
enough.
```bash
sphinx-autobuild docs/source/ docs/_build/html
```
This step will display any syntax or formatting errors in the documentation,
along with the filename / line number in which they occurred. Fix them,
and the HTML will be re-rendered automatically.
3. View the rendered documentation by opening <http://127.0.0.1:8000> in
a web browser.
(contributing-docs-conventions)=
## Documentation conventions
This section lists various conventions we use in our documentation. This is a
living document that grows over time, so feel free to add to it / change it!
Our entire documentation does not fully conform to these conventions yet,
so help in making it so would be appreciated!
### `pip` invocation
There are many ways to invoke a `pip` command; we recommend the following
approach:
```bash
python3 -m pip
```
This invokes `pip` explicitly using the `python3` binary that you are
currently using. This is the **recommended way** to invoke pip
in our documentation, since it is least likely to cause problems
with `python3` and `pip` being from different environments.
For more information on how to invoke `pip` commands, see
[the `pip` documentation](https://pip.pypa.io/en/stable/).

View File

@@ -0,0 +1,78 @@
.. _contributing/docs:
==========================
Contributing Documentation
==========================
Documentation is often more important than code. This page helps
you get set up on how to contribute documentation to JupyterHub.
Building documentation locally
==============================
We use `sphinx <http://sphinx-doc.org>`_ to build our documentation. It takes
our documentation source files (written in `markdown
<https://daringfireball.net/projects/markdown/>`_ or `reStructuredText
<http://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_ &
stored under the ``docs/source`` directory) and converts it into various
formats for people to read. To make sure the documentation you write or
change renders correctly, it is good practice to test it locally.
#. Make sure you have successfully completed :ref:`contributing/setup`.
#. Install the packages required to build the docs.
.. code-block:: bash
python3 -m pip install -r docs/requirements.txt
#. Build the html version of the docs. This is the most commonly used
output format, so verifying it renders as it should is usually good
enough.
.. code-block:: bash
cd docs
make html
This step will display any syntax or formatting errors in the documentation,
along with the filename / line number in which they occurred. Fix them,
and re-run the ``make html`` command to re-render the documentation.
#. View the rendered documentation by opening ``build/html/index.html`` in
a web browser.
.. tip::
On macOS, you can open a file from the terminal with ``open <path-to-file>``.
On Linux, you can do the same with ``xdg-open <path-to-file>``.
.. _contributing/docs/conventions:
Documentation conventions
=========================
This section lists various conventions we use in our documentation. This is a
living document that grows over time, so feel free to add to it / change it!
Our entire documentation does not fully conform to these conventions yet,
so help in making it so would be appreciated!
``pip`` invocation
------------------
There are many ways to invoke a ``pip`` command; we recommend the following
approach:
.. code-block:: bash
python3 -m pip
This invokes pip explicitly using the python3 binary that you are
currently using. This is the **recommended way** to invoke pip
in our documentation, since it is least likely to cause problems
with python3 and pip being from different environments.
For more information on how to invoke ``pip`` commands, see
`the pip documentation <https://pip.pypa.io/en/stable/>`_.

View File

@@ -1,24 +0,0 @@
(contributing)=
# Contributing
We want you to contribute to JupyterHub in ways that are most exciting
and useful to you. We value documentation, testing, bug reporting and code equally,
and are glad to have your contributions in whatever form you wish.
Be sure to first check our [Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md)
([reporting guidelines](https://github.com/jupyter/governance/blob/HEAD/conduct/reporting_online.md)), which help keep our community welcoming to as many people as possible.
This section covers information about our community, as well as ways that you can connect and get involved.
```{toctree}
:maxdepth: 2
contributor-list
community
setup
docs
tests
roadmap
security
```

View File

@@ -0,0 +1,21 @@
============
Contributing
============
We want you to contribute to JupyterHub in ways that are most exciting
& useful to you. We value documentation, testing, bug reporting & code equally,
and are glad to have your contributions in whatever form you wish :)
Our `Code of Conduct <https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md>`_
(`reporting guidelines <https://github.com/jupyter/governance/blob/master/conduct/reporting_online.md>`_)
helps keep our community welcoming to as many people as possible.
.. toctree::
:maxdepth: 2
community
setup
docs
tests
roadmap
security

View File

@@ -1,15 +1,13 @@
(contributing:roadmap)=
# The JupyterHub roadmap
This roadmap collects "next steps" for JupyterHub. It is about creating a
shared understanding of the project's vision and direction amongst
the community of users, contributors, and maintainers.
The goal is to communicate priorities and upcoming release plans.
It is not aimed at limiting contributions to what is listed here.
## Using the roadmap
### Sharing Feedback on the Roadmap
All of the community is encouraged to provide feedback as well as share new
@@ -24,17 +22,17 @@ maintainers will help identify what a good next step is for the issue.
When submitting an issue, think about what "next step" category best describes
your issue:
- **now**, a concrete/actionable step that is ready for someone to start work on.
  These might be items that have a link to an issue, or more abstract items
  like "decrease typos and dead links in the documentation"
- **soon**, a less concrete/actionable step that is going to happen soon;
  discussions around the topic are coming close to an end, at which point it can
  move into the "now" category
- **later**, abstract ideas or tasks that need a lot of discussion or
  experimentation to shape the idea so that it can be executed. Can also
  contain concrete/actionable steps that have been postponed on purpose
  (these are steps that could be in "now" but the decision was taken to work on
  them later)
* **now**, a concrete/actionable step that is ready for someone to start work on.
  These might be items that have a link to an issue, or more abstract items
  like "decrease typos and dead links in the documentation"
* **soon**, a less concrete/actionable step that is going to happen soon;
  discussions around the topic are coming close to an end, at which point it can
  move into the "now" category
* **later**, abstract ideas or tasks that need a lot of discussion or
  experimentation to shape the idea so that it can be executed. Can also
  contain concrete/actionable steps that have been postponed on purpose
  (these are steps that could be in "now" but the decision was taken to work on
  them later)
### Reviewing and Updating the Roadmap
@@ -49,8 +47,8 @@ For those please create a
The roadmap should give the reader an idea of what is happening next, what needs
input and discussion before it can happen and what has been postponed.
## The roadmap proper
### Project vision
JupyterHub is a dependable tool used by humans that reduces the complexity of
@@ -60,19 +58,20 @@ creating the environment in which a piece of software can be executed.
These "Now" items are considered active areas of focus for the project:
- HubShare - a sharing service for use with JupyterHub.
- Users should be able to:
- Push a project to other users.
- Get a checkout of a project from other users.
- Push updates to a published project.
- Pull updates from a published project.
- Manage conflicts/merges by simply picking a version (our/theirs)
- Get a checkout of a project from the internet. These steps are completely different from saving notebooks/files.
- Have directories that are managed by git completely separately from our stuff.
- Look at pushed content that they have access to without an explicit pull.
- Define and manage teams of users.
- Adding/removing a user to/from a team gives/removes them access to all projects that team has access to.
- Build other services, such as static HTML publishing and dashboarding on top of these things.
* HubShare - a sharing service for use with JupyterHub.
* Users should be able to:
- Push a project to other users.
- Get a checkout of a project from other users.
- Push updates to a published project.
- Pull updates from a published project.
- Manage conflicts/merges by simply picking a version (our/theirs)
- Get a checkout of a project from the internet. These steps are completely different from saving notebooks/files.
- Have directories that are managed by git completely separately from our stuff.
- Look at pushed content that they have access to without an explicit pull.
- Define and manage teams of users.
- Adding/removing a user to/from a team gives/removes them access to all projects that team has access to.
- Build other services, such as static HTML publishing and dashboarding on top of these things.
### Soon
@@ -80,10 +79,11 @@ These "Soon" items are under discussion. Once an item reaches the point of an
actionable plan, the item will be moved to the "Now" section. Typically,
these will be moved at a future review of the roadmap.
- resource monitoring and management:
- (prometheus?) API for resource monitoring
- tracking activity on single-user servers instead of the proxy
- notes and activity tracking per API token
* resource monitoring and management:
- (prometheus?) API for resource monitoring
- tracking activity on single-user servers instead of the proxy
- notes and activity tracking per API token
### Later
@@ -92,6 +92,6 @@ time there is no active plan for an item. The project would like to find the
resources and time to discuss these ideas.
- real-time collaboration
  - Enter into real-time collaboration mode for a project that starts a shared execution context.
  - Once the single-user notebook package supports real-time collaboration,
    implement a sharing mechanism integrated into the Hub.

View File

@@ -1,15 +0,0 @@
(contributing:security)=
# Reporting security issues in Jupyter or JupyterHub
If you find a security vulnerability in Jupyter or JupyterHub,
whether it is a failure of the security model described in [Security Overview](explanation:security)
or a failure in implementation,
please report it!
Please use GitHub's "Report a Vulnerability" button under Security > Advisories on the appropriate repo,
e.g. [report here for JupyterHub](https://github.com/jupyterhub/jupyterhub/security/advisories).
You may also send an email to <mailto:security@ipython.org>, but the GitHub reporting system is preferred.
If you prefer to encrypt your security reports,
you can use {download}`this PGP public key </ipython_security.asc>`.

View File

@@ -0,0 +1,10 @@
Reporting security issues in Jupyter or JupyterHub
==================================================
If you find a security vulnerability in Jupyter or JupyterHub,
whether it is a failure of the security model described in :doc:`../reference/websecurity`
or a failure in implementation,
please report it to security@ipython.org.
If you prefer to encrypt your security reports,
you can use :download:`this PGP public key </ipython_security.asc>`.

View File

@@ -1,261 +0,0 @@
(contributing:setup)=
# Setting up a development install
JupyterHub's continuous integration runs on [Ubuntu LTS](https://ubuntu.com/).
While JupyterHub is only tested on one [Linux distribution](https://en.wikipedia.org/wiki/Linux_distribution),
it should be fairly insensitive to variations between common [POSIX](https://en.wikipedia.org/wiki/POSIX) implementations,
though we don't have the bandwidth to verify this automatically and continuously.
Feel free to try it on your platform, and be sure to {ref}`let us know <contributing:community>` about any issues you encounter.
## System requirements
Your system **must** be able to run
- Python
- NodeJS
- Git
In our small team's experience, JupyterHub works well on macOS and Linux operating systems.
```{admonition} What about Windows?
Some users have reported that JupyterHub runs successfully on [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/). We have no plans to support Windows outside of the WSL.
```
```{admonition} What about virtualization?
Using any form of virtualization (for example, [VirtualBox](https://www.virtualbox.org/), [Docker](https://www.docker.com/), [Podman](https://podman.io/), [WSL](https://learn.microsoft.com/en-us/windows/wsl/)) is a good way to get up and running quickly, though properly configuring the networking settings can be a bit tricky.
```
### Install Python
JupyterHub is written in the [Python](https://www.python.org) programming language and
requires you have at least version {{python_min}} installed locally. If you haven't
installed Python before, the recommended way to install it is to use
[Miniforge](https://github.com/conda-forge/miniforge#download).
### Install NodeJS
Some JavaScript components require you have at least version {{node_min}} of [NodeJS](https://nodejs.org/en/) installed locally.
`configurable-http-proxy`, the default proxy implementation for JupyterHub, is written in JavaScript.
If you have not installed NodeJS before, we recommend installing it in the `miniconda` environment you set up for Python.
You can do so with `conda install nodejs`.
Many in the Jupyter community use [`nvm`](https://github.com/nvm-sh/nvm) to
manage node dependencies.
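For example, assuming `nvm` is already installed (its README covers setup),
you could install and activate a recent long-term-support release of Node.js:

```bash
nvm install --lts   # install the latest LTS release of Node.js
nvm use --lts       # activate it in the current shell
```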
### Install Git
JupyterHub uses [Git](https://git-scm.com) and [GitHub](https://github.com)
for development and collaboration. You need to [install Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) to work on
JupyterHub. We also recommend getting a free account on GitHub.
## Install JupyterHub for development
When developing JupyterHub, you need to make changes and see their effects quickly. To achieve that, a developer install is required.
:::{note}
This guide does not attempt to dictate _how_ development
environments should be isolated since that is a personal preference and can
be achieved in many ways, for example, `tox`, `conda`, `docker`, etc. See this
[forum thread](https://discourse.jupyter.org/t/thoughts-on-using-tox/3497) for
a more detailed discussion.
:::
1. Clone the [JupyterHub Git repository](https://github.com/jupyterhub/jupyterhub)
to your computer.
```bash
git clone https://github.com/jupyterhub/jupyterhub
cd jupyterhub
```
2. Make sure the `python` you installed and the `npm` you installed
are available to you on the command line.
```bash
python -V
```
This should return a version number greater than or equal to {{python_min}}.
```bash
npm -v
```
This should return a version number greater than or equal to {{node_min}}.
3. Install `configurable-http-proxy` (required to run and test the default JupyterHub configuration):
```bash
npm install -g configurable-http-proxy
```
If you get an error that says `Error: EACCES: permission denied`, you might need to prefix the command with `sudo`, since a system-wide install may require elevated permissions.
If you do not have access to sudo, you may instead run the following commands:
```bash
npm install configurable-http-proxy
export PATH=$PATH:$(pwd)/node_modules/.bin
```
The second line needs to be run every time you open a new terminal.
If you are using conda you can instead run:
```bash
conda install configurable-http-proxy
```
4. Install an editable version of JupyterHub and its requirements for
development and testing. This lets you edit JupyterHub code in a text editor
and restart the JupyterHub process to see your code changes immediately.
```bash
python3 -m pip install --editable ".[test]"
```
5. You are now ready to start JupyterHub!
```bash
jupyterhub
```
6. You can access JupyterHub from your browser at
`http://localhost:8000` now.
Happy developing!
## Using DummyAuthenticator and SimpleLocalProcessSpawner
To simplify testing of JupyterHub, it is helpful to use
{class}`~jupyterhub.auth.DummyAuthenticator` instead of the default JupyterHub
authenticator and SimpleLocalProcessSpawner instead of the default spawner.
There is a sample configuration file that does this in
`testing/jupyterhub_config.py`. To launch JupyterHub with this
configuration:
```bash
jupyterhub -f testing/jupyterhub_config.py
```
The test configuration enables a few things to make testing easier:
- use 'dummy' authentication and 'simple' spawner
- named servers are enabled
- listen only on localhost
- 'admin' is an admin user, if you want to test the admin page
- disable caching of static files
The default JupyterHub [authenticator](PAMAuthenticator)
and [spawner](LocalProcessSpawner)
require your system to have user accounts for each user you want to log in to
JupyterHub as.
DummyAuthenticator allows you to log in with any username and password,
while SimpleLocalProcessSpawner allows you to start servers without having to
create a Unix user for each JupyterHub user. Together, these make it
much easier to test JupyterHub.
Tip: If you are working on parts of JupyterHub that are common to all
authenticators and spawners, we recommend using both DummyAuthenticator and
SimpleLocalProcessSpawner. If you are working on just authenticator-related
parts, use only SimpleLocalProcessSpawner. Similarly, if you are working on
just spawner-related parts, use only DummyAuthenticator.
## Building frontend components
The testing configuration file also disables caching of static files,
which allows you to edit and rebuild these files without restarting JupyterHub.
If you are working on the admin react page, which is in the `jsx` directory, you can run:
```bash
cd jsx
npm install
npm run build:watch
```
to continuously rebuild the admin page, requiring only a refresh of the page.
If you are working on the frontend SCSS files, you can run the same `build:watch` command
in the _top level_ directory of the repo:
```bash
npm install
npm run build:watch
```
## Troubleshooting
This section lists common ways setting up your development environment may
fail, and how to fix them. Please add to the list if you encounter yet
another way it can fail!
### `lessc` not found
If the `python3 -m pip install --editable .` command fails and complains about
`lessc` being unavailable, you may need to explicitly install some
additional JavaScript dependencies:
```bash
npm install
```
This will fetch client-side JavaScript dependencies necessary to compile
CSS.
You may also need to manually update JavaScript and CSS after some
development updates, with:
```bash
python3 setup.py js # fetch updated client-side js
python3 setup.py css # recompile CSS from LESS sources
python3 setup.py jsx # build React admin app
```
### Failed to bind XXX to `http://127.0.0.1:<port>/<path>`
This error can happen when there's already an application or a service using this
port.
Use the following command to find out which service is using this port.
```bash
lsof -P -i TCP:<port> -sTCP:LISTEN
```
If nothing shows up, it likely means a system service is using the port but
your current user cannot list it. Re-run the same command with sudo.
```bash
sudo lsof -P -i TCP:<port> -sTCP:LISTEN
```
Depending on the result of the above commands, the simplest solution is to
configure JupyterHub to use a different port for the service that is failing.
As an example, the following is a frequently seen issue:
`Failed to bind hub to http://127.0.0.1:8081/hub/`
Using the procedure described above, start with:
```bash
lsof -P -i TCP:8081 -sTCP:LISTEN
```
and if nothing shows up:
```bash
sudo lsof -P -i TCP:8081 -sTCP:LISTEN
```
Finally, depending on your findings, you can apply the following change and start JupyterHub again:
```python
c.JupyterHub.hub_port = 9081 # Or any other free port
```

View File

@@ -0,0 +1,188 @@
.. _contributing/setup:
================================
Setting up a development install
================================
System requirements
===================
JupyterHub can only run on macOS or Linux operating systems. If you are
using Windows, we recommend using `VirtualBox <https://virtualbox.org>`_
or a similar system to run `Ubuntu Linux <https://ubuntu.com>`_ for
development.
Install Python
--------------
JupyterHub is written in the `Python <https://python.org>`_ programming language, and
requires you have at least version 3.5 installed locally. If you haven't
installed Python before, the recommended way to install it is to use
`miniconda <https://conda.io/miniconda.html>`_. Remember to get the Python 3 version,
and **not** the Python 2 version!
Install nodejs
--------------
``configurable-http-proxy``, the default proxy implementation for
JupyterHub, is written in JavaScript to run on `NodeJS
<https://nodejs.org/en/>`_. If you have not installed nodejs before, we
recommend installing it in the ``miniconda`` environment you set up for
Python. You can do so with ``conda install nodejs``.
Install git
-----------
JupyterHub uses `git <https://git-scm.com>`_ & `GitHub <https://github.com>`_
for development & collaboration. You need to `install git
<https://git-scm.com/book/en/v2/Getting-Started-Installing-Git>`_ to work on
JupyterHub. We also recommend getting a free account on GitHub.com.
Setting up a development install
================================
When developing JupyterHub, you need to make changes to the code & see
their effects quickly. You need to do a developer install to make that
happen.
.. note:: This guide does not attempt to dictate *how* development
environments should be isolated since that is a personal preference and can
be achieved in many ways, for example `tox`, `conda`, `docker`, etc. See this
`forum thread <https://discourse.jupyter.org/t/thoughts-on-using-tox/3497>`_ for
a more detailed discussion.
1. Clone the `JupyterHub git repository <https://github.com/jupyterhub/jupyterhub>`_
to your computer.
.. code:: bash
git clone https://github.com/jupyterhub/jupyterhub
cd jupyterhub
2. Make sure the ``python`` you installed and the ``npm`` you installed
are available to you on the command line.
.. code:: bash
python -V
This should return a version number greater than or equal to 3.5.
.. code:: bash
npm -v
This should return a version number greater than or equal to 5.0.
3. Install ``configurable-http-proxy``. This is required to run
JupyterHub.
.. code:: bash
npm install -g configurable-http-proxy
If you get an error that says ``Error: EACCES: permission denied``,
you might need to prefix the command with ``sudo``. If you do not
have access to sudo, you may instead run the following commands:
.. code:: bash
npm install configurable-http-proxy
export PATH=$PATH:$(pwd)/node_modules/.bin
The second line needs to be run every time you open a new terminal.
4. Install the python packages required for JupyterHub development.
.. code:: bash
python3 -m pip install -r dev-requirements.txt
python3 -m pip install -r requirements.txt
5. Setup a database.
The default database engine is ``sqlite`` so if you are just trying
to get up and running quickly for local development that should be
available via `python <https://docs.python.org/3.5/library/sqlite3.html>`__.
See :doc:`/reference/database` for details on other supported databases.
6. Install the development version of JupyterHub. This lets you edit
JupyterHub code in a text editor & restart the JupyterHub process to
see your code changes immediately.
.. code:: bash
python3 -m pip install --editable .
7. You are now ready to start JupyterHub!
.. code:: bash
jupyterhub
8. You can access JupyterHub from your browser at
``http://localhost:8000`` now.
Happy developing!
Using DummyAuthenticator & SimpleLocalProcessSpawner
====================================================
To simplify testing of JupyterHub, it's helpful to use
:class:`~jupyterhub.auth.DummyAuthenticator` instead of the default JupyterHub
authenticator and SimpleLocalProcessSpawner instead of the default spawner.
There is a sample configuration file that does this in
``testing/jupyterhub_config.py``. To launch jupyterhub with this
configuration:
.. code:: bash
jupyterhub -f testing/jupyterhub_config.py
The default JupyterHub `authenticator
<https://jupyterhub.readthedocs.io/en/stable/reference/authenticators.html#the-default-pam-authenticator>`_
& `spawner
<https://jupyterhub.readthedocs.io/en/stable/api/spawner.html#localprocessspawner>`_
require your system to have user accounts for each user you want to log in to
JupyterHub as.
DummyAuthenticator allows you to log in with any username & password,
while SimpleLocalProcessSpawner allows you to start servers without having to
create a Unix user for each JupyterHub user. Together, these make it
much easier to test JupyterHub.
Tip: If you are working on parts of JupyterHub that are common to all
authenticators & spawners, we recommend using both DummyAuthenticator &
SimpleLocalProcessSpawner. If you are working on just authenticator-related
parts, use only SimpleLocalProcessSpawner. Similarly, if you are working on
just spawner-related parts, use only DummyAuthenticator.
Troubleshooting
===============
This section lists common ways setting up your development environment may
fail, and how to fix them. Please add to the list if you encounter yet
another way it can fail!
``lessc`` not found
-------------------
If the ``python3 -m pip install --editable .`` command fails and complains about
``lessc`` being unavailable, you may need to explicitly install some
additional JavaScript dependencies:
.. code:: bash
npm install
This will fetch client-side JavaScript dependencies necessary to compile
CSS.
You may also need to manually update JavaScript and CSS after some
development updates, with:
.. code:: bash
python3 setup.py js # fetch updated client-side js
python3 setup.py css # recompile CSS from LESS sources

View File

@@ -1,165 +0,0 @@
(contributing-tests)=
# Testing JupyterHub and linting code
Unit testing helps to validate that JupyterHub works the way we think it does,
and continues to do so when changes occur. They also help communicate
precisely what we expect our code to do.
JupyterHub uses [`pytest`](https://pytest.org) for all the tests. You
can find them under the [jupyterhub/tests](https://github.com/jupyterhub/jupyterhub/tree/main/jupyterhub/tests) directory in the Git repository.
```{note}
Before running any tests, make sure you have completed {ref}`contributing:setup`.
Once you are done, you should be able to run `jupyterhub` from the command line and access it from your web browser.
This ensures that the development environment is properly set up for tests to run.
```
```{note}
For details of `pytest`, refer to the [`pytest` usage documentation](https://pytest.readthedocs.io/en/latest/usage.html).
```
## Running all the tests
You can run all the tests in JupyterHub with:
```bash
pytest -v jupyterhub/tests
```
This should display progress as it runs all the tests, printing
information about any test failures as they occur.
If you wish to confirm test coverage, run the tests with the `--cov` flag:
```bash
pytest -v --cov=jupyterhub jupyterhub/tests
```
## Running tests from a specific file
You can also run tests in just a specific file:
```bash
pytest -v jupyterhub/tests/<test-file-name>
```
## Running a single test
To run a specific test only, you can do:
```bash
pytest -v jupyterhub/tests/<test-file-name>::<test-name>
```
This runs the test with function name `<test-name>` defined in
`<test-file-name>`. This is very useful when you are iteratively
developing a single test.
For example, to run the test `test_shutdown` in the file `test_api.py`,
you would run:
```bash
pytest -v jupyterhub/tests/test_api.py::test_shutdown
```
## Test organisation
The tests live in `jupyterhub/tests` and are organized roughly into:
1. `test_api.py`: tests the REST API
2. `test_pages.py`: tests loading the HTML pages
and other collections of tests for different components.
When writing a new test, there is usually a test of
similar functionality already written; related tests should
be added nearby.
The fixtures live in `jupyterhub/tests/conftest.py`. There are
fixtures that can be used for JupyterHub components, such as:
- `app`: an instance of JupyterHub with mocked parts
- `auth_state_enabled`: enables persisting auth_state (like authentication tokens)
- `db`: a sqlite in-memory DB session
- `io_loop`: a Tornado event loop
- `event_loop`: a new asyncio event loop
- `user`: creates a new temporary user
- `admin_user`: creates a new temporary admin user
- single user servers
  - `cleanup_after`: allows cleanup of single user servers between tests
- mocked service
  - `MockServiceSpawner`: a spawner that mocks services for testing with a short poll interval
  - `mockservice`: mocked service with no external service url
  - `mockservice_url`: mocked service with a url to test external services
And fixtures to add functionality or spawning behavior:
- `admin_access`: grants admin access
- `no_patience`: sets slow-spawning timeouts to zero
- `slow_spawn`: enables the SlowSpawner (a spawner that takes a few seconds to start)
- `never_spawn`: enables the NeverSpawner (a spawner that will never start)
- `bad_spawn`: enables the BadSpawner (a spawner that fails immediately)
- `slow_bad_spawn`: enables the SlowBadSpawner (a spawner that fails after a short delay)
Refer to the [pytest fixtures documentation](https://pytest.readthedocs.io/en/latest/fixture.html) to learn how to use fixtures that already exist and to create new ones.
### The Pytest-Asyncio Plugin
When testing the various JupyterHub components and their various implementations, it sometimes becomes necessary to have a running instance of JupyterHub to test against.
The [`app`](https://github.com/jupyterhub/jupyterhub/blob/270b61992143b29af8c2fab90c4ed32f2f6fe209/jupyterhub/tests/conftest.py#L60) fixture mocks a JupyterHub application for use in testing by:
- enabling ssl if internal certificates are available
- creating an instance of [MockHub](https://github.com/jupyterhub/jupyterhub/blob/270b61992143b29af8c2fab90c4ed32f2f6fe209/jupyterhub/tests/mocking.py#L221) using any provided configurations as arguments
- initializing the mocked instance
- starting the mocked instance
- finally, a registered finalizer function performs a cleanup and stops the mocked instance
The JupyterHub test suite uses the [pytest-asyncio plugin](https://pytest-asyncio.readthedocs.io/en/latest/) that handles [event-loop](https://docs.python.org/3/library/asyncio-eventloop.html) integration in [Tornado](https://www.tornadoweb.org/en/stable/) applications. This allows for the use of top-level awaits when calling async functions or [fixtures](https://docs.pytest.org/en/6.2.x/fixture.html#what-fixtures-are) during testing. All test functions and fixtures labelled as `async` will run on the same event loop.
```{note}
With the introduction of [top-level awaits](https://piccolo-orm.com/blog/top-level-await-in-python/), the use of the `io_loop` fixture of the [pytest-tornado plugin](https://www.tornadoweb.org/en/stable/ioloop.html) is no longer necessary. It was initially used to call coroutines. With the upgrades made to `pytest-asyncio`, this usage is now deprecated. It is now only utilized within the JupyterHub test suite to ensure complete cleanup of resources used during testing, such as open file descriptors. This is demonstrated in this [pull request](https://github.com/jupyterhub/jupyterhub/pull/4332).
More information is provided below.
```
One of the general goals of the [JupyterHub Pytest Plugin project](https://github.com/jupyterhub/pytest-jupyterhub) is to ensure that the MockHub cleanup fully closes and stops all resources used during testing, so that the `io_loop` fixture is no longer needed for teardown. This was highlighted in this [issue](https://github.com/jupyterhub/pytest-jupyterhub/issues/30).
For more information on asyncio and event-loops, here are some resources:
- **Read**: [Introduction to the Python event loop](https://www.pythontutorial.net/python-concurrency/python-event-loop)
- **Read**: [Overview of Async IO in Python 3.7](https://stackabuse.com/overview-of-async-io-in-python-3-7)
- **Watch**: [Asyncio: Understanding Async / Await in Python](https://www.youtube.com/watch?v=bs9tlDFWWdQ)
- **Watch**: [Learn Python's AsyncIO #2 - The Event Loop](https://www.youtube.com/watch?v=E7Yn5biBZ58)
## Troubleshooting Test Failures
### All the tests are failing
Make sure you have completed all the steps in {ref}`contributing:setup` successfully, and are able to access JupyterHub from your browser at <http://localhost:8000> after starting `jupyterhub` on the command line.
## Code formatting and linting
JupyterHub automatically enforces code formatting. This means that pull requests
with changes breaking this formatting will receive a commit from pre-commit.ci
automatically.
To automatically format code locally, you can install pre-commit and register a
_git hook_ to automatically check with pre-commit before you make a commit if
the formatting is okay.
```bash
pip install pre-commit
pre-commit install --install-hooks
```
To run pre-commit manually you would do:
```bash
# check for changes to code not yet committed
pre-commit run
# check for changes also in already committed code
pre-commit run --all-files
```
You may also install [black integration](https://github.com/psf/black#editor-integration)
into your text editor to format code automatically.

View File

@@ -0,0 +1,68 @@
.. _contributing/tests:
==================
Testing JupyterHub
==================
Unit tests help validate that JupyterHub works the way we think it does,
and continues to do so when changes occur. They also help communicate
precisely what we expect our code to do.
JupyterHub uses `pytest <https://pytest.org>`_ for all our tests. You
can find them under the ``jupyterhub/tests`` directory in the git repository.
Running the tests
==================
#. Make sure you have completed :ref:`contributing/setup`. You should be able
to start ``jupyterhub`` from the command line and access it from your
web browser. This ensures that the dev environment is properly set
up for tests to run.
#. You can run all tests in JupyterHub
.. code-block:: bash
pytest -v jupyterhub/tests
This should display progress as it runs all the tests, printing
information about any test failures as they occur.
If you wish to confirm test coverage, run the tests with the ``--cov`` flag:
.. code-block:: bash
pytest -v --cov=jupyterhub jupyterhub/tests
#. You can also run tests in just a specific file:
.. code-block:: bash
pytest -v jupyterhub/tests/<test-file-name>
#. To run a specific test only, you can do:
.. code-block:: bash
pytest -v jupyterhub/tests/<test-file-name>::<test-name>
This runs the test with function name ``<test-name>`` defined in
``<test-file-name>``. This is very useful when you are iteratively
developing a single test.
For example, to run the test ``test_shutdown`` in the file ``test_api.py``,
you would run:
.. code-block:: bash
pytest -v jupyterhub/tests/test_api.py::test_shutdown
Troubleshooting Test Failures
=============================
All the tests are failing
-------------------------
Make sure you have completed all the steps in :ref:`contributing/setup` successfully, and
can launch ``jupyterhub`` from the terminal.

View File

@@ -1,5 +1,3 @@
(contributing:contributors)=
# Contributors
Project Jupyter thanks the following people for their help and
@@ -122,4 +120,3 @@ contribution on JupyterHub:
- yuvipanda
- zoltan-fedor
- zonca
- Neeraj Natu

View File

@@ -0,0 +1,50 @@
Eventlogging and Telemetry
==========================
JupyterHub can be configured to record structured events from a running server using Jupyter's `Telemetry System`_. The types of events that JupyterHub emits are defined by the `JSON schemas`_ listed below_; events are emitted as JSON data, defined and validated by those schemas.
.. _logging: https://docs.python.org/3/library/logging.html
.. _`Telemetry System`: https://github.com/jupyter/telemetry
.. _`JSON schemas`: https://json-schema.org/
How to emit events
------------------
Event logging is handled by JupyterHub's ``EventLog`` object. This leverages Python's standard logging_ library to emit, filter, and collect event data.
To begin recording events, you'll need to set two configurations:
1. ``handlers``: tells the EventLog *where* to route your events. This trait is a list of Python logging handlers that route events to their destinations.
2. ``allowed_schemas``: tells the EventLog *which* events should be recorded. No events are emitted by default; all recorded events must be listed here.
Here's a basic example:
.. code-block:: python
import logging
c.EventLog.handlers = [
logging.FileHandler('event.log'),
]
c.EventLog.allowed_schemas = [
'hub.jupyter.org/server-action'
]
The output is a file, ``"event.log"``, with events recorded as JSON data.
.. _below:
Event schemas
-------------
.. toctree::
:maxdepth: 2
server-actions.rst

View File

@@ -1,3 +1 @@
```{eval-rst}
.. jsonschema:: ../../../jupyterhub/event-schemas/server-actions/v1.yaml
```

View File

@@ -1,310 +0,0 @@
(explanation:capacity-planning)=
# Capacity planning
General capacity planning advice for JupyterHub is hard to give,
because it depends almost entirely on what your users are doing,
and what JupyterHub users do varies _wildly_ in terms of resource consumption.
**There is no single answer to "I have X users, what resources do I need?" or "How many users can I support with this machine?"**
Here are three _typical_ Jupyter use patterns that require vastly different resources:
- **Learning**: negligible resources because computation is mostly idle,
e.g. students learning programming for the first time
- **Production code**: very intense, sustained load, e.g. training machine learning models
- **Bursting**: _mostly_ idle, but needs a lot of resources for short periods of time
(interactive research often looks like this)
But just because there's no single answer doesn't mean we can't help.
So we have gathered here some useful information to help you make your decisions
about what resources you need based on how your users work,
including the relative invariants in terms of resources that JupyterHub itself needs.
## JupyterHub infrastructure
JupyterHub consists of a few components that are always running.
These take up very little resources,
especially relative to the resources consumed by users when you have more than a few.
As an example, an instance of mybinder.org (running JupyterHub 1.5.0),
running with typically ~100-150 users has:
| Component | CPU (mean/peak) | Memory (mean/peak) |
| --------- | --------------- | ------------------ |
| Hub | 4% / 13% | (230 MB / 260 MB) |
| Proxy | 6% / 13% | (47 MB / 65 MB) |
So it would be pretty generous to allocate ~25% of one CPU core
and ~500MB of RAM to overall JupyterHub infrastructure.
The rest is going to be up to your users.
Per-user overhead from JupyterHub is typically negligible
up to at least a few hundred concurrent active users.
```{figure} /images/mybinder-hub-components-cpu-memory.png
JupyterHub component resource usage for mybinder.org.
```
## Factors to consider
### Static vs elastic resources
A big factor in planning resources is:
**how much does it cost to change your mind?**
If you are using a single shared machine with local storage,
migrating to a new one because it turns out your users don't fit might be very costly.
You will have to get a new machine, set it up, and maybe even migrate user data.
On the other hand, if you are using ephemeral resources,
such as node pools in Kubernetes,
changing resource types costs close to nothing
because nodes can automatically be added or removed as needed.
Take that cost into account when you are picking how much memory or cpu to allocate to users.
Static resources (like [the-littlest-jupyterhub][]) provide for more **stable, predictable costs**,
but elastic resources (like [zero-to-jupyterhub][]) tend to provide **lower overall costs**
(especially when deployed with monitoring allowing cost optimizations over time),
though those costs are **less predictable**.
[the-littlest-jupyterhub]: https://the-littlest-jupyterhub.readthedocs.io
[zero-to-jupyterhub]: https://z2jh.jupyter.org
(limits-requests)=
### Limit vs Request for resources
Many scheduling tools like Kubernetes have two separate ways of allocating resources to users.
A **Request** or **Reservation** describes how much resources are _set aside_ for each user.
Often, this doesn't have any practical effect other than deciding when a given machine is considered 'full'.
If you are using expandable resources like an autoscaling Kubernetes cluster,
a new node must be launched and added to the pool if you 'request' more resources than fit on currently running nodes (a cluster **scale-up event**).
If you are running on a single VM, this describes how many users you can run at the same time, full stop.
A **Limit**, on the other hand, enforces a limit to how much resources any given user can consume.
For more information on what happens when users try to exceed their limits, see [](oversubscription).
In the strictest, safest case, you can have these two numbers be the same.
That means that each user is _limited_ to fit within the resources allocated to it.
This avoids **[oversubscription](oversubscription)** of resources (allowing use of more than you have available),
at the expense (in a literal, this-costs-money sense) of reserving lots of usually-idle capacity.
However, you often find that a small fraction of users use more resources than others.
In this case you may give users limits that _go beyond the amount of resources requested_.
This is called **oversubscribing** the resources available to users.
Having a gap between the request and the limit means you can fit a number of _typical_ users on a node (based on the request),
but still limit how much a runaway user can gobble up for themselves.
(oversubscription)=
### Oversubscribed CPU is okay, running out of memory is bad
An important consideration when assigning resources to users is: **What happens when users need more than I've given them?**
A good summary to keep in mind:
> When tasks don't get enough CPU, things are slow.
> When they don't get enough memory, things are broken.
This means it's **very important that users have enough memory**,
but much less important that they always have exclusive access to all the CPU they can use.
This relates to [Limits and Requests](limits-requests),
because these are the consequences of your limits and/or requests not matching what users actually try to use.
A table of mismatched resource allocation situations and their consequences:
| issue | consequence |
| -------------------------------------------------------- | ------------------------------------------------------------------------------------- |
| Requests too high | Unnecessarily high cost and/or low capacity. |
| CPU limit too low | Poor performance experienced by users |
| CPU oversubscribed (too-low request + too-high limit) | Poor performance across the system; may crash, if severe |
| Memory limit too low | Servers killed by Out-of-Memory Killer (OOM); lost work for users |
| Memory oversubscribed (too-low request + too-high limit) | System memory exhaustion - all kinds of hangs and crashes and weird errors. Very bad. |
Note that the 'oversubscribed' problem case is where the request is lower than _typical_ usage,
meaning that the total reserved resources isn't enough for the total _actual_ consumption.
This doesn't mean that _all_ your users exceed the request,
just that the _limit_ gives enough room for the _average_ user to exceed the request.
All of these considerations are important _per node_.
Larger nodes means more users per node, and therefore more users to average over.
It also means more chances for multiple outliers on the same node.
### Example case for oversubscribing memory
Take for example, this system and sampling of user behavior:
- System memory = 8G
- memory request = 1G, limit = 3G
- typical 'heavy' user: 2G
- typical 'light' user: 0.5G
This will assign 8 users to those 8G of RAM (remember: only requests are used for deciding when a machine is 'full').
As long as the total of 8 users _actual_ usage is under 8G, everything is fine.
But the _limit_ allows a total of 24G to be used,
which would be a mess if everyone used their full limit.
But _not_ everyone uses the full limit, which is the point!
This pattern is fine if 1/8 of your users are 'heavy' because _typical_ usage will be ~0.7G,
and your total usage will be ~5.5G (`1 × 2 + 7 × 0.5 = 5.5`).
But if _50%_ of your users are 'heavy' you have a problem because that means your users will be trying to use 10G (`4 × 2 + 4 × 0.5 = 10`),
which you don't have.
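A quick sanity check of this arithmetic (plain illustrative Python, not JupyterHub code):

```python
# 8 users fit on the 8G node because only the 1G *request* counts for packing
heavy_gb, light_gb = 2.0, 0.5
for n_heavy in (1, 4):
    n_light = 8 - n_heavy
    total = n_heavy * heavy_gb + n_light * light_gb
    print(f"{n_heavy} heavy + {n_light} light -> {total:.1f}G of 8G used")
# 1 heavy + 7 light -> 5.5G of 8G used   (fine)
# 4 heavy + 4 light -> 10.0G of 8G used  (exceeds the node!)
```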
You can make guesses at these numbers, but the only _real_ way to get them is to measure (see [](measuring)).
### CPU:memory ratio
Most of the time, you'll find that only one resource is the limiting factor for your users.
Most often it's memory, but for certain tasks, it could be CPU (or even GPUs).
Many cloud deployments have just one or a few fixed ratios of cpu to memory
(e.g. 'general purpose', 'high memory', and 'high cpu').
Setting your secondary resource allocation according to this ratio
after selecting the more important limit results in a balanced resource allocation.
For instance, some of Google Cloud's ratios are:
| node type | GB RAM / CPU core |
| ----------- | ----------------- |
| n2-highmem | 8 |
| n2-standard | 4 |
| n2-highcpu | 1 |
(idleness)=
### Idleness
Jupyter being an interactive tool means people tend to spend a lot more time reading and thinking than actually running resource-intensive code.
This significantly affects how much _cpu_ resources a typical active user needs,
but often does not significantly affect the _memory_.
Ways to think about this:
- More idle users means unused CPU.
This generally means setting your CPU _limit_ higher than your CPU _request_.
- What do your users do when they _are_ running code?
Is it typically single-threaded local computation in a notebook?
If so, there's little reason to set a limit higher than 1 CPU core.
- Do typical computations take a long time, or just a few seconds?
Longer typical computations means it's more likely for users to be trying to use the CPU at the same moment,
suggesting a higher _request_.
- Even with idle users, parallel computation adds up quickly - one user fully loading 4 cores and 3 using almost nothing still averages to more than a full CPU core per user.
- Long-running intense computations suggest higher requests.
Again, using mybinder.org as an example—we run around 100 users on 8-core nodes,
and still see fairly _low_ overall CPU usage on each user node.
The limit here is actually Kubernetes' pods per node, not memory _or_ CPU.
This is likely an extreme case, as many Binder users come from clicking links on webpages
without any actual intention of running code.
```{figure} /images/mybinder-load5.png
mybinder.org node CPU usage is low with 50-150 users sharing just 8 cores
```
### Concurrent users and culling idle servers
Related to [](idleness), all of these resource consumptions and limits are calculated based on **concurrently active users**,
not total users.
You might have 10,000 users of your JupyterHub deployment, but only 100 of them running at any given time.
That 100 is the main number you need to use for your capacity planning.
JupyterHub costs scale very little based on the number of _total_ users,
up to a point.
There are two important definitions for **active user**:
- Are they _actually_ there (i.e. a human interacting with Jupyter, or code that might be left running unattended)?
- Is their server running (this is where resource reservations and limits are actually applied)
Connecting those two definitions (how long are servers running if their humans aren't using them) is an important area of deployment configuration, usually implemented via the [JupyterHub idle culler service][idle-culler].
[idle-culler]: https://github.com/jupyterhub/jupyterhub-idle-culler
There are a lot of considerations when it comes to culling idle users, and the right choices will depend on your answers to questions like:
- How much does it save me to shut down user servers? (e.g. keeping an elastic cluster small, or keeping a fixed-size deployment available to active users)
- How much does it cost my users to have their servers shut down? (e.g. lost work if shutdown prematurely)
- How easy do I want it to be for users to keep their servers running? (e.g. Do they want to run unattended simulations overnight? Do you want them to?)
Like many other things in this guide, there are many correct answers leading to different configuration choices.
For more detail on culling configuration and considerations, consult the [JupyterHub idle culler documentation][idle-culler].
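As a concrete starting point, here is a minimal sketch of running the idle culler as a hub-managed service; the one-hour timeout is illustrative, and newer JupyterHub versions also require granting the service appropriate permissions (see the idle culler documentation for details):

```python
# jupyterhub_config.py -- a sketch, not a complete configuration
c.JupyterHub.services = [
    {
        "name": "idle-culler",
        "command": [
            "python3",
            "-m",
            "jupyterhub_idle_culler",
            "--timeout=3600",  # shut down servers idle for more than an hour
        ],
    }
]
```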
## More tips
### Start strict and generous, then measure
A good tip, in general, is to give your users as many resources as you can afford that you think they _might_ use.
Then, use resource usage metrics like prometheus to analyze what your users _actually_ need,
and tune accordingly.
Remember: **Limits affect your user experience and stability. Requests mostly affect your costs**.
For example, a sensible starting point (lacking any other information) might be:
```yaml
request:
cpu: 0.5
mem: 2G
limit:
cpu: 1
mem: 2G
```
(more memory if significant computations are likely - machine learning models, data analysis, etc.)
Some actions to consider:
- If you see out-of-memory killer events, increase the limit (or talk to your users!)
- If you see typical memory well below your limit, reduce the request (but not the limit)
- If _nobody_ uses that much memory, reduce your limit
- If CPU is your limiting scheduling factor and your CPUs are mostly idle,
reduce the cpu request (maybe even to 0!).
- If CPU usage continues to be low, increase the limit to 2 or 4 to allow bursts of parallel execution.
(measuring)=
### Measuring user resource consumption
It is _highly_ recommended to deploy monitoring services such as [Prometheus][]
and [Grafana][] to get a view of your users' resource usage.
This is the only way to truly know what your users need.
JupyterHub has some experimental [grafana dashboards][] you can use as a starting point,
to keep an eye on your resource usage.
Here are some sample charts (again from mybinder.org),
showing >90% of users using less than 10% CPU and 200MB,
but a few outliers near the limit of 1 CPU and 2GB of RAM.
This is the kind of information you can use to tune your requests and limits.
![Snapshot from JupyterHub's Grafana dashboards on mybinder.org](/images/mybinder-user-resources.png)
[prometheus]: https://prometheus.io
[grafana]: https://grafana.com
[grafana dashboards]: https://github.com/jupyterhub/grafana-dashboards
### Measuring costs
Measuring costs may be as important as measuring your users' activity.
If you are using a cloud provider, you can often use cost thresholds and quotas to instruct them to notify you if your costs are too high,
e.g. "Have AWS send me an email if I hit X spending trajectory on week 3 of the month."
You can then use this information to tune your resources based on what you can afford.
You can mix this information with user resource consumption to figure out if you have a problem,
e.g. "my users really do need X resources, but I can only afford to give them 80% of X."
This information may prove useful when asking your budget-approving folks for more funds.
### Additional resources
There are lots of other resources for cost and capacity planning that may be specific to JupyterHub and/or your cloud provider.
Here are some useful links to other resources:
- [Zero to JupyterHub](https://z2jh.jupyter.org) documentation on
- [projecting costs](https://z2jh.jupyter.org/en/latest/administrator/cost.html)
- [configuring user resources](https://z2jh.jupyter.org/en/latest/jupyterhub/customizing/user-resources.html)
- Cloud platform cost calculators:
- [Google Cloud](https://cloud.google.com/products/calculator/)
- [Amazon AWS](https://calculator.aws)
- [Microsoft Azure](https://azure.microsoft.com/en-us/pricing/calculator/)

View File

@@ -1,430 +0,0 @@
(explanation:concepts)=
# JupyterHub: A conceptual overview
```{warning}
This page could be missing cross-links to other parts of
the documentation. You can help by adding them!
```
JupyterHub is not what you think it is. Most things you think are
part of JupyterHub are actually handled by some other component, for
example the spawner or notebook server itself, and it's not always
obvious how the parts relate. The knowledge contained here hasn't
been assembled in one place before, and is essential to understand
when setting up a sufficiently complex Jupyter(Hub) setup.
This document was originally written to assist in debugging: very
often, the actual problem is not where one thinks it is and thus
people can't easily debug. In order to tell this story, we start at
JupyterHub and go all the way down to the fundamental components of
Jupyter.
In this document, we occasionally leave things out or bend the truth
where it helps in explanation, and give our explanations in terms of
Python even though Jupyter itself is language-neutral. The "(&)"
symbol highlights important points where this page leaves out or bends
the truth for simplification of explanation, but there is more if you
dig deeper.
This guide is long, but after reading it you will know all the major
components in the Jupyter ecosystem, and everything else you read
should make sense.
## What is Jupyter?
Before we get too far, let's remember what our end goal is. A
**Jupyter Notebook** is nothing more than a Python(&) process
which is getting commands from a web browser and displaying the output
via that browser. What the process actually sees is roughly like
getting commands on standard input(&) and writing to standard
output(&). There is nothing intrinsically special about this process
- it can do anything a normal Python process can do, and nothing more.
The **Jupyter kernel** handles capturing output and converting things
such as graphics to a form usable by the browser.
Everything we explain below is building up to this, going through many
different layers which give you many ways of customizing how this
process runs.
## JupyterHub
**JupyterHub** is the central piece that provides multi-user
login capabilities. Despite this, the end user only briefly interacts with
JupyterHub and most of the actual Jupyter session does not relate to
the hub at all: the hub mainly handles authentication and creating (JupyterHub calls it "spawning") the
single-user server. In short, anything which is related to _starting_
the user's workspace/environment is about JupyterHub, anything about
_running_ usually isn't.
If you have problems connecting authentication, spawning, and the
proxy (explained below), the issue is usually with JupyterHub. To
debug, JupyterHub has extensive logs which get printed to its console
and can be used to discover most problems.
The main pieces of JupyterHub are:
### Authenticator
JupyterHub itself doesn't actually manage your users. It has a
database of users, but it is usually connected with some other system
that manages the usernames and passwords. When someone tries to log
in to JupyterHub, it asks the
**authenticator**([basics](authenticators),
[reference](../reference/authenticators)) if the
username/password is valid(&). The authenticator returns a username(&),
which is passed on to the spawner, which has to use it to start that
user's environment. The authenticator can also return user
groups and admin status of users, so that JupyterHub can do some
higher-level management.
The following authenticators are included with JupyterHub:
- **PAMAuthenticator** uses the standard Unix/Linux operating system
functions to check users. Roughly, if someone already has access to
the machine (they can log in by ssh), they will be able to log in to
JupyterHub without any other setup. Thus, JupyterHub fills the role
of an ssh server, but provides a web-browser based way to access the
machine.
There are [plenty of others to choose from](authenticators-reference).
You can connect to almost any other existing service to manage your
users. You either use all users from this other service (e.g. your
company), or enable only the allowed users (e.g. your group's
Github usernames). Some other popular authenticators include:
- **OAuthenticator** uses the standard OAuth protocol to verify users.
For example, you can easily use Github to authenticate your users -
people have a "click to login with Github" button. This is often
done with an allowlist to only allow certain users.
- **NativeAuthenticator** actually stores and validates its own
usernames and passwords, unlike most other authenticators. Thus,
you can manage all your users within JupyterHub only.
- There are authenticators for LTI (learning management systems),
Shibboleth, Kerberos - and so on.
The authenticator is configured with the
`c.JupyterHub.authenticator_class` configuration option in the
`jupyterhub_config.py` file.
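As a sketch of what that looks like (the usernames are placeholders; `PAMAuthenticator` is the default):

```python
# jupyterhub_config.py -- a minimal sketch
c.JupyterHub.authenticator_class = "jupyterhub.auth.PAMAuthenticator"

# most authenticators share common options, e.g. restricting who may log in:
c.Authenticator.allowed_users = {"alice", "bob"}
c.Authenticator.admin_users = {"alice"}
```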
The authenticator runs internally to the Hub process but communicates
with outside services.
If you have trouble logging in, this is usually a problem of the
authenticator. The authenticator logs are part of the JupyterHub
logs, but there may also be relevant information in whatever external
services you are using.
### Spawner
The **spawner** ([basics](spawners),
[reference](../reference/spawners)) is the real core of
JupyterHub: when someone wants a notebook server, the spawner allocates
resources and starts the server. The notebook server could run on the
same machine as JupyterHub, on another machine, on some cloud service,
or more. Administrators can limit resources (CPU, memory) or isolate users
from each other - if the spawner supports it. They can also do no
limiting and allow any user to access any other user's files if they
are not configured properly.
Some basic spawners included in JupyterHub are:
- **LocalProcessSpawner** is built into JupyterHub. Upon launch it tries
to switch users to the given username (`su` (&)) and start the
notebook server. It requires that the hub be run as root (because
only root has permission to start processes as other user IDs).
LocalProcessSpawner is no different than a user logging in with
something like `ssh` and running `jupyter notebook`. PAMAuthenticator and
LocalProcessSpawner is the most basic way of using JupyterHub (and
what it does out of the box) and makes the hub not too dissimilar to
an advanced ssh server.
There are [many more advanced spawners](/reference/spawners), and to
show the diversity of spawning strategies some are listed below:
- **SudoSpawner** is like LocalProcessSpawner but lets you run
JupyterHub without root. `sudo` has to be configured to allow the
hub's user to run processes under other user IDs.
- **SystemdSpawner** uses Systemd to start other processes. It can
isolate users from each other and provide resource limiting.
- **DockerSpawner** runs stuff in Docker, a containerization system.
This lets you fully isolate users, limit CPU, memory, and provide
other container images to fully customize the environment.
- **KubeSpawner** runs on Kubernetes, a cloud orchestration
system. The spawner can easily limit users and provide cloud
scaling - but the spawner doesn't actually do that, Kubernetes
does. The spawner just tells Kubernetes what to do. If you want to
get KubeSpawner to do something, first you would figure out how to
do it in Kubernetes, then figure out how to tell KubeSpawner to tell
Kubernetes that. Actually... this is true for most spawners.
- **BatchSpawner** runs on computer clusters with batch job scheduling
systems (e.g. Slurm, HTCondor, PBS, etc). The user processes are run
as batch jobs, having access to all the data and software that the
users normally will.
In short, spawners are the interface to the rest of the operating
system, and to configure them right you need to know a bit about how
the corresponding operating system service works.
The spawner is responsible for the environment of the single-user
notebook servers (described in the next section). In the end, it just
makes a choice about how to start these processes: for example, the
Docker spawner starts a normal Docker container and runs the right
command inside of it. Thus, the spawner is responsible for setting
what kind of software and data is available to the user.
The spawner runs internally to the Hub process but communicates with
outside services. It is configured by `c.JupyterHub.spawner_class` in
`jupyterhub_config.py`.
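A sketch of configuring a non-default spawner; DockerSpawner here is just an example and must be installed separately, and whether the limits are enforced depends on the particular spawner:

```python
# jupyterhub_config.py -- a minimal sketch
c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"

# generic Spawner options; enforcement is up to the particular spawner
c.Spawner.mem_limit = "2G"
c.Spawner.cpu_limit = 1.0
```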
If a user tries to launch a notebook server and it doesn't work, the
error is usually with the spawner or the notebook server (as described
in the next section). Each spawner outputs some logs to the main
JupyterHub logs, but may also have logs in other places depending on
what services it interacts with (for example, the Docker spawner's
logs end up in the Docker system services, and Kubernetes logs are
accessed through the `kubectl` API).
### Proxy
The JupyterHub **proxy** relays connections between the users
and their single-user notebook servers. What this basically means is
that the hub itself can shut down and the proxy can continue to
allow users to communicate with their notebook servers. (This
further emphasizes that the hub is responsible for starting, not
running, the notebooks). By default, the hub starts the proxy
automatically
and stops the proxy when the hub stops (so that connections get
interrupted). But when you [configure the proxy to run
separately](howto:separate-proxy),
user's connections will continue to work even without the hub.
The default proxy is **ConfigurableHttpProxy** which is simple but
effective. A more advanced option is the [**Traefik Proxy**](https://blog.jupyter.org/introducing-traefikproxy-a-new-jupyterhub-proxy-based-on-traefik-4839e972faf6),
which gives you redundancy and high-availability.
When users "connect to JupyterHub", they _always_ first connect to the
proxy and the proxy relays the connection to the hub. Thus, the proxy
is responsible for SSL and accepting connections from the rest of the
internet. The user uses the hub to authenticate and start the server,
and then the hub connects back to the proxy to adjust the proxy routes
for the user's server (e.g. the web path `/user/someone` redirects to
the server of someone at a certain internal address). The proxy has
to be able to internally connect to both the hub and all the
single-user servers.
The proxy always runs as a separate process to JupyterHub (even though
JupyterHub can start it for you). JupyterHub has one set of
configuration options for the proxy addresses (`bind_url`) and one for
the hub (`hub_bind_url`). If `bind_url` is given, it is just passed to
the automatic proxy to tell it what to do.
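A sketch of these two addresses in configuration (the values shown are common defaults, not recommendations):

```python
# jupyterhub_config.py -- a minimal sketch
c.JupyterHub.bind_url = "http://0.0.0.0:8000"        # public address, served by the proxy
c.JupyterHub.hub_bind_url = "http://127.0.0.1:8081"  # where the hub process itself listens
```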
If you have problems after users are redirected to their single-user
notebook servers, or making the first connection to the hub, it is
usually caused by the proxy. The ConfigurableHttpProxy's logs are
mixed with JupyterHub's logs if it's started through the hub (the
default case), otherwise from whatever system runs the proxy (if you
do configure it, you'll know).
### Services
JupyterHub has the concept of **services** ([basics](tutorial:services),
[reference](services-reference)), which are other web services
started by the hub, but otherwise are not necessarily related to the
hub itself. They are often used to do things related to Jupyter
(things that the user interacts with, usually not the hub), but could
always be run some other way. Running from the hub provides an easy
way to get Hub API tokens and authenticate users against the hub. It
can also automatically add a proxy route to forward web requests to
that service.
A common example of a service is the [cull idle
servers](https://github.com/jupyterhub/jupyterhub-idle-culler)
service. When started by the hub, it automatically gets admin API
tokens. It uses the API to list all running servers, compare against
activity timeouts, and shut down servers exceeding the limits. Even
though this is an intrinsic part of JupyterHub, it is only loosely
coupled, and running as a service provides the convenience of
authentication - it could just as well be run some other way, with a
manually provided API token.
The configuration option `c.JupyterHub.services` is used to start
services from the hub.
When a service is started from JupyterHub automatically, its logs are
included in the JupyterHub logs.
## Single-user notebook server
The **single-user notebook server** is the same thing you get by
running `jupyter notebook` or `jupyter lab` from the command line -
the actual Jupyter user interface for a single person.
The role of the spawner is to start this server - basically, running
the command `jupyter notebook`. Actually it doesn't run that, it runs
`jupyterhub-singleuser` which first communicates with the hub to say
"I'm alive" before running a completely normal Jupyter server. The
single-user server can be JupyterLab or classic notebooks. By this
point, the hub is almost completely out of the picture (the web
traffic is going through the proxy unchanged). Also by this time, the
spawner has already decided the environment which this single-user
server will have and the single-user server has to deal with that.
The spawner starts the server using `jupyterhub-singleuser` with some
environment variables like `JUPYTERHUB_API_TOKEN` and
`JUPYTERHUB_BASE_URL` which tell the single-user server how to connect
back to the hub in order to say that it's ready.
The single-user server options are **JupyterLab** and **classic
Jupyter Notebook**. They both run through the same backend server process;
the web frontend is an option chosen when it starts. The spawner can choose the
command line when it starts the single-user server. Extensions are a
property of the single-user server (in two parts: there can be a part
that runs in the Python server process, and parts that run in
javascript in lab or notebook).
If one wants to install software for users, it is not a matter of
"installing it for JupyterHub" - it's a matter of installing it for the
single-user server, which might be the same environment as the hub,
but not necessarily. (see below - it's a matter of the kernels!)
After the single-user notebook server is started, any errors are only
an issue of the single-user notebook server. Sometimes, it seems like
the spawner is failing, but really the spawner is working and the
single-user notebook server dies right away (in this case, you need to
find the problem with the single-user server and adjust the spawner to
start it correctly or fix the environment). This can happen, for
example, if the spawner doesn't set an environment variable or doesn't
provide storage.
The single-user server's logs are printed to stdout/stderr, and the
spawner decides where those streams are directed, so if you
notice problems at this phase you need to check your spawner for
instructions for accessing the single-user logs. For example, the
LocalProcessSpawner logs are simply written to the same JupyterHub
output logs, the SystemdSpawner logs are
written to the Systemd journal, Docker and Kubernetes logs are written
to Docker and Kubernetes respectively, and batchspawner output goes to
the normal output places of batch jobs and is an explicit
configuration option of the spawner.
**(Jupyter) Notebook** is the classic interface, where each notebook
opens in a separate tab. It is traditionally started with `jupyter
notebook`.
**JupyterLab** is the new interface, where multiple notebooks are
openable in the same tab in an IDE-like environment. It is
traditionally started with `jupyter lab`. Both Notebook and Lab use
the same `.ipynb` file format.
JupyterLab is run through the same server file, but at a path `/lab`
instead of `/tree`. Thus, they can be active at the same time in the
backend and you can switch between them at runtime by changing your
URL path.
Extensions need to be re-written for JupyterLab (if moving from
classic notebooks), but the server-side parts of extensions can be
shared by both.
## Kernel
The commands you run in the notebook session are not executed in the same process as
the notebook itself, but in a separate **Jupyter kernel**. There are [many
kernels
available](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels).
As a basic approximation, a **Jupyter kernel** is a process which
accepts commands (cells that are run) and returns the output to
Jupyter to display. One example is the **IPython Jupyter kernel**,
which runs Python. There is nothing special about it; it can be
considered a normal Python process. The kernel process can be
approximated in UNIX terms as a process that takes commands on stdin
and returns stuff on stdout(&). Obviously, it's more because it has
to be able to disentangle all the possible outputs, such as figures,
and present them to the user in a web browser.
Kernel communication is via the ZeroMQ protocol on the local
computer. Kernels are separate processes from the main single-user
notebook server (and thus obviously, different from the JupyterHub
process and everything else). By default (and unless you do something
special), kernels share the same environment as the notebook server
(data, resource limits, permissions, user id, etc.). But they _can_
run in a separate Python environment from the single-user server
(search `--prefix` in the [ipykernel installation
instructions](https://ipython.readthedocs.io/en/stable/install/kernel_install.html)).
There are also more fancy techniques such as the [Jupyter Kernel
Gateway](https://jupyter-kernel-gateway.readthedocs.io/) and [Enterprise
Gateway](https://jupyter-enterprise-gateway.readthedocs.io/), which
allow you to run the kernels on a different machine and possibly with
a different environment.
A kernel doesn't just execute its language - magics such as `%`,
`%%`, and `!` are a property of the kernel - in particular, these are
IPython kernel commands and don't necessarily work in any other
kernel unless they specifically support them.
Kernels are yet _another_ layer of configurability.
Each kernel can run a different programming language, with different
software, and so on. By default, they would run in the same
environment as the single-user notebook server, and the most common
other way they are configured is by
running in different Python virtual environments or conda
environments. They can be started and killed independently (there is
normally one per notebook you have open). The kernel uses
most of your memory and CPU when running Jupyter - the rest of the web
interface has a small footprint.
You can list your installed kernels with `jupyter kernelspec list`.
If you look at one of the `kernel.json` files in those directories, you
will see exactly what command is run. These are normally
automatically made by the kernels, but can be edited as needed. [The
spec](https://jupyter-client.readthedocs.io/en/stable/kernels.html)
tells you even more.
The kernel normally has to be reachable by the single-user notebook server
but the gateways mentioned above can get around that limitation.
If you get problems with "Kernel died" or some other error in a single
notebook but the single-user notebook server stays working, it is
usually a problem with the kernel. It could be that you are trying to
use more resources than you are allowed and the symptom is the kernel
getting killed. It could be that it crashes for some other reason.
In these cases, you need to find the kernel logs and investigate.
The debug logs for the kernel are normally mixed in with the
single-user notebook server logs.
## JupyterHub distributions
There are several "distributions" which automatically install all of
the things above and configure them for a certain purpose. They are
good ways to get started, but if you have custom needs, eventually it
may become hard to adapt them to your requirements.
- [**Zero to JupyterHub with
Kubernetes**](https://zero-to-jupyterhub.readthedocs.io/) installs
an entire scaleable system using Kubernetes. Uses KubeSpawner,
....Authenticator, ....
- [**The Littlest JupyterHub**](https://tljh.jupyter.org/) installs JupyterHub on a single system
using SystemdSpawner and NativeAuthenticator (which manages users
itself).
- [**JupyterHub the hard way**](https://github.com/jupyterhub/jupyterhub-the-hard-way/blob/master/docs/installation-guide-hard.md)
takes you through everything yourself. It is a natural companion to
this guide, since you get to experience every little bit.
## What's next?
Now you know everything. Well, you know how everything relates, but
there are still plenty of details, implementations, and exceptions.
When setting up JupyterHub, the first step is to consider the above
layers, decide the right option for each of them, then begin putting
everything together.

View File

@@ -1,186 +0,0 @@
(explanation:hub-database)=
# The Hub's Database
JupyterHub uses a database to store information about users, services, and other data needed for operating the Hub.
This is the **state** of the Hub.
## Why does JupyterHub have a database?
JupyterHub is a **stateful** application (more on that 'state' later).
Updating JupyterHub's configuration or upgrading the version of JupyterHub requires restarting the JupyterHub process to apply the changes.
We want to minimize the disruption caused by restarting the Hub process, so it can be a mundane, frequent, routine activity.
Storing state information outside the process for later retrieval is necessary for this, and one of the main things databases are for.
A lot of the operations in JupyterHub are also **relationships**, which is exactly what SQL databases are great at.
For example:
- Given an API token, what user is making the request?
- Which users don't have running servers?
- Which servers belong to user X?
- Which users have not been active in the last 24 hours?
Finally, a database allows us to have more information stored without needing it all loaded in memory,
e.g. supporting a large number (several thousands) of inactive users.
## What's in the database?
The short answer of what's in the JupyterHub database is "everything."
JupyterHub's **state** lives in the database.
That is, everything JupyterHub needs to be aware of to function that _doesn't_ come from the configuration files, such as:
- users, roles, role assignments
- state, urls of running servers
- Hashed API tokens
- Short-lived state related to OAuth flow
- Timestamps for when users, tokens, and servers were last used
### What's _not_ in the database
Not _quite_ all of JupyterHub's state is in the database.
This mostly involves transient state, such as the 'pending' transitions of Spawners (starting, stopping, etc.).
Anything not in the database must be reconstructed on Hub restart, and the only sources of information to do that are the database and JupyterHub configuration file(s).
## How does JupyterHub use the database?
JupyterHub makes some _unusual_ choices in how it connects to the database.
These choices represent trade-offs favoring single-process simplicity and performance at the expense of horizontal scalability (multiple Hub instances).
We often say that the Hub 'owns' the database.
This ownership means that we assume the Hub is the only process that will talk to the database.
This assumption enables us to make several caching optimizations that dramatically improve JupyterHub's performance (i.e. data written recently to the database can be read from memory instead of fetched again from the database) that would not work if multiple processes could be interacting with the database at the same time.
Database operations are also synchronous, so while JupyterHub is waiting on a database operation, it cannot respond to other requests.
This allows us to avoid complex locking mechanisms, because transaction races can only occur during an `await`, so we only need to make sure we've completed any given transaction before the next `await` in a given request.
:::{note}
We are slowly working to remove these assumptions, and moving to a more traditional db session per-request pattern.
This will enable multiple Hub instances and enable scaling JupyterHub, but will significantly reduce the number of active users a single Hub instance can serve.
:::
### Database performance in a typical request
Most authenticated requests to JupyterHub involve a few database transactions:
1. look up the authenticated user (e.g. look up token by hash, then resolve owner and permissions)
2. record activity
3. perform any relevant changes involved in processing the request (e.g. create the records for a running server when starting one)
This means that the database is involved in almost every request, but only in quite small, simple queries, e.g.:
- lookup one token by hash
- lookup one user by name
- list tokens or servers for one user (typically 1-10)
- etc.
### The database as a limiting factor
As a result of the above transactions in most requests, database performance is the _leading_ factor in JupyterHub's baseline requests-per-second performance, but that cost does not scale significantly with the number of users, active or otherwise.
However, the database is _rarely_ a limiting factor in JupyterHub performance in a practical sense, because the main thing JupyterHub does is start, stop, and monitor whole servers, which take far more time than any small database transaction, no matter how many records you have or how slow your database is (within reason).
Additionally, there is usually _very_ little load on the database itself.
By far the most taxing activity on the database is the 'list all users' endpoint, primarily used by the [idle-culling service](https://github.com/jupyterhub/jupyterhub-idle-culler).
Database-based optimizations have been added to make even these operations feasible for large numbers of users:
1. State filtering on [GET /hub/api/users?state=active](rest-api-get-users),
which limits the number of results in the query to only the relevant subset (added in JupyterHub 1.3), rather than all users.
2. [Pagination](api-pagination) of all list endpoints, allowing the request of a large number of resources to be more fairly balanced with other Hub activities across multiple requests (added in 2.0).
:::{note}
It's important to note, when discussing performance and limiting factors, that all of this only applies to requests to `/hub/...`.
The Hub and its database are not involved in most requests to single-user servers (`/user/...`), which is by design, and largely motivated by the fact that the Hub itself doesn't _need_ to be fast because its operations are infrequent and large.
:::
## Database backends
JupyterHub supports a variety of database backends via [SQLAlchemy][].
The default is sqlite, which works great for many cases, but you should be able to use many backends supported by SQLAlchemy.
Usually, this will mean PostgreSQL or MySQL, both of which are officially supported and well tested with JupyterHub, but others may work as well.
See [SQLAlchemy's docs][sqlalchemy-dialect] for how to connect to different database backends.
Doing so generally involves:
1. installing a Python package that provides a client implementation, and
2. setting [](JupyterHub.db_url) to connect to your database with the specified implementation
[sqlalchemy-dialect]: https://docs.sqlalchemy.org/en/20/dialects/
[sqlalchemy]: https://www.sqlalchemy.org
### Default backend: SQLite
The default database backend for JupyterHub is [SQLite](https://sqlite.org).
We have chosen SQLite as JupyterHub's default because it's simple (the 'database' is a single file), ubiquitous (it is in the Python standard library), and it does not require maintaining a separate database server.
The main disadvantage of SQLite is it does not support remote backup tools or replication.
You should backup your database by taking snapshots of the file (`jupyterhub.sqlite`).
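For example, here is a minimal sketch of taking such a snapshot with Python's standard library (best done while the Hub is stopped or quiescent; the backup filename is arbitrary):

```python
import sqlite3

# copy the live database into a snapshot file using SQLite's backup API
src = sqlite3.connect("jupyterhub.sqlite")
dst = sqlite3.connect("jupyterhub-backup.sqlite")
with dst:
    src.backup(dst)
dst.close()
src.close()
```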
SQLite is ideal for testing, small deployments, workshops, and production servers where you do not require remote backup or replication.
### Picking your database backend (PostgreSQL, MySQL)
The sqlite documentation provides a helpful page about [when to use SQLite and
where traditional RDBMS may be a better choice](https://sqlite.org/whentouse.html).
In general, you select your database backend with [](JupyterHub.db_url), and can further configure it (usually not necessary) with [](JupyterHub.db_kwargs).
## Notes and Tips
### Upgrading the JupyterHub database
[Upgrading JupyterHub to a new major release](howto:upgrading-jupyterhub) often requires an upgrade to the database schema.
- `jupyterhub upgrade-db` will execute a schema upgrade. You should backup your database before running this.
- `jupyterhub downgrade-db` may be able to revert a schema upgrade on PostgreSQL and MySQL, but this is not guaranteed to work, and is not supported.
### SQLite
The SQLite database should not be used on NFS. SQLite uses reader/writer locks
to control access to the database. This locking mechanism might not work
correctly if the database file is kept on an NFS filesystem. This is because
`fcntl()` file locking is broken on many NFS implementations. Therefore, you
should avoid putting SQLite database files on NFS, since NFS will not reliably handle
multiple processes trying to access the file at the same time.
### PostgreSQL
We recommend using PostgreSQL for production if you are unsure whether to use
MySQL or PostgreSQL or if you do not have a strong preference.
There is additional configuration required for MySQL that is not needed for PostgreSQL.
For example, to connect to a PostgreSQL database with psycopg2:
1. install psycopg2: `pip install psycopg2` (or `psycopg2-binary` to avoid compilation, which is [not recommended for production][psycopg2-binary])
2. set authentication via environment variables `PGUSER` and `PGPASSWORD`
3. configure [](JupyterHub.db_url):
```python
c.JupyterHub.db_url = "postgresql+psycopg2://my-postgres-server:5432/my-db-name"
```
[psycopg2-binary]: https://www.psycopg.org/docs/install.html#psycopg-vs-psycopg-binary
### MySQL / MariaDB
- You should probably use the `pymysql` or `mysqlclient` sqlalchemy provider, or another backend [recommended by sqlalchemy](https://docs.sqlalchemy.org/en/20/dialects/mysql.html#dialect-mysql)
- You also need to set `pool_recycle` to some value (typically 60-300; JupyterHub
  defaults to 60), which depends on your MySQL setup. This is necessary since MySQL
  kills connections server-side if they've been idle for a while, and the connection
  from the hub will be idle for longer than most connections. This behavior
  will lead to frustrating 'the connection has gone away' errors from
  sqlalchemy if `pool_recycle` is not set (a sketch of setting it follows the example below).
- If you use `utf8mb4` collation with MySQL earlier than 5.7.7 or MariaDB
earlier than 10.2.1 you may get an `1709, Index column size too large` error.
To fix this you need to set `innodb_large_prefix` to enabled and
`innodb_file_format` to `Barracuda` to allow for the index sizes jupyterhub
uses. `row_format` will be set to `DYNAMIC` as long as those options are set
correctly. Later versions of MariaDB and MySQL should set these values by
default, as well as have a default `DYNAMIC` `row_format` and pose no trouble
to users.
For example, to connect to a mysql database with mysqlclient:
1. install mysqlclient: `pip install mysqlclient`
2. configure [](JupyterHub.db_url):
```python
c.JupyterHub.db_url = "mysql+mysqldb://myuser:mypassword@my-sql-server:3306/my-db-name"
```
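To adjust the `pool_recycle` value mentioned above, a sketch (assuming, as [](JupyterHub.db_kwargs) documents, that these keyword arguments are passed through to SQLAlchemy's `create_engine`):

```python
# jupyterhub_config.py -- a sketch; tune the value to your MySQL idle timeout
c.JupyterHub.db_kwargs = {"pool_recycle": 300}  # seconds
```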

View File

@@ -1,17 +0,0 @@
(explanation)=
# Explanation
_Explanation_ documentation provides big-picture descriptions of how JupyterHub works. This section is meant to build your understanding of particular topics.
```{toctree}
:maxdepth: 1
concepts
capacity-planning
database
websecurity
oauth
singleuser
../rbac/index
```

View File

@@ -1,375 +0,0 @@
(explanation:hub-oauth)=
# JupyterHub and OAuth
JupyterHub uses [OAuth 2](https://oauth.net/2/) as an internal mechanism for authenticating users.
As such, JupyterHub itself always functions as an OAuth **provider**.
You can find out more about what that means [below](oauth-terms).
Additionally, JupyterHub is _often_ deployed with [OAuthenticator](https://oauthenticator.readthedocs.io),
where an external identity provider, such as GitHub or KeyCloak, is used to authenticate users.
When this is the case, there are _two_ nested OAuth flows:
an _internal_ OAuth flow where JupyterHub is the **provider**,
and an _external_ OAuth flow, where JupyterHub is the **client**.
This means that when you are using JupyterHub, there is always _at least one_ and often two layers of OAuth involved in a user logging in and accessing their server.
The following points are noteworthy:
- Single-user servers _never_ need to communicate with or be aware of the upstream provider configured in your Authenticator.
As far as the servers are concerned, only JupyterHub is an OAuth provider,
and how users authenticate with the Hub itself is irrelevant.
- When interacting with a single-user server,
there are almost always two tokens:
first, a token issued to the server itself to communicate with the Hub API,
and second, a per-user token in the browser to represent the completed login process and authorized permissions.
More on this [later](two-tokens).
(oauth-terms)=
## Key OAuth terms
Here are some key definitions to keep in mind when we are talking about OAuth.
You can also read more in detail [here](https://www.oauth.com/oauth2-servers/definitions/).
- **provider**: The entity responsible for managing identity and authorization;
always a web server.
JupyterHub is _always_ an OAuth provider for JupyterHub's components.
When OAuthenticator is used, an external service, such as GitHub or KeyCloak, is also an OAuth provider.
- **client**: An entity that requests OAuth **tokens** on a user's behalf;
generally a web server of some kind.
OAuth **clients** are services that _delegate_ authentication and/or authorization
to an OAuth **provider**.
JupyterHub _services_ or single-user _servers_ are OAuth **clients** of the JupyterHub **provider**.
When OAuthenticator is used, JupyterHub is itself _also_ an OAuth **client** for the external OAuth **provider**, e.g. GitHub.
- **browser**: A user's web browser, which makes requests and stores things like cookies.
- **token**: The secret value used to represent a user's authorization. This is the final product of the OAuth process.
- **code**: A short-lived temporary secret that the **client** exchanges
for a **token** at the conclusion of OAuth,
in what's generally called the "OAuth callback handler."
## One oauth flow
OAuth **flow** is what we call the sequence of HTTP requests involved in authenticating a user and issuing a token, ultimately used for authorizing access to a service or single-user server.
A single OAuth flow typically goes like this:
### OAuth request and redirect
1. A **browser** makes an HTTP request to an OAuth **client**.
2. There are no credentials, so the client _redirects_ the browser to an "authorize" page on the OAuth **provider** with some extra information:
- the OAuth **client ID** of the client itself.
- the **redirect URI** to be redirected back to after completion.
- the **scopes** requested, which the user should be presented with to confirm.
This is the "X would like to be able to Y on your behalf. Allow this?" page you see on all the "Login with ..." pages around the Internet.
3. During this authorize step,
the browser must be _authenticated_ with the provider.
This is often already stored in a cookie,
but if not, the provider webapp must begin its _own_ authentication process before serving the authorization page.
This _may_ even begin another OAuth flow!
4. After the user tells the provider that they want to proceed with the authorization,
the provider records this authorization in a short-lived record called an **OAuth code**.
5. Finally, the oauth provider redirects the browser _back_ to the oauth client's "redirect URI"
(or "OAuth callback URI"),
with the OAuth code in a URL parameter.
That marks the end of the requests made between the **browser** and the **provider**.
### State after redirect
At this point:
- The browser is authenticated with the _provider_.
- The user's authorized permissions are recorded in an _OAuth code_.
- The _provider_ knows that the permissions requested by the OAuth client have been granted, but the client doesn't know this yet.
- All the requests so far have been made directly by the browser.
No requests have originated from the client or provider.
### OAuth Client Handles Callback Request
At this stage, we get to finish the OAuth process.
Let's dig into what the OAuth client does when it handles
the OAuth callback request.
- The OAuth client receives the _code_ and makes an API request to the _provider_ to exchange the code for a real _token_.
This is the first direct request between the OAuth _client_ and the _provider_.
- Once the token is retrieved, the client _usually_
makes a second API request to the _provider_
to retrieve information about the owner of the token (the user).
This is the step where behavior diverges for different OAuth providers.
Up to this point, all OAuth providers are the same, following the OAuth specification.
However, OAuth does not define a standard for issuing tokens in exchange for information about their owner or permissions ([OpenID Connect](https://openid.net/developers/how-connect-works/) does that),
so this step may be different for each OAuth provider.
- Finally, the OAuth client stores its own record that the user is authorized in a cookie.
This could be the token itself, or any other appropriate representation of successful authentication.
- Now that credentials have been established,
the browser can be redirected to the _original_ URL where it started,
to try the request again.
If the client wasn't able to keep track of the original URL all this time
(not always easy!),
you might end up back at a default landing page instead of where you started the login process. This is frustrating!
😮‍💨 _phew_.
So that's _one_ OAuth process.
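To make the client's side of this concrete, here is a minimal sketch of a generic OAuth callback handler in Python. The provider endpoints, client credentials, and the `handle_oauth_callback` helper are all hypothetical placeholders; this illustrates the two requests described above, not JupyterHub's actual implementation.
```python
import requests

TOKEN_URL = "https://provider.example/oauth2/token"  # hypothetical endpoint
USERINFO_URL = "https://provider.example/api/user"   # hypothetical endpoint


def handle_oauth_callback(code: str) -> dict:
    # 1. Exchange the short-lived OAuth code for a real token:
    #    the first direct request from the client to the provider.
    token_response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": "my-client-id",          # placeholder
            "client_secret": "my-client-secret",  # placeholder
            "redirect_uri": "https://client.example/oauth_callback",
        },
    )
    access_token = token_response.json()["access_token"]

    # 2. Ask the provider who owns the token. This is the step that
    #    varies between providers, since plain OAuth doesn't define it.
    user = requests.get(
        USERINFO_URL,
        headers={"Authorization": f"Bearer {access_token}"},
    ).json()
    return {"token": access_token, "user": user}
```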
## Full sequence of OAuth in JupyterHub
Let's go through the above OAuth process in JupyterHub,
with specific examples of each HTTP request and what information it contains.
For bonus points, we are using the double-OAuth example of JupyterHub configured with GitHubOAuthenticator.
To disambiguate, we will call the OAuth process where JupyterHub is the **provider** "internal OAuth,"
and the one with JupyterHub as a **client** "external OAuth."
Our starting point:
- a user's single-user server is running. Let's call them `danez`
- JupyterHub is running with GitHub as an OAuth provider (this means two full instances of OAuth),
- Danez has a fresh browser session with no cookies yet.
First request:
- browser->single-user server running JupyterLab or Jupyter Classic
- `GET /user/danez/notebooks/mynotebook.ipynb`
- no credentials, so single-user server (as an OAuth **client**) starts internal OAuth process with JupyterHub (the **provider**)
- response: 302 redirect -> `/hub/api/oauth2/authorize`
with:
- client-id=`jupyterhub-user-danez`
- redirect-uri=`/user/danez/oauth_callback` (we'll come back later!)
Second request, following redirect:
- browser->JupyterHub
- `GET /hub/api/oauth2/authorize`
- no credentials, so JupyterHub starts external OAuth process _with GitHub_
- response: 302 redirect -> `https://github.com/login/oauth/authorize`
with:
- client-id=`jupyterhub-client-uuid`
- redirect-uri=`/hub/oauth_callback` (we'll come back later!)
_pause_ This is where JupyterHub configuration comes into play.
Recall, in this case JupyterHub is using:
```python
c.JupyterHub.authenticator_class = 'github'
```
That means authenticating a request to the Hub itself starts
a _second_, external OAuth process with GitHub as a provider.
This external OAuth process is optional, though.
If you were using the default username+password PAMAuthenticator,
this redirect would have been to `/hub/login` instead, to present the user
with a login form.
Third request, following redirect:
- browser->GitHub
- `GET https://github.com/login/oauth/authorize`
Here, GitHub prompts for login and asks for confirmation of authorization
(more redirects if you aren't logged in to GitHub yet, but ultimately back to this `/authorize` URL).
After successful authorization
(either by looking up a pre-existing authorization,
or recording it via form submission)
GitHub issues an **OAuth code** and redirects to `/hub/oauth_callback?code=github-code`
Next request:
- browser->JupyterHub
- `GET /hub/oauth_callback?code=github-code`
Inside the callback handler, JupyterHub makes two API requests:
The first:
- JupyterHub->GitHub
- `POST https://github.com/login/oauth/access_token`
- request made with OAuth **code** from URL parameter
- response includes an access **token**
The second:
- JupyterHub->GitHub
- `GET https://api.github.com/user`
- request made with access **token** in the `Authorization` header
- response is the user model, including username, email, etc.
Now the external OAuth callback request completes with:
- set cookie on `/hub/` path, recording jupyterhub authentication so we don't need to do external OAuth with GitHub again for a while
- redirect -> `/hub/api/oauth2/authorize`
🎉 At this point, we have completed our first OAuth flow! 🎉
Now, we get our first repeated request:
- browser->JupyterHub
- `GET /hub/api/oauth2/authorize`
- this time with credentials,
so JupyterHub either
1. serves the internal authorization confirmation page, or
2. automatically accepts authorization (shortcut taken when a user is visiting their own server)
- redirect -> `/user/danez/oauth_callback?code=jupyterhub-code`
Here, we start the same OAuth callback process as before, but at Danez's single-user server for the _internal_ OAuth.
- browser->single-user server
- `GET /user/danez/oauth_callback`
(in handler)
Inside the internal OAuth callback handler,
Danez's server makes two API requests to JupyterHub:
The first:
- single-user server->JupyterHub
- `POST /hub/api/oauth2/token`
- request made with OAuth **code** from URL parameter
- response includes an API token
The second:
- single-user server->JupyterHub
- `GET /hub/api/user`
- request made with token in the `Authorization` header
- response is the user model, including username, groups, etc.
Finally completing `GET /user/danez/oauth_callback`:
- response sets cookie, storing encrypted access token
- _finally_ redirects back to the original `/user/danez/notebooks/mynotebook.ipynb`
Final request:
- browser -> single-user server
- `GET /user/danez/notebooks/mynotebook.ipynb`
- encrypted jupyterhub token in cookie
To authenticate this request, the single token stored in the encrypted cookie is passed to the Hub for verification:
- single-user server -> Hub
- `GET /hub/api/user`
- browser's token in Authorization header
- response: user model with name, groups, etc.
If the user model matches who should be allowed (e.g. Danez),
then the request is allowed.
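Under the hood, that check uses the {class}`~.HubAuth` machinery. A rough sketch of the equivalent call (illustrative only; the real check happens inside the server's auth layer, and the token string comes from the browser's encrypted cookie):
```python
from jupyterhub.services.auth import HubAuth

# HubAuth reads JUPYTERHUB_API_TOKEN and JUPYTERHUB_API_URL from the
# environment; the Spawner sets these when launching the server.
hub_auth = HubAuth()

# Resolve the browser's token to a user model via GET /hub/api/user
# (responses are cached for HubAuth.cache_max_age seconds).
user_model = hub_auth.user_for_token("token-from-encrypted-cookie")
if user_model is None:
    # invalid or expired token: the internal OAuth process begins again
    ...
```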
See [Scopes in JupyterHub](jupyterhub-scopes) for how JupyterHub uses scopes to determine authorized access to servers and services.
_the end_
## Token caches and expiry
Because tokens represent information from an external source,
they can become 'stale,'
or the information they represent may no longer be accurate.
For example, a user's GitHub account may no longer be authorized to use JupyterHub;
that change should ultimately propagate to revoking access and forcing a fresh login.
To handle this, OAuth tokens and the various places they are stored can _expire_,
which should have the same effect as no credentials,
and trigger the authorization process again.
In JupyterHub's internal OAuth, we have these layers of information that can go stale:
- The OAuth client has a **cache** of Hub responses for tokens,
so it doesn't need to make API requests to the Hub for every request it receives.
This cache has an expiry of five minutes by default,
and is governed by the configuration `HubAuth.cache_max_age` in the single-user server.
- The internal OAuth token is stored in a cookie, which has its own expiry (default: 14 days),
governed by `JupyterHub.cookie_max_age_days`.
- The internal OAuth token itself can also expire,
which is by default the same as the cookie expiry,
since it makes sense for the token itself and the place it is stored to expire at the same time.
This is governed by `JupyterHub.cookie_max_age_days` first,
or can be overridden by `JupyterHub.oauth_token_expires_in` (see the sketch below).
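A sketch of tuning all three layers; the values shown are the defaults or merely illustrative, and note that `HubAuth.cache_max_age` belongs in the single-user server's configuration, not `jupyterhub_config.py`:
```python
# jupyterhub_config.py
c.JupyterHub.cookie_max_age_days = 14       # cookie (and default token) expiry
c.JupyterHub.oauth_token_expires_in = 3600  # optional: shorter token expiry, in seconds

# single-user server configuration (separate from jupyterhub_config.py)
c.HubAuth.cache_max_age = 300               # token-response cache expiry, in seconds
```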
That's all for _internal_ auth storage,
but the information from the _external_ authentication provider
(could be PAM or GitHub OAuth, etc.) can also expire.
Authenticator configuration governs when JupyterHub needs to ask again,
triggering the external login process anew before letting a user proceed.
- `jupyterhub-hub-login` cookie stores that a browser is authenticated with the Hub.
This expires according to `JupyterHub.cookie_max_age_days` configuration,
with a default of 14 days.
The `jupyterhub-hub-login` cookie is encrypted with `JupyterHub.cookie_secret`
configuration.
- {meth}`.Authenticator.refresh_user` is a method to refresh a user's auth info.
By default, it does nothing, but it can return an updated user model if a user's information has changed,
or force a full login process again if needed (a sketch follows this list).
- {attr}`.Authenticator.auth_refresh_age` configuration governs how often
`refresh_user()` will be called to check if a user must log in again (default: 300 seconds).
- {attr}`.Authenticator.refresh_pre_spawn` configuration governs whether
`refresh_user()` should be called prior to spawning a server,
to force fresh auth info when a server is launched (default: False).
This can be useful when Authenticators pass access tokens to spawner environments, to ensure they aren't getting a stale token that's about to expire.
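As a sketch, a custom Authenticator might implement `refresh_user` along these lines; the `check_upstream` helper is hypothetical, and the return values follow the documented contract (`True` for up-to-date, `False` to force re-login, or an updated user model):
```python
from jupyterhub.auth import Authenticator


class MyAuthenticator(Authenticator):
    async def refresh_user(self, user, handler=None):
        info = await self.check_upstream(user.name)  # hypothetical upstream API call
        if info is None:
            # upstream auth is revoked or expired: force a full login again
            return False
        if not info.get("changed"):
            # auth info is still current; nothing to update
            return True
        # return an updated user model, e.g. with refreshed auth state
        return {"name": user.name, "auth_state": info["auth_state"]}
```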
**So what happens when these things expire or get stale?**
- If the HubAuth **token response cache** expires,
when a request is made with a token,
the Hub is asked for the latest information about the token.
This usually has no visible effect, since it is just refreshing a cache.
If it turns out that the token itself has expired or been revoked,
the request will be denied.
- If the token has expired, but is still in the cookie:
when the token response cache expires,
the next time the server asks the hub about the token,
no user will be identified and the internal OAuth process begins again.
- If the token _cookie_ expires, the next browser request will be made with no credentials,
and the internal OAuth process will begin again.
This will usually have the form of a transparent redirect browsers won't notice.
However, if this occurs on an API request in a long-lived page visit
such as a JupyterLab session, the API request may fail and require
a page refresh to get renewed credentials.
- If the _JupyterHub_ cookie expires, the next time the browser makes a request to the Hub,
the Hub's authorization process must begin again (e.g. login with GitHub).
Hub cookie expiry on its own **does not** mean that a user can no longer access their single-user server!
- If credentials from the upstream provider (e.g. GitHub) become stale or outdated,
these will not be refreshed until/unless `refresh_user` is called
_and_ `refresh_user()` on the given Authenticator is implemented to perform such a check.
At this point, few Authenticators implement `refresh_user` to support this feature.
If your Authenticator does not or cannot implement `refresh_user`,
the only way to force a check is to reset the `JupyterHub.cookie_secret` encryption key,
which invalidates the `jupyterhub-hub-login` cookie for all users.
### Logging out
Logging out of JupyterHub means clearing and revoking many of these credentials:
- The `jupyterhub-hub-login` cookie is revoked, meaning the next request to the Hub itself will require a new login.
- The token stored in the `jupyterhub-user-username` cookie for the single-user server
will be revoked, based on its association with `jupyterhub-session-id`, but the _cookie itself cannot be cleared at this point_.
- The shared `jupyterhub-session-id` is cleared, which ensures that the HubAuth **token response cache** will not be used,
and the next request with the expired token will ask the Hub, which will inform the single-user server that the token has expired.
## Extra bits
(two-tokens)=
### A tale of two tokens
**TODO**: discuss API token issued to server at startup ($JUPYTERHUB_API_TOKEN)
and OAuth-issued token in the cookie,
and some details of how JupyterLab currently deals with that.
They are different, and JupyterLab should be making requests using the token from the cookie,
not the token from the server,
but that is not currently the case.
### Redirect loops
In general, an authenticated web endpoint has this behavior,
based on the authentication/authorization state of the browser:
- If authorized, allow the request to happen
- If authenticated (I know who you are) but not authorized (you are not allowed), fail with a 403 permission denied error
- If not authenticated, start a redirect process to establish authorization,
which should end in a redirect back to the original URL to try again.
**This is why problems in authentication result in redirect loops!**
If the second request fails to detect the authentication that should have been established during the redirect,
it will start the authentication redirect process over again,
and keep redirecting in a loop until the browser balks.
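Schematically, the behavior above can be sketched as a single decision function (illustrative pseudocode with stand-in helpers, not JupyterHub's actual handler code):
```python
def handle(request, identify, authorized):
    """identify/authorized are stand-in callables for cookie/token checks."""
    user = identify(request)
    if user is None:
        # Not authenticated: start the login redirect, remembering the
        # original URL. If login never establishes credentials that
        # identify() can see, every retry lands back here -- a redirect loop.
        return ("redirect", "/login?next=" + request["url"])
    if not authorized(user, request):
        return ("error", 403)  # authenticated, but not allowed
    return ("ok", request["url"])  # authorized: allow the request to happen
```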
@@ -1,109 +0,0 @@
(explanation:singleuser)=
# The JupyterHub single-user server
When a user logs into JupyterHub, they get a 'server', which we usually call the **single-user server**, because it's a server that's meant for a single JupyterHub user.
Each JupyterHub user gets a different one (or more than one!).
A single-user server is a process running somewhere that is:
1. accessible over http[s],
2. authenticated via JupyterHub using OAuth 2.0,
3. started by a [Spawner](spawners), and
4. 'owned' by a single JupyterHub user
## The single-user server command
The Spawner's default single-user server startup command, `jupyterhub-singleuser`, launches `jupyter-server`, the same program used when you run `jupyter lab` on your laptop.
(_It can also launch the legacy `jupyter-notebook` server_).
That's why JupyterHub looks familiar to folks who are already using Jupyter at home or elsewhere.
It's the same!
`jupyterhub-singleuser` _customizes_ that program to change (approximately) one thing: **authenticate requests with JupyterHub**.
(singleuser-auth)=
## Single-user server authentication
Implementation-wise, JupyterHub single-user servers are a special case of {ref}`services-reference`
and as such use the same (OAuth) authentication mechanism (more on OAuth in JupyterHub at [](oauth)).
This code resides in the `jupyterhub.singleuser` subpackage of JupyterHub.
The main task of this code is to:
1. resolve a JupyterHub token to a JupyterHub user (authenticate)
2. check permissions (`access:servers`) for the token to make sure the request should be allowed (authorize)
3. if not authorized, begin the OAuth process with a redirect to the Hub
4. after login, store OAuth tokens in a cookie only used by this single-user server
5. implement logout to clear the cookie
Most of this is implemented in the {class}`~.HubOAuth` class. `jupyterhub.singleuser` is responsible for _adapting_ the base Jupyter Server to use HubOAuth for these tasks.
### JupyterHub authentication extension
By default, `jupyter-server` uses its own cookie to authenticate.
If that cookie is not present, the server redirects you to a login page and asks you to enter a password or token.
Jupyter Server 2.0 introduces two new _APIs_ for customizing authentication: the [IdentityProvider](inv:jupyter-server#jupyter_server.auth.IdentityProvider) and the [Authorizer](inv:jupyter-server#jupyter_server.auth.Authorizer).
More information can be found in the [Jupyter Server documentation](https://jupyter-server.readthedocs.io).
JupyterHub implements these APIs in `jupyterhub.singleuser.extension`.
The IdentityProvider is responsible for _authenticating_ requests.
In JupyterHub, that means extracting OAuth tokens from the request and resolving them to a JupyterHub user.
The Authorizer is a _separate_ API for _authorizing_ actions on particular resources.
Because the JupyterHub IdentityProvider only allows _authenticating_ users who already have the necessary `access:servers` permission to access the server, the default Authorizer only contains a redundant check for this same permission, and ignores the resource inputs.
However, specifying a _custom_ Authorizer allows for granular permissions, such as read-only access to subsets of a shared server.
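For instance, here is a minimal sketch of a read-only Authorizer using Jupyter Server 2's API (illustrative; a real deployment would likely scope this to particular users or resources):
```python
from jupyter_server.auth import Authorizer


class ReadOnlyAuthorizer(Authorizer):
    """Sketch: permit only read actions, regardless of user or resource."""

    def is_authorized(self, handler, user, action, resource):
        # action is one of "read", "write", "execute"
        return action == "read"
```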
### JupyterHub authentication via subclass
Prior to Jupyter Server 2 (i.e. Jupyter Server 1.x or the legacy `jupyter-notebook` server), JupyterHub authentication is applied via _subclass_.
Originally a subclass of `NotebookApp`,
this approach works with both `jupyter-server` and `jupyter-notebook`.
Instead of using the extension mechanisms above,
the server application is _subclassed_. This worked well in the `jupyter-notebook` days,
but doesn't fit well with Jupyter Server's extension-based architecture.
### Selecting jupyterhub-singleuser implementation
The JupyterHub singleuser-server extension is used by default with JupyterHub 4 and Jupyter Server 2; otherwise, the subclass approach is taken.
You can opt out of the extension by setting the environment variable `JUPYTERHUB_SINGLEUSER_EXTENSION=0`:
```python
c.Spawner.environment.update(
{
"JUPYTERHUB_SINGLEUSER_EXTENSION": "0",
}
)
```
The subclass approach will also be taken if you've opted to use the classic notebook server with:
```
JUPYTERHUB_SINGLEUSER_APP=notebook
```
which was introduced in JupyterHub 2.
## Other customizations
`jupyterhub-singleuser` makes other small customizations to how the single-user server behaves:
1. logs activity on the single-user server, used in [idle-culling](https://github.com/jupyterhub/jupyterhub-idle-culler).
2. disables some features that don't make sense in JupyterHub (trash, retrying ports)
3. loads options such as URLs and SSL configuration from the environment
4. customizes logging for consistency with JupyterHub logs
## Running a single-user server that's not `jupyterhub-singleuser`
By default, `jupyterhub-singleuser` is the same `jupyter-server` used by JupyterLab, Jupyter notebook (>= 7), etc.
But technically, all JupyterHub cares about is that it is:
1. an http server at the prescribed URL, accessible from the Hub and proxy, and
2. authenticated via [OAuth](oauth) with the Hub (it doesn't even have to do this, if you want to do your own authentication, as is done in BinderHub)
which means that you can customize JupyterHub to launch _any_ web application that meets these criteria, by following the specifications in {ref}`services-reference`.
Most of the time, though, it's easier to use [jupyter-server-proxy](https://jupyter-server-proxy.readthedocs.io) if you want to launch additional web applications in JupyterHub.
@@ -1,273 +0,0 @@
(explanation:security)=
# Security Overview
The **Security Overview** section helps you learn about:
- the design of JupyterHub with respect to web security
- the semi-trusted user
- the available mitigations to protect untrusted users from each other
- the value of periodic security audits
This overview also helps you obtain a deeper understanding of how JupyterHub
works.
## Semi-trusted and untrusted users
JupyterHub is designed to be a _simple multi-user server for modestly sized
groups_ of **semi-trusted** users. While the design reflects serving
semi-trusted users, JupyterHub can also be configured to serve **untrusted** users,
though it **is not suitable for untrusted users** in its default configuration.
As a result, using JupyterHub with **untrusted** users means more work by the
administrator, since much care is required to secure a Hub, with extra caution on
protecting users from each other.
One aspect of JupyterHub's _design simplicity_ for **semi-trusted** users is that
the Hub and single-user servers are placed in a _single domain_, behind a
[_proxy_][configurable-http-proxy]. If the Hub is serving untrusted
users, many of the web's cross-site protections are not applied between
single-user servers and the Hub, or between single-user servers and each
other, since browsers see the whole thing (proxy, Hub, and single user
servers) as a single website (i.e. single domain).
## Protect users from each other
To protect users from each other, a user must **never** be able to write arbitrary
HTML and serve it to another user on the Hub's domain. This is prevented by JupyterHub's
authentication setup because only the owner of a given single-user notebook server is
allowed to view user-authored pages served by the given single-user notebook
server.
To protect all users from each other, JupyterHub administrators must
ensure that:
- A user **does not have permission** to modify their single-user notebook server,
including:
- the installation of new packages in the Python environment that runs
their single-user server;
- the creation of new files in any `PATH` directory that precedes the
directory containing `jupyterhub-singleuser` (if the `PATH` is used
to resolve the single-user executable instead of using an absolute path);
- the modification of environment variables (e.g. PATH, PYTHONPATH) for
their single-user server;
- the modification of the configuration of the notebook server
(the `~/.jupyter` or `JUPYTER_CONFIG_DIR` directory).
- unrestricted selection of the base environment (e.g. the image used in container-based Spawners)
If any additional services are run on the same domain as the Hub, the services
**must never** display user-authored HTML that is neither _sanitized_ nor _sandboxed_
to any user that lacks authentication as the author of a file.
### Sharing access to servers
Because sharing access to servers (via `access:servers` scopes or the sharing feature in JupyterHub 5) by definition means users can serve each other files, enabling sharing is not suitable for untrusted users without also enabling per-user domains.
JupyterHub does not enable any sharing by default.
## Mitigate security issues
Several approaches to mitigating security issues with the configuration
options provided by JupyterHub include:
(subdomains)=
### Enable user subdomains
JupyterHub provides the ability to run single-user servers on their own
domains. This means the cross-origin protections between servers have the
desired effect, and user servers and the Hub are protected from each other.
**Subdomains are the only way to reliably isolate user servers from each other.**
To enable subdomains, set:
```python
c.JupyterHub.subdomain_host = "https://jupyter.example.org"
```
When subdomains are enabled, each user's single-user server will be at e.g. `https://username.jupyter.example.org`.
This also requires all user subdomains to point to the same address,
which is most easily accomplished with wildcard DNS, where a single A record points to your server and a wildcard CNAME record points to your A record:
```
A jupyter.example.org 192.168.1.123
CNAME *.jupyter.example.org jupyter.example.org
```
Since this spreads the service across multiple domains, you will likely need wildcard SSL as well,
matching `*.jupyter.example.org`.
Unfortunately, for many institutional domains, wildcard DNS and SSL may not be available.
We also **strongly encourage** serving JupyterHub and user content on a domain that is _not_ a subdomain of any sensitive content.
For reasoning, see [GitHub's discussion of moving user content to github.io from \*.github.com](https://github.blog/engineering/yummy-cookies-across-domains/).
**If you do plan to serve untrusted users, enabling subdomains is highly encouraged**,
as it resolves many security issues that are difficult or impossible to avoid when JupyterHub is served from a single domain.
:::{important}
JupyterHub makes no guarantees about protecting users from each other unless subdomains are enabled.
If you want to protect users from each other, you **_must_** enable per-user domains.
:::
### Disable user config
If subdomains are unavailable or undesirable, JupyterHub provides a
configuration option `Spawner.disable_user_config = True`, which can be set to prevent
the user-owned configuration files from being loaded. After implementing this
option, `PATH`s and package installation are the other things that the
admin must enforce.
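In `jupyterhub_config.py`, that is:
```python
c.Spawner.disable_user_config = True
```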
### Prevent spawners from evaluating shell configuration files
For most Spawners, `PATH` is not something users can influence, but it's important that
the Spawner _not_ evaluate shell configuration files prior to launching the server.
### Isolate packages in a read-only environment
The user must not have permission to install packages into the environment where the single-user server runs.
On a shared system, package isolation is most easily handled by running the single-user server in
a root-owned virtualenv with system-site-packages disabled.
The same principle extends to the images used by container-based deployments.
If users can select the images in which their servers run, they can disable all security for their own servers.
It is important to note that the control over the environment is only required for the
single-user server, and not the environment(s) in which the users' kernel(s)
may run. Installing additional packages in the kernel environment does not
pose additional risk to the web application's security.
### Encrypt internal connections with SSL/TLS
By default, all communications within JupyterHub—between the proxy, hub, and single-user
notebooks—are performed unencrypted. Setting the `internal_ssl` flag in
`jupyterhub_config.py` secures the aforementioned routes. Turning this
feature on does require that the enabled `Spawner` can use the certificates
generated by the `Hub` (the default `LocalProcessSpawner` can, for instance).
It is also important to note that this encryption **does not** cover the
`zmq tcp` sockets between the Notebook client and kernel yet. While users cannot
submit arbitrary commands to another user's kernel, they can bind to these
sockets and listen. When serving untrusted users, this eavesdropping can be
mitigated by setting `KernelManager.transport` to `ipc`. This applies standard
Unix permissions to the communication sockets thereby restricting
communication to the socket owner. The `internal_ssl` option will eventually
extend to securing the `tcp` sockets as well.
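A sketch of both settings mentioned above; `internal_ssl` goes in `jupyterhub_config.py`, while `KernelManager.transport` belongs in the single-user (notebook server) configuration:
```python
# jupyterhub_config.py: encrypt proxy <-> Hub <-> single-user traffic
c.JupyterHub.internal_ssl = True

# single-user server config: use Unix (ipc) sockets for kernel
# communication, restricting access to the socket owner
c.KernelManager.transport = "ipc"
```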
### Mitigating same-origin deployments
While per-user domains are **required** for robust protection of users from each other,
you can mitigate many (but not all) cross-user issues.
First, it is critical that users cannot modify their server environments, as described above.
Second, it is important that users do not have `access:servers` permission to any server other than their own.
If users can access each others' servers, additional security measures must be enabled, some of which come with distinct user-experience costs.
Without the [Same-Origin Policy] (SOP) protecting user servers from each other,
each user server is considered a trusted origin for requests to each other user server (and the Hub itself).
Servers _cannot_ meaningfully distinguish requests originating from other user servers,
because sharing an origin implies a great deal of trust, losing many restrictions applied to cross-origin requests.
That means pages served from each user server can:
1. arbitrarily modify the path in the Referer
2. make fully authorized requests with cookies
3. access full page contents served from the hub or other servers via popups
JupyterHub uses distinct XSRF tokens stored in cookies on each server path to attempt to limit cross-origin requests between user servers.
This has limitations because not all requests are protected by these XSRF tokens,
and unless additional measures are taken, the XSRF tokens from other user prefixes may be retrieved.
[Same-Origin Policy]: https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
For example:
- `Content-Security-Policy` header must prohibit popups and iframes from the same origin.
The following Content-Security-Policy rules are _insecure_ and readily enable users to access each others' servers:
- `frame-ancestors: 'self'`
- `frame-ancestors: '*'`
- `sandbox allow-popups`
- Ideally, pages should use the strictest `Content-Security-Policy: sandbox` available,
but this is not feasible in general for JupyterLab pages, which need at least `sandbox allow-same-origin allow-scripts` to work.
The default Content-Security-Policy for single-user servers is
```
frame-ancestors: 'none'
```
which prohibits iframe embedding, but not pop-ups.
A more secure Content-Security-Policy that has some costs to user experience is:
```
frame-ancestors: 'none'; sandbox allow-same-origin allow-scripts
```
`allow-popups` is not disabled by default because disabling it breaks legitimate functionality, like "Open this in a new tab", and the "JupyterHub Control Panel" menu item.
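As a sketch, such a policy could be applied via the single-user server's tornado settings (assuming jupyter-server's `ServerApp`; this goes in the single-user configuration, not `jupyterhub_config.py`):
```python
c.ServerApp.tornado_settings = {
    "headers": {
        "Content-Security-Policy": "frame-ancestors 'none'; sandbox allow-same-origin allow-scripts"
    }
}
```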
To reiterate, the right way to avoid these issues is to enable per-user domains, where none of these concerns come up.
Note: even this level of protection requires administrators maintaining full control over the user server environment.
If users can modify their server environment, these methods are ineffective, as users can readily disable them.
### Cookie tossing
Cookie tossing is a technique where another server on a subdomain or peer subdomain can set a cookie
which will be read on another domain.
This is not relevant unless there are other user-controlled servers on a peer domain.
"Domain-locked" cookies avoid this issue, but have their own restrictions:
- JupyterHub must be served over HTTPS
- All secure cookies must be set on `/`, not on sub-paths, which means they are shared by all JupyterHub components in a single-domain deployment.
As a result, this option is only recommended when per-user subdomains are enabled,
to prevent sending all jupyterhub cookies to all user servers.
To enable domain-locked cookies, set:
```python
c.JupyterHub.cookie_host_prefix_enabled = True
```
```{versionadded} 4.1
```
### Forced-login
Jupyter servers can share links with `?token=...`.
JupyterHub prior to 5.0 will accept this request and persist the token for future requests.
This is useful for enabling admins to create 'fully authenticated' links bypassing login.
However, it also means users can share their own links that will log other users into their own servers,
enabling them to serve each other notebooks and other arbitrary HTML, depending on server configuration.
```{versionadded} 4.1
Setting environment variable `JUPYTERHUB_ALLOW_TOKEN_IN_URL=0` in the single-user environment can opt out of accepting token auth in URL parameters.
```
```{versionadded} 5.0
Accepting tokens in URLs is disabled by default, and `JUPYTERHUB_ALLOW_TOKEN_IN_URL=1` environment variable must be set to _allow_ token auth in URL parameters.
```
## Security audits
We recommend that you do periodic reviews of your deployment's security. It's
good practice to keep [JupyterHub](https://readthedocs.org/projects/jupyterhub/), [configurable-http-proxy][], and [nodejs
versions](https://github.com/nodejs/Release) up to date.
A handy website for testing your deployment is
[Qualys' SSL analyzer tool](https://www.ssllabs.com/ssltest/analyze.html).
[configurable-http-proxy]: https://github.com/jupyterhub/configurable-http-proxy
## Vulnerability reporting
If you believe you have found a security vulnerability in JupyterHub, or any
Jupyter project, please report it to
[security@ipython.org](mailto:security@ipython.org). If you prefer to encrypt
your security reports, you can use [this PGP public
key](https://jupyter.org/assets/ipython_security.asc).
@@ -1,78 +0,0 @@
(faq)=
# Frequently asked questions
## How do I share links to notebooks?
Sharing links to notebooks is a common activity,
and can look different depending on what you mean by 'share.'
Your first instinct might be to copy the URL you see in the browser,
e.g. `jupyterhub.example/user/yourname/notebooks/coolthing.ipynb`,
but this usually won't work, depending on the permissions of the person you share the link with.
Unfortunately, 'share' means at least a few things to people in a JupyterHub context.
We'll cover 3 common cases here, when they are applicable, and what assumptions they make:
1. sharing links that will open the same file on the visitor's own server
2. sharing links that will bring the visitor to _your_ server (e.g. for real-time collaboration, or RTC)
3. publishing notebooks and sharing links that will download the notebook into the user's server
### link to the same file on the visitor's server
This is for the case where you have JupyterHub on a shared (or sufficiently similar) filesystem, where you want to share a link that will cause users to login and start their _own_ server, to view or edit the file.
**Assumption:** the same path on someone else's server is valid and points to the same file
This is useful in e.g. classes where you know students have certain files in certain locations, or collaborations where you know you have a shared filesystem where everyone has access to the same files.
A link should look like `https://jupyterhub.example/hub/user-redirect/lab/tree/foo.ipynb`.
You can hand-craft these URLs from the URL you are looking at, where you see `/user/name/lab/tree/foo.ipynb` use `/hub/user-redirect/lab/tree/foo.ipynb` (replace `/user/name/` with `/hub/user-redirect/`).
Or you can use JupyterLab's "copy shareable link" in the context menu in the file browser:
![copy shareable link in JupyterLab](../images/shareable_link.webp)
which will produce a correct URL with `/hub/user-redirect/` in it.
### link to the file on your server
This is for the case where you want to both be using _your_ server, e.g. for real-time collaboration (RTC).
**Assumption:** the user has (or should have) access to your server.
**Assumption:** your server is running _or_ the user has permission to start it.
By default, JupyterHub users don't have access to each other's servers, but JupyterHub 2.0 administrators can grant users limited access permissions to each other's servers.
If the visitor doesn't have access to the server, these links will result in a 403 Permission Denied error.
In many cases, for this situation you can copy the link in your URL bar (`/user/yourname/lab`), or you can add `/tree/path/to/specific/notebook.ipynb` to open a specific file.
The [jupyterlab-link-share] JupyterLab extension generates these links, and even can _grant_ other users access to your server.
[jupyterlab-link-share]: https://github.com/jupyterlab-contrib/jupyterlab-link-share
:::{warning}
Note that the way the extension _grants_ access is handing over credentials to allow the other user to **_BECOME YOU_**.
This is usually not appropriate in JupyterHub.
:::
### link to a published copy
Another way to 'share' notebooks is to publish copies, e.g. pushing the notebook to a git repository and sharing a download link.
This way is especially useful for course materials,
where no assumptions are necessary about the user's environment,
except for having one package installed.
**Assumption:** The [nbgitpuller](inv:nbgitpuller#index) server extension is installed
Unlike the other two methods, nbgitpuller doesn't provide an extension to copy a shareable link for the document you're currently looking at,
but it does provide a [link generator](inv:nbgitpuller#link),
which uses the `user-redirect` approach above.
When visiting an nbgitpuller link:
- The visitor will be directed to their own server
- Your repo will be cloned (or updated if it's already been cloned)
- and then the file opened when it's ready
@@ -1,11 +0,0 @@
# FAQs
Find answers to some of the most frequently-asked questions around JupyterHub and how it works.
```{toctree}
:maxdepth: 2
faq
institutional-faq
troubleshooting
```
@@ -0,0 +1,191 @@
# A Gallery of JupyterHub Deployments
**A JupyterHub Community Resource**
We've compiled this list of JupyterHub deployments to help the community
see the breadth and growth of JupyterHub's use in education, research, and
high performance computing.
Please submit pull requests to update information or to add new institutions or uses.
## Academic Institutions, Research Labs, and Supercomputer Centers
### University of California Berkeley
- [BIDS - Berkeley Institute for Data Science](https://bids.berkeley.edu/)
- [Teaching with Jupyter notebooks and JupyterHub](https://bids.berkeley.edu/resources/videos/teaching-ipythonjupyter-notebooks-and-jupyterhub)
- [Data 8](http://data8.org/)
- [GitHub organization](https://github.com/data-8)
- [NERSC](http://www.nersc.gov/)
- [Press release on Jupyter and Cori](http://www.nersc.gov/news-publications/nersc-news/nersc-center-news/2016/jupyter-notebooks-will-open-up-new-possibilities-on-nerscs-cori-supercomputer/)
- [Moving and sharing data](https://www.nersc.gov/assets/Uploads/03-MovingAndSharingData-Cholia.pdf)
- [Research IT](http://research-it.berkeley.edu)
- [JupyterHub server supports campus research computation](http://research-it.berkeley.edu/blog/17/01/24/free-fully-loaded-jupyterhub-server-supports-campus-research-computation)
### University of California Davis
- [Spinning up multiple Jupyter Notebooks on AWS for a tutorial](https://github.com/mblmicdiv/course2017/blob/master/exercises/sourmash-setup.md)
Although not technically a JupyterHub deployment, this tutorial setup
may be helpful to others in the Jupyter community.
Thank you to C. Titus Brown for sharing this with the Software Carpentry
mailing list.
```
* I started a big Amazon machine;
* I installed Docker and built a custom image containing my software of
interest;
* I ran multiple containers, one connected to port 8000, one on 8001,
etc. and gave each student a different port;
* students could connect in and use the Terminal program in Jupyter to
execute commands, and could upload/download files via the Jupyter
console interface;
* in theory I could have used notebooks too, but for this I didn't have
need.
I am aware that JupyterHub can probably do all of this including manage
the containers, but I'm still a bit shy of diving into that; this was
fairly straightforward, gave me disposable containers that were isolated
for each individual student, and worked almost flawlessly. Should be
easy to do with RStudio too.
```
### Cal Poly San Luis Obispo
- [jupyterhub-deploy-teaching](https://github.com/jupyterhub/jupyterhub-deploy-teaching) based on work by Brian Granger for Cal Poly's Data Science 301 Course
### Clemson University
- Advanced Computing
- [Palmetto cluster and JupyterHub](http://citi.sites.clemson.edu/2016/08/18/JupyterHub-for-Palmetto-Cluster.html)
### University of Colorado Boulder
- CU Research Computing (CURC)
- [JupyterHub User Guide](https://www.rc.colorado.edu/support/user-guide/jupyterhub.html)
- Slurm job dispatched on Crestone compute cluster
- log troubleshooting
- Profiles in IPython Clusters tab
- [Parallel Processing with JupyterHub tutorial](https://www.rc.colorado.edu/support/examples-and-tutorials/parallel-processing-with-jupyterhub.html)
- [Parallel Programming with JupyterHub document](https://www.rc.colorado.edu/book/export/html/833)
- Earth Lab at CU
- [Tutorial on Parallel R on JupyterHub](https://earthdatascience.org/tutorials/parallel-r-on-jupyterhub/)
### George Washington University
- [Jupyter Hub](http://go.gwu.edu/jupyter) with university single-sign-on. Deployed early 2017.
### HTCondor
- [HTCondor Python Bindings Tutorial from HTCondor Week 2017 includes information on their JupyterHub tutorials](https://research.cs.wisc.edu/htcondor/HTCondorWeek2017/presentations/TueBockelman_Python.pdf)
### University of Illinois
- https://datascience.business.illinois.edu (currently down; checked 04/26/19)
### IllustrisTNG Simulation Project
- [JupyterHub/Lab-based analysis platform, part of the TNG public data release](http://www.tng-project.org/data/)
### MIT and Lincoln Labs
- https://supercloud.mit.edu/
### Michigan State University
- [Setting up JupyterHub](https://mediaspace.msu.edu/media/Setting+Up+Your+JupyterHub+Password/1_hgv13aag/11980471)
### University of Minnesota
- [JupyterHub Inside HPC](https://insidehpc.com/tag/jupyterhub/)
### University of Missouri
- https://dsa.missouri.edu/faq/
### Paderborn University
- [Data Science (DICE) group](https://dice.cs.uni-paderborn.de/)
- [nbgraderutils](https://github.com/dice-group/nbgraderutils): Use JupyterHub + nbgrader + iJava kernel for online Java exercises. Used in lecture Statistical Natural Language Processing.
### Penn State University
- [Press release](https://news.psu.edu/story/523093/2018/05/24/new-open-source-web-apps-available-students-and-faculty): "New open-source web apps available for students and faculty" (but Hub is currently down; checked 04/26/19)
### University of Rochester CIRC
- [JupyterHub Userguide](https://info.circ.rochester.edu/Web_Applications/JupyterHub.html) - Slurm, beehive
### University of California San Diego
- San Diego Supercomputer Center - Andrea Zonca
- [Deploy JupyterHub on a Supercomputer with SSH](https://zonca.github.io/2017/05/jupyterhub-hpc-batchspawner-ssh.html)
- [Run Jupyterhub on a Supercomputer](https://zonca.github.io/2015/04/jupyterhub-hpc.html)
- [Deploy JupyterHub on a VM for a Workshop](https://zonca.github.io/2016/04/jupyterhub-sdsc-cloud.html)
- [Customize your Python environment in Jupyterhub](https://zonca.github.io/2017/02/customize-python-environment-jupyterhub.html)
- [Jupyterhub deployment on multiple nodes with Docker Swarm](https://zonca.github.io/2016/05/jupyterhub-docker-swarm.html)
- [Sample deployment of Jupyterhub in HPC on SDSC Comet](https://zonca.github.io/2017/02/sample-deployment-jupyterhub-hpc.html)
- Educational Technology Services - Paul Jamason
- [jupyterhub.ucsd.edu](https://jupyterhub.ucsd.edu)
### TACC University of Texas
### Texas A&M
- Kristen Thyng - Oceanography
- [Teaching with JupyterHub and nbgrader](http://kristenthyng.com/blog/2016/09/07/jupyterhub+nbgrader/)
### Elucidata
- What's new in Jupyter Notebooks @[Elucidata](https://elucidata.io/):
- Using Jupyter Notebooks with Jupyterhub on GCP, managed by GKE
- https://medium.com/elucidata/why-you-should-be-using-a-jupyter-notebook-8385a4ccd93d
## Service Providers
### AWS
- [running-jupyter-notebook-and-jupyterhub-on-amazon-emr](https://aws.amazon.com/blogs/big-data/running-jupyter-notebook-and-jupyterhub-on-amazon-emr/)
### Google Cloud Platform
- [Using Tensorflow and JupyterHub in Classrooms](https://cloud.google.com/solutions/using-tensorflow-jupyterhub-classrooms)
- [using-tensorflow-and-jupyterhub blog post](https://opensource.googleblog.com/2016/10/using-tensorflow-and-jupyterhub.html)
### Everware
[Everware](https://github.com/everware) Reproducible and reusable science powered by jupyterhub and docker. Like nbviewer, but executable. CERN, Geneva [website](http://everware.xyz/)
### Microsoft Azure
- https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-data-science-linux-dsvm-intro
### Rackspace Carina
- https://getcarina.com/blog/learning-how-to-whale/
- http://carolynvanslyck.com/talk/carina/jupyterhub/#/
### Hadoop
- [Deploying JupyterHub on Hadoop](https://jupyterhub-on-hadoop.readthedocs.io)
## Miscellaneous
- https://medium.com/@ybarraud/setting-up-jupyterhub-with-sudospawner-and-anaconda-844628c0dbee#.rm3yt87e1
- https://groups.google.com/forum/#!topic/jupyter/nkPSEeMr8c0 Mailing list UT deployment
- JupyterHub setup on Centos https://gist.github.com/johnrc/604971f7d41ebf12370bf5729bf3e0a4
- Deploy JupyterHub to Docker Swarm https://jupyterhub.surge.sh/#/welcome
- http://www.laketide.com/building-your-lab-part-3/
- http://estrellita.hatenablog.com/entry/2015/07/31/083202
- http://www.walkingrandomly.com/?p=5734
- https://wrdrd.com/docs/consulting/education-technology
- https://bitbucket.org/jackhale/fenics-jupyter
- [LinuxCluster blog](https://linuxcluster.wordpress.com/category/application/jupyterhub/)
- [Network Technology](https://arnesund.com/tag/jupyterhub/) [Spark Cluster on OpenStack with Multi-User Jupyter Notebook](https://arnesund.com/2015/09/21/spark-cluster-on-openstack-with-multi-user-jupyter-notebook/)
@@ -0,0 +1,119 @@
# Authentication and User Basics
The default Authenticator uses [PAM][] to authenticate system users with
their username and password. With the default Authenticator, any user
with an account and password on the system will be allowed to login.
## Create a set of allowed users
You can restrict which users are allowed to login with a set,
`Authenticator.allowed_users`:
```python
c.Authenticator.allowed_users = {'mal', 'zoe', 'inara', 'kaylee'}
```
Users in the `allowed_users` set are added to the Hub database when the Hub is
started.
## Configure admins (`admin_users`)
Admin users of JupyterHub, `admin_users`, can add and remove users from
the user `allowed_users` set. `admin_users` can take actions on other users'
behalf, such as stopping and restarting their servers.
A set of initial admin users, `admin_users`, can be configured as follows:
```python
c.Authenticator.admin_users = {'mal', 'zoe'}
```
Users in the admin set are automatically added to the user `allowed_users` set,
if they are not already present.
Each authenticator may have different ways of determining whether a user is an
administrator. By default, JupyterHub uses the PAMAuthenticator, which provides the
`admin_groups` option and can determine administrator status based on a user's
group membership. For example, we can let any user in the `wheel` group be an admin:
```python
c.PAMAuthenticator.admin_groups = {'wheel'}
```
## Give admin access to other users' notebook servers (`admin_access`)
Since the default `JupyterHub.admin_access` setting is False, the admins
do not have permission to log in to the single user notebook servers
owned by *other users*. If `JupyterHub.admin_access` is set to True,
then admins have permission to log in *as other users* on their
respective machines, for debugging. **As a courtesy, you should make
sure your users know if admin_access is enabled.**
## Add or remove users from the Hub
Users can be added to and removed from the Hub via either the admin
panel or the REST API. When a user is **added**, the user will be
automatically added to the allowed users set and database. Restarting the Hub
will not require manually updating the allowed users set in your config file,
as the users will be loaded from the database.
After starting the Hub once, it is not sufficient to **remove** a user
from the allowed users set in your config file. You must also remove the user
from the Hub's database, either by deleting the user from JupyterHub's
admin page, or you can clear the `jupyterhub.sqlite` database and start
fresh.
## Use LocalAuthenticator to create system users
The `LocalAuthenticator` is a special kind of authenticator that has
the ability to manage users on the local system. When you try to add a
new user to the Hub, a `LocalAuthenticator` will check if the user
already exists. If you set the configuration value, `create_system_users`,
to `True` in the configuration file, the `LocalAuthenticator` has
the privileges to add users to the system. The setting in the config
file is:
```python
c.LocalAuthenticator.create_system_users = True
```
Adding a user to the Hub that doesn't already exist on the system will
result in the Hub creating that user via the system `adduser` command
line tool. This option is typically used on hosted deployments of
JupyterHub, to avoid the need to manually create all your users before
launching the service. This approach is not recommended when running
JupyterHub in situations where JupyterHub users map directly onto the
system's UNIX users.
## Use OAuthenticator to support OAuth with popular service providers
JupyterHub's [OAuthenticator][] currently supports the following
popular services:
- Auth0
- Bitbucket
- CILogon
- GitHub
- GitLab
- Globus
- Google
- MediaWiki
- Okpy
- OpenShift
A generic implementation, which you can use for OAuth authentication
with any provider, is also available.
## Use DummyAuthenticator for testing
The :class:`~jupyterhub.auth.DummyAuthenticator` is a simple authenticator that
allows any username/password combination unless a global password has been set. If
set, it will allow any username as long as the correct password is provided.
To set a global password, add this to the config file:
```python
c.DummyAuthenticator.password = "some_password"
```
[PAM]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
[OAuthenticator]: https://github.com/jupyterhub/oauthenticator
@@ -1,7 +1,7 @@
# Configuration Basics
-This section contains basic information about configuring settings for a JupyterHub
-deployment. The [Technical Reference](reference-index)
+The section contains basic information about configuring settings for a JupyterHub
+deployment. The [Technical Reference](../reference/index)
documentation provides additional details.
This section will help you learn how to:
@@ -11,8 +11,6 @@ This section will help you learn how to:
- configure JupyterHub using command line options
- find information and examples for some common deployments
-(generate-config-file)=
## Generate a default config file
On startup, JupyterHub will look by default for a configuration file,
@@ -46,30 +44,30 @@ jupyterhub -f /etc/jupyterhub/jupyterhub_config.py
```
The IPython documentation provides additional information on the
-[config system](https://ipython.readthedocs.io/en/stable/development/config.html)
+[config system](http://ipython.readthedocs.io/en/stable/development/config)
that Jupyter uses.
## Configure using command line options
-To display all command line options that are available for configuration run the following command:
+To display all command line options that are available for configuration:
```bash
jupyterhub --help-all
```
Configuration using the command line options is done when launching JupyterHub.
-For example, to start JupyterHub on `10.0.1.2:443` with https, you
+For example, to start JupyterHub on ``10.0.1.2:443`` with https, you
would enter:
```bash
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
```
-All configurable options may technically be set on the command line,
+All configurable options may technically be set on the command-line,
though some are inconvenient to type. To set a particular configuration
parameter, `c.Class.trait`, you would use the command line option,
`--Class.trait`, when starting JupyterHub. For example, to configure the
-`c.Spawner.notebook_dir` trait from the command line, use the
+`c.Spawner.notebook_dir` trait from the command-line, use the
`--Spawner.notebook_dir` option:
```bash
@@ -79,24 +77,24 @@ jupyterhub --Spawner.notebook_dir='~/assignments'
## Configure for various deployment environments
The default authentication and process spawning mechanisms can be replaced, and
-specific [authenticators](authenticators-users-basics) and
-[spawners](spawners-basics) can be set in the configuration file.
+specific [authenticators](./authenticators-users-basics) and
+[spawners](./spawners-basics) can be set in the configuration file.
This enables JupyterHub to be used with a variety of authentication methods or
-process control and deployment environments. [Some examples](config-examples),
-meant as illustrations, are:
+process control and deployment environments. [Some examples](../reference/config-examples),
+meant as illustration, are:
- Using GitHub OAuth instead of PAM with [OAuthenticator](https://github.com/jupyterhub/oauthenticator)
- Spawning single-user servers with Docker, using the [DockerSpawner](https://github.com/jupyterhub/dockerspawner)
## Run the proxy separately
-This is _not_ strictly necessary, but useful in many cases. If you
-use a custom proxy (e.g. Traefik), this is also not needed.
+This is *not* strictly necessary, but useful in many cases. If you
+use a custom proxy (e.g. Traefik), this also not needed.
-Connections to user servers go through the proxy, and _not_ the hub
-itself. If the proxy stays running when the hub restarts (for
-maintenance, re-configuration, etc.), then user connections are not
-interrupted. For simplicity, by default the hub starts the proxy
+Connections to user servers go through the proxy, and *not* the hub
+itself. If the proxy stays running when the hub restarts (for
+maintenance, re-configuration, etc.), then use connections are not
+interrupted. For simplicity, by default the hub starts the proxy
automatically, so if the hub restarts, the proxy restarts, and user
-connections are interrupted. It is easy to run the proxy separately,
-for information see [the separate proxy page](howto:separate-proxy).
+connections are interrupted. It is easy to run the proxy separately,
+for information see [the separate proxy page](../reference/separate-proxy).
@@ -0,0 +1,36 @@
# Frequently asked questions
### How do I share links to notebooks?
In short, where you see `/user/name/notebooks/foo.ipynb` use `/hub/user-redirect/notebooks/foo.ipynb` (replace `/user/name` with `/hub/user-redirect`).
Sharing links to notebooks is a common activity,
and can look different based on what you mean.
Your first instinct might be to copy the URL you see in the browser,
e.g. `hub.jupyter.org/user/yourname/notebooks/coolthing.ipynb`.
However, let's break down what this URL means:
`hub.jupyter.org/user/yourname/` is the URL prefix handled by *your server*,
which means that sharing this URL is asking the person you share the link with
to come to *your server* and look at the exact same file.
In most circumstances, this is forbidden by permissions because the person you share with does not have access to your server.
What actually happens when someone visits this URL will depend on whether your server is running and other factors.
But what is our actual goal?
A typical situation is that you have some shared or common filesystem,
such that the same path corresponds to the same document
(either the exact same document or another copy of it).
Typically, what folks want when they do sharing like this
is for each visitor to open the same file *on their own server*,
so Breq would open `/user/breq/notebooks/foo.ipynb` and
Seivarden would open `/user/seivarden/notebooks/foo.ipynb`, etc.
JupyterHub has a special URL that does exactly this!
It's called `/hub/user-redirect/...`, and after the visitor logs in,
it sends them to the same path on their own server.
So if you replace `/user/yourname` in your URL bar
with `/hub/user-redirect`, any visitor should get the same
URL on their own server, rather than visiting yours.
In JupyterLab 2.0, this should also be the result of the "Copy Shareable Link"
action in the file browser.
@@ -0,0 +1,19 @@
Get Started
===========
This section covers how to configure and customize JupyterHub for your
needs. It contains information about authentication, networking, security, and
other topics that are relevant to individuals or organizations deploying their
own JupyterHub.
.. toctree::
:maxdepth: 2
config-basics
networking-basics
security-basics
authenticators-users-basics
spawners-basics
services-basics
faq
institutional-faq
@@ -1,5 +1,3 @@
-(faq:institutional)=
# Institutional FAQ
This page contains common questions from users of JupyterHub,
@@ -10,40 +8,34 @@ broken down by their roles within organizations.
### Is it appropriate for adoption within a larger institutional context?
Yes! JupyterHub has been used at-scale for large pools of users, as well
-as complex and high-performance computing.
-For example,
-- UC Berkeley uses
-JupyterHub for its Data Science Education Program courses (serving over
-3,000 students).
-- The Pangeo project uses JupyterHub to provide access
-to scalable cloud computing with Dask.
-JupyterHub is stable and customizable
+as complex and high-performance computing. For example, UC Berkeley uses
+JupyterHub for its Data Science Education Program courses (serving over
+3,000 students). The Pangeo project uses JupyterHub to provide access
+to scalable cloud computing with Dask. JupyterHub is stable customizable
to the use-cases of large organizations.
### I keep hearing about Jupyter Notebook, JupyterLab, and now JupyterHub. Whats the difference?
Here is a quick breakdown of these three tools:
- **The Jupyter Notebook** is a document specification (the `.ipynb`) file that interweaves
* **The Jupyter Notebook** is a document specification (the `.ipynb`) file that interweaves
narrative text with code cells and their outputs. It is also a graphical interface
that allows users to edit these documents. There are also several other graphical interfaces
that allow users to edit the `.ipynb` format (nteract, JupyterLab, Google Colab, Kaggle, etc.).
- **JupyterLab** is a flexible and extensible user interface for interactive computing. It
has several extensions that are tailored for using Jupyter Notebooks, as well as extensions
for other parts of the data science stack.
- **JupyterHub** is an application that manages interactive computing sessions for **multiple users**.
  It also connects users with infrastructure they wish to access. It can provide
  remote access to Jupyter Notebooks and JupyterLab for many people.
## For management
### Briefly, what problem does JupyterHub solve for us?
JupyterHub provides a shared platform for data science and collaboration.
It allows users to utilize familiar data science workflows (such as the scientific Python stack,
the R tidyverse, and Jupyter Notebooks) on institutional infrastructure. It also gives administrators
some control over access to resources, security, environments, and authentication.
### Is JupyterHub mature? Why should we trust it?
@@ -58,22 +50,22 @@ scalable infrastructure, large datasets, and high-performance computing.
JupyterHub is used at a variety of institutions in academia,
industry, and government research labs. It is most-commonly used by two kinds of groups:
- Small teams (e.g., data science teams, research labs, or collaborative projects) to provide a
shared resource for interactive computing, collaboration, and analytics.
- Large teams (e.g., a department, a large class, or a large group of remote users) to provide
access to organizational hardware, data, and analytics environments at scale.
Here is a sample of organizations that use JupyterHub:
- **Universities and colleges**: UC Berkeley, UC San Diego, Cal Poly SLO, Harvard University, University of Chicago,
  University of Oslo, University of Sheffield, Université Paris Sud, University of Versailles, University of Portland
- **Research laboratories**: NASA, NCAR, NOAA, the Large Synoptic Survey Telescope, Brookhaven National Lab,
  Minnesota Supercomputing Institute, ALCF, CERN, Lawrence Livermore National Laboratory, HUNT
- **Online communities**: Pangeo, Quantopian, mybinder.org, MathHub, Open Humans
- **Computing infrastructure providers**: NERSC, San Diego Supercomputing Center, Compute Canada
- **Companies**: Capital One, SANDVIK code, Globus
See the [Gallery of JupyterHub deployments](gallery-of-deployments) for
a more complete list of JupyterHub deployments at institutions.
### How does JupyterHub compare with hosted products, like Google Colaboratory, RStudio.cloud, or Anaconda Enterprise?
@@ -86,7 +78,7 @@ gives administrators more control over their setup and hardware.
Because JupyterHub is an open-source, community-driven tool, it can be extended and
modified to fit an institution's needs. It plays nicely with the open source data science
stack, and can serve a variety of computing environments, user interfaces, and
computational hardware. It can also be deployed anywhere - on enterprise cloud infrastructure, on
High-Performance-Computing machines, on local hardware, or even on a single laptop, which
is not possible with most other tools for shared interactive computing.
@@ -103,16 +95,17 @@ The most common way to set up a JupyterHub is to use a JupyterHub distribution,
and opinionated ways to set up a JupyterHub on particular kinds of infrastructure. The two distributions
that we currently suggest are:
- [Zero to JupyterHub for Kubernetes](https://z2jh.jupyter.org) is a scalable JupyterHub deployment and
guide that runs on Kubernetes. Better for larger or dynamic user groups (50-10,000) or more complex
compute/data needs.
- [The Littlest JupyterHub](https://tljh.jupyter.org) is a lightweight JupyterHub that runs on a single
  machine (in the cloud or under your desk). Better for smaller user groups (4-80) or more
lightweight computational resources.
### Does JupyterHub run well in the cloud?
**Yes** - most deployments of JupyterHub are run via cloud infrastructure and on a variety of cloud providers.
Depending on the distribution of JupyterHub that you'd like to use, you can also connect your JupyterHub
deployment with a number of other cloud-native services so that users have access to other resources from
their interactive computing sessions.
@@ -126,15 +119,14 @@ as more resources are needed - allowing you to utilize the benefits of a flexibl
### Is JupyterHub secure?
The short answer: yes.
JupyterHub as a standalone application has been battle-tested at an institutional
level for several years, and makes a number of "default" security decisions that are reasonable for most
users.
- For security considerations in the base JupyterHub application,
  [see the JupyterHub security page](explanation:security).
- For security considerations when deploying JupyterHub on Kubernetes, see the
  [JupyterHub on Kubernetes security page](https://z2jh.jupyter.org/en/latest/security.html).
The longer answer: it depends on your deployment. Because JupyterHub is very flexible, it can be used
in a variety of deployment setups. This often entails connecting your JupyterHub to **other** infrastructure
@@ -142,16 +134,18 @@ in a variety of deployment setups. This often entails connecting your JupyterHub
in these cases, and the security of your JupyterHub deployment will often depend on these decisions.
If you are worried about security, don't hesitate to reach out to the JupyterHub community in the
[Jupyter Community Forum](https://discourse.jupyter.org/c/jupyterhub/10). This community of practice has many
individuals with experience running secure JupyterHub deployments and will be very glad to help you out.
### Does JupyterHub provide computing or data infrastructure?
**No** - JupyterHub manages user sessions and can _control_ computing infrastructure, but it does not provide these
things itself. You are expected to run JupyterHub on your own infrastructure (local or in the cloud). Moreover,
JupyterHub has no internal concept of "data", but is designed to be able to communicate with data repositories
(again, either locally or remotely) for use within interactive computing sessions.
### How do I manage users?
JupyterHub offers a few options for managing your users. Upon setting up a JupyterHub, you can choose what
@@ -160,7 +154,7 @@ email address, or choose a username / password when they first log-in, or offloa
another service such as an organization's OAuth.
The users of a JupyterHub are stored locally, and can be modified manually by an administrator of the JupyterHub.
Moreover, the _active_ users on a JupyterHub can be found on the administrator's page. This page
gives you the ability to stop or restart kernels, inspect user filesystems, and even take over user
sessions to assist them with debugging.
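As a hedged illustration (not the only option), user lists can be managed directly in the configuration file; the usernames below are placeholders, and `allowed_users` assumes JupyterHub ≥ 1.2 (older releases called it `whitelist`):

```python
# jupyterhub_config.py -- a minimal sketch of local user management
c.Authenticator.allowed_users = {"alice", "bob"}  # who may log in
c.Authenticator.admin_users = {"alice"}  # who can use the admin page
```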
@@ -188,11 +182,12 @@ connect with other infrastructure tools (like Dask or Spark). This allows users
scalable or high-performance resources from within their JupyterHub sessions. The logic of
how those resources are controlled is taken care of by the non-JupyterHub application.
### Can JupyterHub be used with my high-performance computing resources?
Yes - JupyterHub can provide access to many kinds of computing infrastructure.
Especially when combined with other open-source schedulers such as Dask, you can manage fairly
complex computing infrastructures from the interactive sessions of a JupyterHub. For example
[see the Dask HPC page](https://docs.dask.org/en/latest/setup/hpc.html).
### How many resources do user sessions take?
@@ -200,8 +195,8 @@ complex computing infrastructures from the interactive sessions of a JupyterHub.
This is highly configurable by the administrator. If you wish for your users to have simple
data analytics environments for prototyping and light data exploring, you can restrict their
memory and CPU based on the resources that you have available. If you'd like your JupyterHub
to serve as a gateway to high-performance computing or data resources, you may increase the
resources available on user machines, or connect them with computing infrastructure elsewhere.
### Can I customize the look and feel of a JupyterHub?
@@ -223,14 +218,16 @@ the technologies your JupyterHub will use (e.g., dev-ops knowledge with cloud co
In general, the base JupyterHub deployment is not the bottleneck for setup; the effort lies in connecting
your JupyterHub with the various services and tools that you wish to provide to your users.
### How well does JupyterHub scale? What are JupyterHub's limitations?
JupyterHub works well at both a small scale (e.g., a single VM or machine) and at a
high scale (e.g., a scalable Kubernetes cluster). It can be used for teams as small as 2, and
for user bases as large as 10,000. The scalability of JupyterHub largely depends on the
infrastructure on which it is deployed. JupyterHub has been designed to be lightweight and
flexible, so you can tailor your JupyterHub deployment to your needs.
### Is JupyterHub resilient? What happens when a machine goes down?
For JupyterHubs that are deployed in a containerized environment (e.g., Kubernetes), it is
@@ -258,7 +255,7 @@ share their results with one another.
JupyterHub also provides a computational framework to share computational narratives between
different levels of an organization. For example, data scientists can share Jupyter Notebooks
rendered as [Voilà dashboards](https://voila.readthedocs.io/en/stable/) with those who are not
familiar with programming, or create publicly-available interactive analyses to allow others to
interact with your work.

View File

@@ -11,7 +11,7 @@ This section will help you with basic proxy and network configuration to:
The Proxy's main IP address setting determines where JupyterHub is available to users.
By default, JupyterHub is configured to be available on all network interfaces
(`''`) on port 8000. _Note_: Use of `'*'` is discouraged for IP configuration;
instead, use of `'0.0.0.0'` is preferred.
Changing the Proxy's main IP address and port can be done with the following
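The example block is elided in this view; the settings it refers to are `c.JupyterHub.ip` and `c.JupyterHub.port`, along the lines of this sketch (the address and port are illustrative):

```python
# jupyterhub_config.py -- illustrative values
c.JupyterHub.ip = '10.0.1.2'  # bind a specific interface instead of all interfaces
c.JupyterHub.port = 443  # serving on 443 implies SSL is configured
```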
@@ -41,9 +41,9 @@ port.
## Set the Proxy's REST API communication URL (optional)
By default, the proxy's REST API listens on port 8001 of `localhost` only.
The Hub service talks to the proxy via a REST API on this secondary port.
The REST API URL (hostname and port) can be configured separately to override the default settings.
### Set api_url
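The example is cut off in this view; a representative setting (the host and port are illustrative) looks like:

```python
c.ConfigurableHTTPProxy.api_url = 'http://127.0.0.1:5432'
```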
@@ -74,7 +74,7 @@ The Hub service listens only on `localhost` (port 8081) by default.
The Hub needs to be accessible from both the proxy and all Spawners.
When spawning local servers, an IP address setting of `localhost` is fine.
If _either_ the Proxy _or_ (more likely) the Spawners will be remote or
isolated in containers, the Hub must listen on an IP that is accessible.
```python
@@ -82,20 +82,20 @@ c.JupyterHub.hub_ip = '10.0.1.4'
c.JupyterHub.hub_port = 54321
```
**Added in 0.8:** The `c.JupyterHub.hub_connect_ip` setting is the IP address or
hostname that other services should use to connect to the Hub. A common
configuration for, e.g. docker, is:
```python
c.JupyterHub.hub_ip = '0.0.0.0' # listen on all interfaces
c.JupyterHub.hub_connect_ip = '10.0.1.4'  # IP as seen on the docker network. Can also be a hostname.
```
## Adjusting the hub's URL
The hub will most commonly be running on a hostname of its own. If it
is not (for example, if the hub is being reverse-proxied and
exposed at a URL such as `https://proxy.example.org/jupyter/`), then
you will need to tell JupyterHub the base URL of the service. In such
a case, it is both necessary and sufficient to set
`c.JupyterHub.base_url = '/jupyter/'` in the configuration.

View File

@@ -0,0 +1,261 @@
Security settings
=================
.. important::
You should not run JupyterHub without SSL encryption on a public network.
Security is the most important aspect of configuring Jupyter. Three
configuration settings are central to securing a JupyterHub deployment:
1. :ref:`SSL encryption <ssl-encryption>` (to enable HTTPS)
2. :ref:`Cookie secret <cookie-secret>` (a key for encrypting browser cookies)
3. Proxy :ref:`authentication token <authentication-token>` (used for the Hub and
other services to authenticate to the Proxy)
The Hub hashes all secrets (e.g., auth tokens) before storing them in its
database. A loss of control over read-access to the database should have
minimal impact on your deployment; if your database has been compromised, it
is still a good idea to revoke existing tokens.
.. _ssl-encryption:
Enabling SSL encryption
-----------------------
Since JupyterHub includes authentication and allows arbitrary code execution,
you should not run it without SSL (HTTPS).
Using an SSL certificate
~~~~~~~~~~~~~~~~~~~~~~~~
This will require you to obtain an official, trusted SSL certificate or create a
self-signed certificate. Once you have obtained and installed a key and
certificate you need to specify their locations in the ``jupyterhub_config.py``
configuration file as follows:
.. code-block:: python
c.JupyterHub.ssl_key = '/path/to/my.key'
c.JupyterHub.ssl_cert = '/path/to/my.cert'
Some cert files also contain the key, in which case only the cert is needed. It
is important that these files be put in a secure location on your server, where
they are not readable by regular users.
If you are using a **chain certificate**, see also chained certificate for SSL
in the JupyterHub `Troubleshooting FAQ <../troubleshooting.html>`_.
Using letsencrypt
~~~~~~~~~~~~~~~~~
It is also possible to use `letsencrypt <https://letsencrypt.org/>`_ to obtain
a free, trusted SSL certificate. If you run letsencrypt using the default
options, the needed configuration is (replace ``mydomain.tld`` by your fully
qualified domain name):
.. code-block:: python
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/{mydomain.tld}/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/{mydomain.tld}/fullchain.pem'
If the fully qualified domain name (FQDN) is ``example.com``, the following
would be the needed configuration:
.. code-block:: python
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/example.com/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/example.com/fullchain.pem'
If SSL termination happens outside of the Hub
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In certain cases, for example if the hub is running behind a reverse proxy, and
`SSL termination is being provided by NGINX <https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/>`_,
it is reasonable to run the hub without SSL.
To achieve this, simply omit the configuration settings
``c.JupyterHub.ssl_key`` and ``c.JupyterHub.ssl_cert``
(setting them to ``None`` does not have the same effect, and is an error).
.. _authentication-token:
Proxy authentication token
--------------------------
The Hub authenticates its requests to the Proxy using a secret token that
the Hub and Proxy agree upon. Note that this applies to the default
``ConfigurableHTTPProxy`` implementation. Not all proxy implementations
use an auth token.
The value of this token should be a random string (for example, generated by
``openssl rand -hex 32``). You can store it in the configuration file or an
environment variable.
Generating and storing token in the configuration file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can set the value in the configuration file, ``jupyterhub_config.py``:
.. code-block:: python
c.ConfigurableHTTPProxy.api_token = 'abc123...' # any random string
Generating and storing as an environment variable
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can pass the value of the proxy authentication token to the Hub and Proxy
using the ``CONFIGPROXY_AUTH_TOKEN`` environment variable:
.. code-block:: bash
export CONFIGPROXY_AUTH_TOKEN=$(openssl rand -hex 32)
This environment variable needs to be visible to the Hub and Proxy.
Default if token is not set
~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you don't set the Proxy authentication token, the Hub will generate a random
key itself, which means that any time you restart the Hub you **must also
restart the Proxy**. If the proxy is a subprocess of the Hub, this should happen
automatically (this is the default configuration).
.. _cookie-secret:
Cookie secret
-------------
The cookie secret is an encryption key, used to encrypt the browser cookies
which are used for authentication. Three common methods are described for
generating and configuring the cookie secret.
Generating and storing as a cookie secret file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The cookie secret should be 32 random bytes, encoded as hex, and is typically
stored in a ``jupyterhub_cookie_secret`` file. An example command to generate the
``jupyterhub_cookie_secret`` file is:
.. code-block:: bash
openssl rand -hex 32 > /srv/jupyterhub/jupyterhub_cookie_secret
In most deployments of JupyterHub, you should point this to a secure location on
the file system, such as ``/srv/jupyterhub/jupyterhub_cookie_secret``.
The location of the ``jupyterhub_cookie_secret`` file can be specified in the
``jupyterhub_config.py`` file as follows:
.. code-block:: python
c.JupyterHub.cookie_secret_file = '/srv/jupyterhub/jupyterhub_cookie_secret'
If the cookie secret file doesn't exist when the Hub starts, a new cookie
secret is generated and stored in the file. The file must not be readable by
``group`` or ``other`` or the server won't start. The recommended permissions
for the cookie secret file are ``600`` (owner-only rw).
Generating and storing as an environment variable
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you would like to avoid the need for files, the value can be loaded in the
Hub process from the ``JPY_COOKIE_SECRET`` environment variable, which is a
hex-encoded string. You can set it this way:
.. code-block:: bash
export JPY_COOKIE_SECRET=$(openssl rand -hex 32)
For security reasons, this environment variable should only be visible to the
Hub. If you set it dynamically as above, all users will be logged out each time
the Hub starts.
Generating and storing as a binary string
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can also set the cookie secret in the configuration file
itself, ``jupyterhub_config.py``, as a binary string:
.. code-block:: python
c.JupyterHub.cookie_secret = bytes.fromhex('64 CHAR HEX STRING')
.. important::
If the cookie secret value changes for the Hub, all single-user notebook
servers must also be restarted.
.. _cookies:
Cookies used by JupyterHub authentication
-----------------------------------------
The following cookies are used by the Hub for handling user authentication.
This section was created based on this post_ from Discourse.
.. _post: https://discourse.jupyter.org/t/how-to-force-re-login-for-users/1998/6
jupyterhub-hub-login
~~~~~~~~~~~~~~~~~~~~
This is the login token used when visiting Hub-served pages that are
protected by authentication such as the main home, the spawn form, etc.
If this cookie is set, then the user is logged in.
Resetting the Hub cookie secret effectively revokes this cookie.
This cookie is restricted to the path ``/hub/``.
jupyterhub-user-<username>
~~~~~~~~~~~~~~~~~~~~~~~~~~
This is the cookie used for authenticating with a single-user server.
It is set by the single-user server after OAuth with the Hub.
Effectively the same as ``jupyterhub-hub-login``, but for the
single-user server instead of the Hub. It contains an OAuth access token,
which is checked with the Hub to authenticate the browser.
Each OAuth access token is associated with a session id (see ``jupyterhub-session-id`` section
below).
To avoid hitting the Hub on every request, the authentication response
is cached. To avoid a stale cache, the cache key is composed of both
the token and the session id.
Resetting the Hub cookie secret effectively revokes this cookie.
This cookie is restricted to the path ``/user/<username>``, so that
only the user's server receives it.
jupyterhub-session-id
~~~~~~~~~~~~~~~~~~~~~
This is a random string, meaningless in itself, and the only cookie
shared by the Hub and single-user servers.
Its sole purpose is to coordinate logout of the multiple OAuth cookies.
This cookie is set to ``/`` so all endpoints can receive it, or clear it, etc.
jupyterhub-user-<username>-oauth-state
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A short-lived cookie, used solely to store and validate OAuth state.
It is only set while OAuth between the single-user server and the Hub
is processing.
If you use your browser development tools, you should see this cookie
for a very brief moment before you are logged in,
with an expiration date shorter than ``jupyterhub-hub-login`` or
``jupyterhub-user-<username>``.
This cookie should not exist after you have successfully logged in.
This cookie is restricted to the path ``/user/<username>``, so that only
the user's server receives it.

View File

@@ -1,10 +1,8 @@
(tutorial:services)=
# External services
When working with JupyterHub, a **Service** is defined as a process
that interacts with the Hub's REST API. A Service may perform a specific
action or task. For example, shutting down individuals' single user
notebook servers that have been idle for some time is a good example of
a task that could be automated by a Service. Let's look at how the
[jupyterhub_idle_culler][] script can be used as a Service.
@@ -16,41 +14,41 @@ document will:
- explain some basic information about API tokens
- clarify that API tokens can be used to authenticate to
  single-user servers as of [version 0.8.0](changelog)
- show how the [jupyterhub_idle_culler][] script can be:
  - used in a Hub-managed service
  - run as a standalone script
Both examples for `jupyterhub_idle_culler` will communicate tasks to the
Hub via the REST API.
## API Token basics
### Step 1: Generate an API token
To run such an external service, an API token must be created and
provided to the service.
As of [version 0.6.0](changelog), the preferred way of doing
this is to first generate an API token:
```bash
openssl rand -hex 32
```
In [version 0.8.0](changelog), a TOKEN request page for
generating an API token is available from the JupyterHub user interface:
![Request API TOKEN page](/images/token-page.png)

![API TOKEN success page](/images/token-request-success.png)
### Step 2: Pass environment variable with token to the Hub
In the case of `cull_idle_servers`, it is passed as the environment
variable called `JUPYTERHUB_API_TOKEN`.
### Step 3: Use API tokens for services and tasks that require external access
While API tokens are often associated with a specific user, API tokens
can be used by services that require external access for activities
@@ -64,12 +62,12 @@ c.JupyterHub.services = [
]
```
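The hunk above elides the service entry itself; a hedged sketch of the shape such an entry typically takes (the service name is a placeholder, and the token is read from the environment rather than hard-coded):

```python
import os

c.JupyterHub.services = [
    {
        "name": "my-external-service",  # placeholder name
        "api_token": os.environ["JUPYTERHUB_API_TOKEN"],  # token generated above
    }
]
```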
### Step 4: Restart JupyterHub
Upon restarting JupyterHub, you should see a message like below in the
logs:
```none
Adding API token for <username>
```
@@ -80,55 +78,34 @@ single-user servers, and only cookies can be used for authentication.
0.8 supports using JupyterHub API tokens to authenticate to single-user
servers.
## How to configure the idle culler to run as a Hub-Managed Service
### Step 1: Install the idle culler
```
pip install jupyterhub-idle-culler
```
### Step 2: In `jupyterhub_config.py`, add the following dictionary for the `idle-culler` Service to the `c.JupyterHub.services` list:
```python
c.JupyterHub.services = [
    {
        "name": "idle-culler",
        "command": [sys.executable, "-m", "jupyterhub_idle_culler", "--timeout=3600"],
    }
]

c.JupyterHub.load_roles = [
    {
        "name": "list-and-cull",  # name the role
        "services": [
            "idle-culler",  # assign the service to this role
        ],
        "scopes": [
            # declare what permissions the service should have
            "list:users",  # list users
            "read:users:activity",  # read user last-activity
            "admin:servers",  # start/stop servers
        ],
    }
]
```
where:
- `command` indicates that the Service will be launched as a
subprocess, managed by the Hub.
```{versionchanged} 2.0
Prior to 2.0, the idle-culler required 'admin' permissions.
It now needs the scopes:
- `list:users` to access the user list endpoint
- `read:users:activity` to read activity info
- `admin:servers` to start/stop servers
```
## How to run `cull-idle` manually as a standalone script
Now you can run your script by providing it
the API token and it will authenticate through the REST API to
@@ -137,8 +114,7 @@ interact with it.
This will run the idle culler service manually. It can be run as a standalone
script anywhere with access to the Hub, and will periodically check for idle
servers and shut them down via the Hub's REST API. In order to shut down the
servers, the token given to `cull-idle` must have permission to list users
and admin their servers.
Generate an API token and store it in the `JUPYTERHUB_API_TOKEN` environment
variable. Run `jupyterhub_idle_culler` manually.
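For example, a sketch of scripting that manual invocation (the token and Hub API URL are placeholders; `--url` and `--timeout` are standard `jupyterhub-idle-culler` options):

```python
import os
import subprocess
import sys

env = dict(os.environ)
env["JUPYTERHUB_API_TOKEN"] = "<token from openssl rand -hex 32>"  # placeholder

subprocess.run(
    [
        sys.executable, "-m", "jupyterhub_idle_culler",
        "--url=http://127.0.0.1:8081/hub/api",  # assumed Hub API URL
        "--timeout=3600",
    ],
    env=env,
    check=True,
)
```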

View File

@@ -1,14 +1,12 @@
(spawners)=
# Spawners and single-user notebook servers
A Spawner starts each single-user notebook server. Since the single-user server is an instance of `jupyter notebook`, an entire separate
multi-process application, many aspects of that server can be configured and there are a lot
of ways to express that configuration.
At the JupyterHub level, you can set some values on the Spawner. The simplest of these is
`Spawner.notebook_dir`, which lets you set the root directory for a user's server. This root
notebook directory is the highest-level directory users will be able to access in the notebook
dashboard. In this example, the root notebook directory is set to `~/notebooks`, where `~` is
expanded to the user's home directory.
@@ -16,13 +14,13 @@ expanded to the user's home directory.
c.Spawner.notebook_dir = '~/notebooks'
```
You can also specify extra command-line arguments to the notebook server with:
```python
c.Spawner.args = ['--debug', '--profile=PHYS131']
```
This could be used to set the user's default page for the single-user server:
```python
c.Spawner.args = ['--NotebookApp.default_url=/notebooks/Welcome.ipynb']

View File

@@ -1,128 +0,0 @@
(howto:api-only)=
# Deploying JupyterHub in "API only mode"
As a service for deploying and managing Jupyter servers for users, JupyterHub
exposes this functionality _primarily_ via a [REST API](rest).
For convenience, JupyterHub also ships with a _basic_ web UI built using that REST API.
The basic web UI enables users to click a button to quickly start and stop their servers,
and it lets admins perform some basic user and server management tasks.
The REST API has always provided additional functionality beyond what is available in the basic web UI.
Similarly, we avoid implementing UI functionality that is not also available via the API.
With JupyterHub 2.0, the basic web UI will **always** be composed using the REST API.
In other words, no UI pages should rely on information not available via the REST API.
Previously, some admin UI functionality could only be achieved via admin pages,
such as paginated requests.
## Limited UI customization via templates
The JupyterHub UI is customizable via extensible HTML [templates](templates),
but there is limited scope for what can be customized.
Adding some content and messages to existing pages is well supported,
but changing the page flow and what pages are available is beyond the scope of what is customizable.
## Rich UI customization with REST API based apps
Increasingly, JupyterHub is used purely as an API for managing Jupyter servers
for other Jupyter-based applications that might want to present a different user experience.
If you want a fully customized user experience,
you can now disable the Hub UI and use your own pages together with the JupyterHub REST API
to build your own web application to serve your users,
relying on the Hub only as an API for managing users and servers.
One example of such an application is [BinderHub][], which powers https://mybinder.org,
and motivates many of these changes.
BinderHub is distinct from a traditional JupyterHub deployment
because it uses temporary users created for each launch.
Instead of presenting a login page,
users are presented with a form to specify what environment they would like to launch:
![Binder launch form](../images/binderhub-form.png)
When a launch is requested:
1. an image is built, if necessary
2. a temporary user is created,
3. a server is launched for that user, and
4. when running, users are redirected to an already running server with an auth token in the URL
5. after the session is over, the user is deleted
This means that a lot of JupyterHub's UI flow doesn't make sense:
- there is no way for users to login
- the human user doesn't map onto a JupyterHub `User` in a meaningful way
- when a server isn't running, there isn't a 'restart your server' action available because the user has been deleted
- users do not have any access to any Hub functionality, so presenting pages for those features would be confusing
BinderHub is one of the motivating use cases for JupyterHub supporting being used _only_ via its API.
We'll use BinderHub here as an example of various configuration options.
[binderhub]: https://binderhub.readthedocs.io
## Disabling Hub UI
`c.JupyterHub.hub_routespec` is a configuration option to specify which URL prefix should be routed to the Hub.
The default is `/` which means that the Hub will receive all requests not already specified to be routed somewhere else.
There are three values that are most logical for `hub_routespec`:
- `/` - this is the default, and used in most deployments.
It is also the only option prior to JupyterHub 1.4.
- `/hub/` - this serves only Hub pages, both UI and API
- `/hub/api` - this serves _only the Hub API_, so all Hub UI is disabled,
aside from the OAuth confirmation page, if used.
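For example, the API-only setup described below comes down to a one-line setting (a sketch, assuming JupyterHub ≥ 1.4):

```python
c.JupyterHub.hub_routespec = "/hub/api"
```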
If you choose a hub routespec other than `/`,
the main JupyterHub feature you will lose is the automatic handling of requests for `/user/:username`
when the requested server is not running.
JupyterHub's handling of this request shows this page,
telling you that the server is not running,
with a button to launch it again:
![screenshot of hub page for server not running](../images/server-not-running.png)
If you set `hub_routespec` to something other than `/`,
it is likely that you also want to register another destination for `/` to handle requests to not-running servers.
If you don't, you will see a default 404 page from the proxy:
![screenshot of CHP default 404](../images/chp-404.png)
For mybinder.org, the default "start my server" page doesn't make sense,
because when a server is gone, there is no restart action.
Instead, we provide hints about how to get back to a link to start a _new_ server:
![screenshot of mybinder.org 404](../images/binder-404.png)
To achieve this, mybinder.org registers a route for `/` that goes to a custom endpoint
that runs nginx and only serves this static HTML error page.
This is set with
```python
c.Proxy.extra_routes = {
"/": "http://custom-404-entpoint/",
}
```
You may want to use an alternate behavior, such as redirecting to a landing page,
or taking some other action based on the requested page.
If you use `c.JupyterHub.hub_routespec = "/hub/"`,
then all the Hub pages will be available,
and only this default-page-404 issue will come up.
If you use `c.JupyterHub.hub_routespec = "/hub/api/"`,
then only the Hub _API_ will be available,
and all UI will be up to you.
mybinder.org takes this last option,
because none of the Hub UI pages really make sense.
Binder users don't have any reason to know or care that JupyterHub happens
to be an implementation detail of how their environment is managed.
Seeing Hub error pages and messages in that situation is more likely to be confusing than helpful.
:::{versionadded} 1.4
`c.JupyterHub.hub_routespec` and `c.Proxy.extra_routes` are new in JupyterHub 1.4.
:::

View File

@@ -1,265 +0,0 @@
(howto:config:user-env)=
# Configuring user environments
To deploy JupyterHub means you are providing Jupyter notebook environments for
multiple users. Often, this includes a desire to configure the user
environment in a custom way.
Since the `jupyterhub-singleuser` server extends the standard Jupyter notebook
server, most configuration and documentation that applies to Jupyter Notebook
applies to the single-user environments. Configuration of user environments
typically does not occur through JupyterHub itself, but rather through system-wide
configuration of Jupyter, which is inherited by `jupyterhub-singleuser`.
**Tip:** When searching for configuration tips for JupyterHub user environments, you might want to remove JupyterHub from your search: there are a lot more people out there configuring Jupyter than JupyterHub, and the configuration is the same.
This section will focus on user environments, which includes the following:
- [Installing packages](#installing-packages)
- [Configuring Jupyter and IPython](#configuring-jupyter-and-ipython)
- [Installing kernelspecs](#installing-kernelspecs)
- [Using containers vs. multi-user hosts](#multi-user-hosts-vs-containers)
## Installing packages
To make packages available to users, you will typically install packages system-wide or in a shared environment.
This installation location should always be the same environment in which
`jupyterhub-singleuser` itself is installed, and must be _readable and
executable_ by your users. If you want your users to be able to install additional
packages, the installation location must also be _writable_ by your users.
If you are using a standard Python installation on your system, use the following command:
```bash
sudo python3 -m pip install numpy
```
to install the numpy package in the default Python 3 environment on your system
(typically `/usr/local`).
You may also use conda to install packages. If you do, you should make sure
that the conda environment has appropriate permissions for users to be able to
run Python code in the env. The env must be _readable and executable_ by all
users. Additionally, it must be _writable_ if you want users to install
additional packages.
## Configuring Jupyter and IPython
[Jupyter](https://jupyter-notebook.readthedocs.io/en/stable/configuring/config_overview.html)
and [IPython](https://ipython.readthedocs.io/en/stable/development/config.html)
have their own configuration systems.
As a JupyterHub administrator, you will typically want to install and configure environments for all JupyterHub users. For example, let's say you wish for each student in a class to have the same user environment configuration.
Jupyter and IPython support **"system-wide"** locations for configuration, which is the logical place to put global configuration that you want to affect all users. It's generally more efficient to configure user environments "system-wide", and it's a good practice to avoid creating files in the users' home directories.
The typical locations for these config files are:
- **system-wide** in `/etc/{jupyter|ipython}`
- **env-wide** (environment wide) in `{sys.prefix}/etc/{jupyter|ipython}`.
### Jupyter environment configuration priority
When Jupyter runs in an environment (conda or virtualenv), it prefers to load configuration from the environment over each user's own configuration (e.g. in `~/.jupyter`).
This may cause issues if you use a _shared_ conda environment or virtualenv for users, because e.g. jupyterlab may try to write information like workspaces or settings to the environment instead of the user's own directory.
This could fail with something like `Permission denied: $PREFIX/etc/jupyter/lab`.
To avoid this issue, set `JUPYTER_PREFER_ENV_PATH=0` in the user environment:
```python
c.Spawner.environment.update(
{
"JUPYTER_PREFER_ENV_PATH": "0",
}
)
```
which tells Jupyter to prefer _user_ configuration paths (e.g. in `~/.jupyter`) to configuration set in the environment.
### Example: Enable an extension system-wide
For example, to enable the `cython` IPython extension for all of your users, create the file `/etc/ipython/ipython_config.py`:
```python
c.InteractiveShellApp.extensions.append("cython")
```
### Example: Enable a Jupyter notebook configuration setting for all users
:::{note}
These examples configure the Jupyter ServerApp, which is used by JupyterLab, the default in JupyterHub 2.0.
If you are using the classic Jupyter Notebook server,
the same things should work,
with the following substitutions:
- Search for `jupyter_server_config`, and replace with `jupyter_notebook_config`
- Search for `ServerApp`, and replace with `NotebookApp`
:::
To enable Jupyter notebook's internal idle-shutdown behavior (requires notebook ≥ 5.4), set the following in the `/etc/jupyter/jupyter_server_config.py` file:
```python
# shutdown the server after no activity for an hour
c.ServerApp.shutdown_no_activity_timeout = 60 * 60
# shutdown kernels after no activity for 20 minutes
c.MappingKernelManager.cull_idle_timeout = 20 * 60
# check for idle kernels every two minutes
c.MappingKernelManager.cull_interval = 2 * 60
```
## Installing kernelspecs
You may have multiple Jupyter kernels installed and want to make sure that they are available to all of your users. This means installing kernelspecs either system-wide (e.g. in /usr/local/) or in the `sys.prefix` of JupyterHub
itself.
Jupyter kernelspec installation is system-wide by default, but some kernels
may default to installing kernelspecs in your home directory. These will need
to be moved system-wide to ensure that they are accessible.
To see where your kernelspecs are, you can use the following command:
```bash
jupyter kernelspec list
```
### Example: Installing kernels system-wide
Let's assume that I have both a Python 2 and a Python 3 environment that I want to make available; I can install their specs **system-wide** (in /usr/local) using the following commands:
```bash
/path/to/python3 -m ipykernel install --prefix=/usr/local
/path/to/python2 -m ipykernel install --prefix=/usr/local
```
## Multi-user hosts vs. Containers
There are two broad categories of user environments that depend on what
Spawner you choose:
- Multi-user hosts (shared system)
- Container-based
How you configure user environments for each category can differ a bit
depending on what Spawner you are using.
The first category is a **shared system (multi-user host)** where
each user has a JupyterHub account and a home directory, and is
a real system user. In this case, shared configuration and installation
must be in a 'system-wide' location, such as `/etc/` or `/usr/local`,
or a custom prefix such as `/opt/conda`.
When JupyterHub uses **container-based** Spawners (e.g. KubeSpawner or
DockerSpawner), the 'system-wide' environment is really the container image used for users.
In both cases, you want to _avoid putting configuration in user home
directories_ because users can change those configuration settings. Also, home directories typically persist once they are created, thereby making it difficult for admins to update later.
## Named servers
By default, in a JupyterHub deployment, each user has one server only.
JupyterHub can, however, have multiple servers per user.
This is mostly useful in deployments where users can configure the environment in which their server will start (e.g. resource requests on an HPC cluster), so that a given user can have multiple configurations running at the same time, without having to stop and restart their own server.
To allow named servers, include this code snippet in your config file:
```python
c.JupyterHub.allow_named_servers = True
```
Named servers were implemented in the REST API in JupyterHub 0.8,
and JupyterHub 1.0 introduces UI for managing named servers via the user home page:
![named servers on the home page](/images/named-servers-home.png)
as well as the admin page:
![named servers on the admin page](/images/named-servers-admin.png)
Named servers can be accessed, created, started, stopped, and deleted
from these pages. Activity tracking is now per server as well.
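Named servers can also be driven through the REST API; here is a hedged sketch using `requests` (the Hub API URL, token, username, and server name are all placeholders):

```python
import requests

hub_api = "http://127.0.0.1:8081/hub/api"  # assumed Hub API URL
headers = {"Authorization": "token <api-token>"}  # placeholder token

# start a named server "gpu" for user "alice"
requests.post(f"{hub_api}/users/alice/servers/gpu", headers=headers)

# stop it again
requests.delete(f"{hub_api}/users/alice/servers/gpu", headers=headers)
```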
To limit the number of **named servers** per user to a constant value, include this code snippet in your config file:
```python
c.JupyterHub.named_server_limit_per_user = 5
```
Alternatively, to use a callable/awaitable based on the handler object, include this code snippet in your config file:
```python
def named_server_limit_per_user_fn(handler):
user = handler.current_user
if user and user.admin:
return 0
return 5
c.JupyterHub.named_server_limit_per_user = named_server_limit_per_user_fn
```
This can be useful for quota service implementations. The example above limits the number of named servers for non-admin users only.
If `named_server_limit_per_user` is set to `0`, no limit is enforced.
When using named servers, Spawners may need additional configuration to take the `servername` into account. Whilst `KubeSpawner` takes the `servername` into account by default in [`pod_name_template`](https://jupyterhub-kubespawner.readthedocs.io/en/latest/spawner.html#kubespawner.KubeSpawner.pod_name_template), other Spawners may not. Check the documentation for the specific Spawner to see how single-user servers are named; for example, in `DockerSpawner` this involves modifying the [`name_template`](https://jupyterhub-dockerspawner.readthedocs.io/en/latest/api/index.html) setting to include `servername`, e.g. `"{prefix}-{username}-{servername}"`.
(classic-notebook-ui)=
## Switching back to the classic notebook
By default, the single-user server launches JupyterLab,
which is based on [Jupyter Server][].
This is the default server when running JupyterHub ≥ 2.0.
To switch to using the legacy Jupyter Notebook server (notebook < 7.0), you can set the `JUPYTERHUB_SINGLEUSER_APP` environment variable
(in the single-user environment) to:
```bash
export JUPYTERHUB_SINGLEUSER_APP='notebook.notebookapp.NotebookApp'
```
:::{note}
```
JUPYTERHUB_SINGLEUSER_APP='notebook.notebookapp.NotebookApp'
```
is only valid for notebook < 7. notebook v7 is based on jupyter-server,
and the default jupyter-server application must be used.
Selecting the new notebook UI is no longer a matter of selecting the server app to launch,
but only the default URL for users to visit.
To use notebook v7 with JupyterHub, leave the default singleuser app config alone (or specify `JUPYTERHUB_SINGLEUSER_APP=jupyter-server`) and set the default _URL_ for user servers:
```python
c.Spawner.default_url = '/tree/'
```
:::
[jupyter server]: https://jupyter-server.readthedocs.io
[jupyter notebook]: https://jupyter-notebook.readthedocs.io
:::{versionchanged} 2.0
JupyterLab is now the default single-user UI, if available,
which is based on the [Jupyter Server][],
no longer the legacy [Jupyter Notebook][] server.
JupyterHub prior to 2.0 launched the legacy notebook server (`jupyter notebook`),
and the Jupyter server could be selected by specifying the following:
```python
# jupyterhub_config.py
c.Spawner.cmd = ["jupyter-labhub"]
```
Alternatively, for an otherwise customized Jupyter Server app,
set the environment variable using the following command:
```bash
export JUPYTERHUB_SINGLEUSER_APP='jupyter_server.serverapp.ServerApp'
```
:::

View File

@@ -1,130 +0,0 @@
# Logging users in via URL
Sometimes, JupyterHub is integrated into an existing application that has already handled user login, etc.
It is often preferable in these applications to be able to link users to their running JupyterHub server without _prompting_ the user to log in again with the Hub, since the Hub should really be an implementation detail,
and not part of the user experience.
One way to do this has been to use [API only mode](#howto:api-only), issue tokens for users, and redirect users to a URL like `/user/name/?token=abc123`.
This is [disabled by default](#HubAuth.allow_token_in_url) in JupyterHub 5, because it presents a vulnerability for users to craft links that let _other_ users login as them, which can lead to inter-user attacks.
But that leaves the question: how do I as an _application developer_ embedding JupyterHub link users to their own running server without triggering another login prompt?
The problem with `?token=...` in the URL is specifically that _users_ can get and create these tokens, and share URLs.
This wouldn't be an issue if only authorized applications could issue tokens that behave this way.
The single-user server doesn't exactly have the hooks to manage this easily, but the [Authenticator](#Authenticator) API does.
## Problem statement
We want our external application to be able to:
1. authenticate users
2. (maybe) create JupyterHub users
3. start JupyterHub servers
4. redirect users into running servers _without_ any login prompts/loading pages from JupyterHub, and without any prior JupyterHub credentials
Step 1 is up to the application and not JupyterHub's problem.
Step 2 and 3 use the JupyterHub [REST API](#jupyterhub-rest-API).
The service would need the scopes:
```
admin:users # creating users
servers # start/stop servers
```
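A hedged sketch of steps 2 and 3 against the REST API (the Hub API URL, token, and username are placeholders; the endpoints are the standard `POST /users/:name` and `POST /users/:name/server`):

```python
import requests

hub_api = "http://127.0.0.1:8081/hub/api"  # assumed Hub API URL
headers = {"Authorization": "token <service-api-token>"}  # placeholder

# step 2: create the JupyterHub user if it doesn't already exist (needs admin:users)
requests.post(f"{hub_api}/users/someuser", headers=headers)

# step 3: start their default server (needs the servers scope)
requests.post(f"{hub_api}/users/someuser/server", headers=headers)
```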
That leaves the last step: sending users to their running server with credentials, without prompting login.
This is where things can get tricky!
### Ideal case: oauth
_Ideally_, the best way to set this up is with the external service as an OAuth provider,
though in some cases it works best to use proxy-based authentication like Shibboleth / [REMOTE_USER](https://github.com/cwaldbieser/jhub_remote_user_authenticator).
The main things to know are:
- Links to `/hub/user-redirect/some/path` will ultimately land users at `/user/theirname/some/path` after completing login, ensuring the server is running, etc.
- Setting `Authenticator.auto_login = True` allows beginning the login process without JupyterHub's "Login with..." prompt
_If_ your OAuth provider allows logging in to external services without prompting, this is enough.
Not all do, though.
If you've already ensured the server is running, this will _appear_ to the user as if they are being sent directly to their running server.
But what _actually_ happens is quite a series of redirects, state checks, and cookie-setting:
1. visiting `/hub/user-redirect/some/path` checks if the user is logged in
1. if not, begin the login process (`/hub/login?next=/hub/user-redirect/...`)
2. redirects to your oauth provider to authenticate the user
3. redirects back to `/hub/oauth_callback` to complete login
4. redirects back to `/hub/user-redirect/...`
2. once authenticated, checks that the user's server is running
1. if not running, begins launch of the server
2. redirects to `/hub/spawn-pending/?next=...`
3. once the server is running, redirects to the actual user server `/user/username/some/path`
Now we're done, right? Actually, no, because the browser doesn't have credentials for their user server!
This sequence of redirects happens all the time in JupyterHub launch, and is usually totally transparent.
4. at the user server, check for a token in cookie
1. if not present or not valid, begin oauth with the Hub (redirect to `/hub/api/oauth2/authorize/...`)
2. hub redirects back to `/user/username/oauth_callback` to complete oauth
3. redirect again to the URL that started this internal oauth
5. finally, arrive at `/user/username/some/path`, the ultimate destination, with valid JupyterHub credentials
The steps that will show users something other than the page you want them to are:
- Step 1.1 will be a prompt e.g. with "Login with..." unless you set `c.Authenticator.auto_login = True`
- Step 1.2 _may_ be a prompt from your oauth provider. This isn't controlled by JupyterHub, and may not be avoidable.
- Step 2.2 will show the spawn pending page only if the server is not already running
Otherwise, this is all transparent redirects to the final destination.
#### Using an authentication proxy (REMOTE_USER)
If you use an Authentication proxy like Shibboleth that sets e.g. the REMOTE_USER header,
you can use an Authenticator like [RemoteUserAuthenticator](https://github.com/cwaldbieser/jhub_remote_user_authenticator) to automatically login users based on headers in the request.
The same process will work, but instead of step 1.1 redirecting to the oauth provider, it logs in immediately.
If you do support an auth proxy, you also need to be extremely sure that requests only come from the auth proxy, and don't accept any requests setting the REMOTE_USER header coming from other sources.
### Custom case
But let's say you can't use OAuth or REMOTE_USER, and you still want to hide JupyterHub implementation details.
All you really want is a way to write a URL that will take users to their servers without any login prompts.
You can do this if you create an Authenticator with `auto_login=True` that logs users in based on something in the _request_, e.g. a query parameter.
We have an _example_ in the JupyterHub repo in `examples/forced-login` that does this.
It is a sample 'external service' where you type in a username and a destination path.
When you 'login' with this username:
1. a token is issued
2. the token is stored and associated with the username
3. redirect to `/hub/login?login_token=...&next=/hub/user-redirect/destination/path`
Then on the JupyterHub side, there is the `ForcedLoginAuthenticator`.
This class implements `authenticate`, which:
1. has `auto_login = True` so visiting `/hub/login` calls `authenticate()` directly instead of serving a page
2. gets the token from the `login_token` URL parameter
3. makes a POST request to the external application with the token, requesting a username
4. the external application returns the username and deletes the token, so it cannot be re-used
5. Authenticator returns the username
This doesn't _bypass_ JupyterHub authentication, as some deployments have done, but it does _hide_ it.
If your service launches servers via the API, you could run this in [API only mode](#howto:api-only) by adding `/hub/login` as well:
```python
c.JupyterHub.hub_routespec = "/hub/api/"
c.Proxy.additional_routes = {"/hub/login": "http://hub:8080"}
```
```{literalinclude} ../../../examples/forced-login/jupyterhub_config.py
:language: python
:start-at: class ForcedLoginAuthenticator
:end-before: c = get_config()
```
**Why does this work?**
This is still logging in with a token in the URL, right?
Yes, but the key difference is that users cannot issue these tokens.
The sample application is still technically vulnerable, because the token link should really be non-transferrable, even if it can only be used once.
The only defense the sample application has against this is rapidly expiring tokens (they expire after 30 seconds).
You can use state cookies, etc. to manage that more rigorously, as done in OAuth (at which point, maybe implement OAuth itself, why not?).

View File

@@ -1,34 +0,0 @@
# How-to
The _How-to_ guides provide practical step-by-step details to help you achieve a particular goal. They are useful when you are trying to get something done but require you to understand and adapt the steps to your specific use case.
Use the following guides when:
```{toctree}
:maxdepth: 1
api-only
proxy
rest
separate-proxy
templates
upgrading
log-messages
forced-login
```
(config-examples)=
## Configuration
The following guides provide examples, including configuration files and tips, for the
following:
```{toctree}
:maxdepth: 1
configuration/config-user-env
configuration/config-ghoauth
configuration/config-proxy
configuration/config-sudo
```

View File

@@ -1,74 +0,0 @@
(howto:log-messages)=
# Interpreting common log messages
When debugging errors and outages, looking at the logs emitted by
JupyterHub is very helpful. This document describes some common
log messages, what they mean, their most common causes, and some possible ways to fix them.
## Failing suspected API request to not-running server
### Example
Your logs might be littered with lines that look scary:
```
[W 2022-03-10 17:25:19.774 JupyterHub base:1349] Failing suspected API request to not-running server: /hub/user/<user-name>/api/metrics/v1
```
### Cause
This likely means that the user's server has stopped running but they
still have a browser tab open. For example, you might have 3 tabs open and shut
the server down via one of them.
Another possible reason is that you closed your laptop, the server was culled for inactivity, and then you reopened the laptop!
However, the client-side code (JupyterLab, Classic Notebook, etc.) doesn't know that the server has shut down and continues to make some API requests.
JupyterHub's architecture means that the proxy routes all requests that
don't go to a running user server to the hub process itself. The hub
process then explicitly returns a failure response, so the client knows
that the server is not running anymore. This is used by JupyterLab to
inform the user that the server is not running anymore, and provide an option
to restart it.
Most commonly, you'll see this in reference to the `/api/metrics/v1`
URL, used by [jupyter-resource-usage](https://github.com/jupyter-server/jupyter-resource-usage).
### Actions you can take
This log message is benign, and there is usually no action for you to take.
## JupyterHub Singleuser Version mismatch
### Example
```
jupyterhub version 1.5.0 != jupyterhub-singleuser version 1.3.0. This could cause failure to authenticate and result in redirect loops!
```
### Cause
JupyterHub requires the `jupyterhub` Python package to be installed in the image or
environment that the user server starts in. This message indicates that the version of
the `jupyterhub` package installed inside the user image or environment is not
the same as the version of the JupyterHub server itself. This is not always a
problem - some version drift is acceptable, and the only two known cases of
breakage are across the 0.7 and 2.0 version releases. In those cases, issues pop
up immediately after upgrading your version of JupyterHub, so **always check the JupyterHub
changelog before upgrading!** The primary problems this _could_ cause are:
1. Infinite redirect loops after the user server starts
2. Missing expected environment variables in the user server once it starts
3. Failure for the started user server to authenticate with the JupyterHub server -
note that this is _not_ the same as _user authentication_ failing!
However, for the most part, unless you are seeing these specific issues, the log
message should be counted as a warning to get the `jupyterhub` package versions
aligned, rather than as an indicator of an existing problem.
### Actions you can take
Upgrade the version of the `jupyterhub` package in your user environment or image
so that it matches the version of the JupyterHub server. If you
are using the [zero-to-jupyterhub](https://z2jh.jupyter.org) helm chart, you can find the appropriate
version of the `jupyterhub` package to install in your user image [here](https://hub.jupyter.org/helm-chart/).


@@ -1,383 +0,0 @@
(howto:rest-api)=
# Using JupyterHub's REST API
This section will give you information on:
- What you can do with the API
- How to create an API token
- Assigning permissions to a token
- Updating to admin services
- Making an API request programmatically using the requests library
- Paginating API requests
- Enabling users to spawn multiple named-servers via the API
- Learn more about JupyterHub's API
Before we discuss JupyterHub's REST API, you can learn about [REST APIs here](https://en.wikipedia.org/wiki/Representational_state_transfer). A REST
API provides a standard way for users to get and send information to the
Hub.
## What you can do with the API
Using the [JupyterHub REST API](jupyterhub-rest-api), you can perform actions on the Hub,
such as:
- Checking which users are active
- Adding or removing users
- Adding or removing services
- Stopping or starting single user notebook servers
- Authenticating services
- Communicating with an individual Jupyter server's REST API
## Create an API token
To send requests using the JupyterHub API, you must pass an API token with
the request.
While JupyterHub is running, any JupyterHub user can request a token via the `token` page.
This is accessible via a `token` link in the top nav bar from the JupyterHub home page,
or at the URL `/hub/token`.
:::{figure-md}
![token request page](../images/token-page.png)
JupyterHub's API token page
:::
:::{figure-md}
![token-request-success](../images/token-request-success.png)
JupyterHub's token page after successfully requesting a token.
:::
### Register API tokens via configuration
Sometimes, you'll want to pre-generate a token for access to JupyterHub,
typically for use by external services,
so that both JupyterHub and the service have access to the same value.
First, you need to generate a random secret.
A good way of generating an API token is by running:
```bash
openssl rand -hex 32
```
This `openssl` command generates a random token that can be added to the JupyterHub configuration in `jupyterhub_config.py`.
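If you prefer Python, the standard library's `secrets` module can generate an equivalent value:

```python
import secrets

# 32 bytes of randomness, hex-encoded -- equivalent to the openssl command above
print(secrets.token_hex(32))
```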
For external services, this would be registered with JupyterHub via configuration:
```python
c.JupyterHub.services = [
{
"name": "my-service",
"api_token": the_secret_value,
},
]
```
At this point, requests authenticated with the token will be associated with the service `my-service`.
```{note}
You can also load additional tokens for users via the `JupyterHub.api_tokens` configuration.
However, this option has been deprecated since the introduction of services.
```
## Assigning permissions to a token
Prior to JupyterHub 2.0, there were two levels of permissions:
1. user, and
2. admin
where a token would always have full permissions to do whatever its owner could do.
In JupyterHub 2.0,
specific permissions are now defined as '**scopes**',
and can be assigned both at the user/service level,
and at the individual token level.
The previous behavior is represented by the scope `inherit`,
and is still the default behavior for requesting a token if limited permissions are not specified.
This allows e.g. a user with full admin permissions to request a token with limited permissions.
In JupyterHub 5.0, you can specify scopes for a token when requesting it via the `/hub/token` page as a space-separated list.
In JupyterHub 3.0 and later, you can also request tokens with limited scopes via the JupyterHub API (provided you already have a token!):
```python
import json
from urllib.parse import quote
import requests
def request_token(
username, *, api_token, scopes=None, expires_in=0, hub_url="http://127.0.0.1:8081"
):
"""Request a new token for a user"""
request_body = {}
if expires_in:
request_body["expires_in"] = expires_in
if scopes:
request_body["scopes"] = scopes
url = hub_url.rstrip("/") + f"/hub/api/users/{quote(username)}/tokens"
r = requests.post(
url,
data=json.dumps(request_body),
headers={"Authorization": f"token {api_token}"},
)
if r.status_code >= 400:
# extract error message for nicer error messages
r.reason = r.json().get("message", r.text)
r.raise_for_status()
# response is a dict and will include the token itself in the 'token' field,
# as well as other fields about the token
return r.json()
request_token("myusername", scopes=["list:users"], api_token="abc123")
```
## Updating to admin services
```{note}
The `api_tokens` configuration has been softly deprecated since the introduction of services.
We have no plans to remove it,
but deployments are encouraged to use service configuration instead.
```
If you have been using `api_tokens` to create an admin user
and a token for that user to perform some automations, then
the services mechanism may be a better fit. Say you have the following configuration:
```python
c.JupyterHub.admin_users = {"service-admin"}
c.JupyterHub.api_tokens = {
"secret-token": "service-admin",
}
```
This can be updated to create a service, with the following configuration:
```python
c.JupyterHub.services = [
{
# give the token a name
"name": "service-admin",
"api_token": "secret-token",
# "admin": True, # if using JupyterHub 1.x
},
]
# roles were introduced in JupyterHub 2.0
# prior to 2.0, only "admin": True or False was available
c.JupyterHub.load_roles = [
{
"name": "service-role",
"scopes": [
# specify the permissions the token should have
"admin:users",
],
"services": [
# assign the service the above permissions
"service-admin",
],
}
]
```
The token will have the permissions listed in the role
(see [scopes][] for a list of available permissions),
but there will no longer be a user account created to house it.
The main noticeable difference between a user and a service is that there will be no notebook server associated with the account
and the service will not show up in the various user list pages and APIs.
## Make an API request
To authenticate your requests, pass the API token in the request's
Authorization header.
### Use requests
Using the popular Python [requests](https://requests.readthedocs.io)
library, an API GET request is made to [/users](rest-api-get-users), sending an API token for
authorization. The response contains information about the users. Here's example code to make an API request for the users of a JupyterHub deployment:
```python
import requests

api_url = 'http://127.0.0.1:8081/hub/api'
token = 'abc123'  # an API token, obtained as described above

r = requests.get(api_url + '/users',
    headers={
        'Authorization': f'token {token}',
    }
)
r.raise_for_status()
users = r.json()
```
This example provides a slightly more complicated request (to [/groups/formgrade-data301/users](rest-api-post-group-users)), yet the
process is very similar:
```python
import requests

api_url = 'http://127.0.0.1:8081/hub/api'

# add two users to the group `formgrade-data301`
data = {'users': ['user1', 'user2']}
r = requests.post(api_url + '/groups/formgrade-data301/users',
    headers={
        'Authorization': f'token {token}',
    },
    json=data,
)
r.raise_for_status()
r.json()
The same API token can also authorize access to the [Jupyter Notebook REST API][]
provided by notebook servers managed by JupyterHub if it has the necessary `access:servers` scope.
(api-pagination)=
## Paginating API requests
```{versionadded} 2.0
```
Pagination is available through the `offset` and `limit` query parameters on
list endpoints, which can be used to return ideally sized windows of results.
Here's example code demonstrating pagination on the [`GET /users`](rest-api-get-users)
endpoint to fetch the first 20 records:
```python
import requests

api_url = 'http://127.0.0.1:8081/hub/api'

r = requests.get(
    api_url + '/users?offset=0&limit=20',
    headers={
        "Accept": "application/jupyterhub-pagination+json",
        "Authorization": f"token {token}",
    },
)
r.raise_for_status()
r.json()
```
```
For backward-compatibility, the default structure of list responses is unchanged.
However, this lacks pagination information (e.g. whether there is a next page),
so if you have enough users that they won't fit in the first response,
it is a good idea to opt in to the new paginated list format.
There is a new schema for list responses that includes pagination information.
You can request this by including the header:
```
Accept: application/jupyterhub-pagination+json
```
with your request, in which case a response will look like:
```python
{
"items": [
{
"name": "username",
"kind": "user",
...
},
],
"_pagination": {
"offset": 0,
"limit": 20,
"total": 50,
"next": {
"offset": 20,
"limit": 20,
"url": "http://127.0.0.1:8081/hub/api/users?limit=20&offset=20"
}
}
}
```
where the list results (same as pre-2.0) will be in `items`,
and pagination info will be in `_pagination`.
The `next` field will include the `offset`, `limit`, and `url` for requesting the next page.
`next` will be `null` if there is no next page.
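Putting this together, here is a sketch of a loop that follows `next` links to fetch every user (assuming `token` holds a valid API token, as in the earlier examples):

```python
import requests

api_url = "http://127.0.0.1:8081/hub/api"
headers = {
    "Accept": "application/jupyterhub-pagination+json",
    "Authorization": f"token {token}",
}

users = []
url = api_url + "/users?limit=20"
while url:
    r = requests.get(url, headers=headers)
    r.raise_for_status()
    page = r.json()
    users.extend(page["items"])
    # `next` is null on the last page
    next_page = page["_pagination"]["next"]
    url = next_page["url"] if next_page else None
```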
Pagination is governed by two configuration options:
- `JupyterHub.api_page_default_limit` - the page size, if `limit` is unspecified in the request
and the new pagination API is requested
(default: 50)
- `JupyterHub.api_page_max_limit` - the maximum page size a request can ask for (default: 200)
Pagination is enabled on the `GET /users`, `GET /groups`, and `GET /proxy` REST endpoints.
## Enabling users to spawn multiple named-servers via the API
Support for multiple servers per user was introduced in JupyterHub [version 0.8](changelog).
Prior to that, each user could only launch a single default server via the API
like this:
```bash
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/server"
```
With the named-server functionality, it's now possible to launch more than one
specifically named server for a given user. This could be used, for instance,
to launch each server with a different image.
First, you must enable named-servers by including the following setting in the `jupyterhub_config.py` file:
`c.JupyterHub.allow_named_servers = True`
If you are using the [zero-to-jupyterhub-k8s](https://github.com/jupyterhub/zero-to-jupyterhub-k8s) set-up to run JupyterHub,
then instead of editing the `jupyterhub_config.py` file directly, you could pass
the following as part of the `config.yaml` file, as per the [tutorial](https://z2jh.jupyter.org/en/latest/):
```yaml
hub:
  extraConfig: |
    c.JupyterHub.allow_named_servers = True
```
With that setting in place, a new named-server is activated like this:
```{parsed-literal}
[POST /api/users/:username/servers/:servername](rest-api-post-user-server-name)
```
e.g.
```bash
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverA>"
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverB>"
```
The same servers can be [stopped](rest-api-delete-user-server-name) by substituting `DELETE` for `POST` above.
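The same operations can be scripted with `requests`; here is a sketch (the username `myuser` and the server names are illustrative, and `token` is an API token as above):

```python
import requests

api_url = "http://127.0.0.1:8081/hub/api"
headers = {"Authorization": f"token {token}"}

# start two named servers for the same user
for name in ("serverA", "serverB"):
    r = requests.post(f"{api_url}/users/myuser/servers/{name}", headers=headers)
    r.raise_for_status()

# stop (DELETE) one of them again
r = requests.delete(f"{api_url}/users/myuser/servers/serverA", headers=headers)
r.raise_for_status()
```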
### Some caveats for using named-servers
For named-servers via the API to work, the spawner used to spawn these servers
will need to be able to handle multiple servers per user and ensure
uniqueness of names, particularly if servers are spawned via Docker containers
or Kubernetes pods.
## Learn more about the API
You can see the full [JupyterHub REST API](jupyterhub-rest-api) for more details.
[openapi initiative]: https://www.openapis.org/
[jupyterhub rest api]: ./rest-api
[scopes]: ../rbac/scopes.md
[jupyter notebook rest api]: https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyter/notebook/HEAD/notebook/services/api/api.yaml

Some files were not shown because too many files have changed in this diff.