Compare commits
10 Commits
Author | SHA1 | Date | |
---|---|---|---|
![]() |
b1111363fd | ||
![]() |
6c99b807c2 | ||
![]() |
8d650f594e | ||
![]() |
04a0a3a2e5 | ||
![]() |
9cebfd6367 | ||
![]() |
587cd70221 | ||
![]() |
e94f5e043a | ||
![]() |
5456fb6356 | ||
![]() |
fb75b9a392 | ||
![]() |
90d341e6f7 |
21
.circleci/config.yml
Normal file
@@ -0,0 +1,21 @@
|
||||
# Python CircleCI 2.0 configuration file
|
||||
# Updating CircleCI configuration from v1 to v2
|
||||
# Check https://circleci.com/docs/2.0/language-python/ for more details
|
||||
#
|
||||
version: 2
|
||||
jobs:
|
||||
build:
|
||||
machine: true
|
||||
steps:
|
||||
- checkout
|
||||
- run:
|
||||
name: build images
|
||||
command: |
|
||||
docker build -t jupyterhub/jupyterhub .
|
||||
docker build -t jupyterhub/jupyterhub-onbuild onbuild
|
||||
docker build -t jupyterhub/jupyterhub:alpine -f dockerfiles/Dockerfile.alpine .
|
||||
docker build -t jupyterhub/singleuser singleuser
|
||||
- run:
|
||||
name: smoke test jupyterhub
|
||||
command: |
|
||||
docker run --rm -it jupyterhub/jupyterhub jupyterhub --help
|
@@ -1,5 +1,4 @@
|
||||
[run]
|
||||
parallel = True
|
||||
branch = False
|
||||
omit =
|
||||
jupyterhub/tests/*
|
||||
|
12
.flake8
@@ -3,14 +3,20 @@
|
||||
# E: style errors
|
||||
# W: style warnings
|
||||
# C: complexity
|
||||
# D: docstring warnings (unused pydocstyle extension)
|
||||
# F401: module imported but unused
|
||||
# F403: import *
|
||||
# F811: redefinition of unused `name` from line `N`
|
||||
# F841: local variable assigned but never used
|
||||
ignore = E, C, W, D, F841
|
||||
builtins = c, get_config
|
||||
# E402: module level import not at top of file
|
||||
# I100: Import statements are in the wrong order
|
||||
# I101: Imported names are in the wrong order. Should be
|
||||
ignore = E, C, W, F401, F403, F811, F841, E402, I100, I101
|
||||
|
||||
exclude =
|
||||
.cache,
|
||||
.github,
|
||||
docs,
|
||||
examples,
|
||||
jupyterhub/alembic*,
|
||||
onbuild,
|
||||
scripts,
|
||||
|
37
.github/ISSUE_TEMPLATE/bug_report.md
vendored
Normal file
@@ -0,0 +1,37 @@
|
||||
---
|
||||
name: Bug report
|
||||
about: Create a report to help us improve
|
||||
|
||||
---
|
||||
|
||||
Hi! Thanks for using JupyterHub.
|
||||
|
||||
If you are reporting an issue with JupyterHub, please use the [GitHub issue](https://github.com/jupyterhub/jupyterhub/issues) search feature to check if your issue has been asked already. If it has, please add your comments to the existing issue.
|
||||
|
||||
**Describe the bug**
|
||||
A clear and concise description of what the bug is.
|
||||
|
||||
**To Reproduce**
|
||||
Steps to reproduce the behavior:
|
||||
1. Go to '...'
|
||||
2. Click on '....'
|
||||
3. Scroll down to '....'
|
||||
4. See error
|
||||
|
||||
**Expected behavior**
|
||||
A clear and concise description of what you expected to happen.
|
||||
|
||||
**Screenshots**
|
||||
If applicable, add screenshots to help explain your problem.
|
||||
|
||||
**Desktop (please complete the following information):**
|
||||
- OS: [e.g. iOS]
|
||||
- Browser [e.g. chrome, safari]
|
||||
- Version [e.g. 22]
|
||||
|
||||
**Additional context**
|
||||
Add any other context about the problem here.
|
||||
|
||||
- Running `jupyter troubleshoot` from the command line, if possible, and posting
|
||||
its output would also be helpful.
|
||||
- Running in `--debug` mode can also be helpful for troubleshooting.
|
7
.github/ISSUE_TEMPLATE/installation-and-configuration-issues.md
vendored
Normal file
@@ -0,0 +1,7 @@
|
||||
---
|
||||
name: Installation and configuration issues
|
||||
about: Installation and configuration assistance
|
||||
|
||||
---
|
||||
|
||||
If you are having issues with installation or configuration, you may ask for help on the JupyterHub gitter channel or file an issue here.
|
16
.github/dependabot.yaml
vendored
@@ -1,16 +0,0 @@
|
||||
# dependabot.yaml reference: https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
|
||||
#
|
||||
# Notes:
|
||||
# - Status and logs from dependabot are provided at
|
||||
# https://github.com/jupyterhub/jupyterhub/network/updates.
|
||||
#
|
||||
version: 2
|
||||
updates:
|
||||
# Maintain dependencies in our GitHub Workflows
|
||||
- package-ecosystem: github-actions
|
||||
directory: /
|
||||
labels: [ci]
|
||||
schedule:
|
||||
interval: monthly
|
||||
time: "05:00"
|
||||
timezone: Etc/UTC
|
226
.github/workflows/release.yml
vendored
@@ -1,226 +0,0 @@
|
||||
# This is a GitHub workflow defining a set of jobs with a set of steps.
|
||||
# ref: https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions
|
||||
#
|
||||
# Test build release artifacts (PyPI package, Docker images) and publish them on
|
||||
# pushed git tags.
|
||||
#
|
||||
name: Release
|
||||
|
||||
on:
|
||||
pull_request:
|
||||
paths-ignore:
|
||||
- "docs/**"
|
||||
- "**.md"
|
||||
- "**.rst"
|
||||
- ".github/workflows/*"
|
||||
- "!.github/workflows/release.yml"
|
||||
push:
|
||||
paths-ignore:
|
||||
- "docs/**"
|
||||
- "**.md"
|
||||
- "**.rst"
|
||||
- ".github/workflows/*"
|
||||
- "!.github/workflows/release.yml"
|
||||
branches-ignore:
|
||||
- "dependabot/**"
|
||||
- "pre-commit-ci-update-config"
|
||||
tags:
|
||||
- "**"
|
||||
workflow_dispatch:
|
||||
|
||||
jobs:
|
||||
build-release:
|
||||
runs-on: ubuntu-20.04
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
- uses: actions/setup-python@v4
|
||||
with:
|
||||
python-version: "3.9"
|
||||
|
||||
- uses: actions/setup-node@v3
|
||||
with:
|
||||
node-version: "14"
|
||||
|
||||
- name: install build requirements
|
||||
run: |
|
||||
npm install -g yarn
|
||||
pip install --upgrade pip
|
||||
pip install build
|
||||
pip freeze
|
||||
|
||||
- name: build release
|
||||
run: |
|
||||
python -m build --sdist --wheel .
|
||||
ls -l dist
|
||||
|
||||
- name: verify sdist
|
||||
run: |
|
||||
./ci/check_sdist.py dist/jupyterhub-*.tar.gz
|
||||
|
||||
- name: verify data-files are installed where they are found
|
||||
run: |
|
||||
pip install dist/*.whl
|
||||
./ci/check_installed_data.py
|
||||
|
||||
- name: verify sdist can be installed without npm/yarn
|
||||
run: |
|
||||
docker run --rm -v $PWD/dist:/dist:ro docker.io/library/python:3.9-slim-bullseye bash -c 'pip install /dist/jupyterhub-*.tar.gz'
|
||||
|
||||
# ref: https://github.com/actions/upload-artifact#readme
|
||||
- uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: jupyterhub-${{ github.sha }}
|
||||
path: "dist/*"
|
||||
if-no-files-found: error
|
||||
|
||||
- name: Publish to PyPI
|
||||
if: startsWith(github.ref, 'refs/tags/')
|
||||
env:
|
||||
TWINE_USERNAME: __token__
|
||||
TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
|
||||
run: |
|
||||
pip install twine
|
||||
twine upload --skip-existing dist/*
|
||||
|
||||
publish-docker:
|
||||
runs-on: ubuntu-20.04
|
||||
timeout-minutes: 20
|
||||
|
||||
services:
|
||||
# So that we can test this in PRs/branches
|
||||
local-registry:
|
||||
image: registry:2
|
||||
ports:
|
||||
- 5000:5000
|
||||
|
||||
steps:
|
||||
- name: Should we push this image to a public registry?
|
||||
run: |
|
||||
if [ "${{ startsWith(github.ref, 'refs/tags/') || (github.ref == 'refs/heads/main') }}" = "true" ]; then
|
||||
# Empty => Docker Hub
|
||||
echo "REGISTRY=" >> $GITHUB_ENV
|
||||
else
|
||||
echo "REGISTRY=localhost:5000/" >> $GITHUB_ENV
|
||||
fi
|
||||
|
||||
- uses: actions/checkout@v3
|
||||
|
||||
# Setup docker to build for multiple platforms, see:
|
||||
# https://github.com/docker/build-push-action/tree/v2.4.0#usage
|
||||
# https://github.com/docker/build-push-action/blob/v2.4.0/docs/advanced/multi-platform.md
|
||||
- name: Set up QEMU (for docker buildx)
|
||||
uses: docker/setup-qemu-action@v2
|
||||
|
||||
- name: Set up Docker Buildx (for multi-arch builds)
|
||||
uses: docker/setup-buildx-action@v2
|
||||
with:
|
||||
# Allows pushing to registry on localhost:5000
|
||||
driver-opts: network=host
|
||||
|
||||
- name: Setup push rights to Docker Hub
|
||||
# This was setup by...
|
||||
# 1. Creating a Docker Hub service account "jupyterhubbot"
|
||||
# 2. Creating a access token for the service account specific to this
|
||||
# repository: https://hub.docker.com/settings/security
|
||||
# 3. Making the account part of the "bots" team, and granting that team
|
||||
# permissions to push to the relevant images:
|
||||
# https://hub.docker.com/orgs/jupyterhub/teams/bots/permissions
|
||||
# 4. Registering the username and token as a secret for this repo:
|
||||
# https://github.com/jupyterhub/jupyterhub/settings/secrets/actions
|
||||
if: env.REGISTRY != 'localhost:5000/'
|
||||
run: |
|
||||
docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" -p "${{ secrets.DOCKERHUB_TOKEN }}"
|
||||
|
||||
# image: jupyterhub/jupyterhub
|
||||
#
|
||||
# https://github.com/jupyterhub/action-major-minor-tag-calculator
|
||||
# If this is a tagged build this will return additional parent tags.
|
||||
# E.g. 1.2.3 is expanded to Docker tags
|
||||
# [{prefix}:1.2.3, {prefix}:1.2, {prefix}:1, {prefix}:latest] unless
|
||||
# this is a backported tag in which case the newer tags aren't updated.
|
||||
# For branches this will return the branch name.
|
||||
# If GITHUB_TOKEN isn't available (e.g. in PRs) returns no tags [].
|
||||
- name: Get list of jupyterhub tags
|
||||
id: jupyterhubtags
|
||||
uses: jupyterhub/action-major-minor-tag-calculator@v2
|
||||
with:
|
||||
githubToken: ${{ secrets.GITHUB_TOKEN }}
|
||||
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub:"
|
||||
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub:noref"
|
||||
branchRegex: ^\w[\w-.]*$
|
||||
|
||||
- name: Build and push jupyterhub
|
||||
uses: docker/build-push-action@3b5e8027fcad23fda98b2e3ac259d8d67585f671
|
||||
with:
|
||||
context: .
|
||||
platforms: linux/amd64,linux/arm64
|
||||
push: true
|
||||
# tags parameter must be a string input so convert `gettags` JSON
|
||||
# array into a comma separated list of tags
|
||||
tags: ${{ join(fromJson(steps.jupyterhubtags.outputs.tags)) }}
|
||||
|
||||
# image: jupyterhub/jupyterhub-onbuild
|
||||
#
|
||||
- name: Get list of jupyterhub-onbuild tags
|
||||
id: onbuildtags
|
||||
uses: jupyterhub/action-major-minor-tag-calculator@v2
|
||||
with:
|
||||
githubToken: ${{ secrets.GITHUB_TOKEN }}
|
||||
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub-onbuild:"
|
||||
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub-onbuild:noref"
|
||||
branchRegex: ^\w[\w-.]*$
|
||||
|
||||
- name: Build and push jupyterhub-onbuild
|
||||
uses: docker/build-push-action@3b5e8027fcad23fda98b2e3ac259d8d67585f671
|
||||
with:
|
||||
build-args: |
|
||||
BASE_IMAGE=${{ fromJson(steps.jupyterhubtags.outputs.tags)[0] }}
|
||||
context: onbuild
|
||||
platforms: linux/amd64,linux/arm64
|
||||
push: true
|
||||
tags: ${{ join(fromJson(steps.onbuildtags.outputs.tags)) }}
|
||||
|
||||
# image: jupyterhub/jupyterhub-demo
|
||||
#
|
||||
- name: Get list of jupyterhub-demo tags
|
||||
id: demotags
|
||||
uses: jupyterhub/action-major-minor-tag-calculator@v2
|
||||
with:
|
||||
githubToken: ${{ secrets.GITHUB_TOKEN }}
|
||||
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub-demo:"
|
||||
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub-demo:noref"
|
||||
branchRegex: ^\w[\w-.]*$
|
||||
|
||||
- name: Build and push jupyterhub-demo
|
||||
uses: docker/build-push-action@3b5e8027fcad23fda98b2e3ac259d8d67585f671
|
||||
with:
|
||||
build-args: |
|
||||
BASE_IMAGE=${{ fromJson(steps.onbuildtags.outputs.tags)[0] }}
|
||||
context: demo-image
|
||||
# linux/arm64 currently fails:
|
||||
# ERROR: Could not build wheels for argon2-cffi which use PEP 517 and cannot be installed directly
|
||||
# ERROR: executor failed running [/bin/sh -c python3 -m pip install notebook]: exit code: 1
|
||||
platforms: linux/amd64
|
||||
push: true
|
||||
tags: ${{ join(fromJson(steps.demotags.outputs.tags)) }}
|
||||
|
||||
# image: jupyterhub/singleuser
|
||||
#
|
||||
- name: Get list of jupyterhub/singleuser tags
|
||||
id: singleusertags
|
||||
uses: jupyterhub/action-major-minor-tag-calculator@v2
|
||||
with:
|
||||
githubToken: ${{ secrets.GITHUB_TOKEN }}
|
||||
prefix: "${{ env.REGISTRY }}jupyterhub/singleuser:"
|
||||
defaultTag: "${{ env.REGISTRY }}jupyterhub/singleuser:noref"
|
||||
branchRegex: ^\w[\w-.]*$
|
||||
|
||||
- name: Build and push jupyterhub/singleuser
|
||||
uses: docker/build-push-action@3b5e8027fcad23fda98b2e3ac259d8d67585f671
|
||||
with:
|
||||
build-args: |
|
||||
JUPYTERHUB_VERSION=${{ github.ref_type == 'tag' && github.ref_name || format('git:{0}', github.sha) }}
|
||||
context: singleuser
|
||||
platforms: linux/amd64,linux/arm64
|
||||
push: true
|
||||
tags: ${{ join(fromJson(steps.singleusertags.outputs.tags)) }}
|
31
.github/workflows/support-bot.yml
vendored
@@ -1,31 +0,0 @@
|
||||
# https://github.com/dessant/support-requests
|
||||
name: "Support Requests"
|
||||
|
||||
on:
|
||||
issues:
|
||||
types: [labeled, unlabeled, reopened]
|
||||
|
||||
permissions:
|
||||
issues: write
|
||||
|
||||
jobs:
|
||||
action:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: dessant/support-requests@v3
|
||||
with:
|
||||
github-token: ${{ github.token }}
|
||||
support-label: "support"
|
||||
issue-comment: |
|
||||
Hi there @{issue-author} :wave:!
|
||||
|
||||
I closed this issue because it was labelled as a support question.
|
||||
|
||||
Please help us organize discussion by posting this on the http://discourse.jupyter.org/ forum.
|
||||
|
||||
Our goal is to sustain a positive experience for both users and developers. We use GitHub issues for specific discussions related to changing a repository's content, and let the forum be where we can more generally help and inspire each other.
|
||||
|
||||
Thank you for being an active member of our community! :heart:
|
||||
close-issue: true
|
||||
lock-issue: false
|
||||
issue-lock-reason: "off-topic"
|
107
.github/workflows/test-docs.yml
vendored
@@ -1,107 +0,0 @@
|
||||
# This is a GitHub workflow defining a set of jobs with a set of steps.
|
||||
# ref: https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions
|
||||
#
|
||||
# This workflow validates the REST API definition and runs the pytest tests in
|
||||
# the docs/ folder. This workflow does not build the documentation. That is
|
||||
# instead tested via ReadTheDocs (https://readthedocs.org/projects/jupyterhub/).
|
||||
#
|
||||
name: Test docs
|
||||
|
||||
# The tests defined in docs/ are currently influenced by changes to _version.py
|
||||
# and scopes.py.
|
||||
on:
|
||||
pull_request:
|
||||
paths:
|
||||
- "docs/**"
|
||||
- "jupyterhub/_version.py"
|
||||
- "jupyterhub/scopes.py"
|
||||
- ".github/workflows/test-docs.yml"
|
||||
push:
|
||||
paths:
|
||||
- "docs/**"
|
||||
- "jupyterhub/_version.py"
|
||||
- "jupyterhub/scopes.py"
|
||||
- ".github/workflows/test-docs.yml"
|
||||
branches-ignore:
|
||||
- "dependabot/**"
|
||||
- "pre-commit-ci-update-config"
|
||||
tags:
|
||||
- "**"
|
||||
workflow_dispatch:
|
||||
|
||||
env:
|
||||
# UTF-8 content may be interpreted as ascii and causes errors without this.
|
||||
LANG: C.UTF-8
|
||||
PYTEST_ADDOPTS: "--verbose --color=yes"
|
||||
|
||||
jobs:
|
||||
validate-rest-api-definition:
|
||||
runs-on: ubuntu-20.04
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
|
||||
- name: Validate REST API definition
|
||||
uses: char0n/swagger-editor-validate@v1.3.2
|
||||
with:
|
||||
definition-file: docs/source/_static/rest-api.yml
|
||||
|
||||
test-docs:
|
||||
runs-on: ubuntu-20.04
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
with:
|
||||
# make rediraffecheckdiff requires git history to compare current
|
||||
# commit with the main branch and previous releases.
|
||||
fetch-depth: 0
|
||||
|
||||
- uses: actions/setup-python@v4
|
||||
with:
|
||||
python-version: "3.9"
|
||||
|
||||
- name: Install requirements
|
||||
run: |
|
||||
pip install -r docs/requirements.txt pytest
|
||||
|
||||
- name: pytest docs/
|
||||
run: |
|
||||
pytest docs/
|
||||
|
||||
# readthedocs doesn't halt on warnings,
|
||||
# so raise any warnings here
|
||||
- name: build docs
|
||||
run: |
|
||||
cd docs
|
||||
make html
|
||||
|
||||
- name: check links
|
||||
run: |
|
||||
cd docs
|
||||
make linkcheck
|
||||
|
||||
# make rediraffecheckdiff compares files for different changesets
|
||||
# these diff targets aren't always available
|
||||
# - compare with base ref (usually 'main', always on 'origin') for pull requests
|
||||
# - only compare with tags when running against jupyterhub/jupyterhub
|
||||
# to avoid errors on forks, which often lack tags
|
||||
- name: check redirects for this PR
|
||||
if: github.event_name == 'pull_request'
|
||||
run: |
|
||||
cd docs
|
||||
export REDIRAFFE_BRANCH=origin/${{ github.base_ref }}
|
||||
make rediraffecheckdiff
|
||||
|
||||
# this should check currently published 'stable' links for redirects
|
||||
- name: check redirects since last release
|
||||
if: github.repository == 'jupyterhub/jupyterhub'
|
||||
run: |
|
||||
cd docs
|
||||
export REDIRAFFE_BRANCH=$(git describe --tags --abbrev=0)
|
||||
make rediraffecheckdiff
|
||||
|
||||
# longer-term redirect check (fixed version) for older links
|
||||
- name: check redirects since 3.0.0
|
||||
if: github.repository == 'jupyterhub/jupyterhub'
|
||||
run: |
|
||||
cd docs
|
||||
export REDIRAFFE_BRANCH=3.0.0
|
||||
make rediraffecheckdiff
|
52
.github/workflows/test-jsx.yml
vendored
@@ -1,52 +0,0 @@
|
||||
# This is a GitHub workflow defining a set of jobs with a set of steps.
|
||||
# ref: https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions
|
||||
#
|
||||
name: Test jsx (admin-react.js)
|
||||
|
||||
on:
|
||||
pull_request:
|
||||
paths:
|
||||
- "jsx/**"
|
||||
- ".github/workflows/test-jsx.yml"
|
||||
push:
|
||||
paths:
|
||||
- "jsx/**"
|
||||
- ".github/workflows/test-jsx.yml"
|
||||
branches-ignore:
|
||||
- "dependabot/**"
|
||||
- "pre-commit-ci-update-config"
|
||||
tags:
|
||||
- "**"
|
||||
workflow_dispatch:
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
|
||||
jobs:
|
||||
# The ./jsx folder contains React based source code files that are to compile
|
||||
# to share/jupyterhub/static/js/admin-react.js. The ./jsx folder includes
|
||||
# tests also has tests that this job is meant to run with `yarn test`
|
||||
# according to the documentation in jsx/README.md.
|
||||
test-jsx-admin-react:
|
||||
runs-on: ubuntu-20.04
|
||||
timeout-minutes: 5
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
- uses: actions/setup-node@v3
|
||||
with:
|
||||
node-version: "14"
|
||||
|
||||
- name: Install yarn
|
||||
run: |
|
||||
npm install -g yarn
|
||||
|
||||
- name: yarn
|
||||
run: |
|
||||
cd jsx
|
||||
yarn
|
||||
|
||||
- name: yarn test
|
||||
run: |
|
||||
cd jsx
|
||||
yarn test
|
265
.github/workflows/test.yml
vendored
@@ -1,265 +0,0 @@
|
||||
# This is a GitHub workflow defining a set of jobs with a set of steps.
|
||||
# ref: https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions
|
||||
#
|
||||
name: Test
|
||||
|
||||
on:
|
||||
pull_request:
|
||||
paths-ignore:
|
||||
- "docs/**"
|
||||
- "**.md"
|
||||
- "**.rst"
|
||||
- ".github/workflows/*"
|
||||
- "!.github/workflows/test.yml"
|
||||
push:
|
||||
paths-ignore:
|
||||
- "docs/**"
|
||||
- "**.md"
|
||||
- "**.rst"
|
||||
- ".github/workflows/*"
|
||||
- "!.github/workflows/test.yml"
|
||||
branches-ignore:
|
||||
- "dependabot/**"
|
||||
- "pre-commit-ci-update-config"
|
||||
tags:
|
||||
- "**"
|
||||
workflow_dispatch:
|
||||
|
||||
env:
|
||||
# UTF-8 content may be interpreted as ascii and causes errors without this.
|
||||
LANG: C.UTF-8
|
||||
SQLALCHEMY_WARN_20: "1"
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
|
||||
jobs:
|
||||
# Run "pytest jupyterhub/tests" in various configurations
|
||||
pytest:
|
||||
runs-on: ubuntu-20.04
|
||||
timeout-minutes: 15
|
||||
|
||||
strategy:
|
||||
# Keep running even if one variation of the job fail
|
||||
fail-fast: false
|
||||
matrix:
|
||||
# We run this job multiple times with different parameterization
|
||||
# specified below, these parameters have no meaning on their own and
|
||||
# gain meaning on how job steps use them.
|
||||
#
|
||||
# subdomain:
|
||||
# Tests everything when JupyterHub is configured to add routes for
|
||||
# users with dedicated subdomains like user1.jupyter.example.com
|
||||
# rather than jupyter.example.com/user/user1.
|
||||
#
|
||||
# db: [mysql/postgres]
|
||||
# Tests everything when JupyterHub works against a dedicated mysql or
|
||||
# postgresql server.
|
||||
#
|
||||
# legacy_notebook:
|
||||
# Tests everything when the user instances are started with
|
||||
# the legacy notebook server instead of jupyter_server.
|
||||
#
|
||||
# ssl:
|
||||
# Tests everything using internal SSL connections instead of
|
||||
# unencrypted HTTP
|
||||
#
|
||||
# main_dependencies:
|
||||
# Tests everything when the we use the latest available dependencies
|
||||
# from: traitlets.
|
||||
#
|
||||
# NOTE: Since only the value of these parameters are presented in the
|
||||
# GitHub UI when the workflow run, we avoid using true/false as
|
||||
# values by instead duplicating the name to signal true.
|
||||
# Python versions available at:
|
||||
# https://github.com/actions/python-versions/blob/HEAD/versions-manifest.json
|
||||
include:
|
||||
- python: "3.7"
|
||||
oldest_dependencies: oldest_dependencies
|
||||
legacy_notebook: legacy_notebook
|
||||
- python: "3.8"
|
||||
jupyter_server: "1.*"
|
||||
subset: singleuser
|
||||
- python: "3.9"
|
||||
db: mysql
|
||||
- python: "3.10"
|
||||
db: postgres
|
||||
- python: "3.11"
|
||||
subdomain: subdomain
|
||||
serverextension: serverextension
|
||||
- python: "3.11"
|
||||
ssl: ssl
|
||||
serverextension: serverextension
|
||||
- python: "3.11"
|
||||
subdomain: subdomain
|
||||
noextension: noextension
|
||||
subset: singleuser
|
||||
- python: "3.11"
|
||||
ssl: ssl
|
||||
noextension: noextension
|
||||
subset: singleuser
|
||||
- python: "3.11"
|
||||
browser: browser
|
||||
- python: "3.11"
|
||||
main_dependencies: main_dependencies
|
||||
|
||||
steps:
|
||||
# NOTE: In GitHub workflows, environment variables are set by writing
|
||||
# assignment statements to a file. They will be set in the following
|
||||
# steps as if would used `export MY_ENV=my-value`.
|
||||
- name: Configure environment variables
|
||||
run: |
|
||||
if [ "${{ matrix.subdomain }}" != "" ]; then
|
||||
echo "JUPYTERHUB_TEST_SUBDOMAIN_HOST=http://localhost.jovyan.org:8000" >> $GITHUB_ENV
|
||||
fi
|
||||
if [ "${{ matrix.db }}" == "mysql" ]; then
|
||||
echo "MYSQL_HOST=127.0.0.1" >> $GITHUB_ENV
|
||||
echo "JUPYTERHUB_TEST_DB_URL=mysql+mysqldb://root@127.0.0.1:3306/jupyterhub" >> $GITHUB_ENV
|
||||
fi
|
||||
if [ "${{ matrix.ssl }}" == "ssl" ]; then
|
||||
echo "SSL_ENABLED=1" >> $GITHUB_ENV
|
||||
fi
|
||||
if [ "${{ matrix.db }}" == "postgres" ]; then
|
||||
echo "PGHOST=127.0.0.1" >> $GITHUB_ENV
|
||||
echo "PGUSER=test_user" >> $GITHUB_ENV
|
||||
echo "PGPASSWORD=hub[test/:?" >> $GITHUB_ENV
|
||||
echo "JUPYTERHUB_TEST_DB_URL=postgresql://test_user:hub%5Btest%2F%3A%3F@127.0.0.1:5432/jupyterhub" >> $GITHUB_ENV
|
||||
fi
|
||||
if [ "${{ matrix.serverextension }}" != "" ]; then
|
||||
echo "JUPYTERHUB_SINGLEUSER_EXTENSION=1" >> $GITHUB_ENV
|
||||
elif [ "${{ matrix.noextension }}" != "" ]; then
|
||||
echo "JUPYTERHUB_SINGLEUSER_EXTENSION=0" >> $GITHUB_ENV
|
||||
fi
|
||||
- uses: actions/checkout@v3
|
||||
# NOTE: actions/setup-node@v3 make use of a cache within the GitHub base
|
||||
# environment and setup in a fraction of a second.
|
||||
- name: Install Node v14
|
||||
uses: actions/setup-node@v3
|
||||
with:
|
||||
node-version: "14"
|
||||
- name: Install Javascript dependencies
|
||||
run: |
|
||||
npm install
|
||||
npm install -g configurable-http-proxy yarn
|
||||
npm list
|
||||
|
||||
# NOTE: actions/setup-python@v4 make use of a cache within the GitHub base
|
||||
# environment and setup in a fraction of a second.
|
||||
- name: Install Python ${{ matrix.python }}
|
||||
uses: actions/setup-python@v4
|
||||
with:
|
||||
python-version: "${{ matrix.python }}"
|
||||
|
||||
- name: Install Python dependencies
|
||||
run: |
|
||||
pip install --upgrade pip
|
||||
pip install -e ".[test]"
|
||||
|
||||
if [ "${{ matrix.oldest_dependencies }}" != "" ]; then
|
||||
# take any dependencies in requirements.txt such as tornado>=5.0
|
||||
# and transform them to tornado==5.0 so we can run tests with
|
||||
# the earliest-supported versions
|
||||
cat requirements.txt | grep '>=' | sed -e 's@>=@==@g' > oldest-requirements.txt
|
||||
pip install -r oldest-requirements.txt
|
||||
fi
|
||||
|
||||
if [ "${{ matrix.main_dependencies }}" != "" ]; then
|
||||
# Tests are broken:
|
||||
# https://github.com/jupyterhub/jupyterhub/issues/4418
|
||||
# pip install git+https://github.com/ipython/traitlets#egg=traitlets --force
|
||||
pip install --upgrade --pre sqlalchemy
|
||||
fi
|
||||
if [ "${{ matrix.legacy_notebook }}" != "" ]; then
|
||||
pip uninstall jupyter_server --yes
|
||||
pip install 'notebook<7'
|
||||
fi
|
||||
if [ "${{ matrix.jupyter_server }}" != "" ]; then
|
||||
pip install "jupyter_server==${{ matrix.jupyter_server }}"
|
||||
fi
|
||||
if [ "${{ matrix.db }}" == "mysql" ]; then
|
||||
pip install mysqlclient
|
||||
fi
|
||||
if [ "${{ matrix.db }}" == "postgres" ]; then
|
||||
pip install psycopg2-binary
|
||||
fi
|
||||
if [ "${{ matrix.serverextension }}" != "" ]; then
|
||||
pip install 'jupyter-server>=2'
|
||||
fi
|
||||
|
||||
pip freeze
|
||||
|
||||
# NOTE: If you need to debug this DB setup step, consider the following.
|
||||
#
|
||||
# 1. mysql/postgressql are database servers we start as docker containers,
|
||||
# and we use clients named mysql/psql.
|
||||
#
|
||||
# 2. When we start a database server we need to pass environment variables
|
||||
# explicitly as part of the `docker run` command. These environment
|
||||
# variables are named differently from the similarly named environment
|
||||
# variables used by the clients.
|
||||
#
|
||||
# - mysql server ref: https://hub.docker.com/_/mysql/
|
||||
# - mysql client ref: https://dev.mysql.com/doc/refman/5.7/en/environment-variables.html
|
||||
# - postgres server ref: https://hub.docker.com/_/postgres/
|
||||
# - psql client ref: https://www.postgresql.org/docs/9.5/libpq-envars.html
|
||||
#
|
||||
# 3. When we connect, they should use 127.0.0.1 rather than the
|
||||
# default way of connecting which leads to errors like below both for
|
||||
# mysql and postgresql unless we set MYSQL_HOST/PGHOST to 127.0.0.1.
|
||||
#
|
||||
# - ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
|
||||
#
|
||||
- name: Start a database server (${{ matrix.db }})
|
||||
if: ${{ matrix.db }}
|
||||
run: |
|
||||
if [ "${{ matrix.db }}" == "mysql" ]; then
|
||||
if [[ -z "$(which mysql)" ]]; then
|
||||
sudo apt-get update
|
||||
sudo apt-get install -y mysql-client
|
||||
fi
|
||||
DB=mysql bash ci/docker-db.sh
|
||||
DB=mysql bash ci/init-db.sh
|
||||
fi
|
||||
if [ "${{ matrix.db }}" == "postgres" ]; then
|
||||
if [[ -z "$(which psql)" ]]; then
|
||||
sudo apt-get update
|
||||
sudo apt-get install -y postgresql-client
|
||||
fi
|
||||
DB=postgres bash ci/docker-db.sh
|
||||
DB=postgres bash ci/init-db.sh
|
||||
fi
|
||||
|
||||
- name: Configure browser tests
|
||||
if: matrix.browser
|
||||
run: echo "PYTEST_ADDOPTS=$PYTEST_ADDOPTS -m browser" >> "${GITHUB_ENV}"
|
||||
|
||||
- name: Ensure browsers are installed for playwright
|
||||
if: matrix.browser
|
||||
run: python -m playwright install --with-deps
|
||||
|
||||
- name: Run pytest
|
||||
run: |
|
||||
pytest -k "${{ matrix.subset }}" --maxfail=2 --cov=jupyterhub jupyterhub/tests
|
||||
|
||||
- uses: codecov/codecov-action@v3
|
||||
|
||||
docker-build:
|
||||
runs-on: ubuntu-20.04
|
||||
timeout-minutes: 20
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
|
||||
- name: build images
|
||||
run: |
|
||||
DOCKER_BUILDKIT=1 docker build -t jupyterhub/jupyterhub .
|
||||
docker build -t jupyterhub/jupyterhub-onbuild onbuild
|
||||
docker build -t jupyterhub/singleuser singleuser
|
||||
|
||||
- name: smoke test jupyterhub
|
||||
run: |
|
||||
docker run --rm -t jupyterhub/jupyterhub jupyterhub --help
|
||||
|
||||
- name: verify static files
|
||||
run: |
|
||||
docker run --rm -t -v $PWD/dockerfiles:/io jupyterhub/jupyterhub python3 /io/test.py
|
13
.gitignore
vendored
@@ -8,32 +8,19 @@ dist
|
||||
docs/_build
|
||||
docs/build
|
||||
docs/source/_static/rest-api
|
||||
docs/source/rbac/scope-table.md
|
||||
docs/source/reference/metrics.md
|
||||
|
||||
.ipynb_checkpoints
|
||||
jsx/build/
|
||||
# ignore config file at the top-level of the repo
|
||||
# but not sub-dirs
|
||||
/jupyterhub_config.py
|
||||
jupyterhub_cookie_secret
|
||||
jupyterhub.sqlite
|
||||
jupyterhub.sqlite*
|
||||
package-lock.json
|
||||
share/jupyterhub/static/components
|
||||
share/jupyterhub/static/css/style.min.css
|
||||
share/jupyterhub/static/css/style.min.css.map
|
||||
share/jupyterhub/static/js/admin-react.js*
|
||||
*.egg-info
|
||||
MANIFEST
|
||||
.coverage
|
||||
.coverage.*
|
||||
htmlcov
|
||||
.idea/
|
||||
.vscode/
|
||||
.pytest_cache
|
||||
pip-wheel-metadata
|
||||
docs/source/reference/metrics.rst
|
||||
oldest-requirements.txt
|
||||
jupyterhub-proxy.pid
|
||||
examples/server-api/service-token
|
||||
|
@@ -1,66 +0,0 @@
|
||||
# pre-commit is a tool to perform a predefined set of tasks manually and/or
|
||||
# automatically before git commits are made.
|
||||
#
|
||||
# Config reference: https://pre-commit.com/#pre-commit-configyaml---top-level
|
||||
#
|
||||
# Common tasks
|
||||
#
|
||||
# - Run on all files: pre-commit run --all-files
|
||||
# - Register git hooks: pre-commit install --install-hooks
|
||||
#
|
||||
|
||||
ci:
|
||||
# pre-commit.ci will open PRs updating our hooks once a month
|
||||
autoupdate_schedule: monthly
|
||||
|
||||
repos:
|
||||
# Autoformat: Python code, syntax patterns are modernized
|
||||
- repo: https://github.com/asottile/pyupgrade
|
||||
rev: v3.4.0
|
||||
hooks:
|
||||
- id: pyupgrade
|
||||
args:
|
||||
- --py37-plus
|
||||
|
||||
# Autoformat: Python code
|
||||
- repo: https://github.com/PyCQA/autoflake
|
||||
rev: v2.1.1
|
||||
hooks:
|
||||
- id: autoflake
|
||||
# args ref: https://github.com/PyCQA/autoflake#advanced-usage
|
||||
args:
|
||||
- --in-place
|
||||
|
||||
# Autoformat: Python code
|
||||
- repo: https://github.com/pycqa/isort
|
||||
rev: 5.12.0
|
||||
hooks:
|
||||
- id: isort
|
||||
|
||||
# Autoformat: Python code
|
||||
- repo: https://github.com/psf/black
|
||||
rev: 23.3.0
|
||||
hooks:
|
||||
- id: black
|
||||
|
||||
# Autoformat: markdown, yaml, javascript (see the file .prettierignore)
|
||||
- repo: https://github.com/pre-commit/mirrors-prettier
|
||||
rev: v3.0.0-alpha.9-for-vscode
|
||||
hooks:
|
||||
- id: prettier
|
||||
|
||||
# Autoformat and linting, misc. details
|
||||
- repo: https://github.com/pre-commit/pre-commit-hooks
|
||||
rev: v4.4.0
|
||||
hooks:
|
||||
- id: end-of-file-fixer
|
||||
exclude: share/jupyterhub/static/js/admin-react.js
|
||||
- id: requirements-txt-fixer
|
||||
- id: check-case-conflict
|
||||
- id: check-executables-have-shebangs
|
||||
|
||||
# Linting: Python code (see the file .flake8)
|
||||
- repo: https://github.com/PyCQA/flake8
|
||||
rev: "6.0.0"
|
||||
hooks:
|
||||
- id: flake8
|
@@ -1,3 +0,0 @@
|
||||
share/jupyterhub/templates/
|
||||
share/jupyterhub/static/js/admin-react.js
|
||||
jupyterhub/singleuser/templates/
|
@@ -1,25 +0,0 @@
|
||||
# Configuration on how ReadTheDocs (RTD) builds our documentation
|
||||
# ref: https://readthedocs.org/projects/jupyterhub/
|
||||
# ref: https://docs.readthedocs.io/en/stable/config-file/v2.html
|
||||
#
|
||||
version: 2
|
||||
|
||||
sphinx:
|
||||
configuration: docs/source/conf.py
|
||||
|
||||
build:
|
||||
os: ubuntu-20.04
|
||||
tools:
|
||||
nodejs: "16"
|
||||
python: "3.9"
|
||||
|
||||
python:
|
||||
install:
|
||||
- requirements: docs/requirements.txt
|
||||
|
||||
formats:
|
||||
# Adding htmlzip enables a Downloads section in the rendered website's RTD
|
||||
# menu where the html build can be downloaded. This doesn't require any
|
||||
# additional configuration in docs/source/conf.py.
|
||||
#
|
||||
- htmlzip
|
68
.travis.yml
Normal file
@@ -0,0 +1,68 @@
|
||||
language: python
|
||||
sudo: false
|
||||
cache:
|
||||
- pip
|
||||
python:
|
||||
- 3.6
|
||||
- 3.5
|
||||
- nightly
|
||||
env:
|
||||
global:
|
||||
- ASYNC_TEST_TIMEOUT=15
|
||||
- MYSQL_HOST=127.0.0.1
|
||||
- MYSQL_TCP_PORT=13306
|
||||
services:
|
||||
- postgres
|
||||
- docker
|
||||
|
||||
# installing dependencies
|
||||
before_install:
|
||||
- nvm install 6; nvm use 6
|
||||
- npm install
|
||||
- npm install -g configurable-http-proxy
|
||||
- |
|
||||
# setup database
|
||||
if [[ $JUPYTERHUB_TEST_DB_URL == mysql* ]]; then
|
||||
unset MYSQL_UNIX_PORT
|
||||
DB=mysql bash ci/docker-db.sh
|
||||
DB=mysql bash ci/init-db.sh
|
||||
pip install 'mysql-connector<2.2'
|
||||
elif [[ $JUPYTERHUB_TEST_DB_URL == postgresql* ]]; then
|
||||
DB=postgres bash ci/init-db.sh
|
||||
pip install psycopg2-binary
|
||||
fi
|
||||
install:
|
||||
- pip install --upgrade pip
|
||||
- pip install --pre -r dev-requirements.txt .
|
||||
- pip freeze
|
||||
|
||||
# running tests
|
||||
script:
|
||||
- |
|
||||
# run tests
|
||||
set -e
|
||||
pytest -v --maxfail=2 --cov=jupyterhub jupyterhub/tests
|
||||
- |
|
||||
# build docs
|
||||
pushd docs
|
||||
pip install -r requirements.txt
|
||||
make html
|
||||
popd
|
||||
after_success:
|
||||
- codecov
|
||||
|
||||
matrix:
|
||||
fast_finish: true
|
||||
include:
|
||||
- python: 3.6
|
||||
env: JUPYTERHUB_TEST_SUBDOMAIN_HOST=http://localhost.jovyan.org:8000
|
||||
- python: 3.6
|
||||
env:
|
||||
- JUPYTERHUB_TEST_DB_URL=mysql+mysqlconnector://root@127.0.0.1:$MYSQL_TCP_PORT/jupyterhub
|
||||
- python: 3.6
|
||||
env:
|
||||
- JUPYTERHUB_TEST_DB_URL=postgresql://postgres@127.0.0.1/jupyterhub
|
||||
- python: 3.7
|
||||
dist: xenial
|
||||
allow_failures:
|
||||
- python: nightly
|
26
CHECKLIST-Release.md
Normal file
@@ -0,0 +1,26 @@
|
||||
# Release checklist
|
||||
|
||||
- [ ] Upgrade Docs prior to Release
|
||||
|
||||
- [ ] Change log
|
||||
- [ ] New features documented
|
||||
- [ ] Update the contributor list - thank you page
|
||||
|
||||
- [ ] Upgrade and test Reference Deployments
|
||||
|
||||
- [ ] Release software
|
||||
|
||||
- [ ] Make sure 0 issues in milestone
|
||||
- [ ] Follow release process steps
|
||||
- [ ] Send builds to PyPI (Warehouse) and Conda Forge
|
||||
|
||||
- [ ] Blog post and/or release note
|
||||
|
||||
- [ ] Notify users of release
|
||||
|
||||
- [ ] Email Jupyter and Jupyter In Education mailing lists
|
||||
- [ ] Tweet (optional)
|
||||
|
||||
- [ ] Increment the version number for the next release
|
||||
|
||||
- [ ] Update roadmap
|
@@ -1 +1 @@
|
||||
Please refer to [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md).
|
||||
Please refer to [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md).
|
||||
|
102
CONTRIBUTING.md
@@ -1,14 +1,98 @@
|
||||
# Contributing to JupyterHub
|
||||
# Contributing
|
||||
|
||||
Welcome! As a [Jupyter](https://jupyter.org) project,
|
||||
you can follow the [Jupyter contributor guide](https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html).
|
||||
Welcome! As a [Jupyter](https://jupyter.org) project, we follow the [Jupyter contributor guide](https://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html).
|
||||
|
||||
Make sure to also follow [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md)
|
||||
for a friendly and welcoming collaborative environment.
|
||||
|
||||
Please see our documentation on
|
||||
## Set up your development system
|
||||
|
||||
- [Setting up a development install](https://jupyterhub.readthedocs.io/en/latest/contributing/setup.html)
|
||||
- [Testing JupyterHub and linting code](https://jupyterhub.readthedocs.io/en/latest/contributing/tests.html)
|
||||
For a development install, clone the [repository](https://github.com/jupyterhub/jupyterhub)
|
||||
and then install from source:
|
||||
|
||||
If you need some help, feel free to ask on [Gitter](https://gitter.im/jupyterhub/jupyterhub) or [Discourse](https://discourse.jupyter.org/).
|
||||
```bash
|
||||
git clone https://github.com/jupyterhub/jupyterhub
|
||||
cd jupyterhub
|
||||
npm install -g configurable-http-proxy
|
||||
pip3 install -r dev-requirements.txt -e .
|
||||
```
|
||||
|
||||
### Troubleshooting a development install
|
||||
|
||||
If the `pip3 install` command fails and complains about `lessc` being
|
||||
unavailable, you may need to explicitly install some additional JavaScript
|
||||
dependencies:
|
||||
|
||||
npm install
|
||||
|
||||
This will fetch client-side JavaScript dependencies necessary to compile CSS.
|
||||
|
||||
You may also need to manually update JavaScript and CSS after some development
|
||||
updates, with:
|
||||
|
||||
```bash
|
||||
python3 setup.py js # fetch updated client-side js
|
||||
python3 setup.py css # recompile CSS from LESS sources
|
||||
```
|
||||
|
||||
## Running the test suite
|
||||
|
||||
We use [pytest](http://doc.pytest.org/en/latest/) for running tests.
|
||||
|
||||
1. Set up a development install as described above.
|
||||
|
||||
2. Set environment variable for `ASYNC_TEST_TIMEOUT` to 15 seconds:
|
||||
|
||||
```bash
|
||||
export ASYNC_TEST_TIMEOUT=15
|
||||
```
|
||||
|
||||
3. Run tests.
|
||||
|
||||
To run all the tests:
|
||||
|
||||
```bash
|
||||
pytest -v jupyterhub/tests
|
||||
```
|
||||
|
||||
To run an individual test file (i.e. `test_api.py`):
|
||||
|
||||
```bash
|
||||
pytest -v jupyterhub/tests/test_api.py
|
||||
```
|
||||
|
||||
### Troubleshooting tests
|
||||
|
||||
If you see test failures because of timeouts, you may wish to increase the
|
||||
`ASYNC_TEST_TIMEOUT` used by the
|
||||
[pytest-tornado-plugin](https://github.com/eugeniy/pytest-tornado/blob/c79f68de2222eb7cf84edcfe28650ebf309a4d0c/README.rst#markers)
|
||||
from the default of 5 seconds:
|
||||
|
||||
```bash
|
||||
export ASYNC_TEST_TIMEOUT=15
|
||||
```
|
||||
|
||||
If you see many test errors and failures, double check that you have installed
|
||||
`configurable-http-proxy`.
|
||||
|
||||
## Building the Docs locally
|
||||
|
||||
1. Install the development system as described above.
|
||||
|
||||
2. Install the dependencies for documentation:
|
||||
|
||||
```bash
|
||||
python3 -m pip install -r docs/requirements.txt
|
||||
```
|
||||
|
||||
3. Build the docs:
|
||||
|
||||
```bash
|
||||
cd docs
|
||||
make clean
|
||||
make html
|
||||
```
|
||||
|
||||
4. View the docs:
|
||||
|
||||
```bash
|
||||
open build/html/index.html
|
||||
```
|
||||
|
@@ -24,7 +24,7 @@ software without specific prior written permission.
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
|
||||
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
|
||||
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
|
||||
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
|
||||
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
|
||||
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
|
||||
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
|
||||
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
|
||||
@@ -46,8 +46,8 @@ Jupyter uses a shared copyright model. Each contributor maintains copyright
|
||||
over their contributions to Jupyter. But, it is important to note that these
|
||||
contributions are typically only changes to the repositories. Thus, the Jupyter
|
||||
source code, in its entirety is not the copyright of any single person or
|
||||
institution. Instead, it is the collective copyright of the entire Jupyter
|
||||
Development Team. If individual contributors want to maintain a record of what
|
||||
institution. Instead, it is the collective copyright of the entire Jupyter
|
||||
Development Team. If individual contributors want to maintain a record of what
|
||||
changes/contributions they have specific copyright on, they should indicate
|
||||
their copyright in the commit message of the change, when they commit the
|
||||
change to one of the Jupyter repositories.
|
||||
|
128
Dockerfile
@@ -21,116 +21,40 @@
|
||||
# your jupyterhub_config.py will be added automatically
|
||||
# from your docker directory.
|
||||
|
||||
######################################################################
|
||||
# This Dockerfile uses multi-stage builds with optimisations to build
|
||||
# the JupyterHub wheel on the native architecture only
|
||||
# https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/
|
||||
FROM ubuntu:18.04
|
||||
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
|
||||
|
||||
ARG BASE_IMAGE=ubuntu:22.04
|
||||
# install nodejs, utf8 locale, set CDN because default httpredir is unreliable
|
||||
ENV DEBIAN_FRONTEND noninteractive
|
||||
RUN apt-get -y update && \
|
||||
apt-get -y upgrade && \
|
||||
apt-get -y install wget git bzip2 && \
|
||||
apt-get purge && \
|
||||
apt-get clean && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
ENV LANG C.UTF-8
|
||||
|
||||
# install Python + NodeJS with conda
|
||||
RUN wget -q https://repo.continuum.io/miniconda/Miniconda3-4.5.1-Linux-x86_64.sh -O /tmp/miniconda.sh && \
|
||||
echo '0c28787e3126238df24c5d4858bd0744 */tmp/miniconda.sh' | md5sum -c - && \
|
||||
bash /tmp/miniconda.sh -f -b -p /opt/conda && \
|
||||
/opt/conda/bin/conda install --yes -c conda-forge \
|
||||
python=3.6 sqlalchemy tornado jinja2 traitlets requests pip pycurl \
|
||||
nodejs configurable-http-proxy && \
|
||||
/opt/conda/bin/pip install --upgrade pip && \
|
||||
rm /tmp/miniconda.sh
|
||||
ENV PATH=/opt/conda/bin:$PATH
|
||||
|
||||
######################################################################
|
||||
# The JupyterHub wheel is pure Python so can be built for any platform
|
||||
# on the native architecture (avoiding QEMU emulation)
|
||||
FROM --platform=${BUILDPLATFORM:-linux/amd64} $BASE_IMAGE AS jupyterhub-builder
|
||||
|
||||
ENV DEBIAN_FRONTEND=noninteractive
|
||||
|
||||
# Don't clear apt cache, and don't combine RUN commands, so that cached layers can
|
||||
# be reused in other stages
|
||||
|
||||
RUN apt-get update -qq \
|
||||
&& apt-get install -yqq --no-install-recommends \
|
||||
build-essential \
|
||||
ca-certificates \
|
||||
curl \
|
||||
locales \
|
||||
python3-dev \
|
||||
python3-pip \
|
||||
python3-pycurl \
|
||||
python3-venv \
|
||||
&& python3 -m pip install --no-cache-dir --upgrade setuptools pip build wheel
|
||||
# Ubuntu 22.04 comes with Nodejs 12 which is too old for building JupyterHub JS
|
||||
# It's fine at runtime though (used only by configurable-http-proxy)
|
||||
RUN curl -fsSL https://deb.nodesource.com/setup_18.x | bash - \
|
||||
&& apt-get install -yqq --no-install-recommends \
|
||||
nodejs \
|
||||
&& npm install --global yarn
|
||||
|
||||
WORKDIR /src/jupyterhub
|
||||
# copy everything except whats in .dockerignore, its a
|
||||
# compromise between needing to rebuild and maintaining
|
||||
# what needs to be part of the build
|
||||
COPY . .
|
||||
|
||||
ARG PIP_CACHE_DIR=/tmp/pip-cache
|
||||
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
|
||||
python3 -m build --wheel
|
||||
|
||||
|
||||
######################################################################
|
||||
# All other wheels required by JupyterHub, some are platform specific
|
||||
FROM $BASE_IMAGE AS wheel-builder
|
||||
|
||||
ENV DEBIAN_FRONTEND=noninteractive
|
||||
|
||||
RUN apt-get update -qq \
|
||||
&& apt-get install -yqq --no-install-recommends \
|
||||
build-essential \
|
||||
ca-certificates \
|
||||
curl \
|
||||
locales \
|
||||
python3-dev \
|
||||
python3-pip \
|
||||
python3-pycurl \
|
||||
python3-venv \
|
||||
&& python3 -m pip install --no-cache-dir --upgrade setuptools pip build wheel
|
||||
|
||||
ADD . /src/jupyterhub
|
||||
WORKDIR /src/jupyterhub
|
||||
|
||||
COPY --from=jupyterhub-builder /src/jupyterhub/dist/*.whl /src/jupyterhub/dist/
|
||||
ARG PIP_CACHE_DIR=/tmp/pip-cache
|
||||
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
|
||||
python3 -m pip wheel --wheel-dir wheelhouse dist/*.whl
|
||||
|
||||
|
||||
######################################################################
|
||||
# The final JupyterHub image, platform specific
|
||||
FROM $BASE_IMAGE AS jupyterhub
|
||||
|
||||
ENV DEBIAN_FRONTEND=noninteractive \
|
||||
SHELL=/bin/bash \
|
||||
LC_ALL=en_US.UTF-8 \
|
||||
LANG=en_US.UTF-8 \
|
||||
LANGUAGE=en_US.UTF-8 \
|
||||
PYTHONDONTWRITEBYTECODE=1
|
||||
RUN pip install . && \
|
||||
rm -rf $PWD ~/.cache ~/.npm
|
||||
|
||||
RUN mkdir -p /srv/jupyterhub/
|
||||
WORKDIR /srv/jupyterhub/
|
||||
EXPOSE 8000
|
||||
|
||||
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
|
||||
LABEL org.jupyter.service="jupyterhub"
|
||||
|
||||
WORKDIR /srv/jupyterhub
|
||||
|
||||
RUN apt-get update -qq \
|
||||
&& apt-get install -yqq --no-install-recommends \
|
||||
ca-certificates \
|
||||
curl \
|
||||
gnupg \
|
||||
locales \
|
||||
python-is-python3 \
|
||||
python3-pip \
|
||||
python3-pycurl \
|
||||
nodejs \
|
||||
npm \
|
||||
&& locale-gen $LC_ALL \
|
||||
&& npm install -g configurable-http-proxy@^4.2.0 \
|
||||
# clean cache and logs
|
||||
&& rm -rf /var/lib/apt/lists/* /var/log/* /var/tmp/* ~/.npm
|
||||
# install the wheels we built in the previous stage
|
||||
RUN --mount=type=cache,from=wheel-builder,source=/src/jupyterhub/wheelhouse,target=/tmp/wheelhouse \
|
||||
# always make sure pip is up to date!
|
||||
python3 -m pip install --no-compile --no-cache-dir --upgrade setuptools pip \
|
||||
&& python3 -m pip install --no-compile --no-cache-dir /tmp/wheelhouse/*
|
||||
|
||||
CMD ["jupyterhub"]
|
||||
|
@@ -8,7 +8,6 @@ include *requirements.txt
|
||||
include Dockerfile
|
||||
|
||||
graft onbuild
|
||||
graft jsx
|
||||
graft jupyterhub
|
||||
graft scripts
|
||||
graft share
|
||||
@@ -19,10 +18,6 @@ graft ci
|
||||
graft docs
|
||||
prune docs/node_modules
|
||||
|
||||
# Intermediate javascript files
|
||||
prune jsx/node_modules
|
||||
prune jsx/build
|
||||
|
||||
# prune some large unused files from components
|
||||
prune share/jupyterhub/static/components/bootstrap/dist/css
|
||||
exclude share/jupyterhub/static/components/bootstrap/dist/fonts/*.svg
|
||||
|
1
PULL_REQUEST_TEMPLATE.md
Normal file
@@ -0,0 +1 @@
|
||||
|
110
README.md
@@ -6,28 +6,26 @@
|
||||
**[License](#license)** |
|
||||
**[Help and Resources](#help-and-resources)**
|
||||
|
||||
---
|
||||
|
||||
# [JupyterHub](https://github.com/jupyterhub/jupyterhub)
|
||||
|
||||
[](https://pypi.python.org/pypi/jupyterhub)
|
||||
[](https://anaconda.org/conda-forge/jupyterhub)
|
||||
[](https://jupyterhub.readthedocs.org/en/latest/)
|
||||
[](https://github.com/jupyterhub/jupyterhub/actions)
|
||||
[](https://hub.docker.com/r/jupyterhub/jupyterhub/tags)
|
||||
[](https://codecov.io/gh/jupyterhub/jupyterhub)
|
||||
[](https://github.com/jupyterhub/jupyterhub/issues)
|
||||
[](https://discourse.jupyter.org/c/jupyterhub)
|
||||
[](https://gitter.im/jupyterhub/jupyterhub)
|
||||
|
||||
[](https://pypi.python.org/pypi/jupyterhub)
|
||||
[](https://jupyterhub.readthedocs.org/en/latest/?badge=latest)
|
||||
[](https://jupyterhub.readthedocs.io/en/0.7.2/?badge=0.7.2)
|
||||
[](https://travis-ci.org/jupyterhub/jupyterhub)
|
||||
[](https://circleci.com/gh/jupyterhub/jupyterhub)
|
||||
[](https://codecov.io/github/jupyterhub/jupyterhub?branch=master)
|
||||
[](https://groups.google.com/forum/#!forum/jupyter)
|
||||
|
||||
With [JupyterHub](https://jupyterhub.readthedocs.io) you can create a
|
||||
**multi-user Hub** that spawns, manages, and proxies multiple instances of the
|
||||
**multi-user Hub** which spawns, manages, and proxies multiple instances of the
|
||||
single-user [Jupyter notebook](https://jupyter-notebook.readthedocs.io)
|
||||
server.
|
||||
|
||||
[Project Jupyter](https://jupyter.org) created JupyterHub to support many
|
||||
users. The Hub can offer notebook servers to a class of students, a corporate
|
||||
data science workgroup, a scientific research project, or a high-performance
|
||||
data science workgroup, a scientific research project, or a high performance
|
||||
computing group.
|
||||
|
||||
## Technical overview
|
||||
@@ -41,32 +39,38 @@ Three main actors make up JupyterHub:
|
||||
Basic principles for operation are:
|
||||
|
||||
- Hub launches a proxy.
|
||||
- The Proxy forwards all requests to Hub by default.
|
||||
- Hub handles login and spawns single-user servers on demand.
|
||||
- Hub configures proxy to forward URL prefixes to the single-user notebook
|
||||
- Proxy forwards all requests to Hub by default.
|
||||
- Hub handles login, and spawns single-user servers on demand.
|
||||
- Hub configures proxy to forward url prefixes to the single-user notebook
|
||||
servers.
|
||||
|
||||
JupyterHub also provides a
|
||||
[REST API][]
|
||||
[REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default)
|
||||
for administration of the Hub and its users.
|
||||
|
||||
[rest api]: https://jupyterhub.readthedocs.io/en/latest/reference/rest-api.html
|
||||
|
||||
## Installation
|
||||
|
||||
|
||||
### Check prerequisites
|
||||
|
||||
- A Linux/Unix based system
|
||||
- [Python](https://www.python.org/downloads/) 3.6 or greater
|
||||
- [Python](https://www.python.org/downloads/) 3.5 or greater
|
||||
- [nodejs/npm](https://www.npmjs.com/)
|
||||
|
||||
- If you are using **`conda`**, the nodejs and npm dependencies will be installed for
|
||||
* If you are using **`conda`**, the nodejs and npm dependencies will be installed for
|
||||
you by conda.
|
||||
|
||||
- If you are using **`pip`**, install a recent version (at least 12.0) of
|
||||
* If you are using **`pip`**, install a recent version of
|
||||
[nodejs/npm](https://docs.npmjs.com/getting-started/installing-node).
|
||||
For example, install it on Linux (Debian/Ubuntu) using:
|
||||
|
||||
```
|
||||
sudo apt-get install npm nodejs-legacy
|
||||
```
|
||||
|
||||
The `nodejs-legacy` package installs the `node` executable and is currently
|
||||
required for npm to work on Debian/Ubuntu.
|
||||
|
||||
- If using the default PAM Authenticator, a [pluggable authentication module (PAM)](https://en.wikipedia.org/wiki/Pluggable_authentication_module).
|
||||
- TLS certificate and key for HTTPS communication
|
||||
- Domain name
|
||||
|
||||
@@ -80,11 +84,12 @@ To install JupyterHub along with its dependencies including nodejs/npm:
|
||||
conda install -c conda-forge jupyterhub
|
||||
```
|
||||
|
||||
If you plan to run notebook servers locally, install JupyterLab or Jupyter notebook:
|
||||
If you plan to run notebook servers locally, install the Jupyter notebook
|
||||
or JupyterLab:
|
||||
|
||||
```bash
|
||||
conda install jupyterlab
|
||||
conda install notebook
|
||||
conda install jupyterlab
|
||||
```
|
||||
|
||||
#### Using `pip`
|
||||
@@ -93,13 +98,13 @@ JupyterHub can be installed with `pip`, and the proxy with `npm`:
|
||||
|
||||
```bash
|
||||
npm install -g configurable-http-proxy
|
||||
python3 -m pip install jupyterhub
|
||||
python3 -m pip install jupyterhub
|
||||
```
|
||||
|
||||
If you plan to run notebook servers locally, you will need to install
|
||||
[JupyterLab or Jupyter notebook](https://jupyter.readthedocs.io/en/latest/install.html):
|
||||
If you plan to run notebook servers locally, you will need to install the
|
||||
[Jupyter notebook](https://jupyter.readthedocs.io/en/latest/install.html)
|
||||
package:
|
||||
|
||||
python3 -m pip install --upgrade jupyterlab
|
||||
python3 -m pip install --upgrade notebook
|
||||
|
||||
### Run the Hub server
|
||||
@@ -108,17 +113,18 @@ To start the Hub server, run the command:
|
||||
|
||||
jupyterhub
|
||||
|
||||
Visit `http://localhost:8000` in your browser, and sign in with your system username and password.
|
||||
Visit `https://localhost:8000` in your browser, and sign in with your unix
|
||||
PAM credentials.
|
||||
|
||||
_Note_: To allow multiple users to sign in to the server, you will need to
|
||||
run the `jupyterhub` command as a _privileged user_, such as root.
|
||||
*Note*: To allow multiple users to sign into the server, you will need to
|
||||
run the `jupyterhub` command as a *privileged user*, such as root.
|
||||
The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges)
|
||||
describes how to run the server as a _less privileged user_, which requires
|
||||
describes how to run the server as a *less privileged user*, which requires
|
||||
more configuration of the system.
|
||||
|
||||
## Configuration
|
||||
|
||||
The [Getting Started](https://jupyterhub.readthedocs.io/en/latest/tutorial/index.html#getting-started) section of the
|
||||
The [Getting Started](https://jupyterhub.readthedocs.io/en/latest/getting-started/index.html) section of the
|
||||
documentation explains the common steps in setting up JupyterHub.
|
||||
|
||||
The [**JupyterHub tutorial**](https://github.com/jupyterhub/jupyterhub-tutorial)
|
||||
@@ -132,18 +138,18 @@ To generate a default config file with settings and descriptions:
|
||||
|
||||
### Start the Hub
|
||||
|
||||
To start the Hub on a specific url and port `10.0.1.2:443` with **https**:
|
||||
To start the Hub on a specific url and port ``10.0.1.2:443`` with **https**:
|
||||
|
||||
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
|
||||
|
||||
### Authenticators
|
||||
|
||||
| Authenticator | Description |
|
||||
| ---------------------------------------------------------------------------- | ------------------------------------------------- |
|
||||
| PAMAuthenticator | Default, built-in authenticator |
|
||||
| [OAuthenticator](https://github.com/jupyterhub/oauthenticator) | OAuth + JupyterHub Authenticator = OAuthenticator |
|
||||
| [ldapauthenticator](https://github.com/jupyterhub/ldapauthenticator) | Simple LDAP Authenticator Plugin for JupyterHub |
|
||||
| [kerberosauthenticator](https://github.com/jupyterhub/kerberosauthenticator) | Kerberos Authenticator Plugin for JupyterHub |
|
||||
| Authenticator | Description |
|
||||
| --------------------------------------------------------------------------- | ------------------------------------------------- |
|
||||
| PAMAuthenticator | Default, built-in authenticator |
|
||||
| [OAuthenticator](https://github.com/jupyterhub/oauthenticator) | OAuth + JupyterHub Authenticator = OAuthenticator |
|
||||
| [ldapauthenticator](https://github.com/jupyterhub/ldapauthenticator) | Simple LDAP Authenticator Plugin for JupyterHub |
|
||||
| [kdcAuthenticator](https://github.com/bloomberg/jupyterhub-kdcauthenticator)| Kerberos Authenticator Plugin for JupyterHub |
|
||||
|
||||
### Spawners
|
||||
|
||||
@@ -155,7 +161,6 @@ To start the Hub on a specific url and port `10.0.1.2:443` with **https**:
|
||||
| [sudospawner](https://github.com/jupyterhub/sudospawner) | Spawn single-user servers without being root |
|
||||
| [systemdspawner](https://github.com/jupyterhub/systemdspawner) | Spawn single-user notebook servers using systemd |
|
||||
| [batchspawner](https://github.com/jupyterhub/batchspawner) | Designed for clusters using batch scheduling software |
|
||||
| [yarnspawner](https://github.com/jupyterhub/yarnspawner) | Spawn single-user notebook servers distributed on a Hadoop cluster |
|
||||
| [wrapspawner](https://github.com/jupyterhub/wrapspawner) | WrapSpawner and ProfilesSpawner enabling runtime configuration of spawners |
|
||||
|
||||
## Docker

@@ -181,7 +186,7 @@ this a good choice for **testing JupyterHub on your desktop or laptop**.

If you want to run docker on a computer that has a public IP then you should
(as in MUST) **secure it with ssl** by adding ssl options to your docker
configuration or by using an ssl enabled proxy.
configuration or by using a ssl enabled proxy.

[Mounting volumes](https://docs.docker.com/engine/admin/volumes/volumes/) will
allow you to **store data outside the docker image (host system) so it will be persistent**, even when you start

@@ -194,14 +199,11 @@ These accounts will be used for authentication in JupyterHub's default configura

## Contributing

If you would like to contribute to the project, please read our
[contributor documentation](https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html)
[contributor documentation](http://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html)
and the [`CONTRIBUTING.md`](CONTRIBUTING.md). The `CONTRIBUTING.md` file
explains how to set up a development installation, how to run the test suite,
and how to contribute to documentation.

For a high-level view of the vision and next directions of the project, see the
[JupyterHub community roadmap](docs/source/contributing/roadmap.md).

### A note about platform support

JupyterHub is supported on Linux/Unix based systems.

@@ -221,22 +223,20 @@ docker container or Linux VM.
We use a shared copyright model that enables all contributors to maintain the
copyright on their contributions.

All code is licensed under the terms of the [revised BSD license](./COPYING.md).
All code is licensed under the terms of the revised BSD license.

## Help and resources

We encourage you to ask questions and share ideas on the [Jupyter community forum](https://discourse.jupyter.org/).
You can also talk with us on our JupyterHub [Gitter](https://gitter.im/jupyterhub/jupyterhub) channel.
We encourage you to ask questions on the [Jupyter mailing list](https://groups.google.com/forum/#!forum/jupyter).
To participate in development discussions or get help, talk with us on
our JupyterHub [Gitter](https://gitter.im/jupyterhub/jupyterhub) channel.

- [Reporting Issues](https://github.com/jupyterhub/jupyterhub/issues)
- [JupyterHub tutorial](https://github.com/jupyterhub/jupyterhub-tutorial)
- [Documentation for JupyterHub](https://jupyterhub.readthedocs.io/en/latest/)
- [Documentation for JupyterHub's REST API][rest api]
- [Documentation for Project Jupyter](http://jupyter.readthedocs.io/en/latest/index.html)
- [Documentation for JupyterHub](https://jupyterhub.readthedocs.io/en/latest/) | [PDF (latest)](https://media.readthedocs.org/pdf/jupyterhub/latest/jupyterhub.pdf) | [PDF (stable)](https://media.readthedocs.org/pdf/jupyterhub/stable/jupyterhub.pdf)
- [Documentation for JupyterHub's REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default)
- [Documentation for Project Jupyter](http://jupyter.readthedocs.io/en/latest/index.html) | [PDF](https://media.readthedocs.org/pdf/jupyter/latest/jupyter.pdf)
- [Project Jupyter website](https://jupyter.org)
- [Project Jupyter community](https://jupyter.org/community)

JupyterHub follows the Jupyter [Community Guides](https://jupyter.readthedocs.io/en/latest/community/content-community.html).

---
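The REST API linked above is served under `/hub/api` and authenticated with an `Authorization: token ...` header (see the `rest-api.yml` later in this diff). A quick smoke test from the command line, with a placeholder hostname and token:

```shell
# List users via the REST API (hub.example.com and the token are placeholders).
export JUPYTERHUB_API_TOKEN=<your-api-token>
curl -H "Authorization: token $JUPYTERHUB_API_TOKEN" \
    https://hub.example.com/hub/api/users
```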
55
RELEASE.md
@@ -1,55 +0,0 @@
# How to make a release

`jupyterhub` is a package available on [PyPI][] and [conda-forge][].
These are instructions on how to make a release.

## Pre-requisites

- Push rights to [jupyterhub/jupyterhub][]
- Push rights to [conda-forge/jupyterhub-feedstock][]

## Steps to make a release

1. Create a PR updating `docs/source/changelog.md` with [github-activity][] and
   continue only when it's merged.

   ```shell
   pip install github-activity

   github-activity --heading-level=3 jupyterhub/jupyterhub
   ```

1. Checkout main and make sure it is up to date.

   ```shell
   git checkout main
   git fetch origin main
   git reset --hard origin/main
   ```

1. Update the version, make commits, and push a git tag with `tbump`.

   ```shell
   pip install tbump
   tbump --dry-run ${VERSION}

   tbump ${VERSION}
   ```

   Following this, the [CI system][] will build and publish a release.

1. Reset the version back to dev, e.g. `2.1.0.dev` after releasing `2.0.0`

   ```shell
   tbump --no-tag ${NEXT_VERSION}.dev
   ```

1. Following the release to PyPI, an automated PR should arrive to
   [conda-forge/jupyterhub-feedstock][] with instructions.

[pypi]: https://pypi.org/project/jupyterhub/
[conda-forge]: https://anaconda.org/conda-forge/jupyterhub
[jupyterhub/jupyterhub]: https://github.com/jupyterhub/jupyterhub
[conda-forge/jupyterhub-feedstock]: https://github.com/conda-forge/jupyterhub-feedstock
[github-activity]: https://github.com/executablebooks/github-activity
[ci system]: https://github.com/jupyterhub/jupyterhub/actions/workflows/release.yml
@@ -1,5 +0,0 @@
# Reporting a Vulnerability

If you believe you’ve found a security vulnerability in a Jupyter
project, please report it to security@ipython.org. If you prefer to
encrypt your security reports, you can use [this PGP public key](https://jupyter-notebook.readthedocs.io/en/stable/_downloads/1d303a645f2505a8fd283826fafc9908/ipython_security.asc).
@@ -1,16 +1,19 @@
#!/usr/bin/env python

# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.

"""
bower-lite

Since Bower's on its way out,
stage frontend dependencies from node_modules into components
"""

import json
import os
import shutil
from os.path import join
import shutil

HERE = os.path.abspath(os.path.dirname(__file__))

@@ -29,5 +32,5 @@ dependencies = package_json['dependencies']
for dep in dependencies:
    src = join(node_modules, dep)
    dest = join(components, dep)
    print(f"{src} -> {dest}")
    print("%s -> %s" % (src, dest))
    shutil.copytree(src, dest)
@@ -1,36 +0,0 @@
#!/usr/bin/env python
# Check that installed package contains everything we expect


from pathlib import Path

import jupyterhub
from jupyterhub._data import DATA_FILES_PATH

print("Checking jupyterhub._data", end=" ")
print(f"DATA_FILES_PATH={DATA_FILES_PATH}", end=" ")
DATA_FILES_PATH = Path(DATA_FILES_PATH)
assert DATA_FILES_PATH.is_dir(), DATA_FILES_PATH
for subpath in (
    "templates/spawn.html",
    "static/css/style.min.css",
    "static/components/jquery/dist/jquery.js",
    "static/js/admin-react.js",
):
    path = DATA_FILES_PATH / subpath
    assert path.is_file(), path

print("OK")

print("Checking package_data", end=" ")
jupyterhub_path = Path(jupyterhub.__file__).parent.resolve()
for subpath in (
    "alembic.ini",
    "alembic/versions/833da8570507_rbac.py",
    "event-schemas/server-actions/v1.yaml",
    "singleuser/templates/page.html",
):
    path = jupyterhub_path / subpath
    assert path.is_file(), path

print("OK")
@@ -1,27 +0,0 @@
#!/usr/bin/env python
# Check that sdist contains everything we expect

import sys
import tarfile

expected_files = [
    "docs/requirements.txt",
    "jsx/package.json",
    "package.json",
    "README.md",
]

assert len(sys.argv) == 2, "Expected one file"
print(f"Checking {sys.argv[1]}")

tar = tarfile.open(name=sys.argv[1], mode="r:gz")
try:
    # Remove leading jupyterhub-VERSION/
    filelist = {f.partition('/')[2] for f in tar.getnames()}
finally:
    tar.close()

for e in expected_files:
    assert e in filelist, f"{e} not found"

print("OK")
@@ -1,60 +1,50 @@
#!/usr/bin/env bash
# The goal of this script is to start a database server as a docker container.
#
# Required environment variables:
# - DB: The database server to start, either "postgres" or "mysql".
#
# - PGUSER/PGPASSWORD: For the creation of a postgresql user with associated
#   password.
# source this file to setup postgres and mysql
# for local testing (as similar as possible to docker)

set -eu
set -e

# Stop and remove any existing database container
DOCKER_CONTAINER="hub-test-$DB"
docker rm -f "$DOCKER_CONTAINER" 2>/dev/null || true
export MYSQL_HOST=127.0.0.1
export MYSQL_TCP_PORT=${MYSQL_TCP_PORT:-13306}
export PGHOST=127.0.0.1
NAME="hub-test-$DB"
DOCKER_RUN="docker run -d --name $NAME"

# Prepare environment variables to startup and await readiness of either a mysql
# or postgresql server.
if [[ "$DB" == "mysql" ]]; then
    # Environment variables can influence both the mysql server in the docker
    # container and the mysql client.
    #
    # ref server: https://hub.docker.com/_/mysql/
    # ref client: https://dev.mysql.com/doc/refman/5.7/en/setting-environment-variables.html
    #
    DOCKER_RUN_ARGS="-p 3306:3306 --env MYSQL_ALLOW_EMPTY_PASSWORD=1 mysql:8.0"
    READINESS_CHECK="mysql --user root --execute \q"
elif [[ "$DB" == "postgres" ]]; then
    # Environment variables can influence both the postgresql server in the
    # docker container and the postgresql client (psql).
    #
    # ref server: https://hub.docker.com/_/postgres/
    # ref client: https://www.postgresql.org/docs/9.5/libpq-envars.html
    #
    # POSTGRES_USER / POSTGRES_PASSWORD will create a user on startup of the
    # postgres server, but PGUSER and PGPASSWORD are the environment variables
    # used by the postgresql client psql, so we configure the user based on how
    # we want to connect.
    #
    DOCKER_RUN_ARGS="-p 5432:5432 --env "POSTGRES_USER=${PGUSER}" --env "POSTGRES_PASSWORD=${PGPASSWORD}" postgres:15.1"
    READINESS_CHECK="psql --command \q"
else
    echo '$DB must be mysql or postgres'
    exit 1
fi
docker rm -f "$NAME" 2>/dev/null || true

# Start the database server
docker run --detach --name "$DOCKER_CONTAINER" $DOCKER_RUN_ARGS
case "$DB" in
    "mysql")
        RUN_ARGS="-e MYSQL_ALLOW_EMPTY_PASSWORD=1 -p $MYSQL_TCP_PORT:3306 mysql:5.7"
        CHECK="mysql --host $MYSQL_HOST --port $MYSQL_TCP_PORT --user root -e \q"
        ;;
    "postgres")
        RUN_ARGS="-p 5432:5432 postgres:9.5"
        CHECK="psql --user postgres -c \q"
        ;;
    *)
        echo '$DB must be mysql or postgres'
        exit 1
esac

$DOCKER_RUN $RUN_ARGS

# Wait for the database server to start
echo -n "waiting for $DB "
for i in {1..60}; do
    if $READINESS_CHECK; then
        echo 'done'
        break
    else
        echo -n '.'
        sleep 1
    fi
    if $CHECK; then
        echo 'done'
        break
    else
        echo -n '.'
        sleep 1
    fi
done
$READINESS_CHECK
$CHECK


echo -e "
Set these environment variables:

    export MYSQL_HOST=127.0.0.1
    export MYSQL_TCP_PORT=$MYSQL_TCP_PORT
    export PGHOST=127.0.0.1
"
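A typical local invocation of the new script looks roughly like the following; the script path `ci/docker-db.sh` and the credential values are assumptions for illustration, not taken from the diff:

```shell
# Start a throwaway postgres container for the test suite (illustrative values).
export DB=postgres
export PGUSER=hub_test
export PGPASSWORD=hub_test_password
bash ci/docker-db.sh
```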
@@ -1,27 +1,27 @@
#!/usr/bin/env bash
# The goal of this script is to initialize a running database server with clean
# databases for use during tests.
#
# Required environment variables:
# - DB: The database server to start, either "postgres" or "mysql".
# initialize jupyterhub databases for testing

set -eu
set -e

# Prepare env vars SQL_CLIENT and EXTRA_CREATE_DATABASE_ARGS
if [[ "$DB" == "mysql" ]]; then
    SQL_CLIENT="mysql --user root --execute "
    EXTRA_CREATE_DATABASE_ARGS='CHARACTER SET utf8 COLLATE utf8_general_ci'
elif [[ "$DB" == "postgres" ]]; then
    SQL_CLIENT="psql --command "
else
    echo '$DB must be mysql or postgres'
    exit 1
fi
MYSQL="mysql --user root --host $MYSQL_HOST --port $MYSQL_TCP_PORT -e "
PSQL="psql --user postgres -c "

case "$DB" in
    "mysql")
        EXTRA_CREATE='CHARACTER SET utf8 COLLATE utf8_general_ci'
        SQL="$MYSQL"
        ;;
    "postgres")
        SQL="$PSQL"
        ;;
    *)
        echo '$DB must be mysql or postgres'
        exit 1
esac

# Configure a set of databases in the database server for upgrade tests
# this list must be in sync with versions in test_db.py:test_upgrade
set -x
for SUFFIX in '' _upgrade_110 _upgrade_122 _upgrade_130 _upgrade_150 _upgrade_211; do
    $SQL_CLIENT "DROP DATABASE jupyterhub${SUFFIX};" 2>/dev/null || true
    $SQL_CLIENT "CREATE DATABASE jupyterhub${SUFFIX} ${EXTRA_CREATE_DATABASE_ARGS:-};"

for SUFFIX in '' _upgrade_072 _upgrade_081; do
    $SQL "DROP DATABASE jupyterhub${SUFFIX};" 2>/dev/null || true
    $SQL "CREATE DATABASE jupyterhub${SUFFIX} ${EXTRA_CREATE};"
done
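Once the database container from the previous script is up, initializing the test databases is a single call; again the script path is an assumption used only for illustration:

```shell
# Create the clean jupyterhub* databases used by the upgrade tests.
DB=postgres bash ci/init-db.sh
```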
@@ -1,16 +0,0 @@
# Demo JupyterHub Docker image
#
# This should only be used for demo or testing and not as a base image to build on.
#
# It includes the notebook package and it uses the DummyAuthenticator and the SimpleLocalProcessSpawner.
ARG BASE_IMAGE=jupyterhub/jupyterhub-onbuild
FROM ${BASE_IMAGE}

# Install the notebook package
RUN python3 -m pip install notebook

# Create a demo user
RUN useradd --create-home demo
RUN chown demo .

USER demo
@@ -1,26 +0,0 @@
## Demo Dockerfile

This is a demo JupyterHub Docker image to help you get a quick overview of what
JupyterHub is and how it works.

It uses the SimpleLocalProcessSpawner to spawn new user servers and
DummyAuthenticator for authentication.
The DummyAuthenticator allows you to log in with any username & password and the
SimpleLocalProcessSpawner allows starting servers without having to create a
local user for each JupyterHub user.

### Important!

This should only be used for demo or testing purposes!
It shouldn't be used as a base image to build on.

### Try it

1. `cd` to the root of your jupyterhub repo.

2. Build the demo image with `docker build -t jupyterhub-demo demo-image`.

3. Run the demo image with `docker run -d -p 8000:8000 jupyterhub-demo`.

4. Visit http://localhost:8000 and log in with any username and password.
5. Happy demo-ing :tada:!
@@ -1,7 +0,0 @@
# Configuration file for jupyterhub-demo

c = get_config()

# Use DummyAuthenticator and SimpleSpawner
c.JupyterHub.spawner_class = "simple"
c.JupyterHub.authenticator_class = "dummy"
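The same configuration also works outside the demo image; assuming JupyterHub and the notebook package are installed locally, starting the Hub with it is just:

```shell
# Run the Hub with the demo configuration file above.
jupyterhub -f jupyterhub_config.py
```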
14
dev-requirements.txt
Normal file
@@ -0,0 +1,14 @@
-r requirements.txt
mock
beautifulsoup4
codecov
cryptography
pytest-cov
pytest-tornado
pytest>=3.3
notebook
requests-mock
virtualenv
# temporary pin of attrs for jsonschema 0.3.0a1
# seems to be a pip bug
attrs>=17.4.0
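This file layers the test dependencies on top of the runtime requirements, so a development environment is set up with a single pip call:

```shell
pip install -r dev-requirements.txt
```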
11
dockerfiles/Dockerfile.alpine
Normal file
@@ -0,0 +1,11 @@
FROM python:3.6.3-alpine3.6

ARG JUPYTERHUB_VERSION=0.8.1

RUN pip3 install --no-cache jupyterhub==${JUPYTERHUB_VERSION}
ENV LANG=en_US.UTF-8

USER nobody
CMD ["jupyterhub"]
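Because the JupyterHub version is a build argument, other releases can be baked into the image without editing the Dockerfile; the image tag and version below are just example values:

```shell
docker build -t jupyterhub-alpine \
    --build-arg JUPYTERHUB_VERSION=0.9.4 \
    -f dockerfiles/Dockerfile.alpine .
```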
21
dockerfiles/README.md
Normal file
@@ -0,0 +1,21 @@
## What is Dockerfile.alpine

Dockerfile.alpine contains the base image for JupyterHub. It does not work independently, but only as part of a full JupyterHub cluster.

## How to use it?

To use it you need:

1. A running configurable-http-proxy, whose API is accessible.
2. A jupyterhub_config file.
3. Authentication and other libraries required by the specific jupyterhub_config file.

## Steps to test it outside a cluster

* start configurable-http-proxy in another container
* specify CONFIGPROXY_AUTH_TOKEN env in both containers
* put both containers on the same network (e.g. docker network create jupyterhub; docker run ... --net jupyterhub)
* tell jupyterhub where CHP is (e.g. c.ConfigurableHTTPProxy.api_url = 'http://chp:8001')
* tell jupyterhub not to start the proxy itself (c.ConfigurableHTTPProxy.should_start = False)
* Use the dummy authenticator for ease of testing. Update the following in the jupyterhub_config file:
  - c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
  - c.DummyAuthenticator.password = "your strong password"
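The config-side steps above combine into a short `jupyterhub_config.py`. A sketch, reusing the example values from the list (container name `chp`, the dummy authenticator, and a placeholder password):

```python
# jupyterhub_config.py -- run against an externally started configurable-http-proxy
c.ConfigurableHTTPProxy.api_url = 'http://chp:8001'  # where CHP's API listens
c.ConfigurableHTTPProxy.should_start = False         # do not launch the proxy ourselves
c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
c.DummyAuthenticator.password = "your strong password"
```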
@@ -1,14 +0,0 @@
|
||||
import os
|
||||
|
||||
from jupyterhub._data import DATA_FILES_PATH
|
||||
|
||||
print(f"DATA_FILES_PATH={DATA_FILES_PATH}")
|
||||
|
||||
for sub_path in (
|
||||
"templates",
|
||||
"static/components",
|
||||
"static/css/style.min.css",
|
||||
"static/js/admin-react.js",
|
||||
):
|
||||
path = os.path.join(DATA_FILES_PATH, sub_path)
|
||||
assert os.path.exists(path), path
|
238
docs/Makefile
@@ -1,62 +1,206 @@
|
||||
# Makefile for Sphinx documentation generated by sphinx-quickstart
|
||||
# ----------------------------------------------------------------------------
|
||||
# Makefile for Sphinx documentation
|
||||
#
|
||||
|
||||
# You can set these variables from the command line, and also
|
||||
# from the environment for the first two.
|
||||
SPHINXOPTS ?= --color -W --keep-going
|
||||
SPHINXBUILD ?= sphinx-build
|
||||
SOURCEDIR = source
|
||||
BUILDDIR = _build
|
||||
# You can set these variables from the command line.
|
||||
SPHINXOPTS = "-W"
|
||||
SPHINXBUILD = sphinx-build
|
||||
PAPER =
|
||||
BUILDDIR = build
|
||||
|
||||
# User-friendly check for sphinx-build
|
||||
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
|
||||
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
|
||||
endif
|
||||
|
||||
# Internal variables.
|
||||
PAPEROPT_a4 = -D latex_paper_size=a4
|
||||
PAPEROPT_letter = -D latex_paper_size=letter
|
||||
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
|
||||
# the i18n builder cannot share the environment and doctrees with the others
|
||||
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
|
||||
|
||||
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext
|
||||
|
||||
# Put it first so that "make" without argument is like "make help".
|
||||
help:
|
||||
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS)
|
||||
@echo "Please use \`make <target>' where <target> is one of"
|
||||
@echo " html to make standalone HTML files"
|
||||
@echo " dirhtml to make HTML files named index.html in directories"
|
||||
@echo " singlehtml to make a single large HTML file"
|
||||
@echo " pickle to make pickle files"
|
||||
@echo " json to make JSON files"
|
||||
@echo " htmlhelp to make HTML files and a HTML help project"
|
||||
@echo " qthelp to make HTML files and a qthelp project"
|
||||
@echo " applehelp to make an Apple Help Book"
|
||||
@echo " devhelp to make HTML files and a Devhelp project"
|
||||
@echo " epub to make an epub"
|
||||
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
|
||||
@echo " latexpdf to make LaTeX files and run them through pdflatex"
|
||||
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
|
||||
@echo " text to make text files"
|
||||
@echo " man to make manual pages"
|
||||
@echo " texinfo to make Texinfo files"
|
||||
@echo " info to make Texinfo files and run them through makeinfo"
|
||||
@echo " gettext to make PO message catalogs"
|
||||
@echo " changes to make an overview of all changed/added/deprecated items"
|
||||
@echo " xml to make Docutils-native XML files"
|
||||
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
|
||||
@echo " linkcheck to check all external links for integrity"
|
||||
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
|
||||
@echo " coverage to run coverage check of the documentation (if enabled)"
|
||||
@echo " spelling to run spell check on documentation"
|
||||
|
||||
.PHONY: help Makefile metrics scopes
|
||||
clean:
|
||||
rm -rf $(BUILDDIR)/*
|
||||
|
||||
# Catch-all target: route all unknown targets to Sphinx using the new
|
||||
# "make mode" option.
|
||||
#
|
||||
# Several sphinx-build commands can be used through this, for example:
|
||||
#
|
||||
# - make clean
|
||||
# - make linkcheck
|
||||
# - make spelling
|
||||
#
|
||||
%: Makefile
|
||||
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS)
|
||||
node_modules: package.json
|
||||
npm install && touch node_modules
|
||||
|
||||
rest-api: source/_static/rest-api/index.html
|
||||
|
||||
# Manually added targets - related to code generation
|
||||
# ----------------------------------------------------------------------------
|
||||
source/_static/rest-api/index.html: rest-api.yml node_modules
|
||||
npm run rest-api
|
||||
|
||||
# For local development:
|
||||
# - builds the html
|
||||
# - NOTE: If the pre-requisites for the html target is updated, also update the
|
||||
# Read The Docs section in docs/source/conf.py.
|
||||
#
|
||||
html: metrics scopes
|
||||
$(SPHINXBUILD) -b html "$(SOURCEDIR)" "$(BUILDDIR)/html" $(SPHINXOPTS)
|
||||
html: rest-api
|
||||
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
|
||||
@echo
|
||||
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
|
||||
|
||||
metrics: source/reference/metrics.md
|
||||
source/reference/metrics.md:
|
||||
python3 generate-metrics.py
|
||||
dirhtml:
|
||||
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
|
||||
@echo
|
||||
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
|
||||
|
||||
scopes: source/rbac/scope-table.md
|
||||
source/rbac/scope-table.md:
|
||||
python3 source/rbac/generate-scope-table.py
|
||||
singlehtml:
|
||||
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
|
||||
@echo
|
||||
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
|
||||
|
||||
pickle:
|
||||
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
|
||||
@echo
|
||||
@echo "Build finished; now you can process the pickle files."
|
||||
|
||||
# Manually added targets - related to development
|
||||
# ----------------------------------------------------------------------------
|
||||
json:
|
||||
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
|
||||
@echo
|
||||
@echo "Build finished; now you can process the JSON files."
|
||||
|
||||
# For local development:
|
||||
# - requires sphinx-autobuild, see
|
||||
# https://sphinxcontrib-spelling.readthedocs.io/en/latest/
|
||||
# - builds and rebuilds html on changes to source, but does not re-generate
|
||||
# metrics/scopes files
|
||||
# - starts a livereload enabled webserver and opens up a browser
|
||||
devenv: html
|
||||
sphinx-autobuild -b html --open-browser "$(SOURCEDIR)" "$(BUILDDIR)/html"
|
||||
htmlhelp:
|
||||
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
|
||||
@echo
|
||||
@echo "Build finished; now you can run HTML Help Workshop with the" \
|
||||
".hhp project file in $(BUILDDIR)/htmlhelp."
|
||||
|
||||
qthelp:
|
||||
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
|
||||
@echo
|
||||
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
|
||||
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
|
||||
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/JupyterHub.qhcp"
|
||||
@echo "To view the help file:"
|
||||
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/JupyterHub.qhc"
|
||||
|
||||
applehelp:
|
||||
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
|
||||
@echo
|
||||
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
|
||||
@echo "N.B. You won't be able to view it unless you put it in" \
|
||||
"~/Library/Documentation/Help or install it in your application" \
|
||||
"bundle."
|
||||
|
||||
devhelp:
|
||||
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
|
||||
@echo
|
||||
@echo "Build finished."
|
||||
@echo "To view the help file:"
|
||||
@echo "# mkdir -p $$HOME/.local/share/devhelp/JupyterHub"
|
||||
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/JupyterHub"
|
||||
@echo "# devhelp"
|
||||
|
||||
epub:
|
||||
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
|
||||
@echo
|
||||
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
|
||||
|
||||
latex:
|
||||
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
|
||||
@echo
|
||||
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
|
||||
@echo "Run \`make' in that directory to run these through (pdf)latex" \
|
||||
"(use \`make latexpdf' here to do that automatically)."
|
||||
|
||||
latexpdf:
|
||||
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
|
||||
@echo "Running LaTeX files through pdflatex..."
|
||||
$(MAKE) -C $(BUILDDIR)/latex all-pdf
|
||||
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
|
||||
|
||||
latexpdfja:
|
||||
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
|
||||
@echo "Running LaTeX files through platex and dvipdfmx..."
|
||||
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
|
||||
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
|
||||
|
||||
text:
|
||||
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
|
||||
@echo
|
||||
@echo "Build finished. The text files are in $(BUILDDIR)/text."
|
||||
|
||||
man:
|
||||
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
|
||||
@echo
|
||||
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
|
||||
|
||||
texinfo:
|
||||
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
|
||||
@echo
|
||||
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
|
||||
@echo "Run \`make' in that directory to run these through makeinfo" \
|
||||
"(use \`make info' here to do that automatically)."
|
||||
|
||||
info:
|
||||
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
|
||||
@echo "Running Texinfo files through makeinfo..."
|
||||
make -C $(BUILDDIR)/texinfo info
|
||||
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
|
||||
|
||||
gettext:
|
||||
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
|
||||
@echo
|
||||
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
|
||||
|
||||
changes:
|
||||
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
|
||||
@echo
|
||||
@echo "The overview file is in $(BUILDDIR)/changes."
|
||||
|
||||
linkcheck:
|
||||
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
|
||||
@echo
|
||||
@echo "Link check complete; look for any errors in the above output " \
|
||||
"or in $(BUILDDIR)/linkcheck/output.txt."
|
||||
|
||||
spelling:
|
||||
$(SPHINXBUILD) -b spelling $(ALLSPHINXOPTS) $(BUILDDIR)/spelling
|
||||
@echo
|
||||
@echo "Spell check complete; look for any errors in the above output " \
|
||||
"or in $(BUILDDIR)/spelling/output.txt."
|
||||
doctest:
|
||||
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
|
||||
@echo "Testing of doctests in the sources finished, look at the " \
|
||||
"results in $(BUILDDIR)/doctest/output.txt."
|
||||
|
||||
coverage:
|
||||
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
|
||||
@echo "Testing of coverage in the sources finished, look at the " \
|
||||
"results in $(BUILDDIR)/coverage/python.txt."
|
||||
|
||||
xml:
|
||||
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
|
||||
@echo
|
||||
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
|
||||
|
||||
pseudoxml:
|
||||
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
|
||||
@echo
|
||||
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
|
||||
|
22
docs/environment.yml
Normal file
@@ -0,0 +1,22 @@
|
||||
# ReadTheDocs uses the `environment.yaml` so make sure to update that as well
|
||||
# if you change the dependencies of JupyterHub in the various `requirements.txt`
|
||||
name: jhub_docs
|
||||
channels:
|
||||
- conda-forge
|
||||
dependencies:
|
||||
- nodejs
|
||||
- python=3.6
|
||||
- alembic
|
||||
- jinja2
|
||||
- pamela
|
||||
- requests
|
||||
- sqlalchemy>=1
|
||||
- tornado>=5.0
|
||||
- traitlets>=4.1
|
||||
- sphinx>=1.7
|
||||
- pip:
|
||||
- python-oauth2
|
||||
- recommonmark==0.4.0
|
||||
- async_generator
|
||||
- prometheus_client
|
||||
- attrs>=17.4.0
|
@@ -1,53 +0,0 @@
|
||||
import os
|
||||
|
||||
from pytablewriter import MarkdownTableWriter
|
||||
|
||||
import jupyterhub.metrics
|
||||
|
||||
HERE = os.path.abspath(os.path.dirname(__file__))
|
||||
|
||||
|
||||
class Generator:
|
||||
@classmethod
|
||||
def create_writer(cls, table_name, headers, values):
|
||||
writer = MarkdownTableWriter()
|
||||
writer.table_name = table_name
|
||||
writer.headers = headers
|
||||
writer.value_matrix = values
|
||||
writer.margin = 1
|
||||
return writer
|
||||
|
||||
def _parse_metrics(self):
|
||||
table_rows = []
|
||||
for name in dir(jupyterhub.metrics):
|
||||
obj = getattr(jupyterhub.metrics, name)
|
||||
if obj.__class__.__module__.startswith('prometheus_client.'):
|
||||
for metric in obj.describe():
|
||||
table_rows.append([metric.type, metric.name, metric.documentation])
|
||||
return table_rows
|
||||
|
||||
def prometheus_metrics(self):
|
||||
generated_directory = f"{HERE}/source/reference"
|
||||
if not os.path.exists(generated_directory):
|
||||
os.makedirs(generated_directory)
|
||||
|
||||
filename = f"{generated_directory}/metrics.md"
|
||||
table_name = ""
|
||||
headers = ["Type", "Name", "Description"]
|
||||
values = self._parse_metrics()
|
||||
writer = self.create_writer(table_name, headers, values)
|
||||
|
||||
with open(filename, 'w') as f:
|
||||
f.write("# List of Prometheus Metrics\n\n")
|
||||
f.write(writer.dumps())
|
||||
f.write("\n")
|
||||
print(f"Generated {filename}")
|
||||
|
||||
|
||||
def main():
|
||||
doc_generator = Generator()
|
||||
doc_generator.prometheus_metrics()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
274
docs/make.bat
@@ -1,49 +1,263 @@
|
||||
@ECHO OFF
|
||||
|
||||
pushd %~dp0
|
||||
|
||||
REM Command file for Sphinx documentation
|
||||
|
||||
if "%SPHINXBUILD%" == "" (
|
||||
set SPHINXBUILD=--color -W --keep-going
|
||||
)
|
||||
if "%SPHINXBUILD%" == "" (
|
||||
set SPHINXBUILD=sphinx-build
|
||||
)
|
||||
set SOURCEDIR=source
|
||||
set BUILDDIR=_build
|
||||
set BUILDDIR=build
|
||||
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source
|
||||
set I18NSPHINXOPTS=%SPHINXOPTS% source
|
||||
if NOT "%PAPER%" == "" (
|
||||
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
|
||||
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
|
||||
)
|
||||
|
||||
if "%1" == "" goto help
|
||||
if "%1" == "devenv" goto devenv
|
||||
goto default
|
||||
|
||||
if "%1" == "help" (
|
||||
:help
|
||||
echo.Please use `make ^<target^>` where ^<target^> is one of
|
||||
echo. html to make standalone HTML files
|
||||
echo. dirhtml to make HTML files named index.html in directories
|
||||
echo. singlehtml to make a single large HTML file
|
||||
echo. pickle to make pickle files
|
||||
echo. json to make JSON files
|
||||
echo. htmlhelp to make HTML files and a HTML help project
|
||||
echo. qthelp to make HTML files and a qthelp project
|
||||
echo. devhelp to make HTML files and a Devhelp project
|
||||
echo. epub to make an epub
|
||||
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
|
||||
echo. text to make text files
|
||||
echo. man to make manual pages
|
||||
echo. texinfo to make Texinfo files
|
||||
echo. gettext to make PO message catalogs
|
||||
echo. changes to make an overview over all changed/added/deprecated items
|
||||
echo. xml to make Docutils-native XML files
|
||||
echo. pseudoxml to make pseudoxml-XML files for display purposes
|
||||
echo. linkcheck to check all external links for integrity
|
||||
echo. doctest to run all doctests embedded in the documentation if enabled
|
||||
echo. coverage to run coverage check of the documentation if enabled
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "clean" (
|
||||
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
|
||||
del /q /s %BUILDDIR%\*
|
||||
goto end
|
||||
)
|
||||
|
||||
|
||||
:default
|
||||
%SPHINXBUILD% >NUL 2>NUL
|
||||
REM Check if sphinx-build is available and fallback to Python version if any
|
||||
%SPHINXBUILD% 1>NUL 2>NUL
|
||||
if errorlevel 9009 goto sphinx_python
|
||||
goto sphinx_ok
|
||||
|
||||
:sphinx_python
|
||||
|
||||
set SPHINXBUILD=python -m sphinx.__init__
|
||||
%SPHINXBUILD% 2> nul
|
||||
if errorlevel 9009 (
|
||||
echo.
|
||||
echo.The 'sphinx-build' command was not found. Open and read README.md!
|
||||
exit /b 1
|
||||
)
|
||||
%SPHINXBUILD% -M %1 "%SOURCEDIR%" "%BUILDDIR%" %SPHINXOPTS%
|
||||
goto end
|
||||
|
||||
|
||||
:help
|
||||
%SPHINXBUILD% -M help "%SOURCEDIR%" "%BUILDDIR%" %SPHINXOPTS%
|
||||
goto end
|
||||
|
||||
|
||||
:devenv
|
||||
sphinx-autobuild >NUL 2>NUL
|
||||
if errorlevel 9009 (
|
||||
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
|
||||
echo.installed, then set the SPHINXBUILD environment variable to point
|
||||
echo.to the full path of the 'sphinx-build' executable. Alternatively you
|
||||
echo.may add the Sphinx directory to PATH.
|
||||
echo.
|
||||
echo.The 'sphinx-autobuild' command was not found. Open and read README.md!
|
||||
echo.If you don't have Sphinx installed, grab it from
|
||||
echo.http://sphinx-doc.org/
|
||||
exit /b 1
|
||||
)
|
||||
sphinx-autobuild -b html --open-browser "%SOURCEDIR%" "%BUILDDIR%/html"
|
||||
goto end
|
||||
|
||||
:sphinx_ok
|
||||
|
||||
|
||||
if "%1" == "html" (
|
||||
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "dirhtml" (
|
||||
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "singlehtml" (
|
||||
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "pickle" (
|
||||
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished; now you can process the pickle files.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "json" (
|
||||
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished; now you can process the JSON files.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "htmlhelp" (
|
||||
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished; now you can run HTML Help Workshop with the ^
|
||||
.hhp project file in %BUILDDIR%/htmlhelp.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "qthelp" (
|
||||
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished; now you can run "qcollectiongenerator" with the ^
|
||||
.qhcp project file in %BUILDDIR%/qthelp, like this:
|
||||
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\JupyterHub.qhcp
|
||||
echo.To view the help file:
|
||||
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\JupyterHub.ghc
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "devhelp" (
|
||||
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "epub" (
|
||||
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The epub file is in %BUILDDIR%/epub.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "latex" (
|
||||
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "latexpdf" (
|
||||
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
|
||||
cd %BUILDDIR%/latex
|
||||
make all-pdf
|
||||
cd %~dp0
|
||||
echo.
|
||||
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "latexpdfja" (
|
||||
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
|
||||
cd %BUILDDIR%/latex
|
||||
make all-pdf-ja
|
||||
cd %~dp0
|
||||
echo.
|
||||
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "text" (
|
||||
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The text files are in %BUILDDIR%/text.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "man" (
|
||||
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The manual pages are in %BUILDDIR%/man.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "texinfo" (
|
||||
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "gettext" (
|
||||
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "changes" (
|
||||
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.The overview file is in %BUILDDIR%/changes.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "linkcheck" (
|
||||
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Link check complete; look for any errors in the above output ^
|
||||
or in %BUILDDIR%/linkcheck/output.txt.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "doctest" (
|
||||
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Testing of doctests in the sources finished, look at the ^
|
||||
results in %BUILDDIR%/doctest/output.txt.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "coverage" (
|
||||
%SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Testing of coverage in the sources finished, look at the ^
|
||||
results in %BUILDDIR%/coverage/python.txt.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "xml" (
|
||||
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The XML files are in %BUILDDIR%/xml.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "pseudoxml" (
|
||||
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
|
||||
goto end
|
||||
)
|
||||
|
||||
:end
|
||||
popd
|
||||
|
14
docs/package.json
Normal file
@@ -0,0 +1,14 @@
|
||||
{
|
||||
"name": "jupyterhub-docs-build",
|
||||
"version": "0.8.0",
|
||||
"description": "build JupyterHub swagger docs",
|
||||
"scripts": {
|
||||
"rest-api": "bootprint openapi ./rest-api.yml source/_static/rest-api"
|
||||
},
|
||||
"author": "",
|
||||
"license": "BSD-3-Clause",
|
||||
"devDependencies": {
|
||||
"bootprint": "^1.0.0",
|
||||
"bootprint-openapi": "^1.0.0"
|
||||
}
|
||||
}
|
@@ -1,21 +1,5 @@
|
||||
# We install the jupyterhub package to help autodoc-traits inspect it and
|
||||
# generate documentation.
|
||||
#
|
||||
# FIXME: If there is a way for this requirements.txt file to pass a flag that
|
||||
# the build system can intercept to not build the javascript artifacts,
|
||||
# then do so so. That would mean that installing the documentation can
|
||||
# avoid needing node/npm installed.
|
||||
#
|
||||
--editable .
|
||||
|
||||
autodoc-traits
|
||||
jupyterhub-sphinx-theme
|
||||
myst-parser>=0.19
|
||||
pre-commit
|
||||
pytablewriter>=0.56
|
||||
ruamel.yaml
|
||||
sphinx>=4
|
||||
sphinx-copybutton
|
||||
sphinx-jsonschema
|
||||
sphinxext-opengraph
|
||||
sphinxext-rediraffe
|
||||
# ReadTheDocs uses the `environment.yaml` so make sure to update that as well
|
||||
# if you change this file
|
||||
-r ../requirements.txt
|
||||
sphinx>=1.7
|
||||
recommonmark==0.4.0
|
||||
|
739
docs/rest-api.yml
Normal file
@@ -0,0 +1,739 @@
|
||||
# see me at: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default
|
||||
swagger: '2.0'
|
||||
info:
|
||||
title: JupyterHub
|
||||
description: The REST API for JupyterHub
|
||||
version: 0.9.4
|
||||
license:
|
||||
name: BSD-3-Clause
|
||||
schemes:
|
||||
- [http, https]
|
||||
securityDefinitions:
|
||||
token:
|
||||
type: apiKey
|
||||
name: Authorization
|
||||
in: header
|
||||
security:
|
||||
- token: []
|
||||
basePath: /hub/api
|
||||
produces:
|
||||
- application/json
|
||||
consumes:
|
||||
- application/json
|
||||
paths:
|
||||
/:
|
||||
get:
|
||||
summary: Get JupyterHub version
|
||||
description: |
|
||||
This endpoint is not authenticated for the purpose of clients and user
|
||||
to identify the JupyterHub version before setting up authentication.
|
||||
responses:
|
||||
'200':
|
||||
description: The JupyterHub version
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
version:
|
||||
type: string
|
||||
description: The version of JupyterHub itself
|
||||
/info:
|
||||
get:
|
||||
summary: Get detailed info about JupyterHub
|
||||
description: |
|
||||
Detailed JupyterHub information, including Python version,
|
||||
JupyterHub's version and executable path,
|
||||
and which Authenticator and Spawner are active.
|
||||
responses:
|
||||
'200':
|
||||
description: Detailed JupyterHub info
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
version:
|
||||
type: string
|
||||
description: The version of JupyterHub itself
|
||||
python:
|
||||
type: string
|
||||
description: The Python version, as returned by sys.version
|
||||
sys_executable:
|
||||
type: string
|
||||
description: The path to sys.executable running JupyterHub
|
||||
authenticator:
|
||||
type: object
|
||||
properties:
|
||||
class:
|
||||
type: string
|
||||
description: The Python class currently active for JupyterHub Authentication
|
||||
version:
|
||||
type: string
|
||||
description: The version of the currently active Authenticator
|
||||
spawner:
|
||||
type: object
|
||||
properties:
|
||||
class:
|
||||
type: string
|
||||
description: The Python class currently active for spawning single-user notebook servers
|
||||
version:
|
||||
type: string
|
||||
description: The version of the currently active Spawner
|
||||
/users:
|
||||
get:
|
||||
summary: List users
|
||||
responses:
|
||||
'200':
|
||||
description: The Hub's user list
|
||||
schema:
|
||||
type: array
|
||||
items:
|
||||
$ref: '#/definitions/User'
|
||||
post:
|
||||
summary: Create multiple users
|
||||
parameters:
|
||||
- name: data
|
||||
in: body
|
||||
required: true
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
usernames:
|
||||
type: array
|
||||
description: list of usernames to create on the Hub
|
||||
items:
|
||||
type: string
|
||||
admin:
|
||||
description: whether the created users should be admins
|
||||
type: boolean
|
||||
responses:
|
||||
'201':
|
||||
description: The users have been created
|
||||
schema:
|
||||
type: array
|
||||
description: The created users
|
||||
items:
|
||||
$ref: '#/definitions/User'
|
||||
/users/{name}:
|
||||
get:
|
||||
summary: Get a user by name
|
||||
parameters:
|
||||
- name: name
|
||||
description: username
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
responses:
|
||||
'200':
|
||||
description: The User model
|
||||
schema:
|
||||
$ref: '#/definitions/User'
|
||||
post:
|
||||
summary: Create a single user
|
||||
parameters:
|
||||
- name: name
|
||||
description: username
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
responses:
|
||||
'201':
|
||||
description: The user has been created
|
||||
schema:
|
||||
$ref: '#/definitions/User'
|
||||
patch:
|
||||
summary: Modify a user
|
||||
description: Change a user's name or admin status
|
||||
parameters:
|
||||
- name: name
|
||||
description: username
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
- name: data
|
||||
in: body
|
||||
required: true
|
||||
description: Updated user info. At least one key to be updated (name or admin) is required.
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
name:
|
||||
type: string
|
||||
description: the new name (optional, if another key is updated i.e. admin)
|
||||
admin:
|
||||
type: boolean
|
||||
description: update admin (optional, if another key is updated i.e. name)
|
||||
responses:
|
||||
'200':
|
||||
description: The updated user info
|
||||
schema:
|
||||
$ref: '#/definitions/User'
|
||||
delete:
|
||||
summary: Delete a user
|
||||
parameters:
|
||||
- name: name
|
||||
description: username
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
responses:
|
||||
'204':
|
||||
description: The user has been deleted
|
||||
/users/{name}/server:
|
||||
post:
|
||||
summary: Start a user's single-user notebook server
|
||||
parameters:
|
||||
- name: name
|
||||
description: username
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
responses:
|
||||
'201':
|
||||
description: The user's notebook server has started
|
||||
'202':
|
||||
description: The user's notebook server has not yet started, but has been requested
|
||||
delete:
|
||||
summary: Stop a user's server
|
||||
parameters:
|
||||
- name: name
|
||||
description: username
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
responses:
|
||||
'204':
|
||||
description: The user's notebook server has stopped
|
||||
'202':
|
||||
description: The user's notebook server has not yet stopped as it is taking a while to stop
|
||||
/users/{name}/servers/{server_name}:
|
||||
post:
|
||||
summary: Start a user's single-user named-server notebook server
|
||||
parameters:
|
||||
- name: name
|
||||
description: username
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
- name: server_name
|
||||
description: name given to a named-server
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
responses:
|
||||
'201':
|
||||
description: The user's notebook named-server has started
|
||||
'202':
|
||||
description: The user's notebook named-server has not yet started, but has been requested
|
||||
delete:
|
||||
summary: Stop a user's named-server
|
||||
parameters:
|
||||
- name: name
|
||||
description: username
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
- name: server_name
|
||||
description: name given to a named-server
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
responses:
|
||||
'204':
|
||||
description: The user's notebook named-server has stopped
|
||||
'202':
|
||||
description: The user's notebook named-server has not yet stopped as it is taking a while to stop
|
||||
/users/{name}/tokens:
|
||||
get:
|
||||
summary: List tokens for the user
|
||||
responses:
|
||||
'200':
|
||||
description: The list of tokens
|
||||
schema:
|
||||
type: array
|
||||
items:
|
||||
$ref: '#/definitions/Token'
|
||||
post:
|
||||
summary: Create a new token for the user
|
||||
parameters:
|
||||
- name: expires_in
|
||||
type: number
|
||||
required: false
|
||||
in: body
|
||||
description: lifetime (in seconds) after which the requested token will expire.
|
||||
- name: note
|
||||
type: string
|
||||
required: false
|
||||
in: body
|
||||
description: A note attached to the token for future bookkeeping
|
||||
responses:
|
||||
'201':
|
||||
description: The newly created token
|
||||
schema:
|
||||
$ref: '#/definitions/Token'
|
||||
/users/{name}/tokens/{token_id}:
|
||||
get:
|
||||
summary: Get the model for a token by id
|
||||
responses:
|
||||
'200':
|
||||
description: The info for the new token
|
||||
schema:
|
||||
$ref: '#/definitions/Token'
|
||||
delete:
|
||||
summary: Delete (revoke) a token by id
|
||||
responses:
|
||||
'204':
|
||||
description: The token has been deleted
|
||||
/user:
|
||||
summary: Return authenticated user's model
|
||||
description:
|
||||
parameters:
|
||||
responses:
|
||||
'200':
|
||||
description: The authenticated user's model is returned.
|
||||
/groups:
|
||||
get:
|
||||
summary: List groups
|
||||
responses:
|
||||
'200':
|
||||
description: The list of groups
|
||||
schema:
|
||||
type: array
|
||||
items:
|
||||
$ref: '#/definitions/Group'
|
||||
/groups/{name}:
|
||||
get:
|
||||
summary: Get a group by name
|
||||
parameters:
|
||||
- name: name
|
||||
description: group name
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
responses:
|
||||
'200':
|
||||
description: The group model
|
||||
schema:
|
||||
$ref: '#/definitions/Group'
|
||||
post:
|
||||
summary: Create a group
|
||||
parameters:
|
||||
- name: name
|
||||
description: group name
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
responses:
|
||||
'201':
|
||||
description: The group has been created
|
||||
schema:
|
||||
$ref: '#/definitions/Group'
|
||||
delete:
|
||||
summary: Delete a group
|
||||
parameters:
|
||||
- name: name
|
||||
description: group name
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
responses:
|
||||
'204':
|
||||
description: The group has been deleted
|
||||
/groups/{name}/users:
|
||||
post:
|
||||
summary: Add users to a group
|
||||
parameters:
|
||||
- name: name
|
||||
description: group name
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
- name: data
|
||||
in: body
|
||||
required: true
|
||||
description: The users to add to the group
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
users:
|
||||
type: array
|
||||
description: List of usernames to add to the group
|
||||
items:
|
||||
type: string
|
||||
responses:
|
||||
'200':
|
||||
description: The users have been added to the group
|
||||
schema:
|
||||
$ref: '#/definitions/Group'
|
||||
delete:
|
||||
summary: Remove users from a group
|
||||
parameters:
|
||||
- name: name
|
||||
description: group name
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
- name: data
|
||||
in: body
|
||||
required: true
|
||||
description: The users to remove from the group
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
users:
|
||||
type: array
|
||||
description: List of usernames to remove from the group
|
||||
items:
|
||||
type: string
|
||||
responses:
|
||||
'200':
|
||||
description: The users have been removed from the group
|
||||
/services:
|
||||
get:
|
||||
summary: List services
|
||||
responses:
|
||||
'200':
|
||||
description: The service list
|
||||
schema:
|
||||
type: array
|
||||
items:
|
||||
$ref: '#/definitions/Service'
|
||||
/services/{name}:
|
||||
get:
|
||||
summary: Get a service by name
|
||||
parameters:
|
||||
- name: name
|
||||
description: service name
|
||||
in: path
|
||||
required: true
|
||||
type: string
|
||||
responses:
|
||||
'200':
|
||||
description: The Service model
|
||||
schema:
|
||||
$ref: '#/definitions/Service'
|
  /proxy:
    get:
      summary: Get the proxy's routing table
      description: A convenience alias for getting the routing table directly from the proxy
      responses:
        '200':
          description: Routing table
          schema:
            type: object
            description: configurable-http-proxy routing table (see configurable-http-proxy docs for details)
    post:
      summary: Force the Hub to sync with the proxy
      responses:
        '200':
          description: Success
    patch:
      summary: Notify the Hub about a new proxy
      description: Notifies the Hub of a new proxy to use.
      parameters:
        - name: data
          in: body
          required: true
          description: Any values that have changed for the new proxy. All keys are optional.
          schema:
            type: object
            properties:
              ip:
                type: string
                description: IP address of the new proxy
              port:
                type: string
                description: Port of the new proxy
              protocol:
                type: string
                description: Protocol of new proxy, if changed
              auth_token:
                type: string
                description: CONFIGPROXY_AUTH_TOKEN for the new proxy
      responses:
        '200':
          description: Success
  /authorizations/token:
    post:
      summary: Request a new API token
      description: |
        Request a new API token to use with the JupyterHub REST API.
        If not already authenticated, username and password can be sent
        in the JSON request body.
        Logging in via this method is only available when the active Authenticator
        accepts passwords (e.g. not OAuth).
      parameters:
        - name: username
          in: body
          required: false
          type: string
        - name: password
          in: body
          required: false
          type: string
      responses:
        '200':
          description: The new API token
          schema:
            type: object
            properties:
              token:
                type: string
                description: The new API token.
        '403':
          description: The user can not be authenticated.
  /authorizations/token/{token}:
    get:
      summary: Identify a user or service from an API token
      parameters:
        - name: token
          in: path
          required: true
          type: string
      responses:
        '200':
          description: The user or service identified by the API token
        '404':
          description: A user or service is not found.
  /authorizations/cookie/{cookie_name}/{cookie_value}:
    get:
      summary: Identify a user from a cookie
      description: Used by single-user notebook servers to hand off cookie authentication to the Hub
      parameters:
        - name: cookie_name
          in: path
          required: true
          type: string
        - name: cookie_value
          in: path
          required: true
          type: string
      responses:
        '200':
          description: The user identified by the cookie
          schema:
            $ref: '#/definitions/User'
        '404':
          description: A user is not found.
  /oauth2/authorize:
    get:
      summary: 'OAuth 2.0 authorize endpoint'
      description: |
        Redirect users to this URL to begin the OAuth process.
        It is not an API endpoint.
      parameters:
        - name: client_id
          description: The client id
          in: query
          required: true
          type: string
        - name: response_type
          description: The response type (always 'code')
          in: query
          required: true
          type: string
        - name: state
          description: A state string
          in: query
          required: false
          type: string
        - name: redirect_uri
          description: The redirect url
          in: query
          required: true
          type: string
  /oauth2/token:
    post:
      summary: Request an OAuth2 token
      description: |
        Request an OAuth2 token from an authorization code.
        This request completes the OAuth process.
      consumes:
        - application/x-www-form-urlencoded
      parameters:
        - name: client_id
          description: The client id
          in: form
          required: true
          type: string
        - name: client_secret
          description: The client secret
          in: form
          required: true
          type: string
        - name: grant_type
          description: The grant type (always 'authorization_code')
          in: form
          required: true
          type: string
        - name: code
          description: The code provided by the authorization redirect
          in: form
          required: true
          type: string
        - name: redirect_uri
          description: The redirect url
          in: form
          required: true
          type: string
      responses:
        '200':
          description: JSON response including the token
          schema:
            type: object
            properties:
              access_token:
                type: string
                description: The new API token for the user
              token_type:
                type: string
                description: Will always be 'Bearer'
  /shutdown:
    post:
      summary: Shutdown the Hub
      parameters:
        - name: proxy
          in: body
          type: boolean
          description: Whether the proxy should be shutdown as well (default from Hub config)
        - name: servers
          in: body
          type: boolean
          description: Whether users' notebook servers should be shutdown as well (default from Hub config)
definitions:
  User:
    type: object
    properties:
      name:
        type: string
        description: The user's name
      admin:
        type: boolean
        description: Whether the user is an admin
      groups:
        type: array
        description: The names of groups where this user is a member
        items:
          type: string
      server:
        type: string
        description: The user's notebook server's base URL, if running; null if not.
      pending:
        type: string
        enum: ["spawn", "stop", null]
        description: The currently pending action, if any
      last_activity:
        type: string
        format: date-time
        description: Timestamp of last-seen activity from the user
      servers:
        type: object
        description: The active servers for this user.
        items:
          schema:
            $ref: '#/definitions/Server'
  Server:
    type: object
    properties:
      name:
        type: string
        description: The server's name. The user's default server has an empty name ('')
      ready:
        type: boolean
        description: |
          Whether the server is ready for traffic.
          Will always be false when any transition is pending.
      pending:
        type: string
        enum: ["spawn", "stop", null]
        description: |
          The currently pending action, if any.
          A server is not ready if an action is pending.
      url:
        type: string
        description: |
          The URL where the server can be accessed
          (typically /user/:name/:server.name/).
      progress_url:
        type: string
        description: |
          The URL for an event-stream to retrieve events during a spawn.
      started:
        type: string
        format: date-time
        description: UTC timestamp when the server was last started.
      last_activity:
        type: string
        format: date-time
        description: UTC timestamp of last-seen activity on this server.
      state:
        type: object
        description: Arbitrary internal state from this server's spawner. Only available on the hub's users list or get-user-by-name method, and only if a hub admin. None otherwise.
  Group:
    type: object
    properties:
      name:
        type: string
        description: The group's name
      users:
        type: array
        description: The names of users who are members of this group
        items:
          type: string
  Service:
    type: object
    properties:
      name:
        type: string
        description: The service's name
      admin:
        type: boolean
        description: Whether the service is an admin
      url:
        type: string
        description: The internal url where the service is running
      prefix:
        type: string
        description: The proxied URL prefix to the service's url
      pid:
        type: number
        description: The PID of the service process (if managed)
      command:
        type: array
        description: The command used to start the service (if managed)
        items:
          type: string
      info:
        type: object
        description: |
          Additional information a deployment can attach to a service.
          JupyterHub does not use this field.
  Token:
    type: object
    properties:
      token:
        type: string
        description: The token itself. Only present in responses to requests for a new token.
      id:
        type: string
        description: The id of the API token. Used for modifying or deleting the token.
      user:
        type: string
        description: The user that owns a token (undefined if owned by a service)
      service:
        type: string
        description: The service that owns the token (undefined if owned by a user)
      note:
        type: string
        description: A note about the token, typically describing what it was created for.
      created:
        type: string
        format: date-time
        description: Timestamp when this token was created
      expires_at:
        type: string
        format: date-time
        description: Timestamp when this token expires. Null if there is no expiry.
      last_activity:
        type: string
        format: date-time
        description: |
          Timestamp of last-seen activity using this token.
          Can be null if token has never been used.
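For reference, a minimal sketch of how the token endpoints in this spec can be exercised from Python with `requests`. The Hub base URL and the credentials are assumptions for illustration only, and this flow only works when the active Authenticator accepts passwords:

```python
# Minimal sketch: request an API token and use it, assuming a password-based
# Authenticator and a Hub API reachable at http://127.0.0.1:8081/hub/api.
import requests

hub_api = "http://127.0.0.1:8081/hub/api"  # illustrative base URL

# POST /authorizations/token with username/password in the JSON body
r = requests.post(
    f"{hub_api}/authorizations/token",
    json={"username": "jovyan", "password": "hunter2"},  # illustrative credentials
)
r.raise_for_status()
token = r.json()["token"]

# Identify the token's owner via GET /authorizations/token/{token}
r = requests.get(
    f"{hub_api}/authorizations/token/{token}",
    headers={"Authorization": f"token {token}"},
)
r.raise_for_status()
print(r.json())  # the user (or service) that owns the token
```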
@@ -1,10 +1,106 @@
/* Added to avoid logo being too squeezed */
.navbar-brand {
  height: 4rem !important;
}

/* hide redundant funky-formatted swagger-ui version */

.swagger-ui .info .title small {
  display: none !important;
}

div#helm-chart-schema h2,
div#helm-chart-schema h3,
div#helm-chart-schema h4,
div#helm-chart-schema h5,
div#helm-chart-schema h6 {
  font-family: courier new;
}

h3, h3 ~ * {
  margin-left: 3% !important;
}

h4, h4 ~ * {
  margin-left: 6% !important;
}

h5, h5 ~ * {
  margin-left: 9% !important;
}

h6, h6 ~ * {
  margin-left: 12% !important;
}

h7, h7 ~ * {
  margin-left: 15% !important;
}

img.logo {
  width: 100%;
}

.right-next {
  float: right;
  max-width: 45%;
  overflow: auto;
  text-overflow: ellipsis;
  white-space: nowrap;
}

.right-next::after {
  content: ' »';
}

.left-prev {
  float: left;
  max-width: 45%;
  overflow: auto;
  text-overflow: ellipsis;
  white-space: nowrap;
}

.left-prev::before {
  content: '« ';
}

.prev-next-bottom {
  margin-top: 3em;
}

.prev-next-top {
  margin-bottom: 1em;
}

/* Sidebar TOC and headers */

div.sphinxsidebarwrapper div {
  margin-bottom: .8em;
}

div.sphinxsidebar h3 {
  font-size: 1.3em;
  padding-top: 0px;
  font-weight: 800;
  margin-left: 0px !important;
}

div.sphinxsidebar p.caption {
  font-size: 1.2em;
  margin-bottom: 0px;
  margin-left: 0px !important;
  font-weight: 900;
  color: #767676;
}

div.sphinxsidebar ul {
  font-size: .8em;
  margin-top: 0px;
  padding-left: 3%;
  margin-left: 0px !important;
}

div.relations ul {
  font-size: 1em;
  margin-left: 0px !important;
}

div#searchbox form {
  margin-left: 0px !important;
}

/* body elements */
.toctree-wrapper span.caption-text {
  color: #767676;
  font-style: italic;
  font-weight: 300;
}
Image file changed: 6.7 KiB before → 38 KiB after
16
docs/source/_templates/navigation.html
Normal file
@@ -0,0 +1,16 @@
{# Custom template for navigation.html

   alabaster theme does not provide blocks for titles to
   be overridden so this custom theme handles title and
   toctree for sidebar
#}
<h3>{{ _('Table of Contents') }}</h3>
{{ toctree(includehidden=theme_sidebar_includehidden, collapse=theme_sidebar_collapse) }}
{% if theme_extra_nav_links %}
<hr />
<ul>
  {% for text, uri in theme_extra_nav_links.items() %}
    <li class="toctree-l1"><a href="{{ uri }}">{{ text }}</a></li>
  {% endfor %}
</ul>
{% endif %}
30
docs/source/_templates/page.html
Normal file
@@ -0,0 +1,30 @@
{% extends '!page.html' %}

{# Custom template for page.html

   Alabaster theme does not provide blocks for prev/next at bottom of each page.
   This is _in addition_ to the prev/next in the sidebar. The "Prev/Next" text
   or symbols are handled by CSS classes in _static/custom.css
#}

{% macro prev_next(prev, next, prev_title='', next_title='') %}
  {%- if prev %}
    <a class='left-prev' href="{{ prev.link|e }}" title="{{ _('previous chapter')}}">{{ prev_title or prev.title }}</a>
  {%- endif %}
  {%- if next %}
    <a class='right-next' href="{{ next.link|e }}" title="{{ _('next chapter')}}">{{ next_title or next.title }}</a>
  {%- endif %}
  <div style='clear:both;'></div>
{% endmacro %}


{% block body %}
<div class='prev-next-top'>
  {{ prev_next(prev, next, 'Previous', 'Next') }}
</div>

{{super()}}

<div class='prev-next-bottom'>
  {{ prev_next(prev, next) }}
</div>
{% endblock %}
17
docs/source/_templates/relations.html
Normal file
@@ -0,0 +1,17 @@
{# Custom template for relations.html

   alabaster theme does not provide previous/next page by default
#}
<div class="relations">
  <h3>Navigation</h3>
  <ul>
    <li><a href="{{ pathto(master_doc) }}">Documentation Home</a><ul>
      {%- if prev %}
      <li><a href="{{ prev.link|e }}" title="Previous">Previous topic</a></li>
      {%- endif %}
      {%- if next %}
      <li><a href="{{ next.link|e }}" title="Next">Next topic</a></li>
      {%- endif %}
    </ul>
  </ul>
</div>
16
docs/source/api/app.rst
Normal file
@@ -0,0 +1,16 @@
=========================
Application configuration
=========================

Module: :mod:`jupyterhub.app`
=============================

.. automodule:: jupyterhub.app

.. currentmodule:: jupyterhub.app

:class:`JupyterHub`
-------------------

.. autoconfigurable:: JupyterHub

28
docs/source/api/auth.rst
Normal file
@@ -0,0 +1,28 @@
==============
Authenticators
==============

Module: :mod:`jupyterhub.auth`
==============================

.. automodule:: jupyterhub.auth

.. currentmodule:: jupyterhub.auth

:class:`Authenticator`
----------------------

.. autoconfigurable:: Authenticator
   :members:

:class:`LocalAuthenticator`
---------------------------

.. autoconfigurable:: LocalAuthenticator
   :members:

:class:`PAMAuthenticator`
-------------------------

.. autoconfigurable:: PAMAuthenticator

38
docs/source/api/index.rst
Normal file
@@ -0,0 +1,38 @@
.. _api-index:

##################
The JupyterHub API
##################

:Release: |release|
:Date: |today|

JupyterHub also provides a REST API for administration of the Hub and users.
The documentation on `Using JupyterHub's REST API <../reference/rest.html>`_ provides
information on:

- what you can do with the API
- creating an API token
- adding API tokens to the config files
- making an API request programmatically using the requests library
- learning more about JupyterHub's API

The same JupyterHub API spec, as found here, is available in an interactive form
`here (on swagger's petstore) <http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default>`__.
The `OpenAPI Initiative`_ (fka Swagger™) is a project used to describe
and document RESTful APIs.

JupyterHub API Reference:

.. toctree::

   app
   auth
   spawner
   proxy
   user
   service
   services.auth


.. _OpenAPI Initiative: https://www.openapis.org/

23
docs/source/api/proxy.rst
Normal file
@@ -0,0 +1,23 @@
=======
Proxies
=======

Module: :mod:`jupyterhub.proxy`
===============================

.. automodule:: jupyterhub.proxy

.. currentmodule:: jupyterhub.proxy

:class:`Proxy`
--------------

.. autoconfigurable:: Proxy
   :members:

:class:`ConfigurableHTTPProxy`
------------------------------

.. autoconfigurable:: ConfigurableHTTPProxy
   :members: debug, auth_token, check_running_interval, api_url, command

17
docs/source/api/service.rst
Normal file
@@ -0,0 +1,17 @@
========
Services
========

Module: :mod:`jupyterhub.services.service`
==========================================

.. automodule:: jupyterhub.services.service

.. currentmodule:: jupyterhub.services.service

:class:`Service`
----------------

.. autoconfigurable:: Service
   :members: name, admin, url, api_token, managed, kind, command, cwd, environment, user, oauth_client_id, server, prefix, proxy_spec

41
docs/source/api/services.auth.rst
Normal file
@@ -0,0 +1,41 @@
=======================
Services Authentication
=======================

Module: :mod:`jupyterhub.services.auth`
=======================================

.. automodule:: jupyterhub.services.auth

.. currentmodule:: jupyterhub.services.auth


:class:`HubAuth`
----------------

.. autoconfigurable:: HubAuth
   :members:

:class:`HubOAuth`
-----------------

.. autoconfigurable:: HubOAuth
   :members:


:class:`HubAuthenticated`
-------------------------

.. autoclass:: HubAuthenticated
   :members:

:class:`HubOAuthenticated`
--------------------------

.. autoclass:: HubOAuthenticated

:class:`HubOAuthCallbackHandler`
--------------------------------

.. autoclass:: HubOAuthCallbackHandler

22
docs/source/api/spawner.rst
Normal file
@@ -0,0 +1,22 @@
========
Spawners
========

Module: :mod:`jupyterhub.spawner`
=================================

.. automodule:: jupyterhub.spawner

.. currentmodule:: jupyterhub.spawner

:class:`Spawner`
----------------

.. autoconfigurable:: Spawner
   :members: options_from_form, poll, start, stop, get_args, get_env, get_state, template_namespace, format_string

:class:`LocalProcessSpawner`
----------------------------

.. autoconfigurable:: LocalProcessSpawner

37
docs/source/api/user.rst
Normal file
@@ -0,0 +1,37 @@
=====
Users
=====

Module: :mod:`jupyterhub.user`
==============================

.. automodule:: jupyterhub.user

.. currentmodule:: jupyterhub.user

:class:`UserDict`
-----------------

.. autoclass:: UserDict
   :members:


:class:`User`
-------------

.. autoclass:: User
   :members: escaped_name

   .. attribute:: name

      The user's name

   .. attribute:: server

      The user's Server data object if running, None otherwise.
      Has ``ip``, ``port`` attributes.

   .. attribute:: spawner

      The user's :class:`~.Spawner` instance.
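For reference, a minimal sketch of the kind of `Authenticator` subclass these API pages document. The `DictionaryAuthenticator` name and its `passwords` trait are illustrative only, not something shipped with JupyterHub:

```python
# Minimal sketch of a custom Authenticator; the class and its `passwords`
# dict are illustrative, not part of JupyterHub itself.
from traitlets import Dict
from jupyterhub.auth import Authenticator


class DictionaryAuthenticator(Authenticator):
    """Authenticate users against a static username -> password mapping."""

    passwords = Dict(
        config=True,
        help="A dict of {username: password} used to check logins.",
    )

    async def authenticate(self, handler, data):
        # `data` holds the submitted login form fields; return the username
        # on success, or None to reject the login.
        if self.passwords.get(data["username"]) == data["password"]:
            return data["username"]
        return None
```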
446
docs/source/changelog.md
Normal file
@@ -0,0 +1,446 @@
# Changelog

For detailed changes from the prior release, click on the version number, and
its link will bring up a GitHub listing of changes. Use `git log` on the
command line for details.


## [Unreleased]

## 0.9

### [0.9.4] 2018-09-24

JupyterHub 0.9.4 is a small bugfix release.

- Fixes an issue that required all running user servers to be restarted
  when performing an upgrade from 0.8 to 0.9.
- Fixes content-type for API endpoints back to `application/json`.
  It was `text/html` in 0.9.0-0.9.3.

### [0.9.3] 2018-09-12

JupyterHub 0.9.3 contains small bugfixes and improvements

- Fix token page and model handling of `expires_at`.
  This field was missing from the REST API model for tokens
  and could cause the token page to not render
- Add keep-alive to progress event stream to avoid proxies dropping
  the connection due to inactivity
- Documentation and example improvements
- Disable quit button when using notebook 5.6
- Prototype new feature (may change prior to 1.0):
  pass requesting Handler to Spawners during start,
  accessible as `self.handler`

### [0.9.2] 2018-08-10

JupyterHub 0.9.2 contains small bugfixes and improvements.

- Documentation and example improvements
- Add `Spawner.consecutive_failure_limit` config for aborting the Hub if too many spawns fail in a row.
- Fix for handling SIGTERM when run with asyncio (tornado 5)
- Windows compatibility fixes


### [0.9.1] 2018-07-04

JupyterHub 0.9.1 contains a number of small bugfixes on top of 0.9.

- Use a PID file for the proxy to decrease the likelihood that a leftover proxy process will prevent JupyterHub from restarting
- `c.LocalProcessSpawner.shell_cmd` is now configurable
- API requests to stopped servers (requests to the hub for `/user/:name/api/...`) fail with 404 rather than triggering a restart of the server
- Compatibility fix for notebook 5.6.0 which will introduce further
  security checks for local connections
- Managed services always use localhost to talk to the Hub if the Hub is listening on all interfaces
- When using a URL prefix, the Hub route will be `JupyterHub.base_url` instead of unconditionally `/`
- additional fixes and improvements

### [0.9.0] 2018-06-15

JupyterHub 0.9 is a major upgrade of JupyterHub.
There are several changes to the database schema,
so make sure to backup your database and run:

    jupyterhub upgrade-db

after upgrading jupyterhub.

The biggest change for 0.9 is the switch to asyncio coroutines everywhere
instead of tornado coroutines. Custom Spawners and Authenticators are still
free to use tornado coroutines for async methods, as they will continue to
work. As part of this upgrade, JupyterHub 0.9 drops support for Python < 3.5
and tornado < 5.0.


#### Changed

- Require Python >= 3.5
- Require tornado >= 5.0
- Use asyncio coroutines throughout
- Set status 409 for conflicting actions instead of 400,
  e.g. creating users or groups that already exist.
- timestamps in REST API continue to be UTC, but now include 'Z' suffix
  to identify them as such.
- REST API User model always includes `servers` dict,
  not just when named servers are enabled.
- `server` info is no longer available to oauth identification endpoints,
  only user info and group membership.
- `User.last_activity` may be None if a user has not been seen,
  rather than starting with the user creation time
  which is now separately stored as `User.created`.
- static resources are now found in `$PREFIX/share/jupyterhub` instead of `share/jupyter/hub` for improved consistency.
- Deprecate `.extra_log_file` config. Use pipe redirection instead:

      jupyterhub &>> /var/log/jupyterhub.log

- Add `JupyterHub.bind_url` config for setting the full bind URL of the proxy.
  Sets ip, port, base_url all at once.
- Add `JupyterHub.hub_bind_url` for setting the full host+port of the Hub.
  `hub_bind_url` supports unix domain sockets, e.g.
  `unix+http://%2Fsrv%2Fjupyterhub.sock`
- Deprecate `JupyterHub.hub_connect_port` config in favor of `JupyterHub.hub_connect_url`. `hub_connect_ip` is not deprecated
  and can still be used in the common case where only the ip address of the hub differs from the bind ip.

#### Added

- Spawners can define a `.progress` method which should be an async generator.
  The generator should yield events of the form:
  ```python
  {
      "message": "some-state-message",
      "progress": 50,
  }
  ```
  These messages will be shown with a progress bar on the spawn-pending page.
  The `async_generator` package can be used to make async generators
  compatible with Python 3.5.
- track activity of individual API tokens
- new REST API for managing API tokens at `/hub/api/user/tokens[/token-id]`
- allow viewing/revoking tokens via token page
- User creation time is available in the REST API as `User.created`
- Server start time is stored as `Server.started`
- `Spawner.start` may return a URL for connecting to a notebook instead of `(ip, port)`. This enables Spawners to launch servers that setup their own HTTPS.
- Optimize database performance by disabling sqlalchemy expire_on_commit by default.
- Add `python -m jupyterhub.dbutil shell` entrypoint for quickly
  launching an IPython session connected to your JupyterHub database.
- Include `User.auth_state` in user model on single-user REST endpoints for admins only.
- Include `Server.state` in server model on REST endpoints for admins only.
- Add `Authenticator.blacklist` for blacklisting users instead of whitelisting.
- Pass `c.JupyterHub.tornado_settings['cookie_options']` down to Spawners
  so that cookie options (e.g. `expires_days`) can be set globally for the whole application.
- SIGINFO (`ctrl-t`) handler showing the current status of all running threads,
  coroutines, and CPU/memory/FD consumption.
- Add async `Spawner.get_options_form` alternative to `.options_form`, so it can be a coroutine.
- Add `JupyterHub.redirect_to_server` config to govern whether
  users should be sent to their server on login or the JupyterHub home page.
- html page templates can be more easily customized and extended.
- Allow registering external OAuth clients for using the Hub as an OAuth provider.
- Add basic prometheus metrics at `/hub/metrics` endpoint.
- Add session-id cookie, enabling immediate revocation of login tokens.
- Authenticators may specify that users are admins by specifying the `admin` key when returning the user model as a dict.
- Added "Start All" button to admin page for launching all user servers at once.
- Services have an `info` field which is a dictionary.
  This is accessible via the REST API.
- `JupyterHub.extra_handlers` allows defining additional tornado RequestHandlers attached to the Hub.
- API tokens may now expire.
  Expiry is available in the REST model as `expires_at`,
  and settable when creating API tokens by specifying `expires_in`.


#### Fixed

- Remove green from theme to improve accessibility
- Fix error when proxy deletion fails due to route already being deleted
- clear `?redirects` from URL on successful launch
- disable send2trash by default, which is rarely desirable for jupyterhub
- Put PAM calls in a thread so they don't block the main application
  in cases where PAM is slow (e.g. LDAP).
- Remove implicit spawn from login handler,
  instead relying on subsequent request for `/user/:name` to trigger spawn.
- Fixed several inconsistencies for initial redirects,
  depending on whether server is running or not and whether the user is logged in or not.
- Admin requests for `/user/:name` (when admin-access is enabled) launch the right server if it's not running instead of redirecting to their own.
- Major performance improvement starting up JupyterHub with many users,
  especially when most are inactive.
- Various fixes in race conditions and performance improvements with the default proxy.
- Fixes for CORS headers
- Stop setting `.form-control` on spawner form inputs unconditionally.
- Better recovery from database errors and database connection issues
  without having to restart the Hub.
- Fix handling of `~` character in usernames.
- Fix jupyterhub startup when `getpass.getuser()` would fail,
  e.g. due to missing entry in passwd file in containers.


## 0.8

### [0.8.1] 2017-11-07

JupyterHub 0.8.1 is a collection of bugfixes and small improvements on 0.8.

#### Added

- Run tornado with AsyncIO by default
- Add `jupyterhub --upgrade-db` flag for automatically upgrading the database as part of startup.
  This is useful for cases where manually running `jupyterhub upgrade-db`
  as a separate step is unwieldy.
- Avoid creating backups of the database when no changes are to be made by
  `jupyterhub upgrade-db`.

#### Fixed

- Add some further validation to usernames - `/` is not allowed in usernames.
- Fix empty logout page when using auto_login
- Fix autofill of username field in default login form.
- Fix listing of users on the admin page who have not yet started their server.
- Fix ever-growing traceback when re-raising Exceptions from spawn failures.
- Remove use of deprecated `bower` for javascript client dependencies.


### [0.8.0] 2017-10-03

JupyterHub 0.8 is a big release!

Perhaps the biggest change is the use of OAuth to negotiate authentication
between the Hub and single-user services.
Due to this change, it is important that the single-user server
and Hub are both running the same version of JupyterHub.
If you are using containers (e.g. via DockerSpawner or KubeSpawner),
this means upgrading jupyterhub in your user images at the same time as the Hub.
In most cases, a

    pip install jupyterhub==version

in your Dockerfile is sufficient.

#### Added

- JupyterHub now defines a `Proxy` API for custom
  proxy implementations other than the default.
  The defaults are unchanged,
  but configuration of the proxy is now done on the `ConfigurableHTTPProxy` class instead of the top-level JupyterHub.
  TODO: docs for writing a custom proxy.
- Single-user servers and services
  (anything that uses HubAuth)
  can now accept token-authenticated requests via the Authentication header.
- Authenticators can now store state in the Hub's database.
  To do so, the `authenticate` method should return a dict of the form

  ```python
  {
      'username': 'name',
      'state': {}
  }
  ```

  This data will be encrypted and requires `JUPYTERHUB_CRYPT_KEY` environment variable to be set
  and the `Authenticator.enable_auth_state` flag to be True.
  If these are not set, auth_state returned by the Authenticator will not be stored.
- There is preliminary support for multiple (named) servers per user in the REST API.
  Named servers can be created via API requests, but there is currently no UI for managing them.
- Add `LocalProcessSpawner.popen_kwargs` and `LocalProcessSpawner.shell_cmd`
  for customizing how user server processes are launched.
- Add `Authenticator.auto_login` flag for skipping the "Login with..." page explicitly.
- Add `JupyterHub.hub_connect_ip` configuration
  for the ip that should be used when connecting to the Hub.
  This is promoting (and deprecating) `DockerSpawner.hub_ip_connect`
  for use by all Spawners.
- Add `Spawner.pre_spawn_hook(spawner)` hook for customizing
  pre-spawn events.
- Add `JupyterHub.active_server_limit` and `JupyterHub.concurrent_spawn_limit`
  for limiting the total number of running user servers and the number of pending spawns, respectively.


#### Changed

- more arguments to spawners are now passed via environment variables (`.get_env()`)
  rather than CLI arguments (`.get_args()`)
- internally generated tokens no longer get extra hash rounds,
  significantly speeding up authentication.
  The hash rounds were deemed unnecessary because the tokens were already
  generated with high entropy.
- `JUPYTERHUB_API_TOKEN` env is available at all times,
  rather than being removed during single-user start.
  The token is now accessible to kernel processes,
  enabling user kernels to make authenticated API requests to Hub-authenticated services.
- Cookie secrets should be 32B hex instead of large base64 secrets.
- pycurl is used by default, if available.

#### Fixed

So many things fixed!

- Collisions are checked when users are renamed
- Fix bug where OAuth authenticators could not logout users
  due to being redirected right back through the login process.
- If there are errors loading your config files,
  JupyterHub will refuse to start with an informative error.
  Previously, the bad config would be ignored and JupyterHub would launch with default configuration.
- Raise 403 error on unauthorized user rather than redirect to login,
  which could cause redirect loop.
- Set `httponly` on cookies because it's prudent.
- Improve support for MySQL as the database backend
- Many race conditions and performance problems under heavy load have been fixed.
- Fix alembic tagging of database schema versions.

#### Removed

- End support for Python 3.3

## 0.7

### [0.7.2] - 2017-01-09

#### Added

- Support service environment variables and defaults in `jupyterhub-singleuser`
  for easier deployment of notebook servers as a Service.
- Add `--group` parameter for deploying `jupyterhub-singleuser` as a Service with group authentication.
- Include URL parameters when redirecting through `/user-redirect/`

#### Fixed

- Fix group authentication for HubAuthenticated services

### [0.7.1] - 2017-01-02

#### Added

- `Spawner.will_resume` for signaling that a single-user server is paused instead of stopped.
  This is needed for cases like `DockerSpawner.remove_containers = False`,
  where the first API token is re-used for subsequent spawns.
- Warning on startup about single-character usernames,
  caused by common `set('string')` typo in config.

#### Fixed

- Removed spurious warning about empty `next_url`, which is AOK.

### [0.7.0] - 2016-12-2

#### Added

- Implement Services API [\#705](https://github.com/jupyterhub/jupyterhub/pull/705)
- Add `/api/` and `/api/info` endpoints [\#675](https://github.com/jupyterhub/jupyterhub/pull/675)
- Add documentation for JupyterLab, pySpark configuration, troubleshooting,
  and more.
- Add logging of error if adding users already in database. [\#689](https://github.com/jupyterhub/jupyterhub/pull/689)
- Add HubAuth class for authenticating with JupyterHub. This class can
  be used by any application, even outside tornado.
- Add user groups.
- Add `/hub/user-redirect/...` URL for redirecting users to a file on their own server.


#### Changed

- Always install with setuptools but not eggs (effectively require
  `pip install .`) [\#722](https://github.com/jupyterhub/jupyterhub/pull/722)
- Updated formatting of changelog. [\#711](https://github.com/jupyterhub/jupyterhub/pull/711)
- Single-user server is provided by JupyterHub package, so single-user servers depend on JupyterHub now.

#### Fixed

- Fix docker repository location [\#719](https://github.com/jupyterhub/jupyterhub/pull/719)
- Fix swagger spec conformance and timestamp type in API spec
- Various redirect-loop-causing bugs have been fixed.


#### Removed

- Deprecate `--no-ssl` command line option. It has no meaning and warns if
  used. [\#789](https://github.com/jupyterhub/jupyterhub/pull/789)
- Deprecate `%U` username substitution in favor of `{username}`. [\#748](https://github.com/jupyterhub/jupyterhub/pull/748)
- Removed deprecated SwarmSpawner link. [\#699](https://github.com/jupyterhub/jupyterhub/pull/699)

## 0.6

### [0.6.1] - 2016-05-04

Bugfixes on 0.6:

- statsd is an optional dependency, only needed if in use
- Notice more quickly when servers have crashed
- Better error pages for proxy errors
- Add Stop All button to admin panel for stopping all servers at once

### [0.6.0] - 2016-04-25

- JupyterHub has moved to a new `jupyterhub` namespace on GitHub and Docker. What was `jupyter/jupyterhub` is now `jupyterhub/jupyterhub`, etc.
- `jupyterhub/jupyterhub` image on DockerHub no longer loads the jupyterhub_config.py in an ONBUILD step. A new `jupyterhub/jupyterhub-onbuild` image does this
- Add statsd support, via `c.JupyterHub.statsd_{host,port,prefix}`
- Update to traitlets 4.1 `@default`, `@observe` APIs for traits
- Allow disabling PAM sessions via `c.PAMAuthenticator.open_sessions = False`. This may be needed on SELinux-enabled systems, where our PAM session logic often does not work properly
- Add `Spawner.environment` configurable, for defining extra environment variables to load for single-user servers
- JupyterHub API tokens can be pregenerated and loaded via `JupyterHub.api_tokens`, a dict of `token: username`.
- JupyterHub API tokens can be requested via the REST API, with a POST request to `/api/authorizations/token`.
  This can only be used if the Authenticator has a username and password.
- Various fixes for user URLs and redirects


## [0.5] - 2016-03-07

- Single-user server must be run with Jupyter Notebook ≥ 4.0
- Require `--no-ssl` confirmation to allow the Hub to be run without SSL (e.g. behind SSL termination in nginx)
- Add lengths to text fields for MySQL support
- Add `Spawner.disable_user_config` for preventing user-owned configuration from modifying single-user servers.
- Fixes for MySQL support.
- Add ability to run each user's server on its own subdomain. Requires wildcard DNS and wildcard SSL to be feasible. Enable subdomains by setting `JupyterHub.subdomain_host = 'https://jupyterhub.domain.tld[:port]'`.
- Use `127.0.0.1` for local communication instead of `localhost`, avoiding issues with DNS on some systems.
- Fix race that could add users to proxy prematurely if spawning is slow.

## 0.4

### [0.4.1] - 2016-02-03

Fix removal of `/login` page in 0.4.0, breaking some OAuth providers.

### [0.4.0] - 2016-02-01

- Add `Spawner.user_options_form` for specifying an HTML form to present to users,
  allowing users to influence the spawning of their own servers.
- Add `Authenticator.pre_spawn_start` and `Authenticator.post_spawn_stop` hooks,
  so that Authenticators can do setup or teardown (e.g. passing credentials to Spawner,
  mounting data sources, etc.).
  These methods are typically used with custom Authenticator+Spawner pairs.
- 0.4 will be the last JupyterHub release where single-user servers running IPython 3 are supported instead of Notebook ≥ 4.0.


## [0.3] - 2015-11-04

- No longer make the user starting the Hub an admin
- start PAM sessions on login
- hooks for Authenticators to fire before spawners start and after they stop,
  allowing deeper interaction between Spawner/Authenticator pairs.
- login redirect fixes

## [0.2] - 2015-07-12

- Based on standalone traitlets instead of IPython.utils.traitlets
- multiple users in admin panel
- Fixes for usernames that require escaping

## 0.1 - 2015-03-07

First preview release


[Unreleased]: https://github.com/jupyterhub/jupyterhub/compare/0.9.4...HEAD
[0.9.4]: https://github.com/jupyterhub/jupyterhub/compare/0.9.3...0.9.4
[0.9.3]: https://github.com/jupyterhub/jupyterhub/compare/0.9.2...0.9.3
[0.9.2]: https://github.com/jupyterhub/jupyterhub/compare/0.9.1...0.9.2
[0.9.1]: https://github.com/jupyterhub/jupyterhub/compare/0.9.0...0.9.1
[0.9.0]: https://github.com/jupyterhub/jupyterhub/compare/0.8.1...0.9.0
[0.8.1]: https://github.com/jupyterhub/jupyterhub/compare/0.8.0...0.8.1
[0.8.0]: https://github.com/jupyterhub/jupyterhub/compare/0.7.2...0.8.0
[0.7.2]: https://github.com/jupyterhub/jupyterhub/compare/0.7.1...0.7.2
[0.7.1]: https://github.com/jupyterhub/jupyterhub/compare/0.7.0...0.7.1
[0.7.0]: https://github.com/jupyterhub/jupyterhub/compare/0.6.1...0.7.0
[0.6.1]: https://github.com/jupyterhub/jupyterhub/compare/0.6.0...0.6.1
[0.6.0]: https://github.com/jupyterhub/jupyterhub/compare/0.5.0...0.6.0
[0.5]: https://github.com/jupyterhub/jupyterhub/compare/0.4.1...0.5.0
[0.4.1]: https://github.com/jupyterhub/jupyterhub/compare/0.4.0...0.4.1
[0.4.0]: https://github.com/jupyterhub/jupyterhub/compare/0.3.0...0.4.0
[0.3]: https://github.com/jupyterhub/jupyterhub/compare/0.2.0...0.3.0
[0.2]: https://github.com/jupyterhub/jupyterhub/compare/0.1.0...0.2.0
@@ -1,247 +1,202 @@
|
||||
# Configuration file for Sphinx to build our documentation to HTML.
|
||||
# -*- coding: utf-8 -*-
|
||||
#
|
||||
# Configuration reference: https://www.sphinx-doc.org/en/master/usage/configuration.html
|
||||
#
|
||||
import contextlib
|
||||
import datetime
|
||||
import io
|
||||
import sys
|
||||
import os
|
||||
import subprocess
|
||||
import shlex
|
||||
|
||||
from docutils import nodes
|
||||
from sphinx.directives.other import SphinxDirective
|
||||
# For conversion from markdown to html
|
||||
import recommonmark.parser
|
||||
|
||||
# Set paths
|
||||
sys.path.insert(0, os.path.abspath('.'))
|
||||
|
||||
# -- General configuration ------------------------------------------------
|
||||
|
||||
# Minimal Sphinx version
|
||||
needs_sphinx = '1.4'
|
||||
|
||||
# Sphinx extension modules
|
||||
extensions = [
|
||||
'sphinx.ext.autodoc',
|
||||
'sphinx.ext.intersphinx',
|
||||
'sphinx.ext.napoleon',
|
||||
'autodoc_traits',
|
||||
]
|
||||
|
||||
templates_path = ['_templates']
|
||||
|
||||
# The master toctree document.
|
||||
master_doc = 'index'
|
||||
|
||||
# General information about the project.
|
||||
project = u'JupyterHub'
|
||||
copyright = u'2016, Project Jupyter team'
|
||||
author = u'Project Jupyter team'
|
||||
|
||||
# Autopopulate version
|
||||
from os.path import dirname
|
||||
|
||||
docs = dirname(dirname(__file__))
|
||||
root = dirname(docs)
|
||||
sys.path.insert(0, root)
|
||||
sys.path.insert(0, os.path.join(docs, 'sphinxext'))
|
||||
|
||||
import jupyterhub
|
||||
from jupyterhub.app import JupyterHub
|
||||
|
||||
# -- Project information -----------------------------------------------------
|
||||
# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
|
||||
#
|
||||
project = "JupyterHub"
|
||||
author = "Project Jupyter Contributors"
|
||||
copyright = f"{datetime.date.today().year}, {author}"
|
||||
# The short X.Y version.
|
||||
version = '%i.%i' % jupyterhub.version_info[:2]
|
||||
# The full version, including alpha/beta/rc tags.
|
||||
release = jupyterhub.__version__
|
||||
|
||||
language = None
|
||||
exclude_patterns = []
|
||||
pygments_style = 'sphinx'
|
||||
todo_include_todos = False
|
||||
|
||||
# -- General Sphinx configuration --------------------------------------------
|
||||
# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
|
||||
#
|
||||
extensions = [
|
||||
"sphinx.ext.autodoc",
|
||||
"sphinx.ext.intersphinx",
|
||||
"sphinx.ext.napoleon",
|
||||
"autodoc_traits",
|
||||
"sphinx_copybutton",
|
||||
"sphinx-jsonschema",
|
||||
"sphinxext.opengraph",
|
||||
"sphinxext.rediraffe",
|
||||
"jupyterhub_sphinx_theme",
|
||||
"myst_parser",
|
||||
]
|
||||
root_doc = "index"
|
||||
source_suffix = [".md"]
|
||||
# default_role let's use use `foo` instead of ``foo`` in rST
|
||||
default_role = "literal"
|
||||
# Set the default role so we can use `foo` instead of ``foo``
|
||||
default_role = 'literal'
|
||||
|
||||
# -- Source -------------------------------------------------------------
|
||||
|
||||
# -- MyST configuration ------------------------------------------------------
|
||||
# ref: https://myst-parser.readthedocs.io/en/latest/configuration.html
|
||||
#
|
||||
myst_heading_anchors = 2
|
||||
source_parsers = {'.md': 'recommonmark.parser.CommonMarkParser'}
|
||||
|
||||
myst_enable_extensions = [
|
||||
# available extensions: https://myst-parser.readthedocs.io/en/latest/syntax/optional.html
|
||||
"attrs_inline",
|
||||
"colon_fence",
|
||||
"deflist",
|
||||
"fieldlist",
|
||||
"substitution",
|
||||
]
|
||||
source_suffix = ['.rst', '.md']
|
||||
# source_encoding = 'utf-8-sig'
|
||||
|
||||
myst_substitutions = {
|
||||
# date example: Dev 07, 2022
|
||||
"date": datetime.date.today().strftime("%b %d, %Y").title(),
|
||||
"version": jupyterhub.__version__,
|
||||
# -- Options for HTML output ----------------------------------------------
|
||||
|
||||
# The theme to use for HTML and HTML Help pages.
|
||||
html_theme = 'alabaster'
|
||||
|
||||
html_logo = '_static/images/logo/logo.png'
|
||||
html_favicon = '_static/images/logo/favicon.ico'
|
||||
|
||||
# Paths that contain custom static files (such as style sheets)
|
||||
html_static_path = ['_static']
|
||||
|
||||
html_theme_options = {
|
||||
'show_related': True,
|
||||
'description': 'Documentation for JupyterHub',
|
||||
'github_user': 'jupyterhub',
|
||||
'github_repo': 'jupyterhub',
|
||||
'github_banner': False,
|
||||
'github_button': True,
|
||||
'github_type': 'star',
|
||||
'show_powered_by': False,
|
||||
'extra_nav_links': {
|
||||
'GitHub Repo': 'http://github.com/jupyterhub/jupyterhub',
|
||||
'Issue Tracker': 'http://github.com/jupyterhub/jupyterhub/issues',
|
||||
},
|
||||
}
|
||||
|
||||
html_sidebars = {
|
||||
'**': [
|
||||
'about.html',
|
||||
'searchbox.html',
|
||||
'navigation.html',
|
||||
'relations.html',
|
||||
'sourcelink.html',
|
||||
]
|
||||
}
|
||||
|
||||
# -- Custom directives to generate documentation -----------------------------
|
||||
# ref: https://myst-parser.readthedocs.io/en/latest/syntax/roles-and-directives.html
|
||||
#
|
||||
# We define custom directives to help us generate documentation using Python on
|
||||
# demand when referenced from our documentation files.
|
||||
#
|
||||
htmlhelp_basename = 'JupyterHubdoc'
|
||||
|
||||
# Create a temp instance of JupyterHub for use by two separate directive classes
|
||||
# to get the output from using the "--generate-config" and "--help-all" CLI
|
||||
# flags respectively.
|
||||
#
|
||||
jupyterhub_app = JupyterHub()
|
||||
# -- Options for LaTeX output ---------------------------------------------
|
||||
|
||||
latex_elements = {
|
||||
# 'papersize': 'letterpaper',
|
||||
# 'pointsize': '10pt',
|
||||
# 'preamble': '',
|
||||
# 'figure_align': 'htbp',
|
||||
}
|
||||
|
||||
# Grouping the document tree into LaTeX files. List of tuples
|
||||
# (source start file, target name, title,
|
||||
# author, documentclass [howto, manual, or own class]).
|
||||
latex_documents = [
|
||||
(
|
||||
master_doc,
|
||||
'JupyterHub.tex',
|
||||
u'JupyterHub Documentation',
|
||||
u'Project Jupyter team',
|
||||
'manual',
|
||||
)
|
||||
]
|
||||
|
||||
# latex_logo = None
|
||||
# latex_use_parts = False
|
||||
# latex_show_pagerefs = False
|
||||
# latex_show_urls = False
|
||||
# latex_appendices = []
|
||||
# latex_domain_indices = True
|
||||
|
||||
|
||||
class ConfigDirective(SphinxDirective):
|
||||
"""Generate the configuration file output for use in the documentation."""
|
||||
# -- manual page output -------------------------------------------------
|
||||
|
||||
has_content = False
|
||||
required_arguments = 0
|
||||
optional_arguments = 0
|
||||
final_argument_whitespace = False
|
||||
option_spec = {}
|
||||
# One entry per manual page. List of tuples
|
||||
# (source start file, name, description, authors, manual section).
|
||||
man_pages = [(master_doc, 'jupyterhub', u'JupyterHub Documentation', [author], 1)]
|
||||
|
||||
def run(self):
|
||||
# The generated configuration file for this version
|
||||
generated_config = jupyterhub_app.generate_config_file()
|
||||
# post-process output
|
||||
home_dir = os.environ["HOME"]
|
||||
generated_config = generated_config.replace(home_dir, "$HOME", 1)
|
||||
par = nodes.literal_block(text=generated_config)
|
||||
return [par]
|
||||
# man_show_urls = False
|
||||
|
||||
|
||||
class HelpAllDirective(SphinxDirective):
|
||||
"""Print the output of jupyterhub help --all for use in the documentation."""
|
||||
# -- Texinfo output -----------------------------------------------------
|
||||
|
||||
has_content = False
|
||||
required_arguments = 0
|
||||
optional_arguments = 0
|
||||
final_argument_whitespace = False
|
||||
option_spec = {}
|
||||
# Grouping the document tree into Texinfo files. List of tuples
|
||||
# (source start file, target name, title, author,
|
||||
# dir menu entry, description, category)
|
||||
texinfo_documents = [
|
||||
(
|
||||
master_doc,
|
||||
'JupyterHub',
|
||||
u'JupyterHub Documentation',
|
||||
author,
|
||||
'JupyterHub',
|
||||
'One line description of project.',
|
||||
'Miscellaneous',
|
||||
)
|
||||
]
|
||||
|
||||
def run(self):
|
||||
# The output of the help command for this version
|
||||
buffer = io.StringIO()
|
||||
with contextlib.redirect_stdout(buffer):
|
||||
jupyterhub_app.print_help("--help-all")
|
||||
all_help = buffer.getvalue()
|
||||
# post-process output
|
||||
home_dir = os.environ["HOME"]
|
||||
all_help = all_help.replace(home_dir, "$HOME", 1)
|
||||
par = nodes.literal_block(text=all_help)
|
||||
return [par]
|
||||
# texinfo_appendices = []
|
||||
# texinfo_domain_indices = True
|
||||
# texinfo_show_urls = 'footnote'
|
||||
# texinfo_no_detailmenu = False
|
||||
|
||||
|
||||
def setup(app):
|
||||
app.add_css_file("custom.css")
|
||||
app.add_directive("jupyterhub-generate-config", ConfigDirective)
|
||||
app.add_directive("jupyterhub-help-all", HelpAllDirective)
|
||||
# -- Epub output --------------------------------------------------------
|
||||
|
||||
# Bibliographic Dublin Core info.
|
||||
epub_title = project
|
||||
epub_author = author
|
||||
epub_publisher = author
|
||||
epub_copyright = copyright
|
||||
|
||||
# -- Read The Docs -----------------------------------------------------------
|
||||
#
|
||||
# Since RTD runs sphinx-build directly without running "make html", we run the
|
||||
# pre-requisite steps for "make html" from here if needed.
|
||||
#
|
||||
if os.environ.get("READTHEDOCS"):
|
||||
docs = os.path.dirname(os.path.dirname(__file__))
|
||||
subprocess.check_call(["make", "metrics", "scopes"], cwd=docs)
|
||||
# A list of files that should not be packed into the epub file.
|
||||
epub_exclude_files = ['search.html']
|
||||
|
||||
# -- Intersphinx ----------------------------------------------------------
|
||||
|
||||
intersphinx_mapping = {'https://docs.python.org/3/': None}
|
||||
|
||||
# -- Read The Docs --------------------------------------------------------
|
||||
|
||||
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
|
||||
if not on_rtd:
|
||||
html_theme = 'alabaster'
|
||||
else:
|
||||
# readthedocs.org uses their theme by default, so no need to specify it
|
||||
# build rest-api, since RTD doesn't run make
|
||||
from subprocess import check_call as sh
|
||||
|
||||
sh(['make', 'rest-api'], cwd=docs)
|
||||
|
||||
# -- Spell checking -------------------------------------------------------
|
||||
|
||||
# -- Spell checking ----------------------------------------------------------
|
||||
# ref: https://sphinxcontrib-spelling.readthedocs.io/en/latest/customize.html#configuration-options
|
||||
#
|
||||
# The "sphinxcontrib.spelling" extension is optionally enabled if its available.
|
||||
#
|
||||
try:
|
||||
import sphinxcontrib.spelling # noqa
|
||||
import sphinxcontrib.spelling
|
||||
except ImportError:
|
||||
pass
|
||||
else:
|
||||
extensions.append("sphinxcontrib.spelling")
|
||||
spelling_word_list_filename = "spelling_wordlist.txt"
|
||||
|
||||
|
||||
# -- Options for HTML output -------------------------------------------------
|
||||
# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
|
||||
#
|
||||
html_logo = "_static/images/logo/logo.png"
|
||||
html_favicon = "_static/images/logo/favicon.ico"
|
||||
html_static_path = ["_static"]
|
||||
|
||||
html_theme = "jupyterhub_sphinx_theme"
|
||||
html_theme_options = {
|
||||
"icon_links": [
|
||||
{
|
||||
"name": "GitHub",
|
||||
"url": "https://github.com/jupyterhub/jupyterhub",
|
||||
"icon": "fa-brands fa-github",
|
||||
},
|
||||
],
|
||||
"use_edit_page_button": True,
|
||||
"navbar_align": "left",
|
||||
}
|
||||
html_context = {
|
||||
"github_user": "jupyterhub",
|
||||
"github_repo": "jupyterhub",
|
||||
"github_version": "main",
|
||||
"doc_path": "docs/source",
|
||||
}
|
||||
|
||||
|
||||
# -- Options for linkcheck builder -------------------------------------------
|
||||
# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-the-linkcheck-builder
|
||||
#
|
||||
linkcheck_ignore = [
|
||||
r"(.*)github\.com(.*)#", # javascript based anchors
|
||||
r"(.*)/#%21(.*)/(.*)", # /#!forum/jupyter - encoded anchor edge case
|
||||
r"https://github.com/[^/]*$", # too many github usernames / searches in changelog
|
||||
"https://github.com/jupyterhub/jupyterhub/pull/", # too many PRs in changelog
|
||||
"https://github.com/jupyterhub/jupyterhub/compare/", # too many comparisons in changelog
|
||||
r"https?://(localhost|127.0.0.1).*", # ignore localhost references in auto-links
|
||||
r".*/rest-api.html#.*", # ignore javascript-resolved internal rest-api links
|
||||
r"https://jupyter.chameleoncloud.org", # FIXME: ignore (presumably) short-term SSL issue
|
||||
]
|
||||
linkcheck_anchors_ignore = [
|
||||
"/#!",
|
||||
"/#%21",
|
||||
]
|
||||
|
||||
# -- Intersphinx -------------------------------------------------------------
|
||||
# ref: https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html#configuration
|
||||
#
|
||||
intersphinx_mapping = {
|
||||
"python": ("https://docs.python.org/3/", None),
|
||||
"tornado": ("https://www.tornadoweb.org/en/stable/", None),
|
||||
"jupyter-server": ("https://jupyter-server.readthedocs.io/en/stable/", None),
|
||||
"nbgitpuller": ("https://nbgitpuller.readthedocs.io/en/latest", None),
|
||||
}
|
||||
|
||||
# -- Options for the opengraph extension -------------------------------------
|
||||
# ref: https://github.com/wpilibsuite/sphinxext-opengraph#options
|
||||
#
|
||||
# ogp_site_url is set automatically by RTD
|
||||
ogp_image = "_static/logo.png"
|
||||
ogp_use_first_image = True
|
||||
|
||||
|
||||
# -- Options for the rediraffe extension -------------------------------------
|
||||
# ref: https://github.com/wpilibsuite/sphinxext-rediraffe#readme
|
||||
#
|
||||
# This extension helps us relocate content without breaking links. If a
|
||||
# document is moved internally, a redirect link should be configured as below to
|
||||
# help us not break links.
|
||||
#
|
||||
# The workflow for adding redirects can be as follows:
|
||||
# 1. Change "rediraffe_branch" below to point to the commit/ branch you
|
||||
# want to base off the changes.
|
||||
# 2. Option 1: run "make rediraffecheckdiff"
|
||||
# a. Analyze the output of this command.
|
||||
# b. Manually add the redirect entries to the "redirects.txt" file.
|
||||
# Option 2: run "make rediraffewritediff"
|
||||
# a. rediraffe will then automatically add the obvious redirects to redirects.txt.
|
||||
# b. Analyze the output of the command for broken links.
|
||||
# c. Check the "redirects.txt" file for any files that were moved/ renamed but are not listed.
|
||||
# d. Manually add the redirects that have been mised by the automatic builder to "redirects.txt".
|
||||
# Option 3: Do not use the commands above and, instead, do everything manually - by taking
|
||||
# note of the files you have moved or renamed and adding them to the "redirects.txt" file.
|
||||
#
|
||||
# If you are basing changes off another branch/ commit, always change back
|
||||
# rediraffe_branch to main before pushing your changes upstream.
|
||||
#
|
||||
rediraffe_branch = os.environ.get("REDIRAFFE_BRANCH", "main")
|
||||
rediraffe_redirects = "redirects.txt"
|
||||
|
||||
# allow 80% match for autogenerated redirects
|
||||
rediraffe_auto_redirect_perc = 80
|
||||
|
||||
# rediraffe_redirects = {
|
||||
# "old-file": "new-folder/new-file-name",
|
||||
# }
|
||||
spelling_word_list_filename = 'spelling_wordlist.txt'
|
||||
|
@@ -1,27 +0,0 @@
|
||||
# Community communication channels
|
||||
|
||||
We use different channels of communication for different purposes. Whichever one you use will depend on what kind of communication you want to engage in.
|
||||
|
||||
## Discourse (recommended)
|
||||
|
||||
We use [Discourse](https://discourse.jupyter.org) for online discussions and support questions.
|
||||
You can ask questions here if you are a first-time contributor to the JupyterHub project.
|
||||
Everyone in the Jupyter community is welcome to bring ideas and questions there.
|
||||
|
||||
We recommend that you first use our Discourse as all past and current discussions on it are archived and searchable. Thus, all discussions remain useful and accessible to the whole community.
|
||||
|
||||
## Gitter
|
||||
|
||||
We use [our Gitter channel](https://gitter.im/jupyterhub/jupyterhub) for online, real-time text chat; a place for more ephemeral discussions. When you're not on Discourse, you can stop here to have other discussions on the fly.
|
||||
|
||||
## GitHub Issues
|
||||
|
||||
[GitHub issues](https://docs.github.com/en/issues/tracking-your-work-with-issues/about-issues) are used for most long-form project discussions, bug reports, and feature requests.
|
||||
|
||||
- Issues related to a specific authenticator or spawner should be opened in the appropriate repository for the authenticator or spawner.
|
||||
- If you are using a specific JupyterHub distribution (such as [Zero to JupyterHub on Kubernetes](https://github.com/jupyterhub/zero-to-jupyterhub-k8s) or [The Littlest JupyterHub](https://github.com/jupyterhub/the-littlest-jupyterhub/)), you should open issues directly in their repository.
|
||||
- If you cannot find a repository to open your issue in, do not worry! Open the issue in the [main JupyterHub repository](https://github.com/jupyterhub/jupyterhub/) and our community will help you figure it out.
|
||||
|
||||
```{note}
|
||||
Our community is distributed across the world in various timezones, so please be patient if you do not get a response immediately!
|
||||
```
|
@@ -1,76 +0,0 @@
|
||||
(contributing-docs)=
|
||||
|
||||
# Contributing Documentation
|
||||
|
||||
Documentation is often more important than code. This page helps
|
||||
you get set up on how to contribute to JupyterHub's documentation.
|
||||
|
||||
## Building documentation locally
|
||||
|
||||
We use [sphinx](https://www.sphinx-doc.org) to build our documentation. It takes
|
||||
our documentation source files (written in [markdown](https://daringfireball.net/projects/markdown/) or [reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html) &
|
||||
stored under the `docs/source` directory) and converts them into various
|
||||
formats for people to read. To make sure the documentation you write or
|
||||
change renders correctly, it is good practice to test it locally.
|
||||
|
||||
1. Make sure you have successfully completed {ref}`contributing/setup`.
|
||||
|
||||
2. Install the packages required to build the docs.
|
||||
|
||||
```bash
|
||||
python3 -m pip install -r docs/requirements.txt
|
||||
```
|
||||
|
||||
3. Build the html version of the docs. This is the most commonly used
|
||||
output format, so verifying it renders correctly is usually good
|
||||
enough.
|
||||
|
||||
```bash
|
||||
cd docs
|
||||
make html
|
||||
```
|
||||
|
||||
This step will display any syntax or formatting errors in the documentation,
|
||||
along with the filename / line number in which they occurred. Fix them,
|
||||
and re-run the `make html` command to re-render the documentation.
|
||||
|
||||
4. View the rendered documentation by opening `_build/html/index.html` in
|
||||
a web browser.
|
||||
|
||||
:::{tip}
|
||||
**On Windows**, you can open a file from the terminal with `start <path-to-file>`.
|
||||
|
||||
**On macOS**, you can do the same with `open <path-to-file>`.
|
||||
|
||||
**On Linux**, you can do the same with `xdg-open <path-to-file>`.
|
||||
|
||||
After opening index.html in your browser you can just refresh the page whenever
|
||||
you rebuild the docs via `make html`.
|
||||
:::
|
||||
|
||||
(contributing-docs-conventions)=
|
||||
|
||||
## Documentation conventions
|
||||
|
||||
This section lists various conventions we use in our documentation. This is a
|
||||
living document that grows over time, so feel free to add to it / change it!
|
||||
|
||||
Our documentation does not yet fully conform to these conventions,
so help in making it so would be appreciated!
|
||||
|
||||
### `pip` invocation
|
||||
|
||||
There are many ways to invoke a `pip` command; we recommend the following
|
||||
approach:
|
||||
|
||||
```bash
|
||||
python3 -m pip
|
||||
```
|
||||
|
||||
This invokes pip explicitly using the python3 binary that you are
|
||||
currently using. This is the **recommended way** to invoke pip
|
||||
in our documentation, since it is least likely to cause problems
|
||||
with python3 and pip being from different environments.
|
||||
|
||||
For more information on how to invoke `pip` commands, see
|
||||
[the pip documentation](https://pip.pypa.io/en/stable/).
|
@@ -1,22 +0,0 @@
|
||||
# Contributing
|
||||
|
||||
We want you to contribute to JupyterHub in ways that are most exciting
|
||||
and useful to you. We value documentation, testing, bug reporting & code equally,
|
||||
and are glad to have your contributions in whatever form you wish.
|
||||
|
||||
Be sure to first check our [Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md)
|
||||
([reporting guidelines](https://github.com/jupyter/governance/blob/HEAD/conduct/reporting_online.md)), which help keep our community welcoming to as many people as possible.
|
||||
|
||||
This section covers information about our community, as well as ways that you can connect and get involved.
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 2
|
||||
|
||||
contributor-list
|
||||
community
|
||||
setup
|
||||
docs
|
||||
tests
|
||||
roadmap
|
||||
security
|
||||
```
|
@@ -1,95 +0,0 @@
|
||||
# The JupyterHub roadmap
|
||||
|
||||
This roadmap collects "next steps" for JupyterHub. It is about creating a
|
||||
shared understanding of the project's vision and direction amongst
|
||||
the community of users, contributors, and maintainers.
|
||||
The goal is to communicate priorities and upcoming release plans.
|
||||
It is not aimed at limiting contributions to what is listed here.
|
||||
|
||||
## Using the roadmap
|
||||
|
||||
### Sharing Feedback on the Roadmap
|
||||
|
||||
All of the community is encouraged to provide feedback as well as share new
|
||||
ideas with the community. Please do so by submitting an issue. If you want to
|
||||
have an informal conversation first, use one of the other communication channels.
|
||||
After submitting the issue, others from the community will probably
|
||||
respond with questions or comments they have to clarify the issue. The
|
||||
maintainers will help identify what a good next step is for the issue.
|
||||
|
||||
### What do we mean by "next step"?
|
||||
|
||||
When submitting an issue, think about what "next step" category best describes
|
||||
your issue:
|
||||
|
||||
- **now**, concrete/actionable step that is ready for someone to start work on.
|
||||
These might be items that link to an issue, or more abstract ones like
|
||||
"decrease typos and dead links in the documentation"
|
||||
- **soon**, less concrete/actionable step that is going to happen soon,
|
||||
discussions around the topic are coming close to an end at which point it can
|
||||
move into the "now" category
|
||||
- **later**, abstract ideas or tasks, need a lot of discussion or
|
||||
experimentation to shape the idea so that it can be executed. Can also
|
||||
contain concrete/actionable steps that have been postponed on purpose
|
||||
(these are steps that could be in "now" but the decision was taken to work on
|
||||
them later)
|
||||
|
||||
### Reviewing and Updating the Roadmap
|
||||
|
||||
The roadmap will get updated as time passes (next review by 1st December) based
|
||||
on discussions and ideas captured as issues.
|
||||
This means this list should not be exhaustive; it should only represent
the "top of the stack" of ideas. It should
not function as a wish list, a collection of feature requests, or a todo list.
|
||||
For those please create a
|
||||
[new issue](https://github.com/jupyterhub/jupyterhub/issues/new).
|
||||
|
||||
The roadmap should give the reader an idea of what is happening next, what needs
|
||||
input and discussion before it can happen and what has been postponed.
|
||||
|
||||
## The roadmap proper
|
||||
|
||||
### Project vision
|
||||
|
||||
JupyterHub is a dependable tool used by humans that reduces the complexity of
|
||||
creating the environment in which a piece of software can be executed.
|
||||
|
||||
### Now
|
||||
|
||||
These "Now" items are considered active areas of focus for the project:
|
||||
|
||||
- HubShare - a sharing service for use with JupyterHub.
|
||||
- Users should be able to:
|
||||
- Push a project to other users.
|
||||
- Get a checkout of a project from other users.
|
||||
- Push updates to a published project.
|
||||
- Pull updates from a published project.
|
||||
- Manage conflicts/merges by simply picking a version (our/theirs)
|
||||
- Get a checkout of a project from the internet. These steps are completely different from saving notebooks/files.
|
||||
- Have directories that are managed by git completely separately from our stuff.
|
||||
- Look at pushed content that they have access to without an explicit pull.
|
||||
- Define and manage teams of users.
|
||||
- Adding/removing a user to/from a team gives/removes them access to all projects that team has access to.
|
||||
- Build other services, such as static HTML publishing and dashboarding on top of these things.
|
||||
|
||||
### Soon
|
||||
|
||||
These "Soon" items are under discussion. Once an item reaches the point of an
|
||||
actionable plan, the item will be moved to the "Now" section. Typically,
|
||||
these will be moved at a future review of the roadmap.
|
||||
|
||||
- resource monitoring and management:
|
||||
- (prometheus?) API for resource monitoring
|
||||
- tracking activity on single-user servers instead of the proxy
|
||||
- notes and activity tracking per API token
|
||||
|
||||
### Later
|
||||
|
||||
The "Later" items are things that are at the back of the project's mind. At this
|
||||
time there is no active plan for an item. The project would like to find the
|
||||
resources and time to discuss these ideas.
|
||||
|
||||
- real-time collaboration
|
||||
- Enter into real-time collaboration mode for a project that starts a shared execution context.
|
||||
- Once the single-user notebook package supports realtime collaboration,
|
||||
implement sharing mechanism integrated into the Hub.
|
@@ -1,9 +0,0 @@
|
||||
# Reporting security issues in Jupyter or JupyterHub
|
||||
|
||||
If you find a security vulnerability in Jupyter or JupyterHub,
|
||||
whether it is a failure of the security model described in [Security Overview](web-security)
|
||||
or a failure in implementation,
|
||||
please report it to <mailto:security@ipython.org>.
|
||||
|
||||
If you prefer to encrypt your security reports,
|
||||
you can use {download}`this PGP public key </ipython_security.asc>`.
|
@@ -1,175 +0,0 @@
|
||||
(contributing/setup)=
|
||||
|
||||
# Setting up a development install
|
||||
|
||||
## System requirements
|
||||
|
||||
JupyterHub can only run on macOS or Linux operating systems. If you are
|
||||
using Windows, we recommend using [VirtualBox](https://virtualbox.org)
|
||||
or a similar system to run [Ubuntu Linux](https://ubuntu.com) for
|
||||
development.
|
||||
|
||||
### Install Python
|
||||
|
||||
JupyterHub is written in the [Python](https://python.org) programming language and
|
||||
requires you have at least version 3.6 installed locally. If you haven’t
|
||||
installed Python before, the recommended way to install it is to use
|
||||
[Miniforge](https://github.com/conda-forge/miniforge#download).
|
||||
|
||||
### Install nodejs
|
||||
|
||||
[NodeJS 12+](https://nodejs.org/en/) is required for building some JavaScript components.
|
||||
`configurable-http-proxy`, the default proxy implementation for JupyterHub, is written in Javascript.
|
||||
If you have not installed NodeJS before, we recommend installing it in the conda environment you set up for Python.
|
||||
You can do so with `conda install nodejs`.
|
||||
|
||||
Many in the Jupyter community use [`nvm`](https://github.com/nvm-sh/nvm) to
manage node dependencies.
|
||||
|
||||
### Install git
|
||||
|
||||
JupyterHub uses [Git](https://git-scm.com) & [GitHub](https://github.com)
|
||||
for development & collaboration. You need to [install git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) to work on
|
||||
JupyterHub. We also recommend getting a free account on GitHub.com.
|
||||
|
||||
## Setting up a development install
|
||||
|
||||
When developing JupyterHub, you need to make changes and instantly view their results. To achieve that, a development install is required.
|
||||
|
||||
:::{note}
|
||||
This guide does not attempt to dictate _how_ development
|
||||
environments should be isolated since that is a personal preference and can
|
||||
be achieved in many ways, for example, `tox`, `conda`, `docker`, etc. See this
|
||||
[forum thread](https://discourse.jupyter.org/t/thoughts-on-using-tox/3497) for
|
||||
a more detailed discussion.
|
||||
:::
|
||||
|
||||
1. Clone the [JupyterHub git repository](https://github.com/jupyterhub/jupyterhub)
|
||||
to your computer.
|
||||
|
||||
```bash
|
||||
git clone https://github.com/jupyterhub/jupyterhub
|
||||
cd jupyterhub
|
||||
```
|
||||
|
||||
2. Make sure the `python` you installed and the `npm` you installed
|
||||
are available to you on the command line.
|
||||
|
||||
```bash
|
||||
python -V
|
||||
```
|
||||
|
||||
This should return a version number greater than or equal to 3.6.
|
||||
|
||||
```bash
|
||||
npm -v
|
||||
```
|
||||
|
||||
This should return a version number greater than or equal to 5.0.
|
||||
|
||||
3. Install `configurable-http-proxy` (required to run and test the default JupyterHub configuration) and `yarn` (required to build some components):
|
||||
|
||||
```bash
|
||||
npm install -g configurable-http-proxy yarn
|
||||
```
|
||||
|
||||
If you get an error that says `Error: EACCES: permission denied`, you might need to prefix the command with `sudo`, since a system-wide install may require elevated permissions.
|
||||
If you do not have access to sudo, you may instead run the following commands:
|
||||
|
||||
```bash
|
||||
npm install configurable-http-proxy yarn
|
||||
export PATH=$PATH:$(pwd)/node_modules/.bin
|
||||
```
|
||||
|
||||
The second line needs to be run every time you open a new terminal.
|
||||
|
||||
If you are using conda you can instead run:
|
||||
|
||||
```bash
|
||||
conda install configurable-http-proxy yarn
|
||||
```
|
||||
|
||||
4. Install an editable version of JupyterHub and its requirements for
|
||||
development and testing. This lets you edit JupyterHub code in a text editor
|
||||
& restart the JupyterHub process to see your code changes immediately.
|
||||
|
||||
```bash
|
||||
python3 -m pip install --editable ".[test]"
|
||||
```
|
||||
|
||||
5. Set up a database.
|
||||
|
||||
The default database engine is `sqlite` so if you are just trying
|
||||
to get up and running quickly for local development that should be
|
||||
available via [Python's standard library](https://docs.python.org/3/library/sqlite3.html).
|
||||
See [The Hub's Database](hub-database) for details on other supported databases.
|
||||
|
||||
6. You are now ready to start JupyterHub!
|
||||
|
||||
```bash
|
||||
jupyterhub
|
||||
```
|
||||
|
||||
7. You can access JupyterHub from your browser at
|
||||
`http://localhost:8000` now.
|
||||
|
||||
Happy developing!
|
||||
|
||||
## Using DummyAuthenticator & SimpleLocalProcessSpawner
|
||||
|
||||
To simplify testing of JupyterHub, it is helpful to use
|
||||
{class}`~jupyterhub.auth.DummyAuthenticator` instead of the default JupyterHub
|
||||
authenticator and SimpleLocalProcessSpawner instead of the default spawner.
|
||||
|
||||
There is a sample configuration file that does this in
|
||||
`testing/jupyterhub_config.py`. To launch JupyterHub with this
|
||||
configuration:
|
||||
|
||||
```bash
|
||||
jupyterhub -f testing/jupyterhub_config.py
|
||||
```
|
||||
|
||||
The default JupyterHub [authenticator](PAMAuthenticator)
|
||||
& [spawner](LocalProcessSpawner)
|
||||
require your system to have user accounts for each user you want to log in to
|
||||
JupyterHub as.
|
||||
|
||||
DummyAuthenticator allows you to log in with any username & password,
|
||||
while SimpleLocalProcessSpawner allows you to start servers without having to
|
||||
create a Unix user for each JupyterHub user. Together, these make it
|
||||
much easier to test JupyterHub.
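If you want to enable these in your own configuration file, a minimal sketch looks like the following (the actual `testing/jupyterhub_config.py` may set additional options; the `"dummy"` and `"simple"` shortcut names are the entry points registered by recent JupyterHub versions):

```python
# jupyterhub_config.py -- minimal sketch; the shipped testing config may differ
c = get_config()  # noqa

# accept any username/password combination
c.JupyterHub.authenticator_class = "dummy"
# spawn servers as local processes without requiring per-user Unix accounts
c.JupyterHub.spawner_class = "simple"
```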
|
||||
|
||||
Tip: If you are working on parts of JupyterHub that are common to all
|
||||
authenticators & spawners, we recommend using both DummyAuthenticator &
|
||||
SimpleLocalProcessSpawner. If you are working on just authenticator-related
|
||||
parts, use only SimpleLocalProcessSpawner. Similarly, if you are working on
|
||||
just spawner-related parts, use only DummyAuthenticator.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
This section lists common ways setting up your development environment may
|
||||
fail, and how to fix them. Please add to the list if you encounter yet
|
||||
another way it can fail!
|
||||
|
||||
### `lessc` not found
|
||||
|
||||
If the `python3 -m pip install --editable .` command fails and complains about
|
||||
`lessc` being unavailable, you may need to explicitly install some
|
||||
additional JavaScript dependencies:
|
||||
|
||||
```bash
|
||||
npm install
|
||||
```
|
||||
|
||||
This will fetch client-side JavaScript dependencies necessary to compile
|
||||
CSS.
|
||||
|
||||
You may also need to manually update JavaScript and CSS after some
|
||||
development updates, with:
|
||||
|
||||
```bash
|
||||
python3 setup.py js # fetch updated client-side js
|
||||
python3 setup.py css # recompile CSS from LESS sources
|
||||
python3 setup.py jsx # build React admin app
|
||||
```
|
@@ -1,157 +0,0 @@
|
||||
(contributing-tests)=
|
||||
|
||||
# Testing JupyterHub and linting code
|
||||
|
||||
Unit tests help to validate that JupyterHub works the way we think it does,
|
||||
and continues to do so when changes occur. They also help communicate
|
||||
precisely what we expect our code to do.
|
||||
|
||||
JupyterHub uses [pytest](https://pytest.org) for all the tests. You
|
||||
can find them under the [jupyterhub/tests](https://github.com/jupyterhub/jupyterhub/tree/main/jupyterhub/tests) directory in the git repository.
|
||||
|
||||
## Running the tests
|
||||
|
||||
1. Make sure you have completed {ref}`contributing/setup`.
|
||||
Once you are done, you should be able to run `jupyterhub` from the command line and access it from your web browser.
|
||||
This ensures that the dev environment is properly set up for tests to run.
|
||||
|
||||
2. You can run all tests in JupyterHub
|
||||
|
||||
```bash
|
||||
pytest -v jupyterhub/tests
|
||||
```
|
||||
|
||||
This should display progress as it runs all the tests, printing
|
||||
information about any test failures as they occur.
|
||||
|
||||
If you wish to confirm test coverage, run the tests with the `--cov` flag:
|
||||
|
||||
```bash
|
||||
pytest -v --cov=jupyterhub jupyterhub/tests
|
||||
```
|
||||
|
||||
3. You can also run tests in just a specific file:
|
||||
|
||||
```bash
|
||||
pytest -v jupyterhub/tests/<test-file-name>
|
||||
```
|
||||
|
||||
4. To run a specific test only, you can do:
|
||||
|
||||
```bash
|
||||
pytest -v jupyterhub/tests/<test-file-name>::<test-name>
|
||||
```
|
||||
|
||||
This runs the test with function name `<test-name>` defined in
|
||||
`<test-file-name>`. This is very useful when you are iteratively
|
||||
developing a single test.
|
||||
|
||||
For example, to run the test `test_shutdown` in the file `test_api.py`,
|
||||
you would run:
|
||||
|
||||
```bash
|
||||
pytest -v jupyterhub/tests/test_api.py::test_shutdown
|
||||
```
|
||||
|
||||
For more details, refer to the [pytest usage documentation](https://pytest.readthedocs.io/en/latest/usage.html).
|
||||
|
||||
## Test organisation
|
||||
|
||||
The tests live in `jupyterhub/tests` and are organized roughly into:
|
||||
|
||||
1. `test_api.py` tests the REST API
|
||||
2. `test_pages.py` tests loading the HTML pages
|
||||
|
||||
and other collections of tests for different components.
|
||||
When writing a new test, there should usually be a test of
|
||||
similar functionality already written and related tests should
|
||||
be added nearby.
|
||||
|
||||
The fixtures live in `jupyterhub/tests/conftest.py`. There are
|
||||
fixtures that can be used for JupyterHub components, such as:
|
||||
|
||||
- `app`: an instance of JupyterHub with mocked parts
|
||||
- `auth_state_enabled`: enables persisting auth_state (like authentication tokens)
|
||||
- `db`: a sqlite in-memory DB session
|
||||
- `io_loop`: a Tornado event loop
|
||||
- `event_loop`: a new asyncio event loop
|
||||
- `user`: creates a new temporary user
|
||||
- `admin_user`: creates a new temporary admin user
|
||||
- single user servers
  - `cleanup_after`: allows cleanup of single user servers between tests
- mocked service
  - `MockServiceSpawner`: a spawner that mocks services for testing with a short poll interval
  - `mockservice`: mocked service with no external service url
  - `mockservice_url`: mocked service with a url to test external services
|
||||
|
||||
And fixtures to add functionality or spawning behavior:
|
||||
|
||||
- `admin_access`: grants admin access
|
||||
- `no_patience`: sets slow-spawning timeouts to zero
|
||||
- `slow_spawn`: enables the SlowSpawner (a spawner that takes a few seconds to start)
|
||||
- `never_spawn`: enables the NeverSpawner (a spawner that will never start)
|
||||
- `bad_spawn`: enables the BadSpawner (a spawner that fails immediately)
|
||||
- `slow_bad_spawn`: enables the SlowBadSpawner (a spawner that fails after a short delay)
|
||||
|
||||
Refer to the [pytest fixtures documentation](https://pytest.readthedocs.io/en/latest/fixture.html) to learn how to use existing fixtures and to create new ones.
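For illustration, a test using a few of these fixtures might look roughly like this (a sketch only; it assumes the `api_request` helper from `jupyterhub/tests/utils.py` and relies on the suite's async test setup described below):

```python
# test_example.py -- illustrative sketch, not part of the real test suite
from jupyterhub.tests.utils import api_request  # helper used throughout the test suite


async def test_user_model(app, user):
    # `app` is a mocked JupyterHub instance; `user` is a temporary user
    r = await api_request(app, "users", user.name)
    r.raise_for_status()
    assert r.json()["name"] == user.name
```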
|
||||
|
||||
### The Pytest-Asyncio Plugin
|
||||
|
||||
When testing the various JupyterHub components and their various implementations, it sometimes becomes necessary to have a running instance of JupyterHub to test against.
|
||||
The [`app`](https://github.com/jupyterhub/jupyterhub/blob/270b61992143b29af8c2fab90c4ed32f2f6fe209/jupyterhub/tests/conftest.py#L60) fixture mocks a JupyterHub application for use in testing by:
|
||||
|
||||
- enabling ssl if internal certificates are available
|
||||
- creating an instance of [MockHub](https://github.com/jupyterhub/jupyterhub/blob/270b61992143b29af8c2fab90c4ed32f2f6fe209/jupyterhub/tests/mocking.py#L221) using any provided configurations as arguments
|
||||
- initializing the mocked instance
|
||||
- starting the mocked instance
|
||||
- finally, a registered finalizer function performs a cleanup and stops the mocked instance
|
||||
|
||||
The JupyterHub test suite uses the [pytest-asyncio plugin](https://pytest-asyncio.readthedocs.io/en/latest/) that handles [event-loop](https://docs.python.org/3/library/asyncio-eventloop.html) integration in [Tornado](https://www.tornadoweb.org/en/stable/) applications. This allows for the use of top-level awaits when calling async functions or [fixtures](https://docs.pytest.org/en/6.2.x/fixture.html#what-fixtures-are) during testing. All test functions and fixtures labelled as `async` will run on the same event loop.
|
||||
|
||||
```{note}
|
||||
With the introduction of [top-level awaits](https://piccolo-orm.com/blog/top-level-await-in-python/), the use of the [`io_loop`](https://www.tornadoweb.org/en/stable/ioloop.html) fixture from the pytest-tornado plugin is no longer necessary. It was initially used to call coroutines. With the upgrades made to `pytest-asyncio`, this usage is now deprecated. It is now only utilized within the JupyterHub test suite to ensure complete cleanup of resources used during testing, such as open file descriptors. This is demonstrated in this [pull request](https://github.com/jupyterhub/jupyterhub/pull/4332).
|
||||
More information is provided below.
|
||||
```
|
||||
|
||||
One of the general goals of the [JupyterHub Pytest Plugin project](https://github.com/jupyterhub/pytest-jupyterhub) is to ensure the MockHub cleanup fully closes and stops all utilized resources during testing, so the use of the `io_loop` fixture for teardown is not necessary. This was highlighted in this [issue](https://github.com/jupyterhub/pytest-jupyterhub/issues/30).
|
||||
|
||||
For more information on asyncio and event-loops, here are some resources:
|
||||
|
||||
- **Read**: [Introduction to the Python event loop](https://www.pythontutorial.net/python-concurrency/python-event-loop)
|
||||
- **Read**: [Overview of Async IO in Python 3.7](https://stackabuse.com/overview-of-async-io-in-python-3-7)
|
||||
- **Watch**: [Asyncio: Understanding Async / Await in Python](https://www.youtube.com/watch?v=bs9tlDFWWdQ)
|
||||
- **Watch**: [Learn Python's AsyncIO #2 - The Event Loop](https://www.youtube.com/watch?v=E7Yn5biBZ58)
|
||||
|
||||
## Troubleshooting Test Failures
|
||||
|
||||
### All the tests are failing
|
||||
|
||||
Make sure you have completed all the steps in {ref}`contributing/setup` successfully, and are able to access JupyterHub from your browser at http://localhost:8000 after starting `jupyterhub` in your command line.
|
||||
|
||||
## Code formatting and linting
|
||||
|
||||
JupyterHub automatically enforces code formatting. This means that pull requests
|
||||
with changes breaking this formatting will receive a commit from pre-commit.ci
|
||||
automatically.
|
||||
|
||||
To automatically format code locally, you can install pre-commit and register a
_git hook_ that checks the formatting with pre-commit before you make a commit.
|
||||
|
||||
```bash
|
||||
pip install pre-commit
|
||||
pre-commit install --install-hooks
|
||||
```
|
||||
|
||||
To run pre-commit manually, run:
|
||||
|
||||
```bash
|
||||
# check for changes to code not yet committed
|
||||
pre-commit run
|
||||
|
||||
# check for changes also in already committed code
|
||||
pre-commit run --all-files
|
||||
```
|
||||
|
||||
You may also install [black integration](https://github.com/psf/black#editor-integration)
|
||||
into your text editor to format code automatically.
|
@@ -120,4 +120,3 @@ contribution on JupyterHub:
|
||||
- yuvipanda
|
||||
- zoltan-fedor
|
||||
- zonca
|
||||
- Neeraj Natu
|
@@ -1,308 +0,0 @@
|
||||
# Capacity planning
|
||||
|
||||
General capacity planning advice for JupyterHub is hard to give,
|
||||
because it depends almost entirely on what your users are doing,
|
||||
and what JupyterHub users do varies _wildly_ in terms of resource consumption.
|
||||
|
||||
**There is no single answer to "I have X users, what resources do I need?" or "How many users can I support with this machine?"**
|
||||
|
||||
Here are three _typical_ Jupyter use patterns that require vastly different resources:
|
||||
|
||||
- **Learning**: negligible resources because computation is mostly idle,
|
||||
e.g. students learning programming for the first time
|
||||
- **Production code**: very intense, sustained load, e.g. training machine learning models
|
||||
- **Bursting**: _mostly_ idle, but needs a lot of resources for short periods of time
|
||||
(interactive research often looks like this)
|
||||
|
||||
But just because there's no single answer doesn't mean we can't help.
|
||||
So we have gathered here some useful information to help you make your decisions
|
||||
about what resources you need based on how your users work,
|
||||
including the relative invariants in terms of resources that JupyterHub itself needs.
|
||||
|
||||
## JupyterHub infrastructure
|
||||
|
||||
JupyterHub consists of a few components that are always running.
|
||||
These take up very little resources,
|
||||
especially relative to the resources consumed by users when you have more than a few.
|
||||
|
||||
As an example, an instance of mybinder.org (running JupyterHub 1.5.0),
|
||||
running with typically ~100-150 users has:
|
||||
|
||||
| Component | CPU (mean/peak) | Memory (mean/peak) |
|
||||
| --------- | --------------- | ------------------ |
|
||||
| Hub | 4% / 13% | (230 MB / 260 MB) |
|
||||
| Proxy | 6% / 13% | (47 MB / 65 MB) |
|
||||
|
||||
So it would be pretty generous to allocate ~25% of one CPU core
|
||||
and ~500MB of RAM to overall JupyterHub infrastructure.
|
||||
|
||||
The rest is going to be up to your users.
|
||||
Per-user overhead from JupyterHub is typically negligible
|
||||
up to at least a few hundred concurrent active users.
|
||||
|
||||
```{figure} /images/mybinder-hub-components-cpu-memory.png
|
||||
JupyterHub component resource usage for mybinder.org.
|
||||
```
|
||||
|
||||
## Factors to consider
|
||||
|
||||
### Static vs elastic resources
|
||||
|
||||
A big factor in planning resources is:
|
||||
**how much does it cost to change your mind?**
|
||||
If you are using a single shared machine with local storage,
|
||||
migrating to a new one because it turns out your users don't fit might be very costly.
|
||||
You will have to get a new machine, set it up, and maybe even migrate user data.
|
||||
|
||||
On the other hand, if you are using ephemeral resources,
|
||||
such as node pools in Kubernetes,
|
||||
changing resource types costs close to nothing
|
||||
because nodes can automatically be added or removed as needed.
|
||||
|
||||
Take that cost into account when you are picking how much memory or cpu to allocate to users.
|
||||
|
||||
Static resources (like [the-littlest-jupyterhub][]) provide more **stable, predictable costs**,
while elastic resources (like [zero-to-jupyterhub][]) tend to provide **lower overall costs**
(especially when deployed with monitoring that allows cost optimization over time),
but are **less predictable**.
|
||||
|
||||
[the-littlest-jupyterhub]: https://the-littlest-jupyterhub.readthedocs.io
|
||||
|
||||
[zero-to-jupyterhub]: https://z2jh.jupyter.org
|
||||
|
||||
(limits-requests)=
|
||||
|
||||
### Limit vs Request for resources
|
||||
|
||||
Many scheduling tools like Kubernetes have two separate ways of allocating resources to users.
|
||||
A **Request** or **Reservation** describes how much resources are _set aside_ for each user.
|
||||
Often, this doesn't have any practical effect other than deciding when a given machine is considered 'full'.
|
||||
If you are using expandable resources like an autoscaling Kubernetes cluster,
|
||||
a new node must be launched and added to the pool if you 'request' more resources than fit on currently running nodes (a cluster **scale-up event**).
|
||||
If you are running on a single VM, this describes how many users you can run at the same time, full stop.
|
||||
|
||||
A **Limit**, on the other hand, enforces a limit to how much resources any given user can consume.
|
||||
For more information on what happens when users try to exceed their limits, see [](oversubscription).
|
||||
|
||||
In the strictest, safest case, you can have these two numbers be the same.
|
||||
That means that each user is _limited_ to fit within the resources allocated to it.
|
||||
This avoids **[oversubscription](oversubscription)** of resources (allowing use of more than you have available),
|
||||
at the expense (in a literal, this-costs-money sense) of reserving lots of usually-idle capacity.
|
||||
|
||||
However, you often find that a small fraction of users use more resources than others.
|
||||
In this case you may give users limits that _go beyond the amount of resources requested_.
|
||||
This is called **oversubscribing** the resources available to users.
|
||||
|
||||
Having a gap between the request and the limit means you can fit a number of _typical_ users on a node (based on the request),
|
||||
but still limit how much a runaway user can gobble up for themselves.
|
||||
|
||||
(oversubscription)=
|
||||
|
||||
### Oversubscribed CPU is okay, running out of memory is bad
|
||||
|
||||
An important consideration when assigning resources to users is: **What happens when users need more than I've given them?**
|
||||
|
||||
A good summary to keep in mind:
|
||||
|
||||
> When tasks don't get enough CPU, things are slow.
|
||||
> When they don't get enough memory, things are broken.
|
||||
|
||||
This means it's **very important that users have enough memory**,
|
||||
but much less important that they always have exclusive access to all the CPU they can use.
|
||||
|
||||
This relates to [Limits and Requests](limits-requests),
|
||||
because these are the consequences of your limits and/or requests not matching what users actually try to use.
|
||||
|
||||
A table of mismatched resource allocation situations and their consequences:
|
||||
|
||||
| issue | consequence |
|
||||
| -------------------------------------------------------- | ------------------------------------------------------------------------------------- |
|
||||
| Requests too high | Unnecessarily high cost and/or low capacity. |
|
||||
| CPU limit too low | Poor performance experienced by users |
|
||||
| CPU oversubscribed (too-low request + too-high limit) | Poor performance across the system; may crash, if severe |
|
||||
| Memory limit too low | Servers killed by Out-of-Memory Killer (OOM); lost work for users |
|
||||
| Memory oversubscribed (too-low request + too-high limit) | System memory exhaustion - all kinds of hangs and crashes and weird errors. Very bad. |
|
||||
|
||||
Note that the 'oversubscribed' problem case is where the request is lower than _typical_ usage,
|
||||
meaning that the total reserved resources isn't enough for the total _actual_ consumption.
|
||||
This doesn't mean that _all_ your users exceed the request,
|
||||
just that the _limit_ gives enough room for the _average_ user to exceed the request.
|
||||
|
||||
All of these considerations are important _per node_.
|
||||
Larger nodes mean more users per node, and therefore more users to average over.
|
||||
It also means more chances for multiple outliers on the same node.
|
||||
|
||||
### Example case for oversubscribing memory
|
||||
|
||||
Take for example, this system and sampling of user behavior:
|
||||
|
||||
- System memory = 8G
|
||||
- memory request = 1G, limit = 3G
|
||||
- typical 'heavy' user: 2G
|
||||
- typical 'light' user: 0.5G
|
||||
|
||||
This will assign 8 users to those 8G of RAM (remember: only requests are used for deciding when a machine is 'full').
|
||||
As long as the total of 8 users _actual_ usage is under 8G, everything is fine.
|
||||
But the _limit_ allows a total of 24G to be used,
|
||||
which would be a mess if everyone used their full limit.
|
||||
But _not_ everyone uses the full limit, which is the point!
|
||||
|
||||
This pattern is fine if 1/8 of your users are 'heavy' because _typical_ usage will be ~0.7G,
|
||||
and your total usage will be ~5.5G (`1 × 2 + 7 × 0.5 = 5.5`).
|
||||
|
||||
But if _50%_ of your users are 'heavy' you have a problem because that means your users will be trying to use 10G (`4 × 2 + 4 × 0.5 = 10`),
|
||||
which you don't have.
|
||||
|
||||
You can make guesses at these numbers, but the only _real_ way to get them is to measure (see [](measuring)).
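As a sanity check, the arithmetic above is easy to play with directly (an illustrative sketch, not a JupyterHub feature):

```python
# back-of-the-envelope capacity check for the example above
system_memory_gb = 8
n_users = 8  # set by the 1G request on an 8G machine
heavy_gb, light_gb = 2.0, 0.5


def expected_usage_gb(heavy_fraction):
    heavy = round(n_users * heavy_fraction)
    return heavy * heavy_gb + (n_users - heavy) * light_gb


print(expected_usage_gb(1 / 8))  # 5.5 -> fits comfortably in 8G
print(expected_usage_gb(0.5))    # 10.0 -> exceeds the 8G you actually have
```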
|
||||
|
||||
### CPU:memory ratio
|
||||
|
||||
Most of the time, you'll find that only one resource is the limiting factor for your users.
|
||||
Most often it's memory, but for certain tasks, it could be CPU (or even GPUs).
|
||||
|
||||
Many cloud deployments have just one or a few fixed ratios of cpu to memory
|
||||
(e.g. 'general purpose', 'high memory', and 'high cpu').
|
||||
Setting your secondary resource allocation according to this ratio
|
||||
after selecting the more important limit results in a balanced resource allocation.
|
||||
|
||||
For instance, some of Google Cloud's ratios are:
|
||||
|
||||
| node type | GB RAM / CPU core |
|
||||
| ----------- | ----------------- |
|
||||
| n2-highmem | 8 |
|
||||
| n2-standard | 4 |
|
||||
| n2-highcpu | 1 |
|
||||
|
||||
(idleness)=
|
||||
|
||||
### Idleness
|
||||
|
||||
Jupyter being an interactive tool means people tend to spend a lot more time reading and thinking than actually running resource-intensive code.
|
||||
This significantly affects how much _cpu_ resources a typical active user needs,
|
||||
but often does not significantly affect the _memory_.
|
||||
|
||||
Ways to think about this:
|
||||
|
||||
- More idle users means unused CPU.
|
||||
This generally means setting your CPU _limit_ higher than your CPU _request_.
|
||||
- What do your users do when they _are_ running code?
|
||||
Is it typically single-threaded local computation in a notebook?
|
||||
If so, there's little reason to set a limit higher than 1 CPU core.
|
||||
- Do typical computations take a long time, or just a few seconds?
|
||||
Longer typical computations means it's more likely for users to be trying to use the CPU at the same moment,
|
||||
suggesting a higher _request_.
|
||||
- Even with idle users, parallel computation adds up quickly - one user fully loading 4 cores and 3 using almost nothing still averages to more than a full CPU core per user.
|
||||
- Long-running intense computations suggest higher requests.
|
||||
|
||||
Again, using mybinder.org as an example—we run around 100 users on 8-core nodes,
|
||||
and still see fairly _low_ overall CPU usage on each user node.
|
||||
The limit here is actually Kubernetes' pods per node, not memory _or_ CPU.
|
||||
This is likely an extreme case, as many Binder users come from clicking links on webpages
|
||||
without any actual intention of running code.
|
||||
|
||||
```{figure} /images/mybinder-load5.png
|
||||
mybinder.org node CPU usage is low with 50-150 users sharing just 8 cores
|
||||
```
|
||||
|
||||
### Concurrent users and culling idle servers
|
||||
|
||||
Related to [](idleness), all of these resource consumptions and limits are calculated based on **concurrently active users**,
|
||||
not total users.
|
||||
You might have 10,000 users of your JupyterHub deployment, but only 100 of them running at any given time.
|
||||
That 100 is the main number you need to use for your capacity planning.
|
||||
JupyterHub costs scale very little based on the number of _total_ users,
|
||||
up to a point.
|
||||
|
||||
There are two important definitions for **active user**:
|
||||
|
||||
- Are they _actually_ there (i.e. a human interacting with Jupyter, or running code that might be )
|
||||
- Is their server running (this is where resource reservations and limits are actually applied)
|
||||
|
||||
Connecting those two definitions (how long are servers running if their humans aren't using them) is an important area of deployment configuration, usually implemented via the [JupyterHub idle culler service][idle-culler].
|
||||
|
||||
[idle-culler]: https://github.com/jupyterhub/jupyterhub-idle-culler
|
||||
|
||||
There are a lot of considerations when it comes to culling idle users, and the right choices will depend on your deployment:
|
||||
|
||||
- How much does it save me to shut down user servers? (e.g. keeping an elastic cluster small, or keeping a fixed-size deployment available to active users)
|
||||
- How much does it cost my users to have their servers shut down? (e.g. lost work if shutdown prematurely)
|
||||
- How easy do I want it to be for users to keep their servers running? (e.g. Do they want to run unattended simulations overnight? Do you want them to?)
|
||||
|
||||
Like many other things in this guide, there are many correct answers leading to different configuration choices.
|
||||
For more detail on culling configuration and considerations, consult the [JupyterHub idle culler documentation][idle-culler].
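For reference, registering the culler as a Hub-managed service usually looks roughly like the following (a sketch based on the idle culler's README; consult that documentation for the authoritative, version-appropriate configuration):

```python
# jupyterhub_config.py -- sketch of enabling the idle culler (JupyterHub 2.0+ style roles)
import sys

c.JupyterHub.load_roles = [
    {
        "name": "jupyterhub-idle-culler-role",
        "scopes": ["list:users", "read:users:activity", "read:servers", "delete:servers"],
        "services": ["jupyterhub-idle-culler-service"],
    }
]

c.JupyterHub.services = [
    {
        "name": "jupyterhub-idle-culler-service",
        # cull servers idle for more than an hour
        "command": [sys.executable, "-m", "jupyterhub_idle_culler", "--timeout=3600"],
    }
]
```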
|
||||
|
||||
## More tips
|
||||
|
||||
### Start strict and generous, then measure
|
||||
|
||||
A good tip, in general, is to give your users as many resources as you can afford that you think they _might_ use.
|
||||
Then, use resource usage metrics like prometheus to analyze what your users _actually_ need,
|
||||
and tune accordingly.
|
||||
Remember: **Limits affect your user experience and stability. Requests mostly affect your costs**.
|
||||
|
||||
For example, a sensible starting point (lacking any other information) might be:
|
||||
|
||||
```yaml
|
||||
request:
|
||||
cpu: 0.5
|
||||
mem: 2G
|
||||
limit:
|
||||
cpu: 1
|
||||
mem: 2G
|
||||
```
|
||||
|
||||
(more memory if significant computations are likely - machine learning models, data analysis, etc.)
|
||||
|
||||
Some actions:
|
||||
|
||||
- If you see out-of-memory killer events, increase the limit (or talk to your users!)
|
||||
- If you see typical memory well below your limit, reduce the request (but not the limit)
|
||||
- If _nobody_ uses that much memory, reduce your limit
|
||||
- If CPU is your limiting scheduling factor and your CPUs are mostly idle,
|
||||
reduce the cpu request (maybe even to 0!).
|
||||
- If CPU usage continues to be low, increase the limit to 2 or 4 to allow bursts of parallel execution.
|
||||
|
||||
(measuring)=
|
||||
|
||||
### Measuring user resource consumption
|
||||
|
||||
It is _highly_ recommended to deploy monitoring services such as [Prometheus][]
|
||||
and [Grafana][] to get a view of your users' resource usage.
|
||||
This is the only way to truly know what your users need.
|
||||
|
||||
JupyterHub has some experimental [grafana dashboards][] you can use as a starting point,
|
||||
to keep an eye on your resource usage.
|
||||
Here are some sample charts (again from mybinder.org),
|
||||
showing >90% of users using less than 10% CPU and 200MB,
|
||||
but a few outliers near the limit of 1 CPU and 2GB of RAM.
|
||||
This is the kind of information you can use to tune your requests and limits.
|
||||
|
||||

|
||||
|
||||
[prometheus]: https://prometheus.io
|
||||
[grafana]: https://grafana.com
|
||||
[grafana dashboards]: https://github.com/jupyterhub/grafana-dashboards
|
||||
|
||||
### Measuring costs
|
||||
|
||||
Measuring costs may be as important as measuring your users' activity.
|
||||
If you are using a cloud provider, you can often use cost thresholds and quotas to instruct them to notify you if your costs are too high,
|
||||
e.g. "Have AWS send me an email if I hit X spending trajectory on week 3 of the month."
|
||||
You can then use this information to tune your resources based on what you can afford.
|
||||
You can mix this information with user resource consumption to figure out if you have a problem,
|
||||
e.g. "my users really do need X resources, but I can only afford to give them 80% of X."
|
||||
This information may prove useful when asking your budget-approving folks for more funds.
|
||||
|
||||
### Additional resources
|
||||
|
||||
There are lots of other resources for cost and capacity planning that may be specific to JupyterHub and/or your cloud provider.
|
||||
|
||||
Here are some useful links to other resources:
|
||||
|
||||
- [Zero to JupyterHub](https://z2jh.jupyter.org) documentation on
|
||||
- [projecting costs](https://z2jh.jupyter.org/en/latest/administrator/cost.html)
|
||||
- [configuring user resources](https://z2jh.jupyter.org/en/latest/jupyterhub/customizing/user-resources.html)
|
||||
- Cloud platform cost calculators:
|
||||
- [Google Cloud](https://cloud.google.com/products/calculator/)
|
||||
- [Amazon AWS](https://calculator.aws)
|
||||
- [Microsoft Azure](https://azure.microsoft.com/en-us/pricing/calculator/)
|
@@ -1,183 +0,0 @@
|
||||
(hub-database)=
|
||||
|
||||
# The Hub's Database
|
||||
|
||||
JupyterHub uses a database to store information about users, services, and other data needed for operating the Hub.
|
||||
This is the **state** of the Hub.
|
||||
|
||||
## Why does JupyterHub have a database?
|
||||
|
||||
JupyterHub is a **stateful** application (more on that 'state' later).
|
||||
Updating JupyterHub's configuration or upgrading the version of JupyterHub requires restarting the JupyterHub process to apply the changes.
|
||||
We want to minimize the disruption caused by restarting the Hub process, so it can be a mundane, frequent, routine activity.
|
||||
Storing state information outside the process for later retrieval is necessary for this, and is one of the main things databases are for.
|
||||
|
||||
A lot of the operations in JupyterHub are also **relationships**, which is exactly what SQL databases are great at.
|
||||
For example:
|
||||
|
||||
- Given an API token, what user is making the request?
|
||||
- Which users don't have running servers?
|
||||
- Which servers belong to user X?
|
||||
- Which users have not been active in the last 24 hours?
|
||||
|
||||
Finally, a database allows us to have more information stored without needing it all loaded in memory,
|
||||
e.g. supporting a large number (several thousands) of inactive users.
|
||||
|
||||
## What's in the database?
|
||||
|
||||
The short answer of what's in the JupyterHub database is "everything."
|
||||
JupyterHub's **state** lives in the database.
|
||||
That is, everything JupyterHub needs to be aware of to function that _doesn't_ come from the configuration files, such as
|
||||
|
||||
- users, roles, role assignments
|
||||
- state, urls of running servers
|
||||
- Hashed API tokens
|
||||
- Short-lived state related to OAuth flow
|
||||
- Timestamps for when users, tokens, and servers were last used
|
||||
|
||||
### What's _not_ in the database
|
||||
|
||||
Not _quite_ all of JupyterHub's state is in the database.
|
||||
This mostly involves transient state, such as the 'pending' transitions of Spawners (starting, stopping, etc.).
|
||||
Anything not in the database must be reconstructed on Hub restart, and the only sources of information to do that are the database and JupyterHub configuration file(s).
|
||||
|
||||
## How does JupyterHub use the database?
|
||||
|
||||
JupyterHub makes some _unusual_ choices in how it connects to the database.
|
||||
These choices represent trade-offs favoring single-process simplicity and performance at the expense of horizontal scalability (multiple Hub instances).
|
||||
|
||||
We often say that the Hub 'owns' the database.
|
||||
This ownership means that we assume the Hub is the only process that will talk to the database.
|
||||
This assumption enables us to make several caching optimizations that dramatically improve JupyterHub's performance (i.e. data written recently to the database can be read from memory instead of fetched again from the database) that would not work if multiple processes could be interacting with the database at the same time.
|
||||
|
||||
Database operations are also synchronous, so while JupyterHub is waiting on a database operation, it cannot respond to other requests.
|
||||
This allows us to avoid complex locking mechanisms, because transaction races can only occur during an `await`, so we only need to make sure we've completed any given transaction before the next `await` in a given request.
|
||||
|
||||
:::{note}
|
||||
We are slowly working to remove these assumptions, and moving to a more traditional db session per-request pattern.
|
||||
This will enable multiple Hub instances and enable scaling JupyterHub, but will significantly reduce the number of active users a single Hub instance can serve.
|
||||
:::
|
||||
|
||||
### Database performance in a typical request
|
||||
|
||||
Most authenticated requests to JupyterHub involve a few database transactions:
|
||||
|
||||
1. look up the authenticated user (e.g. look up token by hash, then resolve owner and permissions)
|
||||
2. record activity
|
||||
3. perform any relevant changes involved in processing the request (e.g. create the records for a running server when starting one)
|
||||
|
||||
This means that the database is involved in almost every request, but only in quite small, simple queries, e.g.:
|
||||
|
||||
- lookup one token by hash
|
||||
- lookup one user by name
|
||||
- list tokens or servers for one user (typically 1-10)
|
||||
- etc.
|
||||
|
||||
### The database as a limiting factor
|
||||
|
||||
As a result of the above transactions in most requests, database performance is the _leading_ factor in JupyterHub's baseline requests-per-second performance, but that cost does not scale significantly with the number of users, active or otherwise.
|
||||
However, the database is _rarely_ a limiting factor in JupyterHub performance in a practical sense, because the main thing JupyterHub does is start, stop, and monitor whole servers, which take far more time than any small database transaction, no matter how many records you have or how slow your database is (within reason).
|
||||
Additionally, there is usually _very_ little load on the database itself.
|
||||
|
||||
By far the most taxing activity on the database is the 'list all users' endpoint, primarily used by the [idle-culling service](https://github.com/jupyterhub/jupyterhub-idle-culler).
|
||||
Database-based optimizations have been added to make even these operations feasible for large numbers of users:
|
||||
|
||||
1. State filtering on [GET /hub/api/users?state=active](../reference/rest-api.html#/default/get_users){.external},
|
||||
which limits the number of results in the query to only the relevant subset (added in JupyterHub 1.3), rather than all users.
|
||||
2. [Pagination](api-pagination) of all list endpoints, allowing the request of a large number of resources to be more fairly balanced with other Hub activities across multiple requests (added in 2.0).
|
||||
|
||||
:::{note}
|
||||
It's important to note, when discussing performance and limiting factors, that all of this only applies to requests to `/hub/...`.
|
||||
The Hub and its database are not involved in most requests to single-user servers (`/user/...`), which is by design, and largely motivated by the fact that the Hub itself doesn't _need_ to be fast because its operations are infrequent and large.
|
||||
:::
|
||||
|
||||
## Database backends
|
||||
|
||||
JupyterHub supports a variety of database backends via [SQLAlchemy][].
|
||||
The default is sqlite, which works great for many cases, but you should be able to use many backends supported by SQLAlchemy.
|
||||
Usually, this will mean PostgreSQL or MySQL, both of which are officially supported and well tested with JupyterHub, but others may work as well.
|
||||
See [SQLAlchemy's docs][sqlalchemy-dialect] for how to connect to different database backends.
|
||||
Doing so generally involves:
|
||||
|
||||
1. installing a Python package that provides a client implementation, and
|
||||
2. setting [](JupyterHub.db_url) to connect to your database with the specified implementation
|
||||
|
||||
[sqlalchemy-dialect]: https://docs.sqlalchemy.org/en/20/dialects/
|
||||
[sqlalchemy]: https://www.sqlalchemy.org
|
||||
|
||||
### Default backend: SQLite
|
||||
|
||||
The default database backend for JupyterHub is [SQLite](https://sqlite.org).
|
||||
We have chosen SQLite as JupyterHub's default because it's simple (the 'database' is a single file) and ubiquitous (it is in the Python standard library).
|
||||
It works very well for testing, small deployments, and workshops.
|
||||
|
||||
For production systems, SQLite has some disadvantages when used with JupyterHub:
|
||||
|
||||
- `upgrade-db` may not always work, and you may need to start with a fresh database
|
||||
- `downgrade-db` **will not** work if you want to rollback to an earlier
|
||||
version, so backup the `jupyterhub.sqlite` file before upgrading (JupyterHub automatically creates a date-stamped backup file when upgrading sqlite)
|
||||
|
||||
The sqlite documentation provides a helpful page about [when to use SQLite and
|
||||
where traditional RDBMS may be a better choice](https://sqlite.org/whentouse.html).
|
||||
|
||||
### Picking your database backend (PostgreSQL, MySQL)
|
||||
|
||||
When running a long term deployment or a production system, we recommend using a full-fledged relational database, such as [PostgreSQL](https://www.postgresql.org) or [MySQL](https://www.mysql.com), that supports the SQL `ALTER TABLE` statement, which is used in some database upgrade steps.
|
||||
|
||||
In general, you select your database backend with [](JupyterHub.db_url), and can further configure it (usually not necessary) with [](JupyterHub.db_kwargs).
|
||||
|
||||
## Notes and Tips
|
||||
|
||||
### SQLite
|
||||
|
||||
The SQLite database should not be used on NFS. SQLite uses reader/writer locks
|
||||
to control access to the database. This locking mechanism might not work
|
||||
correctly if the database file is kept on an NFS filesystem. This is because
|
||||
`fcntl()` file locking is broken on many NFS implementations. Therefore, you
|
||||
should avoid putting SQLite database files on NFS, since it does not cope well with
multiple processes that might try to access the file at the same time.
|
||||
|
||||
### PostgreSQL
|
||||
|
||||
We recommend using PostgreSQL for production if you are unsure whether to use
|
||||
MySQL or PostgreSQL or if you do not have a strong preference.
|
||||
There is additional configuration required for MySQL that is not needed for PostgreSQL.
|
||||
|
||||
For example, to connect to a postgres database with psycopg2:
|
||||
|
||||
1. install psycopg2: `pip install psycopg2` (or `psycopg2-binary` to avoid compilation, which is [not recommended for production][psycopg2-binary])
|
||||
2. set authentication via environment variables `PGUSER` and `PGPASSWORD`
|
||||
3. configure [](JupyterHub.db_url):
|
||||
|
||||
```python
|
||||
c.JupyterHub.db_url = "postgres+psycopg2://my-postgres-server:5432/my-db-name"
|
||||
```
|
||||
|
||||
[psycopg2-binary]: https://www.psycopg.org/docs/install.html#psycopg-vs-psycopg-binary
|
||||
|
||||
### MySQL / MariaDB
|
||||
|
||||
- You should probably use the `pymysql` or `mysqlclient` sqlalchemy provider, or another backend [recommended by sqlalchemy](https://docs.sqlalchemy.org/en/20/dialects/mysql.html#dialect-mysql)
|
||||
- You also need to set `pool_recycle` to some value (typically 60 - 300, JupyterHub will default to 60)
|
||||
which depends on your MySQL setup. This is necessary since MySQL kills
|
||||
connections serverside if they've been idle for a while, and the connection
|
||||
from the hub will be idle for longer than most connections. This behavior
|
||||
will lead to frustrating 'the connection has gone away' errors from
|
||||
sqlalchemy if `pool_recycle` is not set (a sketch of setting this is shown after this list).
|
||||
- If you use `utf8mb4` collation with MySQL earlier than 5.7.7 or MariaDB
|
||||
earlier than 10.2.1 you may get an `1709, Index column size too large` error.
|
||||
To fix this you need to set `innodb_large_prefix` to enabled and
|
||||
`innodb_file_format` to `Barracuda` to allow for the index sizes jupyterhub
|
||||
uses. `row_format` will be set to `DYNAMIC` as long as those options are set
|
||||
correctly. Later versions of MariaDB and MySQL should set these values by
|
||||
default, as well as have a default `DYNAMIC` `row_format` and pose no trouble
|
||||
to users.
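If you need to set `pool_recycle` yourself, it can be passed through [](JupyterHub.db_kwargs), which JupyterHub forwards to SQLAlchemy's engine; a minimal sketch:

```python
# keep MySQL connections fresh; JupyterHub already defaults pool_recycle to 60 for MySQL
c.JupyterHub.db_kwargs = {"pool_recycle": 300}
```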
|
||||
|
||||
For example, to connect to a mysql database with mysqlclient:
|
||||
|
||||
1. install mysqlclient: `pip install mysqlclient`
|
||||
2. configure [](JupyterHub.db_url):
|
||||
|
||||
```python
|
||||
c.JupyterHub.db_url = "mysql+mysqldb://myuser:mypassword@my-sql-server:3306/my-db-name"
|
||||
```
|
@@ -1,14 +0,0 @@
|
||||
# Explanation
|
||||
|
||||
_Explanation_ documentation provide big-picture descriptions of how JupyterHub works. This section is meant to build your understanding of particular topics.
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 1
|
||||
|
||||
capacity-planning
|
||||
database
|
||||
websecurity
|
||||
oauth
|
||||
singleuser
|
||||
../rbac/index
|
||||
```
|
@@ -1,373 +0,0 @@
|
||||
# JupyterHub and OAuth
|
||||
|
||||
JupyterHub uses [OAuth 2](https://oauth.net/2/) as an internal mechanism for authenticating users.
|
||||
As such, JupyterHub itself always functions as an OAuth **provider**.
|
||||
You can find out more about what that means [below](oauth-terms).
|
||||
|
||||
Additionally, JupyterHub is _often_ deployed with [OAuthenticator](https://oauthenticator.readthedocs.io),
|
||||
where an external identity provider, such as GitHub or KeyCloak, is used to authenticate users.
|
||||
When this is the case, there are _two_ nested OAuth flows:
|
||||
an _internal_ OAuth flow where JupyterHub is the **provider**,
|
||||
and an _external_ OAuth flow, where JupyterHub is the **client**.
|
||||
|
||||
This means that when you are using JupyterHub, there is always _at least one_ and often two layers of OAuth involved in a user logging in and accessing their server.
|
||||
|
||||
The following points are noteworthy:
|
||||
|
||||
- Single-user servers _never_ need to communicate with or be aware of the upstream provider configured in your Authenticator.
|
||||
As far as the servers are concerned, only JupyterHub is an OAuth provider,
|
||||
and how users authenticate with the Hub itself is irrelevant.
|
||||
- When interacting with a single-user server,
|
||||
there are ~always two tokens:
|
||||
first, a token issued to the server itself to communicate with the Hub API,
|
||||
and second, a per-user token in the browser to represent the completed login process and authorized permissions.
|
||||
More on this [later](two-tokens).
|
||||
|
||||
(oauth-terms)=
|
||||
|
||||
## Key OAuth terms
|
||||
|
||||
Here are some key definitions to keep in mind when we are talking about OAuth.
|
||||
You can also read more in detail [here](https://www.oauth.com/oauth2-servers/definitions/).
|
||||
|
||||
- **provider**: The entity responsible for managing identity and authorization;
|
||||
always a web server.
|
||||
JupyterHub is _always_ an OAuth provider for JupyterHub's components.
|
||||
When OAuthenticator is used, an external service, such as GitHub or KeyCloak, is also an OAuth provider.
|
||||
- **client**: An entity that requests OAuth **tokens** on a user's behalf;
|
||||
generally a web server of some kind.
|
||||
OAuth **clients** are services that _delegate_ authentication and/or authorization
|
||||
to an OAuth **provider**.
|
||||
JupyterHub _services_ or single-user _servers_ are OAuth **clients** of the JupyterHub **provider**.
|
||||
When OAuthenticator is used, JupyterHub is itself _also_ an OAuth **client** for the external OAuth **provider**, e.g. GitHub.
|
||||
- **browser**: A user's web browser, which makes requests and stores things like cookies.
|
||||
- **token**: The secret value used to represent a user's authorization. This is the final product of the OAuth process.
|
||||
- **code**: A short-lived temporary secret that the **client** exchanges
|
||||
for a **token** at the conclusion of OAuth,
|
||||
in what's generally called the "OAuth callback handler."
|
||||
|
||||
## One oauth flow
|
||||
|
||||
OAuth **flow** is what we call the sequence of HTTP requests involved in authenticating a user and issuing a token, ultimately used for authorizing access to a service or single-user server.
|
||||
|
||||
A single OAuth flow typically goes like this:
|
||||
|
||||
### OAuth request and redirect
|
||||
|
||||
1. A **browser** makes an HTTP request to an OAuth **client**.
|
||||
2. There are no credentials, so the client _redirects_ the browser to an "authorize" page on the OAuth **provider** with some extra information:
|
||||
- the OAuth **client ID** of the client itself.
|
||||
- the **redirect URI** to be redirected back to after completion.
|
||||
- the **scopes** requested, which the user should be presented with to confirm.
|
||||
This is the "X would like to be able to Y on your behalf. Allow this?" page you see on all the "Login with ..." pages around the Internet.
|
||||
3. During this authorize step,
|
||||
the browser must be _authenticated_ with the provider.
|
||||
This is often already stored in a cookie,
|
||||
but if not the provider webapp must begin its _own_ authentication process before serving the authorization page.
|
||||
This _may_ even begin another OAuth flow!
|
||||
4. After the user tells the provider that they want to proceed with the authorization,
|
||||
the provider records this authorization in a short-lived record called an **OAuth code**.
|
||||
5. Finally, the oauth provider redirects the browser _back_ to the oauth client's "redirect URI"
|
||||
(or "OAuth callback URI"),
|
||||
with the OAuth code in a URL parameter.
|
||||
|
||||
That marks the end of the requests made between the **browser** and the **provider**.
|
||||
|
||||
### State after redirect
|
||||
|
||||
At this point:
|
||||
|
||||
- The browser is authenticated with the _provider_.
|
||||
- The user's authorized permissions are recorded in an _OAuth code_.
|
||||
- The _provider_ knows that the permissions requested by the OAuth client have been granted, but the client doesn't know this yet.
|
||||
- All the requests so far have been made directly by the browser.
|
||||
No requests have originated from the client or provider.
|
||||
|
||||
### OAuth Client Handles Callback Request
|
||||
|
||||
At this stage, we get to finish the OAuth process.
|
||||
Let's dig into what the OAuth client does when it handles
|
||||
the OAuth callback request.
|
||||
|
||||
- The OAuth client receives the _code_ and makes an API request to the _provider_ to exchange the code for a real _token_.
|
||||
This is the first direct request between the OAuth _client_ and the _provider_.
|
||||
- Once the token is retrieved, the client _usually_
|
||||
makes a second API request to the _provider_
|
||||
to retrieve information about the owner of the token (the user).
|
||||
This is the step where behavior diverges for different OAuth providers.
|
||||
Up to this point, all OAuth providers are the same, following the OAuth specification.
|
||||
However, OAuth does not define a standard for issuing tokens in exchange for information about their owner or permissions ([OpenID Connect](https://openid.net/connect/) does that),
|
||||
so this step may be different for each OAuth provider.
|
||||
- Finally, the OAuth client stores its own record that the user is authorized in a cookie.
|
||||
This could be the token itself, or any other appropriate representation of successful authentication.
|
||||
- Now that credentials have been established,
|
||||
the browser can be redirected to the _original_ URL where it started,
|
||||
to try the request again.
|
||||
If the client wasn't able to keep track of the original URL all this time
|
||||
(not always easy!),
|
||||
you might end up back at a default landing page instead of where you started the login process. This is frustrating!
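
To make the first two of those steps concrete, here is a rough sketch of the code-for-token exchange and user lookup as a generic OAuth client might perform them. The endpoints, parameter names, and credentials below are placeholders for illustration, not JupyterHub's or GitHub's actual API:

```python
import requests

# values the client already has at this point (placeholders for illustration)
code = "code-from-url-parameter"
client_id = "my-client-id"
client_secret = "my-client-secret"

# 1. exchange the short-lived OAuth code for a real token
token_response = requests.post(
    "https://provider.example/oauth2/token",
    data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": "https://client.example/oauth_callback",
        "client_id": client_id,
        "client_secret": client_secret,
    },
).json()
access_token = token_response["access_token"]

# 2. usually, ask the provider who owns the token
user_info = requests.get(
    "https://provider.example/api/user",
    headers={"Authorization": f"Bearer {access_token}"},
).json()
```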
|
||||
|
||||
😮💨 _phew_.
|
||||
|
||||
So that's _one_ OAuth process.
|
||||
|
||||
## Full sequence of OAuth in JupyterHub
|
||||
|
||||
Let's go through the above OAuth process in JupyterHub,
|
||||
with specific examples of each HTTP request and what information it contains.
|
||||
For bonus points, we are using the double-OAuth example of JupyterHub configured with GitHubOAuthenticator.
|
||||
|
||||
To disambiguate, we will call the OAuth process where JupyterHub is the **provider** "internal OAuth,"
|
||||
and the one with JupyterHub as a **client** "external OAuth."
|
||||
|
||||
Our starting point:
|
||||
|
||||
- a user's single-user server is running. Let's call them `danez`
|
||||
- Jupyterhub is running with GitHub as an OAuth provider (this means two full instances of OAuth),
|
||||
- Danez has a fresh browser session with no cookies yet.
|
||||
|
||||
First request:
|
||||
|
||||
- browser->single-user server running JupyterLab or Jupyter Classic
|
||||
- `GET /user/danez/notebooks/mynotebook.ipynb`
|
||||
- no credentials, so single-user server (as an OAuth **client**) starts internal OAuth process with JupyterHub (the **provider**)
|
||||
- response: 302 redirect -> `/hub/api/oauth2/authorize`
|
||||
with:
|
||||
- client-id=`jupyterhub-user-danez`
|
||||
- redirect-uri=`/user/danez/oauth_callback` (we'll come back later!)
|
||||
|
||||
Second request, following redirect:
|
||||
|
||||
- browser->JupyterHub
|
||||
- `GET /hub/api/oauth2/authorize`
|
||||
- no credentials, so JupyterHub starts external OAuth process _with GitHub_
|
||||
- response: 302 redirect -> `https://github.com/login/oauth/authorize`
|
||||
with:
|
||||
- client-id=`jupyterhub-client-uuid`
|
||||
- redirect-uri=`/hub/oauth_callback` (we'll come back later!)
|
||||
|
||||
_pause_ This is where JupyterHub configuration comes into play.
|
||||
Recall, in this case JupyterHub is using:
|
||||
|
||||
```python
|
||||
c.JupyterHub.authenticator_class = 'github'
|
||||
```
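
In practice, that one line is usually accompanied by OAuth client credentials registered with GitHub. A sketch of what this typically looks like, assuming OAuthenticator's `GitHubOAuthenticator` options (the values are placeholders):

```python
c.JupyterHub.authenticator_class = 'github'
# OAuth application credentials registered with GitHub (placeholder values)
c.GitHubOAuthenticator.client_id = 'my-github-client-id'
c.GitHubOAuthenticator.client_secret = 'my-github-client-secret'
c.GitHubOAuthenticator.oauth_callback_url = 'https://jupyterhub.example/hub/oauth_callback'
```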
|
||||
|
||||
That means authenticating a request to the Hub itself starts
|
||||
a _second_, external OAuth process with GitHub as a provider.
|
||||
This external OAuth process is optional, though.
|
||||
If you were using the default username+password PAMAuthenticator,
|
||||
this redirect would have been to `/hub/login` instead, to present the user
|
||||
with a login form.
|
||||
|
||||
Third request, following redirect:
|
||||
|
||||
- browser->GitHub
|
||||
- `GET https://github.com/login/oauth/authorize`
|
||||
|
||||
Here, GitHub prompts for login and asks for confirmation of authorization
|
||||
(more redirects if you aren't logged in to GitHub yet, but ultimately back to this `/authorize` URL).
|
||||
|
||||
After successful authorization
|
||||
(either by looking up a pre-existing authorization,
|
||||
or recording it via form submission)
|
||||
GitHub issues an **OAuth code** and redirects to `/hub/oauth_callback?code=github-code`
|
||||
|
||||
Next request:
|
||||
|
||||
- browser->JupyterHub
|
||||
- `GET /hub/oauth_callback?code=github-code`
|
||||
|
||||
Inside the callback handler, JupyterHub makes two API requests:
|
||||
|
||||
The first:
|
||||
|
||||
- JupyterHub->GitHub
|
||||
- `POST https://github.com/login/oauth/access_token`
|
||||
- request made with OAuth **code** from URL parameter
|
||||
- response includes an access **token**
|
||||
|
||||
The second:
|
||||
|
||||
- JupyterHub->GitHub
|
||||
- `GET https://api.github.com/user`
|
||||
- request made with access **token** in the `Authorization` header
|
||||
- response is the user model, including username, email, etc.
|
||||
|
||||
Now the external OAuth callback request completes with:
|
||||
|
||||
- set cookie on `/hub/` path, recording jupyterhub authentication so we don't need to do external OAuth with GitHub again for a while
|
||||
- redirect -> `/hub/api/oauth2/authorize`
|
||||
|
||||
🎉 At this point, we have completed our first OAuth flow! 🎉
|
||||
|
||||
Now, we get our first repeated request:
|
||||
|
||||
- browser->jupyterhub
|
||||
- `GET /hub/api/oauth2/authorize`
|
||||
- this time with credentials,
|
||||
so jupyterhub either
|
||||
1. serves the internal authorization confirmation page, or
|
||||
2. automatically accepts authorization (shortcut taken when a user is visiting their own server)
|
||||
- redirect -> `/user/danez/oauth_callback?code=jupyterhub-code`
|
||||
|
||||
Here, we start the same OAuth callback process as before, but at Danez's single-user server for the _internal_ OAuth.
|
||||
|
||||
- browser->single-user server
|
||||
- `GET /user/danez/oauth_callback`
|
||||
|
||||
(in handler)
|
||||
|
||||
Inside the internal OAuth callback handler,
|
||||
Danez's server makes two API requests to JupyterHub:
|
||||
|
||||
The first:
|
||||
|
||||
- single-user server->JupyterHub
|
||||
- `POST /hub/api/oauth2/token`
|
||||
- request made with oauth code from url parameter
|
||||
- response includes an API token
|
||||
|
||||
The second:
|
||||
|
||||
- single-user server->JupyterHub
|
||||
- `GET /hub/api/user`
|
||||
- request made with token in the `Authorization` header
|
||||
- response is the user model, including username, groups, etc.
|
||||
|
||||
Finally completing `GET /user/danez/oauth_callback`:
|
||||
|
||||
- response sets cookie, storing encrypted access token
|
||||
- _finally_ redirects back to the original `/user/danez/notebooks/mynotebook.ipynb`
|
||||
|
||||
Final request:
|
||||
|
||||
- browser -> single-user server
|
||||
- `GET /user/danez/notebooks/mynotebook.ipynb`
|
||||
- encrypted jupyterhub token in cookie
|
||||
|
||||
To authenticate this request, the single token stored in the encrypted cookie is passed to the Hub for verification:
|
||||
|
||||
- single-user server -> Hub
|
||||
- `GET /hub/api/user`
|
||||
- browser's token in Authorization header
|
||||
- response: user model with name, groups, etc.
|
||||
|
||||
If the user model matches who should be allowed (e.g. Danez),
|
||||
then the request is allowed.
|
||||
See [Scopes in JupyterHub](jupyterhub-scopes) for how JupyterHub uses scopes to determine authorized access to servers and services.
|
||||
|
||||
_the end_
|
||||
|
||||
## Token caches and expiry
|
||||
|
||||
Because tokens represent information from an external source,
|
||||
they can become 'stale,'
|
||||
or the information they represent may no longer be accurate.
|
||||
For example: a user's GitHub account may no longer be authorized to use JupyterHub,
|
||||
which should ultimately propagate to revoking access and forcing the user to log in again.
|
||||
|
||||
To handle this, OAuth tokens and the various places they are stored can _expire_,
|
||||
which should have the same effect as no credentials,
|
||||
and trigger the authorization process again.
|
||||
|
||||
In JupyterHub's internal OAuth, we have these layers of information that can go stale:
|
||||
|
||||
- The OAuth client has a **cache** of Hub responses for tokens,
|
||||
so it doesn't need to make API requests to the Hub for every request it receives.
|
||||
This cache has an expiry of five minutes by default,
|
||||
and is governed by the configuration `HubAuth.cache_max_age` in the single-user server.
|
||||
- The internal OAuth token is stored in a cookie, which has its own expiry (default: 14 days),
|
||||
governed by `JupyterHub.cookie_max_age_days`.
|
||||
- The internal OAuth token itself can also expire,
|
||||
which is by default the same as the cookie expiry,
|
||||
since it makes sense for the token itself and the place it is stored to expire at the same time.
|
||||
This is governed by `JupyterHub.cookie_max_age_days` first,
|
||||
or can be overridden by `JupyterHub.oauth_token_expires_in`.
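
Put together, a sketch of the relevant configuration (the values shown are the documented defaults, except the explicit token lifetime, which is illustrative):

```python
# jupyterhub_config.py
c.JupyterHub.cookie_max_age_days = 14       # cookie (and default token) expiry
c.JupyterHub.oauth_token_expires_in = 3600  # optionally give tokens their own lifetime (seconds)

# single-user server configuration
c.HubAuth.cache_max_age = 300  # how long to cache the Hub's token responses (seconds)
```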
|
||||
|
||||
That's all for _internal_ auth storage,
|
||||
but the information from the _external_ authentication provider
|
||||
(could be PAM or GitHub OAuth, etc.) can also expire.
|
||||
Authenticator configuration governs when JupyterHub needs to ask again,
|
||||
triggering the external login process anew before letting a user proceed.
|
||||
|
||||
- `jupyterhub-hub-login` cookie stores that a browser is authenticated with the Hub.
|
||||
This expires according to `JupyterHub.cookie_max_age_days` configuration,
|
||||
with a default of 14 days.
|
||||
The `jupyterhub-hub-login` cookie is encrypted with `JupyterHub.cookie_secret`
|
||||
configuration.
|
||||
- {meth}`.Authenticator.refresh_user` is a method to refresh a user's auth info.
|
||||
By default, it does nothing, but it can return an updated user model if a user's information has changed,
|
||||
or force a full login process again if needed.
|
||||
- {attr}`.Authenticator.auth_refresh_age` configuration governs how often
|
||||
`refresh_user()` will be called to check if a user must login again (default: 300 seconds).
|
||||
- {attr}`.Authenticator.refresh_pre_spawn` configuration governs whether
|
||||
`refresh_user()` should be called prior to spawning a server,
|
||||
to force fresh auth info when a server is launched (default: False).
|
||||
This can be useful when Authenticators pass access tokens to spawner environments, to ensure they aren't getting a stale token that's about to expire.
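
For example, a deployment that wants auth info re-checked more often, including right before each spawn, might use something like the following sketch (the Authenticator subclass is deliberately trivial; a real `refresh_user` would check the upstream provider):

```python
from jupyterhub.auth import Authenticator

class MyAuthenticator(Authenticator):
    async def refresh_user(self, user, handler=None):
        # check the upstream provider here;
        # return False to force the user through login again,
        # True (or an updated auth model) if everything is still valid
        return True

c.JupyterHub.authenticator_class = MyAuthenticator
c.Authenticator.auth_refresh_age = 300    # seconds between refresh_user() checks
c.Authenticator.refresh_pre_spawn = True  # also refresh immediately before spawning
```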
|
||||
|
||||
**So what happens when these things expire or get stale?**
|
||||
|
||||
- If the HubAuth **token response cache** expires,
|
||||
when a request is made with a token,
|
||||
the Hub is asked for the latest information about the token.
|
||||
This usually has no visible effect, since it is just refreshing a cache.
|
||||
If it turns out that the token itself has expired or been revoked,
|
||||
the request will be denied.
|
||||
- If the token has expired, but is still in the cookie:
|
||||
when the token response cache expires,
|
||||
the next time the server asks the hub about the token,
|
||||
no user will be identified and the internal OAuth process begins again.
|
||||
- If the token _cookie_ expires, the next browser request will be made with no credentials,
|
||||
and the internal OAuth process will begin again.
|
||||
This will usually have the form of a transparent redirect browsers won't notice.
|
||||
However, if this occurs on an API request in a long-lived page visit
|
||||
such as a JupyterLab session, the API request may fail and require
|
||||
a page refresh to get renewed credentials.
|
||||
- If the _JupyterHub_ cookie expires, the next time the browser makes a request to the Hub,
|
||||
the Hub's authorization process must begin again (e.g. login with GitHub).
|
||||
Hub cookie expiry on its own **does not** mean that a user can no longer access their single-user server!
|
||||
- If credentials from the upstream provider (e.g. GitHub) become stale or outdated,
|
||||
these will not be refreshed until/unless `refresh_user` is called
|
||||
_and_ `refresh_user()` on the given Authenticator is implemented to perform such a check.
|
||||
At this point, few Authenticators implement `refresh_user` to support this feature.
|
||||
If your Authenticator does not or cannot implement `refresh_user`,
|
||||
the only way to force a check is to reset the `JupyterHub.cookie_secret` encryption key,
|
||||
which invalidates the `jupyterhub-hub-login` cookie for all users.
|
||||
|
||||
### Logging out
|
||||
|
||||
Logging out of JupyterHub means clearing and revoking many of these credentials:
|
||||
|
||||
- The `jupyterhub-hub-login` cookie is revoked, meaning the next request to the Hub itself will require a new login.
|
||||
- The token stored in the `jupyterhub-user-username` cookie for the single-user server
|
||||
will be revoked, based on its association with `jupyterhub-session-id`, but the _cookie itself cannot be cleared at this point_
|
||||
- The shared `jupyterhub-session-id` is cleared, which ensures that the HubAuth **token response cache** will not be used,
|
||||
and the next request with the expired token will ask the Hub, which will inform the single-user server that the token has expired
|
||||
|
||||
## Extra bits
|
||||
|
||||
(two-tokens)=
|
||||
|
||||
### A tale of two tokens
|
||||
|
||||
**TODO**: discuss API token issued to server at startup ($JUPYTERHUB_API_TOKEN)
|
||||
and OAuth-issued token in the cookie,
|
||||
and some details of how JupyterLab currently deals with that.
|
||||
They are different, and JupyterLab should be making requests using the token from the cookie,
|
||||
not the token from the server,
|
||||
but that is not currently the case.
|
||||
|
||||
### Redirect loops
|
||||
|
||||
In general, an authenticated web endpoint has this behavior,
|
||||
based on the authentication/authorization state of the browser:
|
||||
|
||||
- If authorized, allow the request to happen
|
||||
- If authenticated (I know who you are) but not authorized (you are not allowed), fail with a 403 permission denied error
|
||||
- If not authenticated, start a redirect process to establish authorization,
|
||||
which should end in a redirect back to the original URL to try again.
|
||||
**This is why problems in authentication result in redirect loops!**
|
||||
If the second request fails to detect the authentication that should have been established during the redirect,
|
||||
it will start the authentication redirect process over again,
|
||||
and keep redirecting in a loop until the browser balks.
|
@@ -1,109 +0,0 @@
|
||||
(singleuser)=
|
||||
|
||||
# The JupyterHub single-user server
|
||||
|
||||
When a user logs into JupyterHub, they get a 'server', which we usually call the **single-user server**, because it's a server that's meant for a single JupyterHub user.
|
||||
Each JupyterHub user gets a different one (or more than one!).
|
||||
|
||||
A single-user server is a process running somewhere that is:
|
||||
|
||||
1. accessible over http[s],
|
||||
2. authenticated via JupyterHub using OAuth 2.0,
|
||||
3. started by a [Spawner](spawners), and
|
||||
4. 'owned' by a single JupyterHub user
|
||||
|
||||
## The single-user server command
|
||||
|
||||
The Spawner's default single-user server startup command, `jupyterhub-singleuser`, launches `jupyter-server`, the same program used when you run `jupyter lab` on your laptop.
|
||||
(_It can also launch the legacy `jupyter-notebook` server_).
|
||||
That's why JupyterHub looks familiar to folks who are already using Jupyter at home or elsewhere.
|
||||
It's the same!
|
||||
`jupyterhub-singleuser` _customizes_ that program to change (approximately) one thing: **authenticate requests with JupyterHub**.
|
||||
|
||||
(singleuser-auth)=
|
||||
|
||||
## Single-user server authentication
|
||||
|
||||
Implementation-wise, JupyterHub single-user servers are a special-case of {ref}`services`
|
||||
and as such use the same (OAuth) authentication mechanism (more on OAuth in JupyterHub at [](oauth)).
|
||||
This is primarily implemented in the {class}`~.HubOAuth` class.
|
||||
|
||||
This code resides in the `jupyterhub.singleuser` subpackage of JupyterHub.
|
||||
The main task of this code is to:
|
||||
|
||||
1. resolve a JupyterHub token to a JupyterHub user (authenticate)
|
||||
2. check permissions (`access:servers`) for the token to make sure the request should be allowed (authorize)
|
||||
3. if not authorized, begin the OAuth process with a redirect to the Hub
|
||||
4. after login, store OAuth tokens in a cookie only used by this single-user server
|
||||
5. implement logout to clear the cookie
|
||||
|
||||
Most of this is implemented in the {class}`~.HubOAuth` class. `jupyterhub.singleuser` is responsible for _adapting_ the base Jupyter Server to use HubOAuth for these tasks.
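
As a rough illustration of the first task, resolving a token to a user might look something like this with the `HubOAuth` helper (a sketch only; the exact call style, and whether the method is a coroutine, can vary between JupyterHub versions):

```python
import os

from jupyterhub.services.auth import HubOAuth

auth = HubOAuth(
    api_token=os.environ["JUPYTERHUB_API_TOKEN"],  # token issued to the server itself
    cache_max_age=60,  # cache the Hub's answers for a minute
)

# returns the user model for a valid token, or None
user_model = auth.user_for_token("token-from-an-incoming-request")
```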
|
||||
|
||||
### JupyterHub authentication extension
|
||||
|
||||
By default, `jupyter-server` uses its own cookie to authenticate.
|
||||
If that cookie is not present, the server redirects you to a login page and asks you to enter a password or token.
|
||||
|
||||
Jupyter Server 2.0 introduces two new _APIs_ for customizing authentication: the [IdentityProvider](inv:jupyter-server#jupyter_server.auth.IdentityProvider) and the [Authorizer](inv:jupyter-server#jupyter_server.auth.Authorizer).
|
||||
More information can be found in the [Jupyter Server documentation](https://jupyter-server.readthedocs.io).
|
||||
|
||||
JupyterHub implements these APIs in `jupyterhub.singleuser.extension`.
|
||||
|
||||
The IdentityProvider is responsible for _authenticating_ requests.
|
||||
In JupyterHub, that means extracting OAuth tokens from the request and resolving them to a JupyterHub user.
|
||||
|
||||
The Authorizer is a _separate_ API for _authorizing_ actions on particular resources.
|
||||
Because the JupyterHub IdentityProvider only allows _authenticating_ users who already have the necessary `access:servers` permission to access the server, the default Authorizer only contains a redundant check for this same permission, and ignores the resource inputs.
|
||||
However, specifying a _custom_ Authorizer allows for granular permissions, such as read-only access to subsets of a shared server.
|
||||
|
||||
### JupyterHub authentication via subclass
|
||||
|
||||
Prior to Jupyter Server 2 (i.e. Jupyter Server 1.x or the legacy `jupyter-notebook` server), JupyterHub authentication is applied via _subclass_.
|
||||
Originally a subclass of `NotebookApp`,
|
||||
this approach works with both `jupyter-server` and `jupyter-notebook`.
|
||||
Instead of using the extension mechanisms above,
|
||||
the server application is _subclassed_. This worked well in the `jupyter-notebook` days,
|
||||
but doesn't fit well with Jupyter Server's extension-based architecture.
|
||||
|
||||
### Selecting jupyterhub-singleuser implementation
|
||||
|
||||
Using the JupyterHub singleuser-server extension is the default behavior of JupyterHub 4 and Jupyter Server 2; otherwise, the subclass approach is taken.
|
||||
|
||||
You can opt-out of the extension by setting the environment variable `JUPYTERHUB_SINGLEUSER_EXTENSION=0`:
|
||||
|
||||
```python
|
||||
c.Spawner.environment.update(
|
||||
{
|
||||
"JUPYTERHUB_SINGLEUSER_EXTENSION": "0",
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
The subclass approach will also be taken if you've opted to use the classic notebook server with:
|
||||
|
||||
```
|
||||
JUPYTERHUB_SINGLEUSER_APP=notebook
|
||||
```
|
||||
|
||||
which was introduced in JupyterHub 2.
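
Like the extension opt-out above, this is typically set through the Spawner's environment. A sketch:

```python
c.Spawner.environment.update(
    {
        "JUPYTERHUB_SINGLEUSER_APP": "notebook",
    }
)
```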
|
||||
|
||||
## Other customizations
|
||||
|
||||
`jupyterhub-singleuser` makes other small customizations to how the single-user server behaves:
|
||||
|
||||
1. logs activity on the single-user server, used in [idle-culling](https://github.com/jupyterhub/jupyterhub-idle-culler).
|
||||
2. disables some features that don't make sense in JupyterHub (trash, retrying ports)
|
||||
3. loads options such as URLs and SSL configuration from the environment
4. customizes logging for consistency with JupyterHub logs
|
||||
|
||||
## Running a single-user server that's not `jupyterhub-singleuser`
|
||||
|
||||
By default, `jupyterhub-singleuser` is the same `jupyter-server` used by JupyterLab, Jupyter notebook (>= 7), etc.
|
||||
But technically, all JupyterHub cares about is that it is:
|
||||
|
||||
1. an http server at the prescribed URL, accessible from the Hub and proxy, and
|
||||
2. authenticated via [OAuth](oauth) with the Hub (it doesn't even have to do this, if you want to do your own authentication, as is done in BinderHub)
|
||||
|
||||
which means that you can customize JupyterHub to launch _any_ web application that meets these criteria, by following the specifications in {ref}`services`.
|
||||
|
||||
Most of the time, though, it's easier to use [jupyter-server-proxy](https://jupyter-server-proxy.readthedocs.io) if you want to launch additional web applications in JupyterHub.
|
@@ -1,136 +0,0 @@
|
||||
(web-security)=
|
||||
|
||||
# Security Overview
|
||||
|
||||
The **Security Overview** section helps you learn about:
|
||||
|
||||
- the design of JupyterHub with respect to web security
|
||||
- the semi-trusted user
|
||||
- the available mitigations to protect untrusted users from each other
|
||||
- the value of periodic security audits
|
||||
|
||||
This overview also helps you obtain a deeper understanding of how JupyterHub
|
||||
works.
|
||||
|
||||
## Semi-trusted and untrusted users
|
||||
|
||||
JupyterHub is designed to be a _simple multi-user server for modestly sized
|
||||
groups_ of **semi-trusted** users. While the design reflects serving
|
||||
semi-trusted users, JupyterHub can also be suitable for serving **untrusted** users.
|
||||
|
||||
As a result, using JupyterHub with **untrusted** users means more work by the
|
||||
administrator, since much care is required to secure a Hub, with extra caution on
|
||||
protecting users from each other.
|
||||
|
||||
One aspect of JupyterHub's _design simplicity_ for **semi-trusted** users is that
|
||||
the Hub and single-user servers are placed in a _single domain_, behind a
|
||||
[_proxy_][configurable-http-proxy]. If the Hub is serving untrusted
|
||||
users, many of the web's cross-site protections are not applied between
|
||||
single-user servers and the Hub, or between single-user servers and each
|
||||
other, since browsers see the whole thing (proxy, Hub, and single user
|
||||
servers) as a single website (i.e. single domain).
|
||||
|
||||
## Protect users from each other
|
||||
|
||||
To protect users from each other, a user must **never** be able to write arbitrary
|
||||
HTML and serve it to another user on the Hub's domain. This is prevented by JupyterHub's
|
||||
authentication setup because only the owner of a given single-user notebook server is
|
||||
allowed to view user-authored pages served by the given single-user notebook
|
||||
server.
|
||||
|
||||
To protect all users from each other, JupyterHub administrators must
|
||||
ensure that:
|
||||
|
||||
- A user **does not have permission** to modify their single-user notebook server,
|
||||
including:
|
||||
- the installation of new packages in the Python environment that runs
|
||||
their single-user server;
|
||||
- the creation of new files in any `PATH` directory that precedes the
|
||||
directory containing `jupyterhub-singleuser` (if the `PATH` is used
|
||||
to resolve the single-user executable instead of using an absolute path);
|
||||
- the modification of environment variables (e.g. PATH, PYTHONPATH) for
|
||||
their single-user server;
|
||||
- the modification of the configuration of the notebook server
|
||||
(the `~/.jupyter` or `JUPYTER_CONFIG_DIR` directory).
|
||||
|
||||
If any additional services are run on the same domain as the Hub, the services
|
||||
**must never** display user-authored HTML that is neither _sanitized_ nor _sandboxed_
|
||||
(e.g. IFramed) to any user that lacks authentication as the author of a file.
|
||||
|
||||
## Mitigate security issues
|
||||
|
||||
The several approaches to mitigating security issues with configuration
|
||||
options provided by JupyterHub include:
|
||||
|
||||
### Enable subdomains
|
||||
|
||||
JupyterHub provides the ability to run single-user servers on their own
|
||||
subdomains. This means the cross-origin protections between servers have the
|
||||
desired effect, and user servers and the Hub are protected from each other. A
|
||||
user's single-user server will be at `username.jupyter.mydomain.com`. This also
|
||||
requires all user subdomains to point to the same address, which is most easily
|
||||
accomplished with wildcard DNS. Since this spreads the service across multiple
|
||||
domains, you will need wildcard SSL as well. Unfortunately, for many
|
||||
institutional domains, wildcard DNS and SSL are not available. **If you do plan
|
||||
to serve untrusted users, enabling subdomains is highly encouraged**, as it
|
||||
resolves the cross-site issues.
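
Enabling subdomains is done with a single configuration option; the host below is a placeholder, and wildcard DNS plus a wildcard certificate for `*.jupyter.mydomain.com` are assumed to already be in place:

```python
# user servers will be served from username.jupyter.mydomain.com
c.JupyterHub.subdomain_host = "https://jupyter.mydomain.com"
```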
|
||||
|
||||
### Disable user config
|
||||
|
||||
If subdomains are unavailable or undesirable, JupyterHub provides a
|
||||
configuration option `Spawner.disable_user_config`, which can be set to prevent
|
||||
the user-owned configuration files from being loaded. With this option set, the
`PATH` and package installation are the remaining things that the
admin must enforce.
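
A minimal sketch of this option in `jupyterhub_config.py`:

```python
c.Spawner.disable_user_config = True
```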
|
||||
|
||||
### Prevent spawners from evaluating shell configuration files
|
||||
|
||||
For most Spawners, `PATH` is not something users can influence, but it's important that
the Spawner _not_ evaluate shell configuration files prior to launching the server.
|
||||
|
||||
### Isolate packages using virtualenv
|
||||
|
||||
Package isolation is most easily handled by running the single-user server in
|
||||
a virtualenv with disabled system-site-packages. The user should not have
|
||||
permission to install packages into this environment.
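
One way to pin users to such an admin-owned environment is to launch the single-user server by absolute path from the virtualenv, so the user's own `PATH` is never consulted. A sketch with a placeholder path:

```python
# launch jupyterhub-singleuser from an admin-owned virtualenv, not the user's PATH
c.Spawner.cmd = ["/opt/jupyterhub/venv/bin/jupyterhub-singleuser"]
```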
|
||||
|
||||
It is important to note that the control over the environment only affects the
|
||||
single-user server, and not the environment(s) in which the user's kernel(s)
|
||||
may run. Installing additional packages in the kernel environment does not
|
||||
pose additional risk to the web application's security.
|
||||
|
||||
### Encrypt internal connections with SSL/TLS
|
||||
|
||||
By default, all communications on the server, between the proxy, hub, and
single-user notebooks, are performed unencrypted. Setting the `internal_ssl` flag in
|
||||
`jupyterhub_config.py` secures the aforementioned routes. Turning this
|
||||
feature on does require that the enabled `Spawner` can use the certificates
|
||||
generated by the `Hub` (the default `LocalProcessSpawner` can, for instance).
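
Turning this on is a single flag in `jupyterhub_config.py`:

```python
c.JupyterHub.internal_ssl = True
```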
|
||||
|
||||
It is also important to note that this encryption **does not** cover the
|
||||
`zmq tcp` sockets between the Notebook client and kernel yet. While users cannot
|
||||
submit arbitrary commands to another user's kernel, they can bind to these
|
||||
sockets and listen. When serving untrusted users, this eavesdropping can be
|
||||
mitigated by setting `KernelManager.transport` to `ipc`. This applies standard
|
||||
Unix permissions to the communication sockets thereby restricting
|
||||
communication to the socket owner. The `internal_ssl` option will eventually
|
||||
extend to securing the `tcp` sockets as well.
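
The `ipc` transport is single-user server (kernel manager) configuration rather than Hub configuration, so a sketch of it belongs in the environment's Jupyter configuration (e.g. a `jupyter_server_config.py` shipped in the user image), not in `jupyterhub_config.py`:

```python
# jupyter_server_config.py (or equivalent) in the single-user environment
c.KernelManager.transport = "ipc"
```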
|
||||
|
||||
## Security audits
|
||||
|
||||
We recommend that you do periodic reviews of your deployment's security. It's
|
||||
good practice to keep [JupyterHub](https://readthedocs.org/projects/jupyterhub/), [configurable-http-proxy][], and [nodejs
|
||||
versions](https://github.com/nodejs/Release) up to date.
|
||||
|
||||
A handy website for testing your deployment is
|
||||
[Qualys' SSL analyzer tool](https://www.ssllabs.com/ssltest/analyze.html).
|
||||
|
||||
[configurable-http-proxy]: https://github.com/jupyterhub/configurable-http-proxy
|
||||
|
||||
## Vulnerability reporting
|
||||
|
||||
If you believe you have found a security vulnerability in JupyterHub, or any
|
||||
Jupyter project, please report it to
|
||||
[security@ipython.org](mailto:security@ipython.org). If you prefer to encrypt
|
||||
your security reports, you can use [this PGP public
|
||||
key](https://jupyter.org/assets/ipython_security.asc).
|
@@ -1,76 +0,0 @@
|
||||
# Frequently asked questions
|
||||
|
||||
## How do I share links to notebooks?
|
||||
|
||||
Sharing links to notebooks is a common activity,
|
||||
and can look different depending on what you mean by 'share.'
|
||||
Your first instinct might be to copy the URL you see in the browser,
|
||||
e.g. `jupyterhub.example/user/yourname/notebooks/coolthing.ipynb`,
|
||||
but this usually won't work, depending on the permissions of the person you share the link with.
|
||||
|
||||
Unfortunately, 'share' means at least a few things to people in a JupyterHub context.
|
||||
We'll cover 3 common cases here, when they are applicable, and what assumptions they make:
|
||||
|
||||
1. sharing links that will open the same file on the visitor's own server
|
||||
2. sharing links that will bring the visitor to _your_ server (e.g. for real-time collaboration, or RTC)
|
||||
3. publishing notebooks and sharing links that will download the notebook into the user's server
|
||||
|
||||
### link to the same file on the visitor's server
|
||||
|
||||
This is for the case where you have JupyterHub on a shared (or sufficiently similar) filesystem, where you want to share a link that will cause users to login and start their _own_ server, to view or edit the file.
|
||||
|
||||
**Assumption:** the same path on someone else's server is valid and points to the same file
|
||||
|
||||
This is useful in e.g. classes where you know students have certain files in certain locations, or collaborations where you know you have a shared filesystem where everyone has access to the same files.
|
||||
|
||||
A link should look like `https://jupyterhub.example/hub/user-redirect/lab/tree/foo.ipynb`.
|
||||
You can hand-craft these URLs from the URL you are looking at, where you see `/user/name/lab/tree/foo.ipynb` use `/hub/user-redirect/lab/tree/foo.ipynb` (replace `/user/name/` with `/hub/user-redirect/`).
|
||||
Or you can use JupyterLab's "copy shareable link" in the context menu in the file browser:
|
||||
|
||||

|
||||
|
||||
which will produce a correct URL with `/hub/user-redirect/` in it.
|
||||
|
||||
### link to the file on your server
|
||||
|
||||
This is for the case where you and the visitor will both be using _your_ server, e.g. for real-time collaboration (RTC).
|
||||
|
||||
**Assumption:** the user has (or should have) access to your server.
|
||||
|
||||
**Assumption:** your server is running _or_ the user has permission to start it.
|
||||
|
||||
By default, JupyterHub users don't have access to each other's servers, but JupyterHub 2.0 administrators can grant users limited access permissions to each other's servers.
|
||||
If the visitor doesn't have access to the server, these links will result in a 403 Permission Denied error.
|
||||
|
||||
In many cases, for this situation you can copy the link in your URL bar (`/user/yourname/lab`), or you can add `/tree/path/to/specific/notebook.ipynb` to open a specific file.
|
||||
|
||||
The [jupyterlab-link-share] JupyterLab extension generates these links, and can even _grant_ other users access to your server.
|
||||
|
||||
[jupyterlab-link-share]: https://github.com/jupyterlab-contrib/jupyterlab-link-share
|
||||
|
||||
:::{warning}
|
||||
Note that the way the extension _grants_ access is by handing over credentials to allow the other user to **_BECOME YOU_**.
|
||||
This is usually not appropriate in JupyterHub.
|
||||
:::
|
||||
|
||||
### link to a published copy
|
||||
|
||||
Another way to 'share' notebooks is to publish copies, e.g. pushing the notebook to a git repository and sharing a download link.
|
||||
This way is especially useful for course materials,
|
||||
where no assumptions are necessary about the user's environment,
|
||||
except for having one package installed.
|
||||
|
||||
**Assumption:** The [nbgitpuller](inv:nbgitpuller#index) server extension is installed
|
||||
|
||||
Unlike the other two methods, nbgitpuller doesn't provide an extension to copy a shareable link for the document you're currently looking at,
|
||||
but it does provide a [link generator](inv:nbgitpuller#link),
|
||||
which uses the `user-redirect` approach above.
|
||||
|
||||
When visiting an nbgitpuller link:
|
||||
|
||||
- The visitor will be directed to their own server
|
||||
- Your repo will be cloned (or updated if it's already been cloned)
|
||||
- and then the file opened when it's ready
|
||||
|
||||
[nbgitpuller]: https://nbgitpuller.readthedocs.io
|
||||
[nbgitpuller-link]: https://nbgitpuller.readthedocs.io/en/latest/link.html
|
@@ -1,11 +0,0 @@
|
||||
# FAQs
|
||||
|
||||
Find answers to some of the most frequently-asked questions around JupyterHub and how it works.
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 2
|
||||
|
||||
faq
|
||||
institutional-faq
|
||||
troubleshooting
|
||||
```
|
@@ -1,267 +0,0 @@
|
||||
# Institutional FAQ
|
||||
|
||||
This page contains common questions from users of JupyterHub,
|
||||
broken down by their roles within organizations.
|
||||
|
||||
## For all
|
||||
|
||||
### Is it appropriate for adoption within a larger institutional context?
|
||||
|
||||
Yes! JupyterHub has been used at scale for large pools of users, as well
|
||||
as complex and high-performance computing.
|
||||
For example,
|
||||
|
||||
- UC Berkeley uses
|
||||
JupyterHub for its Data Science Education Program courses (serving over
|
||||
3,000 students).
|
||||
- The Pangeo project uses JupyterHub to provide access
|
||||
to scalable cloud computing with Dask.
|
||||
|
||||
JupyterHub is stable and customizable
|
||||
to the use-cases of large organizations.
|
||||
|
||||
### I keep hearing about Jupyter Notebook, JupyterLab, and now JupyterHub. What’s the difference?
|
||||
|
||||
Here is a quick breakdown of these three tools:
|
||||
|
||||
- **The Jupyter Notebook** is a document specification (the `.ipynb` file) that interweaves
|
||||
narrative text with code cells and their outputs. It is also a graphical interface
|
||||
that allows users to edit these documents. There are also several other graphical interfaces
|
||||
that allow users to edit the `.ipynb` format (nteract, JupyterLab, Google Colab, Kaggle, etc.).
|
||||
- **JupyterLab** is a flexible and extendible user interface for interactive computing. It
|
||||
has several extensions that are tailored for using Jupyter Notebooks, as well as extensions
|
||||
for other parts of the data science stack.
|
||||
- **JupyterHub** is an application that manages interactive computing sessions for **multiple users**.
|
||||
It also connects users with infrastructure they wish to access. It can provide
|
||||
remote access to Jupyter Notebooks and JupyterLab for many people.
|
||||
|
||||
## For management
|
||||
|
||||
### Briefly, what problem does JupyterHub solve for us?
|
||||
|
||||
JupyterHub provides a shared platform for data science and collaboration.
|
||||
It allows users to utilize familiar data science workflows (such as the scientific Python stack,
|
||||
the R tidyverse, and Jupyter Notebooks) on institutional infrastructure. It also gives administrators
|
||||
some control over access to resources, security, environments, and authentication.
|
||||
|
||||
### Is JupyterHub mature? Why should we trust it?
|
||||
|
||||
Yes - the core JupyterHub application recently
|
||||
reached 1.0 status, and is considered stable and performant for most institutions.
|
||||
JupyterHub has also been deployed (along with other tools) to work on
|
||||
scalable infrastructure, large datasets, and high-performance computing.
|
||||
|
||||
### Who else uses JupyterHub?
|
||||
|
||||
JupyterHub is used at a variety of institutions in academia,
|
||||
industry, and government research labs. It is most commonly used by two kinds of groups:
|
||||
|
||||
- Small teams (e.g., data science teams, research labs, or collaborative projects) to provide a
|
||||
shared resource for interactive computing, collaboration, and analytics.
|
||||
- Large teams (e.g., a department, a large class, or a large group of remote users) to provide
|
||||
access to organizational hardware, data, and analytics environments at scale.
|
||||
|
||||
Here is a sample of organizations that use JupyterHub:
|
||||
|
||||
- **Universities and colleges**: UC Berkeley, UC San Diego, Cal Poly SLO, Harvard University, University of Chicago,
|
||||
University of Oslo, University of Sheffield, Université Paris Sud, University of Versailles
|
||||
- **Research laboratories**: NASA, NCAR, NOAA, the Large Synoptic Survey Telescope, Brookhaven National Lab,
|
||||
Minnesota Supercomputing Institute, ALCF, CERN, Lawrence Livermore National Laboratory, HUNT
|
||||
- **Online communities**: Pangeo, Quantopian, mybinder.org, MathHub, Open Humans
|
||||
- **Computing infrastructure providers**: NERSC, San Diego Supercomputing Center, Compute Canada
|
||||
- **Companies**: Capital One, SANDVIK code, Globus
|
||||
|
||||
See the [Gallery of JupyterHub deployments](gallery-of-deployments) for
|
||||
a more complete list of JupyterHub deployments at institutions.
|
||||
|
||||
### How does JupyterHub compare with hosted products, like Google Colaboratory, RStudio.cloud, or Anaconda Enterprise?
|
||||
|
||||
JupyterHub puts you in control of your data, infrastructure, and coding environment.
|
||||
In addition, it is vendor neutral, which reduces lock-in to a particular vendor or service.
|
||||
JupyterHub provides access to interactive computing environments in the cloud (similar to each of these services).
|
||||
Compared with the tools above, it is more flexible, more customizable, free, and
|
||||
gives administrators more control over their setup and hardware.
|
||||
|
||||
Because JupyterHub is an open-source, community-driven tool, it can be extended and
|
||||
modified to fit an institution's needs. It plays nicely with the open source data science
|
||||
stack, and can serve a variety of computing environments, user interfaces, and
|
||||
computational hardware. It can also be deployed anywhere - on enterprise cloud infrastructure, on
|
||||
High-Performance-Computing machines, on local hardware, or even on a single laptop, which
|
||||
is not possible with most other tools for shared interactive computing.
|
||||
|
||||
## For IT
|
||||
|
||||
### How would I set up JupyterHub on institutional hardware?
|
||||
|
||||
That depends on what kind of hardware you've got. JupyterHub is flexible enough to be deployed
|
||||
on a variety of hardware, including in-room hardware, on-prem clusters, cloud infrastructure,
|
||||
etc.
|
||||
|
||||
The most common way to set up a JupyterHub is to use a JupyterHub distribution; these are pre-configured
|
||||
and opinionated ways to set up a JupyterHub on particular kinds of infrastructure. The two distributions
|
||||
that we currently suggest are:
|
||||
|
||||
- [Zero to JupyterHub for Kubernetes](https://z2jh.jupyter.org) is a scalable JupyterHub deployment and
|
||||
guide that runs on Kubernetes. Better for larger or dynamic user groups (50-10,000) or more complex
|
||||
compute/data needs.
|
||||
- [The Littlest JupyterHub](https://tljh.jupyter.org) is a lightweight JupyterHub that runs on a single
|
||||
machine (in the cloud or under your desk). Better for smaller user groups (4-80) or more
|
||||
lightweight computational resources.
|
||||
|
||||
### Does JupyterHub run well in the cloud?
|
||||
|
||||
**Yes** - most deployments of JupyterHub are run via cloud infrastructure and on a variety of cloud providers.
|
||||
Depending on the distribution of JupyterHub that you'd like to use, you can also connect your JupyterHub
|
||||
deployment with a number of other cloud-native services so that users have access to other resources from
|
||||
their interactive computing sessions.
|
||||
|
||||
For example, if you use the [Zero to JupyterHub for Kubernetes](https://z2jh.jupyter.org) distribution,
|
||||
you'll be able to utilize container-based workflows of other technologies such as the [dask-kubernetes](https://kubernetes.dask.org/en/latest/)
|
||||
project for distributed computing.
|
||||
|
||||
The Z2JH Helm Chart also has some functionality built in for auto-scaling your cluster up and down
|
||||
as more resources are needed - allowing you to utilize the benefits of a flexible cloud-based deployment.
|
||||
|
||||
### Is JupyterHub secure?
|
||||
|
||||
The short answer: yes.
|
||||
JupyterHub as a standalone application has been battle-tested at an institutional
|
||||
level for several years, and makes a number of "default" security decisions that are reasonable for most
|
||||
users.
|
||||
|
||||
- For security considerations in the base JupyterHub application,
|
||||
[see the JupyterHub security page](web-security).
|
||||
- For security considerations when deploying JupyterHub on Kubernetes, see the
|
||||
[JupyterHub on Kubernetes security page](https://z2jh.jupyter.org/en/latest/security.html).
|
||||
|
||||
The longer answer: it depends on your deployment. Because JupyterHub is very flexible, it can be used
|
||||
in a variety of deployment setups. This often entails connecting your JupyterHub to **other** infrastructure
|
||||
(such as a [Dask Gateway service](https://gateway.dask.org/)). There are many security decisions to be made
|
||||
in these cases, and the security of your JupyterHub deployment will often depend on these decisions.
|
||||
|
||||
If you are worried about security, don't hesitate to reach out to the JupyterHub community in the
|
||||
[Jupyter Community Forum](https://discourse.jupyter.org/c/jupyterhub). This community of practice has many
|
||||
individuals with experience running secure JupyterHub deployments and will be very glad to help you out.
|
||||
|
||||
### Does JupyterHub provide computing or data infrastructure?
|
||||
|
||||
**No** - JupyterHub manages user sessions and can _control_ computing infrastructure, but it does not provide these
|
||||
things itself. You are expected to run JupyterHub on your own infrastructure (local or in the cloud). Moreover,
|
||||
JupyterHub has no internal concept of "data", but is designed to be able to communicate with data repositories
|
||||
(again, either locally or remotely) for use within interactive computing sessions.
|
||||
|
||||
### How do I manage users?
|
||||
|
||||
JupyterHub offers a few options for managing your users. Upon setting up a JupyterHub, you can choose what
|
||||
kind of **authentication** you'd like to use. For example, you can have users sign up with an institutional
|
||||
email address, or choose a username / password when they first log-in, or offload authentication onto
|
||||
another service such as an organization's OAuth.
|
||||
|
||||
The users of a JupyterHub are stored locally, and can be modified manually by an administrator of the JupyterHub.
|
||||
Moreover, the _active_ users on a JupyterHub can be found on the administrator's page. This page
|
||||
gives you the ability to stop or restart kernels, inspect user filesystems, and even take over user
|
||||
sessions to assist them with debugging.
|
||||
|
||||
### How do I manage software environments?
|
||||
|
||||
A key benefit of JupyterHub is the ability for an administrator to define the environment(s) that users
|
||||
have access to. There are many ways to do this, depending on what kind of infrastructure you're using for
|
||||
your JupyterHub.
|
||||
|
||||
For example, **The Littlest JupyterHub** runs on a single VM. In this case, the administrator defines
|
||||
an environment by installing packages to a shared folder that exists on the path of all users. The
|
||||
**JupyterHub for Kubernetes** deployment uses Docker images to define environments. You can create your
|
||||
own list of Docker images that users can select from, and can also control things like the amount of
|
||||
RAM available to users, or the types of machines that their sessions will use in the cloud.
|
||||
|
||||
### How does JupyterHub manage computational resources?
|
||||
|
||||
For interactive computing sessions, JupyterHub controls computational resources via a **spawner**.
|
||||
Spawners define how a new user session is created, and are customized for particular kinds of
|
||||
infrastructure. For example, the KubeSpawner knows how to control a Kubernetes deployment
|
||||
to create new pods when users log in.
|
||||
|
||||
For more sophisticated computational resources (like distributed computing), JupyterHub can
|
||||
connect with other infrastructure tools (like Dask or Spark). This allows users to control
|
||||
scalable or high-performance resources from within their JupyterHub sessions. The logic of
|
||||
how those resources are controlled is taken care of by the non-JupyterHub application.
|
||||
|
||||
### Can JupyterHub be used with my high-performance computing resources?
|
||||
|
||||
Yes - JupyterHub can provide access to many kinds of computing infrastructure.
|
||||
Especially when combined with other open-source schedulers such as Dask, you can manage fairly
|
||||
complex computing infrastructures from the interactive sessions of a JupyterHub. For example
|
||||
[see the Dask HPC page](https://docs.dask.org/en/latest/setup/hpc.html).
|
||||
|
||||
### How many resources do user sessions take?
|
||||
|
||||
This is highly configurable by the administrator. If you wish for your users to have simple
|
||||
data analytics environments for prototyping and light data exploring, you can restrict their
|
||||
memory and CPU based on the resources that you have available. If you'd like your JupyterHub
|
||||
to serve as a gateway to high-performance computing or data resources, you may increase the
|
||||
resources available on user machines, or connect them with computing infrastructures elsewhere.
|
||||
|
||||
### Can I customize the look and feel of a JupyterHub?
|
||||
|
||||
JupyterHub provides some customization of the graphics displayed to users. The most common
|
||||
modification is to add custom branding to the JupyterHub login page, loading pages, and
|
||||
various elements that persist across all pages (such as headers).
|
||||
|
||||
## For Technical Leads
|
||||
|
||||
### Will JupyterHub “just work” with our team's interactive computing setup?
|
||||
|
||||
Depending on the complexity of your setup, you'll have different experiences with "out of the box"
|
||||
distributions of JupyterHub. If all of the resources you need will fit on a single VM, then
|
||||
[The Littlest JupyterHub](https://tljh.jupyter.org) should get you up-and-running within
|
||||
a half day or so. For more complex setups, such as scalable Kubernetes clusters or access
|
||||
to high-performance computing and data, it will require more time and expertise with
|
||||
the technologies your JupyterHub will use (e.g., dev-ops knowledge with cloud computing).
|
||||
|
||||
In general, the base JupyterHub deployment is not the bottleneck for setup; the bottleneck is connecting
|
||||
your JupyterHub with the various services and tools that you wish to provide to your users.
|
||||
|
||||
### How well does JupyterHub scale? What are JupyterHub's limitations?
|
||||
|
||||
JupyterHub works well at both a small scale (e.g., a single VM or machine) as well as a
|
||||
high scale (e.g., a scalable Kubernetes cluster). It can be used for teams as small as 2, and
|
||||
for user bases as large as 10,000. The scalability of JupyterHub largely depends on the
|
||||
infrastructure on which it is deployed. JupyterHub has been designed to be lightweight and
|
||||
flexible, so you can tailor your JupyterHub deployment to your needs.
|
||||
|
||||
### Is JupyterHub resilient? What happens when a machine goes down?
|
||||
|
||||
For JupyterHubs that are deployed in a containerized environment (e.g., Kubernetes), it is
|
||||
possible to configure the JupyterHub to be fairly resistant to failures in the system.
|
||||
For example, if JupyterHub fails, then user sessions will not be affected (though new
|
||||
users will not be able to log in). When a JupyterHub process is restarted, it should
|
||||
seamlessly connect with the user database and the system will return to normal.
|
||||
Again, the details of your JupyterHub deployment (e.g., whether it's deployed on a scalable cluster)
|
||||
will affect the resiliency of the deployment.
|
||||
|
||||
### What interfaces does JupyterHub support?
|
||||
|
||||
Out of the box, JupyterHub supports a variety of popular data science interfaces for user sessions,
|
||||
such as JupyterLab, Jupyter Notebooks, and RStudio. Any interface that can be served
|
||||
via a web address can be served with a JupyterHub (with the right setup).
|
||||
|
||||
### Does JupyterHub make it easier for our team to collaborate?
|
||||
|
||||
JupyterHub provides a standardized environment and access to shared resources for your teams.
|
||||
This greatly reduces the cost associated with sharing analyses and content with other team
|
||||
members, and makes it easier to collaborate and build off of one another's ideas. Combined with
|
||||
access to high-performance computing and data, JupyterHub provides a common resource to
|
||||
amplify your team's ability to prototype their analyses, scale them to larger data, and then
|
||||
share their results with one another.
|
||||
|
||||
JupyterHub also provides a computational framework to share computational narratives between
|
||||
different levels of an organization. For example, data scientists can share Jupyter Notebooks
|
||||
rendered as [Voilà dashboards](https://voila.readthedocs.io/en/stable/) with those who are not
|
||||
familiar with programming, or create publicly-available interactive analyses to allow others to
|
||||
interact with their work.
|
||||
|
||||
### Can I use JupyterHub with R/RStudio or other languages and environments?
|
||||
|
||||
Yes, Jupyter is a polyglot project, and there are over 40 community-provided kernels for a variety
|
||||
of languages (the most common being Python, Julia, and R). You can also use a JupyterHub to provide
|
||||
access to other interfaces, such as RStudio, that provide their own access to a language kernel.
|
169
docs/source/gallery-jhub-deployments.md
Normal file
@@ -0,0 +1,169 @@
|
||||
# A Gallery of JupyterHub Deployments
|
||||
|
||||
**A JupyterHub Community Resource**
|
||||
|
||||
We've compiled this list of JupyterHub deployments to help the community
|
||||
see the breadth and growth of JupyterHub's use in education, research, and
|
||||
high performance computing.
|
||||
|
||||
Please submit pull requests to update information or to add new institutions or uses.
|
||||
|
||||
|
||||
## Academic Institutions, Research Labs, and Supercomputer Centers
|
||||
|
||||
### University of California Berkeley
|
||||
|
||||
- [BIDS - Berkeley Institute for Data Science](https://bids.berkeley.edu/)
|
||||
- [Teaching with Jupyter notebooks and JupyterHub](https://bids.berkeley.edu/resources/videos/teaching-ipythonjupyter-notebooks-and-jupyterhub)
|
||||
|
||||
- [Data 8](http://data8.org/)
|
||||
- [GitHub organization](https://github.com/data-8)
|
||||
|
||||
- [NERSC](http://www.nersc.gov/)
|
||||
- [Press release on Jupyter and Cori](http://www.nersc.gov/news-publications/nersc-news/nersc-center-news/2016/jupyter-notebooks-will-open-up-new-possibilities-on-nerscs-cori-supercomputer/)
|
||||
- [Moving and sharing data](https://www.nersc.gov/assets/Uploads/03-MovingAndSharingData-Cholia.pdf)
|
||||
|
||||
- [Research IT](http://research-it.berkeley.edu)
|
||||
- [JupyterHub server supports campus research computation](http://research-it.berkeley.edu/blog/17/01/24/free-fully-loaded-jupyterhub-server-supports-campus-research-computation)
|
||||
|
||||
### University of California Davis
|
||||
|
||||
- [Spinning up multiple Jupyter Notebooks on AWS for a tutorial](https://github.com/mblmicdiv/course2017/blob/master/exercises/sourmash-setup.md)
|
||||
|
||||
Although not technically a JupyterHub deployment, this tutorial setup
|
||||
may be helpful to others in the Jupyter community.
|
||||
|
||||
Thank you C. Titus Brown for sharing this with the Software Carpentry
|
||||
mailing list.
|
||||
|
||||
```
|
||||
* I started a big Amazon machine;
|
||||
* I installed Docker and built a custom image containing my software of
|
||||
interest;
|
||||
* I ran multiple containers, one connected to port 8000, one on 8001,
|
||||
etc. and gave each student a different port;
|
||||
* students could connect in and use the Terminal program in Jupyter to
|
||||
execute commands, and could upload/download files via the Jupyter
|
||||
console interface;
|
||||
* in theory I could have used notebooks too, but for this I didn’t have
|
||||
need.
|
||||
|
||||
I am aware that JupyterHub can probably do all of this including manage
|
||||
the containers, but I’m still a bit shy of diving into that; this was
|
||||
fairly straightforward, gave me disposable containers that were isolated
|
||||
for each individual student, and worked almost flawlessly. Should be
|
||||
easy to do with RStudio too.
|
||||
```
|
||||
|
||||
### Cal Poly San Luis Obispo
|
||||
|
||||
- [jupyterhub-deploy-teaching](https://github.com/jupyterhub/jupyterhub-deploy-teaching) based on work by Brian Granger for Cal Poly's Data Science 301 Course
|
||||
|
||||
### Clemson University
|
||||
|
||||
- Advanced Computing
|
||||
- [Palmetto cluster and JupyterHub](http://citi.sites.clemson.edu/2016/08/18/JupyterHub-for-Palmetto-Cluster.html)
|
||||
|
||||
### University of Colorado Boulder
|
||||
|
||||
- (CU Research Computing) CURC
|
||||
- [JupyterHub User Guide](https://www.rc.colorado.edu/support/user-guide/jupyterhub.html)
|
||||
- Slurm job dispatched on Crestone compute cluster
|
||||
- log troubleshooting
|
||||
- Profiles in IPython Clusters tab
|
||||
- [Parallel Processing with JupyterHub tutorial](https://www.rc.colorado.edu/support/examples-and-tutorials/parallel-processing-with-jupyterhub.html)
|
||||
- [Parallel Programming with JupyterHub document](https://www.rc.colorado.edu/book/export/html/833)
|
||||
|
||||
- Earth Lab at CU
|
||||
- [Tutorial on Parallel R on JupyterHub](https://earthdatascience.org/tutorials/parallel-r-on-jupyterhub/)
|
||||
|
||||
### HTCondor
|
||||
|
||||
- [HTCondor Python Bindings Tutorial from HTCondor Week 2017 includes information on their JupyterHub tutorials](https://research.cs.wisc.edu/htcondor/HTCondorWeek2017/presentations/TueBockelman_Python.pdf)
|
||||
|
||||
### University of Illinois
|
||||
|
||||
- https://datascience.business.illinois.edu
|
||||
|
||||
### MIT and Lincoln Labs
|
||||
|
||||
|
||||
### Michigan State University
|
||||
|
||||
- [Setting up JupyterHub](https://mediaspace.msu.edu/media/Setting+Up+Your+JupyterHub+Password/1_hgv13aag/11980471)
|
||||
|
||||
### University of Minnesota
|
||||
|
||||
- [JupyterHub Inside HPC](https://insidehpc.com/tag/jupyterhub/)
|
||||
|
||||
### University of Missouri
|
||||
|
||||
- https://dsa.missouri.edu/faq/
|
||||
|
||||
### University of Rochester CIRC
|
||||
|
||||
- [JupyterHub Userguide](https://info.circ.rochester.edu/Web_Applications/JupyterHub.html) - Slurm, beehive
|
||||
|
||||
### University of California San Diego
|
||||
|
||||
- San Diego Supercomputer Center - Andrea Zonca
|
||||
- [Deploy JupyterHub on a Supercomputer with SSH](https://zonca.github.io/2017/05/jupyterhub-hpc-batchspawner-ssh.html)
|
||||
- [Run Jupyterhub on a Supercomputer](https://zonca.github.io/2015/04/jupyterhub-hpc.html)
|
||||
- [Deploy JupyterHub on a VM for a Workshop](https://zonca.github.io/2016/04/jupyterhub-sdsc-cloud.html)
|
||||
- [Customize your Python environment in Jupyterhub](https://zonca.github.io/2017/02/customize-python-environment-jupyterhub.html)
|
||||
- [Jupyterhub deployment on multiple nodes with Docker Swarm](https://zonca.github.io/2016/05/jupyterhub-docker-swarm.html)
|
||||
- [Sample deployment of Jupyterhub in HPC on SDSC Comet](https://zonca.github.io/2017/02/sample-deployment-jupyterhub-hpc.html)
|
||||
|
||||
- Educational Technology Services - Paul Jamason
|
||||
- [jupyterhub.ucsd.edu](https://jupyterhub.ucsd.edu)
|
||||
|
||||
### TACC University of Texas
|
||||
|
||||
### Texas A&M
|
||||
|
||||
- Kristen Thyng - Oceanography
|
||||
- [Teaching with JupyterHub and nbgrader](http://kristenthyng.com/blog/2016/09/07/jupyterhub+nbgrader/)
|
||||
|
||||
|
||||
|
||||
## Service Providers
|
||||
|
||||
### AWS
|
||||
|
||||
- [running-jupyter-notebook-and-jupyterhub-on-amazon-emr](https://aws.amazon.com/blogs/big-data/running-jupyter-notebook-and-jupyterhub-on-amazon-emr/)
|
||||
|
||||
### Google Cloud Platform
|
||||
|
||||
- [Using Tensorflow and JupyterHub in Classrooms](https://cloud.google.com/solutions/using-tensorflow-jupyterhub-classrooms)
|
||||
- [using-tensorflow-and-jupyterhub blog post](https://opensource.googleblog.com/2016/10/using-tensorflow-and-jupyterhub.html)
|
||||
|
||||
### Everware
|
||||
|
||||
[Everware](https://github.com/everware): reproducible and reusable science powered by JupyterHub and Docker. Like nbviewer, but executable. CERN, Geneva. [Website](http://everware.xyz/)
|
||||
|
||||
|
||||
### Microsoft Azure
|
||||
|
||||
- https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-data-science-linux-dsvm-intro
|
||||
|
||||
### Rackspace Carina
|
||||
|
||||
- https://getcarina.com/blog/learning-how-to-whale/
|
||||
- http://carolynvanslyck.com/talk/carina/jupyterhub/#/
|
||||
|
||||
### jcloud.io
|
||||
- Open to public JupyterHub server
|
||||
- https://jcloud.io
|
||||
## Miscellaneous
|
||||
|
||||
- https://medium.com/@ybarraud/setting-up-jupyterhub-with-sudospawner-and-anaconda-844628c0dbee#.rm3yt87e1
|
||||
- https://groups.google.com/forum/#!topic/jupyter/nkPSEeMr8c0 Mailing list UT deployment
|
||||
- JupyterHub setup on Centos https://gist.github.com/johnrc/604971f7d41ebf12370bf5729bf3e0a4
|
||||
- Deploy JupyterHub to Docker Swarm https://jupyterhub.surge.sh/#/welcome
|
||||
- http://www.laketide.com/building-your-lab-part-3/
|
||||
- http://estrellita.hatenablog.com/entry/2015/07/31/083202
|
||||
- http://www.walkingrandomly.com/?p=5734
|
||||
- https://wrdrd.com/docs/consulting/education-technology
|
||||
- https://bitbucket.org/jackhale/fenics-jupyter
|
||||
- [LinuxCluster blog](https://linuxcluster.wordpress.com/category/application/jupyterhub/)
|
||||
- [Network Technology](https://arnesund.com/tag/jupyterhub/) [Spark Cluster on OpenStack with Multi-User Jupyter Notebook](https://arnesund.com/2015/09/21/spark-cluster-on-openstack-with-multi-user-jupyter-notebook/)
|
99
docs/source/getting-started/authenticators-users-basics.md
Normal file
@@ -0,0 +1,99 @@
|
||||
# Authentication and User Basics
|
||||
|
||||
The default Authenticator uses [PAM][] to authenticate system users with
|
||||
their username and password. With the default Authenticator, any user
|
||||
with an account and password on the system will be allowed to login.
|
||||
|
||||
## Create a whitelist of users
|
||||
|
||||
You can restrict which users are allowed to login with a whitelist,
|
||||
`Authenticator.whitelist`:
|
||||
|
||||
|
||||
```python
|
||||
c.Authenticator.whitelist = {'mal', 'zoe', 'inara', 'kaylee'}
|
||||
```
|
||||
|
||||
Users in the whitelist are added to the Hub database when the Hub is
|
||||
started.
|
||||
|
||||
## Configure admins (`admin_users`)
|
||||
|
||||
Admin users of JupyterHub, `admin_users`, can add and remove users from
|
||||
the user `whitelist`. `admin_users` can take actions on other users'
|
||||
behalf, such as stopping and restarting their servers.
|
||||
|
||||
A set of initial admin users, `admin_users`, can be configured as follows:
|
||||
|
||||
```python
|
||||
c.Authenticator.admin_users = {'mal', 'zoe'}
|
||||
```
|
||||
Users in the admin list are automatically added to the user `whitelist`,
|
||||
if they are not already present.
|
||||
|
||||
## Give admin access to other users' notebook servers (`admin_access`)
|
||||
|
||||
Since the default `JupyterHub.admin_access` setting is False, the admins
|
||||
do not have permission to log in to the single user notebook servers
|
||||
owned by *other users*. If `JupyterHub.admin_access` is set to True,
|
||||
then admins have permission to log in *as other users* on their
|
||||
respective machines, for debugging. **As a courtesy, you should make
|
||||
sure your users know if admin_access is enabled.**
|
||||
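The setting itself is a single boolean in `jupyterhub_config.py`; a minimal sketch:

```python
c.JupyterHub.admin_access = True
```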
|
||||
## Add or remove users from the Hub
|
||||
|
||||
Users can be added to and removed from the Hub via either the admin
|
||||
panel or the REST API. When a user is **added**, the user will be
|
||||
automatically added to the whitelist and database. Restarting the Hub
|
||||
will not require manually updating the whitelist in your config file,
|
||||
as the users will be loaded from the database.
|
||||
|
||||
After starting the Hub once, it is not sufficient to **remove** a user
|
||||
from the whitelist in your config file. You must also remove the user
|
||||
from the Hub's database, either by deleting the user from JupyterHub's
|
||||
admin page, or you can clear the `jupyterhub.sqlite` database and start
|
||||
fresh.
|
||||
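As a sketch of the REST route involved, adding a user named `newuser` could look like the following (the token in `$TOKEN` must belong to an admin, and the Hub API address below is the default local one; both are placeholders to adjust for your deployment):

```bash
curl -X POST \
  -H "Authorization: token $TOKEN" \
  http://127.0.0.1:8081/hub/api/users/newuser
```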
|
||||
## Use LocalAuthenticator to create system users
|
||||
|
||||
The `LocalAuthenticator` is a special kind of authenticator that has
|
||||
the ability to manage users on the local system. When you try to add a
|
||||
new user to the Hub, a `LocalAuthenticator` will check if the user
|
||||
already exists. If you set the configuration value, `create_system_users`,
|
||||
to `True` in the configuration file, the `LocalAuthenticator` has
|
||||
the privileges to add users to the system. The setting in the config
|
||||
file is:
|
||||
|
||||
```python
|
||||
c.LocalAuthenticator.create_system_users = True
|
||||
```
|
||||
|
||||
Adding a user to the Hub that doesn't already exist on the system will
|
||||
result in the Hub creating that user via the system `adduser` command
|
||||
line tool. This option is typically used on hosted deployments of
|
||||
JupyterHub, to avoid the need to manually create all your users before
|
||||
launching the service. This approach is not recommended when running
|
||||
JupyterHub in situations where JupyterHub users map directly onto the
|
||||
system's UNIX users.
|
||||
|
||||
## Use OAuthenticator to support OAuth with popular service providers
|
||||
|
||||
JupyterHub's [OAuthenticator][] currently supports the following
|
||||
popular services:
|
||||
|
||||
- Auth0
|
||||
- Bitbucket
|
||||
- CILogon
|
||||
- GitHub
|
||||
- GitLab
|
||||
- Globus
|
||||
- Google
|
||||
- MediaWiki
|
||||
- Okpy
|
||||
- OpenShift
|
||||
|
||||
A generic implementation, which you can use for OAuth authentication
|
||||
with any provider, is also available.
|
||||
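For instance, a minimal sketch of configuring the GitHub provider in `jupyterhub_config.py` (the callback URL and credentials below are placeholders; see the OAuthenticator documentation for the authoritative settings):

```python
c.JupyterHub.authenticator_class = 'oauthenticator.github.GitHubOAuthenticator'
c.GitHubOAuthenticator.oauth_callback_url = 'https://hub.example.com/hub/oauth_callback'
c.GitHubOAuthenticator.client_id = 'your-github-client-id'
c.GitHubOAuthenticator.client_secret = 'your-github-client-secret'
```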
|
||||
[PAM]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
|
||||
[OAuthenticator]: https://github.com/jupyterhub/oauthenticator
|
@@ -1,7 +1,7 @@
|
||||
# Configuration Basics
|
||||
|
||||
This section contains basic information about configuring settings for a JupyterHub
|
||||
deployment. The [Technical Reference](reference-index)
|
||||
The section contains basic information about configuring settings for a JupyterHub
|
||||
deployment. The [Technical Reference](../reference/index.html)
|
||||
documentation provides additional details.
|
||||
|
||||
This section will help you learn how to:
|
||||
@@ -11,8 +11,6 @@ This section will help you learn how to:
|
||||
- configure JupyterHub using command line options
|
||||
- find information and examples for some common deployments
|
||||
|
||||
(generate-config-file)=
|
||||
|
||||
## Generate a default config file
|
||||
|
||||
On startup, JupyterHub will look by default for a configuration file,
|
||||
@@ -46,30 +44,30 @@ jupyterhub -f /etc/jupyterhub/jupyterhub_config.py
|
||||
```
|
||||
|
||||
The IPython documentation provides additional information on the
|
||||
[config system](https://ipython.readthedocs.io/en/stable/development/config.html)
|
||||
[config system](http://ipython.readthedocs.io/en/stable/development/config.html)
|
||||
that Jupyter uses.
|
||||
|
||||
## Configure using command line options
|
||||
|
||||
To display all command line options that are available for configuration run the following command:
|
||||
To display all command line options that are available for configuration:
|
||||
|
||||
```bash
|
||||
jupyterhub --help-all
|
||||
```
|
||||
|
||||
Configuration using the command line options is done when launching JupyterHub.
|
||||
For example, to start JupyterHub on `10.0.1.2:443` with https, you
|
||||
For example, to start JupyterHub on ``10.0.1.2:443`` with https, you
|
||||
would enter:
|
||||
|
||||
```bash
|
||||
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
|
||||
```
|
||||
```
|
||||
|
||||
All configurable options may technically be set on the command line,
|
||||
All configurable options may technically be set on the command-line,
|
||||
though some are inconvenient to type. To set a particular configuration
|
||||
parameter, `c.Class.trait`, you would use the command line option,
|
||||
`--Class.trait`, when starting JupyterHub. For example, to configure the
|
||||
`c.Spawner.notebook_dir` trait from the command line, use the
|
||||
`c.Spawner.notebook_dir` trait from the command-line, use the
|
||||
`--Spawner.notebook_dir` option:
|
||||
|
||||
```bash
|
||||
@@ -79,24 +77,11 @@ jupyterhub --Spawner.notebook_dir='~/assignments'
|
||||
## Configure for various deployment environments
|
||||
|
||||
The default authentication and process spawning mechanisms can be replaced, and
|
||||
specific [authenticators](authenticators-users-basics) and
|
||||
[spawners](spawners-basics) can be set in the configuration file.
|
||||
specific [authenticators](./authenticators-users-basics.html) and
|
||||
[spawners](./spawners-basics.html) can be set in the configuration file.
|
||||
This enables JupyterHub to be used with a variety of authentication methods or
|
||||
process control and deployment environments. [Some examples](config-examples),
|
||||
meant as illustrations, are:
|
||||
process control and deployment environments. [Some examples](../reference/config-examples.html),
|
||||
meant as illustration, are:
|
||||
|
||||
- Using GitHub OAuth instead of PAM with [OAuthenticator](https://github.com/jupyterhub/oauthenticator)
|
||||
- Spawning single-user servers with Docker, using the [DockerSpawner](https://github.com/jupyterhub/dockerspawner)
|
||||
|
||||
## Run the proxy separately
|
||||
|
||||
This is _not_ strictly necessary, but useful in many cases. If you
|
||||
use a custom proxy (e.g. Traefik), this is also not needed.
|
||||
|
||||
Connections to user servers go through the proxy, and _not_ the hub
|
||||
itself. If the proxy stays running when the hub restarts (for
|
||||
maintenance, re-configuration, etc.), then user connections are not
|
||||
interrupted. For simplicity, by default the hub starts the proxy
|
||||
automatically, so if the hub restarts, the proxy restarts, and user
|
||||
connections are interrupted. It is easy to run the proxy separately,
|
||||
for information see [the separate proxy page](separate-proxy).
|
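If you do run the proxy yourself, the Hub must be told not to start one and where to reach the proxy's API; a rough sketch of the relevant settings (the URL and token are placeholders, and the separate proxy page has the full procedure):

```python
# jupyterhub_config.py -- sketch only
c.ConfigurableHTTPProxy.should_start = False
c.ConfigurableHTTPProxy.api_url = 'http://127.0.0.1:8001'
c.ConfigurableHTTPProxy.auth_token = 'generate-me-with-openssl-rand-hex-32'
```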
12
docs/source/getting-started/index.rst
Normal file
@@ -0,0 +1,12 @@
|
||||
Getting Started
===============

.. toctree::
   :maxdepth: 2

   config-basics
   networking-basics
   security-basics
   authenticators-users-basics
   spawners-basics
   services-basics
|
@@ -11,7 +11,7 @@ This section will help you with basic proxy and network configuration to:
|
||||
|
||||
The Proxy's main IP address setting determines where JupyterHub is available to users.
|
||||
By default, JupyterHub is configured to be available on all network interfaces
|
||||
(`''`) on port 8000. _Note_: Use of `'*'` is discouraged for IP configuration;
|
||||
(`''`) on port 8000. *Note*: Use of `'*'` is discouraged for IP configuration;
|
||||
instead, use of `'0.0.0.0'` is preferred.
|
||||
|
||||
Changing the Proxy's main IP address and port can be done with the following
|
||||
@@ -41,9 +41,9 @@ port.
|
||||
|
||||
## Set the Proxy's REST API communication URL (optional)
|
||||
|
||||
By default, the proxy's REST API listens on port 8081 of `localhost` only.
|
||||
The Hub service talks to the proxy via a REST API on a secondary port.
|
||||
The REST API URL (hostname and port) can be configured separately and override the default settings.
|
||||
By default, this REST API listens on port 8081 of `localhost` only.
|
||||
The Hub service talks to the proxy via a REST API on a secondary port. The
|
||||
API URL can be configured separately and override the default settings.
|
||||
|
||||
### Set api_url
|
||||
|
||||
@@ -74,7 +74,7 @@ The Hub service listens only on `localhost` (port 8081) by default.
|
||||
The Hub needs to be accessible from both the proxy and all Spawners.
|
||||
When spawning local servers, an IP address setting of `localhost` is fine.
|
||||
|
||||
If _either_ the Proxy _or_ (more likely) the Spawners will be remote or
|
||||
If *either* the Proxy *or* (more likely) the Spawners will be remote or
|
||||
isolated in containers, the Hub must listen on an IP that is accessible.
|
||||
|
||||
```python
|
||||
@@ -82,20 +82,20 @@ c.JupyterHub.hub_ip = '10.0.1.4'
|
||||
c.JupyterHub.hub_port = 54321
|
||||
```
|
||||
|
||||
**Added in 0.8:** The `c.JupyterHub.hub_connect_ip` setting is the IP address or
|
||||
**Added in 0.8:** The `c.JupyterHub.hub_connect_ip` setting is the ip address or
|
||||
hostname that other services should use to connect to the Hub. A common
|
||||
configuration for, e.g. docker, is:
|
||||
|
||||
```python
|
||||
c.JupyterHub.hub_ip = '0.0.0.0' # listen on all interfaces
|
||||
c.JupyterHub.hub_connect_ip = '10.0.1.4' # IP as seen on the docker network. Can also be a hostname.
|
||||
c.JupyterHub.hub_connect_ip = '10.0.1.4' # ip as seen on the docker network. Can also be a hostname.
|
||||
```
|
||||
|
||||
## Adjusting the hub's URL
|
||||
|
||||
The hub will most commonly be running on a hostname of its own. If it
|
||||
The hub will most commonly be running on a hostname of its own. If it
|
||||
is not – for example, if the hub is being reverse-proxied and being
|
||||
exposed at a URL such as `https://proxy.example.org/jupyter/` – then
|
||||
you will need to tell JupyterHub the base URL of the service. In such
|
||||
you will need to tell JupyterHub the base URL of the service. In such
|
||||
a case, it is both necessary and sufficient to set
|
||||
`c.JupyterHub.base_url = '/jupyter/'` in the configuration.
|
186
docs/source/getting-started/security-basics.rst
Normal file
@@ -0,0 +1,186 @@
|
||||
Security settings
|
||||
=================
|
||||
|
||||
.. important::
|
||||
|
||||
You should not run JupyterHub without SSL encryption on a public network.
|
||||
|
||||
Security is the most important aspect of configuring Jupyter. Three
|
||||
configuration settings are the main aspects of security configuration:
|
||||
|
||||
1. :ref:`SSL encryption <ssl-encryption>` (to enable HTTPS)
|
||||
2. :ref:`Cookie secret <cookie-secret>` (a key for encrypting browser cookies)
|
||||
3. Proxy :ref:`authentication token <authentication-token>` (used for the Hub and
|
||||
other services to authenticate to the Proxy)
|
||||
|
||||
The Hub hashes all secrets (e.g., auth tokens) before storing them in its
|
||||
database. A loss of control over read-access to the database should have
|
||||
minimal impact on your deployment; if your database has been compromised, it
|
||||
is still a good idea to revoke existing tokens.
|
||||
|
||||
.. _ssl-encryption:
|
||||
|
||||
Enabling SSL encryption
|
||||
-----------------------
|
||||
|
||||
Since JupyterHub includes authentication and allows arbitrary code execution,
|
||||
you should not run it without SSL (HTTPS).
|
||||
|
||||
Using an SSL certificate
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This will require you to obtain an official, trusted SSL certificate or create a
|
||||
self-signed certificate. Once you have obtained and installed a key and
|
||||
certificate you need to specify their locations in the ``jupyterhub_config.py``
|
||||
configuration file as follows:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
c.JupyterHub.ssl_key = '/path/to/my.key'
|
||||
c.JupyterHub.ssl_cert = '/path/to/my.cert'
|
||||
|
||||
|
||||
Some cert files also contain the key, in which case only the cert is needed. It
|
||||
is important that these files be put in a secure location on your server, where
|
||||
they are not readable by regular users.
|
||||
|
||||
If you are using a **chain certificate**, see also chained certificate for SSL
|
||||
in the JupyterHub `Troubleshooting FAQ <../troubleshooting.html>`_.
|
||||
|
||||
Using letsencrypt
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
It is also possible to use `letsencrypt <https://letsencrypt.org/>`_ to obtain
|
||||
a free, trusted SSL certificate. If you run letsencrypt using the default
|
||||
options, the needed configuration is (replace ``mydomain.tld`` by your fully
|
||||
qualified domain name):
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/{mydomain.tld}/privkey.pem'
|
||||
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/{mydomain.tld}/fullchain.pem'
|
||||
|
||||
If the fully qualified domain name (FQDN) is ``example.com``, the following
|
||||
would be the needed configuration:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/example.com/privkey.pem'
|
||||
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/example.com/fullchain.pem'
|
||||
|
||||
|
||||
If SSL termination happens outside of the Hub
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In certain cases, for example if the hub is running behind a reverse proxy, and
|
||||
`SSL termination is being provided by NGINX <https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/>`_,
|
||||
it is reasonable to run the hub without SSL.
|
||||
|
||||
To achieve this, simply omit the configuration settings
|
||||
``c.JupyterHub.ssl_key`` and ``c.JupyterHub.ssl_cert``
|
||||
(setting them to ``None`` does not have the same effect, and is an error).
|
||||
|
||||
.. _cookie-secret:
|
||||
|
||||
Cookie secret
|
||||
-------------
|
||||
|
||||
The cookie secret is an encryption key, used to encrypt the browser cookies
|
||||
which are used for authentication. Three common methods are described for
|
||||
generating and configuring the cookie secret.
|
||||
|
||||
Generating and storing as a cookie secret file
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The cookie secret should be 32 random bytes, encoded as hex, and is typically
|
||||
stored in a ``jupyterhub_cookie_secret`` file. An example command to generate the
|
||||
``jupyterhub_cookie_secret`` file is:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
openssl rand -hex 32 > /srv/jupyterhub/jupyterhub_cookie_secret
|
||||
|
||||
In most deployments of JupyterHub, you should point this to a secure location on
|
||||
the file system, such as ``/srv/jupyterhub/jupyterhub_cookie_secret``.
|
||||
|
||||
The location of the ``jupyterhub_cookie_secret`` file can be specified in the
|
||||
``jupyterhub_config.py`` file as follows:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
c.JupyterHub.cookie_secret_file = '/srv/jupyterhub/jupyterhub_cookie_secret'
|
||||
|
||||
If the cookie secret file doesn't exist when the Hub starts, a new cookie
|
||||
secret is generated and stored in the file. The file must not be readable by
|
||||
``group`` or ``other`` or the server won't start. The recommended permissions
|
||||
for the cookie secret file are ``600`` (owner-only rw).
|
||||
|
||||
Generating and storing as an environment variable
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If you would like to avoid the need for files, the value can be loaded in the
|
||||
Hub process from the ``JPY_COOKIE_SECRET`` environment variable, which is a
|
||||
hex-encoded string. You can set it this way:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
export JPY_COOKIE_SECRET=`openssl rand -hex 32`
|
||||
|
||||
For security reasons, this environment variable should only be visible to the
|
||||
Hub. If you set it dynamically as above, all users will be logged out each time
|
||||
the Hub starts.
|
||||
|
||||
Generating and storing as a binary string
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You can also set the cookie secret in the configuration file
|
||||
itself, ``jupyterhub_config.py``, as a binary string:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
c.JupyterHub.cookie_secret = bytes.fromhex('64 CHAR HEX STRING')
|
||||
|
||||
|
||||
.. important::
|
||||
|
||||
If the cookie secret value changes for the Hub, all single-user notebook
|
||||
servers must also be restarted.
|
||||
|
||||
|
||||
.. _authentication-token:
|
||||
|
||||
Proxy authentication token
|
||||
--------------------------
|
||||
|
||||
The Hub authenticates its requests to the Proxy using a secret token that
|
||||
the Hub and Proxy agree upon. The value of this string should be a random
|
||||
string (for example, generated by ``openssl rand -hex 32``).
|
||||
|
||||
Generating and storing token in the configuration file
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Or you can set the value in the configuration file, ``jupyterhub_config.py``:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
c.JupyterHub.proxy_auth_token = '0bc02bede919e99a26de1e2a7a5aadfaf6228de836ec39a05a6c6942831d8fe5'
|
||||
|
||||
Generating and storing as an environment variable
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You can pass this value of the proxy authentication token to the Hub and Proxy
|
||||
using the ``CONFIGPROXY_AUTH_TOKEN`` environment variable:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
export CONFIGPROXY_AUTH_TOKEN=`openssl rand -hex 32`
|
||||
|
||||
This environment variable needs to be visible to the Hub and Proxy.
|
||||
|
||||
Default if token is not set
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If you don't set the Proxy authentication token, the Hub will generate a random
|
||||
key itself, which means that any time you restart the Hub you **must also
|
||||
restart the Proxy**. If the proxy is a subprocess of the Hub, this should happen
|
||||
automatically (this is the default configuration).
|
121
docs/source/getting-started/services-basics.md
Normal file
@@ -0,0 +1,121 @@
|
||||
# External services
|
||||
|
||||
When working with JupyterHub, a **Service** is defined as a process
|
||||
that interacts with the Hub's REST API. A Service may perform a specific
action or task. For example, shutting down individuals' single-user
notebook servers that have been idle for some time is a good example of a task that could
be automated by a Service. Let's look at how the [cull_idle_servers][]
script can be used as a Service.
|
||||
|
||||
## Real-world example to cull idle servers
|
||||
|
||||
JupyterHub has a REST API that can be used by external services. This
|
||||
document will:
|
||||
|
||||
- explain some basic information about API tokens
|
||||
- clarify that API tokens can be used to authenticate to
|
||||
single-user servers as of [version 0.8.0](../changelog.html)
|
||||
- show how the [cull_idle_servers][] script can be:
|
||||
- used in a Hub-managed service
|
||||
- run as a standalone script
|
||||
|
||||
Both examples for `cull_idle_servers` will communicate tasks to the
|
||||
Hub via the REST API.
|
||||
|
||||
## API Token basics
|
||||
|
||||
### Create an API token
|
||||
|
||||
To run such an external service, an API token must be created and
|
||||
provided to the service.
|
||||
|
||||
As of [version 0.6.0](../changelog.html), the preferred way of doing
|
||||
this is to first generate an API token:
|
||||
|
||||
```bash
|
||||
openssl rand -hex 32
|
||||
```
|
||||
|
||||
In [version 0.8.0](../changelog.html), a TOKEN request page for
|
||||
generating an API token is available from the JupyterHub user interface:
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
### Pass environment variable with token to the Hub
|
||||
|
||||
In the case of `cull_idle_servers`, it is passed as the environment
|
||||
variable called `JUPYTERHUB_API_TOKEN`.
|
||||
|
||||
### Use API tokens for services and tasks that require external access
|
||||
|
||||
While API tokens are often associated with a specific user, API tokens
|
||||
can be used by services that require external access for activities
|
||||
that may not correspond to a specific human, e.g. adding users during
|
||||
setup for a tutorial or workshop. Add a service and its API token to the
|
||||
JupyterHub configuration file, `jupyterhub_config.py`:
|
||||
|
||||
```python
|
||||
c.JupyterHub.services = [
|
||||
{'name': 'adding-users', 'api_token': 'super-secret-token'},
|
||||
]
|
||||
```
|
||||
|
||||
### Restart JupyterHub
|
||||
|
||||
Upon restarting JupyterHub, you should see a message like below in the
|
||||
logs:
|
||||
|
||||
```
|
||||
Adding API token for <username>
|
||||
```
|
||||
|
||||
## Authenticating to single-user servers using API token
|
||||
|
||||
In JupyterHub 0.7, there is no mechanism for token authentication to
|
||||
single-user servers, and only cookies can be used for authentication.
|
||||
0.8 supports using JupyterHub API tokens to authenticate to single-user
|
||||
servers.
|
||||
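A rough sketch of such a request, using `curl` against a single-user server reached through the proxy (the username and address are placeholders):

```bash
# query the status endpoint of a user's running server with an API token
curl -H "Authorization: token $JUPYTERHUB_API_TOKEN" \
  "http://127.0.0.1:8000/user/<username>/api/status"
```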
|
||||
## Configure `cull-idle` to run as a Hub-Managed Service
|
||||
|
||||
In `jupyterhub_config.py`, add the following dictionary for the
|
||||
`cull-idle` Service to the `c.JupyterHub.services` list:
|
||||
|
||||
```python
|
||||
c.JupyterHub.services = [
|
||||
{
|
||||
'name': 'cull-idle',
|
||||
'admin': True,
|
||||
'command': 'python3 cull_idle_servers.py --timeout=3600'.split(),
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
where:
|
||||
|
||||
- `'admin': True` indicates that the Service has 'admin' permissions, and
|
||||
- `'command'` indicates that the Service will be launched as a
|
||||
subprocess, managed by the Hub.
|
||||
|
||||
## Run `cull-idle` manually as a standalone script
|
||||
|
||||
Now you can run your script, i.e. `cull_idle_servers`, by providing it
|
||||
the API token and it will authenticate through the REST API to
|
||||
interact with it.
|
||||
|
||||
This will run `cull-idle` manually. `cull-idle` can be run as a standalone
|
||||
script anywhere with access to the Hub, and will periodically check for idle
|
||||
servers and shut them down via the Hub's REST API. In order to shutdown the
|
||||
servers, the token given to cull-idle must have admin privileges.
|
||||
|
||||
Generate an API token and store it in the `JUPYTERHUB_API_TOKEN` environment
|
||||
variable. Run `cull_idle_servers.py` manually.
|
||||
|
||||
```bash
|
||||
export JUPYTERHUB_API_TOKEN='token'
|
||||
python3 cull_idle_servers.py [--timeout=900] [--url=http://127.0.0.1:8081/hub/api]
|
||||
```
|
||||
|
||||
[cull_idle_servers]: https://github.com/jupyterhub/jupyterhub/blob/master/examples/cull-idle/cull_idle_servers.py
|
@@ -1,14 +1,12 @@
|
||||
(spawners)=
|
||||
|
||||
# Spawners and single-user notebook servers
|
||||
|
||||
A Spawner starts each single-user notebook server. Since the single-user server is an instance of `jupyter notebook`, an entire separate
|
||||
multi-process application, many aspects of that server can be configured and there are a lot
|
||||
of ways to express that configuration.
|
||||
Since the single-user server is an instance of `jupyter notebook`, an entire separate
|
||||
multi-process application, there are many aspect of that server can configure, and a lot of ways
|
||||
to express that configuration.
|
||||
|
||||
At the JupyterHub level, you can set some values on the Spawner. The simplest of these is
|
||||
`Spawner.notebook_dir`, which lets you set the root directory for a user's server. This root
|
||||
notebook directory is the highest-level directory users will be able to access in the notebook
|
||||
notebook directory is the highest level directory users will be able to access in the notebook
|
||||
dashboard. In this example, the root notebook directory is set to `~/notebooks`, where `~` is
|
||||
expanded to the user's home directory.
|
||||
|
||||
@@ -16,13 +14,13 @@ expanded to the user's home directory.
|
||||
c.Spawner.notebook_dir = '~/notebooks'
|
||||
```
|
||||
|
||||
You can also specify extra command line arguments to the notebook server with:
|
||||
You can also specify extra command-line arguments to the notebook server with:
|
||||
|
||||
```python
|
||||
c.Spawner.args = ['--debug', '--profile=PHYS131']
|
||||
```
|
||||
|
||||
This could be used to set the user's default page for the single-user server:
|
||||
This could be used to set the users default page for the single user server:
|
||||
|
||||
```python
|
||||
c.Spawner.args = ['--NotebookApp.default_url=/notebooks/Welcome.ipynb']
|
@@ -1,128 +0,0 @@
|
||||
(api-only)=
|
||||
|
||||
# Deploying JupyterHub in "API only mode"
|
||||
|
||||
As a service for deploying and managing Jupyter servers for users, JupyterHub
|
||||
exposes this functionality _primarily_ via a [REST API](rest).
|
||||
For convenience, JupyterHub also ships with a _basic_ web UI built using that REST API.
|
||||
The basic web UI enables users to click a button to quickly start and stop their servers,
|
||||
and it lets admins perform some basic user and server management tasks.
|
||||
|
||||
The REST API has always provided additional functionality beyond what is available in the basic web UI.
|
||||
Similarly, we avoid implementing UI functionality that is also not available via the API.
|
||||
With JupyterHub 2.0, the basic web UI will **always** be composed using the REST API.
|
||||
In other words, no UI pages should rely on information not available via the REST API.
|
||||
Previously, some admin UI functionality could only be achieved via admin pages,
|
||||
such as paginated requests.
|
||||
|
||||
## Limited UI customization via templates
|
||||
|
||||
The JupyterHub UI is customizable via extensible HTML [templates](templates),
|
||||
but there are limits to what can be customized in this way.
|
||||
Adding some content and messages to existing pages is well supported,
|
||||
but changing the page flow and what pages are available are beyond the scope of what is customizable.
|
||||
|
||||
## Rich UI customization with REST API based apps
|
||||
|
||||
Increasingly, JupyterHub is used purely as an API for managing Jupyter servers
|
||||
for other Jupyter-based applications that might want to present a different user experience.
|
||||
If you want a fully customized user experience,
|
||||
you can now disable the Hub UI and use your own pages together with the JupyterHub REST API
|
||||
to build your own web application to serve your users,
|
||||
relying on the Hub only as an API for managing users and servers.
|
||||
|
||||
One example of such an application is [BinderHub][], which powers https://mybinder.org,
|
||||
and motivates many of these changes.
|
||||
|
||||
BinderHub is distinct from a traditional JupyterHub deployment
|
||||
because it uses temporary users created for each launch.
|
||||
Instead of presenting a login page,
|
||||
users are presented with a form to specify what environment they would like to launch:
|
||||
|
||||

|
||||
|
||||
When a launch is requested:
|
||||
|
||||
1. an image is built, if necessary
|
||||
2. a temporary user is created,
|
||||
3. a server is launched for that user, and
|
||||
4. when running, users are redirected to an already running server with an auth token in the URL
|
||||
5. after the session is over, the user is deleted
|
||||
|
||||
This means that a lot of JupyterHub's UI flow doesn't make sense:
|
||||
|
||||
- there is no way for users to login
|
||||
- the human user doesn't map onto a JupyterHub `User` in a meaningful way
|
||||
- when a server isn't running, there isn't a 'restart your server' action available because the user has been deleted
|
||||
- users do not have any access to any Hub functionality, so presenting pages for those features would be confusing
|
||||
|
||||
BinderHub is one of the motivating use cases for JupyterHub supporting being used _only_ via its API.
|
||||
We'll use BinderHub here as an example of various configuration options.
|
||||
|
||||
[binderhub]: https://binderhub.readthedocs.io
|
||||
|
||||
## Disabling Hub UI
|
||||
|
||||
`c.JupyterHub.hub_routespec` is a configuration option to specify which URL prefix should be routed to the Hub.
|
||||
The default is `/` which means that the Hub will receive all requests not already specified to be routed somewhere else.
|
||||
|
||||
There are three values that are most logical for `hub_routespec`:
|
||||
|
||||
- `/` - this is the default, and used in most deployments.
|
||||
It is also the only option prior to JupyterHub 1.4.
|
||||
- `/hub/` - this serves only Hub pages, both UI and API
|
||||
- `/hub/api` - this serves _only the Hub API_, so all Hub UI is disabled,
|
||||
aside from the OAuth confirmation page, if used.
|
||||
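For example, an API-only deployment along these lines might set (a sketch; mybinder.org's actual setup is described below):

```python
c.JupyterHub.hub_routespec = "/hub/api"
```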
|
||||
If you choose a hub routespec other than `/`,
|
||||
the main JupyterHub feature you will lose is the automatic handling of requests for `/user/:username`
|
||||
when the requested server is not running.
|
||||
|
||||
JupyterHub's handling of this request shows this page,
|
||||
telling you that the server is not running,
|
||||
with a button to launch it again:
|
||||
|
||||

|
||||
|
||||
If you set `hub_routespec` to something other than `/`,
|
||||
it is likely that you also want to register another destination for `/` to handle requests to not-running servers.
|
||||
If you don't, you will see a default 404 page from the proxy:
|
||||
|
||||

|
||||
|
||||
For mybinder.org, the default "start my server" page doesn't make sense,
|
||||
because when a server is gone, there is no restart action.
|
||||
Instead, we provide hints about how to get back to a link to start a _new_ server:
|
||||
|
||||

|
||||
|
||||
To achieve this, mybinder.org registers a route for `/` that goes to a custom endpoint
|
||||
that runs nginx and only serves this static HTML error page.
|
||||
This is set with
|
||||
|
||||
```python
|
||||
c.Proxy.extra_routes = {
|
||||
"/": "http://custom-404-entpoint/",
|
||||
}
|
||||
```
|
||||
|
||||
You may want to use an alternate behavior, such as redirecting to a landing page,
|
||||
or taking some other action based on the requested page.
|
||||
|
||||
If you use `c.JupyterHub.hub_routespec = "/hub/"`,
|
||||
then all the Hub pages will be available,
|
||||
and only this default-page-404 issue will come up.
|
||||
|
||||
If you use `c.JupyterHub.hub_routespec = "/hub/api/"`,
|
||||
then only the Hub _API_ will be available,
|
||||
and all UI will be up to you.
|
||||
mybinder.org takes this last option,
|
||||
because none of the Hub UI pages really make sense.
|
||||
Binder users don't have any reason to know or care that JupyterHub happens
|
||||
to be an implementation detail of how their environment is managed.
|
||||
Seeing Hub error pages and messages in that situation is more likely to be confusing than helpful.
|
||||
|
||||
:::{versionadded} 1.4
|
||||
|
||||
`c.JupyterHub.hub_routespec` and `c.Proxy.extra_routes` are new in JupyterHub 1.4.
|
||||
:::
|
@@ -1,263 +0,0 @@
|
||||
# Configuring user environments
|
||||
|
||||
To deploy JupyterHub means you are providing Jupyter notebook environments for
|
||||
multiple users. Often, this includes a desire to configure the user
|
||||
environment in a custom way.
|
||||
|
||||
Since the `jupyterhub-singleuser` server extends the standard Jupyter notebook
|
||||
server, most configuration and documentation that applies to Jupyter Notebook
|
||||
applies to the single-user environments. Configuration of user environments
|
||||
typically does not occur through JupyterHub itself, but rather through system-wide
|
||||
configuration of Jupyter, which is inherited by `jupyterhub-singleuser`.
|
||||
|
||||
**Tip:** When searching for configuration tips for JupyterHub user environments, you might want to remove JupyterHub from your search because there are a lot more people out there configuring Jupyter than JupyterHub and the configuration is the same.
|
||||
|
||||
This section will focus on user environments, which includes the following:
|
||||
|
||||
- [Installing packages](#installing-packages)
|
||||
- [Configuring Jupyter and IPython](#configuring-jupyter-and-ipython)
|
||||
- [Installing kernelspecs](#installing-kernelspecs)
|
||||
- [Using containers vs. multi-user hosts](#multi-user-hosts-vs-containers)
|
||||
|
||||
## Installing packages
|
||||
|
||||
To make packages available to users, you will typically install packages system-wide or in a shared environment.
|
||||
|
||||
This installation location should always be in the same environment where
|
||||
`jupyterhub-singleuser` itself is installed, and must be _readable and
|
||||
executable_ by your users. If you want your users to be able to install additional
|
||||
packages, the installation location must also be _writable_ by your users.
|
||||
|
||||
If you are using a standard Python installation on your system, use the following command:
|
||||
|
||||
```bash
|
||||
sudo python3 -m pip install numpy
|
||||
```
|
||||
|
||||
to install the numpy package in the default Python 3 environment on your system
|
||||
(typically `/usr/local`).
|
||||
|
||||
You may also use conda to install packages. If you do, you should make sure
|
||||
that the conda environment has appropriate permissions for users to be able to
|
||||
run Python code in the env. The env must be _readable and executable_ by all
|
||||
users. Additionally it must be _writeable_ if you want users to install
|
||||
additional packages.
|
||||
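A rough sketch of preparing such a shared environment, assuming conda is installed under `/opt/conda` (paths and package names are illustrative):

```bash
# create a shared env and install a package into it
sudo /opt/conda/bin/conda create -y -p /opt/conda/envs/shared numpy
# let all users read and traverse (but not modify) the env
sudo chmod -R a+rX /opt/conda/envs/shared
```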
|
||||
## Configuring Jupyter and IPython
|
||||
|
||||
[Jupyter](https://jupyter-notebook.readthedocs.io/en/stable/configuring/config_overview.html)
|
||||
and [IPython](https://ipython.readthedocs.io/en/stable/development/config.html)
|
||||
have their own configuration systems.
|
||||
|
||||
As a JupyterHub administrator, you will typically want to install and configure environments for all JupyterHub users. For example, let's say you wish for each student in a class to have the same user environment configuration.
|
||||
|
||||
Jupyter and IPython support **"system-wide"** locations for configuration, which is the logical place to put global configuration that you want to affect all users. It's generally more efficient to configure user environments "system-wide", and it's a good practice to avoid creating files in the users' home directories.
|
||||
The typical locations for these config files are:
|
||||
|
||||
- **system-wide** in `/etc/{jupyter|ipython}`
|
||||
- **env-wide** (environment wide) in `{sys.prefix}/etc/{jupyter|ipython}`.
|
||||
|
||||
### Jupyter environment configuration priority
|
||||
|
||||
When Jupyter runs in an environment (conda or virtualenv), it prefers to load configuration from the environment over each user's own configuration (e.g. in `~/.jupyter`).
|
||||
This may cause issues if you use a _shared_ conda environment or virtualenv for users, because e.g. jupyterlab may try to write information like workspaces or settings to the environment instead of the user's own directory.
|
||||
This could fail with something like `Permission denied: $PREFIX/etc/jupyter/lab`.
|
||||
|
||||
To avoid this issue, set `JUPYTER_PREFER_ENV_PATH=0` in the user environment:
|
||||
|
||||
```python
|
||||
c.Spawner.environment.update(
|
||||
{
|
||||
"JUPYTER_PREFER_ENV_PATH": "0",
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
which tells Jupyter to prefer _user_ configuration paths (e.g. in `~/.jupyter`) to configuration set in the environment.
|
||||
|
||||
### Example: Enable an extension system-wide
|
||||
|
||||
For example, to enable the `cython` IPython extension for all of your users, create the file `/etc/ipython/ipython_config.py`:
|
||||
|
||||
```python
|
||||
c.InteractiveShellApp.extensions.append("cython")
|
||||
```
|
||||
|
||||
### Example: Enable a Jupyter notebook configuration setting for all users
|
||||
|
||||
:::{note}
|
||||
These examples configure the Jupyter ServerApp, which is used by JupyterLab, the default in JupyterHub 2.0.
|
||||
|
||||
If you are using the classic Jupyter Notebook server,
|
||||
the same things should work,
|
||||
with the following substitutions:
|
||||
|
||||
- Search for `jupyter_server_config`, and replace with `jupyter_notebook_config`
|
||||
- Search for `ServerApp`, and replace with `NotebookApp`
|
||||
|
||||
:::
|
||||
|
||||
To enable Jupyter notebook's internal idle-shutdown behavior (requires notebook ≥ 5.4), set the following in the `/etc/jupyter/jupyter_server_config.py` file:
|
||||
|
||||
```python
|
||||
# shutdown the server after no activity for an hour
|
||||
c.ServerApp.shutdown_no_activity_timeout = 60 * 60
|
||||
# shutdown kernels after no activity for 20 minutes
|
||||
c.MappingKernelManager.cull_idle_timeout = 20 * 60
|
||||
# check for idle kernels every two minutes
|
||||
c.MappingKernelManager.cull_interval = 2 * 60
|
||||
```
|
||||
|
||||
## Installing kernelspecs
|
||||
|
||||
You may have multiple Jupyter kernels installed and want to make sure that they are available to all of your users. This means installing kernelspecs either system-wide (e.g. in /usr/local/) or in the `sys.prefix` of JupyterHub
|
||||
itself.
|
||||
|
||||
Jupyter kernelspec installation is system-wide by default, but some kernels
|
||||
may default to installing kernelspecs in your home directory. These will need
|
||||
to be moved system-wide to ensure that they are accessible.
|
||||
|
||||
To see where your kernelspecs are, you can use the following command:
|
||||
|
||||
```bash
|
||||
jupyter kernelspec list
|
||||
```
|
||||
|
||||
### Example: Installing kernels system-wide
|
||||
|
||||
Let's assume that I have a Python 2 and a Python 3 environment that I want to make sure are available; I can install their specs **system-wide** (in `/usr/local`) using the following commands:
|
||||
|
||||
```bash
|
||||
/path/to/python3 -m ipykernel install --prefix=/usr/local
|
||||
/path/to/python2 -m ipykernel install --prefix=/usr/local
|
||||
```
|
||||
|
||||
## Multi-user hosts vs. Containers
|
||||
|
||||
There are two broad categories of user environments that depend on what
|
||||
Spawner you choose:
|
||||
|
||||
- Multi-user hosts (shared system)
|
||||
- Container-based
|
||||
|
||||
How you configure user environments for each category can differ a bit
|
||||
depending on what Spawner you are using.
|
||||
|
||||
The first category is a **shared system (multi-user host)** where
|
||||
each user has a JupyterHub account, a home directory as well as being
|
||||
a real system user. In this example, shared configuration and installation
|
||||
must be in a 'system-wide' location, such as `/etc/`, or `/usr/local`
|
||||
or a custom prefix such as `/opt/conda`.
|
||||
|
||||
When JupyterHub uses **container-based** Spawners (e.g. KubeSpawner or
|
||||
DockerSpawner), the 'system-wide' environment is really the container image used for users.
|
||||
|
||||
In both cases, you want to _avoid putting configuration in user home
|
||||
directories_ because users can change those configuration settings. Also, home directories typically persist once they are created, thereby making it difficult for admins to update later.
|
||||
|
||||
## Named servers
|
||||
|
||||
By default, in a JupyterHub deployment, each user has one server only.
|
||||
|
||||
JupyterHub can, however, have multiple servers per user.
|
||||
This is mostly useful in deployments where users can configure the environment in which their server will start (e.g. resource requests on an HPC cluster), so that a given user can have multiple configurations running at the same time, without having to stop and restart their own server.
|
||||
|
||||
To allow named servers, include this code snippet in your config file:
|
||||
|
||||
```python
|
||||
c.JupyterHub.allow_named_servers = True
|
||||
```
|
||||
|
||||
Named servers were implemented in the REST API in JupyterHub 0.8,
|
||||
and JupyterHub 1.0 introduces UI for managing named servers via the user home page:
|
||||
|
||||

|
||||
|
||||
as well as the admin page:
|
||||
|
||||

|
||||
|
||||
Named servers can be accessed, created, started, stopped, and deleted
|
||||
from these pages. Activity tracking is now per server as well.
|
||||
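Named servers can also be managed through the REST API; for example, starting a named server for a user might be sketched as follows (the names, token, and Hub address are placeholders):

```bash
curl -X POST \
  -H "Authorization: token $TOKEN" \
  "http://127.0.0.1:8081/hub/api/users/<username>/servers/<servername>"
```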
|
||||
To limit the number of **named servers** per user to a constant value, include this code snippet in your config file:
|
||||
|
||||
```python
|
||||
c.JupyterHub.named_server_limit_per_user = 5
|
||||
```
|
||||
|
||||
Alternatively, to use a callable/awaitable based on the handler object, include this code snippet in your config file:
|
||||
|
||||
```python
|
||||
def named_server_limit_per_user_fn(handler):
|
||||
user = handler.current_user
|
||||
if user and user.admin:
|
||||
return 0
|
||||
return 5
|
||||
|
||||
c.JupyterHub.named_server_limit_per_user = named_server_limit_per_user_fn
|
||||
```
|
||||
|
||||
This can be useful for quota service implementations. The example above limits the number of named servers for non-admin users only.
|
||||
|
||||
If `named_server_limit_per_user` is set to `0`, no limit is enforced.
|
||||
|
||||
When using named servers, Spawners may need additional configuration to take the `servername` into account. Whilst `KubeSpawner` takes the `servername` into account by default in [`pod_name_template`](https://jupyterhub-kubespawner.readthedocs.io/en/latest/spawner.html#kubespawner.KubeSpawner.pod_name_template), other Spawners may not. Check the documentation for the specific Spawner to see how singleuser servers are named, for example in `DockerSpawner` this involves modifying the [`name_template`](https://jupyterhub-dockerspawner.readthedocs.io/en/latest/api/index.html) setting to include `servername`, eg. `"{prefix}-{username}-{servername}"`.
|
||||
|
||||
(classic-notebook-ui)=
|
||||
|
||||
## Switching back to the classic notebook
|
||||
|
||||
By default, the single-user server launches JupyterLab,
|
||||
which is based on [Jupyter Server][].
|
||||
|
||||
This is the default server when running JupyterHub ≥ 2.0.
|
||||
To switch to using the legacy Jupyter Notebook server (notebook < 7.0), you can set the `JUPYTERHUB_SINGLEUSER_APP` environment variable
|
||||
(in the single-user environment) to:
|
||||
|
||||
```bash
|
||||
export JUPYTERHUB_SINGLEUSER_APP='notebook.notebookapp.NotebookApp'
|
||||
```
|
||||
|
||||
:::{note}
|
||||
|
||||
```
|
||||
JUPYTERHUB_SINGLEUSER_APP='notebook.notebookapp.NotebookApp'
|
||||
```
|
||||
|
||||
is only valid for notebook < 7. notebook v7 is based on jupyter-server,
|
||||
and the default jupyter-server application must be used.
|
||||
Selecting the new notebook UI is no longer a matter of selecting the server app to launch,
|
||||
but only the default URL for users to visit.
|
||||
To use notebook v7 with JupyterHub, leave the default singleuser app config alone (or specify `JUPYTERHUB_SINGLEUSER_APP=jupyter-server`) and set the default _URL_ for user servers:
|
||||
|
||||
```python
|
||||
c.Spawner.default_url = '/tree/'
|
||||
```
|
||||
|
||||
:::
|
||||
|
||||
[jupyter server]: https://jupyter-server.readthedocs.io
|
||||
[jupyter notebook]: https://jupyter-notebook.readthedocs.io
|
||||
|
||||
:::{versionchanged} 2.0
|
||||
|
||||
JupyterLab is now the default single-user UI, if available,
|
||||
which is based on the [Jupyter Server][],
|
||||
no longer the legacy [Jupyter Notebook][] server.
|
||||
JupyterHub prior to 2.0 launched the legacy notebook server (`jupyter notebook`),
|
||||
and the Jupyter server could be selected by specifying the following:
|
||||
|
||||
```python
|
||||
# jupyterhub_config.py
|
||||
c.Spawner.cmd = ["jupyter-labhub"]
|
||||
```
|
||||
|
||||
Alternatively, for an otherwise customized Jupyter Server app,
|
||||
set the environment variable using the following command:
|
||||
|
||||
```bash
|
||||
export JUPYTERHUB_SINGLEUSER_APP='jupyter_server.serverapp.ServerApp'
|
||||
```
|
||||
|
||||
:::
|
@@ -1,34 +0,0 @@
|
||||
# How-to

The _How-to_ guides provide practical step-by-step details to help you achieve a particular goal. They are useful when you are trying to get something done but require you to understand and adapt the steps to your specific use case.

Use the following guides when:

```{toctree}
:maxdepth: 1

api-only
proxy
rest
separate-proxy
templates
upgrading
log-messages

```

(config-examples)=

## Configuration

The following guides provide examples, including configuration files and tips, for the
following:

```{toctree}
:maxdepth: 1

configuration/config-user-env
configuration/config-ghoauth
configuration/config-proxy
configuration/config-sudo
```
|
@@ -1,72 +0,0 @@
|
||||
# Interpreting common log messages
|
||||
|
||||
When debugging errors and outages, looking at the logs emitted by
|
||||
JupyterHub is very helpful. This document intends to describe some common
|
||||
log messages, what they mean, what the most common causes are, and some possible ways to fix them.
|
||||
|
||||
## Failing suspected API request to not-running server
|
||||
|
||||
### Example
|
||||
|
||||
Your logs might be littered with lines that look scary
|
||||
|
||||
```
|
||||
[W 2022-03-10 17:25:19.774 JupyterHub base:1349] Failing suspected API request to not-running server: /hub/user/<user-name>/api/metrics/v1
|
||||
```
|
||||
|
||||
### Cause
|
||||
|
||||
This likely means that the user's server has stopped running but they
|
||||
still have a browser tab open. For example, you might have 3 tabs open and you shut
|
||||
the server down via one.
|
||||
Another possible reason could be that you closed your laptop and the server was culled for inactivity, then reopened the laptop!
|
||||
However, the client-side code (JupyterLab, Classic Notebook, etc.) doesn't realize the server has been shut down and continues to make some API requests.
|
||||
|
||||
JupyterHub's architecture means that the proxy routes all requests that
|
||||
don't go to a running user server to the hub process itself. The hub
|
||||
process then explicitly returns a failure response, so the client knows
|
||||
that the server is not running anymore. This is used by JupyterLab to
|
||||
inform the user that the server is not running anymore, and provide an option
|
||||
to restart it.
|
||||
|
||||
Most commonly, you'll see this in reference to the `/api/metrics/v1`
|
||||
URL, used by [jupyter-resource-usage](https://github.com/jupyter-server/jupyter-resource-usage).
|
||||
|
||||
### Actions you can take
|
||||
|
||||
This log message is benign, and there is usually no action for you to take.
|
||||
|
||||
## JupyterHub Singleuser Version mismatch
|
||||
|
||||
### Example
|
||||
|
||||
```
|
||||
jupyterhub version 1.5.0 != jupyterhub-singleuser version 1.3.0. This could cause failure to authenticate and result in redirect loops!
|
||||
```
|
||||
|
||||
### Cause
|
||||
|
||||
JupyterHub requires the `jupyterhub` python package installed inside the image or
|
||||
environment that the user server starts in. This message indicates that the version of
|
||||
the `jupyterhub` package installed inside the user image or environment is not
|
||||
the same as the JupyterHub server's version itself. This is not necessarily always a
|
||||
problem - some version drift is mostly acceptable, and the only two known cases of
|
||||
breakage are across the 0.7 and 2.0 version releases. In those cases, issues pop
|
||||
up immediately after upgrading your version of JupyterHub, so **always check the JupyterHub
|
||||
changelog before upgrading!**. The primary problems this _could_ cause are:
|
||||
|
||||
1. Infinite redirect loops after the user server starts
|
||||
2. Missing expected environment variables in the user server once it starts
|
||||
3. Failure for the started user server to authenticate with the JupyterHub server -
|
||||
note that this is _not_ the same as _user authentication_ failing!
|
||||
|
||||
However, for the most part, unless you are seeing these specific issues, the log
|
||||
message should be counted as a warning to get the `jupyterhub` package versions
|
||||
aligned, rather than as an indicator of an existing problem.
|
||||
|
||||
### Actions you can take
|
||||
|
||||
Upgrade the version of the `jupyterhub` package in your user environment or image
|
||||
so that it matches the version of JupyterHub running your JupyterHub server! If you
|
||||
are using the [zero-to-jupyterhub](https://z2jh.jupyter.org) helm chart, you can find the appropriate
|
||||
version of the `jupyterhub` package to install in your user image [here](https://jupyterhub.github.io/helm-chart/).
|
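If you are unsure which version a given user environment has, a minimal sketch for checking it (run inside that image or environment) is:

```python
# run this inside the user image/environment to see which jupyterhub package it provides
import jupyterhub

print(jupyterhub.__version__)
```

Compare the printed version with the version reported by your Hub in the log message above.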
@@ -1,339 +0,0 @@
|
||||
(using-jupyterhub-rest-api)=
|
||||
|
||||
# Using JupyterHub's REST API
|
||||
|
||||
This section will give you information on:
|
||||
|
||||
- What you can do with the API
|
||||
- How to create an API token
|
||||
- Assigning permissions to a token
|
||||
- Updating to admin services
|
||||
- Making an API request programmatically using the requests library
|
||||
- Paginating API requests
|
||||
- Enabling users to spawn multiple named-servers via the API
|
||||
- Learn more about JupyterHub's API
|
||||
|
||||
Before we discuss JupyterHub's REST API, you can learn more about [REST APIs here](https://en.wikipedia.org/wiki/Representational_state_transfer). A REST
|
||||
API provides a standard way for users to get and send information to the
|
||||
Hub.
|
||||
|
||||
## What you can do with the API
|
||||
|
||||
Using the [JupyterHub REST API](jupyterhub-rest-api), you can perform actions on the Hub,
|
||||
such as:
|
||||
|
||||
- Checking which users are active
|
||||
- Adding or removing users
|
||||
- Stopping or starting single user notebook servers
|
||||
- Authenticating services
|
||||
- Communicating with an individual Jupyter server's REST API
|
||||
|
||||
## Create an API token
|
||||
|
||||
To send requests using the JupyterHub API, you must pass an API token with
|
||||
the request.
|
||||
|
||||
While JupyterHub is running, any JupyterHub user can request a token via the `token` page.
|
||||
This is accessible via a `token` link in the top nav bar from the JupyterHub home page,
|
||||
or at the URL `/hub/token`.
|
||||
|
||||
:::{figure-md}
|
||||
|
||||

|
||||
|
||||
JupyterHub's API token page
|
||||
:::
|
||||
|
||||
:::{figure-md}
|
||||

|
||||
|
||||
JupyterHub's token page after successfully requesting a token.
|
||||
|
||||
:::
|
||||
|
||||
### Register API tokens via configuration
|
||||
|
||||
Sometimes, you'll want to pre-generate a token for access to JupyterHub,
|
||||
typically for use by external services,
|
||||
so that both JupyterHub and the service have access to the same value.
|
||||
|
||||
First, you need to generate a good random secret.
|
||||
A good way of generating an API token is by running:
|
||||
|
||||
```bash
|
||||
openssl rand -hex 32
|
||||
```
|
||||
|
||||
This `openssl` command generates a random token that can be added to the JupyterHub configuration in `jupyterhub_config.py`.
|
||||
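If you prefer to stay in Python, a minimal equivalent sketch using only the standard library is:

```python
import secrets

# 32 random bytes, hex-encoded -- the same strength as `openssl rand -hex 32`
print(secrets.token_hex(32))
```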
|
||||
For external services, this would be registered with JupyterHub via configuration:
|
||||
|
||||
```python
|
||||
c.JupyterHub.services = [
|
||||
{
|
||||
"name": "my-service",
|
||||
"api_token": the_secret_value,
|
||||
},
|
||||
]
|
||||
```
|
||||
|
||||
At this point, requests authenticated with the token will be associated with the service `my-service`.
|
||||
|
||||
```{note}
|
||||
You can also load additional tokens for users via the `JupyterHub.api_tokens` configuration.
|
||||
|
||||
However, this option has been deprecated since the introduction of services.
|
||||
```
|
||||
|
||||
## Assigning permissions to a token
|
||||
|
||||
Prior to JupyterHub 2.0, there were two levels of permissions:
|
||||
|
||||
1. user, and
|
||||
2. admin
|
||||
|
||||
where a token would always have full permissions to do whatever its owner could do.
|
||||
|
||||
In JupyterHub 2.0,
|
||||
specific permissions are now defined as '**scopes**',
|
||||
and can be assigned both at the user/service level,
|
||||
and at the individual token level.
|
||||
|
||||
This allows e.g. a user with full admin permissions to request a token with limited permissions.
|
||||
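As a rough sketch of what this looks like in practice (assuming JupyterHub 3.0 or newer, where a `scopes` list can be passed when creating a token via the REST API; on 2.x, roles rather than scopes are assigned to tokens), an admin could request a narrowly scoped token like this:

```python
import requests

api_url = "http://127.0.0.1:8081/hub/api"
admin_token = "<an-existing-token>"  # placeholder: a token with permission to create tokens

r = requests.post(
    api_url + "/users/<user>/tokens",
    headers={"Authorization": f"token {admin_token}"},
    json={
        "note": "read-only automation token",
        "scopes": ["read:users"],  # an illustrative choice of limited permissions
    },
)
r.raise_for_status()
new_token = r.json()["token"]  # the newly created, narrowly scoped token
```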
|
||||
## Updating to admin services
|
||||
|
||||
```{note}
|
||||
The `api_tokens` configuration has been softly deprecated since the introduction of services.
|
||||
We have no plans to remove it,
|
||||
but deployments are encouraged to use service configuration instead.
|
||||
```
|
||||
|
||||
If you have been using `api_tokens` to create an admin user
|
||||
and the token for that user to perform some automations, then
|
||||
the services mechanism may be a better fit. Consider the following configuration:
|
||||
|
||||
```python
|
||||
c.JupyterHub.admin_users = {"service-admin"}
|
||||
c.JupyterHub.api_tokens = {
|
||||
"secret-token": "service-admin",
|
||||
}
|
||||
```
|
||||
|
||||
This can be updated to create a service, with the following configuration:
|
||||
|
||||
```python
|
||||
c.JupyterHub.services = [
|
||||
{
|
||||
# give the token a name
|
||||
"name": "service-admin",
|
||||
"api_token": "secret-token",
|
||||
# "admin": True, # if using JupyterHub 1.x
|
||||
},
|
||||
]
|
||||
|
||||
# roles were introduced in JupyterHub 2.0
|
||||
# prior to 2.0, only "admin": True or False was available
|
||||
|
||||
c.JupyterHub.load_roles = [
|
||||
{
|
||||
"name": "service-role",
|
||||
"scopes": [
|
||||
# specify the permissions the token should have
|
||||
"admin:users",
|
||||
],
|
||||
"services": [
|
||||
# assign the service the above permissions
|
||||
"service-admin",
|
||||
],
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
The token will have the permissions listed in the role
|
||||
(see [scopes][] for a list of available permissions),
|
||||
but there will no longer be a user account created to house it.
|
||||
The main noticeable difference between a user and a service is that there will be no notebook server associated with the account
|
||||
and the service will not show up in the various user list pages and APIs.
|
||||
|
||||
## Make an API request
|
||||
|
||||
To authenticate your requests, pass the API token in the request's
|
||||
Authorization header.
|
||||
|
||||
### Use requests
|
||||
|
||||
Using the popular Python [requests](https://docs.python-requests.org)
|
||||
library, an API GET request is made, and the request sends an API token for
|
||||
authorization. The response contains information about the users. Here's example code to make an API request for the users of a JupyterHub deployment:
|
||||
|
||||
```python
|
||||
import requests
|
||||
|
||||
api_url = 'http://127.0.0.1:8081/hub/api'
token = '<your-api-token>'  # for example, a token generated on the /hub/token page
|
||||
|
||||
r = requests.get(api_url + '/users',
|
||||
headers={
|
||||
'Authorization': f'token {token}',
|
||||
}
|
||||
)
|
||||
|
||||
r.raise_for_status()
|
||||
users = r.json()
|
||||
```
|
||||
|
||||
The following example provides a slightly more complicated request, yet the
|
||||
process is very similar:
|
||||
|
||||
```python
|
||||
import requests
|
||||
|
||||
api_url = 'http://127.0.0.1:8081/hub/api'
|
||||
|
||||
data = {'name': 'mygroup', 'users': ['user1', 'user2']}
|
||||
|
||||
r = requests.post(api_url + '/groups/formgrade-data301/users',
|
||||
headers={
|
||||
'Authorization': f'token {token}',
|
||||
},
|
||||
json=data,
|
||||
)
|
||||
r.raise_for_status()
|
||||
r.json()
|
||||
```
|
||||
|
||||
The same API token can also authorize access to the [Jupyter Notebook REST API][]
|
||||
provided by notebook servers managed by JupyterHub if it has the necessary `access:servers` scope.
|
||||
|
||||
(api-pagination)=
|
||||
|
||||
## Paginating API requests
|
||||
|
||||
```{versionadded} 2.0
|
||||
|
||||
```
|
||||
|
||||
Pagination is available through the `offset` and `limit` query parameters on
|
||||
list endpoints, which can be used to return ideally sized windows of results.
|
||||
Here's example code demonstrating pagination on the `GET /users`
|
||||
endpoint to fetch the first 20 records.
|
||||
|
||||
```python
|
||||
import os
|
||||
import requests
|
||||
|
||||
api_url = 'http://127.0.0.1:8081/hub/api'
token = os.environ["JUPYTERHUB_API_TOKEN"]  # assumes you have exported your API token under this name
|
||||
|
||||
r = requests.get(
|
||||
api_url + '/users?offset=0&limit=20',
|
||||
headers={
|
||||
"Accept": "application/jupyterhub-pagination+json",
|
||||
"Authorization": f"token {token}",
|
||||
},
|
||||
)
|
||||
r.raise_for_status()
|
||||
r.json()
|
||||
```
|
||||
|
||||
For backward-compatibility, the default structure of list responses is unchanged.
|
||||
However, this lacks pagination information (e.g. is there a next page),
|
||||
so if you have enough users that they won't fit in the first response,
|
||||
it is a good idea to opt in to the new paginated list format.
|
||||
There is a new schema for list responses that includes pagination information.
|
||||
You can request this by including the header:
|
||||
|
||||
```
|
||||
Accept: application/jupyterhub-pagination+json
|
||||
```
|
||||
|
||||
with your request, in which case a response will look like:
|
||||
|
||||
```python
|
||||
{
|
||||
"items": [
|
||||
{
|
||||
"name": "username",
|
||||
"kind": "user",
|
||||
...
|
||||
},
|
||||
],
|
||||
"_pagination": {
|
||||
"offset": 0,
|
||||
"limit": 20,
|
||||
"total": 50,
|
||||
"next": {
|
||||
"offset": 20,
|
||||
"limit": 20,
|
||||
"url": "http://127.0.0.1:8081/hub/api/users?limit=20&offset=20"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
where the list results (same as pre-2.0) will be in `items`,
|
||||
and pagination info will be in `_pagination`.
|
||||
The `next` field will include the `offset`, `limit`, and `url` for requesting the next page.
|
||||
`next` will be `null` if there is no next page.
|
||||
|
||||
Pagination is governed by two configuration options:
|
||||
|
||||
- `JupyterHub.api_page_default_limit` - the page size, if `limit` is unspecified in the request
|
||||
and the new pagination API is requested
|
||||
(default: 50)
|
||||
- `JupyterHub.api_page_max_limit` - the maximum page size a request can ask for (default: 200)
|
||||
|
||||
Pagination is enabled on the `GET /users`, `GET /groups`, and `GET /proxy` REST endpoints.
|
||||
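Putting these pieces together, a minimal sketch that follows the `_pagination.next` links to collect all users across pages (the token is a placeholder) could look like:

```python
import requests

api_url = "http://127.0.0.1:8081/hub/api"
token = "<your-api-token>"  # placeholder

users = []
url = api_url + "/users?limit=20"
while url:
    r = requests.get(
        url,
        headers={
            "Accept": "application/jupyterhub-pagination+json",
            "Authorization": f"token {token}",
        },
    )
    r.raise_for_status()
    page = r.json()
    users.extend(page["items"])
    next_info = page["_pagination"]["next"]
    url = next_info["url"] if next_info else None  # `next` is null on the last page
```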
|
||||
## Enabling users to spawn multiple named-servers via the API
|
||||
|
||||
Support for multiple servers per user was introduced in JupyterHub [version 0.8](changelog).
|
||||
Prior to that, each user could only launch a single default server via the API
|
||||
like this:
|
||||
|
||||
```bash
|
||||
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/server"
|
||||
```
|
||||
|
||||
With the named-server functionality, it's now possible to launch more than one
|
||||
specifically named server for a given user. This could be used, for instance,
|
||||
to launch each server based on a different image.
|
||||
|
||||
First you must enable named-servers by including the following setting in the `jupyterhub_config.py` file.
|
||||
|
||||
`c.JupyterHub.allow_named_servers = True`
|
||||
|
||||
If you are using the [zero-to-jupyterhub-k8s](https://github.com/jupyterhub/zero-to-jupyterhub-k8s) set-up to run JupyterHub,
|
||||
then instead of editing the `jupyterhub_config.py` file directly, you could pass
|
||||
the following as part of the `config.yaml` file, as per the [tutorial](https://z2jh.jupyter.org/en/latest/):
|
||||
|
||||
```yaml
|
||||
hub:
|
||||
extraConfig: |
|
||||
c.JupyterHub.allow_named_servers = True
|
||||
```
|
||||
|
||||
With that setting in place, a new named-server is activated like this:
|
||||
|
||||
```bash
|
||||
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverA>"
|
||||
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverB>"
|
||||
```
|
||||
|
||||
The same servers can be stopped by substituting `DELETE` for `POST` above.
|
||||
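The same operations can be scripted with the Python `requests` library used earlier; a rough sketch (the token, user, and server names are placeholders):

```python
import requests

api_url = "http://127.0.0.1:8081/hub/api"
token = "<your-api-token>"  # placeholder
user = "<user>"             # placeholder

headers = {"Authorization": f"token {token}"}

# start two named servers for the user
for server_name in ("serverA", "serverB"):
    r = requests.post(f"{api_url}/users/{user}/servers/{server_name}", headers=headers)
    r.raise_for_status()

# stop one of them again
r = requests.delete(f"{api_url}/users/{user}/servers/serverA", headers=headers)
r.raise_for_status()
```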
|
||||
### Some caveats for using named-servers
|
||||
|
||||
For named-servers via the API to work, the spawner used to spawn these servers
|
||||
will need to be able to handle the case of multiple servers per user and ensure
|
||||
uniqueness of names, particularly if servers are spawned via docker containers
|
||||
or kubernetes pods.
|
||||
|
||||
## Learn more about the API
|
||||
|
||||
You can see the full [JupyterHub REST API](jupyterhub-rest-api) for more details.
|
||||
|
||||
[openapi initiative]: https://www.openapis.org/
|
||||
[jupyterhub rest api]: ./rest-api
|
||||
[scopes]: ../rbac/scopes.md
|
||||
[jupyter notebook rest api]: https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyter/notebook/HEAD/notebook/services/api/api.yaml
|
@@ -1,78 +0,0 @@
|
||||
(separate-proxy)=
|
||||
|
||||
# Running proxy separately from the hub
|
||||
|
||||
## Background
|
||||
|
||||
The component that users connect to directly is the proxy, which by default is
|
||||
`configurable-http-proxy`. The proxy routes users either to the
|
||||
hub (for login and managing servers), or to their own single-user
|
||||
servers. Thus, as long as the proxy stays running, access to existing
|
||||
servers continues, even if the hub itself restarts or goes down.
|
||||
|
||||
When you first configure the hub, you may not even realize this
|
||||
because the proxy is automatically managed by the hub. This is great
|
||||
for getting started and even for most use cases, although every time you restart the
|
||||
hub, all user connections are also restarted. However, it is also simple to
|
||||
run the proxy as a service separate from the hub, so that you are free
|
||||
to reconfigure the hub while only interrupting users who are waiting for their notebook server to start.
|
||||
|
||||
|
||||
The default JupyterHub proxy is
|
||||
[configurable-http-proxy](https://github.com/jupyterhub/configurable-http-proxy). If you are using a different proxy, such
|
||||
as [Traefik](https://github.com/traefik/traefik), these instructions are probably not relevant to you.
|
||||
|
||||
## Configuration options
|
||||
|
||||
`c.JupyterHub.cleanup_servers = False` should be set, which tells the
|
||||
hub to not stop servers when the hub restarts (this is useful even if
|
||||
you don't run the proxy separately).
|
||||
|
||||
`c.ConfigurableHTTPProxy.should_start = False` should be set, which
|
||||
tells the hub that the proxy should not be started (because you start
|
||||
it yourself).
|
||||
|
||||
`c.ConfigurableHTTPProxy.auth_token = "CONFIGPROXY_AUTH_TOKEN"` should be set to a
|
||||
token for authenticating communication with the proxy.
|
||||
|
||||
`c.ConfigurableHTTPProxy.api_url = 'http://localhost:8001'` should be
|
||||
set to the URL which the hub uses to connect _to the proxy's API_.
|
||||
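Put together, a minimal `jupyterhub_config.py` sketch for this setup might look like the following (the auth token is read from the environment here; generate it yourself, e.g. with `openssl rand -hex 32`):

```python
import os

# do not stop user servers when the hub itself shuts down
c.JupyterHub.cleanup_servers = False

# the proxy is started and managed outside of the hub
c.ConfigurableHTTPProxy.should_start = False

# shared secret for hub <-> proxy API communication
c.ConfigurableHTTPProxy.auth_token = os.environ["CONFIGPROXY_AUTH_TOKEN"]

# where the hub reaches the proxy's API
c.ConfigurableHTTPProxy.api_url = "http://localhost:8001"
```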
|
||||
## Proxy configuration
|
||||
|
||||
You need to configure a service to start the proxy. An example
|
||||
command line for this is:
|
||||
|
||||
```bash
|
||||
$ configurable-http-proxy --ip=127.0.0.1 --port=8000 --api-ip=127.0.0.1 --api-port=8001 --default-target=http://localhost:8081 --error-target=http://localhost:8081/hub/error
|
||||
```
|
||||
|
||||
(Details of how to do this are out of the scope of this tutorial; for example, it might be a
|
||||
systemd service configured within another docker container). The proxy has no
|
||||
configuration files; all configuration is done via the command line and
|
||||
environment variables.
|
||||
|
||||
`--api-ip` and `--api-port` (which tell the proxy where its API should listen) should match the hub's `ConfigurableHTTPProxy.api_url`.
|
||||
|
||||
`--ip`, `--port`, and other options configure the _user_ connections to the proxy.
|
||||
|
||||
`--default-target` and `--error-target` should point to the hub, and are used when users first navigate to the proxy.
|
||||
|
||||
You must define the environment variable `CONFIGPROXY_AUTH_TOKEN` to
|
||||
match the token given to `c.ConfigurableHTTPProxy.auth_token`.
|
||||
|
||||
You should check the [configurable-http-proxy
|
||||
options](https://github.com/jupyterhub/configurable-http-proxy) to see
|
||||
what other options are needed, for example, SSL options. Note that
|
||||
when the hub starts the proxy, these options are set via the hub's configuration, so when you
|
||||
run the proxy yourself you need to pass them to the proxy on its command line.
|
||||
|
||||
## Docker image
|
||||
|
||||
You can use the [jupyterhub configurable-http-proxy docker
|
||||
image](https://hub.docker.com/r/jupyterhub/configurable-http-proxy/)
|
||||
to run the proxy.
|
||||
|
||||
## See also
|
||||
|
||||
- [jupyterhub configurable-http-proxy](https://github.com/jupyterhub/configurable-http-proxy)
|
@@ -1,89 +0,0 @@
|
||||
# Working with templates and UI
|
||||
|
||||
The pages of the JupyterHub application are generated from
|
||||
[Jinja](https://jinja.palletsprojects.com) templates. These allow the header, for
|
||||
example, to be defined once and incorporated into all pages. By providing
|
||||
your own template(s), you can have complete control over JupyterHub's
|
||||
appearance.
|
||||
|
||||
## Custom Templates
|
||||
|
||||
JupyterHub will look for custom templates in all paths included in the
|
||||
`JupyterHub.template_paths` configuration option, falling back on these
|
||||
[default templates](https://github.com/jupyterhub/jupyterhub/tree/HEAD/share/jupyterhub/templates)
|
||||
if no custom template(s) with specified name(s) are found. This fallback
|
||||
behavior is new in version 0.9; previous versions searched only the paths
|
||||
explicitly included in `template_paths`. You may override as many
|
||||
or as few templates as you desire.
|
||||
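For example, a minimal configuration sketch (the directory path is an arbitrary placeholder) would be:

```python
# jupyterhub_config.py
# directories are searched in order; the default templates are used for anything not found here
c.JupyterHub.template_paths = ["/srv/jupyterhub/custom-templates"]
```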
|
||||
## Extending Templates
|
||||
|
||||
Jinja provides a mechanism to [extend templates](https://jinja.palletsprojects.com/en/3.0.x/templates/#template-inheritance).
|
||||
|
||||
A base template can define `block`(s) within itself that child templates can fill in or
|
||||
supply content to. The
|
||||
[JupyterHub default templates](https://github.com/jupyterhub/jupyterhub/tree/HEAD/share/jupyterhub/templates)
|
||||
make extensive use of blocks, thus allowing you to customize parts of the
|
||||
interface easily.
|
||||
|
||||
In general, a child template can extend a base template, `page.html`, by beginning with:
|
||||
|
||||
```html
|
||||
{% extends "page.html" %}
|
||||
```
|
||||
|
||||
This works, unless you are trying to extend the default template for the same
|
||||
file name. Starting in version 0.9, you may refer to the base file with a
|
||||
`templates/` prefix. Thus, if you are writing a custom `page.html`, start the
|
||||
file with this block:
|
||||
|
||||
```html
|
||||
{% extends "templates/page.html" %}
|
||||
```
|
||||
|
||||
By defining `block`s with the same name as in the base template, child templates
|
||||
can replace those sections with custom content. The content from the base
|
||||
template can be included in the child template with the `{{ super() }}` directive.
|
||||
|
||||
### Example
|
||||
|
||||
To add an additional message to the spawn-pending page, below the existing
|
||||
text about the server starting up, place the content below in a file named
|
||||
`spawn_pending.html`. This directory must also be included in the
|
||||
`JupyterHub.template_paths` configuration option.
|
||||
|
||||
```html
|
||||
{% extends "templates/spawn_pending.html" %} {% block message %} {{ super() }}
|
||||
<p>Patience is a virtue.</p>
|
||||
{% endblock %}
|
||||
```
|
||||
|
||||
## Page Announcements
|
||||
|
||||
To add announcements to be displayed on a page, you have two options:
|
||||
|
||||
- [Extend the page templates as described above](#extending-templates)
|
||||
- Use configuration variables
|
||||
|
||||
### Announcement Configuration Variables
|
||||
|
||||
If you set the configuration variable `JupyterHub.template_vars = {'announcement': 'some_text'}`, the given `some_text` will be placed on
|
||||
the top of all pages. The variables
|
||||
`announcement_login`, `announcement_spawn`, `announcement_home`, and
|
||||
`announcement_logout` are more specific and only show on their
|
||||
respective pages (overriding the global `announcement` variable).
|
||||
Note that changing these variables requires a restart, unlike direct
|
||||
template extension.
|
||||
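A small configuration sketch (the messages are placeholders) illustrating both the global and the page-specific variables:

```python
# jupyterhub_config.py
c.JupyterHub.template_vars = {
    # shown at the top of every page
    "announcement": "Maintenance window this Saturday, 02:00-04:00 UTC",
    # page-specific variables override the global one on their page
    "announcement_login": "Sign in with your organization account",
}
```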
|
||||
Alternatively, you can get the same effect by extending templates, which allows you
|
||||
to update the messages without restarting. Set
|
||||
`c.JupyterHub.template_paths` as mentioned above, and then create a
|
||||
template (for example, `login.html`) with:
|
||||
|
||||
```html
|
||||
{% extends "templates/login.html" %} {% set announcement = 'some message' %}
|
||||
```
|
||||
|
||||
Extending `page.html` puts the message on all pages, but note that
|
||||
extending `page.html` takes precedence over an extension of a specific
|
||||
page (unlike the variable-based approach above).
|
@@ -1,141 +0,0 @@
|
||||
(upgrading-jupyterhub)=
|
||||
|
||||
# Upgrading JupyterHub
|
||||
|
||||
JupyterHub offers easy upgrade pathways between minor versions. This
|
||||
document describes how to do these upgrades.
|
||||
|
||||
If you are using {ref}`a JupyterHub distribution <index/distributions>`, you
|
||||
should consult the distribution's documentation on how to upgrade. This documentation is
|
||||
for those who have set up their JupyterHub without using a distribution.
|
||||
|
||||
This documentation is lengthy because it is quite detailed. Most likely, upgrading
|
||||
JupyterHub is painless and quick, and causes minimal user interruption.
|
||||
|
||||
The steps are discussed in detail, so if you get stuck at any step you can always refer to this guide.
|
||||
|
||||
## Read the Changelog
|
||||
|
||||
The [changelog](changelog) contains information on what has
|
||||
changed with the new JupyterHub release and any deprecation warnings.
|
||||
Read these notes to familiarize yourself with the coming changes. There
|
||||
might be new releases of the authenticators & spawners you use, so
|
||||
read the changelogs for those too!
|
||||
|
||||
## Notify your users
|
||||
|
||||
If you use the default configuration where `configurable-http-proxy`
|
||||
is managed by JupyterHub, your users will see service disruption during
|
||||
the upgrade process. You should notify them, and pick a time to do the
|
||||
upgrade where they will be least disrupted.
|
||||
|
||||
If you use a different proxy or run `configurable-http-proxy`
|
||||
independent of JupyterHub, your users will be able to continue using notebook
|
||||
servers they had already launched, but will not be able to launch new servers or sign in.
|
||||
|
||||
## Backup database & config
|
||||
|
||||
Before doing an upgrade, it is critical to back up:
|
||||
|
||||
1. Your JupyterHub database (SQLite by default, or MySQL / Postgres if you used those).
|
||||
If you use SQLite (the default), you should back up the `jupyterhub.sqlite` file.
|
||||
2. Your `jupyterhub_config.py` file.
|
||||
3. Your users' home directories. This is unlikely to be affected directly by
|
||||
a JupyterHub upgrade, but we recommend a backup since user data is critical.
|
||||
|
||||
## Shut down JupyterHub
|
||||
|
||||
Shut down the JupyterHub process. This would vary depending on how you
|
||||
have set up JupyterHub to run. It is most likely using a process
|
||||
supervisor of some sort (`systemd` or `supervisord` or even `docker`).
|
||||
Use the supervisor-specific command to stop the JupyterHub process.
|
||||
|
||||
## Upgrade JupyterHub packages
|
||||
|
||||
There are two environments where the `jupyterhub` package is installed:
|
||||
|
||||
1. The _hub environment_: where the JupyterHub server process
|
||||
runs. This is started with the `jupyterhub` command, and is what
|
||||
people generally think of as JupyterHub.
|
||||
2. The _notebook user environments_: where the user notebook
|
||||
servers are launched from, and is probably custom to your own
|
||||
installation. This could be just one environment (different from the
|
||||
hub environment) that is shared by all users, one environment
|
||||
per user, or the same environment as the hub environment. The hub
|
||||
launches the `jupyterhub-singleuser` command in this environment,
|
||||
which in turn starts the notebook server.
|
||||
|
||||
You need to make sure the version of the `jupyterhub` package matches
|
||||
in both these environments. If you installed `jupyterhub` with pip,
|
||||
you can upgrade it with:
|
||||
|
||||
```bash
|
||||
python3 -m pip install --upgrade jupyterhub==<version>
|
||||
```
|
||||
|
||||
Where `<version>` is the version of JupyterHub you are upgrading to.
|
||||
|
||||
If you used `conda` to install `jupyterhub`, you should upgrade it
|
||||
with:
|
||||
|
||||
```bash
|
||||
conda install -c conda-forge jupyterhub==<version>
|
||||
```
|
||||
|
||||
You should also check for new releases of the authenticator & spawner you
|
||||
are using. You might wish to upgrade those packages, too, along with JupyterHub
|
||||
or upgrade them separately.
|
||||
|
||||
## Upgrade JupyterHub database
|
||||
|
||||
Once new packages are installed, you need to upgrade the JupyterHub
|
||||
database. From the hub environment, in the same directory as your
|
||||
`jupyterhub_config.py` file, you should run:
|
||||
|
||||
```bash
|
||||
jupyterhub upgrade-db
|
||||
```
|
||||
|
||||
This should find the location of your database, and run the necessary upgrades
|
||||
for it.
|
||||
|
||||
### SQLite database disadvantages
|
||||
|
||||
SQLite has some disadvantages when it comes to upgrading JupyterHub. These
|
||||
are:
|
||||
|
||||
- `upgrade-db` may not work, and you may need to delete your database
|
||||
and start with a fresh one.
|
||||
- `downgrade-db` **will not** work if you want to roll back to an
|
||||
earlier version, so back up the `jupyterhub.sqlite` file before
|
||||
upgrading.
|
||||
|
||||
### What happens if I delete my database?
|
||||
|
||||
Losing the Hub database is often not a big deal. Information that
|
||||
resides only in the Hub database includes:
|
||||
|
||||
- active login tokens (user cookies, service tokens)
|
||||
- users added via JupyterHub UI, instead of config files
|
||||
- info about running servers
|
||||
|
||||
If the following conditions are true, you should be fine clearing the
|
||||
Hub database and starting over:
|
||||
|
||||
- users are specified in the config file, or log in using an external
|
||||
authentication provider (Google, GitHub, LDAP, etc)
|
||||
- user servers are stopped during the upgrade
|
||||
- you don't mind causing users to log in again after the upgrade
|
||||
|
||||
## Start JupyterHub
|
||||
|
||||
Once the database upgrade is completed, start the `jupyterhub`
|
||||
process again.
|
||||
|
||||
1. Log in and start the server to make sure things work as
|
||||
expected.
|
||||
2. Check the logs for any errors or deprecation warnings. You
|
||||
might have to update your `jupyterhub_config.py` file to
|
||||
deal with any deprecated options.
|
||||
|
||||
Congratulations, your JupyterHub has been upgraded!
|