Compare commits


10 Commits
2.0.2 ... 0.9.4

Author  SHA1  Message  Date

Min RK  b1111363fd  release 0.9.4  2018-09-24 13:02:36 +02:00

Min RK  6c99b807c2  update changelog for 0.9.4  2018-09-24 13:00:27 +02:00

Min RK  8d650f594e  changelog for 0.9.4  2018-09-24 12:58:16 +02:00

Min RK  04a0a3a2e5  fix oauth client cleanup  2018-09-24 12:58:10 +02:00
  - delete oauth clients for servers when they shut down
  - avoid deleting oauth clients for servers still running across an 0.8 -> 0.9 upgrade, when the oauth client ids changed from `user-NAME` to `jupyterhub-user-NAME`

Min RK  9cebfd6367  Fix content-type on API endpoints  2018-09-24 12:57:26 +02:00
  Also adds content-type header checks in tests to catch regressions.

Min RK  587cd70221  omit pdf builds on rtd due to bug in sphinx  2018-09-24 12:57:01 +02:00

Min RK  e94f5e043a  release 0.9.3  2018-09-12 09:46:02 +02:00

Min RK  5456fb6356  remove spurious print from keepalive code  2018-09-12 09:46:02 +02:00
  Also sends a keepalive every 8 seconds, to protect against possibly aggressive proxies dropping connections after 10 seconds of inactivity.

Min RK  fb75b9a392  write needs no await  2018-09-11 16:42:29 +02:00

Min RK  90d341e6f7  changelog for 0.9.3  2018-09-11 16:42:21 +02:00
  Mainly small fixes, but the token page could be completely broken.
  This release will include the spawner.handler addition, but not the oauthlib change currently in master.
340 changed files with 9726 additions and 40844 deletions

.circleci/config.yml

@@ -0,0 +1,21 @@
# Python CircleCI 2.0 configuration file
# Updating CircleCI configuration from v1 to v2
# Check https://circleci.com/docs/2.0/language-python/ for more details
#
version: 2
jobs:
build:
machine: true
steps:
- checkout
- run:
name: build images
command: |
docker build -t jupyterhub/jupyterhub .
docker build -t jupyterhub/jupyterhub-onbuild onbuild
docker build -t jupyterhub/jupyterhub:alpine -f dockerfiles/Dockerfile.alpine .
docker build -t jupyterhub/singleuser singleuser
- run:
name: smoke test jupyterhub
command: |
docker run --rm -it jupyterhub/jupyterhub jupyterhub --help


@@ -1,5 +1,4 @@
[run]
parallel = True
branch = False
omit =
jupyterhub/tests/*


@@ -10,12 +10,13 @@
# E402: module level import not at top of file
# I100: Import statements are in the wrong order
# I101: Imported names are in the wrong order. Should be
ignore = E, C, W, F401, F403, F811, F841, E402, I100, I101, D400
builtins = c, get_config
ignore = E, C, W, F401, F403, F811, F841, E402, I100, I101
exclude =
.cache,
.github,
docs,
examples,
jupyterhub/alembic*,
onbuild,
scripts,

.github/ISSUE_TEMPLATE/bug_report.md

@@ -0,0 +1,37 @@
---
name: Bug report
about: Create a report to help us improve
---
Hi! Thanks for using JupyterHub.
If you are reporting an issue with JupyterHub, please use the [GitHub issue](https://github.com/jupyterhub/jupyterhub/issues) search feature to check if your issue has been asked already. If it has, please add your comments to the existing issue.
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
- Running `jupyter troubleshoot` from the command line, if possible, and posting
its output would also be helpful.
- Running in `--debug` mode can also be helpful for troubleshooting.


@@ -0,0 +1,7 @@
---
name: Installation and configuration issues
about: Installation and configuration assistance
---
If you are having issues with installation or configuration, you may ask for help on the JupyterHub gitter channel or file an issue here.


@@ -1,231 +0,0 @@
# This is a GitHub workflow defining a set of jobs with a set of steps.
# ref: https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions
#
# Test build release artifacts (PyPI package, Docker images) and publish them on
# pushed git tags.
#
name: Release
on:
pull_request:
paths-ignore:
- "docs/**"
- "**.md"
- "**.rst"
- ".github/workflows/*"
- "!.github/workflows/release.yml"
push:
paths-ignore:
- "docs/**"
- "**.md"
- "**.rst"
- ".github/workflows/*"
- "!.github/workflows/release.yml"
branches-ignore:
- "dependabot/**"
- "pre-commit-ci-update-config"
tags:
- "**"
workflow_dispatch:
jobs:
build-release:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.8
- uses: actions/setup-node@v1
with:
node-version: "14"
- name: install build package
run: |
pip install --upgrade pip
pip install build
pip freeze
- name: build release
run: |
python -m build --sdist --wheel .
ls -l dist
- name: verify wheel
run: |
cd dist
pip install ./*.whl
# verify data-files are installed where they are found
cat <<EOF | python
import os
from jupyterhub._data import DATA_FILES_PATH
print(f"DATA_FILES_PATH={DATA_FILES_PATH}")
assert os.path.exists(DATA_FILES_PATH), DATA_FILES_PATH
for subpath in (
"templates/page.html",
"static/css/style.min.css",
"static/components/jquery/dist/jquery.js",
):
path = os.path.join(DATA_FILES_PATH, subpath)
assert os.path.exists(path), path
print("OK")
EOF
# ref: https://github.com/actions/upload-artifact#readme
- uses: actions/upload-artifact@v2
with:
name: jupyterhub-${{ github.sha }}
path: "dist/*"
if-no-files-found: error
- name: Publish to PyPI
if: startsWith(github.ref, 'refs/tags/')
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
run: |
pip install twine
twine upload --skip-existing dist/*
publish-docker:
runs-on: ubuntu-20.04
services:
# So that we can test this in PRs/branches
local-registry:
image: registry:2
ports:
- 5000:5000
steps:
- name: Should we push this image to a public registry?
run: |
if [ "${{ startsWith(github.ref, 'refs/tags/') || (github.ref == 'refs/heads/main') }}" = "true" ]; then
# Empty => Docker Hub
echo "REGISTRY=" >> $GITHUB_ENV
else
echo "REGISTRY=localhost:5000/" >> $GITHUB_ENV
fi
- uses: actions/checkout@v2
# Setup docker to build for multiple platforms, see:
# https://github.com/docker/build-push-action/tree/v2.4.0#usage
# https://github.com/docker/build-push-action/blob/v2.4.0/docs/advanced/multi-platform.md
- name: Set up QEMU (for docker buildx)
uses: docker/setup-qemu-action@25f0500ff22e406f7191a2a8ba8cda16901ca018 # associated tag: v1.0.2
- name: Set up Docker Buildx (for multi-arch builds)
uses: docker/setup-buildx-action@2a4b53665e15ce7d7049afb11ff1f70ff1610609 # associated tag: v1.1.2
with:
# Allows pushing to registry on localhost:5000
driver-opts: network=host
- name: Setup push rights to Docker Hub
# This was setup by...
# 1. Creating a Docker Hub service account "jupyterhubbot"
# 2. Creating a access token for the service account specific to this
# repository: https://hub.docker.com/settings/security
# 3. Making the account part of the "bots" team, and granting that team
# permissions to push to the relevant images:
# https://hub.docker.com/orgs/jupyterhub/teams/bots/permissions
# 4. Registering the username and token as a secret for this repo:
# https://github.com/jupyterhub/jupyterhub/settings/secrets/actions
if: env.REGISTRY != 'localhost:5000/'
run: |
docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" -p "${{ secrets.DOCKERHUB_TOKEN }}"
# image: jupyterhub/jupyterhub
#
# https://github.com/jupyterhub/action-major-minor-tag-calculator
# If this is a tagged build this will return additional parent tags.
# E.g. 1.2.3 is expanded to Docker tags
# [{prefix}:1.2.3, {prefix}:1.2, {prefix}:1, {prefix}:latest] unless
# this is a backported tag in which case the newer tags aren't updated.
# For branches this will return the branch name.
# If GITHUB_TOKEN isn't available (e.g. in PRs) returns no tags [].
- name: Get list of jupyterhub tags
id: jupyterhubtags
uses: jupyterhub/action-major-minor-tag-calculator@v2
with:
githubToken: ${{ secrets.GITHUB_TOKEN }}
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub:"
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub:noref"
branchRegex: ^\w[\w-.]*$
- name: Build and push jupyterhub
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
# tags parameter must be a string input so convert `gettags` JSON
# array into a comma separated list of tags
tags: ${{ join(fromJson(steps.jupyterhubtags.outputs.tags)) }}
# image: jupyterhub/jupyterhub-onbuild
#
- name: Get list of jupyterhub-onbuild tags
id: onbuildtags
uses: jupyterhub/action-major-minor-tag-calculator@v2
with:
githubToken: ${{ secrets.GITHUB_TOKEN }}
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub-onbuild:"
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub-onbuild:noref"
branchRegex: ^\w[\w-.]*$
- name: Build and push jupyterhub-onbuild
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f
with:
build-args: |
BASE_IMAGE=${{ fromJson(steps.jupyterhubtags.outputs.tags)[0] }}
context: onbuild
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ join(fromJson(steps.onbuildtags.outputs.tags)) }}
# image: jupyterhub/jupyterhub-demo
#
- name: Get list of jupyterhub-demo tags
id: demotags
uses: jupyterhub/action-major-minor-tag-calculator@v2
with:
githubToken: ${{ secrets.GITHUB_TOKEN }}
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub-demo:"
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub-demo:noref"
branchRegex: ^\w[\w-.]*$
- name: Build and push jupyterhub-demo
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f
with:
build-args: |
BASE_IMAGE=${{ fromJson(steps.onbuildtags.outputs.tags)[0] }}
context: demo-image
# linux/arm64 currently fails:
# ERROR: Could not build wheels for argon2-cffi which use PEP 517 and cannot be installed directly
# ERROR: executor failed running [/bin/sh -c python3 -m pip install notebook]: exit code: 1
platforms: linux/amd64
push: true
tags: ${{ join(fromJson(steps.demotags.outputs.tags)) }}
# image: jupyterhub/singleuser
#
- name: Get list of jupyterhub/singleuser tags
id: singleusertags
uses: jupyterhub/action-major-minor-tag-calculator@v2
with:
githubToken: ${{ secrets.GITHUB_TOKEN }}
prefix: "${{ env.REGISTRY }}jupyterhub/singleuser:"
defaultTag: "${{ env.REGISTRY }}jupyterhub/singleuser:noref"
branchRegex: ^\w[\w-.]*$
- name: Build and push jupyterhub/singleuser
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f
with:
build-args: |
JUPYTERHUB_VERSION=${{ github.ref_type == 'tag' && github.ref_name || format('git:{0}', github.sha) }}
context: singleuser
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ join(fromJson(steps.singleusertags.outputs.tags)) }}


@@ -1,31 +0,0 @@
# https://github.com/dessant/support-requests
name: "Support Requests"
on:
issues:
types: [labeled, unlabeled, reopened]
permissions:
issues: write
jobs:
action:
runs-on: ubuntu-latest
steps:
- uses: dessant/support-requests@v2
with:
github-token: ${{ github.token }}
support-label: "support"
issue-comment: |
Hi there @{issue-author} :wave:!
I closed this issue because it was labelled as a support question.
Please help us organize discussion by posting this on the http://discourse.jupyter.org/ forum.
Our goal is to sustain a positive experience for both users and developers. We use GitHub issues for specific discussions related to changing a repository's content, and let the forum be where we can more generally help and inspire each other.
Thanks you for being an active member of our community! :heart:
close-issue: true
lock-issue: false
issue-lock-reason: "off-topic"


@@ -1,64 +0,0 @@
# This is a GitHub workflow defining a set of jobs with a set of steps.
# ref: https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions
#
# This workflow validates the REST API definition and runs the pytest tests in
# the docs/ folder. This workflow does not build the documentation. That is
# instead tested via ReadTheDocs (https://readthedocs.org/projects/jupyterhub/).
#
name: Test docs
# The tests defined in docs/ are currently influenced by changes to _version.py
# and scopes.py.
on:
pull_request:
paths:
- "docs/**"
- "jupyterhub/_version.py"
- "jupyterhub/scopes.py"
- ".github/workflows/*"
- "!.github/workflows/test-docs.yml"
push:
paths:
- "docs/**"
- "jupyterhub/_version.py"
- "jupyterhub/scopes.py"
- ".github/workflows/*"
- "!.github/workflows/test-docs.yml"
branches-ignore:
- "dependabot/**"
- "pre-commit-ci-update-config"
tags:
- "**"
workflow_dispatch:
env:
# UTF-8 content may be interpreted as ascii and causes errors without this.
LANG: C.UTF-8
PYTEST_ADDOPTS: "--verbose --color=yes"
jobs:
validate-rest-api-definition:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- name: Validate REST API definition
uses: char0n/swagger-editor-validate@182d1a5d26ff5c2f4f452c43bd55e2c7d8064003
with:
definition-file: docs/source/_static/rest-api.yml
test-docs:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: "3.9"
- name: Install requirements
run: |
pip install -r docs/requirements.txt pytest -e .
- name: pytest docs/
run: |
pytest docs/


@@ -1,256 +0,0 @@
# This is a GitHub workflow defining a set of jobs with a set of steps.
# ref: https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions
#
name: Test
on:
pull_request:
paths-ignore:
- "docs/**"
- "**.md"
- "**.rst"
- ".github/workflows/*"
- "!.github/workflows/test.yml"
push:
paths-ignore:
- "docs/**"
- "**.md"
- "**.rst"
- ".github/workflows/*"
- "!.github/workflows/test.yml"
branches-ignore:
- "dependabot/**"
- "pre-commit-ci-update-config"
tags:
- "**"
workflow_dispatch:
env:
# UTF-8 content may be interpreted as ascii and causes errors without this.
LANG: C.UTF-8
PYTEST_ADDOPTS: "--verbose --color=yes"
jobs:
jstest:
# Run javascript tests
runs-on: ubuntu-20.04
timeout-minutes: 5
steps:
- uses: actions/checkout@v2
# NOTE: actions/setup-node@v1 make use of a cache within the GitHub base
# environment and setup in a fraction of a second.
- name: Install Node
uses: actions/setup-node@v1
with:
node-version: "14"
- name: Install Node dependencies
run: |
npm install -g yarn
- name: Run yarn
run: |
cd jsx
yarn
- name: yarn test
run: |
cd jsx
yarn test
# Run "pytest jupyterhub/tests" in various configurations
pytest:
runs-on: ubuntu-20.04
timeout-minutes: 15
strategy:
# Keep running even if one variation of the job fail
fail-fast: false
matrix:
# We run this job multiple times with different parameterization
# specified below, these parameters have no meaning on their own and
# gain meaning on how job steps use them.
#
# subdomain:
# Tests everything when JupyterHub is configured to add routes for
# users with dedicated subdomains like user1.jupyter.example.com
# rather than jupyter.example.com/user/user1.
#
# db: [mysql/postgres]
# Tests everything when JupyterHub works against a dedicated mysql or
# postgresql server.
#
# nbclassic:
# Tests everything when the user instances are started with
# notebook instead of jupyter_server.
#
# ssl:
# Tests everything using internal SSL connections instead of
# unencrypted HTTP
#
# main_dependencies:
# Tests everything when the we use the latest available dependencies
# from: traitlets.
#
# NOTE: Since only the value of these parameters are presented in the
# GitHub UI when the workflow run, we avoid using true/false as
# values by instead duplicating the name to signal true.
include:
- python: "3.6"
oldest_dependencies: oldest_dependencies
nbclassic: nbclassic
- python: "3.6"
subdomain: subdomain
- python: "3.7"
db: mysql
- python: "3.7"
ssl: ssl
- python: "3.8"
db: postgres
- python: "3.8"
nbclassic: nbclassic
- python: "3.9"
main_dependencies: main_dependencies
steps:
# NOTE: In GitHub workflows, environment variables are set by writing
# assignment statements to a file. They will be set in the following
# steps as if would used `export MY_ENV=my-value`.
- name: Configure environment variables
run: |
if [ "${{ matrix.subdomain }}" != "" ]; then
echo "JUPYTERHUB_TEST_SUBDOMAIN_HOST=http://localhost.jovyan.org:8000" >> $GITHUB_ENV
fi
if [ "${{ matrix.db }}" == "mysql" ]; then
echo "MYSQL_HOST=127.0.0.1" >> $GITHUB_ENV
echo "JUPYTERHUB_TEST_DB_URL=mysql+mysqlconnector://root@127.0.0.1:3306/jupyterhub" >> $GITHUB_ENV
fi
if [ "${{ matrix.ssl }}" == "ssl" ]; then
echo "SSL_ENABLED=1" >> $GITHUB_ENV
fi
if [ "${{ matrix.db }}" == "postgres" ]; then
echo "PGHOST=127.0.0.1" >> $GITHUB_ENV
echo "PGUSER=test_user" >> $GITHUB_ENV
echo "PGPASSWORD=hub[test/:?" >> $GITHUB_ENV
echo "JUPYTERHUB_TEST_DB_URL=postgresql://test_user:hub%5Btest%2F%3A%3F@127.0.0.1:5432/jupyterhub" >> $GITHUB_ENV
fi
if [ "${{ matrix.jupyter_server }}" != "" ]; then
echo "JUPYTERHUB_SINGLEUSER_APP=jupyterhub.tests.mockserverapp.MockServerApp" >> $GITHUB_ENV
fi
- uses: actions/checkout@v2
# NOTE: actions/setup-node@v1 make use of a cache within the GitHub base
# environment and setup in a fraction of a second.
- name: Install Node v14
uses: actions/setup-node@v1
with:
node-version: "14"
- name: Install Node dependencies
run: |
npm install
npm install -g configurable-http-proxy
npm list
# NOTE: actions/setup-python@v2 make use of a cache within the GitHub base
# environment and setup in a fraction of a second.
- name: Install Python ${{ matrix.python }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python }}
- name: Install Python dependencies
run: |
pip install --upgrade pip
pip install --upgrade . -r dev-requirements.txt
if [ "${{ matrix.oldest_dependencies }}" != "" ]; then
# take any dependencies in requirements.txt such as tornado>=5.0
# and transform them to tornado==5.0 so we can run tests with
# the earliest-supported versions
cat requirements.txt | grep '>=' | sed -e 's@>=@==@g' > oldest-requirements.txt
pip install -r oldest-requirements.txt
fi
if [ "${{ matrix.main_dependencies }}" != "" ]; then
pip install git+https://github.com/ipython/traitlets#egg=traitlets --force
fi
if [ "${{ matrix.nbclassic }}" != "" ]; then
pip uninstall jupyter_server --yes
pip install notebook
fi
if [ "${{ matrix.db }}" == "mysql" ]; then
pip install mysql-connector-python
fi
if [ "${{ matrix.db }}" == "postgres" ]; then
pip install psycopg2-binary
fi
pip freeze
# NOTE: If you need to debug this DB setup step, consider the following.
#
# 1. mysql/postgressql are database servers we start as docker containers,
# and we use clients named mysql/psql.
#
# 2. When we start a database server we need to pass environment variables
# explicitly as part of the `docker run` command. These environment
# variables are named differently from the similarly named environment
# variables used by the clients.
#
# - mysql server ref: https://hub.docker.com/_/mysql/
# - mysql client ref: https://dev.mysql.com/doc/refman/5.7/en/environment-variables.html
# - postgres server ref: https://hub.docker.com/_/postgres/
# - psql client ref: https://www.postgresql.org/docs/9.5/libpq-envars.html
#
# 3. When we connect, they should use 127.0.0.1 rather than the
# default way of connecting which leads to errors like below both for
# mysql and postgresql unless we set MYSQL_HOST/PGHOST to 127.0.0.1.
#
# - ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
#
- name: Start a database server (${{ matrix.db }})
if: ${{ matrix.db }}
run: |
if [ "${{ matrix.db }}" == "mysql" ]; then
if [[ -z "$(which mysql)" ]]; then
sudo apt-get update
sudo apt-get install -y mysql-client
fi
DB=mysql bash ci/docker-db.sh
DB=mysql bash ci/init-db.sh
fi
if [ "${{ matrix.db }}" == "postgres" ]; then
if [[ -z "$(which psql)" ]]; then
sudo apt-get update
sudo apt-get install -y postgresql-client
fi
DB=postgres bash ci/docker-db.sh
DB=postgres bash ci/init-db.sh
fi
- name: Run pytest
run: |
pytest --maxfail=2 --cov=jupyterhub jupyterhub/tests
- name: Submit codecov report
run: |
codecov
docker-build:
runs-on: ubuntu-20.04
timeout-minutes: 20
steps:
- uses: actions/checkout@v2
- name: build images
run: |
docker build -t jupyterhub/jupyterhub .
docker build -t jupyterhub/jupyterhub-onbuild onbuild
docker build -t jupyterhub/jupyterhub:alpine -f dockerfiles/Dockerfile.alpine .
docker build -t jupyterhub/singleuser singleuser
- name: smoke test jupyterhub
run: |
docker run --rm -t jupyterhub/jupyterhub jupyterhub --help
- name: verify static files
run: |
docker run --rm -t -v $PWD/dockerfiles:/io jupyterhub/jupyterhub python3 /io/test.py

.gitignore

@@ -8,7 +8,6 @@ dist
docs/_build
docs/build
docs/source/_static/rest-api
docs/source/rbac/scope-table.md
.ipynb_checkpoints
# ignore config file at the top-level of the repo
# but not sub-dirs
@@ -22,13 +21,6 @@ share/jupyterhub/static/css/style.min.css.map
*.egg-info
MANIFEST
.coverage
.coverage.*
htmlcov
.idea/
.vscode/
.pytest_cache
pip-wheel-metadata
docs/source/reference/metrics.rst
oldest-requirements.txt
jupyterhub-proxy.pid
examples/server-api/service-token


@@ -1,30 +0,0 @@
repos:
- repo: https://github.com/asottile/pyupgrade
rev: v2.31.0
hooks:
- id: pyupgrade
args:
- --py36-plus
- repo: https://github.com/asottile/reorder_python_imports
rev: v2.6.0
hooks:
- id: reorder-python-imports
- repo: https://github.com/psf/black
rev: 21.12b0
hooks:
- id: black
- repo: https://github.com/pre-commit/mirrors-prettier
rev: v2.5.1
hooks:
- id: prettier
- repo: https://github.com/PyCQA/flake8
rev: "4.0.1"
hooks:
- id: flake8
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.1.0
hooks:
- id: end-of-file-fixer
- id: check-case-conflict
- id: check-executables-have-shebangs
- id: requirements-txt-fixer


@@ -1,2 +0,0 @@
share/jupyterhub/templates/
share/jupyterhub/static/js/admin-react.js

.travis.yml

@@ -0,0 +1,68 @@
language: python
sudo: false
cache:
- pip
python:
- 3.6
- 3.5
- nightly
env:
global:
- ASYNC_TEST_TIMEOUT=15
- MYSQL_HOST=127.0.0.1
- MYSQL_TCP_PORT=13306
services:
- postgres
- docker
# installing dependencies
before_install:
- nvm install 6; nvm use 6
- npm install
- npm install -g configurable-http-proxy
- |
# setup database
if [[ $JUPYTERHUB_TEST_DB_URL == mysql* ]]; then
unset MYSQL_UNIX_PORT
DB=mysql bash ci/docker-db.sh
DB=mysql bash ci/init-db.sh
pip install 'mysql-connector<2.2'
elif [[ $JUPYTERHUB_TEST_DB_URL == postgresql* ]]; then
DB=postgres bash ci/init-db.sh
pip install psycopg2-binary
fi
install:
- pip install --upgrade pip
- pip install --pre -r dev-requirements.txt .
- pip freeze
# running tests
script:
- |
# run tests
set -e
pytest -v --maxfail=2 --cov=jupyterhub jupyterhub/tests
- |
# build docs
pushd docs
pip install -r requirements.txt
make html
popd
after_success:
- codecov
matrix:
fast_finish: true
include:
- python: 3.6
env: JUPYTERHUB_TEST_SUBDOMAIN_HOST=http://localhost.jovyan.org:8000
- python: 3.6
env:
- JUPYTERHUB_TEST_DB_URL=mysql+mysqlconnector://root@127.0.0.1:$MYSQL_TCP_PORT/jupyterhub
- python: 3.6
env:
- JUPYTERHUB_TEST_DB_URL=postgresql://postgres@127.0.0.1/jupyterhub
- python: 3.7
dist: xenial
allow_failures:
- python: nightly

CHECKLIST-Release.md

@@ -0,0 +1,26 @@
# Release checklist
- [ ] Upgrade Docs prior to Release
- [ ] Change log
- [ ] New features documented
- [ ] Update the contributor list - thank you page
- [ ] Upgrade and test Reference Deployments
- [ ] Release software
- [ ] Make sure 0 issues in milestone
- [ ] Follow release process steps
- [ ] Send builds to PyPI (Warehouse) and Conda Forge
- [ ] Blog post and/or release note
- [ ] Notify users of release
- [ ] Email Jupyter and Jupyter In Education mailing lists
- [ ] Tweet (optional)
- [ ] Increment the version number for the next release
- [ ] Update roadmap


@@ -1 +1 @@
Please refer to [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md).
Please refer to [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md).


@@ -1,139 +1,98 @@
# Contributing to JupyterHub
# Contributing
Welcome! As a [Jupyter](https://jupyter.org) project,
you can follow the [Jupyter contributor guide](https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html).
Welcome! As a [Jupyter](https://jupyter.org) project, we follow the [Jupyter contributor guide](https://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html).
Make sure to also follow [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md)
for a friendly and welcoming collaborative environment.
## Setting up a development environment
## Set up your development system
<!--
https://jupyterhub.readthedocs.io/en/stable/contributing/setup.html
contains a lot of the same information. Should we merge the docs and
just have this page link to that one?
-->
JupyterHub requires Python >= 3.5 and nodejs.
As a Python project, a development install of JupyterHub follows standard practices for the basics (steps 1-2).
1. clone the repo
```bash
git clone https://github.com/jupyterhub/jupyterhub
```
2. do a development install with pip
```bash
cd jupyterhub
python3 -m pip install --editable .
```
3. install the development requirements,
which include things like testing tools
```bash
python3 -m pip install -r dev-requirements.txt
```
4. install configurable-http-proxy with npm:
```bash
npm install -g configurable-http-proxy
```
5. set up pre-commit hooks for automatic code formatting, etc.
```bash
pre-commit install
```
You can also invoke the pre-commit hook manually at any time with
```bash
pre-commit run
```
## Contributing
JupyterHub has adopted automatic code formatting so you shouldn't
need to worry too much about your code style.
As long as your code is valid,
the pre-commit hook should take care of how it should look.
You can invoke the pre-commit hook by hand at any time with:
For a development install, clone the [repository](https://github.com/jupyterhub/jupyterhub)
and then install from source:
```bash
pre-commit run
git clone https://github.com/jupyterhub/jupyterhub
cd jupyterhub
npm install -g configurable-http-proxy
pip3 install -r dev-requirements.txt -e .
```
which should run any autoformatting on your code
and tell you about any errors it couldn't fix automatically.
You may also install [black integration](https://github.com/psf/black#editor-integration)
into your text editor to format code automatically.
### Troubleshooting a development install
If you have already committed files before setting up the pre-commit
hook with `pre-commit install`, you can fix everything up using
`pre-commit run --all-files`. You need to make the fixing commit
yourself after that.
If the `pip3 install` command fails and complains about `lessc` being
unavailable, you may need to explicitly install some additional JavaScript
dependencies:
## Testing
npm install
It's a good idea to write tests to exercise any new features,
or that trigger any bugs that you have fixed to catch regressions.
This will fetch client-side JavaScript dependencies necessary to compile CSS.
You can run the tests with:
You may also need to manually update JavaScript and CSS after some development
updates, with:
```bash
pytest -v
python3 setup.py js # fetch updated client-side js
python3 setup.py css # recompile CSS from LESS sources
```
in the repo directory. If you want to just run certain tests,
check out the [pytest docs](https://pytest.readthedocs.io/en/latest/usage.html)
for how pytest can be called.
For instance, to test only spawner-related things in the REST API:
## Running the test suite
We use [pytest](http://doc.pytest.org/en/latest/) for running tests.
1. Set up a development install as described above.
2. Set environment variable for `ASYNC_TEST_TIMEOUT` to 15 seconds:
```bash
pytest -v -k spawn jupyterhub/tests/test_api.py
export ASYNC_TEST_TIMEOUT=15
```
The tests live in `jupyterhub/tests` and are organized roughly into:
3. Run tests.
1. `test_api.py` tests the REST API
2. `test_pages.py` tests loading the HTML pages
To run all the tests:
and other collections of tests for different components.
When writing a new test, there should usually be a test of
similar functionality already written and related tests should
be added nearby.
```bash
pytest -v jupyterhub/tests
```
The fixtures live in `jupyterhub/tests/conftest.py`. There are
fixtures that can be used for JupyterHub components, such as:
To run an individual test file (i.e. `test_api.py`):
- `app`: an instance of JupyterHub with mocked parts
- `auth_state_enabled`: enables persisting auth_state (like authentication tokens)
- `db`: a sqlite in-memory DB session
- `io_loop`: a Tornado event loop
- `event_loop`: a new asyncio event loop
- `user`: creates a new temporary user
- `admin_user`: creates a new temporary admin user
- single user servers
- `cleanup_after`: allows cleanup of single user servers between tests
- mocked service
- `MockServiceSpawner`: a spawner that mocks services for testing with a short poll interval
- `mockservice`: mocked service with no external service url
- `mockservice_url`: mocked service with a url to test external services
```bash
pytest -v jupyterhub/tests/test_api.py
```
And fixtures to add functionality or spawning behavior:
### Troubleshooting tests
- `admin_access`: grants admin access
- `no_patience`: sets slow-spawning timeouts to zero
- `slow_spawn`: enables the SlowSpawner (a spawner that takes a few seconds to start)
- `never_spawn`: enables the NeverSpawner (a spawner that will never start)
- `bad_spawn`: enables the BadSpawner (a spawner that fails immediately)
- `slow_bad_spawn`: enables the SlowBadSpawner (a spawner that fails after a short delay)
If you see test failures because of timeouts, you may wish to increase the
`ASYNC_TEST_TIMEOUT` used by the
[pytest-tornado-plugin](https://github.com/eugeniy/pytest-tornado/blob/c79f68de2222eb7cf84edcfe28650ebf309a4d0c/README.rst#markers)
from the default of 5 seconds:
To read more about fixtures check out the
[pytest docs](https://docs.pytest.org/en/latest/fixture.html)
for how to use the existing fixtures, and how to create new ones.
```bash
export ASYNC_TEST_TIMEOUT=15
```
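As an aside (not part of the file above), here is a minimal sketch of how the fixtures listed above might be used in a test; the test name and the `orm.User.find` lookup are illustrative assumptions rather than anything taken from this comparison:

```python
# hypothetical test using the `app` and `user` fixtures described above
from jupyterhub import orm


def test_temporary_user_is_registered(app, user):
    # `app` is a JupyterHub instance with mocked parts,
    # `user` is a temporary user created by its fixture
    assert orm.User.find(app.db, name=user.name) is not None
```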
When in doubt, feel free to [ask](https://gitter.im/jupyterhub/jupyterhub).
If you see many test errors and failures, double check that you have installed
`configurable-http-proxy`.
## Building the Docs locally
1. Install the development system as described above.
2. Install the dependencies for documentation:
```bash
python3 -m pip install -r docs/requirements.txt
```
3. Build the docs:
```bash
cd docs
make clean
make html
```
4. View the docs:
```bash
open build/html/index.html
```


@@ -24,7 +24,7 @@ software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
@@ -46,8 +46,8 @@ Jupyter uses a shared copyright model. Each contributor maintains copyright
over their contributions to Jupyter. But, it is important to note that these
contributions are typically only changes to the repositories. Thus, the Jupyter
source code, in its entirety is not the copyright of any single person or
institution. Instead, it is the collective copyright of the entire Jupyter
Development Team. If individual contributors want to maintain a record of what
institution. Instead, it is the collective copyright of the entire Jupyter
Development Team. If individual contributors want to maintain a record of what
changes/contributions they have specific copyright on, they should indicate
their copyright in the commit message of the change, when they commit the
change to one of the Jupyter repositories.


@@ -21,81 +21,40 @@
# your jupyterhub_config.py will be added automatically
# from your docker directory.
ARG BASE_IMAGE=ubuntu:focal-20200729
FROM $BASE_IMAGE AS builder
USER root
FROM ubuntu:18.04
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
# install nodejs, utf8 locale, set CDN because default httpredir is unreliable
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update \
&& apt-get install -yq --no-install-recommends \
build-essential \
ca-certificates \
locales \
python3-dev \
python3-pip \
python3-pycurl \
nodejs \
npm \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get -y update && \
apt-get -y upgrade && \
apt-get -y install wget git bzip2 && \
apt-get purge && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
ENV LANG C.UTF-8
RUN python3 -m pip install --upgrade setuptools pip wheel
# install Python + NodeJS with conda
RUN wget -q https://repo.continuum.io/miniconda/Miniconda3-4.5.1-Linux-x86_64.sh -O /tmp/miniconda.sh && \
echo '0c28787e3126238df24c5d4858bd0744 */tmp/miniconda.sh' | md5sum -c - && \
bash /tmp/miniconda.sh -f -b -p /opt/conda && \
/opt/conda/bin/conda install --yes -c conda-forge \
python=3.6 sqlalchemy tornado jinja2 traitlets requests pip pycurl \
nodejs configurable-http-proxy && \
/opt/conda/bin/pip install --upgrade pip && \
rm /tmp/miniconda.sh
ENV PATH=/opt/conda/bin:$PATH
# copy everything except whats in .dockerignore, its a
# compromise between needing to rebuild and maintaining
# what needs to be part of the build
COPY . /src/jupyterhub/
ADD . /src/jupyterhub
WORKDIR /src/jupyterhub
# Build client component packages (they will be copied into ./share and
# packaged with the built wheel.)
RUN python3 setup.py bdist_wheel
RUN python3 -m pip wheel --wheel-dir wheelhouse dist/*.whl
FROM $BASE_IMAGE
USER root
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get install -yq --no-install-recommends \
ca-certificates \
curl \
gnupg \
locales \
python3-pip \
python3-pycurl \
nodejs \
npm \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ENV SHELL=/bin/bash \
LC_ALL=en_US.UTF-8 \
LANG=en_US.UTF-8 \
LANGUAGE=en_US.UTF-8
RUN locale-gen $LC_ALL
# always make sure pip is up to date!
RUN python3 -m pip install --no-cache --upgrade setuptools pip
RUN npm install -g configurable-http-proxy@^4.2.0 \
&& rm -rf ~/.npm
# install the wheels we built in the first stage
COPY --from=builder /src/jupyterhub/wheelhouse /tmp/wheelhouse
RUN python3 -m pip install --no-cache /tmp/wheelhouse/*
RUN pip install . && \
rm -rf $PWD ~/.cache ~/.npm
RUN mkdir -p /srv/jupyterhub/
WORKDIR /srv/jupyterhub/
EXPOSE 8000
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
LABEL org.jupyter.service="jupyterhub"
CMD ["jupyterhub"]

PULL_REQUEST_TEMPLATE.md

@@ -0,0 +1 @@

README.md

@@ -6,37 +6,26 @@
**[License](#license)** |
**[Help and Resources](#help-and-resources)**
---
Please note that this repository is participating in a study into the sustainability of open source projects. Data will be gathered about this repository for approximately the next 12 months, starting from 2021-06-11.
Data collected will include the number of contributors, number of PRs, time taken to close/merge these PRs, and issues closed.
For more information, please visit
[our informational page](https://sustainable-open-science-and-software.github.io/) or download our [participant information sheet](https://sustainable-open-science-and-software.github.io/assets/PIS_sustainable_software.pdf).
---
# [JupyterHub](https://github.com/jupyterhub/jupyterhub)
[![Latest PyPI version](https://img.shields.io/pypi/v/jupyterhub?logo=pypi)](https://pypi.python.org/pypi/jupyterhub)
[![Latest conda-forge version](https://img.shields.io/conda/vn/conda-forge/jupyterhub?logo=conda-forge)](https://anaconda.org/conda-forge/jupyterhub)
[![Documentation build status](https://img.shields.io/readthedocs/jupyterhub?logo=read-the-docs)](https://jupyterhub.readthedocs.org/en/latest/)
[![GitHub Workflow Status - Test](https://img.shields.io/github/workflow/status/jupyterhub/jupyterhub/Test?logo=github&label=tests)](https://github.com/jupyterhub/jupyterhub/actions)
[![DockerHub build status](https://img.shields.io/docker/build/jupyterhub/jupyterhub?logo=docker&label=build)](https://hub.docker.com/r/jupyterhub/jupyterhub/tags)
[![Test coverage of code](https://codecov.io/gh/jupyterhub/jupyterhub/branch/main/graph/badge.svg)](https://codecov.io/gh/jupyterhub/jupyterhub)
[![GitHub](https://img.shields.io/badge/issue_tracking-github-blue?logo=github)](https://github.com/jupyterhub/jupyterhub/issues)
[![Discourse](https://img.shields.io/badge/help_forum-discourse-blue?logo=discourse)](https://discourse.jupyter.org/c/jupyterhub)
[![Gitter](https://img.shields.io/badge/social_chat-gitter-blue?logo=gitter)](https://gitter.im/jupyterhub/jupyterhub)
[![PyPI](https://img.shields.io/pypi/v/jupyterhub.svg)](https://pypi.python.org/pypi/jupyterhub)
[![Documentation Status](https://readthedocs.org/projects/jupyterhub/badge/?version=latest)](https://jupyterhub.readthedocs.org/en/latest/?badge=latest)
[![Documentation Status](http://readthedocs.org/projects/jupyterhub/badge/?version=0.7.2)](https://jupyterhub.readthedocs.io/en/0.7.2/?badge=0.7.2)
[![Build Status](https://travis-ci.org/jupyterhub/jupyterhub.svg?branch=master)](https://travis-ci.org/jupyterhub/jupyterhub)
[![Circle CI](https://circleci.com/gh/jupyterhub/jupyterhub.svg?style=shield&circle-token=b5b65862eb2617b9a8d39e79340b0a6b816da8cc)](https://circleci.com/gh/jupyterhub/jupyterhub)
[![codecov.io](https://codecov.io/github/jupyterhub/jupyterhub/coverage.svg?branch=master)](https://codecov.io/github/jupyterhub/jupyterhub?branch=master)
[![Google Group](https://img.shields.io/badge/google-group-blue.svg)](https://groups.google.com/forum/#!forum/jupyter)
With [JupyterHub](https://jupyterhub.readthedocs.io) you can create a
**multi-user Hub** that spawns, manages, and proxies multiple instances of the
**multi-user Hub** which spawns, manages, and proxies multiple instances of the
single-user [Jupyter notebook](https://jupyter-notebook.readthedocs.io)
server.
[Project Jupyter](https://jupyter.org) created JupyterHub to support many
users. The Hub can offer notebook servers to a class of students, a corporate
data science workgroup, a scientific research project, or a high-performance
data science workgroup, a scientific research project, or a high performance
computing group.
## Technical overview
@@ -50,32 +39,38 @@ Three main actors make up JupyterHub:
Basic principles for operation are:
- Hub launches a proxy.
- The Proxy forwards all requests to Hub by default.
- Hub handles login and spawns single-user servers on demand.
- Hub configures proxy to forward URL prefixes to the single-user notebook
- Proxy forwards all requests to Hub by default.
- Hub handles login, and spawns single-user servers on demand.
- Hub configures proxy to forward url prefixes to the single-user notebook
servers.
JupyterHub also provides a
[REST API][]
[REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default)
for administration of the Hub and its users.
[rest api]: https://juptyerhub.readthedocs.io/en/latest/reference/rest-api.html
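For a sense of what the REST API mentioned above looks like from a client, here is a small sketch (not part of the README); the URL, token placeholder, and `/users` endpoint are assumptions about a default deployment:

```python
# minimal sketch: list users via the Hub's REST API using an API token
import requests

hub_api = "http://localhost:8000/hub/api"  # assumed default URL through the proxy
token = "<your-api-token>"                 # assumed to be issued by the Hub

r = requests.get(f"{hub_api}/users", headers={"Authorization": f"token {token}"})
r.raise_for_status()
print(r.json())
```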
## Installation
### Check prerequisites
- A Linux/Unix based system
- [Python](https://www.python.org/downloads/) 3.6 or greater
- [Python](https://www.python.org/downloads/) 3.5 or greater
- [nodejs/npm](https://www.npmjs.com/)
- If you are using **`conda`**, the nodejs and npm dependencies will be installed for
* If you are using **`conda`**, the nodejs and npm dependencies will be installed for
you by conda.
- If you are using **`pip`**, install a recent version (at least 12.0) of
* If you are using **`pip`**, install a recent version of
[nodejs/npm](https://docs.npmjs.com/getting-started/installing-node).
For example, install it on Linux (Debian/Ubuntu) using:
```
sudo apt-get install npm nodejs-legacy
```
The `nodejs-legacy` package installs the `node` executable and is currently
required for npm to work on Debian/Ubuntu.
- If using the default PAM Authenticator, a [pluggable authentication module (PAM)](https://en.wikipedia.org/wiki/Pluggable_authentication_module).
- TLS certificate and key for HTTPS communication
- Domain name
@@ -89,11 +84,12 @@ To install JupyterHub along with its dependencies including nodejs/npm:
conda install -c conda-forge jupyterhub
```
If you plan to run notebook servers locally, install JupyterLab or Jupyter notebook:
If you plan to run notebook servers locally, install the Jupyter notebook
or JupyterLab:
```bash
conda install jupyterlab
conda install notebook
conda install jupyterlab
```
#### Using `pip`
@@ -102,13 +98,13 @@ JupyterHub can be installed with `pip`, and the proxy with `npm`:
```bash
npm install -g configurable-http-proxy
python3 -m pip install jupyterhub
python3 -m pip install jupyterhub
```
If you plan to run notebook servers locally, you will need to install
[JupyterLab or Jupyter notebook](https://jupyter.readthedocs.io/en/latest/install.html):
If you plan to run notebook servers locally, you will need to install the
[Jupyter notebook](https://jupyter.readthedocs.io/en/latest/install.html)
package:
python3 -m pip install --upgrade jupyterlab
python3 -m pip install --upgrade notebook
### Run the Hub server
@@ -117,12 +113,13 @@ To start the Hub server, run the command:
jupyterhub
Visit `http://localhost:8000` in your browser, and sign in with your system username and password.
Visit `https://localhost:8000` in your browser, and sign in with your unix
PAM credentials.
_Note_: To allow multiple users to sign in to the server, you will need to
run the `jupyterhub` command as a _privileged user_, such as root.
*Note*: To allow multiple users to sign into the server, you will need to
run the `jupyterhub` command as a *privileged user*, such as root.
The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges)
describes how to run the server as a _less privileged user_, which requires
describes how to run the server as a *less privileged user*, which requires
more configuration of the system.
## Configuration
@@ -141,18 +138,18 @@ To generate a default config file with settings and descriptions:
### Start the Hub
To start the Hub on a specific url and port `10.0.1.2:443` with **https**:
To start the Hub on a specific url and port ``10.0.1.2:443`` with **https**:
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
### Authenticators
| Authenticator | Description |
| ---------------------------------------------------------------------------- | ------------------------------------------------- |
| PAMAuthenticator | Default, built-in authenticator |
| [OAuthenticator](https://github.com/jupyterhub/oauthenticator) | OAuth + JupyterHub Authenticator = OAuthenticator |
| [ldapauthenticator](https://github.com/jupyterhub/ldapauthenticator) | Simple LDAP Authenticator Plugin for JupyterHub |
| [kerberosauthenticator](https://github.com/jupyterhub/kerberosauthenticator) | Kerberos Authenticator Plugin for JupyterHub |
| Authenticator | Description |
| --------------------------------------------------------------------------- | ------------------------------------------------- |
| PAMAuthenticator | Default, built-in authenticator |
| [OAuthenticator](https://github.com/jupyterhub/oauthenticator) | OAuth + JupyterHub Authenticator = OAuthenticator |
| [ldapauthenticator](https://github.com/jupyterhub/ldapauthenticator) | Simple LDAP Authenticator Plugin for JupyterHub |
| [kdcAuthenticator](https://github.com/bloomberg/jupyterhub-kdcauthenticator)| Kerberos Authenticator Plugin for JupyterHub |
### Spawners
@@ -164,7 +161,6 @@ To start the Hub on a specific url and port `10.0.1.2:443` with **https**:
| [sudospawner](https://github.com/jupyterhub/sudospawner) | Spawn single-user servers without being root |
| [systemdspawner](https://github.com/jupyterhub/systemdspawner) | Spawn single-user notebook servers using systemd |
| [batchspawner](https://github.com/jupyterhub/batchspawner) | Designed for clusters using batch scheduling software |
| [yarnspawner](https://github.com/jupyterhub/yarnspawner) | Spawn single-user notebook servers distributed on a Hadoop cluster |
| [wrapspawner](https://github.com/jupyterhub/wrapspawner) | WrapSpawner and ProfilesSpawner enabling runtime configuration of spawners |
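For context (an illustrative sketch, not part of the README diff), an authenticator and spawner from the tables above are selected in `jupyterhub_config.py`; the class paths below are the built-in defaults rather than any third-party package:

```python
# jupyterhub_config.py — minimal sketch; both classes are the built-in defaults
c = get_config()  # provided by JupyterHub when it loads this file
c.JupyterHub.authenticator_class = 'jupyterhub.auth.PAMAuthenticator'
c.JupyterHub.spawner_class = 'jupyterhub.spawner.LocalProcessSpawner'
```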
## Docker
@@ -203,14 +199,11 @@ These accounts will be used for authentication in JupyterHub's default configura
## Contributing
If you would like to contribute to the project, please read our
[contributor documentation](https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html)
[contributor documentation](http://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html)
and the [`CONTRIBUTING.md`](CONTRIBUTING.md). The `CONTRIBUTING.md` file
explains how to set up a development installation, how to run the test suite,
and how to contribute to documentation.
For a high-level view of the vision and next directions of the project, see the
[JupyterHub community roadmap](docs/source/contributing/roadmap.md).
### A note about platform support
JupyterHub is supported on Linux/Unix based systems.
@@ -230,22 +223,20 @@ docker container or Linux VM.
We use a shared copyright model that enables all contributors to maintain the
copyright on their contributions.
All code is licensed under the terms of the [revised BSD license](./COPYING.md).
All code is licensed under the terms of the revised BSD license.
## Help and resources
We encourage you to ask questions and share ideas on the [Jupyter community forum](https://discourse.jupyter.org/).
You can also talk with us on our JupyterHub [Gitter](https://gitter.im/jupyterhub/jupyterhub) channel.
We encourage you to ask questions on the [Jupyter mailing list](https://groups.google.com/forum/#!forum/jupyter).
To participate in development discussions or get help, talk with us on
our JupyterHub [Gitter](https://gitter.im/jupyterhub/jupyterhub) channel.
- [Reporting Issues](https://github.com/jupyterhub/jupyterhub/issues)
- [JupyterHub tutorial](https://github.com/jupyterhub/jupyterhub-tutorial)
- [Documentation for JupyterHub](https://jupyterhub.readthedocs.io/en/latest/) | [PDF (latest)](https://media.readthedocs.org/pdf/jupyterhub/latest/jupyterhub.pdf) | [PDF (stable)](https://media.readthedocs.org/pdf/jupyterhub/stable/jupyterhub.pdf)
- [Documentation for JupyterHub's REST API][rest api]
- [Documentation for JupyterHub's REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default)
- [Documentation for Project Jupyter](http://jupyter.readthedocs.io/en/latest/index.html) | [PDF](https://media.readthedocs.org/pdf/jupyter/latest/jupyter.pdf)
- [Project Jupyter website](https://jupyter.org)
- [Project Jupyter community](https://jupyter.org/community)
JupyterHub follows the Jupyter [Community Guides](https://jupyter.readthedocs.io/en/latest/community/content-community.html).
---


@@ -1,50 +0,0 @@
# How to make a release
`jupyterhub` is a package [available on
PyPI](https://pypi.org/project/jupyterhub/) and
[conda-forge](https://conda-forge.org/).
These are instructions on how to make a release on PyPI.
The PyPI release is done automatically by CI when a tag is pushed.
For you to follow along according to these instructions, you need:
- To have push rights to the [jupyterhub GitHub
repository](https://github.com/jupyterhub/jupyterhub).
## Steps to make a release
1. Checkout main and make sure it is up to date.
```shell
ORIGIN=${ORIGIN:-origin} # set to the canonical remote, e.g. 'upstream' if 'origin' is not the official repo
git checkout main
git fetch $ORIGIN main
git reset --hard $ORIGIN/main
```
1. Make sure `docs/source/changelog.md` is up-to-date.
[github-activity][] can help with this.
1. Update the version with `tbump`.
You can see what will happen without making any changes with `tbump --dry-run ${VERSION}`
```shell
tbump ${VERSION}
```
This will tag and publish a release,
which will be finished on CI.
1. Reset the version back to dev, e.g. `2.1.0.dev` after releasing `2.0.0`
```shell
tbump --no-tag ${NEXT_VERSION}.dev
```
1. Following the release to PyPI, an automated PR should arrive to
[conda-forge/jupyterhub-feedstock][],
check for the tests to succeed on this PR and then merge it to successfully
update the package for `conda` on the conda-forge channel.
[github-activity]: https://github.com/choldgraf/github-activity
[conda-forge/jupyterhub-feedstock]: https://github.com/conda-forge/jupyterhub-feedstock


@@ -1,5 +0,0 @@
# Reporting a Vulnerability
If you believe you've found a security vulnerability in a Jupyter
project, please report it to security@ipython.org. If you prefer to
encrypt your security reports, you can use [this PGP public key](https://jupyter-notebook.readthedocs.io/en/stable/_downloads/1d303a645f2505a8fd283826fafc9908/ipython_security.asc).


@@ -1,16 +1,19 @@
#!/usr/bin/env python
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
"""
bower-lite
Since Bower's on its way out,
stage frontend dependencies from node_modules into components
"""
import json
import os
import shutil
from os.path import join
import shutil
HERE = os.path.abspath(os.path.dirname(__file__))
@@ -29,5 +32,5 @@ dependencies = package_json['dependencies']
for dep in dependencies:
src = join(node_modules, dep)
dest = join(components, dep)
print(f"{src} -> {dest}")
print("%s -> %s" % (src, dest))
shutil.copytree(src, dest)


@@ -1,60 +1,50 @@
#!/usr/bin/env bash
# The goal of this script is to start a database server as a docker container.
#
# Required environment variables:
# - DB: The database server to start, either "postgres" or "mysql".
#
# - PGUSER/PGPASSWORD: For the creation of a postgresql user with associated
# password.
# source this file to setup postgres and mysql
# for local testing (as similar as possible to docker)
set -eu
set -e
# Stop and remove any existing database container
DOCKER_CONTAINER="hub-test-$DB"
docker rm -f "$DOCKER_CONTAINER" 2>/dev/null || true
export MYSQL_HOST=127.0.0.1
export MYSQL_TCP_PORT=${MYSQL_TCP_PORT:-13306}
export PGHOST=127.0.0.1
NAME="hub-test-$DB"
DOCKER_RUN="docker run -d --name $NAME"
# Prepare environment variables to startup and await readiness of either a mysql
# or postgresql server.
if [[ "$DB" == "mysql" ]]; then
# Environment variables can influence both the mysql server in the docker
# container and the mysql client.
#
# ref server: https://hub.docker.com/_/mysql/
# ref client: https://dev.mysql.com/doc/refman/5.7/en/setting-environment-variables.html
#
DOCKER_RUN_ARGS="-p 3306:3306 --env MYSQL_ALLOW_EMPTY_PASSWORD=1 mysql:5.7"
READINESS_CHECK="mysql --user root --execute \q"
elif [[ "$DB" == "postgres" ]]; then
# Environment variables can influence both the postgresql server in the
# docker container and the postgresql client (psql).
#
# ref server: https://hub.docker.com/_/postgres/
# ref client: https://www.postgresql.org/docs/9.5/libpq-envars.html
#
# POSTGRES_USER / POSTGRES_PASSWORD will create a user on startup of the
# postgres server, but PGUSER and PGPASSWORD are the environment variables
# used by the postgresql client psql, so we configure the user based on how
# we want to connect.
#
DOCKER_RUN_ARGS="-p 5432:5432 --env "POSTGRES_USER=${PGUSER}" --env "POSTGRES_PASSWORD=${PGPASSWORD}" postgres:9.5"
READINESS_CHECK="psql --command \q"
else
echo '$DB must be mysql or postgres'
exit 1
fi
docker rm -f "$NAME" 2>/dev/null || true
# Start the database server
docker run --detach --name "$DOCKER_CONTAINER" $DOCKER_RUN_ARGS
case "$DB" in
"mysql")
RUN_ARGS="-e MYSQL_ALLOW_EMPTY_PASSWORD=1 -p $MYSQL_TCP_PORT:3306 mysql:5.7"
CHECK="mysql --host $MYSQL_HOST --port $MYSQL_TCP_PORT --user root -e \q"
;;
"postgres")
RUN_ARGS="-p 5432:5432 postgres:9.5"
CHECK="psql --user postgres -c \q"
;;
*)
echo '$DB must be mysql or postgres'
exit 1
esac
$DOCKER_RUN $RUN_ARGS
# Wait for the database server to start
echo -n "waiting for $DB "
for i in {1..60}; do
if $READINESS_CHECK; then
echo 'done'
break
else
echo -n '.'
sleep 1
fi
if $CHECK; then
echo 'done'
break
else
echo -n '.'
sleep 1
fi
done
$READINESS_CHECK
$CHECK
echo -e "
Set these environment variables:
export MYSQL_HOST=127.0.0.1
export MYSQL_TCP_PORT=$MYSQL_TCP_PORT
export PGHOST=127.0.0.1
"


@@ -1,26 +1,27 @@
#!/usr/bin/env bash
# The goal of this script is to initialize a running database server with clean
# databases for use during tests.
#
# Required environment variables:
# - DB: The database server to start, either "postgres" or "mysql".
# initialize jupyterhub databases for testing
set -eu
set -e
# Prepare env vars SQL_CLIENT and EXTRA_CREATE_DATABASE_ARGS
if [[ "$DB" == "mysql" ]]; then
SQL_CLIENT="mysql --user root --execute "
EXTRA_CREATE_DATABASE_ARGS='CHARACTER SET utf8 COLLATE utf8_general_ci'
elif [[ "$DB" == "postgres" ]]; then
SQL_CLIENT="psql --command "
else
echo '$DB must be mysql or postgres'
exit 1
fi
MYSQL="mysql --user root --host $MYSQL_HOST --port $MYSQL_TCP_PORT -e "
PSQL="psql --user postgres -c "
case "$DB" in
"mysql")
EXTRA_CREATE='CHARACTER SET utf8 COLLATE utf8_general_ci'
SQL="$MYSQL"
;;
"postgres")
SQL="$PSQL"
;;
*)
echo '$DB must be mysql or postgres'
exit 1
esac
# Configure a set of databases in the database server for upgrade tests
set -x
for SUFFIX in '' _upgrade_100 _upgrade_122 _upgrade_130; do
$SQL_CLIENT "DROP DATABASE jupyterhub${SUFFIX};" 2>/dev/null || true
$SQL_CLIENT "CREATE DATABASE jupyterhub${SUFFIX} ${EXTRA_CREATE_DATABASE_ARGS:-};"
for SUFFIX in '' _upgrade_072 _upgrade_081; do
$SQL "DROP DATABASE jupyterhub${SUFFIX};" 2>/dev/null || true
$SQL "CREATE DATABASE jupyterhub${SUFFIX} ${EXTRA_CREATE};"
done


@@ -1,16 +0,0 @@
# Demo JupyterHub Docker image
#
# This should only be used for demo or testing and not as a base image to build on.
#
# It includes the notebook package and it uses the DummyAuthenticator and the SimpleLocalProcessSpawner.
ARG BASE_IMAGE=jupyterhub/jupyterhub-onbuild
FROM ${BASE_IMAGE}
# Install the notebook package
RUN python3 -m pip install notebook
# Create a demo user
RUN useradd --create-home demo
RUN chown demo .
USER demo


@@ -1,26 +0,0 @@
## Demo Dockerfile
This is a demo JupyterHub Docker image to help you get a quick overview of what
JupyterHub is and how it works.
It uses the SimpleLocalProcessSpawner to spawn new user servers and
DummyAuthenticator for authentication.
The DummyAuthenticator allows you to log in with any username & password and the
SimpleLocalProcessSpawner allows starting servers without having to create a
local user for each JupyterHub user.
### Important!
This should only be used for demo or testing purposes!
It shouldn't be used as a base image to build on.
### Try it
1. `cd` to the root of your jupyterhub repo.
2. Build the demo image with `docker build -t jupyterhub-demo demo-image`.
3. Run the demo image with `docker run -d -p 8000:8000 jupyterhub-demo`.
4. Visit http://localhost:8000 and login with any username and password
5. Happy demo-ing :tada:!


@@ -1,7 +0,0 @@
# Configuration file for jupyterhub-demo
c = get_config()
# Use DummyAuthenticator and SimpleSpawner
c.JupyterHub.spawner_class = "simple"
c.JupyterHub.authenticator_class = "dummy"


@@ -1,21 +1,14 @@
-r requirements.txt
mock
beautifulsoup4
codecov
cryptography
pytest-cov
pytest-tornado
pytest>=3.3
notebook
requests-mock
virtualenv
# temporary pin of attrs for jsonschema 0.3.0a1
# seems to be a pip bug
attrs>=17.4.0
beautifulsoup4
codecov
coverage
cryptography
html5lib # needed for beautifulsoup
jupyterlab >=3
mock
pre-commit
pytest>=3.3
pytest-asyncio
pytest-cov
requests-mock
tbump
# blacklist urllib3 releases affected by https://github.com/urllib3/urllib3/issues/1683
# I *think* this should only affect testing, not production
urllib3!=1.25.4,!=1.25.5
virtualenv


@@ -1,14 +1,11 @@
FROM alpine:3.13
ENV LANG=en_US.UTF-8
RUN apk add --no-cache \
python3 \
py3-pip \
py3-ruamel.yaml \
py3-cryptography \
py3-sqlalchemy
FROM python:3.6.3-alpine3.6
ARG JUPYTERHUB_VERSION=0.8.1
ARG JUPYTERHUB_VERSION=1.3.0
RUN pip3 install --no-cache jupyterhub==${JUPYTERHUB_VERSION}
ENV LANG=en_US.UTF-8
USER nobody
CMD ["jupyterhub"]

View File

@@ -1,20 +1,21 @@
## What is Dockerfile.alpine
Dockerfile.alpine contains base image for jupyterhub. It does not work independently, but only as part of a full jupyterhub cluster
Dockerfile.alpine contains base image for jupyterhub. It does not work independently, but only as part of a full jupyterhub cluster
## How to use it?
1. A running configurable-http-proxy, whose API is accessible.
1. A running configurable-http-proxy, whose API is accessible.
2. A jupyterhub_config file.
3. Authentication and other libraries required by the specific jupyterhub_config file.
## Steps to test it outside a cluster
- start configurable-http-proxy in another container
- specify CONFIGPROXY_AUTH_TOKEN env in both containers
- put both containers on the same network (e.g. docker network create jupyterhub; docker run ... --net jupyterhub)
- tell jupyterhub where CHP is (e.g. c.ConfigurableHTTPProxy.api_url = 'http://chp:8001')
- tell jupyterhub not to start the proxy itself (c.ConfigurableHTTPProxy.should_start = False)
- Use dummy authenticator for ease of testing. Update the following in the jupyterhub_config file (a combined config sketch follows this list):
- c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
- c.DummyAuthenticator.password = "your strong password"
* start configurable-http-proxy in another container
* specify CONFIGPROXY_AUTH_TOKEN env in both containers
* put both containers on the same network (e.g. docker create network jupyterhub; docker run ... --net jupyterhub)
* tell jupyterhub where CHP is (e.g. c.ConfigurableHTTPProxy.api_url = 'http://chp:8001')
* tell jupyterhub not to start the proxy itself (c.ConfigurableHTTPProxy.should_start = False)
* Use dummy authenticator for ease of testing. Update following in jupyterhub_config file
- c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
- c.DummyAuthenticator.password = "your strong password"
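Putting those pieces together, a minimal `jupyterhub_config.py` for such a test might look like the sketch below. The container name `chp`, the shared token, and the password are placeholders, and the last two lines assume the `jupyterhub-dummyauthenticator` package is installed:
```python
# jupyterhub_config.py -- illustrative sketch only; adjust names to your setup.
c = get_config()  # provided by JupyterHub when it loads this file

# Use the externally managed configurable-http-proxy (CHP) container.
c.ConfigurableHTTPProxy.should_start = False
c.ConfigurableHTTPProxy.api_url = 'http://chp:8001'
# Must match CONFIGPROXY_AUTH_TOKEN set in both containers.
c.ConfigurableHTTPProxy.auth_token = 'shared-secret'

# Dummy authentication for testing only -- never use this in production.
c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
c.DummyAuthenticator.password = 'your strong password'
```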

View File

@@ -1,9 +0,0 @@
import os
from jupyterhub._data import DATA_FILES_PATH
print(f"DATA_FILES_PATH={DATA_FILES_PATH}")
for sub_path in ("templates", "static/components", "static/css/style.min.css"):
path = os.path.join(DATA_FILES_PATH, sub_path)
assert os.path.exists(path), path

View File

@@ -48,22 +48,19 @@ help:
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
@echo " spelling to run spell check on documentation"
@echo " metrics to generate documentation for metrics by inspecting the source code"
clean:
rm -rf $(BUILDDIR)/*
metrics: source/reference/metrics.rst
node_modules: package.json
npm install && touch node_modules
source/reference/metrics.rst: generate-metrics.py
python3 generate-metrics.py
rest-api: source/_static/rest-api/index.html
scopes: source/rbac/scope-table.md
source/_static/rest-api/index.html: rest-api.yml node_modules
npm run rest-api
source/rbac/scope-table.md: source/rbac/generate-scope-table.py
python3 source/rbac/generate-scope-table.py
html: metrics scopes
html: rest-api
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

22
docs/environment.yml Normal file
View File

@@ -0,0 +1,22 @@
# ReadTheDocs uses the `environment.yaml` so make sure to update that as well
# if you change the dependencies of JupyterHub in the various `requirements.txt`
name: jhub_docs
channels:
- conda-forge
dependencies:
- nodejs
- python=3.6
- alembic
- jinja2
- pamela
- requests
- sqlalchemy>=1
- tornado>=5.0
- traitlets>=4.1
- sphinx>=1.7
- pip:
- python-oauth2
- recommonmark==0.4.0
- async_generator
- prometheus_client
- attrs>=17.4.0

View File

@@ -1,57 +0,0 @@
import os
from os.path import join
from pytablewriter import RstSimpleTableWriter
from pytablewriter.style import Style
import jupyterhub.metrics
HERE = os.path.abspath(os.path.dirname(__file__))
class Generator:
@classmethod
def create_writer(cls, table_name, headers, values):
writer = RstSimpleTableWriter()
writer.table_name = table_name
writer.headers = headers
writer.value_matrix = values
writer.margin = 1
[writer.set_style(header, Style(align="center")) for header in headers]
return writer
def _parse_metrics(self):
table_rows = []
for name in dir(jupyterhub.metrics):
obj = getattr(jupyterhub.metrics, name)
if obj.__class__.__module__.startswith('prometheus_client.'):
for metric in obj.describe():
table_rows.append([metric.type, metric.name, metric.documentation])
return table_rows
def prometheus_metrics(self):
generated_directory = f"{HERE}/source/reference"
if not os.path.exists(generated_directory):
os.makedirs(generated_directory)
filename = f"{generated_directory}/metrics.rst"
table_name = ""
headers = ["Type", "Name", "Description"]
values = self._parse_metrics()
writer = self.create_writer(table_name, headers, values)
title = "List of Prometheus Metrics"
underline = "============================"
content = f"{title}\n{underline}\n{writer.dumps()}"
with open(filename, 'w') as f:
f.write(content)
print(f"Generated {filename}.")
def main():
doc_generator = Generator()
doc_generator.prometheus_metrics()
if __name__ == "__main__":
main()

14
docs/package.json Normal file
View File

@@ -0,0 +1,14 @@
{
"name": "jupyterhub-docs-build",
"version": "0.8.0",
"description": "build JupyterHub swagger docs",
"scripts": {
"rest-api": "bootprint openapi ./rest-api.yml source/_static/rest-api"
},
"author": "",
"license": "BSD-3-Clause",
"devDependencies": {
"bootprint": "^1.0.0",
"bootprint-openapi": "^1.0.0"
}
}

View File

@@ -1,12 +1,5 @@
# ReadTheDocs uses the `environment.yaml` so make sure to update that as well
# if you change this file
-r ../requirements.txt
alabaster_jupyterhub
autodoc-traits
myst-parser
pre-commit
pydata-sphinx-theme
pytablewriter>=0.56
ruamel.yaml
sphinx>=1.7
sphinx-copybutton
sphinx-jsonschema
recommonmark==0.4.0

739
docs/rest-api.yml Normal file
View File

@@ -0,0 +1,739 @@
# see me at: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default
swagger: '2.0'
info:
title: JupyterHub
description: The REST API for JupyterHub
version: 0.9.4
license:
name: BSD-3-Clause
schemes:
- [http, https]
securityDefinitions:
token:
type: apiKey
name: Authorization
in: header
security:
- token: []
basePath: /hub/api
produces:
- application/json
consumes:
- application/json
paths:
/:
get:
summary: Get JupyterHub version
description: |
This endpoint is not authenticated, so that clients and users
can identify the JupyterHub version before setting up authentication.
responses:
'200':
description: The JupyterHub version
schema:
type: object
properties:
version:
type: string
description: The version of JupyterHub itself
/info:
get:
summary: Get detailed info about JupyterHub
description: |
Detailed JupyterHub information, including Python version,
JupyterHub's version and executable path,
and which Authenticator and Spawner are active.
responses:
'200':
description: Detailed JupyterHub info
schema:
type: object
properties:
version:
type: string
description: The version of JupyterHub itself
python:
type: string
description: The Python version, as returned by sys.version
sys_executable:
type: string
description: The path to sys.executable running JupyterHub
authenticator:
type: object
properties:
class:
type: string
description: The Python class currently active for JupyterHub Authentication
version:
type: string
description: The version of the currently active Authenticator
spawner:
type: object
properties:
class:
type: string
description: The Python class currently active for spawning single-user notebook servers
version:
type: string
description: The version of the currently active Spawner
/users:
get:
summary: List users
responses:
'200':
description: The Hub's user list
schema:
type: array
items:
$ref: '#/definitions/User'
post:
summary: Create multiple users
parameters:
- name: data
in: body
required: true
schema:
type: object
properties:
usernames:
type: array
description: list of usernames to create on the Hub
items:
type: string
admin:
description: whether the created users should be admins
type: boolean
responses:
'201':
description: The users have been created
schema:
type: array
description: The created users
items:
$ref: '#/definitions/User'
/users/{name}:
get:
summary: Get a user by name
parameters:
- name: name
description: username
in: path
required: true
type: string
responses:
'200':
description: The User model
schema:
$ref: '#/definitions/User'
post:
summary: Create a single user
parameters:
- name: name
description: username
in: path
required: true
type: string
responses:
'201':
description: The user has been created
schema:
$ref: '#/definitions/User'
patch:
summary: Modify a user
description: Change a user's name or admin status
parameters:
- name: name
description: username
in: path
required: true
type: string
- name: data
in: body
required: true
description: Updated user info. At least one key to be updated (name or admin) is required.
schema:
type: object
properties:
name:
type: string
description: the new name (optional, if another key is updated i.e. admin)
admin:
type: boolean
description: update admin (optional, if another key is updated i.e. name)
responses:
'200':
description: The updated user info
schema:
$ref: '#/definitions/User'
delete:
summary: Delete a user
parameters:
- name: name
description: username
in: path
required: true
type: string
responses:
'204':
description: The user has been deleted
/users/{name}/server:
post:
summary: Start a user's single-user notebook server
parameters:
- name: name
description: username
in: path
required: true
type: string
responses:
'201':
description: The user's notebook server has started
'202':
description: The user's notebook server has not yet started, but has been requested
delete:
summary: Stop a user's server
parameters:
- name: name
description: username
in: path
required: true
type: string
responses:
'204':
description: The user's notebook server has stopped
'202':
description: The user's notebook server has not yet stopped as it is taking a while to stop
/users/{name}/servers/{server_name}:
post:
summary: Start a user's single-user named-server notebook server
parameters:
- name: name
description: username
in: path
required: true
type: string
- name: server_name
description: name given to a named-server
in: path
required: true
type: string
responses:
'201':
description: The user's notebook named-server has started
'202':
description: The user's notebook named-server has not yet started, but has been requested
delete:
summary: Stop a user's named-server
parameters:
- name: name
description: username
in: path
required: true
type: string
- name: server_name
description: name given to a named-server
in: path
required: true
type: string
responses:
'204':
description: The user's notebook named-server has stopped
'202':
description: The user's notebook named-server has not yet stopped as it is taking a while to stop
/users/{name}/tokens:
get:
summary: List tokens for the user
responses:
'200':
description: The list of tokens
schema:
type: array
items:
$ref: '#/definitions/Token'
post:
summary: Create a new token for the user
parameters:
- name: expires_in
type: number
required: false
in: body
description: lifetime (in seconds) after which the requested token will expire.
- name: note
type: string
required: false
in: body
description: A note attached to the token for future bookkeeping
responses:
'201':
description: The newly created token
schema:
$ref: '#/definitions/Token'
/users/{name}/tokens/{token_id}:
get:
summary: Get the model for a token by id
responses:
'200':
description: The info for the new token
schema:
$ref: '#/definitions/Token'
delete:
summary: Delete (revoke) a token by id
responses:
'204':
description: The token has been deleted
/user:
summary: Return authenticated user's model
description:
parameters:
responses:
'200':
description: The authenticated user's model is returned.
/groups:
get:
summary: List groups
responses:
'200':
description: The list of groups
schema:
type: array
items:
$ref: '#/definitions/Group'
/groups/{name}:
get:
summary: Get a group by name
parameters:
- name: name
description: group name
in: path
required: true
type: string
responses:
'200':
description: The group model
schema:
$ref: '#/definitions/Group'
post:
summary: Create a group
parameters:
- name: name
description: group name
in: path
required: true
type: string
responses:
'201':
description: The group has been created
schema:
$ref: '#/definitions/Group'
delete:
summary: Delete a group
parameters:
- name: name
description: group name
in: path
required: true
type: string
responses:
'204':
description: The group has been deleted
/groups/{name}/users:
post:
summary: Add users to a group
parameters:
- name: name
description: group name
in: path
required: true
type: string
- name: data
in: body
required: true
description: The users to add to the group
schema:
type: object
properties:
users:
type: array
description: List of usernames to add to the group
items:
type: string
responses:
'200':
description: The users have been added to the group
schema:
$ref: '#/definitions/Group'
delete:
summary: Remove users from a group
parameters:
- name: name
description: group name
in: path
required: true
type: string
- name: data
in: body
required: true
description: The users to remove from the group
schema:
type: object
properties:
users:
type: array
description: List of usernames to remove from the group
items:
type: string
responses:
'200':
description: The users have been removed from the group
/services:
get:
summary: List services
responses:
'200':
description: The service list
schema:
type: array
items:
$ref: '#/definitions/Service'
/services/{name}:
get:
summary: Get a service by name
parameters:
- name: name
description: service name
in: path
required: true
type: string
responses:
'200':
description: The Service model
schema:
$ref: '#/definitions/Service'
/proxy:
get:
summary: Get the proxy's routing table
description: A convenience alias for getting the routing table directly from the proxy
responses:
'200':
description: Routing table
schema:
type: object
description: configurable-http-proxy routing table (see configurable-http-proxy docs for details)
post:
summary: Force the Hub to sync with the proxy
responses:
'200':
description: Success
patch:
summary: Notify the Hub about a new proxy
description: Notifies the Hub of a new proxy to use.
parameters:
- name: data
in: body
required: true
description: Any values that have changed for the new proxy. All keys are optional.
schema:
type: object
properties:
ip:
type: string
description: IP address of the new proxy
port:
type: string
description: Port of the new proxy
protocol:
type: string
description: Protocol of new proxy, if changed
auth_token:
type: string
description: CONFIGPROXY_AUTH_TOKEN for the new proxy
responses:
'200':
description: Success
/authorizations/token:
post:
summary: Request a new API token
description: |
Request a new API token to use with the JupyterHub REST API.
If not already authenticated, username and password can be sent
in the JSON request body.
Logging in via this method is only available when the active Authenticator
accepts passwords (e.g. not OAuth).
parameters:
- name: username
in: body
required: false
type: string
- name: password
in: body
required: false
type: string
responses:
'200':
description: The new API token
schema:
type: object
properties:
token:
type: string
description: The new API token.
'403':
description: The user can not be authenticated.
/authorizations/token/{token}:
get:
summary: Identify a user or service from an API token
parameters:
- name: token
in: path
required: true
type: string
responses:
'200':
description: The user or service identified by the API token
'404':
description: A user or service is not found.
/authorizations/cookie/{cookie_name}/{cookie_value}:
get:
summary: Identify a user from a cookie
description: Used by single-user notebook servers to hand off cookie authentication to the Hub
parameters:
- name: cookie_name
in: path
required: true
type: string
- name: cookie_value
in: path
required: true
type: string
responses:
'200':
description: The user identified by the cookie
schema:
$ref: '#/definitions/User'
'404':
description: A user is not found.
/oauth2/authorize:
get:
summary: 'OAuth 2.0 authorize endpoint'
description: |
Redirect users to this URL to begin the OAuth process.
It is not an API endpoint.
parameters:
- name: client_id
description: The client id
in: query
required: true
type: string
- name: response_type
description: The response type (always 'code')
in: query
required: true
type: string
- name: state
description: A state string
in: query
required: false
type: string
- name: redirect_uri
description: The redirect url
in: query
required: true
type: string
/oauth2/token:
post:
summary: Request an OAuth2 token
description: |
Request an OAuth2 token from an authorization code.
This request completes the OAuth process.
consumes:
- application/x-www-form-urlencoded
parameters:
- name: client_id
description: The client id
in: form
required: true
type: string
- name: client_secret
description: The client secret
in: form
required: true
type: string
- name: grant_type
description: The grant type (always 'authorization_code')
in: form
required: true
type: string
- name: code
description: The code provided by the authorization redirect
in: form
required: true
type: string
- name: redirect_uri
description: The redirect url
in: form
required: true
type: string
responses:
'200':
description: JSON response including the token
schema:
type: object
properties:
access_token:
type: string
description: The new API token for the user
token_type:
type: string
description: Will always be 'Bearer'
/shutdown:
post:
summary: Shutdown the Hub
parameters:
- name: proxy
in: body
type: boolean
description: Whether the proxy should be shutdown as well (default from Hub config)
- name: servers
in: body
type: boolean
description: Whether users' notebook servers should be shutdown as well (default from Hub config)
definitions:
User:
type: object
properties:
name:
type: string
description: The user's name
admin:
type: boolean
description: Whether the user is an admin
groups:
type: array
description: The names of groups where this user is a member
items:
type: string
server:
type: string
description: The user's notebook server's base URL, if running; null if not.
pending:
type: string
enum: ["spawn", "stop", null]
description: The currently pending action, if any
last_activity:
type: string
format: date-time
description: Timestamp of last-seen activity from the user
servers:
type: object
description: The active servers for this user.
items:
schema:
$ref: '#/definitions/Server'
Server:
type: object
properties:
name:
type: string
description: The server's name. The user's default server has an empty name ('')
ready:
type: boolean
description: |
Whether the server is ready for traffic.
Will always be false when any transition is pending.
pending:
type: string
enum: ["spawn", "stop", null]
description: |
The currently pending action, if any.
A server is not ready if an action is pending.
url:
type: string
description: |
The URL where the server can be accessed
(typically /user/:name/:server.name/).
progress_url:
type: string
description: |
The URL for an event-stream to retrieve events during a spawn.
started:
type: string
format: date-time
description: UTC timestamp when the server was last started.
last_activity:
type: string
format: date-time
description: UTC timestamp last-seen activity on this server.
state:
type: object
description: Arbitrary internal state from this server's spawner. Only available on the hub's users list or get-user-by-name method, and only if a hub admin. None otherwise.
Group:
type: object
properties:
name:
type: string
description: The group's name
users:
type: array
description: The names of users who are members of this group
items:
type: string
Service:
type: object
properties:
name:
type: string
description: The service's name
admin:
type: boolean
description: Whether the service is an admin
url:
type: string
description: The internal url where the service is running
prefix:
type: string
description: The proxied URL prefix to the service's url
pid:
type: number
description: The PID of the service process (if managed)
command:
type: array
description: The command used to start the service (if managed)
items:
type: string
info:
type: object
description: |
Additional information a deployment can attach to a service.
JupyterHub does not use this field.
Token:
type: object
properties:
token:
type: string
description: The token itself. Only present in responses to requests for a new token.
id:
type: string
description: The id of the API token. Used for modifying or deleting the token.
user:
type: string
description: The user that owns a token (undefined if owned by a service)
service:
type: string
description: The service that owns the token (undefined if owned by a user)
note:
type: string
description: A note about the token, typically describing what it was created for.
created:
type: string
format: date-time
description: Timestamp when this token was created
expires_at:
type: string
format: date-time
description: Timestamp when this token expires. Null if there is no expiry.
last_activity:
type: string
format: date-time
description: |
Timestamp of last-seen activity using this token.
Can be null if token has never been used.

View File

@@ -1,10 +1,106 @@
/* Added to avoid logo being too squeezed */
.navbar-brand {
height: 4rem !important;
}
/* hide redundant funky-formatted swagger-ui version */
.swagger-ui .info .title small {
display: none !important;
}
div#helm-chart-schema h2,
div#helm-chart-schema h3,
div#helm-chart-schema h4,
div#helm-chart-schema h5,
div#helm-chart-schema h6 {
font-family: courier new;
}
h3, h3 ~ * {
margin-left: 3% !important;
}
h4, h4 ~ * {
margin-left: 6% !important;
}
h5, h5 ~ * {
margin-left: 9% !important;
}
h6, h6 ~ * {
margin-left: 12% !important;
}
h7, h7 ~ * {
margin-left: 15% !important;
}
img.logo {
width:100%
}
.right-next {
float: right;
max-width: 45%;
overflow: auto;
text-overflow: ellipsis;
white-space: nowrap;
}
.right-next::after{
content: ' »';
}
.left-prev {
float: left;
max-width: 45%;
overflow: auto;
text-overflow: ellipsis;
white-space: nowrap;
}
.left-prev::before{
content: '« ';
}
.prev-next-bottom {
margin-top: 3em;
}
.prev-next-top {
margin-bottom: 1em;
}
/* Sidebar TOC and headers */
div.sphinxsidebarwrapper div {
margin-bottom: .8em;
}
div.sphinxsidebar h3 {
font-size: 1.3em;
padding-top: 0px;
font-weight: 800;
margin-left: 0px !important;
}
div.sphinxsidebar p.caption {
font-size: 1.2em;
margin-bottom: 0px;
margin-left: 0px !important;
font-weight: 900;
color: #767676;
}
div.sphinxsidebar ul {
font-size: .8em;
margin-top: 0px;
padding-left: 3%;
margin-left: 0px !important;
}
div.relations ul {
font-size: 1em;
margin-left: 0px !important;
}
div#searchbox form {
margin-left: 0px !important;
}
/* body elements */
.toctree-wrapper span.caption-text {
color: #767676;
font-style: italic;
font-weight: 300;
}

Binary file not shown.

Before

Width:  |  Height:  |  Size: 6.7 KiB

After

Width:  |  Height:  |  Size: 38 KiB

File diff suppressed because it is too large

View File

@@ -0,0 +1,16 @@
{# Custom template for navigation.html
alabaster theme does not provide blocks for titles to
be overridden so this custom theme handles title and
toctree for sidebar
#}
<h3>{{ _('Table of Contents') }}</h3>
{{ toctree(includehidden=theme_sidebar_includehidden, collapse=theme_sidebar_collapse) }}
{% if theme_extra_nav_links %}
<hr />
<ul>
{% for text, uri in theme_extra_nav_links.items() %}
<li class="toctree-l1"><a href="{{ uri }}">{{ text }}</a></li>
{% endfor %}
</ul>
{% endif %}

View File

@@ -0,0 +1,30 @@
{% extends '!page.html' %}
{# Custom template for page.html
Alabaster theme does not provide blocks for prev/next at bottom of each page.
This is _in addition_ to the prev/next in the sidebar. The "Prev/Next" text
or symbols are handled by CSS classes in _static/custom.css
#}
{% macro prev_next(prev, next, prev_title='', next_title='') %}
{%- if prev %}
<a class='left-prev' href="{{ prev.link|e }}" title="{{ _('previous chapter')}}">{{ prev_title or prev.title }}</a>
{%- endif %}
{%- if next %}
<a class='right-next' href="{{ next.link|e }}" title="{{ _('next chapter')}}">{{ next_title or next.title }}</a>
{%- endif %}
<div style='clear:both;'></div>
{% endmacro %}
{% block body %}
<div class='prev-next-top'>
{{ prev_next(prev, next, 'Previous', 'Next') }}
</div>
{{super()}}
<div class='prev-next-bottom'>
{{ prev_next(prev, next) }}
</div>
{% endblock %}

View File

@@ -0,0 +1,17 @@
{# Custom template for relations.html
alabaster theme does not provide previous/next page by default
#}
<div class="relations">
<h3>Navigation</h3>
<ul>
<li><a href="{{ pathto(master_doc) }}">Documentation Home</a><ul>
{%- if prev %}
<li><a href="{{ prev.link|e }}" title="Previous">Previous topic</a></li>
{%- endif %}
{%- if next %}
<li><a href="{{ next.link|e }}" title="Next">Next topic</a></li>
{%- endif %}
</ul>
</ul>
</div>

View File

@@ -1,157 +0,0 @@
====================
Upgrading JupyterHub
====================
JupyterHub offers easy upgrade pathways between minor versions. This
document describes how to do these upgrades.
If you are using :ref:`a JupyterHub distribution <index/distributions>`, you
should consult the distribution's documentation on how to upgrade. This
document is for you if you have set up your own JupyterHub without using a
distribution.
It is long because it is pretty detailed! Most likely, upgrading
JupyterHub is painless and quick, with minimal user interruption.
Read the Changelog
==================
The `changelog <../changelog.html>`_ contains information on what has
changed with the new JupyterHub release, and any deprecation warnings.
Read these notes to familiarize yourself with the coming changes. There
might be new releases of authenticators & spawners you are using, so
read the changelogs for those too!
Notify your users
=================
If you are using the default configuration where ``configurable-http-proxy``
is managed by JupyterHub, your users will see service disruption during
the upgrade process. You should notify them, and pick a time to do the
upgrade where they will be least disrupted.
If you are using a different proxy, or running ``configurable-http-proxy``
independent of JupyterHub, your users will be able to continue using notebook
servers they had already launched, but will not be able to launch new servers
nor sign in.
Backup database & config
========================
Before doing an upgrade, it is critical to back up:
#. Your JupyterHub database (sqlite by default, or MySQL / Postgres
if you used those). If you are using sqlite (the default), you
should back up the ``jupyterhub.sqlite`` file.
#. Your ``jupyterhub_config.py`` file.
#. Your users' home directories. These are unlikely to be affected directly by
a JupyterHub upgrade, but we recommend a backup since user data is
critical. (A minimal backup sketch for the first two items follows this list.)
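A minimal sketch of such a backup, assuming a default SQLite deployment with
``jupyterhub.sqlite`` and ``jupyterhub_config.py`` in the current directory
(the filenames and paths are assumptions; adapt them to your deployment):
.. code-block:: python

# backup_hub.py -- hypothetical helper script, not shipped with JupyterHub.
import shutil
from datetime import datetime
from pathlib import Path

stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
backup_dir = Path(f"jupyterhub-backup-{stamp}")
backup_dir.mkdir()

for name in ("jupyterhub.sqlite", "jupyterhub_config.py"):
    src = Path(name)
    if src.exists():
        shutil.copy2(src, backup_dir / name)
        print(f"backed up {name} -> {backup_dir / name}")
    else:
        print(f"skipped {name}: not found")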
Shutdown JupyterHub
===================
Shutdown the JupyterHub process. This would vary depending on how you
have set up JupyterHub to run. Most likely, it is using a process
supervisor of some sort (``systemd`` or ``supervisord`` or even ``docker``).
Use the supervisor specific command to stop the JupyterHub process.
Upgrade JupyterHub packages
===========================
There are two environments where the ``jupyterhub`` package is installed:
#. The *hub environment*, which is where the JupyterHub server process
runs. This is started with the ``jupyterhub`` command, and is what
people generally think of as JupyterHub.
#. The *notebook user environments*. This is where the user notebook
servers are launched from, and is probably custom to your own
installation. This could be just one environment (different from the
hub environment) that is shared by all users, one environment
per user, or the same environment as the hub environment. The hub
launches the ``jupyterhub-singleuser`` command in this environment,
which in turn starts the notebook server.
You need to make sure the version of the ``jupyterhub`` package matches
in both these environments. If you installed ``jupyterhub`` with pip,
you can upgrade it with:
.. code-block:: bash
python3 -m pip install --upgrade jupyterhub==<version>
Where ``<version>`` is the version of JupyterHub you are upgrading to.
If you used ``conda`` to install ``jupyterhub``, you should upgrade it
with:
.. code-block:: bash
conda install -c conda-forge jupyterhub==<version>
Where ``<version>`` is the version of JupyterHub you are upgrading to.
You should also check for new releases of the authenticator & spawner you
are using. You might wish to upgrade those packages too along with JupyterHub,
or upgrade them separately.
Upgrade JupyterHub database
===========================
Once new packages are installed, you need to upgrade the JupyterHub
database. From the hub environment, in the same directory as your
``jupyterhub_config.py`` file, you should run:
.. code-block:: bash
jupyterhub upgrade-db
This should find the location of your database, and run necessary upgrades
for it.
SQLite database disadvantages
-----------------------------
SQLite has some disadvantages when it comes to upgrading JupyterHub. These
are:
- ``upgrade-db`` may not work, and you may need to delete your database
and start with a fresh one.
- ``downgrade-db`` **will not** work if you want to roll back to an
earlier version, so back up the ``jupyterhub.sqlite`` file before
upgrading
What happens if I delete my database?
-------------------------------------
Losing the Hub database is often not a big deal. Information that
resides only in the Hub database includes:
- active login tokens (user cookies, service tokens)
- users added via JupyterHub UI, instead of config files
- info about running servers
If the following conditions are true, you should be fine clearing the
Hub database and starting over:
- users are specified in the config file, or log in using an external
authentication provider (Google, GitHub, LDAP, etc.)
- user servers are stopped during the upgrade
- you don't mind causing users to log in again after the upgrade
Start JupyterHub
================
Once the database upgrade is completed, start the ``jupyterhub``
process again.
#. Log-in and start the server to make sure things work as
expected.
#. Check the logs for any errors or deprecation warnings. You
might have to update your ``jupyterhub_config.py`` file to
deal with any deprecated options.
Congratulations, your JupyterHub has been upgraded!

View File

@@ -13,3 +13,4 @@ Module: :mod:`jupyterhub.app`
-------------------
.. autoconfigurable:: JupyterHub

View File

@@ -26,7 +26,3 @@ Module: :mod:`jupyterhub.auth`
.. autoconfigurable:: PAMAuthenticator
:class:`DummyAuthenticator`
---------------------------
.. autoconfigurable:: DummyAuthenticator

View File

@@ -1,8 +1,8 @@
.. _api-index:
##############
JupyterHub API
##############
##################
The JupyterHub API
##################
:Release: |release|
:Date: |today|
@@ -17,6 +17,11 @@ information on:
- making an API request programmatically using the requests library (a minimal example follows this list)
- learning more about JupyterHub's API
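As a concrete example, the sketch below lists users via the REST API with the
requests library. The URL and token are placeholders: substitute your own
deployment's API URL and a token you have generated (e.g. from the token page
or ``jupyterhub token``):
.. code-block:: python

import requests

api_url = 'http://127.0.0.1:8081/hub/api'  # default Hub API URL; adjust to your deployment
token = '<replace-with-your-api-token>'

r = requests.get(
    api_url + '/users',
    headers={'Authorization': 'token %s' % token},
)
r.raise_for_status()
for user in r.json():
    print(user['name'], user.get('server'))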
The same JupyterHub API spec, as found here, is available in an interactive form
`here (on swagger's petstore) <http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default>`__.
The `OpenAPI Initiative`_ (fka Swagger™) is a project used to describe
and document RESTful APIs.
JupyterHub API Reference:
.. toctree::

View File

@@ -20,3 +20,4 @@ Module: :mod:`jupyterhub.proxy`
.. autoconfigurable:: ConfigurableHTTPProxy
:members: debug, auth_token, check_running_interval, api_url, command

View File

@@ -14,3 +14,4 @@ Module: :mod:`jupyterhub.services.service`
.. autoconfigurable:: Service
:members: name, admin, url, api_token, managed, kind, command, cwd, environment, user, oauth_client_id, server, prefix, proxy_spec

View File

@@ -38,3 +38,4 @@ Module: :mod:`jupyterhub.services.auth`
--------------------------------
.. autoclass:: HubOAuthCallbackHandler

View File

@@ -13,9 +13,10 @@ Module: :mod:`jupyterhub.spawner`
----------------
.. autoconfigurable:: Spawner
:members: options_from_form, poll, start, stop, get_args, get_env, get_state, template_namespace, format_string, create_certs, move_certs
:members: options_from_form, poll, start, stop, get_args, get_env, get_state, template_namespace, format_string
:class:`LocalProcessSpawner`
----------------------------
.. autoconfigurable:: LocalProcessSpawner

View File

@@ -34,3 +34,4 @@ Module: :mod:`jupyterhub.user`
.. attribute:: spawner
The user's :class:`~.Spawner` instance.

File diff suppressed because one or more lines are too long

View File

@@ -1,6 +1,11 @@
# -*- coding: utf-8 -*-
#
import os
import sys
import os
import shlex
# For conversion from markdown to html
import recommonmark.parser
# Set paths
sys.path.insert(0, os.path.abspath('.'))
@@ -16,22 +21,17 @@ extensions = [
'sphinx.ext.intersphinx',
'sphinx.ext.napoleon',
'autodoc_traits',
'sphinx_copybutton',
'sphinx-jsonschema',
'myst_parser',
]
myst_enable_extensions = [
'colon_fence',
'deflist',
]
templates_path = ['_templates']
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'JupyterHub'
copyright = '2016, Project Jupyter team'
author = 'Project Jupyter team'
project = u'JupyterHub'
copyright = u'2016, Project Jupyter team'
author = u'Project Jupyter team'
# Autopopulate version
from os.path import dirname
@@ -39,6 +39,7 @@ from os.path import dirname
docs = dirname(dirname(__file__))
root = dirname(docs)
sys.path.insert(0, root)
sys.path.insert(0, os.path.join(docs, 'sphinxext'))
import jupyterhub
@@ -55,64 +56,9 @@ todo_include_todos = False
# Set the default role so we can use `foo` instead of ``foo``
default_role = 'literal'
# -- Config -------------------------------------------------------------
from jupyterhub.app import JupyterHub
from docutils import nodes
from sphinx.directives.other import SphinxDirective
from contextlib import redirect_stdout
from io import StringIO
# create a temp instance of JupyterHub just to get the output of the generate-config
# and help --all commands.
jupyterhub_app = JupyterHub()
class ConfigDirective(SphinxDirective):
"""Generate the configuration file output for use in the documentation."""
has_content = False
required_arguments = 0
optional_arguments = 0
final_argument_whitespace = False
option_spec = {}
def run(self):
# The generated configuration file for this version
generated_config = jupyterhub_app.generate_config_file()
# post-process output
home_dir = os.environ['HOME']
generated_config = generated_config.replace(home_dir, '$HOME', 1)
par = nodes.literal_block(text=generated_config)
return [par]
class HelpAllDirective(SphinxDirective):
"""Print the output of jupyterhub help --all for use in the documentation."""
has_content = False
required_arguments = 0
optional_arguments = 0
final_argument_whitespace = False
option_spec = {}
def run(self):
# The output of the help command for this version
buffer = StringIO()
with redirect_stdout(buffer):
jupyterhub_app.print_help('--help-all')
all_help = buffer.getvalue()
# post-process output
home_dir = os.environ['HOME']
all_help = all_help.replace(home_dir, '$HOME', 1)
par = nodes.literal_block(text=all_help)
return [par]
def setup(app):
app.add_css_file('custom.css')
app.add_directive('jupyterhub-generate-config', ConfigDirective)
app.add_directive('jupyterhub-help-all', HelpAllDirective)
# -- Source -------------------------------------------------------------
source_parsers = {'.md': 'recommonmark.parser.CommonMarkParser'}
source_suffix = ['.rst', '.md']
# source_encoding = 'utf-8-sig'
@@ -120,7 +66,7 @@ source_suffix = ['.rst', '.md']
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages.
html_theme = 'pydata_sphinx_theme'
html_theme = 'alabaster'
html_logo = '_static/images/logo/logo.png'
html_favicon = '_static/images/logo/favicon.ico'
@@ -128,25 +74,33 @@ html_favicon = '_static/images/logo/favicon.ico'
# Paths that contain custom static files (such as style sheets)
html_static_path = ['_static']
htmlhelp_basename = 'JupyterHubdoc'
html_theme_options = {
"icon_links": [
{
"name": "GitHub",
"url": "https://github.com/jupyterhub/jupyterhub",
"icon": "fab fa-github-square",
},
{
"name": "Discourse",
"url": "https://discourse.jupyter.org/c/jupyterhub/10",
"icon": "fab fa-discourse",
},
],
"use_edit_page_button": True,
"navbar_align": "left",
'show_related': True,
'description': 'Documentation for JupyterHub',
'github_user': 'jupyterhub',
'github_repo': 'jupyterhub',
'github_banner': False,
'github_button': True,
'github_type': 'star',
'show_powered_by': False,
'extra_nav_links': {
'GitHub Repo': 'http://github.com/jupyterhub/jupyterhub',
'Issue Tracker': 'http://github.com/jupyterhub/jupyterhub/issues',
},
}
html_sidebars = {
'**': [
'about.html',
'searchbox.html',
'navigation.html',
'relations.html',
'sourcelink.html',
]
}
htmlhelp_basename = 'JupyterHubdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
@@ -163,8 +117,8 @@ latex_documents = [
(
master_doc,
'JupyterHub.tex',
'JupyterHub Documentation',
'Project Jupyter team',
u'JupyterHub Documentation',
u'Project Jupyter team',
'manual',
)
]
@@ -181,7 +135,7 @@ latex_documents = [
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [(master_doc, 'jupyterhub', 'JupyterHub Documentation', [author], 1)]
man_pages = [(master_doc, 'jupyterhub', u'JupyterHub Documentation', [author], 1)]
# man_show_urls = False
@@ -195,7 +149,7 @@ texinfo_documents = [
(
master_doc,
'JupyterHub',
'JupyterHub Documentation',
u'JupyterHub Documentation',
author,
'JupyterHub',
'One line description of project.',
@@ -222,20 +176,19 @@ epub_exclude_files = ['search.html']
# -- Intersphinx ----------------------------------------------------------
intersphinx_mapping = {
'python': ('https://docs.python.org/3/', None),
'tornado': ('https://www.tornadoweb.org/en/stable/', None),
}
intersphinx_mapping = {'https://docs.python.org/3/': None}
# -- Read The Docs --------------------------------------------------------
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
if on_rtd:
if not on_rtd:
html_theme = 'alabaster'
else:
# readthedocs.org uses their theme by default, so no need to specify it
# build both metrics and rest-api, since RTD doesn't run make
# build rest-api, since RTD doesn't run make
from subprocess import check_call as sh
sh(['make', 'metrics', 'scopes'], cwd=docs)
sh(['make', 'rest-api'], cwd=docs)
# -- Spell checking -------------------------------------------------------

View File

@@ -1,30 +0,0 @@
.. _contributing/community:
================================
Community communication channels
================================
We use `Discourse <https://discourse.jupyter.org>`_ for online discussion.
Everyone in the Jupyter community is welcome to bring ideas and questions there.
In addition, we use `Gitter <https://gitter.im>`_ for online, real-time text chat,
a place for more ephemeral discussions.
The primary Gitter channel for JupyterHub is `jupyterhub/jupyterhub <https://gitter.im/jupyterhub/jupyterhub>`_.
Gitter isn't archived or searchable, so we recommend going to discourse first
to make sure that discussions are most useful and accessible to the community.
Remember that our community is distributed across the world in various
timezones, so be patient if you do not get an answer immediately!
GitHub issues are used for most long-form project discussions, bug reports
and feature requests. Issues related to a specific authenticator or
spawner should be directed to the appropriate repository for the
authenticator or spawner. If you are using a specific JupyterHub
distribution (such as `Zero to JupyterHub on Kubernetes <http://github.com/jupyterhub/zero-to-jupyterhub-k8s>`_
or `The Littlest JupyterHub <http://github.com/jupyterhub/the-littlest-jupyterhub/>`_),
you should open issues directly in their repository. If you can not
find a repository to open your issue in, do not worry! Create it in the `main
JupyterHub repository <https://github.com/jupyterhub/jupyterhub/>`_ and our
community will help you figure it out.
A `mailing list <https://groups.google.com/forum/#!forum/jupyter>`_ for all
of Project Jupyter exists, along with one for `teaching with Jupyter
<https://groups.google.com/forum/#!forum/jupyter-education>`_.

View File

@@ -1,78 +0,0 @@
.. _contributing/docs:
==========================
Contributing Documentation
==========================
Documentation is often more important than code. This page helps
you get set up on how to contribute documentation to JupyterHub.
Building documentation locally
==============================
We use `sphinx <http://sphinx-doc.org>`_ to build our documentation. It takes
our documentation source files (written in `markdown
<https://daringfireball.net/projects/markdown/>`_ or `reStructuredText
<https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_ &
stored under the ``docs/source`` directory) and converts them into various
formats for people to read. To make sure the documentation you write or
change renders correctly, it is good practice to test it locally.
#. Make sure you have successfully completed :ref:`contributing/setup`.
#. Install the packages required to build the docs.
.. code-block:: bash
python3 -m pip install -r docs/requirements.txt
#. Build the html version of the docs. This is the most commonly used
output format, so verifying it renders as it should is usually good
enough.
.. code-block:: bash
cd docs
make html
This step will display any syntax or formatting errors in the documentation,
along with the filename / line number in which they occurred. Fix them,
and re-run the ``make html`` command to re-render the documentation.
#. View the rendered documentation by opening ``build/html/index.html`` in
a web browser.
.. tip::
On macOS, you can open a file from the terminal with ``open <path-to-file>``.
On Linux, you can do the same with ``xdg-open <path-to-file>``.
.. _contributing/docs/conventions:
Documentation conventions
=========================
This section lists various conventions we use in our documentation. This is a
living document that grows over time, so feel free to add to it / change it!
Our entire documentation does not yet fully conform to these conventions,
so help in making it so would be appreciated!
``pip`` invocation
------------------
There are many ways to invoke a ``pip`` command; we recommend the following
approach:
.. code-block:: bash
python3 -m pip
This invokes pip explicitly using the python3 binary that you are
currently using. This is the **recommended way** to invoke pip
in our documentation, since it is least likely to cause problems
with python3 and pip being from different environments.
For more information on how to invoke ``pip`` commands, see
`the pip documentation <https://pip.pypa.io/en/stable/>`_.

View File

@@ -1,21 +0,0 @@
============
Contributing
============
We want you to contribute to JupyterHub in ways that are most exciting
& useful to you. We value documentation, testing, bug reporting & code equally,
and are glad to have your contributions in whatever form you wish :)
Our `Code of Conduct <https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md>`_
(`reporting guidelines <https://github.com/jupyter/governance/blob/HEAD/conduct/reporting_online.md>`_)
helps keep our community welcoming to as many people as possible.
.. toctree::
:maxdepth: 2
community
setup
docs
tests
roadmap
security

View File

@@ -1,95 +0,0 @@
# The JupyterHub roadmap
This roadmap collects "next steps" for JupyterHub. It is about creating a
shared understanding of the project's vision and direction amongst
the community of users, contributors, and maintainers.
The goal is to communicate priorities and upcoming release plans.
It is not aimed at limiting contributions to what is listed here.
## Using the roadmap
### Sharing Feedback on the Roadmap
All of the community is encouraged to provide feedback as well as share new
ideas with the community. Please do so by submitting an issue. If you want to
have an informal conversation first use one of the other communication channels.
After submitting the issue, others from the community will probably
respond with questions or comments they have to clarify the issue. The
maintainers will help identify what a good next step is for the issue.
### What do we mean by "next step"?
When submitting an issue, think about what "next step" category best describes
your issue:
- **now**, a concrete/actionable step that is ready for someone to start work on.
These might be items that have a link to an issue, or something more abstract like
"decrease typos and dead links in the documentation"
- **soon**, a less concrete/actionable step that is going to happen soon;
discussions around the topic are coming close to an end, at which point it can
move into the "now" category
- **later**, abstract ideas or tasks that need a lot of discussion or
experimentation to shape the idea so that it can be executed. Can also
contain concrete/actionable steps that have been postponed on purpose
(these are steps that could be in "now" but the decision was taken to work on
them later)
### Reviewing and Updating the Roadmap
The roadmap will get updated as time passes (next review by 1st December) based
on discussions and ideas captured as issues.
This means this list should not be exhaustive, it should only represent
the "top of the stack" of ideas. It should
not function as a wish list, collection of feature requests or todo list.
For those please create a
[new issue](https://github.com/jupyterhub/jupyterhub/issues/new).
The roadmap should give the reader an idea of what is happening next, what needs
input and discussion before it can happen and what has been postponed.
## The roadmap proper
### Project vision
JupyterHub is a dependable tool used by humans that reduces the complexity of
creating the environment in which a piece of software can be executed.
### Now
These "Now" items are considered active areas of focus for the project:
- HubShare - a sharing service for use with JupyterHub.
- Users should be able to:
- Push a project to other users.
- Get a checkout of a project from other users.
- Push updates to a published project.
- Pull updates from a published project.
- Manage conflicts/merges by simply picking a version (our/theirs)
- Get a checkout of a project from the internet. These steps are completely different from saving notebooks/files.
- Have directories that are managed by git completely separately from our stuff.
- Look at pushed content that they have access to without an explicit pull.
- Define and manage teams of users.
- Adding/removing a user to/from a team gives/removes them access to all projects that team has access to.
- Build other services, such as static HTML publishing and dashboarding on top of these things.
### Soon
These "Soon" items are under discussion. Once an item reaches the point of an
actionable plan, the item will be moved to the "Now" section. Typically,
these will be moved at a future review of the roadmap.
- resource monitoring and management:
- (prometheus?) API for resource monitoring
- tracking activity on single-user servers instead of the proxy
- notes and activity tracking per API token
### Later
The "Later" items are things that are at the back of the project's mind. At this
time there is no active plan for an item. The project would like to find the
resources and time to discuss these ideas.
- real-time collaboration
- Enter into real-time collaboration mode for a project that starts a shared execution context.
- Once the single-user notebook package supports realtime collaboration,
implement sharing mechanism integrated into the Hub.

View File

@@ -1,10 +0,0 @@
Reporting security issues in Jupyter or JupyterHub
==================================================
If you find a security vulnerability in Jupyter or JupyterHub,
whether it is a failure of the security model described in :doc:`../reference/websecurity`
or a failure in implementation,
please report it to security@ipython.org.
If you prefer to encrypt your security reports,
you can use :download:`this PGP public key </ipython_security.asc>`.

View File

@@ -1,188 +0,0 @@
.. _contributing/setup:
================================
Setting up a development install
================================
System requirements
===================
JupyterHub can only run on MacOS or Linux operating systems. If you are
using Windows, we recommend using `VirtualBox <https://virtualbox.org>`_
or a similar system to run `Ubuntu Linux <https://ubuntu.com>`_ for
development.
Install Python
--------------
JupyterHub is written in the `Python <https://python.org>`_ programming language, and
requires you to have at least version 3.5 installed locally. If you haven't
installed Python before, the recommended way to install it is to use
`miniconda <https://conda.io/miniconda.html>`_. Remember to get the Python 3 version,
and **not** the Python 2 version!
Install nodejs
--------------
``configurable-http-proxy``, the default proxy implementation for
JupyterHub, is written in Javascript to run on `NodeJS
<https://nodejs.org/en/>`_. If you have not installed nodejs before, we
recommend installing it in the ``miniconda`` environment you set up for
Python. You can do so with ``conda install nodejs``.
Install git
-----------
JupyterHub uses `git <https://git-scm.com>`_ & `GitHub <https://github.com>`_
for development & collaboration. You need to `install git
<https://git-scm.com/book/en/v2/Getting-Started-Installing-Git>`_ to work on
JupyterHub. We also recommend getting a free account on GitHub.com.
Setting up a development install
================================
When developing JupyterHub, you need to make changes to the code & see
their effects quickly. You need to do a developer install to make that
happen.
.. note:: This guide does not attempt to dictate *how* development
environments should be isolated since that is a personal preference and can
be achieved in many ways, for example `tox`, `conda`, `docker`, etc. See this
`forum thread <https://discourse.jupyter.org/t/thoughts-on-using-tox/3497>`_ for
a more detailed discussion.
1. Clone the `JupyterHub git repository <https://github.com/jupyterhub/jupyterhub>`_
to your computer.
.. code:: bash
git clone https://github.com/jupyterhub/jupyterhub
cd jupyterhub
2. Make sure the ``python`` you installed and the ``npm`` you installed
are available to you on the command line.
.. code:: bash
python -V
This should return a version number greater than or equal to 3.5.
.. code:: bash
npm -v
This should return a version number greater than or equal to 5.0.
3. Install ``configurable-http-proxy``. This is required to run
JupyterHub.
.. code:: bash
npm install -g configurable-http-proxy
If you get an error that says ``Error: EACCES: permission denied``,
you might need to prefix the command with ``sudo``. If you do not
have access to sudo, you may instead run the following commands:
.. code:: bash
npm install configurable-http-proxy
export PATH=$PATH:$(pwd)/node_modules/.bin
The second line needs to be run every time you open a new terminal.
4. Install the python packages required for JupyterHub development.
.. code:: bash
python3 -m pip install -r dev-requirements.txt
python3 -m pip install -r requirements.txt
5. Setup a database.
The default database engine is ``sqlite`` so if you are just trying
to get up and running quickly for local development that should be
available via `python <https://docs.python.org/3.5/library/sqlite3.html>`__.
See :doc:`/reference/database` for details on other supported databases.
6. Install the development version of JupyterHub. This lets you edit
JupyterHub code in a text editor & restart the JupyterHub process to
see your code changes immediately.
.. code:: bash
python3 -m pip install --editable .
7. You are now ready to start JupyterHub!
.. code:: bash
jupyterhub
8. You can access JupyterHub from your browser at
``http://localhost:8000`` now.
Happy developing!
Using DummyAuthenticator & SimpleLocalProcessSpawner
====================================================
To simplify testing of JupyterHub, it's helpful to use
:class:`~jupyterhub.auth.DummyAuthenticator` instead of the default JupyterHub
authenticator and SimpleLocalProcessSpawner instead of the default spawner.
There is a sample configuration file that does this in
``testing/jupyterhub_config.py``. To launch jupyterhub with this
configuration:
.. code:: bash
jupyterhub -f testing/jupyterhub_config.py
The default JupyterHub `authenticator
<https://jupyterhub.readthedocs.io/en/stable/reference/authenticators.html#the-default-pam-authenticator>`_
& `spawner
<https://jupyterhub.readthedocs.io/en/stable/api/spawner.html#localprocessspawner>`_
require your system to have user accounts for each user you want to log in to
JupyterHub as.
DummyAuthenticator allows you to log in with any username & password,
while SimpleLocalProcessSpawner allows you to start servers without having to
create a unix user for each JupyterHub user. Together, these make it
much easier to test JupyterHub.
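The heart of that sample configuration is just a couple of lines; the sketch
below shows the idea (the actual ``testing/jupyterhub_config.py`` in the
repository may contain more than this):
.. code-block:: python

# Minimal testing configuration -- sketch only.
c.JupyterHub.authenticator_class = 'dummy'
c.JupyterHub.spawner_class = 'simple'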
Tip: If you are working on parts of JupyterHub that are common to all
authenticators & spawners, we recommend using both DummyAuthenticator &
SimpleLocalProcessSpawner. If you are working on just authenticator related
parts, use only SimpleLocalProcessSpawner. Similarly, if you are working on
just spawner related parts, use only DummyAuthenticator.
Troubleshooting
===============
This section lists common ways setting up your development environment may
fail, and how to fix them. Please add to the list if you encounter yet
another way it can fail!
``lessc`` not found
-------------------
If the ``python3 -m pip install --editable .`` command fails and complains about
``lessc`` being unavailable, you may need to explicitly install some
additional JavaScript dependencies:
.. code:: bash
npm install
This will fetch client-side JavaScript dependencies necessary to compile
CSS.
You may also need to manually update JavaScript and CSS after some
development updates, with:
.. code:: bash
python3 setup.py js # fetch updated client-side js
python3 setup.py css # recompile CSS from LESS sources

View File

@@ -1,68 +0,0 @@
.. _contributing/tests:
==================
Testing JupyterHub
==================
Unit tests help validate that JupyterHub works the way we think it does,
and continues to do so when changes occur. They also help communicate
precisely what we expect our code to do.
JupyterHub uses `pytest <https://pytest.org>`_ for all our tests. You
can find them under the ``jupyterhub/tests`` directory in the git repository.
Running the tests
==================
#. Make sure you have completed :ref:`contributing/setup`. You should be able
to start ``jupyterhub`` from the commandline & access it from your
web browser. This ensures that the dev environment is properly set
up for tests to run.
#. You can run all tests in JupyterHub
.. code-block:: bash
pytest -v jupyterhub/tests
This should display progress as it runs all the tests, printing
information about any test failures as they occur.
If you wish to confirm test coverage, run the tests with the `--cov` flag:
.. code-block:: bash
pytest -v --cov=jupyterhub jupyterhub/tests
#. You can also run tests in just a specific file:
.. code-block:: bash
pytest -v jupyterhub/tests/<test-file-name>
#. To run a specific test only, you can do:
.. code-block:: bash
pytest -v jupyterhub/tests/<test-file-name>::<test-name>
This runs the test with function name ``<test-name>`` defined in
``<test-file-name>``. This is very useful when you are iteratively
developing a single test.
For example, to run the test ``test_shutdown`` in the file ``test_api.py``,
you would run:
.. code-block:: bash
pytest -v jupyterhub/tests/test_api.py::test_shutdown
Troubleshooting Test Failures
=============================
All the tests are failing
-------------------------
Make sure you have completed all the steps in :ref:`contributing/setup` successfully, and
can launch ``jupyterhub`` from the terminal.

View File

@@ -1,46 +0,0 @@
Eventlogging and Telemetry
==========================
JupyterHub can be configured to record structured events from a running server using Jupyter's `Telemetry System`_. The types of events that JupyterHub emits are defined by `JSON schemas`_ listed at the bottom of this page_.
.. _logging: https://docs.python.org/3/library/logging.html
.. _`Telemetry System`: https://github.com/jupyter/telemetry
.. _`JSON schemas`: https://json-schema.org/
How to emit events
------------------
Event logging is handled by its ``EventLog`` object. This leverages Python's standard logging_ library to emit, filter, and collect event data.
To begin recording events, you'll need to set two configurations:
1. ``handlers``: tells the EventLog *where* to route your events. This trait is a list of Python logging handlers that route events to their destinations.
2. ``allowed_schemas``: tells the EventLog *which* events should be recorded. No events are emitted by default; all recorded events must be listed here.
Here's a basic example:
.. code-block:: python
import logging
c.EventLog.handlers = [
logging.FileHandler('event.log'),
]
c.EventLog.allowed_schemas = [
'hub.jupyter.org/server-action'
]
The output is a file, ``"event.log"``, with events recorded as JSON data.
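Each recorded event is one JSON object. As a minimal sketch, assuming the
``FileHandler`` above writes one JSON object per line, the log can be read back
with the standard library:

.. code-block:: python

    import json

    # read back events recorded by the FileHandler configured above
    with open('event.log') as f:
        for line in f:
            event = json.loads(line)  # one JSON object per line
            print(event)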
.. _page:
Event schemas
-------------
.. toctree::
:maxdepth: 2
server-actions.rst

View File

@@ -1 +0,0 @@
.. jsonschema:: ../../../jupyterhub/event-schemas/server-actions/v1.yaml

View File

@@ -8,29 +8,27 @@ high performance computing.
Please submit pull requests to update information or to add new institutions or uses.
## Academic Institutions, Research Labs, and Supercomputer Centers
### University of California Berkeley
- [BIDS - Berkeley Institute for Data Science](https://bids.berkeley.edu/)
- [Teaching with Jupyter notebooks and JupyterHub](https://bids.berkeley.edu/resources/videos/teaching-ipythonjupyter-notebooks-and-jupyterhub)
- [Teaching with Jupyter notebooks and JupyterHub](https://bids.berkeley.edu/resources/videos/teaching-ipythonjupyter-notebooks-and-jupyterhub)
- [Data 8](http://data8.org/)
- [GitHub organization](https://github.com/data-8)
- [GitHub organization](https://github.com/data-8)
- [NERSC](http://www.nersc.gov/)
- [Press release on Jupyter and Cori](http://www.nersc.gov/news-publications/nersc-news/nersc-center-news/2016/jupyter-notebooks-will-open-up-new-possibilities-on-nerscs-cori-supercomputer/)
- [Moving and sharing data](https://www.nersc.gov/assets/Uploads/03-MovingAndSharingData-Cholia.pdf)
- [Press release on Jupyter and Cori](http://www.nersc.gov/news-publications/nersc-news/nersc-center-news/2016/jupyter-notebooks-will-open-up-new-possibilities-on-nerscs-cori-supercomputer/)
- [Moving and sharing data](https://www.nersc.gov/assets/Uploads/03-MovingAndSharingData-Cholia.pdf)
- [Research IT](http://research-it.berkeley.edu)
- [JupyterHub server supports campus research computation](http://research-it.berkeley.edu/blog/17/01/24/free-fully-loaded-jupyterhub-server-supports-campus-research-computation)
- [JupyterHub server supports campus research computation](http://research-it.berkeley.edu/blog/17/01/24/free-fully-loaded-jupyterhub-server-supports-campus-research-computation)
### University of California Davis
- [Spinning up multiple Jupyter Notebooks on AWS for a tutorial](https://github.com/mblmicdiv/course2017/blob/HEAD/exercises/sourmash-setup.md)
- [Spinning up multiple Jupyter Notebooks on AWS for a tutorial](https://github.com/mblmicdiv/course2017/blob/master/exercises/sourmash-setup.md)
Although not technically a JupyterHub deployment, this tutorial setup
may be helpful to others in the Jupyter community.
@@ -61,35 +59,23 @@ easy to do with RStudio too.
- [jupyterhub-deploy-teaching](https://github.com/jupyterhub/jupyterhub-deploy-teaching) based on work by Brian Granger for Cal Poly's Data Science 301 Course
### Chameleon
[Chameleon](https://www.chameleoncloud.org) is an NSF-funded configurable experimental environment for large-scale computer science systems research with [bare metal reconfigurability](https://chameleoncloud.readthedocs.io/en/latest/technical/baremetal.html). Chameleon users utilize JupyterHub to document and reproduce their complex CISE and networking experiments.
- [Shared JupyterHub](https://jupyter.chameleoncloud.org): provides a common "workbench" environment for any Chameleon user.
- [Trovi](https://www.chameleoncloud.org/experiment/share): a sharing portal of experiments, tutorials, and examples, which users can launch as a dedicated isolated environments on Chameleon's JupyterHub.
### Clemson University
- Advanced Computing
- [Palmetto cluster and JupyterHub](http://citi.sites.clemson.edu/2016/08/18/JupyterHub-for-Palmetto-Cluster.html)
- [Palmetto cluster and JupyterHub](http://citi.sites.clemson.edu/2016/08/18/JupyterHub-for-Palmetto-Cluster.html)
### University of Colorado Boulder
- (CU Research Computing) CURC
- [JupyterHub User Guide](https://www.rc.colorado.edu/support/user-guide/jupyterhub.html)
- Slurm job dispatched on Crestone compute cluster
- log troubleshooting
- Profiles in IPython Clusters tab
- [Parallel Processing with JupyterHub tutorial](https://www.rc.colorado.edu/support/examples-and-tutorials/parallel-processing-with-jupyterhub.html)
- [Parallel Programming with JupyterHub document](https://www.rc.colorado.edu/book/export/html/833)
- (CU Research Computing) CURC
- [JupyterHub User Guide](https://www.rc.colorado.edu/support/user-guide/jupyterhub.html)
- Slurm job dispatched on Crestone compute cluster
- log troubleshooting
- Profiles in IPython Clusters tab
- [Parallel Processing with JupyterHub tutorial](https://www.rc.colorado.edu/support/examples-and-tutorials/parallel-processing-with-jupyterhub.html)
- [Parallel Programming with JupyterHub document](https://www.rc.colorado.edu/book/export/html/833)
- Earth Lab at CU
- [Tutorial on Parallel R on JupyterHub](https://earthdatascience.org/tutorials/parallel-r-on-jupyterhub/)
### George Washington University
- [Jupyter Hub](http://go.gwu.edu/jupyter) with university single-sign-on. Deployed early 2017.
- [Tutorial on Parallel R on JupyterHub](https://earthdatascience.org/tutorials/parallel-r-on-jupyterhub/)
### HTCondor
@@ -97,15 +83,10 @@ easy to do with RStudio too.
### University of Illinois
- https://datascience.business.illinois.edu (currently down; checked 04/26/19)
### IllustrisTNG Simulation Project
- [JupyterHub/Lab-based analysis platform, part of the TNG public data release](http://www.tng-project.org/data/)
- https://datascience.business.illinois.edu
### MIT and Lincoln Labs
- https://supercloud.mit.edu/
### Michigan State University
@@ -119,44 +100,31 @@ easy to do with RStudio too.
- https://dsa.missouri.edu/faq/
### Paderborn University
- [Data Science (DICE) group](https://dice.cs.uni-paderborn.de/)
- [nbgraderutils](https://github.com/dice-group/nbgraderutils): Use JupyterHub + nbgrader + iJava kernel for online Java exercises. Used in lecture Statistical Natural Language Processing.
### Penn State University
- [Press release](https://news.psu.edu/story/523093/2018/05/24/new-open-source-web-apps-available-students-and-faculty): "New open-source web apps available for students and faculty" (but Hub is currently down; checked 04/26/19)
### University of Rochester CIRC
### University of Rochester CIRC
- [JupyterHub Userguide](https://info.circ.rochester.edu/Web_Applications/JupyterHub.html) - Slurm, beehive
### University of California San Diego
- San Diego Supercomputer Center - Andrea Zonca
- [Deploy JupyterHub on a Supercomputer with SSH](https://zonca.github.io/2017/05/jupyterhub-hpc-batchspawner-ssh.html)
- [Run Jupyterhub on a Supercomputer](https://zonca.github.io/2015/04/jupyterhub-hpc.html)
- [Deploy JupyterHub on a VM for a Workshop](https://zonca.github.io/2016/04/jupyterhub-sdsc-cloud.html)
- [Customize your Python environment in Jupyterhub](https://zonca.github.io/2017/02/customize-python-environment-jupyterhub.html)
- [Jupyterhub deployment on multiple nodes with Docker Swarm](https://zonca.github.io/2016/05/jupyterhub-docker-swarm.html)
- [Sample deployment of Jupyterhub in HPC on SDSC Comet](https://zonca.github.io/2017/02/sample-deployment-jupyterhub-hpc.html)
- [Deploy JupyterHub on a Supercomputer with SSH](https://zonca.github.io/2017/05/jupyterhub-hpc-batchspawner-ssh.html)
- [Run Jupyterhub on a Supercomputer](https://zonca.github.io/2015/04/jupyterhub-hpc.html)
- [Deploy JupyterHub on a VM for a Workshop](https://zonca.github.io/2016/04/jupyterhub-sdsc-cloud.html)
- [Customize your Python environment in Jupyterhub](https://zonca.github.io/2017/02/customize-python-environment-jupyterhub.html)
- [Jupyterhub deployment on multiple nodes with Docker Swarm](https://zonca.github.io/2016/05/jupyterhub-docker-swarm.html)
- [Sample deployment of Jupyterhub in HPC on SDSC Comet](https://zonca.github.io/2017/02/sample-deployment-jupyterhub-hpc.html)
- Educational Technology Services - Paul Jamason
- [jupyterhub.ucsd.edu](https://jupyterhub.ucsd.edu)
- [jupyterhub.ucsd.edu](https://jupyterhub.ucsd.edu)
### TACC University of Texas
### Texas A&M
- Kristen Thyng - Oceanography
- [Teaching with JupyterHub and nbgrader](http://kristenthyng.com/blog/2016/09/07/jupyterhub+nbgrader/)
- [Teaching with JupyterHub and nbgrader](http://kristenthyng.com/blog/2016/09/07/jupyterhub+nbgrader/)
### Elucidata
- What's new in Jupyter Notebooks @[Elucidata](https://elucidata.io/):
- Using Jupyter Notebooks with Jupyterhub on GCP, managed by GKE - https://medium.com/elucidata/why-you-should-be-using-a-jupyter-notebook-8385a4ccd93d
## Service Providers
@@ -173,6 +141,7 @@ easy to do with RStudio too.
[Everware](https://github.com/everware) Reproducible and reusable science powered by jupyterhub and docker. Like nbviewer, but executable. CERN, Geneva [website](http://everware.xyz/)
### Microsoft Azure
- https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-data-science-linux-dsvm-intro
@@ -182,10 +151,9 @@ easy to do with RStudio too.
- https://getcarina.com/blog/learning-how-to-whale/
- http://carolynvanslyck.com/talk/carina/jupyterhub/#/
### Hadoop
- [Deploying JupyterHub on Hadoop](https://jupyterhub-on-hadoop.readthedocs.io)
### jcloud.io
- Open to public JupyterHub server
- https://jcloud.io
## Miscellaneous
- https://medium.com/@ybarraud/setting-up-jupyterhub-with-sudospawner-and-anaconda-844628c0dbee#.rm3yt87e1

View File

@@ -4,59 +4,39 @@ The default Authenticator uses [PAM][] to authenticate system users with
their username and password. With the default Authenticator, any user
with an account and password on the system will be allowed to login.
## Create a set of allowed users
## Create a whitelist of users
You can restrict which users are allowed to login with a whitelist,
`Authenticator.whitelist`:
You can restrict which users are allowed to login with a set,
`Authenticator.allowed_users`:
```python
c.Authenticator.allowed_users = {'mal', 'zoe', 'inara', 'kaylee'}
c.Authenticator.whitelist = {'mal', 'zoe', 'inara', 'kaylee'}
```
Users in the `allowed_users` set are added to the Hub database when the Hub is
Users in the whitelist are added to the Hub database when the Hub is
started.
```{warning}
If this configuration value is not set, then **all authenticated users will be allowed into your hub**.
```
## Configure admins (`admin_users`)
```{note}
As of JupyterHub 2.0, the full permissions of `admin_users`
should not be required.
Instead, you can assign [roles][] to users or groups
with only the scopes they require.
```
Admin users of JupyterHub, `admin_users`, can add and remove users from
the user `allowed_users` set. `admin_users` can take actions on other users'
the user `whitelist`. `admin_users` can take actions on other users'
behalf, such as stopping and restarting their servers.
A set of initial admin users, `admin_users` can be configured as follows:
A set of initial admin users, `admin_users` can configured be as follows:
```python
c.Authenticator.admin_users = {'mal', 'zoe'}
```
Users in the admin set are automatically added to the user `allowed_users` set,
Users in the admin list are automatically added to the user `whitelist`,
if they are not already present.
Each authenticator may have different ways of determining whether a user is an
administrator. By default JupyterHub uses the PAMAuthenticator which provides the
`admin_groups` option and can set administrator status based on a user
group. For example we can let any user in the `wheel` group be admin:
```python
c.PAMAuthenticator.admin_groups = {'wheel'}
```
## Give admin access to other users' notebook servers (`admin_access`)
Since the default `JupyterHub.admin_access` setting is `False`, the admins
Since the default `JupyterHub.admin_access` setting is False, the admins
do not have permission to log in to the single user notebook servers
owned by _other users_. If `JupyterHub.admin_access` is set to `True`,
then admins have permission to log in _as other users_ on their
owned by *other users*. If `JupyterHub.admin_access` is set to True,
then admins have permission to log in *as other users* on their
respective machines, for debugging. **As a courtesy, you should make
sure your users know if admin_access is enabled.**
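If you do decide to enable it, a minimal sketch of the setting in
`jupyterhub_config.py` is:

```python
c.JupyterHub.admin_access = True
```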
@@ -64,12 +44,12 @@ sure your users know if admin_access is enabled.**
Users can be added to and removed from the Hub via either the admin
panel or the REST API. When a user is **added**, the user will be
automatically added to the `allowed_users` set and database. Restarting the Hub
will not require manually updating the `allowed_users` set in your config file,
automatically added to the whitelist and database. Restarting the Hub
will not require manually updating the whitelist in your config file,
as the users will be loaded from the database.
After starting the Hub once, it is not sufficient to **remove** a user
from the allowed users set in your config file. You must also remove the user
from the whitelist in your config file. You must also remove the user
from the Hub's database, either by deleting the user from JupyterHub's
admin page, or you can clear the `jupyterhub.sqlite` database and start
fresh.
@@ -102,7 +82,6 @@ JupyterHub's [OAuthenticator][] currently supports the following
popular services:
- Auth0
- Azure AD
- Bitbucket
- CILogon
- GitHub
@@ -116,16 +95,5 @@ popular services:
A generic implementation, which you can use for OAuth authentication
with any provider, is also available.
## Use DummyAuthenticator for testing
The `DummyAuthenticator` is a simple authenticator that
allows for any username/password unless a global password has been set. If
set, it will allow for any username as long as the correct password is provided.
To set a global password, add this to the config file:
```python
c.DummyAuthenticator.password = "some_password"
```
[pam]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
[oauthenticator]: https://github.com/jupyterhub/oauthenticator
[PAM]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
[OAuthenticator]: https://github.com/jupyterhub/oauthenticator

View File

@@ -1,7 +1,7 @@
# Configuration Basics
The section contains basic information about configuring settings for a JupyterHub
deployment. The [Technical Reference](../reference/index)
deployment. The [Technical Reference](../reference/index.html)
documentation provides additional details.
This section will help you learn how to:
@@ -56,18 +56,18 @@ To display all command line options that are available for configuration:
```
Configuration using the command line options is done when launching JupyterHub.
For example, to start JupyterHub on `10.0.1.2:443` with https, you
For example, to start JupyterHub on ``10.0.1.2:443`` with https, you
would enter:
```bash
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
```
```
All configurable options may technically be set on the command line,
All configurable options may technically be set on the command-line,
though some are inconvenient to type. To set a particular configuration
parameter, `c.Class.trait`, you would use the command line option,
`--Class.trait`, when starting JupyterHub. For example, to configure the
`c.Spawner.notebook_dir` trait from the command line, use the
`c.Spawner.notebook_dir` trait from the command-line, use the
`--Spawner.notebook_dir` option:
```bash
@@ -77,24 +77,11 @@ jupyterhub --Spawner.notebook_dir='~/assignments'
## Configure for various deployment environments
The default authentication and process spawning mechanisms can be replaced, and
specific [authenticators](./authenticators-users-basics) and
[spawners](./spawners-basics) can be set in the configuration file.
specific [authenticators](./authenticators-users-basics.html) and
[spawners](./spawners-basics.html) can be set in the configuration file.
This enables JupyterHub to be used with a variety of authentication methods or
process control and deployment environments. [Some examples](../reference/config-examples),
process control and deployment environments. [Some examples](../reference/config-examples.html),
meant as illustration, are:
- Using GitHub OAuth instead of PAM with [OAuthenticator](https://github.com/jupyterhub/oauthenticator)
- Spawning single-user servers with Docker, using the [DockerSpawner](https://github.com/jupyterhub/dockerspawner)
## Run the proxy separately
This is _not_ strictly necessary, but useful in many cases. If you
use a custom proxy (e.g. Traefik), this is also not needed.
Connections to user servers go through the proxy, and _not_ the hub
itself. If the proxy stays running when the hub restarts (for
maintenance, re-configuration, etc.), then user connections are not
interrupted. For simplicity, by default the hub starts the proxy
automatically, so if the hub restarts, the proxy restarts, and user
connections are interrupted. It is easy to run the proxy separately;
for more information, see [the separate proxy page](../reference/separate-proxy).
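As a rough sketch (assuming the default `configurable-http-proxy` implementation),
running the proxy separately amounts to telling the Hub not to start the proxy and
pointing the Hub at the externally running proxy's API:

```python
# jupyterhub_config.py -- minimal sketch, example values only
c.ConfigurableHTTPProxy.should_start = False
c.ConfigurableHTTPProxy.api_url = 'http://127.0.0.1:8001'
# the Hub and the externally started proxy must also share an auth token,
# e.g. via the CONFIGPROXY_AUTH_TOKEN environment variable
```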

View File

@@ -1,35 +0,0 @@
# Frequently asked questions
## How do I share links to notebooks?
In short, where you see `/user/name/notebooks/foo.ipynb` use `/hub/user-redirect/notebooks/foo.ipynb` (replace `/user/name` with `/hub/user-redirect`).
Sharing links to notebooks is a common activity,
and can look different based on what you mean.
Your first instinct might be to copy the URL you see in the browser,
e.g. `hub.jupyter.org/user/yourname/notebooks/coolthing.ipynb`.
However, let's break down what this URL means:
`hub.jupyter.org/user/yourname/` is the URL prefix handled by _your server_,
which means that sharing this URL is asking the person you share the link with
to come to _your server_ and look at the exact same file.
In most circumstances, this is forbidden by permissions because the person you share with does not have access to your server.
What actually happens when someone visits this URL will depend on whether your server is running and other factors.
But what is our actual goal?
A typical situation is that you have some shared or common filesystem,
such that the same path corresponds to the same document
(either the exact same document or another copy of it).
Typically, what folks want when they do sharing like this
is for each visitor to open the same file _on their own server_,
so Breq would open `/user/breq/notebooks/foo.ipynb` and
Seivarden would open `/user/seivarden/notebooks/foo.ipynb`, etc.
JupyterHub has a special URL that does exactly this!
It's called `/hub/user-redirect/...`.
So if you replace `/user/yourname` in your URL bar
with `/hub/user-redirect`, any visitor should get the same
URL on their own server, rather than visiting yours.
In JupyterLab 2.0, this should also be the result of the "Copy Shareable Link"
action in the file browser.

View File

@@ -1,10 +1,5 @@
Get Started
===========
This section covers how to configure and customize JupyterHub for your
needs. It contains information about authentication, networking, security, and
other topics that are relevant to individuals or organizations deploying their
own JupyterHub.
Getting Started
===============
.. toctree::
:maxdepth: 2
@@ -15,5 +10,3 @@ own JupyterHub.
authenticators-users-basics
spawners-basics
services-basics
faq
institutional-faq

View File

@@ -1,260 +0,0 @@
# Institutional FAQ
This page contains common questions from users of JupyterHub,
broken down by their roles within organizations.
## For all
### Is it appropriate for adoption within a larger institutional context?
Yes! JupyterHub has been used at-scale for large pools of users, as well
as complex and high-performance computing. For example, UC Berkeley uses
JupyterHub for its Data Science Education Program courses (serving over
3,000 students). The Pangeo project uses JupyterHub to provide access
to scalable cloud computing with Dask. JupyterHub is stable and customizable
to the use-cases of large organizations.
### I keep hearing about Jupyter Notebook, JupyterLab, and now JupyterHub. Whats the difference?
Here is a quick breakdown of these three tools:
- **The Jupyter Notebook** is a document specification (the `.ipynb` file) that interweaves
narrative text with code cells and their outputs. It is also a graphical interface
that allows users to edit these documents. There are also several other graphical interfaces
that allow users to edit the `.ipynb` format (nteract, Jupyter Lab, Google Colab, Kaggle, etc).
- **JupyterLab** is a flexible and extendible user interface for interactive computing. It
has several extensions that are tailored for using Jupyter Notebooks, as well as extensions
for other parts of the data science stack.
- **JupyterHub** is an application that manages interactive computing sessions for **multiple users**.
It also connects them with infrastructure those users wish to access. It can provide
remote access to Jupyter Notebooks and JupyterLab for many people.
## For management
### Briefly, what problem does JupyterHub solve for us?
JupyterHub provides a shared platform for data science and collaboration.
It allows users to utilize familiar data science workflows (such as the scientific Python stack,
the R tidyverse, and Jupyter Notebooks) on institutional infrastructure. It also allows administrators
some control over access to resources, security, environments, and authentication.
### Is JupyterHub mature? Why should we trust it?
Yes - the core JupyterHub application recently
reached 1.0 status, and is considered stable and performant for most institutions.
JupyterHub has also been deployed (along with other tools) to work on
scalable infrastructure, large datasets, and high-performance computing.
### Who else uses JupyterHub?
JupyterHub is used at a variety of institutions in academia,
industry, and government research labs. It is most commonly used by two kinds of groups:
- Small teams (e.g., data science teams, research labs, or collaborative projects) to provide a
shared resource for interactive computing, collaboration, and analytics.
- Large teams (e.g., a department, a large class, or a large group of remote users) to provide
access to organizational hardware, data, and analytics environments at scale.
Here is a sample of organizations that use JupyterHub:
- **Universities and colleges**: UC Berkeley, UC San Diego, Cal Poly SLO, Harvard University, University of Chicago,
University of Oslo, University of Sheffield, Université Paris Sud, University of Versailles
- **Research laboratories**: NASA, NCAR, NOAA, the Large Synoptic Survey Telescope, Brookhaven National Lab,
Minnesota Supercomputing Institute, ALCF, CERN, Lawrence Livermore National Laboratory
- **Online communities**: Pangeo, Quantopian, mybinder.org, MathHub, Open Humans
- **Computing infrastructure providers**: NERSC, San Diego Supercomputing Center, Compute Canada
- **Companies**: Capital One, SANDVIK code, Globus
See the [Gallery of JupyterHub deployments](../gallery-jhub-deployments.md) for
a more complete list of JupyterHub deployments at institutions.
### How does JupyterHub compare with hosted products, like Google Colaboratory, RStudio.cloud, or Anaconda Enterprise?
JupyterHub puts you in control of your data, infrastructure, and coding environment.
In addition, it is vendor neutral, which reduces lock-in to a particular vendor or service.
JupyterHub provides access to interactive computing environments in the cloud (similar to each of these services).
Compared with the tools above, it is more flexible, more customizable, free, and
gives administrators more control over their setup and hardware.
Because JupyterHub is an open-source, community-driven tool, it can be extended and
modified to fit an institution's needs. It plays nicely with the open source data science
stack, and can serve a variety of computing environments, user interfaces, and
computational hardware. It can also be deployed anywhere - on enterprise cloud infrastructure, on
High-Performance-Computing machines, on local hardware, or even on a single laptop, which
is not possible with most other tools for shared interactive computing.
## For IT
### How would I set up JupyterHub on institutional hardware?
That depends on what kind of hardware you've got. JupyterHub is flexible enough to be deployed
on a variety of hardware, including in-room hardware, on-prem clusters, cloud infrastructure,
etc.
The most common way to set up a JupyterHub is to use a JupyterHub distribution; these are pre-configured
and opinionated ways to set up a JupyterHub on particular kinds of infrastructure. The two distributions
that we currently suggest are:
- [Zero to JupyterHub for Kubernetes](https://z2jh.jupyter.org) is a scalable JupyterHub deployment and
guide that runs on Kubernetes. Better for larger or dynamic user groups (50-10,000) or more complex
compute/data needs.
- [The Littlest JupyterHub](https://tljh.jupyter.org) is a lightweight JupyterHub that runs on a
single machine (in the cloud or under your desk). Better for smaller user groups (4-80) or more
lightweight computational resources.
### Does JupyterHub run well in the cloud?
Yes - most deployments of JupyterHub are run via cloud infrastructure and on a variety of cloud providers.
Depending on the distribution of JupyterHub that you'd like to use, you can also connect your JupyterHub
deployment with a number of other cloud-native services so that users have access to other resources from
their interactive computing sessions.
For example, if you use the [Zero to JupyterHub for Kubernetes](https://z2jh.jupyter.org) distribution,
you'll be able to utilize container-based workflows of other technologies such as the [dask-kubernetes](https://kubernetes.dask.org/en/latest/)
project for distributed computing.
The Z2JH Helm Chart also has some functionality built in for auto-scaling your cluster up and down
as more resources are needed - allowing you to utilize the benefits of a flexible cloud-based deployment.
### Is JupyterHub secure?
The short answer: yes. JupyterHub as a standalone application has been battle-tested at an institutional
level for several years, and makes a number of "default" security decisions that are reasonable for most
users.
- For security considerations in the base JupyterHub application,
[see the JupyterHub security page](https://jupyterhub.readthedocs.io/en/stable/reference/websecurity.html).
- For security considerations when deploying JupyterHub on Kubernetes, see the
[JupyterHub on Kubernetes security page](https://zero-to-jupyterhub.readthedocs.io/en/latest/security.html).
The longer answer: it depends on your deployment. Because JupyterHub is very flexible, it can be used
in a variety of deployment setups. This often entails connecting your JupyterHub to **other** infrastructure
(such as a [Dask Gateway service](https://gateway.dask.org/)). There are many security decisions to be made
in these cases, and the security of your JupyterHub deployment will often depend on these decisions.
If you are worried about security, don't hesitate to reach out to the JupyterHub community in the
[Jupyter Community Forum](https://discourse.jupyter.org/c/jupyterhub). This community of practice has many
individuals with experience running secure JupyterHub deployments.
### Does JupyterHub provide computing or data infrastructure?
No - JupyterHub manages user sessions and can _control_ computing infrastructure, but it does not provide these
things itself. You are expected to run JupyterHub on your own infrastructure (local or in the cloud). Moreover,
JupyterHub has no internal concept of "data", but is designed to be able to communicate with data repositories
(again, either locally or remotely) for use within interactive computing sessions.
### How do I manage users?
JupyterHub offers a few options for managing your users. Upon setting up a JupyterHub, you can choose what
kind of **authentication** you'd like to use. For example, you can have users sign up with an institutional
email address, or choose a username / password when they first log-in, or offload authentication onto
another service such as an organization's OAuth.
The users of a JupyterHub are stored locally, and can be modified manually by an administrator of the JupyterHub.
Moreover, the _active_ users on a JupyterHub can be found on the administrator's page. This page
gives you the ability to stop or restart kernels, inspect user filesystems, and even take over user
sessions to assist them with debugging.
### How do I manage software environments?
A key benefit of JupyterHub is the ability for an administrator to define the environment(s) that users
have access to. There are many ways to do this, depending on what kind of infrastructure you're using for
your JupyterHub.
For example, **The Littlest JupyterHub** runs on a single VM. In this case, the administrator defines
an environment by installing packages to a shared folder that exists on the path of all users. The
**JupyterHub for Kubernetes** deployment uses Docker images to define environments. You can create your
own list of Docker images that users can select from, and can also control things like the amount of
RAM available to users, or the types of machines that their sessions will use in the cloud.
### How does JupyterHub manage computational resources?
For interactive computing sessions, JupyterHub controls computational resources via a **spawner**.
Spawners define how a new user session is created, and are customized for particular kinds of
infrastructure. For example, the KubeSpawner knows how to control a Kubernetes deployment
to create new pods when users log in.
For more sophisticated computational resources (like distributed computing), JupyterHub can
connect with other infrastructure tools (like Dask or Spark). This allows users to control
scalable or high-performance resources from within their JupyterHub sessions. The logic of
how those resources are controlled is taken care of by the non-JupyterHub application.
### Can JupyterHub be used with my high-performance computing resources?
Yes - JupyterHub can provide access to many kinds of computing infrastructure.
Especially when combined with other open-source schedulers such as Dask, you can manage fairly
complex computing infrastructures from the interactive sessions of a JupyterHub. For example
[see the Dask HPC page](https://docs.dask.org/en/latest/setup/hpc.html).
### How many resources do user sessions take?
This is highly configurable by the administrator. If you wish for your users to have simple
data analytics environments for prototyping and light data exploring, you can restrict their
memory and CPU based on the resources that you have available. If you'd like your JupyterHub
to serve as a gateway to high-performance compute or data resources, you may increase the
resources available on user machines, or connect them with computing infrastructures elsewhere.
### Can I customize the look and feel of a JupyterHub?
JupyterHub provides some customization of the graphics displayed to users. The most common
modification is to add custom branding to the JupyterHub login page, loading pages, and
various elements that persist across all pages (such as headers).
## For Technical Leads
### Will JupyterHub “just work” with our team's interactive computing setup?
Depending on the complexity of your setup, you'll have different experiences with "out of the box"
distributions of JupyterHub. If all of the resources you need will fit on a single VM, then
[The Littlest JupyterHub](https://tljh.jupyter.org) should get you up-and-running within
a half day or so. For more complex setups, such as scalable Kubernetes clusters or access
to high-performance computing and data, it will require more time and expertise with
the technologies your JupyterHub will use (e.g., dev-ops knowledge with cloud computing).
In general, the base JupyterHub deployment is not the bottleneck for setup; the bottleneck is connecting
your JupyterHub with the various services and tools that you wish to provide to your users.
### How well does JupyterHub scale? What are JupyterHub's limitations?
JupyterHub works well at both a small scale (e.g., a single VM or machine) as well as a
high scale (e.g., a scalable Kubernetes cluster). It can be used for teams as small as 2, and
for user bases as large as 10,000. The scalability of JupyterHub largely depends on the
infrastructure on which it is deployed. JupyterHub has been designed to be lightweight and
flexible, so you can tailor your JupyterHub deployment to your needs.
### Is JupyterHub resilient? What happens when a machine goes down?
For JupyterHubs that are deployed in a containerized environment (e.g., Kubernetes), it is
possible to configure the JupyterHub to be fairly resistant to failures in the system.
For example, if JupyterHub fails, then user sessions will not be affected (though new
users will not be able to log in). When a JupyterHub process is restarted, it should
seamlessly connect with the user database and the system will return to normal.
Again, the details of your JupyterHub deployment (e.g., whether it's deployed on a scalable cluster)
will affect the resiliency of the deployment.
### What interfaces does JupyterHub support?
Out of the box, JupyterHub supports a variety of popular data science interfaces for user sessions,
such as JupyterLab, Jupyter Notebooks, and RStudio. Any interface that can be served
via a web address can be served with a JupyterHub (with the right setup).
### Does JupyterHub make it easier for our team to collaborate?
JupyterHub provides a standardized environment and access to shared resources for your teams.
This greatly reduces the cost associated with sharing analyses and content with other team
members, and makes it easier to collaborate and build off of one another's ideas. Combined with
access to high-performance computing and data, JupyterHub provides a common resource to
amplify your team's ability to prototype their analyses, scale them to larger data, and then
share their results with one another.
JupyterHub also provides a computational framework to share computational narratives between
different levels of an organization. For example, data scientists can share Jupyter Notebooks
rendered as [Voilà dashboards](https://voila.readthedocs.io/en/stable/) with those who are not
familiar with programming, or create publicly-available interactive analyses to allow others to
interact with your work.
### Can I use JupyterHub with R/RStudio or other languages and environments?
Yes, Jupyter is a polyglot project, and there are over 40 community-provided kernels for a variety
of languages (the most common being Python, Julia, and R). You can also use a JupyterHub to provide
access to other interfaces, such as RStudio, that provide their own access to a language kernel.

View File

@@ -11,7 +11,7 @@ This section will help you with basic proxy and network configuration to:
The Proxy's main IP address setting determines where JupyterHub is available to users.
By default, JupyterHub is configured to be available on all network interfaces
(`''`) on port 8000. _Note_: Use of `'*'` is discouraged for IP configuration;
(`''`) on port 8000. *Note*: Use of `'*'` is discouraged for IP configuration;
instead, use of `'0.0.0.0'` is preferred.
Changing the Proxy's main IP address and port can be done with the following
@@ -41,9 +41,9 @@ port.
## Set the Proxy's REST API communication URL (optional)
By default, this REST API listens on port 8001 of `localhost` only.
By default, this REST API listens on port 8081 of `localhost` only.
The Hub service talks to the proxy via a REST API on a secondary port. The
API URL can be configured separately to override the default settings.
API URL can be configured separately and override the default settings.
### Set api_url
@@ -74,7 +74,7 @@ The Hub service listens only on `localhost` (port 8081) by default.
The Hub needs to be accessible from both the proxy and all Spawners.
When spawning local servers, an IP address setting of `localhost` is fine.
If _either_ the Proxy _or_ (more likely) the Spawners will be remote or
If *either* the Proxy *or* (more likely) the Spawners will be remote or
isolated in containers, the Hub must listen on an IP that is accessible.
```python
@@ -82,20 +82,20 @@ c.JupyterHub.hub_ip = '10.0.1.4'
c.JupyterHub.hub_port = 54321
```
**Added in 0.8:** The `c.JupyterHub.hub_connect_ip` setting is the IP address or
**Added in 0.8:** The `c.JupyterHub.hub_connect_ip` setting is the ip address or
hostname that other services should use to connect to the Hub. A common
configuration for, e.g. docker, is:
```python
c.JupyterHub.hub_ip = '0.0.0.0' # listen on all interfaces
c.JupyterHub.hub_connect_ip = '10.0.1.4' # IP as seen on the docker network. Can also be a hostname.
c.JupyterHub.hub_connect_ip = '10.0.1.4' # ip as seen on the docker network. Can also be a hostname.
```
## Adjusting the hub's URL
The hub will most commonly be running on a hostname of its own. If it
The hub will most commonly be running on a hostname of its own. If it
is not for example, if the hub is being reverse-proxied and being
exposed at a URL such as `https://proxy.example.org/jupyter/` then
you will need to tell JupyterHub the base URL of the service. In such
you will need to tell JupyterHub the base URL of the service. In such
a case, it is both necessary and sufficient to set
`c.JupyterHub.base_url = '/jupyter/'` in the configuration.
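Putting these settings together, a sketch of a reverse-proxied configuration
(example values only) could look like:

```python
c.JupyterHub.ip = '127.0.0.1'        # only listen locally, behind the reverse proxy
c.JupyterHub.port = 8000
c.JupyterHub.base_url = '/jupyter/'  # prefix at which the proxy exposes JupyterHub
```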

View File

@@ -80,49 +80,6 @@ To achieve this, simply omit the configuration settings
``c.JupyterHub.ssl_key`` and ``c.JupyterHub.ssl_cert``
(setting them to ``None`` does not have the same effect, and is an error).
.. _authentication-token:
Proxy authentication token
--------------------------
The Hub authenticates its requests to the Proxy using a secret token that
the Hub and Proxy agree upon. Note that this applies to the default
``ConfigurableHTTPProxy`` implementation. Not all proxy implementations
use an auth token.
The value of this token should be a random string (for example, generated by
``openssl rand -hex 32``). You can store it in the configuration file or an
environment variable.
Generating and storing token in the configuration file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can set the value in the configuration file, ``jupyterhub_config.py``:
.. code-block:: python
c.ConfigurableHTTPProxy.api_token = 'abc123...' # any random string
Generating and storing as an environment variable
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can pass this value of the proxy authentication token to the Hub and Proxy
using the ``CONFIGPROXY_AUTH_TOKEN`` environment variable:
.. code-block:: bash
export CONFIGPROXY_AUTH_TOKEN=$(openssl rand -hex 32)
This environment variable needs to be visible to the Hub and Proxy.
Default if token is not set
~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you don't set the Proxy authentication token, the Hub will generate a random
key itself, which means that any time you restart the Hub you **must also
restart the Proxy**. If the proxy is a subprocess of the Hub, this should happen
automatically (this is the default configuration).
.. _cookie-secret:
Cookie secret
@@ -167,7 +124,7 @@ hex-encoded string. You can set it this way:
.. code-block:: bash
export JPY_COOKIE_SECRET=$(openssl rand -hex 32)
export JPY_COOKIE_SECRET=`openssl rand -hex 32`
For security reasons, this environment variable should only be visible to the
Hub. If you set it dynamically as above, all users will be logged out each time
@@ -189,73 +146,41 @@ itself, ``jupyterhub_config.py``, as a binary string:
If the cookie secret value changes for the Hub, all single-user notebook
servers must also be restarted.
.. _cookies:
Cookies used by JupyterHub authentication
-----------------------------------------
.. _authentication-token:
The following cookies are used by the Hub for handling user authentication.
Proxy authentication token
--------------------------
This section was created based on this post_ from Discourse.
The Hub authenticates its requests to the Proxy using a secret token that
the Hub and Proxy agree upon. The value of this string should be a random
string (for example, generated by ``openssl rand -hex 32``).
.. _post: https://discourse.jupyter.org/t/how-to-force-re-login-for-users/1998/6
Generating and storing token in the configuration file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
jupyterhub-hub-login
~~~~~~~~~~~~~~~~~~~~
Or you can set the value in the configuration file, ``jupyterhub_config.py``:
This is the login token used when visiting Hub-served pages that are
protected by authentication such as the main home, the spawn form, etc.
If this cookie is set, then the user is logged in.
.. code-block:: python
Resetting the Hub cookie secret effectively revokes this cookie.
c.JupyterHub.proxy_auth_token = '0bc02bede919e99a26de1e2a7a5aadfaf6228de836ec39a05a6c6942831d8fe5'
This cookie is restricted to the path ``/hub/``.
Generating and storing as an environment variable
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
jupyterhub-user-<username>
~~~~~~~~~~~~~~~~~~~~~~~~~~
You can pass this value of the proxy authentication token to the Hub and Proxy
using the ``CONFIGPROXY_AUTH_TOKEN`` environment variable:
This is the cookie used for authenticating with a single-user server.
It is set by the single-user server after OAuth with the Hub.
.. code-block:: bash
Effectively the same as ``jupyterhub-hub-login``, but for the
single-user server instead of the Hub. It contains an OAuth access token,
which is checked with the Hub to authenticate the browser.
export CONFIGPROXY_AUTH_TOKEN='openssl rand -hex 32'
Each OAuth access token is associated with a session id (see ``jupyterhub-session-id`` section
below).
This environment variable needs to be visible to the Hub and Proxy.
To avoid hitting the Hub on every request, the authentication response
is cached. And to avoid a stale cache, the cache key is composed of both
the token and session id.
Default if token is not set
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Resetting the Hub cookie secret effectively revokes this cookie.
This cookie is restricted to the path ``/user/<username>``, so that
only the users server receives it.
jupyterhub-session-id
~~~~~~~~~~~~~~~~~~~~~
This is a random string, meaningless in itself, and the only cookie
shared by the Hub and single-user servers.
Its sole purpose is to coordinate logout of the multiple OAuth cookies.
This cookie is set to ``/`` so all endpoints can receive it, or clear it, etc.
jupyterhub-user-<username>-oauth-state
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A short-lived cookie, used solely to store and validate OAuth state.
It is only set while OAuth between the single-user server and the Hub
is processing.
If you use your browser development tools, you should see this cookie
for a very brief moment before you are logged in,
with an expiration date shorter than ``jupyterhub-hub-login`` or
``jupyterhub-user-<username>``.
This cookie should not exist after you have successfully logged in.
This cookie is restricted to the path ``/user/<username>``, so that only
the users server receives it.
If you don't set the Proxy authentication token, the Hub will generate a random
key itself, which means that any time you restart the Hub you **must also
restart the Proxy**. If the proxy is a subprocess of the Hub, this should happen
automatically (this is the default configuration).

View File

@@ -2,10 +2,10 @@
When working with JupyterHub, a **Service** is defined as a process
that interacts with the Hub's REST API. A Service may perform a specific
action or task. For example, shutting down individuals' single user
notebook servers that have been idle for some time is a good example of
a task that could be automated by a Service. Let's look at how the
[jupyterhub_idle_culler][] script can be used as a Service.
or action or task. For example, shutting down individuals' single user
notebook servers that have been is a good example of a task that could
be automated by a Service. Let's look at how the [cull_idle_servers][]
script can be used as a Service.
## Real-world example to cull idle servers
@@ -14,12 +14,12 @@ document will:
- explain some basic information about API tokens
- clarify that API tokens can be used to authenticate to
single-user servers as of [version 0.8.0](../changelog)
- show how the [jupyterhub_idle_culler][] script can be:
- used in a Hub-managed service
- run as a standalone script
single-user servers as of [version 0.8.0](../changelog.html)
- show how the [cull_idle_servers][] script can be:
- used in a Hub-managed service
- run as a standalone script
Both examples for `jupyterhub_idle_culler` will communicate tasks to the
Both examples for `cull_idle_servers` will communicate tasks to the
Hub via the REST API.
## API Token basics
@@ -29,14 +29,14 @@ Hub via the REST API.
To run such an external service, an API token must be created and
provided to the service.
As of [version 0.6.0](../changelog), the preferred way of doing
As of [version 0.6.0](../changelog.html), the preferred way of doing
this is to first generate an API token:
```bash
openssl rand -hex 32
```
In [version 0.8.0](../changelog), a TOKEN request page for
In [version 0.8.0](../changelog.html), a TOKEN request page for
generating an API token is available from the JupyterHub user interface:
![Request API TOKEN page](../images/token-request.png)
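For illustration, a script holding such a token can call the Hub's REST API
directly. This is only a sketch, with placeholder token and URL values:

```python
import requests

api_url = 'http://127.0.0.1:8081/hub/api'  # default Hub API URL
token = 'abc123...'                        # placeholder: a token generated as above

# list users, authenticating with the API token
# (the token needs sufficient permissions for this endpoint)
r = requests.get(api_url + '/users',
                 headers={'Authorization': 'token %s' % token})
r.raise_for_status()
print(r.json())
```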
@@ -78,73 +78,44 @@ single-user servers, and only cookies can be used for authentication.
0.8 supports using JupyterHub API tokens to authenticate to single-user
servers.
## Configure the idle culler to run as a Hub-Managed Service
Install the idle culler:
```
pip install jupyterhub-idle-culler
```
## Configure `cull-idle` to run as a Hub-Managed Service
In `jupyterhub_config.py`, add the following dictionary for the
`idle-culler` Service to the `c.JupyterHub.services` list:
`cull-idle` Service to the `c.JupyterHub.services` list:
```python
c.JupyterHub.services = [
{
'name': 'idle-culler',
'command': [sys.executable, '-m', 'jupyterhub_idle_culler', '--timeout=3600'],
}
]
c.JupyterHub.load_roles = [
{
"name": "list-and-cull", # name the role
"services": [
"idle-culler", # assign the service to this role
],
"scopes": [
# declare what permissions the service should have
"list:users", # list users
"read:users:activity", # read user last-activity
"admin:servers", # start/stop servers
],
'name': 'cull-idle',
'admin': True,
'command': 'python3 cull_idle_servers.py --timeout=3600'.split(),
}
]
```
where:
- `command` indicates that the Service will be launched as a
- `'admin': True` indicates that the Service has 'admin' permissions, and
- `'command'` indicates that the Service will be launched as a
subprocess, managed by the Hub.
```{versionchanged} 2.0
Prior to 2.0, the idle-culler required 'admin' permissions.
It now needs the scopes:
- `list:users` to access the user list endpoint
- `read:users:activity` to read activity info
- `admin:servers` to start/stop servers
```
## Run `cull-idle` manually as a standalone script
Now you can run your script by providing it
Now you can run your script, i.e. `cull_idle_servers`, by providing it
the API token and it will authenticate through the REST API to
interact with it.
This will run the idle culler service manually. It can be run as a standalone
This will run `cull-idle` manually. `cull-idle` can be run as a standalone
script anywhere with access to the Hub, and will periodically check for idle
servers and shut them down via the Hub's REST API. In order to shutdown the
servers, the token given to `cull-idle` must have permission to list users
and admin their servers.
servers, the token given to cull-idle must have admin privileges.
Generate an API token and store it in the `JUPYTERHUB_API_TOKEN` environment
variable. Run `jupyterhub_idle_culler` manually.
variable. Run `cull_idle_servers.py` manually.
```bash
export JUPYTERHUB_API_TOKEN='token'
python -m jupyterhub_idle_culler [--timeout=900] [--url=http://127.0.0.1:8081/hub/api]
python3 cull_idle_servers.py [--timeout=900] [--url=http://127.0.0.1:8081/hub/api]
```
[jupyterhub_idle_culler]: https://github.com/jupyterhub/jupyterhub-idle-culler
[cull_idle_servers]: https://github.com/jupyterhub/jupyterhub/blob/master/examples/cull-idle/cull_idle_servers.py

View File

@@ -1,8 +1,8 @@
# Spawners and single-user notebook servers
Since the single-user server is an instance of `jupyter notebook`, an entire separate
multi-process application, there are many aspects of that server that can be configured, and a lot
of ways to express that configuration.
multi-process application, there are many aspect of that server can configure, and a lot of ways
to express that configuration.
At the JupyterHub level, you can set some values on the Spawner. The simplest of these is
`Spawner.notebook_dir`, which lets you set the root directory for a user's server. This root
@@ -14,7 +14,7 @@ expanded to the user's home directory.
c.Spawner.notebook_dir = '~/notebooks'
```
You can also specify extra command line arguments to the notebook server with:
You can also specify extra command-line arguments to the notebook server with:
```python
c.Spawner.args = ['--debug', '--profile=PHYS131']

14 binary image files changed (not shown); sizes range from 30 KiB to 483 KiB.
View File

@@ -1,15 +0,0 @@
=====
About
=====
JupyterHub is an open source project and community. It is a part of the
`Jupyter Project <https://jupyter.org>`_. JupyterHub is an open and inclusive
community, and invites contributions from anyone. This section covers information
about our community, as well as ways that you can connect and get involved.
.. toctree::
:maxdepth: 1
contributor-list
changelog
gallery-jhub-deployments

View File

@@ -1,13 +0,0 @@
=====================
Administrator's Guide
=====================
This guide covers best-practices, tips, common questions and operations, as
well as other information relevant to running your own JupyterHub over time.
.. toctree::
:maxdepth: 2
troubleshooting
admin/upgrading
changelog

View File

@@ -1,38 +1,21 @@
==========
JupyterHub
==========
`JupyterHub`_ is the best way to serve `Jupyter notebook`_ for multiple users.
It can be used in a class of students, a corporate data science group or scientific
research group. It is a multi-user **Hub** that spawns, manages, and proxies multiple
`JupyterHub`_, a multi-user **Hub**, spawns, manages, and proxies multiple
instances of the single-user `Jupyter notebook`_ server.
JupyterHub can be used to serve notebooks to a class of students, a corporate
data science group, or a scientific research group.
To make life easier, JupyterHub has distributions. Be sure to
take a look at them before continuing with the configuration of the broad
original system of `JupyterHub`_. Today, you can find two main cases:
1. If you need a simple case for a small number of users (0-100) and a single server,
take a look at
`The Littlest JupyterHub <https://github.com/jupyterhub/the-littlest-jupyterhub>`__ distribution.
2. If you need to allow for even more users, a dynamic number of servers can be used on a cloud,
take a look at the `Zero to JupyterHub with Kubernetes <https://github.com/jupyterhub/zero-to-jupyterhub-k8s>`__ .
Four subsystems make up JupyterHub:
* a **Hub** (tornado process) that is the heart of JupyterHub
* a **configurable http proxy** (node-http-proxy) that receives the requests from the client's browser
* multiple **single-user Jupyter notebook servers** (Python/IPython/tornado) that are monitored by Spawners
* an **authentication class** that manages how users can access the system
Besides these central pieces, you can add optional configurations through a `config.py` file and manage users' kernels on an admin panel. A simplification of the whole system can be seen in the figure below:
.. image:: images/jhub-fluxogram.jpeg
.. image:: images/jhub-parts.png
:alt: JupyterHub subsystems
:width: 80%
:align: center
:width: 40%
:align: right
Three subsystems make up JupyterHub:
* a multi-user **Hub** (tornado process)
* a **configurable http proxy** (node-http-proxy)
* multiple **single-user Jupyter notebook servers** (Python/IPython/tornado)
JupyterHub performs the following functions:
@@ -43,115 +26,100 @@ JupyterHub performs the following functions:
notebook servers
For convenient administration of the Hub, its users, and services,
JupyterHub also provides a :doc:`REST API <reference/rest-api>`.
The JupyterHub team and Project Jupyter value our community, and JupyterHub
follows the Jupyter `Community Guides <https://jupyter.readthedocs.io/en/latest/community/content-community.html>`_.
JupyterHub also provides a `REST API`_.
Contents
========
--------
.. _index/distributions:
**Installation Guide**
Distributions
-------------
* :doc:`installation-guide`
* :doc:`quickstart`
* :doc:`quickstart-docker`
* :doc:`installation-basics`
A JupyterHub **distribution** is tailored towards a particular set of
use cases. These are generally easier to set up than setting up
JupyterHub from scratch, assuming they fit your use case.
**Getting Started**
The two popular ones are:
* :doc:`getting-started/index`
* :doc:`getting-started/config-basics`
* :doc:`getting-started/networking-basics`
* :doc:`getting-started/security-basics`
* :doc:`getting-started/authenticators-users-basics`
* :doc:`getting-started/spawners-basics`
* :doc:`getting-started/services-basics`
* `Zero to JupyterHub on Kubernetes <http://z2jh.jupyter.org>`_, for
running JupyterHub on top of `Kubernetes <https://k8s.io>`_. This
can scale to large number of machines & users.
* `The Littlest JupyterHub <http://tljh.jupyter.org>`_, for an easy
to set up & run JupyterHub supporting 1-100 users on a single machine.
**Technical Reference**
Installation Guide
------------------
* :doc:`reference/index`
* :doc:`reference/technical-overview`
* :doc:`reference/websecurity`
* :doc:`reference/authenticators`
* :doc:`reference/spawners`
* :doc:`reference/services`
* :doc:`reference/rest`
* :doc:`reference/upgrading`
* :doc:`reference/templates`
* :doc:`reference/config-user-env`
* :doc:`reference/config-examples`
* :doc:`reference/config-ghoauth`
* :doc:`reference/config-proxy`
* :doc:`reference/config-sudo`
.. toctree::
:maxdepth: 2
**API Reference**
installation-guide
* :doc:`api/index`
Getting Started
---------------
**Tutorials**
.. toctree::
:maxdepth: 2
* :doc:`tutorials/index`
* :doc:`tutorials/upgrade-dot-eight`
* `Zero to JupyterHub with Kubernetes <https://zero-to-jupyterhub.readthedocs.io/en/latest/>`_
getting-started/index
**Troubleshooting**
Technical Reference
-------------------
* :doc:`troubleshooting`
.. toctree::
:maxdepth: 2
**About JupyterHub**
reference/index
* :doc:`contributor-list`
* :doc:`gallery-jhub-deployments`
Administrators guide
--------------------
**Changelog**
.. toctree::
:maxdepth: 2
index-admin
API Reference
-------------
.. toctree::
:maxdepth: 2
api/index
RBAC Reference
--------------
.. toctree::
:maxdepth: 2
rbac/index
Contributing
------------
We want you to contribute to JupyterHub in ways that are most exciting
& useful to you. We value documentation, testing, bug reporting & code equally,
and are glad to have your contributions in whatever form you wish :)
Our `Code of Conduct <https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md>`_
(`reporting guidelines <https://github.com/jupyter/governance/blob/HEAD/conduct/reporting_online.md>`_)
helps keep our community welcoming to as many people as possible.
.. toctree::
:maxdepth: 2
contributing/index
About JupyterHub
----------------
.. toctree::
:maxdepth: 2
index-about
* :doc:`changelog`
Indices and tables
==================
------------------
* :ref:`genindex`
* :ref:`modindex`
Questions? Suggestions?
=======================
-----------------------
- `Jupyter mailing list <https://groups.google.com/forum/#!forum/jupyter>`_
- `Jupyter website <https://jupyter.org>`_
.. _contents:
Full Table of Contents
----------------------
.. toctree::
:maxdepth: 2
installation-guide
getting-started/index
reference/index
api/index
tutorials/index
troubleshooting
contributor-list
gallery-jhub-deployments
changelog
.. _JupyterHub: https://github.com/jupyterhub/jupyterhub
.. _Jupyter notebook: https://jupyter-notebook.readthedocs.io/en/latest/
.. _REST API: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default
View File
@@ -6,7 +6,7 @@ JupyterHub is supported on Linux/Unix based systems. To use JupyterHub, you need
a Unix server (typically Linux) running somewhere that is accessible to your
team on the network. The JupyterHub server can be on an internal network at your
organization, or it can run on the public internet (in which case, take care
with the Hub's [security](./getting-started/security-basics)).
with the Hub's [security](./security-basics.html)).
JupyterHub officially **does not** support Windows. You may be able to use
JupyterHub on Windows if you use a Spawner and Authenticator that work on
@@ -28,7 +28,7 @@ Prior to beginning installation, it's helpful to consider some of the following:
- Spawner of singleuser notebook servers (Docker, Batch, etc.)
- Services (nbgrader, etc.)
- JupyterHub database (default SQLite; traditional RDBMS such as PostgreSQL,
MySQL, or other databases supported by [SQLAlchemy](http://www.sqlalchemy.org))
MySQL, or other databases supported by [SQLAlchemy](http://www.sqlalchemy.org))
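For example, pointing the Hub at a different database is a single setting in `jupyterhub_config.py`; the sketch below uses a placeholder PostgreSQL URL, and the matching Python driver (e.g. `psycopg2`) must be installed separately:

```python
# jupyterhub_config.py: database backend sketch (placeholder credentials)
# The default is a local SQLite file, equivalent to 'sqlite:///jupyterhub.sqlite'.
c.JupyterHub.db_url = 'postgresql://jupyterhub:secret@db.example.com:5432/jupyterhub'
```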
## Folders and File Locations
View File
@@ -1,6 +0,0 @@
:orphan:
JupyterHub the hard way
=======================
This guide has moved to https://github.com/jupyterhub/jupyterhub-the-hard-way/blob/HEAD/docs/installation-guide-hard.md
View File
@@ -1,9 +1,5 @@
Installation
============
These sections cover how to get up-and-running with JupyterHub. They cover
some basics of the tools needed to deploy JupyterHub as well as how to get it
running on your own infrastructure.
Installation Guide
==================
.. toctree::
:maxdepth: 3
View File
@@ -1,52 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v2.0.22 (GNU/Linux)
mQINBFMx2LoBEAC9xU8JiKI1VlCJ4PT9zqhU5nChQZ06/bj1BBftiMJG07fdGVO0
ibOn4TrCoRYaeRlet0UpHzxT4zDa5h3/usJaJNTSRwtWePw2o7Lik8J+F3LionRf
8Jz81WpJ+81Klg4UWKErXjBHsu/50aoQm6ZNYG4S2nwOmMVEC4nc44IAA0bb+6kW
saFKKzEDsASGyuvyutdyUHiCfvvh5GOC2h9mXYvl4FaMW7K+d2UgCYERcXDNy7C1
Bw+uepQ9ELKdG4ZpvonO6BNr1BWLln3wk93AQfD5qhfsYRJIyj0hJlaRLtBU3i6c
xs+gQNF4mPmybpPSGuOyUr4FYC7NfoG7IUMLj+DYa6d8LcMJO+9px4IbdhQvzGtC
qz5av1TX7/+gnS4L8C9i1g8xgI+MtvogngPmPY4repOlK6y3l/WtxUPkGkyYkn3s
RzYyE/GJgTwuxFXzMQs91s+/iELFQq/QwmEJf+g/QYfSAuM+lVGajEDNBYVAQkxf
gau4s8Gm0GzTZmINilk+7TxpXtKbFc/Yr4A/fMIHmaQ7KmJB84zKwONsQdVv7Jjj
0dpwu8EIQdHxX3k7/Q+KKubEivgoSkVwuoQTG15X9xrOsDZNwfOVQh+JKazPvJtd
SNfep96r9t/8gnXv9JI95CGCQ8lNhXBUSBM3BDPTbudc4b6lFUyMXN0mKQARAQAB
tCxJUHl0aG9uIFNlY3VyaXR5IFRlYW0gPHNlY3VyaXR5QGlweXRob24ub3JnPokC
OAQTAQIAIgUCUzHYugIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AACgkQEwJc
LcmZYkjuXg//R/t6nMNQmf9W1h52IVfUbRAVmvZ5d063hQHKV2dssxtnA2dRm/x5
JZu8Wz7ZrEZpyqwRJO14sxN1/lC3v+zs9XzYXr2lBTZuKCPIBypYVGIynCuWJBQJ
rWnfG4+u1RHahnjqlTWTY1C/le6v7SjAvCb6GbdA6k4ZL2EJjQlRaHDmzw3rV/+l
LLx6/tYzIsotuflm/bFumyOMmpQQpJjnCkWIVjnRICZvuAn97jLgtTI0+0Rzf4Zb
k2BwmHwDRqWCTTcRI9QvTl8AzjW+dNImN22TpGOBPfYj8BCZ9twrpKUbf+jNqJ1K
THQzFtpdJ6SzqiFVm74xW4TKqCLkbCQ/HtVjTGMGGz/y7KTtaLpGutQ6XE8SSy6P
EffSb5u+kKlQOWaH7Mc3B0yAojz6T3j5RSI8ts6pFi6pZhDg9hBfPK2dT0v/7Mkv
E1Z7q2IdjZnhhtGWjDAMtDDn2NbY2wuGoa5jAWAR0WvIbEZ3kOxuLE5/ZOG1FyYm
noJRliBz7038nT92EoD5g1pdzuxgXtGCpYyyjRZwaLmmi4CvA+oThKmnqWNY5lyY
ricdNHDiyEXK0YafJL1oZgM86MSb0jKJMp5U11nUkUGzkroFfpGDmzBwAzEPgeiF
40+qgsKB9lqwb3G7PxvfSi3XwxfXgpm1cTyEaPSzsVzve3d1xeqb7Yq5Ag0EUzHY
ugEQALQ5FtLdNoxTxMsgvrRr1ejLiUeRNUfXtN1TYttOfvAhfBVnszjtkpIW8DCB
JF/bA7ETiH8OYYn/Fm6MPI5H64IHEncpzxjf57jgpXd9CA9U2OMk/P1nve5zYchP
QmP2fJxeAWr0aRH0Mse5JS5nCkh8Xv4nAjsBYeLTJEVOb1gPQFXOiFcVp3gaKAzX
GWOZ/mtG/uaNsabH/3TkcQQEgJefd11DWgMB7575GU+eME7c6hn3FPITA5TC5HUX
azvjv/PsWGTTVAJluJ3fUDvhpbGwYOh1uV0rB68lPpqVIro18IIJhNDnccM/xqko
4fpJdokdg4L1wih+B04OEXnwgjWG8OIphR/oL/+M37VV2U7Om/GE6LGefaYccC9c
tIaacRQJmZpG/8RsimFIY2wJ07z8xYBITmhMmOt0bLBv0mU0ym5KH9Dnru1m9QDO
AHwcKrDgL85f9MCn+YYw0d1lYxjOXjf+moaeW3izXCJ5brM+MqVtixY6aos3YO29
J7SzQ4aEDv3h/oKdDfZny21jcVPQxGDui8sqaZCi8usCcyqWsKvFHcr6vkwaufcm
3Knr2HKVotOUF5CDZybopIz1sJvY/5Dx9yfRmtivJtglrxoDKsLi1rQTlEQcFhCS
ACjf7txLtv03vWHxmp4YKQFkkOlbyhIcvfPVLTvqGerdT2FHABEBAAGJAh8EGAEC
AAkFAlMx2LoCGwwACgkQEwJcLcmZYkgK0BAAny0YUugpZldiHzYNf8I6p2OpiDWv
ZHaguTTPg2LJSKaTd+5UHZwRFIWjcSiFu+qTGLNtZAdcr0D5f991CPvyDSLYgOwb
Jm2p3GM2KxfECWzFbB/n/PjbZ5iky3+5sPlOdBR4TkfG4fcu5GwUgCkVe5u3USAk
C6W5lpeaspDz39HAPRSIOFEX70+xV+6FZ17B7nixFGN+giTpGYOEdGFxtUNmHmf+
waJoPECyImDwJvmlMTeP9jfahlB6Pzaxt6TBZYHetI/JR9FU69EmA+XfCSGt5S+0
Eoc330gpsSzo2VlxwRCVNrcuKmG7PsFFANok05ssFq1/Djv5rJ++3lYb88b8HSP2
3pQJPrM7cQNU8iPku9yLXkY5qsoZOH+3yAia554Dgc8WBhp6fWh58R0dIONQxbbo
apNdwvlI8hKFB7TiUL6PNShE1yL+XD201iNkGAJXbLMIC1ImGLirUfU267A3Cop5
hoGs179HGBcyj/sKA3uUIFdNtP+NndaP3v4iYhCitdVCvBJMm6K3tW88qkyRGzOk
4PW422oyWKwbAPeMk5PubvEFuFAIoBAFn1zecrcOg85RzRnEeXaiemmmH8GOe1Xu
Kh+7h8XXyG6RPFy8tCcLOTk+miTqX+4VWy+kVqoS2cQ5IV8WsJ3S7aeIy0H89Z8n
5vmLc+Ibz+eT+rM=
=XVDe
-----END PGP PUBLIC KEY BLOCK-----
View File
@@ -25,7 +25,7 @@ Starting JupyterHub with docker
The JupyterHub docker image can be started with the following command::
docker run -d -p 8000:8000 --name jupyterhub jupyterhub/jupyterhub jupyterhub
docker run -d --name jupyterhub jupyterhub/jupyterhub jupyterhub
This command will create a container named ``jupyterhub`` that you can
**stop and resume** with ``docker stop/start``.
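If you prefer to manage the container from Python rather than the `docker` CLI, the same thing can be done with the Docker SDK for Python (`pip install docker`); this is an equivalent sketch rather than part of the official quickstart, and it publishes port 8000 like the newer command above:

```python
# Start the JupyterHub image via the Docker SDK for Python (illustrative only).
import docker

client = docker.from_env()
container = client.containers.run(
    "jupyterhub/jupyterhub",   # image
    "jupyterhub",              # command to run inside the container
    name="jupyterhub",
    ports={"8000/tcp": 8000},  # publish the Hub's public port
    detach=True,
)
print(container.name, container.status)
```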
View File
@@ -5,45 +5,35 @@
Before installing JupyterHub, you will need:
- a Linux/Unix based system
- [Python](https://www.python.org/downloads/) 3.6 or greater. An understanding
of using [`pip`](https://pip.pypa.io) or
- [Python](https://www.python.org/downloads/) 3.5 or greater. An understanding
of using [`pip`](https://pip.pypa.io/en/stable/) or
[`conda`](https://conda.io/docs/get-started.html) for
installing Python packages is helpful.
- [nodejs/npm](https://www.npmjs.com/). [Install nodejs/npm](https://docs.npmjs.com/getting-started/installing-node),
using your operating system's package manager.
- If you are using **`conda`**, the nodejs and npm dependencies will be installed for
* If you are using **`conda`**, the nodejs and npm dependencies will be installed for
you by conda.
- If you are using **`pip`**, install a recent version of
* If you are using **`pip`**, install a recent version of
[nodejs/npm](https://docs.npmjs.com/getting-started/installing-node).
For example, install it on Linux (Debian/Ubuntu) using:
```
sudo apt-get install nodejs npm
sudo apt-get install npm nodejs-legacy
```
The `nodejs-legacy` package installs the `node` executable and is currently
required for npm to work on Debian/Ubuntu.
[nodesource][] is a great resource to get more recent versions of the nodejs runtime,
if your system package manager only has an old version of Node.js (e.g. 10 or older).
- A [pluggable authentication module (PAM)](https://en.wikipedia.org/wiki/Pluggable_authentication_module)
to use the [default Authenticator](./getting-started/authenticators-users-basics.md).
PAM is often available by default on most distributions; if it is not, it can be
installed with the operating system's package manager.
- TLS certificate and key for HTTPS communication
- Domain name
[nodesource]: https://github.com/nodesource/distributions#table-of-contents
Before running the single-user notebook servers (which may be on the same
system as the Hub or not), you will need:
- [JupyterLab][] version 3 or greater,
or [Jupyter Notebook][]
4 or greater.
[jupyterlab]: https://jupyterlab.readthedocs.io
[jupyter notebook]: https://jupyter.readthedocs.io/en/latest/install.html
- [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html)
version 4 or greater
## Installation
@@ -54,14 +44,14 @@ JupyterHub can be installed with `pip` (and the proxy with `npm`) or `conda`:
```bash
python3 -m pip install jupyterhub
npm install -g configurable-http-proxy
python3 -m pip install jupyterlab notebook # needed if running the notebook servers in the same environment
python3 -m pip install notebook # needed if running the notebook servers locally
```
**conda** (one command installs jupyterhub and proxy):
```bash
conda install -c conda-forge jupyterhub # installs jupyterhub and proxy
conda install jupyterlab notebook # needed if running the notebook servers in the same environment
conda install notebook # needed if running the notebook servers locally
```
Test your installation. If installed, these commands should return the packages'
@@ -80,16 +70,16 @@ To start the Hub server, run the command:
jupyterhub
```
Visit `http://localhost:8000` in your browser, and sign in with your unix
Visit `https://localhost:8000` in your browser, and sign in with your unix
credentials.
To **allow multiple users to sign in** to the Hub server, you must start
`jupyterhub` as a _privileged user_, such as root:
`jupyterhub` as a *privileged user*, such as root:
```bash
sudo jupyterhub
```
The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges)
describes how to run the server as a _less privileged user_. This requires
describes how to run the server as a *less privileged user*. This requires
additional configuration of the system.
View File
@@ -1,161 +0,0 @@
"""
This script updates two files with the RBAC scope descriptions found in
`scopes.py`.
The files are:
1. scope-table.md
This file is git ignored and referenced by the documentation.
2. rest-api.yml
This file is JupyterHub's REST API schema. Both a version and the RBAC
scopes descriptions are updated in it.
"""
import os
from collections import defaultdict
from pathlib import Path
from subprocess import run
from pytablewriter import MarkdownTableWriter
from ruamel.yaml import YAML
from jupyterhub import __version__
from jupyterhub.scopes import scope_definitions
HERE = os.path.abspath(os.path.dirname(__file__))
DOCS = Path(HERE).parent.parent.absolute()
REST_API_YAML = DOCS.joinpath("source", "_static", "rest-api.yml")
SCOPE_TABLE_MD = Path(HERE).joinpath("scope-table.md")
class ScopeTableGenerator:
def __init__(self):
self.scopes = scope_definitions
@classmethod
def create_writer(cls, table_name, headers, values):
writer = MarkdownTableWriter()
writer.table_name = table_name
writer.headers = headers
writer.value_matrix = values
writer.margin = 1
return writer
def _get_scope_relationships(self):
"""Returns a tuple of dictionary of all scope-subscope pairs and a list of just subscopes:
({scope: subscope}, [subscopes])
used for creating hierarchical scope table in _parse_scopes()
"""
pairs = []
for scope, data in self.scopes.items():
subscopes = data.get('subscopes')
if subscopes is not None:
for subscope in subscopes:
pairs.append((scope, subscope))
else:
pairs.append((scope, None))
subscopes = [pair[1] for pair in pairs]
pairs_dict = defaultdict(list)
for scope, subscope in pairs:
pairs_dict[scope].append(subscope)
return pairs_dict, subscopes
def _get_top_scopes(self, subscopes):
"""Returns a list of highest level scopes
(not a subscope of any other scopes)"""
top_scopes = []
for scope in self.scopes.keys():
if scope not in subscopes:
top_scopes.append(scope)
return top_scopes
def _parse_scopes(self):
"""Returns a list of table rows where row:
[indented scopename string, scope description string]"""
scope_pairs, subscopes = self._get_scope_relationships()
top_scopes = self._get_top_scopes(subscopes)
table_rows = []
md_indent = "&nbsp;&nbsp;&nbsp;"
def _add_subscopes(table_rows, scopename, depth=0):
description = self.scopes[scopename]['description']
doc_description = self.scopes[scopename].get('doc_description', '')
if doc_description:
description = doc_description
table_row = [f"{md_indent * depth}`{scopename}`", description]
table_rows.append(table_row)
for subscope in scope_pairs[scopename]:
if subscope:
_add_subscopes(table_rows, subscope, depth + 1)
for scope in top_scopes:
_add_subscopes(table_rows, scope)
return table_rows
def write_table(self):
"""Generates the RBAC scopes reference documentation as a markdown table
and writes it to the .gitignored `scope-table.md`."""
filename = SCOPE_TABLE_MD
table_name = ""
headers = ["Scope", "Grants permission to:"]
values = self._parse_scopes()
writer = self.create_writer(table_name, headers, values)
title = "Table 1. Available scopes and their hierarchy"
content = f"{title}\n{writer.dumps()}"
with open(filename, 'w') as f:
f.write(content)
print(f"Generated {filename}.")
print(
"Run 'make clean' before 'make html' to ensure the built scopes.html contains latest scope table changes."
)
def write_api(self):
"""Loads `rest-api.yml` and writes it back with a dynamically set
JupyterHub version field and list of RBAC scopes descriptions from
`scopes.py`."""
filename = REST_API_YAML
yaml = YAML(typ="rt")
yaml.preserve_quotes = True
yaml.indent(mapping=2, offset=2, sequence=4)
scope_dict = {}
with open(filename) as f:
content = yaml.load(f.read())
content["info"]["version"] = __version__
for scope in self.scopes:
description = self.scopes[scope]['description']
doc_description = self.scopes[scope].get('doc_description', '')
if doc_description:
description = doc_description
scope_dict[scope] = description
content['components']['securitySchemes']['oauth2']['flows'][
'authorizationCode'
]['scopes'] = scope_dict
with open(filename, 'w') as f:
yaml.dump(content, f)
run(
['pre-commit', 'run', 'prettier', '--files', filename],
cwd=HERE,
check=False,
)
def main():
table_generator = ScopeTableGenerator()
table_generator.write_table()
table_generator.write_api()
if __name__ == "__main__":
main()
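To make the hierarchy-flattening logic above concrete, here is a self-contained toy version of the same idea; the scope names and descriptions are invented for illustration and are not JupyterHub's real scopes:

```python
# Toy illustration of flattening a scope hierarchy into indented table rows.
from collections import defaultdict

toy_scopes = {
    "admin:pets": {"description": "Manage pets", "subscopes": ["read:pets"]},
    "read:pets": {"description": "Read-only access to pets"},
}

# Build parent -> children mapping and record which scopes are subscopes.
children = defaultdict(list)
subscopes = set()
for scope, data in toy_scopes.items():
    for sub in data.get("subscopes") or []:
        children[scope].append(sub)
        subscopes.add(sub)

def rows(scope, depth=0):
    """Yield [indented scope, description] rows, depth-first."""
    yield [f"{'  ' * depth}`{scope}`", toy_scopes[scope]["description"]]
    for sub in children[scope]:
        yield from rows(sub, depth + 1)

# Top-level scopes are those that are not a subscope of anything else.
for top in (s for s in toy_scopes if s not in subscopes):
    for row in rows(top):
        print(row)
# ['`admin:pets`', 'Manage pets']
# ['  `read:pets`', 'Read-only access to pets']
```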
View File
@@ -1,37 +0,0 @@
# JupyterHub RBAC
Role Based Access Control (RBAC) in JupyterHub provides fine-grained control of access to JupyterHub's API resources.
RBAC is new in JupyterHub 2.0.
## Motivation
The JupyterHub API requires authorization to access its endpoints.
This ensures that an arbitrary user, or even an unauthenticated third party, is not allowed to perform arbitrary actions.
For instance, prior to the adoption of RBAC, creating or deleting users required _admin rights_.
The prior system is functional, but lacks flexibility. If your Hub serves a number of users in different groups, you might want to delegate permissions to other users or automate certain processes.
Prior to RBAC, appointing a 'group-only admin' or a bot that culls idle servers requires granting full admin rights to all actions. This poses a risk of the user or service intentionally or unintentionally accessing and modifying any data within the Hub and violates the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege).
To remedy situations like this, JupyterHub is transitioning to an RBAC system. By equipping users, groups and services with _roles_ that supply them with a collection of permissions (_scopes_), administrators are able to fine-tune which parties are granted access to which resources.
## Definitions
**Scopes** are specific permissions used to evaluate API requests. For example: the API endpoint `users/servers`, which enables starting or stopping user servers, is guarded by the scope `servers`.
Scopes are not directly assigned to requesters. Rather, when a client performs an API call, their access will be evaluated based on their assigned roles.
**Roles** are collections of scopes that specify the level of what a client is allowed to do. For example, a group administrator may be granted permission to control the servers of group members, but not to create, modify or delete group members themselves.
Within the RBAC framework, this is achieved by assigning a role to the administrator that covers exactly those privileges.
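As a hedged sketch of how such a role could be declared in `jupyterhub_config.py` (the role name, service name, and exact scope list are illustrative; the scopes reference listed below is the authoritative source for scope names):

```python
# jupyterhub_config.py: granting a custom role to a server-culling service (sketch)
c.JupyterHub.load_roles = [
    {
        "name": "idle-culler",                 # illustrative role name
        "description": "Cull idle single-user servers",
        "scopes": [
            "list:users",           # enumerate users
            "read:users:activity",  # read last-activity timestamps
            "servers",              # start/stop servers (the scope mentioned above)
            "delete:servers",       # remove stopped servers
        ],
        "services": ["idle-culler"],           # the service the role is granted to
    }
]
```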
## Technical Overview
```{toctree}
:maxdepth: 2
roles
scopes
use-cases
tech-implementation
upgrade
```
Some files were not shown because too many files have changed in this diff.