Compare commits

..

18 Commits

Author | SHA1 | Message | Date

Min RK | 5980ff1011 | 0.9.6 | 2019-04-01 12:17:49 +02:00
Min RK | 2e8781c35b | Changelog for 0.9.6 (replace 0.9.5, which has only a partial fix; issue is now confirmed to affect all browsers) | 2019-04-01 12:17:40 +02:00
Min RK | 3f1332e38f | Further login redirect validation | 2019-04-01 12:12:15 +02:00
Min RK | db851cd230 | Merge pull request #2488 from minrk/post_push (Docker hook fixes) | 2019-04-01 12:06:49 +02:00
Min RK | 8c8e26802a | fix unbound variable in post_push | 2019-03-28 13:00:09 +01:00
Min RK | 6a4900c468 | release 0.9.5 | 2019-03-28 11:07:09 +01:00
Min RK | efbb692540 | changelog for 0.9.5 | 2019-03-28 11:04:00 +01:00
Min RK | 244ab813fe | protect against some browsers' buggy handling of backslash as slash (see the sketch after this list) | 2019-03-28 10:30:36 +01:00
Min RK | b1111363fd | release 0.9.4 | 2018-09-24 13:02:36 +02:00
Min RK | 6c99b807c2 | update changelog for 0.9.4 | 2018-09-24 13:00:27 +02:00
Min RK | 8d650f594e | changelog for 0.9.4 | 2018-09-24 12:58:16 +02:00
Min RK | 04a0a3a2e5 | fix oauth client cleanup (delete oauth clients for servers when they shut down; avoid deleting oauth clients for servers still running across an 0.8 -> 0.9 upgrade, when the oauth client ids changed from `user-NAME` to `jupyterhub-user-NAME`) | 2018-09-24 12:58:10 +02:00
Min RK | 9cebfd6367 | Fix content-type on API endpoints (includes content-type header checks in tests to catch regressions) | 2018-09-24 12:57:26 +02:00
Min RK | 587cd70221 | omit pdf builds on rtd due to bug in sphinx | 2018-09-24 12:57:01 +02:00
Min RK | e94f5e043a | release 0.9.3 | 2018-09-12 09:46:02 +02:00
Min RK | 5456fb6356 | remove spurious print from keepalive code (send keepalive every 8 seconds, to protect against possibly aggressive proxies dropping connections after 10 seconds of inactivity) | 2018-09-12 09:46:02 +02:00
Min RK | fb75b9a392 | write needs no await | 2018-09-11 16:42:29 +02:00
Min RK | 90d341e6f7 | changelog for 0.9.3 (mainly small fixes, but the token page could be completely broken; this release will include the spawner.handler addition, but not the oauthlib change currently in master) | 2018-09-11 16:42:21 +02:00
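The 0.9.5/0.9.6 commits above (3f1332e38f and 244ab813fe) tighten login-redirect validation because some browsers treat a backslash in a URL as a slash. As a rough, hypothetical sketch of that kind of check (not JupyterHub's actual implementation), a redirect target could be validated like this:

```python
from urllib.parse import urlparse


def is_safe_next_url(next_url: str) -> bool:
    """Hypothetical check: only allow same-host, path-only redirect targets."""
    # Some browsers interpret "\" as "/", so normalize before validating;
    # otherwise "/\evil.com" may be treated as "//evil.com" by the browser.
    normalized = next_url.replace("\\", "/")
    parsed = urlparse(normalized)
    return (
        not parsed.scheme        # no scheme such as http: or javascript:
        and not parsed.netloc    # no host component, so we stay on this host
        and normalized.startswith("/")
        and not normalized.startswith("//")
    )
```
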
328 changed files with 8942 additions and 37182 deletions

.circleci/config.yml (new file, 21 lines)

@@ -0,0 +1,21 @@
# Python CircleCI 2.0 configuration file
# Updating CircleCI configuration from v1 to v2
# Check https://circleci.com/docs/2.0/language-python/ for more details
#
version: 2
jobs:
build:
machine: true
steps:
- checkout
- run:
name: build images
command: |
docker build -t jupyterhub/jupyterhub .
docker build -t jupyterhub/jupyterhub-onbuild onbuild
docker build -t jupyterhub/jupyterhub:alpine -f dockerfiles/Dockerfile.alpine .
docker build -t jupyterhub/singleuser singleuser
- run:
name: smoke test jupyterhub
command: |
docker run --rm -it jupyterhub/jupyterhub jupyterhub --help


@@ -1,5 +1,4 @@
[run]
parallel = True
branch = False
omit =
jupyterhub/tests/*


@@ -10,12 +10,13 @@
# E402: module level import not at top of file
# I100: Import statements are in the wrong order
# I101: Imported names are in the wrong order. Should be
ignore = E, C, W, F401, F403, F811, F841, E402, I100, I101, D400
builtins = c, get_config
ignore = E, C, W, F401, F403, F811, F841, E402, I100, I101
exclude =
.cache,
.github,
docs,
examples,
jupyterhub/alembic*,
onbuild,
scripts,

.github/ISSUE_TEMPLATE/bug_report.md (new file, 37 lines)

@@ -0,0 +1,37 @@
---
name: Bug report
about: Create a report to help us improve
---
Hi! Thanks for using JupyterHub.
If you are reporting an issue with JupyterHub, please use the [GitHub issue](https://github.com/jupyterhub/jupyterhub/issues) search feature to check if your issue has been asked already. If it has, please add your comments to the existing issue.
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
- Running `jupyter troubleshoot` from the command line, if possible, and posting
its output would also be helpful.
- Running in `--debug` mode can also be helpful for troubleshooting.


@@ -0,0 +1,7 @@
---
name: Installation and configuration issues
about: Installation and configuration assistance
---
If you are having issues with installation or configuration, you may ask for help on the JupyterHub gitter channel or file an issue here.


@@ -1,192 +0,0 @@
# Build releases and (on tags) publish to PyPI
name: Release
# always build releases (to make sure wheel-building works)
# but only publish to PyPI on tags
on:
push:
branches:
- "!dependabot/**"
tags:
- "*"
pull_request:
jobs:
build-release:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.8
- uses: actions/setup-node@v1
with:
node-version: "14"
- name: install build package
run: |
pip install --upgrade pip
pip install build
pip freeze
- name: build release
run: |
python -m build --sdist --wheel .
ls -l dist
- name: verify wheel
run: |
cd dist
pip install ./*.whl
# verify data-files are installed where they are found
cat <<EOF | python
import os
from jupyterhub._data import DATA_FILES_PATH
print(f"DATA_FILES_PATH={DATA_FILES_PATH}")
assert os.path.exists(DATA_FILES_PATH), DATA_FILES_PATH
for subpath in (
"templates/page.html",
"static/css/style.min.css",
"static/components/jquery/dist/jquery.js",
):
path = os.path.join(DATA_FILES_PATH, subpath)
assert os.path.exists(path), path
print("OK")
EOF
# ref: https://github.com/actions/upload-artifact#readme
- uses: actions/upload-artifact@v2
with:
name: jupyterhub-${{ github.sha }}
path: "dist/*"
if-no-files-found: error
- name: Publish to PyPI
if: startsWith(github.ref, 'refs/tags/')
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
run: |
pip install twine
twine upload --skip-existing dist/*
publish-docker:
runs-on: ubuntu-20.04
services:
# So that we can test this in PRs/branches
local-registry:
image: registry:2
ports:
- 5000:5000
steps:
- name: Should we push this image to a public registry?
run: |
if [ "${{ startsWith(github.ref, 'refs/tags/') || (github.ref == 'refs/heads/main') }}" = "true" ]; then
# Empty => Docker Hub
echo "REGISTRY=" >> $GITHUB_ENV
else
echo "REGISTRY=localhost:5000/" >> $GITHUB_ENV
fi
- uses: actions/checkout@v2
# Setup docker to build for multiple platforms, see:
# https://github.com/docker/build-push-action/tree/v2.4.0#usage
# https://github.com/docker/build-push-action/blob/v2.4.0/docs/advanced/multi-platform.md
- name: Set up QEMU (for docker buildx)
uses: docker/setup-qemu-action@25f0500ff22e406f7191a2a8ba8cda16901ca018 # associated tag: v1.0.2
- name: Set up Docker Buildx (for multi-arch builds)
uses: docker/setup-buildx-action@2a4b53665e15ce7d7049afb11ff1f70ff1610609 # associated tag: v1.1.2
with:
# Allows pushing to registry on localhost:5000
driver-opts: network=host
- name: Setup push rights to Docker Hub
# This was setup by...
# 1. Creating a Docker Hub service account "jupyterhubbot"
# 2. Creating a access token for the service account specific to this
# repository: https://hub.docker.com/settings/security
# 3. Making the account part of the "bots" team, and granting that team
# permissions to push to the relevant images:
# https://hub.docker.com/orgs/jupyterhub/teams/bots/permissions
# 4. Registering the username and token as a secret for this repo:
# https://github.com/jupyterhub/jupyterhub/settings/secrets/actions
if: env.REGISTRY != 'localhost:5000/'
run: |
docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" -p "${{ secrets.DOCKERHUB_TOKEN }}"
# https://github.com/jupyterhub/action-major-minor-tag-calculator
# If this is a tagged build this will return additional parent tags.
# E.g. 1.2.3 is expanded to Docker tags
# [{prefix}:1.2.3, {prefix}:1.2, {prefix}:1, {prefix}:latest] unless
# this is a backported tag in which case the newer tags aren't updated.
# For branches this will return the branch name.
# If GITHUB_TOKEN isn't available (e.g. in PRs) returns no tags [].
- name: Get list of jupyterhub tags
id: jupyterhubtags
uses: jupyterhub/action-major-minor-tag-calculator@v1
with:
githubToken: ${{ secrets.GITHUB_TOKEN }}
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub:"
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub:noref"
branchRegex: ^\w[\w-.]*$
- name: Build and push jupyterhub
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f # associated tag: v2.4.0
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
# tags parameter must be a string input so convert `gettags` JSON
# array into a comma separated list of tags
tags: ${{ join(fromJson(steps.jupyterhubtags.outputs.tags)) }}
# jupyterhub-onbuild
- name: Get list of jupyterhub-onbuild tags
id: onbuildtags
uses: jupyterhub/action-major-minor-tag-calculator@v1
with:
githubToken: ${{ secrets.GITHUB_TOKEN }}
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub-onbuild:"
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub-onbuild:noref"
branchRegex: ^\w[\w-.]*$
- name: Build and push jupyterhub-onbuild
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f # associated tag: v2.4.0
with:
build-args: |
BASE_IMAGE=${{ fromJson(steps.jupyterhubtags.outputs.tags)[0] }}
context: onbuild
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ join(fromJson(steps.onbuildtags.outputs.tags)) }}
# jupyterhub-demo
- name: Get list of jupyterhub-demo tags
id: demotags
uses: jupyterhub/action-major-minor-tag-calculator@v1
with:
githubToken: ${{ secrets.GITHUB_TOKEN }}
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub-demo:"
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub-demo:noref"
branchRegex: ^\w[\w-.]*$
- name: Build and push jupyterhub-demo
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f # associated tag: v2.4.0
with:
build-args: |
BASE_IMAGE=${{ fromJson(steps.onbuildtags.outputs.tags)[0] }}
context: demo-image
# linux/arm64 currently fails:
# ERROR: Could not build wheels for argon2-cffi which use PEP 517 and cannot be installed directly
# ERROR: executor failed running [/bin/sh -c python3 -m pip install notebook]: exit code: 1
platforms: linux/amd64
push: true
tags: ${{ join(fromJson(steps.demotags.outputs.tags)) }}
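The comment in the removed release workflow above describes how jupyterhub/action-major-minor-tag-calculator expands a version tag into parent tags. Purely as an illustration of that expansion rule (not the action's real code, and ignoring the backport caveat), the mapping looks like:

```python
# Illustrative sketch of the tag expansion described in the workflow comment:
# a tagged build 1.2.3 becomes [prefix:1.2.3, prefix:1.2, prefix:1, prefix:latest].
def expand_tags(version: str, prefix: str = "jupyterhub/jupyterhub:") -> list:
    parts = version.split(".")
    tags = [prefix + ".".join(parts[: i + 1]) for i in range(len(parts))]
    tags.reverse()  # most specific first: 1.2.3, 1.2, 1
    tags.append(prefix + "latest")
    return tags


# expand_tags("1.2.3") ->
# ['jupyterhub/jupyterhub:1.2.3', 'jupyterhub/jupyterhub:1.2',
#  'jupyterhub/jupyterhub:1', 'jupyterhub/jupyterhub:latest']
```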


@@ -1,215 +0,0 @@
# This is a GitHub workflow defining a set of jobs with a set of steps.
# ref: https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions
#
name: Test
# Trigger the workflow's on all PRs but only on pushed tags or commits to
# main/master branch to avoid PRs developed in a GitHub fork's dedicated branch
# to trigger.
on:
pull_request:
push:
workflow_dispatch:
env:
# UTF-8 content may be interpreted as ascii and causes errors without this.
LANG: C.UTF-8
jobs:
# Run "pytest jupyterhub/tests" in various configurations
pytest:
runs-on: ubuntu-20.04
timeout-minutes: 10
strategy:
# Keep running even if one variation of the job fail
fail-fast: false
matrix:
# We run this job multiple times with different parameterization
# specified below, these parameters have no meaning on their own and
# gain meaning on how job steps use them.
#
# subdomain:
# Tests everything when JupyterHub is configured to add routes for
# users with dedicated subdomains like user1.jupyter.example.com
# rather than jupyter.example.com/user/user1.
#
# db: [mysql/postgres]
# Tests everything when JupyterHub works against a dedicated mysql or
# postgresql server.
#
# jupyter_server:
# Tests everything when the user instances are started with
# jupyter_server instead of notebook.
#
# ssl:
# Tests everything using internal SSL connections instead of
# unencrypted HTTP
#
# main_dependencies:
# Tests everything when the we use the latest available dependencies
# from: ipytraitlets.
#
# NOTE: Since only the value of these parameters are presented in the
# GitHub UI when the workflow run, we avoid using true/false as
# values by instead duplicating the name to signal true.
include:
- python: "3.6"
oldest_dependencies: oldest_dependencies
- python: "3.6"
subdomain: subdomain
- python: "3.7"
db: mysql
- python: "3.7"
ssl: ssl
- python: "3.8"
db: postgres
- python: "3.8"
jupyter_server: jupyter_server
- python: "3.9"
main_dependencies: main_dependencies
steps:
# NOTE: In GitHub workflows, environment variables are set by writing
# assignment statements to a file. They will be set in the following
# steps as if would used `export MY_ENV=my-value`.
- name: Configure environment variables
run: |
if [ "${{ matrix.subdomain }}" != "" ]; then
echo "JUPYTERHUB_TEST_SUBDOMAIN_HOST=http://localhost.jovyan.org:8000" >> $GITHUB_ENV
fi
if [ "${{ matrix.db }}" == "mysql" ]; then
echo "MYSQL_HOST=127.0.0.1" >> $GITHUB_ENV
echo "JUPYTERHUB_TEST_DB_URL=mysql+mysqlconnector://root@127.0.0.1:3306/jupyterhub" >> $GITHUB_ENV
fi
if [ "${{ matrix.ssl }}" == "ssl" ]; then
echo "SSL_ENABLED=1" >> $GITHUB_ENV
fi
if [ "${{ matrix.db }}" == "postgres" ]; then
echo "PGHOST=127.0.0.1" >> $GITHUB_ENV
echo "PGUSER=test_user" >> $GITHUB_ENV
echo "PGPASSWORD=hub[test/:?" >> $GITHUB_ENV
echo "JUPYTERHUB_TEST_DB_URL=postgresql://test_user:hub%5Btest%2F%3A%3F@127.0.0.1:5432/jupyterhub" >> $GITHUB_ENV
fi
if [ "${{ matrix.jupyter_server }}" != "" ]; then
echo "JUPYTERHUB_SINGLEUSER_APP=jupyterhub.tests.mockserverapp.MockServerApp" >> $GITHUB_ENV
fi
- uses: actions/checkout@v2
# NOTE: actions/setup-node@v1 make use of a cache within the GitHub base
# environment and setup in a fraction of a second.
- name: Install Node v14
uses: actions/setup-node@v1
with:
node-version: "14"
- name: Install Node dependencies
run: |
npm install
npm install -g configurable-http-proxy
npm install -g yarn
npm list
# NOTE: actions/setup-python@v2 make use of a cache within the GitHub base
# environment and setup in a fraction of a second.
- name: Install Python ${{ matrix.python }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python }}
- name: Install Python dependencies
run: |
pip install --upgrade pip
pip install --upgrade . -r dev-requirements.txt
if [ "${{ matrix.oldest_dependencies }}" != "" ]; then
# take any dependencies in requirements.txt such as tornado>=5.0
# and transform them to tornado==5.0 so we can run tests with
# the earliest-supported versions
cat requirements.txt | grep '>=' | sed -e 's@>=@==@g' > oldest-requirements.txt
pip install -r oldest-requirements.txt
fi
if [ "${{ matrix.main_dependencies }}" != "" ]; then
pip install git+https://github.com/ipython/traitlets#egg=traitlets --force
fi
if [ "${{ matrix.jupyter_server }}" != "" ]; then
pip uninstall notebook --yes
pip install jupyter_server
fi
if [ "${{ matrix.db }}" == "mysql" ]; then
pip install mysql-connector-python
fi
if [ "${{ matrix.db }}" == "postgres" ]; then
pip install psycopg2-binary
fi
pip freeze
# NOTE: If you need to debug this DB setup step, consider the following.
#
# 1. mysql/postgressql are database servers we start as docker containers,
# and we use clients named mysql/psql.
#
# 2. When we start a database server we need to pass environment variables
# explicitly as part of the `docker run` command. These environment
# variables are named differently from the similarly named environment
# variables used by the clients.
#
# - mysql server ref: https://hub.docker.com/_/mysql/
# - mysql client ref: https://dev.mysql.com/doc/refman/5.7/en/environment-variables.html
# - postgres server ref: https://hub.docker.com/_/postgres/
# - psql client ref: https://www.postgresql.org/docs/9.5/libpq-envars.html
#
# 3. When we connect, they should use 127.0.0.1 rather than the
# default way of connecting which leads to errors like below both for
# mysql and postgresql unless we set MYSQL_HOST/PGHOST to 127.0.0.1.
#
# - ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
#
- name: Start a database server (${{ matrix.db }})
if: ${{ matrix.db }}
run: |
if [ "${{ matrix.db }}" == "mysql" ]; then
sudo apt-get update
sudo apt-get install -y mysql-client
DB=mysql bash ci/docker-db.sh
DB=mysql bash ci/init-db.sh
fi
if [ "${{ matrix.db }}" == "postgres" ]; then
sudo apt-get update
sudo apt-get install -y postgresql-client
DB=postgres bash ci/docker-db.sh
DB=postgres bash ci/init-db.sh
fi
- name: Run pytest
# FIXME: --color=yes explicitly set because:
# https://github.com/actions/runner/issues/241
run: |
pytest -v --maxfail=2 --color=yes --cov=jupyterhub jupyterhub/tests
- name: Run yarn jest test
run: |
cd jsx && yarn && yarn test
- name: Submit codecov report
run: |
codecov
docker-build:
runs-on: ubuntu-20.04
timeout-minutes: 10
steps:
- uses: actions/checkout@v2
- name: build images
run: |
docker build -t jupyterhub/jupyterhub .
docker build -t jupyterhub/jupyterhub-onbuild onbuild
docker build -t jupyterhub/jupyterhub:alpine -f dockerfiles/Dockerfile.alpine .
docker build -t jupyterhub/singleuser singleuser
- name: smoke test jupyterhub
run: |
docker run --rm -t jupyterhub/jupyterhub jupyterhub --help
- name: verify static files
run: |
docker run --rm -t -v $PWD/dockerfiles:/io jupyterhub/jupyterhub python3 /io/test.py

.gitignore (8 lines changed)

@@ -8,7 +8,6 @@ dist
docs/_build
docs/build
docs/source/_static/rest-api
docs/source/rbac/scope-table.md
.ipynb_checkpoints
# ignore config file at the top-level of the repo
# but not sub-dirs
@@ -22,13 +21,6 @@ share/jupyterhub/static/css/style.min.css.map
*.egg-info
MANIFEST
.coverage
.coverage.*
htmlcov
.idea/
.vscode/
.pytest_cache
pip-wheel-metadata
docs/source/reference/metrics.rst
oldest-requirements.txt
jupyterhub-proxy.pid
examples/server-api/service-token


@@ -1,30 +0,0 @@
repos:
- repo: https://github.com/asottile/pyupgrade
rev: v2.26.0
hooks:
- id: pyupgrade
args:
- --py36-plus
- repo: https://github.com/asottile/reorder_python_imports
rev: v2.6.0
hooks:
- id: reorder-python-imports
- repo: https://github.com/psf/black
rev: 21.8b0
hooks:
- id: black
- repo: https://github.com/pre-commit/mirrors-prettier
rev: v2.4.0
hooks:
- id: prettier
- repo: https://github.com/PyCQA/flake8
rev: "3.9.2"
hooks:
- id: flake8
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.0.1
hooks:
- id: end-of-file-fixer
- id: check-case-conflict
- id: check-executables-have-shebangs
- id: requirements-txt-fixer


@@ -1,2 +0,0 @@
share/jupyterhub/templates/
share/jupyterhub/static/js/admin-react.js

.travis.yml (new file, 68 lines)

@@ -0,0 +1,68 @@
language: python
sudo: false
cache:
- pip
python:
- 3.6
- 3.5
- nightly
env:
global:
- ASYNC_TEST_TIMEOUT=15
- MYSQL_HOST=127.0.0.1
- MYSQL_TCP_PORT=13306
services:
- postgres
- docker
# installing dependencies
before_install:
- nvm install 6; nvm use 6
- npm install
- npm install -g configurable-http-proxy
- |
# setup database
if [[ $JUPYTERHUB_TEST_DB_URL == mysql* ]]; then
unset MYSQL_UNIX_PORT
DB=mysql bash ci/docker-db.sh
DB=mysql bash ci/init-db.sh
pip install 'mysql-connector<2.2'
elif [[ $JUPYTERHUB_TEST_DB_URL == postgresql* ]]; then
DB=postgres bash ci/init-db.sh
pip install psycopg2-binary
fi
install:
- pip install --upgrade pip
- pip install --pre -r dev-requirements.txt .
- pip freeze
# running tests
script:
- |
# run tests
set -e
pytest -v --maxfail=2 --cov=jupyterhub jupyterhub/tests
- |
# build docs
pushd docs
pip install -r requirements.txt
make html
popd
after_success:
- codecov
matrix:
fast_finish: true
include:
- python: 3.6
env: JUPYTERHUB_TEST_SUBDOMAIN_HOST=http://localhost.jovyan.org:8000
- python: 3.6
env:
- JUPYTERHUB_TEST_DB_URL=mysql+mysqlconnector://root@127.0.0.1:$MYSQL_TCP_PORT/jupyterhub
- python: 3.6
env:
- JUPYTERHUB_TEST_DB_URL=postgresql://postgres@127.0.0.1/jupyterhub
- python: 3.7
dist: xenial
allow_failures:
- python: nightly


@@ -2,24 +2,24 @@
- [ ] Upgrade Docs prior to Release
- [ ] Change log
- [ ] New features documented
- [ ] Update the contributor list - thank you page
- [ ] Change log
- [ ] New features documented
- [ ] Update the contributor list - thank you page
- [ ] Upgrade and test Reference Deployments
- [ ] Release software
- [ ] Make sure 0 issues in milestone
- [ ] Follow release process steps
- [ ] Send builds to PyPI (Warehouse) and Conda Forge
- [ ] Make sure 0 issues in milestone
- [ ] Follow release process steps
- [ ] Send builds to PyPI (Warehouse) and Conda Forge
- [ ] Blog post and/or release note
- [ ] Notify users of release
- [ ] Email Jupyter and Jupyter In Education mailing lists
- [ ] Tweet (optional)
- [ ] Email Jupyter and Jupyter In Education mailing lists
- [ ] Tweet (optional)
- [ ] Increment the version number for the next release


@@ -1 +1 @@
Please refer to [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md).
Please refer to [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md).


@@ -1,139 +1,98 @@
# Contributing to JupyterHub
# Contributing
Welcome! As a [Jupyter](https://jupyter.org) project,
you can follow the [Jupyter contributor guide](https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html).
Welcome! As a [Jupyter](https://jupyter.org) project, we follow the [Jupyter contributor guide](https://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html).
Make sure to also follow [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md)
for a friendly and welcoming collaborative environment.
## Setting up a development environment
## Set up your development system
<!--
https://jupyterhub.readthedocs.io/en/stable/contributing/setup.html
contains a lot of the same information. Should we merge the docs and
just have this page link to that one?
-->
JupyterHub requires Python >= 3.5 and nodejs.
As a Python project, a development install of JupyterHub follows standard practices for the basics (steps 1-2).
1. clone the repo
```bash
git clone https://github.com/jupyterhub/jupyterhub
```
2. do a development install with pip
```bash
cd jupyterhub
python3 -m pip install --editable .
```
3. install the development requirements,
which include things like testing tools
```bash
python3 -m pip install -r dev-requirements.txt
```
4. install configurable-http-proxy with npm:
```bash
npm install -g configurable-http-proxy
```
5. set up pre-commit hooks for automatic code formatting, etc.
```bash
pre-commit install
```
You can also invoke the pre-commit hook manually at any time with
```bash
pre-commit run
```
## Contributing
JupyterHub has adopted automatic code formatting so you shouldn't
need to worry too much about your code style.
As long as your code is valid,
the pre-commit hook should take care of how it should look.
You can invoke the pre-commit hook by hand at any time with:
For a development install, clone the [repository](https://github.com/jupyterhub/jupyterhub)
and then install from source:
```bash
pre-commit run
git clone https://github.com/jupyterhub/jupyterhub
cd jupyterhub
npm install -g configurable-http-proxy
pip3 install -r dev-requirements.txt -e .
```
which should run any autoformatting on your code
and tell you about any errors it couldn't fix automatically.
You may also install [black integration](https://github.com/psf/black#editor-integration)
into your text editor to format code automatically.
### Troubleshooting a development install
If you have already committed files before setting up the pre-commit
hook with `pre-commit install`, you can fix everything up using
`pre-commit run --all-files`. You need to make the fixing commit
yourself after that.
If the `pip3 install` command fails and complains about `lessc` being
unavailable, you may need to explicitly install some additional JavaScript
dependencies:
## Testing
npm install
It's a good idea to write tests to exercise any new features,
or that trigger any bugs that you have fixed to catch regressions.
This will fetch client-side JavaScript dependencies necessary to compile CSS.
You can run the tests with:
You may also need to manually update JavaScript and CSS after some development
updates, with:
```bash
pytest -v
python3 setup.py js # fetch updated client-side js
python3 setup.py css # recompile CSS from LESS sources
```
in the repo directory. If you want to just run certain tests,
check out the [pytest docs](https://pytest.readthedocs.io/en/latest/usage.html)
for how pytest can be called.
For instance, to test only spawner-related things in the REST API:
## Running the test suite
We use [pytest](http://doc.pytest.org/en/latest/) for running tests.
1. Set up a development install as described above.
2. Set environment variable for `ASYNC_TEST_TIMEOUT` to 15 seconds:
```bash
pytest -v -k spawn jupyterhub/tests/test_api.py
export ASYNC_TEST_TIMEOUT=15
```
The tests live in `jupyterhub/tests` and are organized roughly into:
3. Run tests.
1. `test_api.py` tests the REST API
2. `test_pages.py` tests loading the HTML pages
To run all the tests:
and other collections of tests for different components.
When writing a new test, there should usually be a test of
similar functionality already written and related tests should
be added nearby.
```bash
pytest -v jupyterhub/tests
```
The fixtures live in `jupyterhub/tests/conftest.py`. There are
fixtures that can be used for JupyterHub components, such as:
To run an individual test file (i.e. `test_api.py`):
- `app`: an instance of JupyterHub with mocked parts
- `auth_state_enabled`: enables persisting auth_state (like authentication tokens)
- `db`: a sqlite in-memory DB session
- `io_loop`: a Tornado event loop
- `event_loop`: a new asyncio event loop
- `user`: creates a new temporary user
- `admin_user`: creates a new temporary admin user
- single user servers
- `cleanup_after`: allows cleanup of single user servers between tests
- mocked service
- `MockServiceSpawner`: a spawner that mocks services for testing with a short poll interval
- `mockservice`: mocked service with no external service url
- `mockservice_url`: mocked service with a url to test external services
```bash
pytest -v jupyterhub/tests/test_api.py
```
And fixtures to add functionality or spawning behavior:
### Troubleshooting tests
- `admin_access`: grants admin access
- `no_patience`: sets slow-spawning timeouts to zero
- `slow_spawn`: enables the SlowSpawner (a spawner that takes a few seconds to start)
- `never_spawn`: enables the NeverSpawner (a spawner that will never start)
- `bad_spawn`: enables the BadSpawner (a spawner that fails immediately)
- `slow_bad_spawn`: enables the SlowBadSpawner (a spawner that fails after a short delay)
If you see test failures because of timeouts, you may wish to increase the
`ASYNC_TEST_TIMEOUT` used by the
[pytest-tornado-plugin](https://github.com/eugeniy/pytest-tornado/blob/c79f68de2222eb7cf84edcfe28650ebf309a4d0c/README.rst#markers)
from the default of 5 seconds:
To read more about fixtures check out the
[pytest docs](https://docs.pytest.org/en/latest/fixture.html)
for how to use the existing fixtures, and how to create new ones.
```bash
export ASYNC_TEST_TIMEOUT=15
```
When in doubt, feel free to [ask](https://gitter.im/jupyterhub/jupyterhub).
If you see many test errors and failures, double check that you have installed
`configurable-http-proxy`.
## Building the Docs locally
1. Install the development system as described above.
2. Install the dependencies for documentation:
```bash
python3 -m pip install -r docs/requirements.txt
```
3. Build the docs:
```bash
cd docs
make clean
make html
```
4. View the docs:
```bash
open build/html/index.html
```
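The fixture names listed in the CONTRIBUTING diff above (`app`, `user`, `admin_user`, and so on) are provided by `jupyterhub/tests/conftest.py`. As a purely hypothetical sketch of how such fixtures might be combined in a new test (the assertions are illustrative, not taken from the repo):

```python
# Hypothetical test sketch; `app`, `user`, and `admin_user` are the conftest
# fixtures named above. Async test collection is handled by the repo's
# pytest configuration.
async def test_temporary_users(app, user, admin_user):
    # `app` is a JupyterHub instance with mocked parts;
    # `user` and `admin_user` are fresh temporary accounts.
    assert user.name != admin_user.name
    assert admin_user.admin
```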


@@ -24,7 +24,7 @@ software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
@@ -46,8 +46,8 @@ Jupyter uses a shared copyright model. Each contributor maintains copyright
over their contributions to Jupyter. But, it is important to note that these
contributions are typically only changes to the repositories. Thus, the Jupyter
source code, in its entirety is not the copyright of any single person or
institution. Instead, it is the collective copyright of the entire Jupyter
Development Team. If individual contributors want to maintain a record of what
institution. Instead, it is the collective copyright of the entire Jupyter
Development Team. If individual contributors want to maintain a record of what
changes/contributions they have specific copyright on, they should indicate
their copyright in the commit message of the change, when they commit the
change to one of the Jupyter repositories.


@@ -21,81 +21,40 @@
# your jupyterhub_config.py will be added automatically
# from your docker directory.
ARG BASE_IMAGE=ubuntu:focal-20200729
FROM $BASE_IMAGE AS builder
USER root
FROM ubuntu:18.04
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
# install nodejs, utf8 locale, set CDN because default httpredir is unreliable
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update \
&& apt-get install -yq --no-install-recommends \
build-essential \
ca-certificates \
locales \
python3-dev \
python3-pip \
python3-pycurl \
nodejs \
npm \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get -y update && \
apt-get -y upgrade && \
apt-get -y install wget git bzip2 && \
apt-get purge && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
ENV LANG C.UTF-8
RUN python3 -m pip install --upgrade setuptools pip wheel
# install Python + NodeJS with conda
RUN wget -q https://repo.continuum.io/miniconda/Miniconda3-4.5.1-Linux-x86_64.sh -O /tmp/miniconda.sh && \
echo '0c28787e3126238df24c5d4858bd0744 */tmp/miniconda.sh' | md5sum -c - && \
bash /tmp/miniconda.sh -f -b -p /opt/conda && \
/opt/conda/bin/conda install --yes -c conda-forge \
python=3.6 sqlalchemy tornado jinja2 traitlets requests pip pycurl \
nodejs configurable-http-proxy && \
/opt/conda/bin/pip install --upgrade pip && \
rm /tmp/miniconda.sh
ENV PATH=/opt/conda/bin:$PATH
# copy everything except whats in .dockerignore, its a
# compromise between needing to rebuild and maintaining
# what needs to be part of the build
COPY . /src/jupyterhub/
ADD . /src/jupyterhub
WORKDIR /src/jupyterhub
# Build client component packages (they will be copied into ./share and
# packaged with the built wheel.)
RUN python3 setup.py bdist_wheel
RUN python3 -m pip wheel --wheel-dir wheelhouse dist/*.whl
FROM $BASE_IMAGE
USER root
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get install -yq --no-install-recommends \
ca-certificates \
curl \
gnupg \
locales \
python3-pip \
python3-pycurl \
nodejs \
npm \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ENV SHELL=/bin/bash \
LC_ALL=en_US.UTF-8 \
LANG=en_US.UTF-8 \
LANGUAGE=en_US.UTF-8
RUN locale-gen $LC_ALL
# always make sure pip is up to date!
RUN python3 -m pip install --no-cache --upgrade setuptools pip
RUN npm install -g configurable-http-proxy@^4.2.0 \
&& rm -rf ~/.npm
# install the wheels we built in the first stage
COPY --from=builder /src/jupyterhub/wheelhouse /tmp/wheelhouse
RUN python3 -m pip install --no-cache /tmp/wheelhouse/*
RUN pip install . && \
rm -rf $PWD ~/.cache ~/.npm
RUN mkdir -p /srv/jupyterhub/
WORKDIR /srv/jupyterhub/
EXPOSE 8000
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
LABEL org.jupyter.service="jupyterhub"
CMD ["jupyterhub"]

PULL_REQUEST_TEMPLATE.md (new file, 1 line)

@@ -0,0 +1 @@

README.md (106 lines changed)

@@ -6,37 +6,26 @@
**[License](#license)** |
**[Help and Resources](#help-and-resources)**
---
Please note that this repository is participating in a study into the sustainability of open source projects. Data will be gathered about this repository for approximately the next 12 months, starting from 2021-06-11.
Data collected will include the number of contributors, number of PRs, time taken to close/merge these PRs, and issues closed.
For more information, please visit
[our informational page](https://sustainable-open-science-and-software.github.io/) or download our [participant information sheet](https://sustainable-open-science-and-software.github.io/assets/PIS_sustainable_software.pdf).
---
# [JupyterHub](https://github.com/jupyterhub/jupyterhub)
[![Latest PyPI version](https://img.shields.io/pypi/v/jupyterhub?logo=pypi)](https://pypi.python.org/pypi/jupyterhub)
[![Latest conda-forge version](https://img.shields.io/conda/vn/conda-forge/jupyterhub?logo=conda-forge)](https://anaconda.org/conda-forge/jupyterhub)
[![Documentation build status](https://img.shields.io/readthedocs/jupyterhub?logo=read-the-docs)](https://jupyterhub.readthedocs.org/en/latest/)
[![GitHub Workflow Status - Test](https://img.shields.io/github/workflow/status/jupyterhub/jupyterhub/Test?logo=github&label=tests)](https://github.com/jupyterhub/jupyterhub/actions)
[![DockerHub build status](https://img.shields.io/docker/build/jupyterhub/jupyterhub?logo=docker&label=build)](https://hub.docker.com/r/jupyterhub/jupyterhub/tags)
[![Test coverage of code](https://codecov.io/gh/jupyterhub/jupyterhub/branch/main/graph/badge.svg)](https://codecov.io/gh/jupyterhub/jupyterhub)
[![GitHub](https://img.shields.io/badge/issue_tracking-github-blue?logo=github)](https://github.com/jupyterhub/jupyterhub/issues)
[![Discourse](https://img.shields.io/badge/help_forum-discourse-blue?logo=discourse)](https://discourse.jupyter.org/c/jupyterhub)
[![Gitter](https://img.shields.io/badge/social_chat-gitter-blue?logo=gitter)](https://gitter.im/jupyterhub/jupyterhub)
[![PyPI](https://img.shields.io/pypi/v/jupyterhub.svg)](https://pypi.python.org/pypi/jupyterhub)
[![Documentation Status](https://readthedocs.org/projects/jupyterhub/badge/?version=latest)](https://jupyterhub.readthedocs.org/en/latest/?badge=latest)
[![Documentation Status](http://readthedocs.org/projects/jupyterhub/badge/?version=0.7.2)](https://jupyterhub.readthedocs.io/en/0.7.2/?badge=0.7.2)
[![Build Status](https://travis-ci.org/jupyterhub/jupyterhub.svg?branch=master)](https://travis-ci.org/jupyterhub/jupyterhub)
[![Circle CI](https://circleci.com/gh/jupyterhub/jupyterhub.svg?style=shield&circle-token=b5b65862eb2617b9a8d39e79340b0a6b816da8cc)](https://circleci.com/gh/jupyterhub/jupyterhub)
[![codecov.io](https://codecov.io/github/jupyterhub/jupyterhub/coverage.svg?branch=master)](https://codecov.io/github/jupyterhub/jupyterhub?branch=master)
[![Google Group](https://img.shields.io/badge/google-group-blue.svg)](https://groups.google.com/forum/#!forum/jupyter)
With [JupyterHub](https://jupyterhub.readthedocs.io) you can create a
**multi-user Hub** that spawns, manages, and proxies multiple instances of the
**multi-user Hub** which spawns, manages, and proxies multiple instances of the
single-user [Jupyter notebook](https://jupyter-notebook.readthedocs.io)
server.
[Project Jupyter](https://jupyter.org) created JupyterHub to support many
users. The Hub can offer notebook servers to a class of students, a corporate
data science workgroup, a scientific research project, or a high-performance
data science workgroup, a scientific research project, or a high performance
computing group.
## Technical overview
@@ -50,30 +39,38 @@ Three main actors make up JupyterHub:
Basic principles for operation are:
- Hub launches a proxy.
- The Proxy forwards all requests to Hub by default.
- Hub handles login and spawns single-user servers on demand.
- Hub configures proxy to forward URL prefixes to the single-user notebook
- Proxy forwards all requests to Hub by default.
- Hub handles login, and spawns single-user servers on demand.
- Hub configures proxy to forward url prefixes to the single-user notebook
servers.
JupyterHub also provides a
[REST API](https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/HEAD/docs/rest-api.yml#/default)
[REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default)
for administration of the Hub and its users.
## Installation
### Check prerequisites
- A Linux/Unix based system
- [Python](https://www.python.org/downloads/) 3.6 or greater
- [Python](https://www.python.org/downloads/) 3.5 or greater
- [nodejs/npm](https://www.npmjs.com/)
- If you are using **`conda`**, the nodejs and npm dependencies will be installed for
* If you are using **`conda`**, the nodejs and npm dependencies will be installed for
you by conda.
- If you are using **`pip`**, install a recent version (at least 12.0) of
* If you are using **`pip`**, install a recent version of
[nodejs/npm](https://docs.npmjs.com/getting-started/installing-node).
For example, install it on Linux (Debian/Ubuntu) using:
```
sudo apt-get install npm nodejs-legacy
```
The `nodejs-legacy` package installs the `node` executable and is currently
required for npm to work on Debian/Ubuntu.
- If using the default PAM Authenticator, a [pluggable authentication module (PAM)](https://en.wikipedia.org/wiki/Pluggable_authentication_module).
- TLS certificate and key for HTTPS communication
- Domain name
@@ -87,11 +84,12 @@ To install JupyterHub along with its dependencies including nodejs/npm:
conda install -c conda-forge jupyterhub
```
If you plan to run notebook servers locally, install JupyterLab or Jupyter notebook:
If you plan to run notebook servers locally, install the Jupyter notebook
or JupyterLab:
```bash
conda install jupyterlab
conda install notebook
conda install jupyterlab
```
#### Using `pip`
@@ -100,13 +98,13 @@ JupyterHub can be installed with `pip`, and the proxy with `npm`:
```bash
npm install -g configurable-http-proxy
python3 -m pip install jupyterhub
python3 -m pip install jupyterhub
```
If you plan to run notebook servers locally, you will need to install
[JupyterLab or Jupyter notebook](https://jupyter.readthedocs.io/en/latest/install.html):
If you plan to run notebook servers locally, you will need to install the
[Jupyter notebook](https://jupyter.readthedocs.io/en/latest/install.html)
package:
python3 -m pip install --upgrade jupyterlab
python3 -m pip install --upgrade notebook
### Run the Hub server
@@ -118,10 +116,10 @@ To start the Hub server, run the command:
Visit `https://localhost:8000` in your browser, and sign in with your unix
PAM credentials.
_Note_: To allow multiple users to sign in to the server, you will need to
run the `jupyterhub` command as a _privileged user_, such as root.
*Note*: To allow multiple users to sign into the server, you will need to
run the `jupyterhub` command as a *privileged user*, such as root.
The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges)
describes how to run the server as a _less privileged user_, which requires
describes how to run the server as a *less privileged user*, which requires
more configuration of the system.
## Configuration
@@ -140,18 +138,18 @@ To generate a default config file with settings and descriptions:
### Start the Hub
To start the Hub on a specific url and port `10.0.1.2:443` with **https**:
To start the Hub on a specific url and port ``10.0.1.2:443`` with **https**:
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
### Authenticators
| Authenticator | Description |
| ---------------------------------------------------------------------------- | ------------------------------------------------- |
| PAMAuthenticator | Default, built-in authenticator |
| [OAuthenticator](https://github.com/jupyterhub/oauthenticator) | OAuth + JupyterHub Authenticator = OAuthenticator |
| [ldapauthenticator](https://github.com/jupyterhub/ldapauthenticator) | Simple LDAP Authenticator Plugin for JupyterHub |
| [kerberosauthenticator](https://github.com/jupyterhub/kerberosauthenticator) | Kerberos Authenticator Plugin for JupyterHub |
| Authenticator | Description |
| --------------------------------------------------------------------------- | ------------------------------------------------- |
| PAMAuthenticator | Default, built-in authenticator |
| [OAuthenticator](https://github.com/jupyterhub/oauthenticator) | OAuth + JupyterHub Authenticator = OAuthenticator |
| [ldapauthenticator](https://github.com/jupyterhub/ldapauthenticator) | Simple LDAP Authenticator Plugin for JupyterHub |
| [kdcAuthenticator](https://github.com/bloomberg/jupyterhub-kdcauthenticator)| Kerberos Authenticator Plugin for JupyterHub |
### Spawners
@@ -163,7 +161,6 @@ To start the Hub on a specific url and port `10.0.1.2:443` with **https**:
| [sudospawner](https://github.com/jupyterhub/sudospawner) | Spawn single-user servers without being root |
| [systemdspawner](https://github.com/jupyterhub/systemdspawner) | Spawn single-user notebook servers using systemd |
| [batchspawner](https://github.com/jupyterhub/batchspawner) | Designed for clusters using batch scheduling software |
| [yarnspawner](https://github.com/jupyterhub/yarnspawner) | Spawn single-user notebook servers distributed on a Hadoop cluster |
| [wrapspawner](https://github.com/jupyterhub/wrapspawner) | WrapSpawner and ProfilesSpawner enabling runtime configuration of spawners |
## Docker
@@ -202,14 +199,11 @@ These accounts will be used for authentication in JupyterHub's default configura
## Contributing
If you would like to contribute to the project, please read our
[contributor documentation](https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html)
[contributor documentation](http://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html)
and the [`CONTRIBUTING.md`](CONTRIBUTING.md). The `CONTRIBUTING.md` file
explains how to set up a development installation, how to run the test suite,
and how to contribute to documentation.
For a high-level view of the vision and next directions of the project, see the
[JupyterHub community roadmap](docs/source/contributing/roadmap.md).
### A note about platform support
JupyterHub is supported on Linux/Unix based systems.
@@ -229,22 +223,20 @@ docker container or Linux VM.
We use a shared copyright model that enables all contributors to maintain the
copyright on their contributions.
All code is licensed under the terms of the [revised BSD license](./COPYING.md).
All code is licensed under the terms of the revised BSD license.
## Help and resources
We encourage you to ask questions and share ideas on the [Jupyter community forum](https://discourse.jupyter.org/).
You can also talk with us on our JupyterHub [Gitter](https://gitter.im/jupyterhub/jupyterhub) channel.
We encourage you to ask questions on the [Jupyter mailing list](https://groups.google.com/forum/#!forum/jupyter).
To participate in development discussions or get help, talk with us on
our JupyterHub [Gitter](https://gitter.im/jupyterhub/jupyterhub) channel.
- [Reporting Issues](https://github.com/jupyterhub/jupyterhub/issues)
- [JupyterHub tutorial](https://github.com/jupyterhub/jupyterhub-tutorial)
- [Documentation for JupyterHub](https://jupyterhub.readthedocs.io/en/latest/) | [PDF (latest)](https://media.readthedocs.org/pdf/jupyterhub/latest/jupyterhub.pdf) | [PDF (stable)](https://media.readthedocs.org/pdf/jupyterhub/stable/jupyterhub.pdf)
- [Documentation for JupyterHub's REST API](https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/HEAD/docs/rest-api.yml#/default)
- [Documentation for JupyterHub's REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default)
- [Documentation for Project Jupyter](http://jupyter.readthedocs.io/en/latest/index.html) | [PDF](https://media.readthedocs.org/pdf/jupyter/latest/jupyter.pdf)
- [Project Jupyter website](https://jupyter.org)
- [Project Jupyter community](https://jupyter.org/community)
JupyterHub follows the Jupyter [Community Guides](https://jupyter.readthedocs.io/en/latest/community/content-community.html).
---


@@ -1,16 +1,19 @@
#!/usr/bin/env python
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
"""
bower-lite
Since Bower's on its way out,
stage frontend dependencies from node_modules into components
"""
import json
import os
import shutil
from os.path import join
import shutil
HERE = os.path.abspath(os.path.dirname(__file__))
@@ -29,5 +32,5 @@ dependencies = package_json['dependencies']
for dep in dependencies:
src = join(node_modules, dep)
dest = join(components, dep)
print(f"{src} -> {dest}")
print("%s -> %s" % (src, dest))
shutil.copytree(src, dest)


@@ -1,60 +1,50 @@
#!/usr/bin/env bash
# The goal of this script is to start a database server as a docker container.
#
# Required environment variables:
# - DB: The database server to start, either "postgres" or "mysql".
#
# - PGUSER/PGPASSWORD: For the creation of a postgresql user with associated
# password.
# source this file to setup postgres and mysql
# for local testing (as similar as possible to docker)
set -eu
set -e
# Stop and remove any existing database container
DOCKER_CONTAINER="hub-test-$DB"
docker rm -f "$DOCKER_CONTAINER" 2>/dev/null || true
export MYSQL_HOST=127.0.0.1
export MYSQL_TCP_PORT=${MYSQL_TCP_PORT:-13306}
export PGHOST=127.0.0.1
NAME="hub-test-$DB"
DOCKER_RUN="docker run -d --name $NAME"
# Prepare environment variables to startup and await readiness of either a mysql
# or postgresql server.
if [[ "$DB" == "mysql" ]]; then
# Environment variables can influence both the mysql server in the docker
# container and the mysql client.
#
# ref server: https://hub.docker.com/_/mysql/
# ref client: https://dev.mysql.com/doc/refman/5.7/en/setting-environment-variables.html
#
DOCKER_RUN_ARGS="-p 3306:3306 --env MYSQL_ALLOW_EMPTY_PASSWORD=1 mysql:5.7"
READINESS_CHECK="mysql --user root --execute \q"
elif [[ "$DB" == "postgres" ]]; then
# Environment variables can influence both the postgresql server in the
# docker container and the postgresql client (psql).
#
# ref server: https://hub.docker.com/_/postgres/
# ref client: https://www.postgresql.org/docs/9.5/libpq-envars.html
#
# POSTGRES_USER / POSTGRES_PASSWORD will create a user on startup of the
# postgres server, but PGUSER and PGPASSWORD are the environment variables
# used by the postgresql client psql, so we configure the user based on how
# we want to connect.
#
DOCKER_RUN_ARGS="-p 5432:5432 --env "POSTGRES_USER=${PGUSER}" --env "POSTGRES_PASSWORD=${PGPASSWORD}" postgres:9.5"
READINESS_CHECK="psql --command \q"
else
echo '$DB must be mysql or postgres'
exit 1
fi
docker rm -f "$NAME" 2>/dev/null || true
# Start the database server
docker run --detach --name "$DOCKER_CONTAINER" $DOCKER_RUN_ARGS
case "$DB" in
"mysql")
RUN_ARGS="-e MYSQL_ALLOW_EMPTY_PASSWORD=1 -p $MYSQL_TCP_PORT:3306 mysql:5.7"
CHECK="mysql --host $MYSQL_HOST --port $MYSQL_TCP_PORT --user root -e \q"
;;
"postgres")
RUN_ARGS="-p 5432:5432 postgres:9.5"
CHECK="psql --user postgres -c \q"
;;
*)
echo '$DB must be mysql or postgres'
exit 1
esac
$DOCKER_RUN $RUN_ARGS
# Wait for the database server to start
echo -n "waiting for $DB "
for i in {1..60}; do
if $READINESS_CHECK; then
echo 'done'
break
else
echo -n '.'
sleep 1
fi
if $CHECK; then
echo 'done'
break
else
echo -n '.'
sleep 1
fi
done
$READINESS_CHECK
$CHECK
echo -e "
Set these environment variables:
export MYSQL_HOST=127.0.0.1
export MYSQL_TCP_PORT=$MYSQL_TCP_PORT
export PGHOST=127.0.0.1
"


@@ -1,26 +1,27 @@
#!/usr/bin/env bash
# The goal of this script is to initialize a running database server with clean
# databases for use during tests.
#
# Required environment variables:
# - DB: The database server to start, either "postgres" or "mysql".
# initialize jupyterhub databases for testing
set -eu
set -e
# Prepare env vars SQL_CLIENT and EXTRA_CREATE_DATABASE_ARGS
if [[ "$DB" == "mysql" ]]; then
SQL_CLIENT="mysql --user root --execute "
EXTRA_CREATE_DATABASE_ARGS='CHARACTER SET utf8 COLLATE utf8_general_ci'
elif [[ "$DB" == "postgres" ]]; then
SQL_CLIENT="psql --command "
else
echo '$DB must be mysql or postgres'
exit 1
fi
MYSQL="mysql --user root --host $MYSQL_HOST --port $MYSQL_TCP_PORT -e "
PSQL="psql --user postgres -c "
case "$DB" in
"mysql")
EXTRA_CREATE='CHARACTER SET utf8 COLLATE utf8_general_ci'
SQL="$MYSQL"
;;
"postgres")
SQL="$PSQL"
;;
*)
echo '$DB must be mysql or postgres'
exit 1
esac
# Configure a set of databases in the database server for upgrade tests
set -x
for SUFFIX in '' _upgrade_100 _upgrade_122 _upgrade_130; do
$SQL_CLIENT "DROP DATABASE jupyterhub${SUFFIX};" 2>/dev/null || true
$SQL_CLIENT "CREATE DATABASE jupyterhub${SUFFIX} ${EXTRA_CREATE_DATABASE_ARGS:-};"
for SUFFIX in '' _upgrade_072 _upgrade_081; do
$SQL "DROP DATABASE jupyterhub${SUFFIX};" 2>/dev/null || true
$SQL "CREATE DATABASE jupyterhub${SUFFIX} ${EXTRA_CREATE};"
done


@@ -1,16 +0,0 @@
# Demo JupyterHub Docker image
#
# This should only be used for demo or testing and not as a base image to build on.
#
# It includes the notebook package and it uses the DummyAuthenticator and the SimpleLocalProcessSpawner.
ARG BASE_IMAGE=jupyterhub/jupyterhub-onbuild
FROM ${BASE_IMAGE}
# Install the notebook package
RUN python3 -m pip install notebook
# Create a demo user
RUN useradd --create-home demo
RUN chown demo .
USER demo


@@ -1,26 +0,0 @@
## Demo Dockerfile
This is a demo JupyterHub Docker image to help you get a quick overview of what
JupyterHub is and how it works.
It uses the SimpleLocalProcessSpawner to spawn new user servers and
DummyAuthenticator for authentication.
The DummyAuthenticator allows you to log in with any username & password and the
SimpleLocalProcessSpawner allows starting servers without having to create a
local user for each JupyterHub user.
### Important!
This should only be used for demo or testing purposes!
It shouldn't be used as a base image to build on.
### Try it
1. `cd` to the root of your jupyterhub repo.
2. Build the demo image with `docker build -t jupyterhub-demo demo-image`.
3. Run the demo image with `docker run -d -p 8000:8000 jupyterhub-demo`.
4. Visit http://localhost:8000 and login with any username and password
5. Happy demo-ing :tada:!


@@ -1,7 +0,0 @@
# Configuration file for jupyterhub-demo
c = get_config()
# Use DummyAuthenticator and SimpleSpawner
c.JupyterHub.spawner_class = "simple"
c.JupyterHub.authenticator_class = "dummy"


@@ -1,20 +1,14 @@
-r requirements.txt
mock
beautifulsoup4
codecov
cryptography
pytest-cov
pytest-tornado
pytest>=3.3
notebook
requests-mock
virtualenv
# temporary pin of attrs for jsonschema 0.3.0a1
# seems to be a pip bug
attrs>=17.4.0
beautifulsoup4
codecov
coverage
cryptography
html5lib # needed for beautifulsoup
mock
notebook
pre-commit
pytest>=3.3
pytest-asyncio
pytest-cov
requests-mock
# blacklist urllib3 releases affected by https://github.com/urllib3/urllib3/issues/1683
# I *think* this should only affect testing, not production
urllib3!=1.25.4,!=1.25.5
virtualenv


@@ -1,14 +1,11 @@
FROM alpine:3.13
ENV LANG=en_US.UTF-8
RUN apk add --no-cache \
python3 \
py3-pip \
py3-ruamel.yaml \
py3-cryptography \
py3-sqlalchemy
FROM python:3.6.3-alpine3.6
ARG JUPYTERHUB_VERSION=0.8.1
ARG JUPYTERHUB_VERSION=1.3.0
RUN pip3 install --no-cache jupyterhub==${JUPYTERHUB_VERSION}
ENV LANG=en_US.UTF-8
USER nobody
CMD ["jupyterhub"]


@@ -1,20 +1,21 @@
## What is Dockerfile.alpine
Dockerfile.alpine contains base image for jupyterhub. It does not work independently, but only as part of a full jupyterhub cluster
Dockerfile.alpine contains base image for jupyterhub. It does not work independently, but only as part of a full jupyterhub cluster
## How to use it?
1. A running configurable-http-proxy, whose API is accessible.
1. A running configurable-http-proxy, whose API is accessible.
2. A jupyterhub_config file.
3. Authentication and other libraries required by the specific jupyterhub_config file.
## Steps to test it outside a cluster
- start configurable-http-proxy in another container
- specify CONFIGPROXY_AUTH_TOKEN env in both containers
- put both containers on the same network (e.g. docker network create jupyterhub; docker run ... --net jupyterhub)
- tell jupyterhub where CHP is (e.g. c.ConfigurableHTTPProxy.api_url = 'http://chp:8001')
- tell jupyterhub not to start the proxy itself (c.ConfigurableHTTPProxy.should_start = False)
- Use dummy authenticator for ease of testing. Update following in jupyterhub_config file
- c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
- c.DummyAuthenticator.password = "your strong password"
* start configurable-http-proxy in another container
* specify CONFIGPROXY_AUTH_TOKEN env in both containers
* put both containers on the same network (e.g. docker create network jupyterhub; docker run ... --net jupyterhub)
* tell jupyterhub where CHP is (e.g. c.ConfigurableHTTPProxy.api_url = 'http://chp:8001')
* tell jupyterhub not to start the proxy itself (c.ConfigurableHTTPProxy.should_start = False)
* Use dummy authenticator for ease of testing. Update following in jupyterhub_config file
- c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
- c.DummyAuthenticator.password = "your strong password"
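Collecting the settings from the steps above, a minimal `jupyterhub_config.py` for this kind of out-of-cluster test could look like the following sketch (the container name `chp` and the dummy-authenticator values are taken from the bullets; `CONFIGPROXY_AUTH_TOKEN` is still supplied via the environment):

```python
# Sketch of a jupyterhub_config.py matching the testing steps above.
c = get_config()  # noqa: F821 - injected by JupyterHub when loading the config

# Point JupyterHub at the externally running configurable-http-proxy...
c.ConfigurableHTTPProxy.api_url = 'http://chp:8001'
# ...and tell it not to start a proxy of its own.
c.ConfigurableHTTPProxy.should_start = False

# DummyAuthenticator: any username, one shared password (testing only).
c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
c.DummyAuthenticator.password = "your strong password"
```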


@@ -1,9 +0,0 @@
import os
from jupyterhub._data import DATA_FILES_PATH
print(f"DATA_FILES_PATH={DATA_FILES_PATH}")
for sub_path in ("templates", "static/components", "static/css/style.min.css"):
path = os.path.join(DATA_FILES_PATH, sub_path)
assert os.path.exists(path), path


@@ -48,7 +48,6 @@ help:
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
@echo " spelling to run spell check on documentation"
@echo " metrics to generate documentation for metrics by inspecting the source code"
clean:
rm -rf $(BUILDDIR)/*
@@ -61,17 +60,7 @@ rest-api: source/_static/rest-api/index.html
source/_static/rest-api/index.html: rest-api.yml node_modules
npm run rest-api
metrics: source/reference/metrics.rst
source/reference/metrics.rst: generate-metrics.py
python3 generate-metrics.py
scopes: source/rbac/scope-table.md
source/rbac/scope-table.md: source/rbac/generate-scope-table.py
python3 source/rbac/generate-scope-table.py
html: rest-api metrics scopes
html: rest-api
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

docs/environment.yml (new file, 22 lines)

@@ -0,0 +1,22 @@
# ReadTheDocs uses the `environment.yaml` so make sure to update that as well
# if you change the dependencies of JupyterHub in the various `requirements.txt`
name: jhub_docs
channels:
- conda-forge
dependencies:
- nodejs
- python=3.6
- alembic
- jinja2
- pamela
- requests
- sqlalchemy>=1
- tornado>=5.0
- traitlets>=4.1
- sphinx>=1.7
- pip:
- python-oauth2
- recommonmark==0.4.0
- async_generator
- prometheus_client
- attrs>=17.4.0


@@ -1,57 +0,0 @@
import os
from os.path import join

from pytablewriter import RstSimpleTableWriter
from pytablewriter.style import Style

import jupyterhub.metrics

HERE = os.path.abspath(os.path.dirname(__file__))


class Generator:
    @classmethod
    def create_writer(cls, table_name, headers, values):
        writer = RstSimpleTableWriter()
        writer.table_name = table_name
        writer.headers = headers
        writer.value_matrix = values
        writer.margin = 1
        [writer.set_style(header, Style(align="center")) for header in headers]
        return writer

    def _parse_metrics(self):
        table_rows = []
        for name in dir(jupyterhub.metrics):
            obj = getattr(jupyterhub.metrics, name)
            if obj.__class__.__module__.startswith('prometheus_client.'):
                for metric in obj.describe():
                    table_rows.append([metric.type, metric.name, metric.documentation])
        return table_rows

    def prometheus_metrics(self):
        generated_directory = f"{HERE}/source/reference"
        if not os.path.exists(generated_directory):
            os.makedirs(generated_directory)

        filename = f"{generated_directory}/metrics.rst"
        table_name = ""
        headers = ["Type", "Name", "Description"]
        values = self._parse_metrics()
        writer = self.create_writer(table_name, headers, values)

        title = "List of Prometheus Metrics"
        underline = "============================"
        content = f"{title}\n{underline}\n{writer.dumps()}"
        with open(filename, 'w') as f:
            f.write(content)
        print(f"Generated {filename}.")


def main():
    doc_generator = Generator()
    doc_generator.prometheus_metrics()


if __name__ == "__main__":
    main()

View File

@@ -1,12 +1,5 @@
# ReadTheDocs uses the `environment.yaml` so make sure to update that as well
# if you change this file
-r ../requirements.txt
alabaster_jupyterhub
# Temporary fix of #3021. Revert back to released autodoc-traits when
# 0.1.0 released.
https://github.com/jupyterhub/autodoc-traits/archive/d22282c1c18c6865436e06d8b329c06fe12a07f8.zip
myst-parser
pydata-sphinx-theme
pytablewriter>=0.56
sphinx>=1.7
sphinx-copybutton
sphinx-jsonschema
recommonmark==0.4.0

File diff suppressed because it is too large

View File

@@ -1,4 +1,106 @@
/* Added to avoid logo being too squeezed */
.navbar-brand {
height: 4rem !important;
}
div#helm-chart-schema h2,
div#helm-chart-schema h3,
div#helm-chart-schema h4,
div#helm-chart-schema h5,
div#helm-chart-schema h6 {
font-family: courier new;
}
h3, h3 ~ * {
margin-left: 3% !important;
}
h4, h4 ~ * {
margin-left: 6% !important;
}
h5, h5 ~ * {
margin-left: 9% !important;
}
h6, h6 ~ * {
margin-left: 12% !important;
}
h7, h7 ~ * {
margin-left: 15% !important;
}
img.logo {
width:100%
}
.right-next {
float: right;
max-width: 45%;
overflow: auto;
text-overflow: ellipsis;
white-space: nowrap;
}
.right-next::after{
content: ' »';
}
.left-prev {
float: left;
max-width: 45%;
overflow: auto;
text-overflow: ellipsis;
white-space: nowrap;
}
.left-prev::before{
content: '« ';
}
.prev-next-bottom {
margin-top: 3em;
}
.prev-next-top {
margin-bottom: 1em;
}
/* Sidebar TOC and headers */
div.sphinxsidebarwrapper div {
margin-bottom: .8em;
}
div.sphinxsidebar h3 {
font-size: 1.3em;
padding-top: 0px;
font-weight: 800;
margin-left: 0px !important;
}
div.sphinxsidebar p.caption {
font-size: 1.2em;
margin-bottom: 0px;
margin-left: 0px !important;
font-weight: 900;
color: #767676;
}
div.sphinxsidebar ul {
font-size: .8em;
margin-top: 0px;
padding-left: 3%;
margin-left: 0px !important;
}
div.relations ul {
font-size: 1em;
margin-left: 0px !important;
}
div#searchbox form {
margin-left: 0px !important;
}
/* body elements */
.toctree-wrapper span.caption-text {
color: #767676;
font-style: italic;
font-weight: 300;
}

Binary file not shown (image: 6.7 KiB before, 38 KiB after)

View File

@@ -0,0 +1,16 @@
{# Custom template for navigation.html
alabaster theme does not provide blocks for titles to
be overridden so this custom theme handles title and
toctree for sidebar
#}
<h3>{{ _('Table of Contents') }}</h3>
{{ toctree(includehidden=theme_sidebar_includehidden, collapse=theme_sidebar_collapse) }}
{% if theme_extra_nav_links %}
<hr />
<ul>
{% for text, uri in theme_extra_nav_links.items() %}
<li class="toctree-l1"><a href="{{ uri }}">{{ text }}</a></li>
{% endfor %}
</ul>
{% endif %}

View File

@@ -0,0 +1,30 @@
{% extends '!page.html' %}
{# Custom template for page.html
Alabaster theme does not provide blocks for prev/next at bottom of each page.
This is _in addition_ to the prev/next in the sidebar. The "Prev/Next" text
or symbols are handled by CSS classes in _static/custom.css
#}
{% macro prev_next(prev, next, prev_title='', next_title='') %}
{%- if prev %}
<a class='left-prev' href="{{ prev.link|e }}" title="{{ _('previous chapter')}}">{{ prev_title or prev.title }}</a>
{%- endif %}
{%- if next %}
<a class='right-next' href="{{ next.link|e }}" title="{{ _('next chapter')}}">{{ next_title or next.title }}</a>
{%- endif %}
<div style='clear:both;'></div>
{% endmacro %}
{% block body %}
<div class='prev-next-top'>
{{ prev_next(prev, next, 'Previous', 'Next') }}
</div>
{{super()}}
<div class='prev-next-bottom'>
{{ prev_next(prev, next) }}
</div>
{% endblock %}

View File

@@ -0,0 +1,17 @@
{# Custom template for relations.html
alabaster theme does not provide previous/next page by default
#}
<div class="relations">
<h3>Navigation</h3>
<ul>
<li><a href="{{ pathto(master_doc) }}">Documentation Home</a><ul>
{%- if prev %}
<li><a href="{{ prev.link|e }}" title="Previous">Previous topic</a></li>
{%- endif %}
{%- if next %}
<li><a href="{{ next.link|e }}" title="Next">Next topic</a></li>
{%- endif %}
</ul>
</ul>
</div>

View File

@@ -1,159 +0,0 @@
.. _admin/upgrading:
====================
Upgrading JupyterHub
====================
JupyterHub offers easy upgrade pathways between minor versions. This
document describes how to do these upgrades.
If you are using :ref:`a JupyterHub distribution <index/distributions>`, you
should consult the distribution's documentation on how to upgrade. This
document is if you have set up your own JupyterHub without using a
distribution.
It is long because it is pretty detailed! Most likely, upgrading
JupyterHub will be painless and quick, with minimal user interruption.
Read the Changelog
==================
The `changelog <../changelog.html>`_ contains information on what has
changed with the new JupyterHub release, and any deprecation warnings.
Read these notes to familiarize yourself with the coming changes. There
might be new releases of authenticators & spawners you are using, so
read the changelogs for those too!
Notify your users
=================
If you are using the default configuration where ``configurable-http-proxy``
is managed by JupyterHub, your users will see service disruption during
the upgrade process. You should notify them, and pick a time to do the
upgrade where they will be least disrupted.
If you are using a different proxy, or running ``configurable-http-proxy``
independently of JupyterHub, your users will be able to continue using notebook
servers they had already launched, but will not be able to launch new servers
or sign in.
Backup database & config
========================
Before doing an upgrade, it is critical to back up:
#. Your JupyterHub database (sqlite by default, or MySQL / Postgres
if you used those). If you are using sqlite (the default), you
should back up the ``jupyterhub.sqlite`` file.
#. Your ``jupyterhub_config.py`` file.
#. Your users' home directories. This is unlikely to be affected directly by
a JupyterHub upgrade, but we recommend a backup since user data is very
critical.
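For illustration only (not part of the original docs), a minimal backup sketch for the default sqlite setup, assuming both files live in the directory where you run it:

.. code-block:: python

    # Illustrative sketch: copy the sqlite database and the config file with a
    # timestamp suffix before upgrading. Adjust the paths for your deployment.
    import shutil
    import time

    stamp = time.strftime("%Y%m%d-%H%M%S")
    for path in ("jupyterhub.sqlite", "jupyterhub_config.py"):
        shutil.copy2(path, f"{path}.{stamp}.bak")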
Shutdown JupyterHub
===================
Shut down the JupyterHub process. How to do this varies depending on how you
have set up JupyterHub to run. Most likely, it is using a process
supervisor of some sort (``systemd``, ``supervisord``, or even ``docker``).
Use the supervisor-specific command to stop the JupyterHub process.
Upgrade JupyterHub packages
===========================
There are two environments where the ``jupyterhub`` package is installed:
#. The *hub environment*, which is where the JupyterHub server process
runs. This is started with the ``jupyterhub`` command, and is what
people generally think of as JupyterHub.
#. The *notebook user environments*. This is where the user notebook
servers are launched from, and is probably custom to your own
installation. This could be just one environment (different from the
hub environment) that is shared by all users, one environment
per user, or the same environment as the hub environment. The hub
launches the ``jupyterhub-singleuser`` command in this environment,
which in turn starts the notebook server.
You need to make sure the version of the ``jupyterhub`` package matches
in both these environments. If you installed ``jupyterhub`` with pip,
you can upgrade it with:
.. code-block:: bash
python3 -m pip install --upgrade jupyterhub==<version>
Where ``<version>`` is the version of JupyterHub you are upgrading to.
If you used ``conda`` to install ``jupyterhub``, you should upgrade it
with:
.. code-block:: bash
conda install -c conda-forge jupyterhub==<version>
Where ``<version>`` is the version of JupyterHub you are upgrading to.
You should also check for new releases of the authenticator & spawner you
are using. You might wish to upgrade those packages too along with JupyterHub,
or upgrade them separately.
Upgrade JupyterHub database
===========================
Once new packages are installed, you need to upgrade the JupyterHub
database. From the hub environment, in the same directory as your
``jupyterhub_config.py`` file, you should run:
.. code-block:: bash
jupyterhub upgrade-db
This should find the location of your database, and run necessary upgrades
for it.
SQLite database disadvantages
-----------------------------
SQLite has some disadvantages when it comes to upgrading JupyterHub. These
are:
- ``upgrade-db`` may not work, and you may need to delete your database
and start with a fresh one.
- ``downgrade-db`` **will not** work if you want to roll back to an
earlier version, so back up the ``jupyterhub.sqlite`` file before
upgrading
What happens if I delete my database?
-------------------------------------
Losing the Hub database is often not a big deal. Information that
resides only in the Hub database includes:
- active login tokens (user cookies, service tokens)
- users added via JupyterHub UI, instead of config files
- info about running servers
If the following conditions are true, you should be fine clearing the
Hub database and starting over:
- users are specified in the config file, or log in using an external
authentication provider (Google, GitHub, LDAP, etc.)
- user servers are stopped during the upgrade
- you don't mind causing users to log in again after the upgrade
Start JupyterHub
================
Once the database upgrade is completed, start the ``jupyterhub``
process again.
#. Log in and start the server to make sure things work as
expected.
#. Check the logs for any errors or deprecation warnings. You
might have to update your ``jupyterhub_config.py`` file to
deal with any deprecated options.
Congratulations, your JupyterHub has been upgraded!

View File

@@ -13,3 +13,4 @@ Module: :mod:`jupyterhub.app`
-------------------
.. autoconfigurable:: JupyterHub

View File

@@ -26,7 +26,3 @@ Module: :mod:`jupyterhub.auth`
.. autoconfigurable:: PAMAuthenticator
:class:`DummyAuthenticator`
---------------------------
.. autoconfigurable:: DummyAuthenticator

View File

@@ -1,8 +1,8 @@
.. _api-index:
##############
JupyterHub API
##############
##################
The JupyterHub API
##################
:Release: |release|
:Date: |today|
@@ -18,7 +18,7 @@ information on:
- learning more about JupyterHub's API
The same JupyterHub API spec, as found here, is available in an interactive form
`here (on swagger's petstore) <https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/HEAD/docs/rest-api.yml#!/default>`__.
`here (on swagger's petstore) <http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default>`__.
The `OpenAPI Initiative`_ (fka Swagger™) is a project used to describe
and document RESTful APIs.

View File

@@ -20,3 +20,4 @@ Module: :mod:`jupyterhub.proxy`
.. autoconfigurable:: ConfigurableHTTPProxy
:members: debug, auth_token, check_running_interval, api_url, command

View File

@@ -14,3 +14,4 @@ Module: :mod:`jupyterhub.services.service`
.. autoconfigurable:: Service
:members: name, admin, url, api_token, managed, kind, command, cwd, environment, user, oauth_client_id, server, prefix, proxy_spec

View File

@@ -38,3 +38,4 @@ Module: :mod:`jupyterhub.services.auth`
--------------------------------
.. autoclass:: HubOAuthCallbackHandler

View File

@@ -13,9 +13,10 @@ Module: :mod:`jupyterhub.spawner`
----------------
.. autoconfigurable:: Spawner
:members: options_from_form, poll, start, stop, get_args, get_env, get_state, template_namespace, format_string, create_certs, move_certs
:members: options_from_form, poll, start, stop, get_args, get_env, get_state, template_namespace, format_string
:class:`LocalProcessSpawner`
----------------------------
.. autoconfigurable:: LocalProcessSpawner

View File

@@ -34,3 +34,4 @@ Module: :mod:`jupyterhub.user`
.. attribute:: spawner
The user's :class:`~.Spawner` instance.

File diff suppressed because one or more lines are too long

View File

@@ -1,6 +1,11 @@
# -*- coding: utf-8 -*-
#
import os
import sys
import os
import shlex
# For conversion from markdown to html
import recommonmark.parser
# Set paths
sys.path.insert(0, os.path.abspath('.'))
@@ -16,22 +21,17 @@ extensions = [
'sphinx.ext.intersphinx',
'sphinx.ext.napoleon',
'autodoc_traits',
'sphinx_copybutton',
'sphinx-jsonschema',
'myst_parser',
]
myst_enable_extensions = [
'colon_fence',
'deflist',
]
templates_path = ['_templates']
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'JupyterHub'
copyright = '2016, Project Jupyter team'
author = 'Project Jupyter team'
project = u'JupyterHub'
copyright = u'2016, Project Jupyter team'
author = u'Project Jupyter team'
# Autopopulate version
from os.path import dirname
@@ -39,6 +39,7 @@ from os.path import dirname
docs = dirname(dirname(__file__))
root = dirname(docs)
sys.path.insert(0, root)
sys.path.insert(0, os.path.join(docs, 'sphinxext'))
import jupyterhub
@@ -55,64 +56,9 @@ todo_include_todos = False
# Set the default role so we can use `foo` instead of ``foo``
default_role = 'literal'
# -- Config -------------------------------------------------------------
from jupyterhub.app import JupyterHub
from docutils import nodes
from sphinx.directives.other import SphinxDirective
from contextlib import redirect_stdout
from io import StringIO
# create a temp instance of JupyterHub just to get the output of the generate-config
# and help --all commands.
jupyterhub_app = JupyterHub()
class ConfigDirective(SphinxDirective):
    """Generate the configuration file output for use in the documentation."""

    has_content = False
    required_arguments = 0
    optional_arguments = 0
    final_argument_whitespace = False
    option_spec = {}

    def run(self):
        # The generated configuration file for this version
        generated_config = jupyterhub_app.generate_config_file()
        # post-process output
        home_dir = os.environ['HOME']
        generated_config = generated_config.replace(home_dir, '$HOME', 1)
        par = nodes.literal_block(text=generated_config)
        return [par]


class HelpAllDirective(SphinxDirective):
    """Print the output of jupyterhub help --all for use in the documentation."""

    has_content = False
    required_arguments = 0
    optional_arguments = 0
    final_argument_whitespace = False
    option_spec = {}

    def run(self):
        # The output of the help command for this version
        buffer = StringIO()
        with redirect_stdout(buffer):
            jupyterhub_app.print_help('--help-all')
        all_help = buffer.getvalue()
        # post-process output
        home_dir = os.environ['HOME']
        all_help = all_help.replace(home_dir, '$HOME', 1)
        par = nodes.literal_block(text=all_help)
        return [par]


def setup(app):
    app.add_css_file('custom.css')
    app.add_directive('jupyterhub-generate-config', ConfigDirective)
    app.add_directive('jupyterhub-help-all', HelpAllDirective)
# -- Source -------------------------------------------------------------
source_parsers = {'.md': 'recommonmark.parser.CommonMarkParser'}
source_suffix = ['.rst', '.md']
# source_encoding = 'utf-8-sig'
@@ -120,7 +66,7 @@ source_suffix = ['.rst', '.md']
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages.
html_theme = 'pydata_sphinx_theme'
html_theme = 'alabaster'
html_logo = '_static/images/logo/logo.png'
html_favicon = '_static/images/logo/favicon.ico'
@@ -128,6 +74,31 @@ html_favicon = '_static/images/logo/favicon.ico'
# Paths that contain custom static files (such as style sheets)
html_static_path = ['_static']
html_theme_options = {
'show_related': True,
'description': 'Documentation for JupyterHub',
'github_user': 'jupyterhub',
'github_repo': 'jupyterhub',
'github_banner': False,
'github_button': True,
'github_type': 'star',
'show_powered_by': False,
'extra_nav_links': {
'GitHub Repo': 'http://github.com/jupyterhub/jupyterhub',
'Issue Tracker': 'http://github.com/jupyterhub/jupyterhub/issues',
},
}
html_sidebars = {
'**': [
'about.html',
'searchbox.html',
'navigation.html',
'relations.html',
'sourcelink.html',
]
}
htmlhelp_basename = 'JupyterHubdoc'
# -- Options for LaTeX output ---------------------------------------------
@@ -146,8 +117,8 @@ latex_documents = [
(
master_doc,
'JupyterHub.tex',
'JupyterHub Documentation',
'Project Jupyter team',
u'JupyterHub Documentation',
u'Project Jupyter team',
'manual',
)
]
@@ -164,7 +135,7 @@ latex_documents = [
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [(master_doc, 'jupyterhub', 'JupyterHub Documentation', [author], 1)]
man_pages = [(master_doc, 'jupyterhub', u'JupyterHub Documentation', [author], 1)]
# man_show_urls = False
@@ -178,7 +149,7 @@ texinfo_documents = [
(
master_doc,
'JupyterHub',
'JupyterHub Documentation',
u'JupyterHub Documentation',
author,
'JupyterHub',
'One line description of project.',
@@ -210,12 +181,14 @@ intersphinx_mapping = {'https://docs.python.org/3/': None}
# -- Read The Docs --------------------------------------------------------
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
if on_rtd:
if not on_rtd:
html_theme = 'alabaster'
else:
# readthedocs.org uses their theme by default, so no need to specify it
# build both metrics and rest-api, since RTD doesn't run make
# build rest-api, since RTD doesn't run make
from subprocess import check_call as sh
sh(['make', 'metrics', 'rest-api', 'scopes'], cwd=docs)
sh(['make', 'rest-api'], cwd=docs)
# -- Spell checking -------------------------------------------------------

View File

@@ -1,30 +0,0 @@
.. _contributing/community:
================================
Community communication channels
================================
We use `Discourse <https://discourse.jupyter.org>`_ for online discussion.
Everyone in the Jupyter community is welcome to bring ideas and questions there.
In addition, we use `Gitter <https://gitter.im>`_ for online, real-time text chat,
a place for more ephemeral discussions.
The primary Gitter channel for JupyterHub is `jupyterhub/jupyterhub <https://gitter.im/jupyterhub/jupyterhub>`_.
Gitter isn't archived or searchable, so we recommend going to discourse first
to make sure that discussions are most useful and accessible to the community.
Remember that our community is distributed across the world in various
timezones, so be patient if you do not get an answer immediately!
GitHub issues are used for most long-form project discussions, bug reports
and feature requests. Issues related to a specific authenticator or
spawner should be directed to the appropriate repository for the
authenticator or spawner. If you are using a specific JupyterHub
distribution (such as `Zero to JupyterHub on Kubernetes <http://github.com/jupyterhub/zero-to-jupyterhub-k8s>`_
or `The Littlest JupyterHub <http://github.com/jupyterhub/the-littlest-jupyterhub/>`_),
you should open issues directly in their repository. If you cannot
find a repository to open your issue in, do not worry! Create it in the `main
JupyterHub repository <https://github.com/jupyterhub/jupyterhub/>`_ and our
community will help you figure it out.
A `mailing list <https://groups.google.com/forum/#!forum/jupyter>`_ for all
of Project Jupyter exists, along with one for `teaching with Jupyter
<https://groups.google.com/forum/#!forum/jupyter-education>`_.

View File

@@ -1,78 +0,0 @@
.. _contributing/docs:
==========================
Contributing Documentation
==========================
Documentation is often more important than code. This page helps
you get set up on how to contribute documentation to JupyterHub.
Building documentation locally
==============================
We use `sphinx <http://sphinx-doc.org>`_ to build our documentation. It takes
our documentation source files (written in `markdown
<https://daringfireball.net/projects/markdown/>`_ or `reStructuredText
<https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_ &
stored under the ``docs/source`` directory) and converts it into various
formats for people to read. To make sure the documentation you write or
change renders correctly, it is good practice to test it locally.
#. Make sure you have successfully completed :ref:`contributing/setup`.
#. Install the packages required to build the docs.
.. code-block:: bash
python3 -m pip install -r docs/requirements.txt
#. Build the html version of the docs. This is the most commonly used
output format, so verifying it renders as it should is usually good
enough.
.. code-block:: bash
cd docs
make html
This step will display any syntax or formatting errors in the documentation,
along with the filename / line number in which they occurred. Fix them,
and re-run the ``make html`` command to re-render the documentation.
#. View the rendered documentation by opening ``build/html/index.html`` in
a web browser.
.. tip::
On macOS, you can open a file from the terminal with ``open <path-to-file>``.
On Linux, you can do the same with ``xdg-open <path-to-file>``.
.. _contributing/docs/conventions:
Documentation conventions
=========================
This section lists various conventions we use in our documentation. This is a
living document that grows over time, so feel free to add to it / change it!
Our documentation does not yet fully conform to these conventions,
so help in making it so would be appreciated!
``pip`` invocation
------------------
There are many ways to invoke a ``pip`` command; we recommend the following
approach:
.. code-block:: bash
python3 -m pip
This invokes pip explicitly using the python3 binary that you are
currently using. This is the **recommended way** to invoke pip
in our documentation, since it is least likely to cause problems
with python3 and pip being from different environments.
For more information on how to invoke ``pip`` commands, see
`the pip documentation <https://pip.pypa.io/en/stable/>`_.

View File

@@ -1,21 +0,0 @@
============
Contributing
============
We want you to contribute to JupyterHub in ways that are most exciting
& useful to you. We value documentation, testing, bug reporting & code equally,
and are glad to have your contributions in whatever form you wish :)
Our `Code of Conduct <https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md>`_
(`reporting guidelines <https://github.com/jupyter/governance/blob/HEAD/conduct/reporting_online.md>`_)
helps keep our community welcoming to as many people as possible.
.. toctree::
:maxdepth: 2
community
setup
docs
tests
roadmap
security

View File

@@ -1,95 +0,0 @@
# The JupyterHub roadmap
This roadmap collects "next steps" for JupyterHub. It is about creating a
shared understanding of the project's vision and direction amongst
the community of users, contributors, and maintainers.
The goal is to communicate priorities and upcoming release plans.
It is not aimed at limiting contributions to what is listed here.
## Using the roadmap
### Sharing Feedback on the Roadmap
All of the community is encouraged to provide feedback as well as share new
ideas with the community. Please do so by submitting an issue. If you want to
have an informal conversation first use one of the other communication channels.
After submitting the issue, others from the community will probably
respond with questions or comments they have to clarify the issue. The
maintainers will help identify what a good next step is for the issue.
### What do we mean by "next step"?
When submitting an issue, think about what "next step" category best describes
your issue:
- **now**, concrete/actionable step that is ready for someone to start work on.
These might be items that have a link to an issue or more abstract like
"decrease typos and dead links in the documentation"
- **soon**, less concrete/actionable step that is going to happen soon,
discussions around the topic are coming close to an end at which point it can
move into the "now" category
- **later**, abstract ideas or tasks, need a lot of discussion or
experimentation to shape the idea so that it can be executed. Can also
contain concrete/actionable steps that have been postponed on purpose
(these are steps that could be in "now" but the decision was taken to work on
them later)
### Reviewing and Updating the Roadmap
The roadmap will get updated as time passes (next review by 1st December) based
on discussions and ideas captured as issues.
This means this list is not meant to be exhaustive; it should only represent
the "top of the stack" of ideas. It should
not function as a wish list, collection of feature requests, or todo list.
For those please create a
[new issue](https://github.com/jupyterhub/jupyterhub/issues/new).
The roadmap should give the reader an idea of what is happening next, what needs
input and discussion before it can happen and what has been postponed.
## The roadmap proper
### Project vision
JupyterHub is a dependable tool used by humans that reduces the complexity of
creating the environment in which a piece of software can be executed.
### Now
These "Now" items are considered active areas of focus for the project:
- HubShare - a sharing service for use with JupyterHub.
- Users should be able to:
- Push a project to other users.
- Get a checkout of a project from other users.
- Push updates to a published project.
- Pull updates from a published project.
- Manage conflicts/merges by simply picking a version (our/theirs)
- Get a checkout of a project from the internet. These steps are completely different from saving notebooks/files.
- Have directories that are managed by git completely separately from our stuff.
- Look at pushed content that they have access to without an explicit pull.
- Define and manage teams of users.
- Adding/removing a user to/from a team gives/removes them access to all projects that team has access to.
- Build other services, such as static HTML publishing and dashboarding on top of these things.
### Soon
These "Soon" items are under discussion. Once an item reaches the point of an
actionable plan, the item will be moved to the "Now" section. Typically,
these will be moved at a future review of the roadmap.
- resource monitoring and management:
- (prometheus?) API for resource monitoring
- tracking activity on single-user servers instead of the proxy
- notes and activity tracking per API token
### Later
The "Later" items are things that are at the back of the project's mind. At this
time there is no active plan for an item. The project would like to find the
resources and time to discuss these ideas.
- real-time collaboration
- Enter into real-time collaboration mode for a project that starts a shared execution context.
- Once the single-user notebook package supports realtime collaboration,
implement sharing mechanism integrated into the Hub.

View File

@@ -1,10 +0,0 @@
Reporting security issues in Jupyter or JupyterHub
==================================================
If you find a security vulnerability in Jupyter or JupyterHub,
whether it is a failure of the security model described in :doc:`../reference/websecurity`
or a failure in implementation,
please report it to security@ipython.org.
If you prefer to encrypt your security reports,
you can use :download:`this PGP public key </ipython_security.asc>`.

View File

@@ -1,188 +0,0 @@
.. _contributing/setup:
================================
Setting up a development install
================================
System requirements
===================
JupyterHub can only run on MacOS or Linux operating systems. If you are
using Windows, we recommend using `VirtualBox <https://virtualbox.org>`_
or a similar system to run `Ubuntu Linux <https://ubuntu.com>`_ for
development.
Install Python
--------------
JupyterHub is written in the `Python <https://python.org>`_ programming language, and
requires that you have at least version 3.5 installed locally. If you haven't
installed Python before, the recommended way to install it is to use
`miniconda <https://conda.io/miniconda.html>`_. Remember to get the Python 3 version,
and **not** the Python 2 version!
Install nodejs
--------------
``configurable-http-proxy``, the default proxy implementation for
JupyterHub, is written in Javascript to run on `NodeJS
<https://nodejs.org/en/>`_. If you have not installed nodejs before, we
recommend installing it in the ``miniconda`` environment you set up for
Python. You can do so with ``conda install nodejs``.
Install git
-----------
JupyterHub uses `git <https://git-scm.com>`_ & `GitHub <https://github.com>`_
for development & collaboration. You need to `install git
<https://git-scm.com/book/en/v2/Getting-Started-Installing-Git>`_ to work on
JupyterHub. We also recommend getting a free account on GitHub.com.
Setting up a development install
================================
When developing JupyterHub, you need to make changes to the code & see
their effects quickly. You need to do a developer install to make that
happen.
.. note:: This guide does not attempt to dictate *how* development
environments should be isolated since that is a personal preference and can
be achieved in many ways, for example `tox`, `conda`, `docker`, etc. See this
`forum thread <https://discourse.jupyter.org/t/thoughts-on-using-tox/3497>`_ for
a more detailed discussion.
1. Clone the `JupyterHub git repository <https://github.com/jupyterhub/jupyterhub>`_
to your computer.
.. code:: bash
git clone https://github.com/jupyterhub/jupyterhub
cd jupyterhub
2. Make sure the ``python`` you installed and the ``npm`` you installed
are available to you on the command line.
.. code:: bash
python -V
This should return a version number greater than or equal to 3.5.
.. code:: bash
npm -v
This should return a version number greater than or equal to 5.0.
3. Install ``configurable-http-proxy``. This is required to run
JupyterHub.
.. code:: bash
npm install -g configurable-http-proxy
If you get an error that says ``Error: EACCES: permission denied``,
you might need to prefix the command with ``sudo``. If you do not
have access to sudo, you may instead run the following commands:
.. code:: bash
npm install configurable-http-proxy
export PATH=$PATH:$(pwd)/node_modules/.bin
The second line needs to be run every time you open a new terminal.
4. Install the python packages required for JupyterHub development.
.. code:: bash
python3 -m pip install -r dev-requirements.txt
python3 -m pip install -r requirements.txt
5. Setup a database.
The default database engine is ``sqlite`` so if you are just trying
to get up and running quickly for local development that should be
available via `python <https://docs.python.org/3.5/library/sqlite3.html>`__.
See :doc:`/reference/database` for details on other supported databases.
6. Install the development version of JupyterHub. This lets you edit
JupyterHub code in a text editor & restart the JupyterHub process to
see your code changes immediately.
.. code:: bash
python3 -m pip install --editable .
7. You are now ready to start JupyterHub!
.. code:: bash
jupyterhub
8. You can access JupyterHub from your browser at
``http://localhost:8000`` now.
Happy developing!
Using DummyAuthenticator & SimpleLocalProcessSpawner
====================================================
To simplify testing of JupyterHub, it's helpful to use
:class:`~jupyterhub.auth.DummyAuthenticator` instead of the default JupyterHub
authenticator, and SimpleLocalProcessSpawner instead of the default spawner.
There is a sample configuration file that does this in
``testing/jupyterhub_config.py``. To launch jupyterhub with this
configuration:
.. code:: bash
jupyterhub -f testing/jupyterhub_config.py
The default JupyterHub `authenticator
<https://jupyterhub.readthedocs.io/en/stable/reference/authenticators.html#the-default-pam-authenticator>`_
& `spawner
<https://jupyterhub.readthedocs.io/en/stable/api/spawner.html#localprocessspawner>`_
require your system to have user accounts for each user you want to log in to
JupyterHub as.
DummyAuthenticator allows you to log in with any username & password,
while SimpleLocalProcessSpawner allows you to start servers without having to
create a unix user for each JupyterHub user. Together, these make it
much easier to test JupyterHub.
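For reference, a hypothetical minimal version of such a testing configuration (the exact contents of ``testing/jupyterhub_config.py`` may differ) could look like:

.. code:: python

    # Hypothetical minimal testing config: any username/password is accepted,
    # and servers run as local processes without per-user unix accounts.
    c.JupyterHub.authenticator_class = 'jupyterhub.auth.DummyAuthenticator'
    c.JupyterHub.spawner_class = 'jupyterhub.spawner.SimpleLocalProcessSpawner'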
Tip: If you are working on parts of JupyterHub that are common to all
authenticators & spawners, we recommend using both DummyAuthenticator &
SimpleLocalProcessSpawner. If you are working on just authenticator related
parts, use only SimpleLocalProcessSpawner. Similarly, if you are working on
just spawner related parts, use only DummyAuthenticator.
Troubleshooting
===============
This section lists common ways setting up your development environment may
fail, and how to fix them. Please add to the list if you encounter yet
another way it can fail!
``lessc`` not found
-------------------
If the ``python3 -m pip install --editable .`` command fails and complains about
``lessc`` being unavailable, you may need to explicitly install some
additional JavaScript dependencies:
.. code:: bash
npm install
This will fetch client-side JavaScript dependencies necessary to compile
CSS.
You may also need to manually update JavaScript and CSS after some
development updates, with:
.. code:: bash
python3 setup.py js # fetch updated client-side js
python3 setup.py css # recompile CSS from LESS sources

View File

@@ -1,68 +0,0 @@
.. _contributing/tests:
==================
Testing JupyterHub
==================
Unit tests help validate that JupyterHub works the way we think it does,
and continues to do so when changes occur. They also help communicate
precisely what we expect our code to do.
JupyterHub uses `pytest <https://pytest.org>`_ for all our tests. You
can find them under the ``jupyterhub/tests`` directory in the git repository.
Running the tests
==================
#. Make sure you have completed :ref:`contributing/setup`. You should be able
to start ``jupyterhub`` from the command line & access it from your
web browser. This ensures that the dev environment is properly set
up for tests to run.
#. You can run all tests in JupyterHub
.. code-block:: bash
pytest -v jupyterhub/tests
This should display progress as it runs all the tests, printing
information about any test failures as they occur.
If you wish to confirm test coverage, run the tests with the `--cov` flag:
.. code-block:: bash
pytest -v --cov=jupyterhub jupyterhub/tests
#. You can also run tests in just a specific file:
.. code-block:: bash
pytest -v jupyterhub/tests/<test-file-name>
#. To run a specific test only, you can do:
.. code-block:: bash
pytest -v jupyterhub/tests/<test-file-name>::<test-name>
This runs the test with function name ``<test-name>`` defined in
``<test-file-name>``. This is very useful when you are iteratively
developing a single test.
For example, to run the test ``test_shutdown`` in the file ``test_api.py``,
you would run:
.. code-block:: bash
pytest -v jupyterhub/tests/test_api.py::test_shutdown
Troubleshooting Test Failures
=============================
All the tests are failing
-------------------------
Make sure you have completed all the steps in :ref:`contributing/setup` successfully, and
can launch ``jupyterhub`` from the terminal.

View File

@@ -1,46 +0,0 @@
Eventlogging and Telemetry
==========================
JupyterHub can be configured to record structured events from a running server using Jupyter's `Telemetry System`_. The types of events that JupyterHub emits are defined by `JSON schemas`_ listed at the bottom of this page_.
.. _logging: https://docs.python.org/3/library/logging.html
.. _`Telemetry System`: https://github.com/jupyter/telemetry
.. _`JSON schemas`: https://json-schema.org/
How to emit events
------------------
Event logging is handled by JupyterHub's ``EventLog`` object. This leverages Python's standard logging_ library to emit, filter, and collect event data.
To begin recording events, you'll need to set two configurations:
1. ``handlers``: tells the EventLog *where* to route your events. This trait is a list of Python logging handlers that route events to their destinations.
2. ``allowed_schemas``: tells the EventLog *which* events should be recorded. No events are emitted by default; all recorded events must be listed here.
Here's a basic example:
.. code-block:: python

    import logging

    c.EventLog.handlers = [
        logging.FileHandler('event.log'),
    ]

    c.EventLog.allowed_schemas = [
        'hub.jupyter.org/server-action'
    ]
The output is a file, ``"event.log"``, with events recorded as JSON data.
.. _page:
Event schemas
-------------
.. toctree::
:maxdepth: 2
server-actions.rst

View File

@@ -1 +0,0 @@
.. jsonschema:: ../../../jupyterhub/event-schemas/server-actions/v1.yaml

View File

@@ -8,29 +8,27 @@ high performance computing.
Please submit pull requests to update information or to add new institutions or uses.
## Academic Institutions, Research Labs, and Supercomputer Centers
### University of California Berkeley
- [BIDS - Berkeley Institute for Data Science](https://bids.berkeley.edu/)
- [Teaching with Jupyter notebooks and JupyterHub](https://bids.berkeley.edu/resources/videos/teaching-ipythonjupyter-notebooks-and-jupyterhub)
- [Teaching with Jupyter notebooks and JupyterHub](https://bids.berkeley.edu/resources/videos/teaching-ipythonjupyter-notebooks-and-jupyterhub)
- [Data 8](http://data8.org/)
- [GitHub organization](https://github.com/data-8)
- [GitHub organization](https://github.com/data-8)
- [NERSC](http://www.nersc.gov/)
- [Press release on Jupyter and Cori](http://www.nersc.gov/news-publications/nersc-news/nersc-center-news/2016/jupyter-notebooks-will-open-up-new-possibilities-on-nerscs-cori-supercomputer/)
- [Moving and sharing data](https://www.nersc.gov/assets/Uploads/03-MovingAndSharingData-Cholia.pdf)
- [Press release on Jupyter and Cori](http://www.nersc.gov/news-publications/nersc-news/nersc-center-news/2016/jupyter-notebooks-will-open-up-new-possibilities-on-nerscs-cori-supercomputer/)
- [Moving and sharing data](https://www.nersc.gov/assets/Uploads/03-MovingAndSharingData-Cholia.pdf)
- [Research IT](http://research-it.berkeley.edu)
- [JupyterHub server supports campus research computation](http://research-it.berkeley.edu/blog/17/01/24/free-fully-loaded-jupyterhub-server-supports-campus-research-computation)
- [JupyterHub server supports campus research computation](http://research-it.berkeley.edu/blog/17/01/24/free-fully-loaded-jupyterhub-server-supports-campus-research-computation)
### University of California Davis
- [Spinning up multiple Jupyter Notebooks on AWS for a tutorial](https://github.com/mblmicdiv/course2017/blob/HEAD/exercises/sourmash-setup.md)
- [Spinning up multiple Jupyter Notebooks on AWS for a tutorial](https://github.com/mblmicdiv/course2017/blob/master/exercises/sourmash-setup.md)
Although not technically a JupyterHub deployment, this tutorial setup
may be helpful to others in the Jupyter community.
@@ -61,35 +59,23 @@ easy to do with RStudio too.
- [jupyterhub-deploy-teaching](https://github.com/jupyterhub/jupyterhub-deploy-teaching) based on work by Brian Granger for Cal Poly's Data Science 301 Course
### Chameleon
[Chameleon](https://www.chameleoncloud.org) is a NSF-funded configurable experimental environment for large-scale computer science systems research with [bare metal reconfigurability](https://chameleoncloud.readthedocs.io/en/latest/technical/baremetal.html). Chameleon users utilize JupyterHub to document and reproduce their complex CISE and networking experiments.
- [Shared JupyterHub](https://jupyter.chameleoncloud.org): provides a common "workbench" environment for any Chameleon user.
- [Trovi](https://www.chameleoncloud.org/experiment/share): a sharing portal of experiments, tutorials, and examples, which users can launch as a dedicated isolated environments on Chameleon's JupyterHub.
### Clemson University
- Advanced Computing
- [Palmetto cluster and JupyterHub](http://citi.sites.clemson.edu/2016/08/18/JupyterHub-for-Palmetto-Cluster.html)
- [Palmetto cluster and JupyterHub](http://citi.sites.clemson.edu/2016/08/18/JupyterHub-for-Palmetto-Cluster.html)
### University of Colorado Boulder
- (CU Research Computing) CURC
- [JupyterHub User Guide](https://www.rc.colorado.edu/support/user-guide/jupyterhub.html)
- Slurm job dispatched on Crestone compute cluster
- log troubleshooting
- Profiles in IPython Clusters tab
- [Parallel Processing with JupyterHub tutorial](https://www.rc.colorado.edu/support/examples-and-tutorials/parallel-processing-with-jupyterhub.html)
- [Parallel Programming with JupyterHub document](https://www.rc.colorado.edu/book/export/html/833)
- (CU Research Computing) CURC
- [JupyterHub User Guide](https://www.rc.colorado.edu/support/user-guide/jupyterhub.html)
- Slurm job dispatched on Crestone compute cluster
- log troubleshooting
- Profiles in IPython Clusters tab
- [Parallel Processing with JupyterHub tutorial](https://www.rc.colorado.edu/support/examples-and-tutorials/parallel-processing-with-jupyterhub.html)
- [Parallel Programming with JupyterHub document](https://www.rc.colorado.edu/book/export/html/833)
- Earth Lab at CU
- [Tutorial on Parallel R on JupyterHub](https://earthdatascience.org/tutorials/parallel-r-on-jupyterhub/)
### George Washington University
- [Jupyter Hub](http://go.gwu.edu/jupyter) with university single-sign-on. Deployed early 2017.
- [Tutorial on Parallel R on JupyterHub](https://earthdatascience.org/tutorials/parallel-r-on-jupyterhub/)
### HTCondor
@@ -97,15 +83,10 @@ easy to do with RStudio too.
### University of Illinois
- https://datascience.business.illinois.edu (currently down; checked 04/26/19)
### IllustrisTNG Simulation Project
- [JupyterHub/Lab-based analysis platform, part of the TNG public data release](http://www.tng-project.org/data/)
- https://datascience.business.illinois.edu
### MIT and Lincoln Labs
- https://supercloud.mit.edu/
### Michigan State University
@@ -119,44 +100,31 @@ easy to do with RStudio too.
- https://dsa.missouri.edu/faq/
### Paderborn University
- [Data Science (DICE) group](https://dice.cs.uni-paderborn.de/)
- [nbgraderutils](https://github.com/dice-group/nbgraderutils): Use JupyterHub + nbgrader + iJava kernel for online Java exercises. Used in lecture Statistical Natural Language Processing.
### Penn State University
- [Press release](https://news.psu.edu/story/523093/2018/05/24/new-open-source-web-apps-available-students-and-faculty): "New open-source web apps available for students and faculty" (but Hub is currently down; checked 04/26/19)
### University of Rochester CIRC
### University of Rochester CIRC
- [JupyterHub Userguide](https://info.circ.rochester.edu/Web_Applications/JupyterHub.html) - Slurm, beehive
### University of California San Diego
- San Diego Supercomputer Center - Andrea Zonca
- [Deploy JupyterHub on a Supercomputer with SSH](https://zonca.github.io/2017/05/jupyterhub-hpc-batchspawner-ssh.html)
- [Run Jupyterhub on a Supercomputer](https://zonca.github.io/2015/04/jupyterhub-hpc.html)
- [Deploy JupyterHub on a VM for a Workshop](https://zonca.github.io/2016/04/jupyterhub-sdsc-cloud.html)
- [Customize your Python environment in Jupyterhub](https://zonca.github.io/2017/02/customize-python-environment-jupyterhub.html)
- [Jupyterhub deployment on multiple nodes with Docker Swarm](https://zonca.github.io/2016/05/jupyterhub-docker-swarm.html)
- [Sample deployment of Jupyterhub in HPC on SDSC Comet](https://zonca.github.io/2017/02/sample-deployment-jupyterhub-hpc.html)
- [Deploy JupyterHub on a Supercomputer with SSH](https://zonca.github.io/2017/05/jupyterhub-hpc-batchspawner-ssh.html)
- [Run Jupyterhub on a Supercomputer](https://zonca.github.io/2015/04/jupyterhub-hpc.html)
- [Deploy JupyterHub on a VM for a Workshop](https://zonca.github.io/2016/04/jupyterhub-sdsc-cloud.html)
- [Customize your Python environment in Jupyterhub](https://zonca.github.io/2017/02/customize-python-environment-jupyterhub.html)
- [Jupyterhub deployment on multiple nodes with Docker Swarm](https://zonca.github.io/2016/05/jupyterhub-docker-swarm.html)
- [Sample deployment of Jupyterhub in HPC on SDSC Comet](https://zonca.github.io/2017/02/sample-deployment-jupyterhub-hpc.html)
- Educational Technology Services - Paul Jamason
- [jupyterhub.ucsd.edu](https://jupyterhub.ucsd.edu)
- [jupyterhub.ucsd.edu](https://jupyterhub.ucsd.edu)
### TACC University of Texas
### Texas A&M
- Kristen Thyng - Oceanography
- [Teaching with JupyterHub and nbgrader](http://kristenthyng.com/blog/2016/09/07/jupyterhub+nbgrader/)
- [Teaching with JupyterHub and nbgrader](http://kristenthyng.com/blog/2016/09/07/jupyterhub+nbgrader/)
### Elucidata
- What's new in Jupyter Notebooks @[Elucidata](https://elucidata.io/):
- Using Jupyter Notebooks with Jupyterhub on GCP, managed by GKE - https://medium.com/elucidata/why-you-should-be-using-a-jupyter-notebook-8385a4ccd93d
## Service Providers
@@ -173,6 +141,7 @@ easy to do with RStudio too.
[Everware](https://github.com/everware) Reproducible and reusable science powered by jupyterhub and docker. Like nbviewer, but executable. CERN, Geneva [website](http://everware.xyz/)
### Microsoft Azure
- https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-data-science-linux-dsvm-intro
@@ -182,10 +151,9 @@ easy to do with RStudio too.
- https://getcarina.com/blog/learning-how-to-whale/
- http://carolynvanslyck.com/talk/carina/jupyterhub/#/
### Hadoop
- [Deploying JupyterHub on Hadoop](https://jupyterhub-on-hadoop.readthedocs.io)
### jcloud.io
- Open to public JupyterHub server
- https://jcloud.io
## Miscellaneous
- https://medium.com/@ybarraud/setting-up-jupyterhub-with-sudospawner-and-anaconda-844628c0dbee#.rm3yt87e1

View File

@@ -4,55 +4,39 @@ The default Authenticator uses [PAM][] to authenticate system users with
their username and password. With the default Authenticator, any user
with an account and password on the system will be allowed to login.
## Create a set of allowed users
## Create a whitelist of users
You can restrict which users are allowed to login with a whitelist,
`Authenticator.whitelist`:
You can restrict which users are allowed to login with a set,
`Authenticator.allowed_users`:
```python
c.Authenticator.allowed_users = {'mal', 'zoe', 'inara', 'kaylee'}
c.Authenticator.whitelist = {'mal', 'zoe', 'inara', 'kaylee'}
```
Users in the `allowed_users` set are added to the Hub database when the Hub is
Users in the whitelist are added to the Hub database when the Hub is
started.
## Configure admins (`admin_users`)
```{note}
As of JupyterHub 2.0, the full permissions of `admin_users`
should not be required.
Instead, you can assign [roles][] to users or groups
with only the scopes they require.
```
Admin users of JupyterHub, `admin_users`, can add and remove users from
the user `allowed_users` set. `admin_users` can take actions on other users'
the user `whitelist`. `admin_users` can take actions on other users'
behalf, such as stopping and restarting their servers.
A set of initial admin users, `admin_users` can be configured as follows:
A set of initial admin users, `admin_users` can configured be as follows:
```python
c.Authenticator.admin_users = {'mal', 'zoe'}
```
Users in the admin set are automatically added to the user `allowed_users` set,
Users in the admin list are automatically added to the user `whitelist`,
if they are not already present.
Each authenticator may have different ways of determining whether a user is an
administrator. By default JupyterHub uses the PAMAuthenticator which provides the
`admin_groups` option and can set administrator status based on a user
group. For example we can let any user in the `wheel` group be admin:
```python
c.PAMAuthenticator.admin_groups = {'wheel'}
```
## Give admin access to other users' notebook servers (`admin_access`)
Since the default `JupyterHub.admin_access` setting is `False`, the admins
Since the default `JupyterHub.admin_access` setting is False, the admins
do not have permission to log in to the single user notebook servers
owned by _other users_. If `JupyterHub.admin_access` is set to `True`,
then admins have permission to log in _as other users_ on their
owned by *other users*. If `JupyterHub.admin_access` is set to True,
then admins have permission to log in *as other users* on their
respective machines, for debugging. **As a courtesy, you should make
sure your users know if admin_access is enabled.**
@@ -60,12 +44,12 @@ sure your users know if admin_access is enabled.**
Users can be added to and removed from the Hub via either the admin
panel or the REST API. When a user is **added**, the user will be
automatically added to the `allowed_users` set and database. Restarting the Hub
will not require manually updating the `allowed_users` set in your config file,
automatically added to the whitelist and database. Restarting the Hub
will not require manually updating the whitelist in your config file,
as the users will be loaded from the database.
After starting the Hub once, it is not sufficient to **remove** a user
from the allowed users set in your config file. You must also remove the user
from the whitelist in your config file. You must also remove the user
from the Hub's database, either by deleting the user from JupyterHub's
admin page, or you can clear the `jupyterhub.sqlite` database and start
fresh.
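For illustration only (not part of this changeset), a minimal sketch of adding and removing a user through the REST API, assuming the Hub is reachable at `http://127.0.0.1:8000` and `ADMIN_TOKEN` holds an admin API token:

```python
import requests

HUB_API = "http://127.0.0.1:8000/hub/api"  # assumed Hub address
ADMIN_TOKEN = "..."                        # assumed admin API token
headers = {"Authorization": f"token {ADMIN_TOKEN}"}

# Add a user: they are added to the Hub database (and allowed set)
requests.post(f"{HUB_API}/users/newuser", headers=headers).raise_for_status()

# Remove the same user from the Hub database again
requests.delete(f"{HUB_API}/users/newuser", headers=headers).raise_for_status()
```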
@@ -98,7 +82,6 @@ JupyterHub's [OAuthenticator][] currently supports the following
popular services:
- Auth0
- Azure AD
- Bitbucket
- CILogon
- GitHub
@@ -112,16 +95,5 @@ popular services:
A generic implementation, which you can use for OAuth authentication
with any provider, is also available.
## Use DummyAuthenticator for testing
The `DummyAuthenticator` is a simple authenticator that
allows for any username/password unless a global password has been set. If
set, it will allow for any username as long as the correct password is provided.
To set a global password, add this to the config file:
```python
c.DummyAuthenticator.password = "some_password"
```
[pam]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
[oauthenticator]: https://github.com/jupyterhub/oauthenticator
[PAM]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
[OAuthenticator]: https://github.com/jupyterhub/oauthenticator

View File

@@ -1,7 +1,7 @@
# Configuration Basics
This section contains basic information about configuring settings for a JupyterHub
deployment. The [Technical Reference](../reference/index)
deployment. The [Technical Reference](../reference/index.html)
documentation provides additional details.
This section will help you learn how to:
@@ -56,18 +56,18 @@ To display all command line options that are available for configuration:
```
Configuration using the command line options is done when launching JupyterHub.
For example, to start JupyterHub on `10.0.1.2:443` with https, you
For example, to start JupyterHub on ``10.0.1.2:443`` with https, you
would enter:
```bash
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
```
```
All configurable options may technically be set on the command line,
All configurable options may technically be set on the command-line,
though some are inconvenient to type. To set a particular configuration
parameter, `c.Class.trait`, you would use the command line option,
`--Class.trait`, when starting JupyterHub. For example, to configure the
`c.Spawner.notebook_dir` trait from the command line, use the
`c.Spawner.notebook_dir` trait from the command-line, use the
`--Spawner.notebook_dir` option:
```bash
@@ -77,24 +77,11 @@ jupyterhub --Spawner.notebook_dir='~/assignments'
## Configure for various deployment environments
The default authentication and process spawning mechanisms can be replaced, and
specific [authenticators](./authenticators-users-basics) and
[spawners](./spawners-basics) can be set in the configuration file.
specific [authenticators](./authenticators-users-basics.html) and
[spawners](./spawners-basics.html) can be set in the configuration file.
This enables JupyterHub to be used with a variety of authentication methods or
process control and deployment environments. [Some examples](../reference/config-examples),
process control and deployment environments. [Some examples](../reference/config-examples.html),
meant as illustration, are:
- Using GitHub OAuth instead of PAM with [OAuthenticator](https://github.com/jupyterhub/oauthenticator)
- Spawning single-user servers with Docker, using the [DockerSpawner](https://github.com/jupyterhub/dockerspawner)
## Run the proxy separately
This is _not_ strictly necessary, but useful in many cases. If you
use a custom proxy (e.g. Traefik), this is also not needed.
Connections to user servers go through the proxy, and _not_ the hub
itself. If the proxy stays running when the hub restarts (for
maintenance, re-configuration, etc.), then user connections are not
interrupted. For simplicity, by default the hub starts the proxy
automatically, so if the hub restarts, the proxy restarts, and user
connections are interrupted. It is easy to run the proxy separately,
for information see [the separate proxy page](../reference/separate-proxy).

View File

@@ -1,35 +0,0 @@
# Frequently asked questions
## How do I share links to notebooks?
In short, where you see `/user/name/notebooks/foo.ipynb` use `/hub/user-redirect/notebooks/foo.ipynb` (replace `/user/name` with `/hub/user-redirect`).
Sharing links to notebooks is a common activity,
and can look different based on what you mean.
Your first instinct might be to copy the URL you see in the browser,
e.g. `hub.jupyter.org/user/yourname/notebooks/coolthing.ipynb`.
However, let's break down what this URL means:
`hub.jupyter.org/user/yourname/` is the URL prefix handled by _your server_,
which means that sharing this URL is asking the person you share the link with
to come to _your server_ and look at the exact same file.
In most circumstances, this is forbidden by permissions because the person you share with does not have access to your server.
What actually happens when someone visits this URL will depend on whether your server is running and other factors.
But what is our actual goal?
A typical situation is that you have some shared or common filesystem,
such that the same path corresponds to the same document
(either the exact same document or another copy of it).
Typically, what folks want when they do sharing like this
is for each visitor to open the same file _on their own server_,
so Breq would open `/user/breq/notebooks/foo.ipynb` and
Seivarden would open `/user/seivarden/notebooks/foo.ipynb`, etc.
JupyterHub has a special URL that does exactly this!
It's called `/hub/user-redirect/...`.
So if you replace `/user/yourname` in your URL bar
with `/hub/user-redirect` any visitor should get the same
URL on their own server, rather than visiting yours.
In JupyterLab 2.0, this should also be the result of the "Copy Shareable Link"
action in the file browser.
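As a tiny illustration (with a hypothetical hub URL), the substitution amounts to:

```python
# Hypothetical example: turn a personal notebook URL into a shareable one
url = "https://hub.jupyter.org/user/yourname/notebooks/coolthing.ipynb"
shareable = url.replace("/user/yourname", "/hub/user-redirect", 1)
# shareable == "https://hub.jupyter.org/hub/user-redirect/notebooks/coolthing.ipynb"
```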

View File

@@ -1,10 +1,5 @@
Get Started
===========
This section covers how to configure and customize JupyterHub for your
needs. It contains information about authentication, networking, security, and
other topics that are relevant to individuals or organizations deploying their
own JupyterHub.
Getting Started
===============
.. toctree::
:maxdepth: 2
@@ -15,5 +10,3 @@ own JupyterHub.
authenticators-users-basics
spawners-basics
services-basics
faq
institutional-faq

View File

@@ -1,260 +0,0 @@
# Institutional FAQ
This page contains common questions from users of JupyterHub,
broken down by their roles within organizations.
## For all
### Is it appropriate for adoption within a larger institutional context?
Yes! JupyterHub has been used at scale for large pools of users, as well
as for complex and high-performance computing. For example, UC Berkeley uses
JupyterHub for its Data Science Education Program courses (serving over
3,000 students). The Pangeo project uses JupyterHub to provide access
to scalable cloud computing with Dask. JupyterHub is stable and customizable
to the use-cases of large organizations.
### I keep hearing about Jupyter Notebook, JupyterLab, and now JupyterHub. What's the difference?
Here is a quick breakdown of these three tools:
- **The Jupyter Notebook** is a document specification (the `.ipynb`) file that interweaves
narrative text with code cells and their outputs. It is also a graphical interface
that allows users to edit these documents. There are also several other graphical interfaces
that allow users to edit the `.ipynb` format (nteract, Jupyter Lab, Google Colab, Kaggle, etc).
- **JupyterLab** is a flexible and extendible user interface for interactive computing. It
has several extensions that are tailored for using Jupyter Notebooks, as well as extensions
for other parts of the data science stack.
- **JupyterHub** is an application that manages interactive computing sessions for **multiple users**.
It also connects them with infrastructure those users wish to access. It can provide
remote access to Jupyter Notebooks and JupyterLab for many people.
## For management
### Briefly, what problem does JupyterHub solve for us?
JupyterHub provides a shared platform for data science and collaboration.
It allows users to utilize familiar data science workflows (such as the scientific Python stack,
the R tidyverse, and Jupyter Notebooks) on institutional infrastructure. It also allows administrators
some control over access to resources, security, environments, and authentication.
### Is JupyterHub mature? Why should we trust it?
Yes - the core JupyterHub application recently
reached 1.0 status, and is considered stable and performant for most institutions.
JupyterHub has also been deployed (along with other tools) to work on
scalable infrastructure, large datasets, and high-performance computing.
### Who else uses JupyterHub?
JupyterHub is used at a variety of institutions in academia,
industry, and government research labs. It is most commonly used by two kinds of groups:
- Small teams (e.g., data science teams, research labs, or collaborative projects) to provide a
shared resource for interactive computing, collaboration, and analytics.
- Large teams (e.g., a department, a large class, or a large group of remote users) to provide
access to organizational hardware, data, and analytics environments at scale.
Here is a sample of organizations that use JupyterHub:
- **Universities and colleges**: UC Berkeley, UC San Diego, Cal Poly SLO, Harvard University, University of Chicago,
University of Oslo, University of Sheffield, Université Paris Sud, University of Versailles
- **Research laboratories**: NASA, NCAR, NOAA, the Large Synoptic Survey Telescope, Brookhaven National Lab,
Minnesota Supercomputing Institute, ALCF, CERN, Lawrence Livermore National Laboratory
- **Online communities**: Pangeo, Quantopian, mybinder.org, MathHub, Open Humans
- **Computing infrastructure providers**: NERSC, San Diego Supercomputing Center, Compute Canada
- **Companies**: Capital One, SANDVIK code, Globus
See the [Gallery of JupyterHub deployments](../gallery-jhub-deployments.md) for
a more complete list of JupyterHub deployments at institutions.
### How does JupyterHub compare with hosted products, like Google Colaboratory, RStudio.cloud, or Anaconda Enterprise?
JupyterHub puts you in control of your data, infrastructure, and coding environment.
In addition, it is vendor neutral, which reduces lock-in to a particular vendor or service.
JupyterHub provides access to interactive computing environments in the cloud (similar to each of these services).
Compared with the tools above, it is more flexible, more customizable, free, and
gives administrators more control over their setup and hardware.
Because JupyterHub is an open-source, community-driven tool, it can be extended and
modified to fit an institution's needs. It plays nicely with the open source data science
stack, and can serve a variety of computing environments, user interfaces, and
computational hardware. It can also be deployed anywhere - on enterprise cloud infrastructure, on
High-Performance-Computing machines, on local hardware, or even on a single laptop, which
is not possible with most other tools for shared interactive computing.
## For IT
### How would I set up JupyterHub on institutional hardware?
That depends on what kind of hardware you've got. JupyterHub is flexible enough to be deployed
on a variety of hardware, including in-room hardware, on-prem clusters, cloud infrastructure,
etc.
The most common way to set up a JupyterHub is to use a JupyterHub distribution; these are pre-configured
and opinionated ways to set up a JupyterHub on particular kinds of infrastructure. The two distributions
that we currently suggest are:
- [Zero to JupyterHub for Kubernetes](https://z2jh.jupyter.org) is a scalable JupyterHub deployment and
guide that runs on Kubernetes. Better for larger or dynamic user groups (50-10,000) or more complex
compute/data needs.
- [The Littlest JupyterHub](https://tljh.jupyter.org) is a lightweight JupyterHub that runs on a single
machine (in the cloud or under your desk). Better for smaller user groups (4-80) or more
lightweight computational resources.
### Does JupyterHub run well in the cloud?
Yes - most deployments of JupyterHub are run via cloud infrastructure and on a variety of cloud providers.
Depending on the distribution of JupyterHub that you'd like to use, you can also connect your JupyterHub
deployment with a number of other cloud-native services so that users have access to other resources from
their interactive computing sessions.
For example, if you use the [Zero to JupyterHub for Kubernetes](https://z2jh.jupyter.org) distribution,
you'll be able to utilize container-based workflows of other technologies such as the [dask-kubernetes](https://kubernetes.dask.org/en/latest/)
project for distributed computing.
The Z2JH Helm Chart also has some functionality built in for auto-scaling your cluster up and down
as more resources are needed - allowing you to utilize the benefits of a flexible cloud-based deployment.
### Is JupyterHub secure?
The short answer: yes. JupyterHub as a standalone application has been battle-tested at an institutional
level for several years, and makes a number of "default" security decisions that are reasonable for most
users.
- For security considerations in the base JupyterHub application,
[see the JupyterHub security page](https://jupyterhub.readthedocs.io/en/stable/reference/websecurity.html).
- For security considerations when deploying JupyterHub on Kubernetes, see the
[JupyterHub on Kubernetes security page](https://zero-to-jupyterhub.readthedocs.io/en/latest/security.html).
The longer answer: it depends on your deployment. Because JupyterHub is very flexible, it can be used
in a variety of deployment setups. This often entails connecting your JupyterHub to **other** infrastructure
(such as a [Dask Gateway service](https://gateway.dask.org/)). There are many security decisions to be made
in these cases, and the security of your JupyterHub deployment will often depend on these decisions.
If you are worried about security, don't hesitate to reach out to the JupyterHub community in the
[Jupyter Community Forum](https://discourse.jupyter.org/c/jupyterhub). This community of practice has many
individuals with experience running secure JupyterHub deployments.
### Does JupyterHub provide computing or data infrastructure?
No - JupyterHub manages user sessions and can _control_ computing infrastructure, but it does not provide these
things itself. You are expected to run JupyterHub on your own infrastructure (local or in the cloud). Moreover,
JupyterHub has no internal concept of "data", but is designed to be able to communicate with data repositories
(again, either locally or remotely) for use within interactive computing sessions.
### How do I manage users?
JupyterHub offers a few options for managing your users. Upon setting up a JupyterHub, you can choose what
kind of **authentication** you'd like to use. For example, you can have users sign up with an institutional
email address, or choose a username / password when they first log in, or offload authentication onto
another service such as an organization's OAuth.
The users of a JupyterHub are stored locally, and can be modified manually by an administrator of the JupyterHub.
Moreover, the _active_ users on a JupyterHub can be found on the administrator's page. This page
gives you the ability to stop or restart kernels, inspect user filesystems, and even take over user
sessions to assist them with debugging.
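
As a rough illustration (a minimal sketch; the exact authenticator class and options depend on your deployment), user management is typically expressed in `jupyterhub_config.py`:

```python
# jupyterhub_config.py -- hypothetical sketch; adapt to your authenticator
c.JupyterHub.authenticator_class = 'jupyterhub.auth.PAMAuthenticator'  # default: local system accounts via PAM
c.Authenticator.admin_users = {'ada'}             # users granted admin rights
c.Authenticator.allowed_users = {'ada', 'grace'}  # who may log in (JupyterHub >= 1.2)
```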
### How do I manage software environments?
A key benefit of JupyterHub is the ability for an administrator to define the environment(s) that users
have access to. There are many ways to do this, depending on what kind of infrastructure you're using for
your JupyterHub.
For example, **The Littlest JupyterHub** runs on a single VM. In this case, the administrator defines
an environment by installing packages to a shared folder that exists on the path of all users. The
**JupyterHub for Kubernetes** deployment uses Docker images to define environments. You can create your
own list of Docker images that users can select from, and can also control things like the amount of
RAM available to users, or the types of machines that their sessions will use in the cloud.
### How does JupyterHub manage computational resources?
For interactive computing sessions, JupyterHub controls computational resources via a **spawner**.
Spawners define how a new user session is created, and are customized for particular kinds of
infrastructure. For example, the KubeSpawner knows how to control a Kubernetes deployment
to create new pods when users log in.
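
As an illustration (a minimal sketch; the right spawner class depends on your infrastructure), switching spawners is a single configuration line:

```python
# jupyterhub_config.py -- hypothetical sketch
# Use KubeSpawner so each user session is launched as a Kubernetes pod
c.JupyterHub.spawner_class = 'kubespawner.KubeSpawner'
```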
For more sophisticated computational resources (like distributed computing), JupyterHub can
connect with other infrastructure tools (like Dask or Spark). This allows users to control
scalable or high-performance resources from within their JupyterHub sessions. The logic of
how those resources are controlled is taken care of by the non-JupyterHub application.
### Can JupyterHub be used with my high-performance computing resources?
Yes - JupyterHub can provide access to many kinds of computing infrastructure.
Especially when combined with other open-source schedulers such as Dask, you can manage fairly
complex computing infrastructures from the interactive sessions of a JupyterHub. For example
[see the Dask HPC page](https://docs.dask.org/en/latest/setup/hpc.html).
### How many resources do user sessions take?
This is highly configurable by the administrator. If you wish for your users to have simple
data analytics environments for prototyping and light data exploring, you can restrict their
memory and CPU based on the resources that you have available. If you'd like your JupyterHub
to serve as a gateway to high-performance compute or data resources, you may increase the
resources available on user machines, or connect them with computing infrastructures elsewhere.
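
As a rough sketch (illustrative values; whether the limits are enforced depends on the spawner in use), per-user caps can be set on the Spawner:

```python
# jupyterhub_config.py -- illustrative values only; enforcement depends on the spawner
c.Spawner.mem_limit = '2G'   # memory cap per single-user server
c.Spawner.cpu_limit = 1.0    # CPU cap per single-user server
```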
### Can I customize the look and feel of a JupyterHub?
JupyterHub provides some customization of the graphics displayed to users. The most common
modification is to add custom branding to the JupyterHub login page, loading pages, and
various elements that persist across all pages (such as headers).
## For Technical Leads
### Will JupyterHub “just work” with our team's interactive computing setup?
Depending on the complexity of your setup, you'll have different experiences with "out of the box"
distributions of JupyterHub. If all of the resources you need will fit on a single VM, then
[The Littlest JupyterHub](https://tljh.jupyter.org) should get you up-and-running within
a half day or so. For more complex setups, such as scalable Kubernetes clusters or access
to high-performance computing and data, it will require more time and expertise with
the technologies your JupyterHub will use (e.g., dev-ops knowledge with cloud computing).
In general, the base JupyterHub deployment is not the bottleneck for setup; the real effort is connecting
your JupyterHub with the various services and tools that you wish to provide to your users.
### How well does JupyterHub scale? What are JupyterHub's limitations?
JupyterHub works well at both a small scale (e.g., a single VM or machine) as well as a
high scale (e.g., a scalable Kubernetes cluster). It can be used for teams as small as 2, and
for user bases as large as 10,000. The scalability of JupyterHub largely depends on the
infrastructure on which it is deployed. JupyterHub has been designed to be lightweight and
flexible, so you can tailor your JupyterHub deployment to your needs.
### Is JupyterHub resilient? What happens when a machine goes down?
For JupyterHubs that are deployed in a containerized environment (e.g., Kubernetes), it is
possible to configure the JupyterHub to be fairly resistant to failures in the system.
For example, if JupyterHub fails, then user sessions will not be affected (though new
users will not be able to log in). When a JupyterHub process is restarted, it should
seamlessly connect with the user database and the system will return to normal.
Again, the details of your JupyterHub deployment (e.g., whether it's deployed on a scalable cluster)
will affect the resiliency of the deployment.
### What interfaces does JupyterHub support?
Out of the box, JupyterHub supports a variety of popular data science interfaces for user sessions,
such as JupyterLab, Jupyter Notebooks, and RStudio. Any interface that can be served
via a web address can be served with a JupyterHub (with the right setup).
### Does JupyterHub make it easier for our team to collaborate?
JupyterHub provides a standardized environment and access to shared resources for your teams.
This greatly reduces the cost associated with sharing analyses and content with other team
members, and makes it easier to collaborate and build off of one another's ideas. Combined with
access to high-performance computing and data, JupyterHub provides a common resource to
amplify your team's ability to prototype their analyses, scale them to larger data, and then
share their results with one another.
JupyterHub also provides a computational framework to share computational narratives between
different levels of an organization. For example, data scientists can share Jupyter Notebooks
rendered as [Voilà dashboards](https://voila.readthedocs.io/en/stable/) with those who are not
familiar with programming, or create publicly-available interactive analyses to allow others to
interact with your work.
### Can I use JupyterHub with R/RStudio or other languages and environments?
Yes, Jupyter is a polyglot project, and there are over 40 community-provided kernels for a variety
of languages (the most common being Python, Julia, and R). You can also use a JupyterHub to provide
access to other interfaces, such as RStudio, that provide their own access to a language kernel.

View File

@@ -11,7 +11,7 @@ This section will help you with basic proxy and network configuration to:
The Proxy's main IP address setting determines where JupyterHub is available to users.
By default, JupyterHub is configured to be available on all network interfaces
(`''`) on port 8000. _Note_: Use of `'*'` is discouraged for IP configuration;
(`''`) on port 8000. *Note*: Use of `'*'` is discouraged for IP configuration;
instead, use of `'0.0.0.0'` is preferred.
Changing the Proxy's main IP address and port can be done with the following
@@ -41,9 +41,9 @@ port.
## Set the Proxy's REST API communication URL (optional)
By default, this REST API listens on port 8001 of `localhost` only.
By default, this REST API listens on port 8081 of `localhost` only.
The Hub service talks to the proxy via a REST API on a secondary port. The
API URL can be configured separately to override the default settings.
API URL can be configured separately and override the default settings.
### Set api_url
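As a hedged sketch (assuming the default `ConfigurableHTTPProxy` implementation; the port shown is only an example), the proxy API URL can be overridden like this:

```python
# sketch only -- assuming the default ConfigurableHTTPProxy implementation
c.ConfigurableHTTPProxy.api_url = 'http://127.0.0.1:5432'  # illustrative port
```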
@@ -74,7 +74,7 @@ The Hub service listens only on `localhost` (port 8081) by default.
The Hub needs to be accessible from both the proxy and all Spawners.
When spawning local servers, an IP address setting of `localhost` is fine.
If _either_ the Proxy _or_ (more likely) the Spawners will be remote or
If *either* the Proxy *or* (more likely) the Spawners will be remote or
isolated in containers, the Hub must listen on an IP that is accessible.
```python
@@ -82,20 +82,20 @@ c.JupyterHub.hub_ip = '10.0.1.4'
c.JupyterHub.hub_port = 54321
```
**Added in 0.8:** The `c.JupyterHub.hub_connect_ip` setting is the IP address or
**Added in 0.8:** The `c.JupyterHub.hub_connect_ip` setting is the ip address or
hostname that other services should use to connect to the Hub. A common
configuration for, e.g. docker, is:
```python
c.JupyterHub.hub_ip = '0.0.0.0' # listen on all interfaces
c.JupyterHub.hub_connect_ip = '10.0.1.4' # IP as seen on the docker network. Can also be a hostname.
c.JupyterHub.hub_connect_ip = '10.0.1.4' # ip as seen on the docker network. Can also be a hostname.
```
## Adjusting the hub's URL
The hub will most commonly be running on a hostname of its own. If it
The hub will most commonly be running on a hostname of its own. If it
is not (for example, if the hub is being reverse-proxied and being
exposed at a URL such as `https://proxy.example.org/jupyter/`), then
you will need to tell JupyterHub the base URL of the service. In such
you will need to tell JupyterHub the base URL of the service. In such
a case, it is both necessary and sufficient to set
`c.JupyterHub.base_url = '/jupyter/'` in the configuration.

View File

@@ -80,49 +80,6 @@ To achieve this, simply omit the configuration settings
``c.JupyterHub.ssl_key`` and ``c.JupyterHub.ssl_cert``
(setting them to ``None`` does not have the same effect, and is an error).
.. _authentication-token:
Proxy authentication token
--------------------------
The Hub authenticates its requests to the Proxy using a secret token that
the Hub and Proxy agree upon. Note that this applies to the default
``ConfigurableHTTPProxy`` implementation. Not all proxy implementations
use an auth token.
The value of this token should be a random string (for example, generated by
``openssl rand -hex 32``). You can store it in the configuration file or an
environment variable
Generating and storing token in the configuration file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can set the value in the configuration file, ``jupyterhub_config.py``:
.. code-block:: python
c.ConfigurableHTTPProxy.api_token = 'abc123...' # any random string
Generating and storing as an environment variable
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can pass this value of the proxy authentication token to the Hub and Proxy
using the ``CONFIGPROXY_AUTH_TOKEN`` environment variable:
.. code-block:: bash
export CONFIGPROXY_AUTH_TOKEN=$(openssl rand -hex 32)
This environment variable needs to be visible to the Hub and Proxy.
Default if token is not set
~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you don't set the Proxy authentication token, the Hub will generate a random
key itself, which means that any time you restart the Hub you **must also
restart the Proxy**. If the proxy is a subprocess of the Hub, this should happen
automatically (this is the default configuration).
.. _cookie-secret:
Cookie secret
@@ -167,7 +124,7 @@ hex-encoded string. You can set it this way:
.. code-block:: bash
export JPY_COOKIE_SECRET=$(openssl rand -hex 32)
export JPY_COOKIE_SECRET=`openssl rand -hex 32`
For security reasons, this environment variable should only be visible to the
Hub. If you set it dynamically as above, all users will be logged out each time
@@ -189,73 +146,41 @@ itself, ``jupyterhub_config.py``, as a binary string:
If the cookie secret value changes for the Hub, all single-user notebook
servers must also be restarted.
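
A minimal sketch of the configuration-file variant (the value below is a placeholder, not a real secret):

.. code-block:: python

    # 64 hex characters, e.g. generated with ``openssl rand -hex 32``
    c.JupyterHub.cookie_secret = bytes.fromhex('0123456789abcdef' * 4)  # placeholder only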
.. _cookies:
Cookies used by JupyterHub authentication
-----------------------------------------
.. _authentication-token:
The following cookies are used by the Hub for handling user authentication.
Proxy authentication token
--------------------------
This section was created based on this post_ from Discourse.
The Hub authenticates its requests to the Proxy using a secret token that
the Hub and Proxy agree upon. The value of this string should be a random
string (for example, generated by ``openssl rand -hex 32``).
.. _post: https://discourse.jupyter.org/t/how-to-force-re-login-for-users/1998/6
Generating and storing token in the configuration file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
jupyterhub-hub-login
~~~~~~~~~~~~~~~~~~~~
Or you can set the value in the configuration file, ``jupyterhub_config.py``:
This is the login token used when visiting Hub-served pages that are
protected by authentication such as the main home, the spawn form, etc.
If this cookie is set, then the user is logged in.
.. code-block:: python
Resetting the Hub cookie secret effectively revokes this cookie.
c.JupyterHub.proxy_auth_token = '0bc02bede919e99a26de1e2a7a5aadfaf6228de836ec39a05a6c6942831d8fe5'
This cookie is restricted to the path ``/hub/``.
Generating and storing as an environment variable
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
jupyterhub-user-<username>
~~~~~~~~~~~~~~~~~~~~~~~~~~
You can pass this value of the proxy authentication token to the Hub and Proxy
using the ``CONFIGPROXY_AUTH_TOKEN`` environment variable:
This is the cookie used for authenticating with a single-user server.
It is set by the single-user server after OAuth with the Hub.
.. code-block:: bash
Effectively the same as ``jupyterhub-hub-login``, but for the
single-user server instead of the Hub. It contains an OAuth access token,
which is checked with the Hub to authenticate the browser.
export CONFIGPROXY_AUTH_TOKEN='openssl rand -hex 32'
Each OAuth access token is associated with a session id (see ``jupyterhub-session-id`` section
below).
This environment variable needs to be visible to the Hub and Proxy.
To avoid hitting the Hub on every request, the authentication response
is cached. And to avoid a stale cache, the cache key is composed of both
the token and session id.
Default if token is not set
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Resetting the Hub cookie secret effectively revokes this cookie.
This cookie is restricted to the path ``/user/<username>``, so that
only the user's server receives it.
jupyterhub-session-id
~~~~~~~~~~~~~~~~~~~~~
This is a random string, meaningless in itself, and the only cookie
shared by the Hub and single-user servers.
Its sole purpose is to coordinate logout of the multiple OAuth cookies.
This cookie is set to ``/`` so all endpoints can receive it, or clear it, etc.
jupyterhub-user-<username>-oauth-state
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A short-lived cookie, used solely to store and validate OAuth state.
It is only set while OAuth between the single-user server and the Hub
is processing.
If you use your browser development tools, you should see this cookie
for a very brief moment before you are logged in,
with an expiration date shorter than ``jupyterhub-hub-login`` or
``jupyterhub-user-<username>``.
This cookie should not exist after you have successfully logged in.
This cookie is restricted to the path ``/user/<username>``, so that only
the user's server receives it.
If you don't set the Proxy authentication token, the Hub will generate a random
key itself, which means that any time you restart the Hub you **must also
restart the Proxy**. If the proxy is a subprocess of the Hub, this should happen
automatically (this is the default configuration).

View File

@@ -2,10 +2,10 @@
When working with JupyterHub, a **Service** is defined as a process
that interacts with the Hub's REST API. A Service may perform a specific
action or task. For example, shutting down individuals' single user
notebook servers that have been idle for some time is a good example of
a task that could be automated by a Service. Let's look at how the
[jupyterhub_idle_culler][] script can be used as a Service.
or action or task. For example, shutting down individuals' single user
notebook servers that have been is a good example of a task that could
be automated by a Service. Let's look at how the [cull_idle_servers][]
script can be used as a Service.
## Real-world example to cull idle servers
@@ -14,12 +14,12 @@ document will:
- explain some basic information about API tokens
- clarify that API tokens can be used to authenticate to
single-user servers as of [version 0.8.0](../changelog)
- show how the [jupyterhub_idle_culler][] script can be:
- used in a Hub-managed service
- run as a standalone script
single-user servers as of [version 0.8.0](../changelog.html)
- show how the [cull_idle_servers][] script can be:
- used in a Hub-managed service
- run as a standalone script
Both examples for `jupyterhub_idle_culler` will communicate tasks to the
Both examples for `cull_idle_servers` will communicate tasks to the
Hub via the REST API.
## API Token basics
@@ -29,14 +29,14 @@ Hub via the REST API.
To run such an external service, an API token must be created and
provided to the service.
As of [version 0.6.0](../changelog), the preferred way of doing
As of [version 0.6.0](../changelog.html), the preferred way of doing
this is to first generate an API token:
```bash
openssl rand -hex 32
```
In [version 0.8.0](../changelog), a TOKEN request page for
In [version 0.8.0](../changelog.html), a TOKEN request page for
generating an API token is available from the JupyterHub user interface:
![Request API TOKEN page](../images/token-request.png)
@@ -78,73 +78,44 @@ single-user servers, and only cookies can be used for authentication.
0.8 supports using JupyterHub API tokens to authenticate to single-user
servers.
## Configure the idle culler to run as a Hub-Managed Service
Install the idle culler:
```
pip install jupyterhub-idle-culler
```
## Configure `cull-idle` to run as a Hub-Managed Service
In `jupyterhub_config.py`, add the following dictionary for the
`idle-culler` Service to the `c.JupyterHub.services` list:
`cull-idle` Service to the `c.JupyterHub.services` list:
```python
c.JupyterHub.services = [
{
'name': 'idle-culler',
'command': [sys.executable, '-m', 'jupyterhub_idle_culler', '--timeout=3600'],
}
]
c.JupyterHub.load_roles = [
{
"name": "list-and-cull", # name the role
"services": [
"idle-culler", # assign the service to this role
],
"scopes": [
# declare what permissions the service should have
"list:users", # list users
"read:users:activity", # read user last-activity
"admin:servers", # start/stop servers
],
'name': 'cull-idle',
'admin': True,
'command': 'python3 cull_idle_servers.py --timeout=3600'.split(),
}
]
```
where:
- `command` indicates that the Service will be launched as a
- `'admin': True` indicates that the Service has 'admin' permissions, and
- `'command'` indicates that the Service will be launched as a
subprocess, managed by the Hub.
```{versionchanged} 2.0
Prior to 2.0, the idle-culler required 'admin' permissions.
It now needs the scopes:
- `list:users` to access the user list endpoint
- `read:users:activity` to read activity info
- `admin:servers` to start/stop servers
```
## Run `cull-idle` manually as a standalone script
Now you can run your script by providing it
Now you can run your script, i.e. `cull_idle_servers`, by providing it
the API token and it will authenticate through the REST API to
interact with it.
This will run the idle culler service manually. It can be run as a standalone
This will run `cull-idle` manually. `cull-idle` can be run as a standalone
script anywhere with access to the Hub, and will periodically check for idle
servers and shut them down via the Hub's REST API. In order to shutdown the
servers, the token given to `cull-idle` must have permission to list users
and admin their servers.
servers, the token given to cull-idle must have admin privileges.
Generate an API token and store it in the `JUPYTERHUB_API_TOKEN` environment
variable. Run `jupyterhub_idle_culler` manually.
variable. Run `cull_idle_servers.py` manually.
```bash
export JUPYTERHUB_API_TOKEN='token'
python -m jupyterhub_idle_culler [--timeout=900] [--url=http://127.0.0.1:8081/hub/api]
python3 cull_idle_servers.py [--timeout=900] [--url=http://127.0.0.1:8081/hub/api]
```
[jupyterhub_idle_culler]: https://github.com/jupyterhub/jupyterhub-idle-culler
[cull_idle_servers]: https://github.com/jupyterhub/jupyterhub/blob/master/examples/cull-idle/cull_idle_servers.py
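
As a complement (a hedged sketch, with placeholder name and token), the same script can be registered with the Hub as an *external* service, so the Hub tracks its API token while you run the process yourself:

```python
# jupyterhub_config.py -- illustrative sketch only; name and token are placeholders
c.JupyterHub.services = [
    {
        'name': 'cull-idle',
        'admin': True,             # pre-2.0 style permissions, as in this guide
        'api_token': 'abc123...',  # same token exported as JUPYTERHUB_API_TOKEN
        # no 'command': the script runs outside the Hub, as a standalone process
    }
]
```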

View File

@@ -1,8 +1,8 @@
# Spawners and single-user notebook servers
Since the single-user server is an instance of `jupyter notebook`, an entire separate
multi-process application, there are many aspects of that server that can be configured, and a lot
of ways to express that configuration.
multi-process application, there are many aspect of that server can configure, and a lot of ways
to express that configuration.
At the JupyterHub level, you can set some values on the Spawner. The simplest of these is
`Spawner.notebook_dir`, which lets you set the root directory for a user's server. This root
@@ -14,7 +14,7 @@ expanded to the user's home directory.
c.Spawner.notebook_dir = '~/notebooks'
```
You can also specify extra command line arguments to the notebook server with:
You can also specify extra command-line arguments to the notebook server with:
```python
c.Spawner.args = ['--debug', '--profile=PHYS131']

(Ten binary image files removed in this comparison; previews not shown.)

View File

@@ -1,15 +0,0 @@
=====
About
=====
JupyterHub is an open source project and community. It is a part of the
`Jupyter Project <https://jupyter.org>`_. JupyterHub is an open and inclusive
community, and invites contributions from anyone. This section covers information
about our community, as well as ways that you can connect and get involved.
.. toctree::
:maxdepth: 1
contributor-list
changelog
gallery-jhub-deployments

View File

@@ -1,13 +0,0 @@
=====================
Administrator's Guide
=====================
This guide covers best-practices, tips, common questions and operations, as
well as other information relevant to running your own JupyterHub over time.
.. toctree::
:maxdepth: 2
troubleshooting
admin/upgrading
changelog

View File

@@ -1,38 +1,21 @@
==========
JupyterHub
==========
`JupyterHub`_ is the best way to serve `Jupyter notebook`_ for multiple users.
It can be used in a class of students, a corporate data science group or scientific
research group. It is a multi-user **Hub** that spawns, manages, and proxies multiple
`JupyterHub`_, a multi-user **Hub**, spawns, manages, and proxies multiple
instances of the single-user `Jupyter notebook`_ server.
JupyterHub can be used to serve notebooks to a class of students, a corporate
data science group, or a scientific research group.
To make life easier, JupyterHub has distributions. Be sure to
take a look at them before continuing with the configuration of the
original `JupyterHub`_ system. Today, there are two main cases:
1. If you need a simple setup for a small number of users (0-100) on a single server,
take a look at
`The Littlest JupyterHub <https://github.com/jupyterhub/the-littlest-jupyterhub>`__ distribution.
2. If you need to support even more users with a dynamic number of servers on a cloud,
take a look at the `Zero to JupyterHub with Kubernetes <https://github.com/jupyterhub/zero-to-jupyterhub-k8s>`__ .
Four subsystems make up JupyterHub:
* a **Hub** (tornado process) that is the heart of JupyterHub
* a **configurable http proxy** (node-http-proxy) that receives the requests from the client's browser
* multiple **single-user Jupyter notebook servers** (Python/IPython/tornado) that are monitored by Spawners
* an **authentication class** that manages how users can access the system
Besides these central pieces, you can add optional configurations through a `config.py` file and manage users' kernels from the admin panel. A simplification of the whole system can be seen in the figure below:
.. image:: images/jhub-fluxogram.jpeg
.. image:: images/jhub-parts.png
:alt: JupyterHub subsystems
:width: 80%
:align: center
:width: 40%
:align: right
Three subsystems make up JupyterHub:
* a multi-user **Hub** (tornado process)
* a **configurable http proxy** (node-http-proxy)
* multiple **single-user Jupyter notebook servers** (Python/IPython/tornado)
JupyterHub performs the following functions:
@@ -45,114 +28,98 @@ JupyterHub performs the following functions:
For convenient administration of the Hub, its users, and services,
JupyterHub also provides a `REST API`_.
The JupyterHub team and Project Jupyter value our community, and JupyterHub
follows the Jupyter `Community Guides <https://jupyter.readthedocs.io/en/latest/community/content-community.html>`_.
Contents
========
--------
.. _index/distributions:
**Installation Guide**
Distributions
-------------
* :doc:`installation-guide`
* :doc:`quickstart`
* :doc:`quickstart-docker`
* :doc:`installation-basics`
A JupyterHub **distribution** is tailored towards a particular set of
use cases. These are generally easier to set up than setting up
JupyterHub from scratch, assuming they fit your use case.
**Getting Started**
The two popular ones are:
* :doc:`getting-started/index`
* :doc:`getting-started/config-basics`
* :doc:`getting-started/networking-basics`
* :doc:`getting-started/security-basics`
* :doc:`getting-started/authenticators-users-basics`
* :doc:`getting-started/spawners-basics`
* :doc:`getting-started/services-basics`
* `Zero to JupyterHub on Kubernetes <http://z2jh.jupyter.org>`_, for
running JupyterHub on top of `Kubernetes <https://k8s.io>`_. This
can scale to large number of machines & users.
* `The Littlest JupyterHub <http://tljh.jupyter.org>`_, for an easy
to set up & run JupyterHub supporting 1-100 users on a single machine.
**Technical Reference**
Installation Guide
------------------
* :doc:`reference/index`
* :doc:`reference/technical-overview`
* :doc:`reference/websecurity`
* :doc:`reference/authenticators`
* :doc:`reference/spawners`
* :doc:`reference/services`
* :doc:`reference/rest`
* :doc:`reference/upgrading`
* :doc:`reference/templates`
* :doc:`reference/config-user-env`
* :doc:`reference/config-examples`
* :doc:`reference/config-ghoauth`
* :doc:`reference/config-proxy`
* :doc:`reference/config-sudo`
.. toctree::
:maxdepth: 2
**API Reference**
installation-guide
* :doc:`api/index`
Getting Started
---------------
**Tutorials**
.. toctree::
:maxdepth: 2
* :doc:`tutorials/index`
* :doc:`tutorials/upgrade-dot-eight`
* `Zero to JupyterHub with Kubernetes <https://zero-to-jupyterhub.readthedocs.io/en/latest/>`_
getting-started/index
**Troubleshooting**
Technical Reference
-------------------
* :doc:`troubleshooting`
.. toctree::
:maxdepth: 2
**About JupyterHub**
reference/index
* :doc:`contributor-list`
* :doc:`gallery-jhub-deployments`
Administrators guide
--------------------
**Changelog**
.. toctree::
:maxdepth: 2
index-admin
API Reference
-------------
.. toctree::
:maxdepth: 2
api/index
RBAC Reference
--------------
.. toctree::
:maxdepth: 2
rbac/index
Contributing
------------
We want you to contribute to JupyterHub in ways that are most exciting
& useful to you. We value documentation, testing, bug reporting & code equally,
and are glad to have your contributions in whatever form you wish :)
Our `Code of Conduct <https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md>`_
(`reporting guidelines <https://github.com/jupyter/governance/blob/HEAD/conduct/reporting_online.md>`_)
helps keep our community welcoming to as many people as possible.
.. toctree::
:maxdepth: 2
contributing/index
About JupyterHub
----------------
.. toctree::
:maxdepth: 2
index-about
* :doc:`changelog`
Indices and tables
==================
------------------
* :ref:`genindex`
* :ref:`modindex`
Questions? Suggestions?
=======================
-----------------------
- `Jupyter mailing list <https://groups.google.com/forum/#!forum/jupyter>`_
- `Jupyter website <https://jupyter.org>`_
.. _contents:
Full Table of Contents
----------------------
.. toctree::
:maxdepth: 2
installation-guide
getting-started/index
reference/index
api/index
tutorials/index
troubleshooting
contributor-list
gallery-jhub-deployments
changelog
.. _JupyterHub: https://github.com/jupyterhub/jupyterhub
.. _Jupyter notebook: https://jupyter-notebook.readthedocs.io/en/latest/
.. _REST API: https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/HEAD/docs/rest-api.yml#!/default
.. _REST API: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default

View File

@@ -6,7 +6,7 @@ JupyterHub is supported on Linux/Unix based systems. To use JupyterHub, you need
a Unix server (typically Linux) running somewhere that is accessible to your
team on the network. The JupyterHub server can be on an internal network at your
organization, or it can run on the public internet (in which case, take care
with the Hub's [security](./getting-started/security-basics)).
with the Hub's [security](./security-basics.html)).
JupyterHub officially **does not** support Windows. You may be able to use
JupyterHub on Windows if you use a Spawner and Authenticator that work on
@@ -28,7 +28,7 @@ Prior to beginning installation, it's helpful to consider some of the following:
- Spawner of singleuser notebook servers (Docker, Batch, etc.)
- Services (nbgrader, etc.)
- JupyterHub database (default SQLite; traditional RDBMS such as PostgreSQL,
MySQL, or other databases supported by [SQLAlchemy](http://www.sqlalchemy.org))
MySQL, or other databases supported by [SQLAlchemy](http://www.sqlalchemy.org))
## Folders and File Locations

View File

@@ -1,6 +0,0 @@
:orphan:
JupyterHub the hard way
=======================
This guide has moved to https://github.com/jupyterhub/jupyterhub-the-hard-way/blob/HEAD/docs/installation-guide-hard.md

View File

@@ -1,9 +1,5 @@
Installation
============
These sections cover how to get up-and-running with JupyterHub. They cover
some basics of the tools needed to deploy JupyterHub as well as how to get it
running on your own infrastructure.
Installation Guide
==================
.. toctree::
:maxdepth: 3

View File

@@ -1,52 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v2.0.22 (GNU/Linux)
mQINBFMx2LoBEAC9xU8JiKI1VlCJ4PT9zqhU5nChQZ06/bj1BBftiMJG07fdGVO0
ibOn4TrCoRYaeRlet0UpHzxT4zDa5h3/usJaJNTSRwtWePw2o7Lik8J+F3LionRf
8Jz81WpJ+81Klg4UWKErXjBHsu/50aoQm6ZNYG4S2nwOmMVEC4nc44IAA0bb+6kW
saFKKzEDsASGyuvyutdyUHiCfvvh5GOC2h9mXYvl4FaMW7K+d2UgCYERcXDNy7C1
Bw+uepQ9ELKdG4ZpvonO6BNr1BWLln3wk93AQfD5qhfsYRJIyj0hJlaRLtBU3i6c
xs+gQNF4mPmybpPSGuOyUr4FYC7NfoG7IUMLj+DYa6d8LcMJO+9px4IbdhQvzGtC
qz5av1TX7/+gnS4L8C9i1g8xgI+MtvogngPmPY4repOlK6y3l/WtxUPkGkyYkn3s
RzYyE/GJgTwuxFXzMQs91s+/iELFQq/QwmEJf+g/QYfSAuM+lVGajEDNBYVAQkxf
gau4s8Gm0GzTZmINilk+7TxpXtKbFc/Yr4A/fMIHmaQ7KmJB84zKwONsQdVv7Jjj
0dpwu8EIQdHxX3k7/Q+KKubEivgoSkVwuoQTG15X9xrOsDZNwfOVQh+JKazPvJtd
SNfep96r9t/8gnXv9JI95CGCQ8lNhXBUSBM3BDPTbudc4b6lFUyMXN0mKQARAQAB
tCxJUHl0aG9uIFNlY3VyaXR5IFRlYW0gPHNlY3VyaXR5QGlweXRob24ub3JnPokC
OAQTAQIAIgUCUzHYugIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AACgkQEwJc
LcmZYkjuXg//R/t6nMNQmf9W1h52IVfUbRAVmvZ5d063hQHKV2dssxtnA2dRm/x5
JZu8Wz7ZrEZpyqwRJO14sxN1/lC3v+zs9XzYXr2lBTZuKCPIBypYVGIynCuWJBQJ
rWnfG4+u1RHahnjqlTWTY1C/le6v7SjAvCb6GbdA6k4ZL2EJjQlRaHDmzw3rV/+l
LLx6/tYzIsotuflm/bFumyOMmpQQpJjnCkWIVjnRICZvuAn97jLgtTI0+0Rzf4Zb
k2BwmHwDRqWCTTcRI9QvTl8AzjW+dNImN22TpGOBPfYj8BCZ9twrpKUbf+jNqJ1K
THQzFtpdJ6SzqiFVm74xW4TKqCLkbCQ/HtVjTGMGGz/y7KTtaLpGutQ6XE8SSy6P
EffSb5u+kKlQOWaH7Mc3B0yAojz6T3j5RSI8ts6pFi6pZhDg9hBfPK2dT0v/7Mkv
E1Z7q2IdjZnhhtGWjDAMtDDn2NbY2wuGoa5jAWAR0WvIbEZ3kOxuLE5/ZOG1FyYm
noJRliBz7038nT92EoD5g1pdzuxgXtGCpYyyjRZwaLmmi4CvA+oThKmnqWNY5lyY
ricdNHDiyEXK0YafJL1oZgM86MSb0jKJMp5U11nUkUGzkroFfpGDmzBwAzEPgeiF
40+qgsKB9lqwb3G7PxvfSi3XwxfXgpm1cTyEaPSzsVzve3d1xeqb7Yq5Ag0EUzHY
ugEQALQ5FtLdNoxTxMsgvrRr1ejLiUeRNUfXtN1TYttOfvAhfBVnszjtkpIW8DCB
JF/bA7ETiH8OYYn/Fm6MPI5H64IHEncpzxjf57jgpXd9CA9U2OMk/P1nve5zYchP
QmP2fJxeAWr0aRH0Mse5JS5nCkh8Xv4nAjsBYeLTJEVOb1gPQFXOiFcVp3gaKAzX
GWOZ/mtG/uaNsabH/3TkcQQEgJefd11DWgMB7575GU+eME7c6hn3FPITA5TC5HUX
azvjv/PsWGTTVAJluJ3fUDvhpbGwYOh1uV0rB68lPpqVIro18IIJhNDnccM/xqko
4fpJdokdg4L1wih+B04OEXnwgjWG8OIphR/oL/+M37VV2U7Om/GE6LGefaYccC9c
tIaacRQJmZpG/8RsimFIY2wJ07z8xYBITmhMmOt0bLBv0mU0ym5KH9Dnru1m9QDO
AHwcKrDgL85f9MCn+YYw0d1lYxjOXjf+moaeW3izXCJ5brM+MqVtixY6aos3YO29
J7SzQ4aEDv3h/oKdDfZny21jcVPQxGDui8sqaZCi8usCcyqWsKvFHcr6vkwaufcm
3Knr2HKVotOUF5CDZybopIz1sJvY/5Dx9yfRmtivJtglrxoDKsLi1rQTlEQcFhCS
ACjf7txLtv03vWHxmp4YKQFkkOlbyhIcvfPVLTvqGerdT2FHABEBAAGJAh8EGAEC
AAkFAlMx2LoCGwwACgkQEwJcLcmZYkgK0BAAny0YUugpZldiHzYNf8I6p2OpiDWv
ZHaguTTPg2LJSKaTd+5UHZwRFIWjcSiFu+qTGLNtZAdcr0D5f991CPvyDSLYgOwb
Jm2p3GM2KxfECWzFbB/n/PjbZ5iky3+5sPlOdBR4TkfG4fcu5GwUgCkVe5u3USAk
C6W5lpeaspDz39HAPRSIOFEX70+xV+6FZ17B7nixFGN+giTpGYOEdGFxtUNmHmf+
waJoPECyImDwJvmlMTeP9jfahlB6Pzaxt6TBZYHetI/JR9FU69EmA+XfCSGt5S+0
Eoc330gpsSzo2VlxwRCVNrcuKmG7PsFFANok05ssFq1/Djv5rJ++3lYb88b8HSP2
3pQJPrM7cQNU8iPku9yLXkY5qsoZOH+3yAia554Dgc8WBhp6fWh58R0dIONQxbbo
apNdwvlI8hKFB7TiUL6PNShE1yL+XD201iNkGAJXbLMIC1ImGLirUfU267A3Cop5
hoGs179HGBcyj/sKA3uUIFdNtP+NndaP3v4iYhCitdVCvBJMm6K3tW88qkyRGzOk
4PW422oyWKwbAPeMk5PubvEFuFAIoBAFn1zecrcOg85RzRnEeXaiemmmH8GOe1Xu
Kh+7h8XXyG6RPFy8tCcLOTk+miTqX+4VWy+kVqoS2cQ5IV8WsJ3S7aeIy0H89Z8n
5vmLc+Ibz+eT+rM=
=XVDe
-----END PGP PUBLIC KEY BLOCK-----

View File

@@ -25,7 +25,7 @@ Starting JupyterHub with docker
The JupyterHub docker image can be started with the following command::
docker run -d -p 8000:8000 --name jupyterhub jupyterhub/jupyterhub jupyterhub
docker run -d --name jupyterhub jupyterhub/jupyterhub jupyterhub
This command will create a container named ``jupyterhub`` that you can
**stop and resume** with ``docker stop/start``.

View File

@@ -12,24 +12,20 @@ Before installing JupyterHub, you will need:
- [nodejs/npm](https://www.npmjs.com/). [Install nodejs/npm](https://docs.npmjs.com/getting-started/installing-node),
using your operating system's package manager.
- If you are using **`conda`**, the nodejs and npm dependencies will be installed for
* If you are using **`conda`**, the nodejs and npm dependencies will be installed for
you by conda.
- If you are using **`pip`**, install a recent version of
* If you are using **`pip`**, install a recent version of
[nodejs/npm](https://docs.npmjs.com/getting-started/installing-node).
For example, install it on Linux (Debian/Ubuntu) using:
```
sudo apt-get install npm nodejs-legacy
```
The `nodejs-legacy` package installs the `node` executable and is currently
required for npm to work on Debian/Ubuntu.
- A [pluggable authentication module (PAM)](https://en.wikipedia.org/wiki/Pluggable_authentication_module)
to use the [default Authenticator](./getting-started/authenticators-users-basics.md).
PAM is often available by default on most distributions; if this is not the case, it can be installed by
using the operating system's package manager.
- TLS certificate and key for HTTPS communication
- Domain name
@@ -78,12 +74,12 @@ Visit `https://localhost:8000` in your browser, and sign in with your unix
credentials.
To **allow multiple users to sign in** to the Hub server, you must start
`jupyterhub` as a _privileged user_, such as root:
`jupyterhub` as a *privileged user*, such as root:
```bash
sudo jupyterhub
```
The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges)
describes how to run the server as a _less privileged user_. This requires
describes how to run the server as a *less privileged user*. This requires
additional configuration of the system.

View File

@@ -1,126 +0,0 @@
import os
from collections import defaultdict
from pathlib import Path
from pytablewriter import MarkdownTableWriter
from ruamel.yaml import YAML
from jupyterhub.scopes import scope_definitions
HERE = os.path.abspath(os.path.dirname(__file__))
PARENT = Path(HERE).parent.parent.absolute()
class ScopeTableGenerator:
    def __init__(self):
        self.scopes = scope_definitions

    @classmethod
    def create_writer(cls, table_name, headers, values):
        writer = MarkdownTableWriter()
        writer.table_name = table_name
        writer.headers = headers
        writer.value_matrix = values
        writer.margin = 1
        return writer

    def _get_scope_relationships(self):
        """Returns a tuple of dictionary of all scope-subscope pairs and a list of just subscopes:
        ({scope: subscope}, [subscopes])
        used for creating hierarchical scope table in _parse_scopes()
        """
        pairs = []
        for scope, data in self.scopes.items():
            subscopes = data.get('subscopes')
            if subscopes is not None:
                for subscope in subscopes:
                    pairs.append((scope, subscope))
            else:
                pairs.append((scope, None))
        subscopes = [pair[1] for pair in pairs]
        pairs_dict = defaultdict(list)
        for scope, subscope in pairs:
            pairs_dict[scope].append(subscope)
        return pairs_dict, subscopes

    def _get_top_scopes(self, subscopes):
        """Returns a list of highest level scopes
        (not a subscope of any other scopes)"""
        top_scopes = []
        for scope in self.scopes.keys():
            if scope not in subscopes:
                top_scopes.append(scope)
        return top_scopes

    def _parse_scopes(self):
        """Returns a list of table rows where row:
        [indented scopename string, scope description string]"""
        scope_pairs, subscopes = self._get_scope_relationships()
        top_scopes = self._get_top_scopes(subscopes)

        table_rows = []
        md_indent = "&nbsp;&nbsp;&nbsp;"

        def _add_subscopes(table_rows, scopename, depth=0):
            description = self.scopes[scopename]['description']
            doc_description = self.scopes[scopename].get('doc_description', '')
            if doc_description:
                description = doc_description
            table_row = [f"{md_indent * depth}`{scopename}`", description]
            table_rows.append(table_row)
            for subscope in scope_pairs[scopename]:
                if subscope:
                    _add_subscopes(table_rows, subscope, depth + 1)

        for scope in top_scopes:
            _add_subscopes(table_rows, scope)

        return table_rows

    def write_table(self):
        """Generates the scope table in markdown format and writes it into `scope-table.md`"""
        filename = f"{HERE}/scope-table.md"
        table_name = ""
        headers = ["Scope", "Grants permission to:"]
        values = self._parse_scopes()
        writer = self.create_writer(table_name, headers, values)

        title = "Table 1. Available scopes and their hierarchy"
        content = f"{title}\n{writer.dumps()}"
        with open(filename, 'w') as f:
            f.write(content)
        print(f"Generated {filename}.")
        print(
            "Run 'make clean' before 'make html' to ensure the built scopes.html contains latest scope table changes."
        )

    def write_api(self):
        """Generates the API description in markdown format and writes it into `rest-api.yml`"""
        filename = f"{PARENT}/rest-api.yml"
        yaml = YAML(typ='rt')
        yaml.preserve_quotes = True
        scope_dict = {}
        with open(filename, 'r+') as f:
            content = yaml.load(f.read())
            f.seek(0)
            for scope in self.scopes:
                description = self.scopes[scope]['description']
                doc_description = self.scopes[scope].get('doc_description', '')
                if doc_description:
                    description = doc_description
                scope_dict[scope] = description
            content['securityDefinitions']['oauth2']['scopes'] = scope_dict
            yaml.dump(content, f)
            f.truncate()


def main():
    table_generator = ScopeTableGenerator()
    table_generator.write_table()
    table_generator.write_api()


if __name__ == "__main__":
    main()

View File

@@ -1,37 +0,0 @@
# JupyterHub RBAC
Role Based Access Control (RBAC) in JupyterHub provides fine-grained control of access to JupyterHub's API resources.
RBAC is new in JupyterHub 2.0.
## Motivation
The JupyterHub API requires authorization to access its APIs.
This ensures that an arbitrary user, or even an unauthenticated third party, are not allowed to perform such actions.
For instance, the behaviour prior to adoption of RBAC is that creating or deleting users requires _admin rights_.
The prior system is functional, but lacks flexibility. If your Hub serves a number of users in different groups, you might want to delegate permissions to other users or automate certain processes.
Prior to RBAC, appointing a 'group-only admin' or a bot that culls idle servers requires granting full admin rights to all actions. This poses a risk of the user or service intentionally or unintentionally accessing and modifying any data within the Hub and violates the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege).
To remedy situations like this, JupyterHub is transitioning to an RBAC system. By equipping users, groups and services with _roles_ that supply them with a collection of permissions (_scopes_), administrators are able to fine-tune which parties are granted access to which resources.
## Definitions
**Scopes** are specific permissions used to evaluate API requests. For example: the API endpoint `users/servers`, which enables starting or stopping user servers, is guarded by the scope `servers`.
Scopes are not directly assigned to requesters. Rather, when a client performs an API call, their access will be evaluated based on their assigned roles.
**Roles** are collections of scopes that specify the level of what a client is allowed to do. For example, a group administrator may be granted permission to control the servers of group members, but not to create, modify or delete group members themselves.
Within the RBAC framework, this is achieved by assigning a role to the administrator that covers exactly those privileges.
## Technical Overview
```{toctree}
:maxdepth: 2
roles
scopes
use-cases
tech-implementation
upgrade
```

View File

@@ -1,162 +0,0 @@
(roles)=
# Roles
JupyterHub provides four roles that are available by default:
```{admonition} **Default roles**
- `user` role provides a {ref}`default user scope <default-user-scope-target>` `self` that grants access to the user's own resources.
- `admin` role contains all available scopes and grants full rights to all actions. This role **cannot be edited**.
- `token` role provides a {ref}`default token scope <default-token-scope-target>` `all` that resolves to the same permissions as the owner of the token has.
- `server` role allows for posting activity of "itself" only.
**These roles cannot be deleted.**
```
These default roles have a default collection of scopes,
but you can define the scopes associated with each role (excluding admin) to suit your needs,
as seen [below](overriding-default-roles).
The `user`, `admin`, and `token` roles by default all preserve the permissions prior to RBAC.
Only the `server` role is changed from pre-2.0, to reduce its permissions to activity-only
instead of the default of a full access token.
Additional custom roles can also be defined (see {ref}`define-role-target`).
Roles can be assigned to the following entities:
- Users
- Services
- Groups
- Tokens
An entity can have zero, one, or multiple roles, and there are no restrictions on which roles can be assigned to which entity. Roles can be added to or removed from entities at any time.
**Users** \
When a new user gets created, they are assigned their default role `user`. Additionally, if the user is created with admin privileges (via `c.Authenticator.admin_users` in `jupyterhub_config.py` or `admin: true` via API), they will also be granted the `admin` role. If an existing user's admin status changes via API or `jupyterhub_config.py`, their default role will be updated accordingly (after the next startup for the latter).
**Services** \
Services do not have a default role. Services without roles have no access to the guarded API endpoints, so most services will require assignment of a role in order to function.
**Groups** \
A group does not require any role, and has no roles by default. If a user is a member of a group, they automatically inherit any of the group's permissions (see {ref}`resolving-roles-scopes-target` for more details). This is useful for assigning a set of common permissions to several users.
**Tokens** \
A token's permissions are evaluated based on its owning entity. Since a token is always issued for a user or service, it can never have more permissions than its owner. If no specific role is requested for a new token, the token is assigned the `token` role.
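
For illustration, a hedged sketch of requesting a token with a specific role through the REST API (the username, URL, and token values are placeholders; the `roles` field is accepted by the token-request endpoint in JupyterHub 2.x):

```python
# illustrative sketch only -- requires the `requests` package and a valid API token
import requests

r = requests.post(
    "http://127.0.0.1:8081/hub/api/users/alice/tokens",
    headers={"Authorization": "token abc123..."},
    json={"roles": ["server"], "note": "example token"},
)
r.raise_for_status()
print(r.json()["token"])  # the new token value is returned only once
```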
(define-role-target)=
## Defining Roles
Roles can be defined or modified in the configuration file as a list of dictionaries. An example:
% TODO: think about loading users into roles if membership has been changed via API.
% What should be the result?
```python
# in jupyterhub_config.py
c.JupyterHub.load_roles = [
    {
        'name': 'server-rights',
        'description': 'Allows parties to start and stop user servers',
        'scopes': ['servers'],
        'users': ['alice', 'bob'],
        'services': ['idle-culler'],
        'groups': ['admin-group'],
    }
]
```
The role `server-rights` now allows the starting and stopping of servers by any of the following:
- users `alice` and `bob`
- the service `idle-culler`
- any member of the `admin-group`.
```{attention}
Tokens cannot be assigned roles through role definition but may be assigned specific roles when requested via API (see {ref}`requesting-api-token-target`).
```
Another example:
```python
# in jupyterhub_config.py
c.JupyterHub.load_roles = [
    {
        'description': 'Read-only user models',
        'name': 'reader',
        'scopes': ['read:users'],
        'services': ['external'],
        'users': ['maria', 'joe']
    }
]
```
The role `reader` allows users `maria` and `joe` and service `external` to read (but not modify) any user's model.
```{admonition} Requirements
:class: warning
In a role definition, the `name` field is required, while all other fields are optional.\
**Role names must:**
- be 3 - 255 characters
- use ascii lowercase, numbers, 'unreserved' URL punctuation `-_.~`
- start with a letter
- end with letter or number.
`users`, `services`, and `groups` only accept objects that already exist in the database or are defined previously in the file.
It is not possible to implicitly add a new user to the database by defining a new role.
```
If no scopes are defined for a _new role_, JupyterHub will raise a warning. Providing non-existent scopes will result in an error.
If a role with a certain name already exists in the database, its definition and scopes will be overwritten. This holds true for all roles except the `admin` role, which cannot be overwritten; an error will be raised if trying to do so. The permissions of all bearers of the role will change accordingly.
(overriding-default-roles)=
### Overriding default roles
Role definitions can include those of the "default" roles listed above (admin excluded),
if the default scopes associated with those roles do not suit your deployment.
For example, to specify what permissions the $JUPYTERHUB_API_TOKEN issued to all single-user servers
has,
define the `server` role.
To restore the JupyterHub 1.x behavior of servers being able to do anything their owners can do,
use the scope `all`:
```python
c.JupyterHub.load_roles = [
    {
        'name': 'server',
        'scopes': ['all'],
    }
]
```
or, better yet, identify the specific [scopes][] you want server environments to have access to.
[scopes]: available-scopes-target
If you don't want to get too detailed,
one option is the `self` scope,
which will have no effect on non-admin users,
but will restrict the token issued to admin user servers to only have access to their own resources,
instead of being able to take actions on behalf of all other users.
```python
c.JupyterHub.load_roles = [
    {
        'name': 'server',
        'scopes': ['self'],
    }
]
```
(removing-roles-target)=
## Removing roles
Only the entities present in the role definition in the `jupyterhub_config.py` remain the role bearers. If a user, service or group is removed from the role definition, they will lose the role on the next startup.
Once a role is loaded, it remains in the database until it is removed from `jupyterhub_config.py` and the Hub is restarted. All previously defined role bearers will lose the role and associated permissions. Default roles, even if previously redefined through the config file and removed, will not be deleted from the database.

View File

@@ -1,126 +0,0 @@
# Scopes in JupyterHub
A scope has a syntax-based design that reveals which resources it provides access to. Resources are objects with a type, associated data, relationships to other resources, and a set of methods that operate on them (see [RESTful API](https://restful-api-design.readthedocs.io/en/latest/resources.html) documentation for more information).
`<resource>` in the RBAC scope design refers to the resource name in the [JupyterHub's API](../reference/rest-api.rst) endpoints in most cases. For instance, `<resource>` equal to `users` corresponds to JupyterHub's API endpoints beginning with _/users_.
(scope-conventions-target)=
## Scope conventions
- `<resource>` \
The top-level `<resource>` scopes, such as `users` or `groups`, grant read, write, and list permissions to the resource itself as well as its sub-resources. For example, the scope `users:activity` is included in the scope `users`.
- `read:<resource>` \
Limits permissions to read-only operations on single resources.
- `list:<resource>` \
Read-only access to listing endpoints.
Use `read:<resource>:<subresource>` to control what fields are returned.
- `admin:<resource>` \
Grants additional permissions such as create/delete on the corresponding resource in addition to read and write permissions.
- `access:<resource>` \
Grants access permissions to the `<resource>` via API or browser.
- `<resource>:<subresource>` \
The {ref}`vertically filtered <vertical-filtering-target>` scopes provide access to a subset of the information granted by the `<resource>` scope. E.g., the scope `users:activity` only provides permission to post user activity.
- `<resource>!<object>=<objectname>` \
{ref}`horizontal-filtering-target` is implemented by the `!<object>=<objectname>` scope structure. A resource (or sub-resource) can be filtered based on `user`, `server`, `group` or `service` name. For instance, `<resource>!user=charlie` limits access to only return resources of user `charlie`. \
Only one filter per scope is allowed, but filters for the same scope have an additive effect; a larger filter can be used by supplying the scope multiple times with different filters.
By adding a scope to an existing role, all role bearers will gain the associated permissions.
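To illustrate how these conventions combine, here is a hypothetical role definition (the role, group and service names are made up) that mixes vertically and horizontally filtered scopes:
```python
# in jupyterhub_config.py -- illustrative names, not defaults
c.JupyterHub.load_roles = [
    {
        'name': 'class-monitor',
        'description': 'Read-only view of one group plus access to its servers',
        'scopes': [
            'list:users!group=class-A',           # list users, limited to group class-A
            'read:users:activity!group=class-A',  # read only the activity attribute
            'access:servers!group=class-A',       # access those users' servers
        ],
        'services': ['monitor'],
    }
]
```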
## Metascopes
Metascopes do not follow the general scope syntax. Instead, a metascope resolves to a set of scopes, which can refer to different resources, based on their owning entity. In JupyterHub, there are currently two metascopes:
1. default user scope `self`, and
2. default token scope `all`.
(default-user-scope-target)=
### Default user scope
Access to the user's own resources and subresources is covered by metascope `self`. This metascope includes the user's model, activity, servers and tokens. For example, `self` for a user named "gerard" includes:
- `users!user=gerard` where the `users` scope provides access to the full user model and activity. The filter restricts this access to the user's own resources.
- `servers!user=gerard` which grants the user access to their own servers without being able to create/delete any.
- `tokens!user=gerard` which allows the user to access, request and delete their own tokens.
- `access:servers!user=gerard` which allows the user to access their own servers via API or browser.
The `self` scope is only valid for user entities. In other cases (e.g., for services) it resolves to an empty set of scopes.
(default-token-scope-target)=
### Default token scope
The token metascope `all` covers the same scopes as the token owner's scopes during requests. For example, if a token owner has roles containing the scopes `read:groups` and `read:users`, the `all` scope resolves to the set of scopes `{read:groups, read:users}`.
If the token owner has default `user` role, the `all` scope resolves to `self`, which will subsequently be expanded to include all the user-specific scopes (or empty set in the case of services).
If the token owner is a member of any group with roles, the group scopes will also be included in resolving the `all` scope.
(horizontal-filtering-target)=
## Horizontal filtering
Horizontal filtering, also called _resource filtering_, is the concept of reducing the payload of an API call to cover only the subset of the _resources_ that the scopes of the client provide access to.
Requested resources are filtered based on the filter of the corresponding scope. For instance, if a service requests a user list (guarded with scope `read:users`) with a role that only contains the scopes `read:users!user=hannah` and `read:users!user=ivan`, the returned list of user models will be the intersection of all users and the collection `{hannah, ivan}`. If this intersection is empty, the API call returns an HTTP 404 error, regardless of whether any users exist outside of the client's scope filter collection.
In case a user resource is being accessed, any scopes with _group_ filters will be expanded to filters for each _user_ in those groups.
### `!user` filter
The `!user` filter is a special horizontal filter that strictly refers to the **"owner only"** scopes, where _owner_ is a user entity. The filter resolves internally into `!user=<ownerusername>` ensuring that only the owner's resources may be accessed through the associated scopes.
For example, the `server` role assigned by default to server tokens contains `access:servers!user` and `users:activity!user` scopes. This allows the token to access and post activity of only the servers owned by the token owner.
The filter can be applied to any scope.
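For instance, a custom role could reuse the same owner-only filter; in this sketch (the role name is made up) every bearer can read only their own user model:
```python
# in jupyterhub_config.py -- 'self-reader' is a hypothetical role name
c.JupyterHub.load_roles = [
    {
        'name': 'self-reader',
        'description': "Read-only access to the bearer's own user model",
        'scopes': ['read:users!user'],  # resolves to read:users!user=<owner> per bearer
    }
]
```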
(vertical-filtering-target)=
## Vertical filtering
Vertical filtering, also called _attribute filtering_, is the concept of reducing the payload of an API call to cover only the _attributes_ of the resources that the scopes of the client provide access to. This occurs when the client scopes are subscopes of the API endpoint that is called.
For instance, if a client requests a user list with the only scope being `read:users:groups`, the returned list of user models will contain only a list of groups per user.
In case the client has multiple subscopes, the call returns the union of the data the client has access to.
The payload of an API call can be filtered both horizontally and vertically simultaneously. For instance, performing an API call to the endpoint `/users/` with the scope `users:name!user=juliette` returns a payload of `[{name: 'juliette'}]` (provided that this name is present in the database).
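A rough sketch of such a request from a client's perspective (the Hub URL and token value are placeholders; the token is assumed to carry only the `users:name!user=juliette` scope):
```python
import requests

api_url = "http://127.0.0.1:8081/hub/api"  # placeholder Hub API URL
token = "abc123"                           # placeholder API token value

# GET /users with a token limited to users:name!user=juliette
r = requests.get(
    f"{api_url}/users",
    headers={"Authorization": f"token {token}"},
)
r.raise_for_status()
print(r.json())  # expected to be roughly [{'name': 'juliette'}]
```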
(available-scopes-target)=
## Available scopes
The table below lists all available scopes and illustrates their hierarchy. Indented scopes indicate subscopes of the scope(s) above them.
There are four exceptions to the general {ref}`scope conventions <scope-conventions-target>`:
- `read:users:name` is a subscope of both `read:users` and `read:servers`. \
The `read:servers` scope requires access to the user name (server owner) because named servers are distinguished internally in the form `!server=username/servername`.
- `read:users:activity` is a subscope of both `read:users` and `users:activity`. \
Posting activity via the `users:activity` scope, which is not included in the `read:users` scope, needs to check the last valid activity of the user.
- `read:roles:users` is a subscope of both `read:roles` and `admin:users`. \
Admin privileges to the _users_ resource include the information about user roles.
- `read:roles:groups` is a subscope of both `read:roles` and `admin:groups`. \
Similar to the `read:roles:users` above.
```{include} scope-table.md
```
```{Caution}
Note that only {ref}`horizontal filtering <horizontal-filtering-target>` can be added to scopes to customize them. \
The metascopes `self` and `all`, as well as the `<resource>`, `<resource>:<subresource>`, `read:<resource>`, `admin:<resource>`, and `access:<resource>` scopes, are predefined and cannot otherwise be changed.
```
### Scopes and APIs
The scopes are also listed in the [](../reference/rest-api.rst) documentation. Each API endpoint has a list of scopes which can be used to access the API; if no scopes are listed, the API is not authenticated and can be accessed without any permissions (i.e., no scopes).
The scopes listed for each API endpoint reflect the "lowest" permissions required to gain any access to the corresponding API. For example, posting a user's activity (_POST /users/:name/activity_) requires the `users:activity` scope. If the `users` scope is passed during the request, access will be granted as the required scope is a subscope of the `users` scope. If, on the other hand, the `read:users:activity` scope is passed, access will be denied.
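As a rough sketch (the Hub URL, username and token are placeholders), posting activity with a token whose role includes the `users:activity` scope might look like:
```python
import requests
from datetime import datetime, timezone

api_url = "http://127.0.0.1:8081/hub/api"  # placeholder Hub API URL
token = "abc123"                           # placeholder: token with the users:activity scope

# ISO 8601 timestamp with a trailing 'Z'
now = datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")

# POST /users/:name/activity -- 'charlie' is an illustrative username
r = requests.post(
    f"{api_url}/users/charlie/activity",
    headers={"Authorization": f"token {token}"},
    json={"last_activity": now},
)
r.raise_for_status()
```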

View File

@@ -1,80 +0,0 @@
# Technical Implementation
Roles are stored in the database, where they are associated with users, services, etc., and can be added or modified as explained in the {ref}`define-role-target` section. Users, services, groups, and tokens can gain, change, and lose roles. This is currently achieved via `jupyterhub_config.py` (see {ref}`define-role-target`) and will be made available via the API in the future. The latter will allow for changing a token's role, and thereby its permissions, without the need to issue a new token.
Roles and scopes utilities can be found in the `roles.py` and `scopes.py` modules. Scope variables take on five different formats, which is reflected throughout the utilities via a specific nomenclature:
```{admonition} **Scope variable nomenclature**
:class: tip
- _scopes_ \
List of scopes with abbreviations (used in role definitions). E.g., `["users:activity!user"]`.
- _expanded scopes_ \
Set of expanded scopes without abbreviations (i.e., resolved metascopes, filters and subscopes). E.g., `{"users:activity!user=charlie", "read:users:activity!user=charlie"}`.
- _parsed scopes_ \
Dictionary (JSON-like) format of expanded scopes. E.g., `{"users:activity": {"user": ["charlie"]}, "read:users:activity": {"user": ["charlie"]}}`.
- _intersection_ \
Set of expanded scopes as intersection of 2 expanded scope sets.
- _identify scopes_ \
Set of expanded scopes needed for identify (whoami) endpoints.
```
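As a rough illustration of these formats as Python values (the user name and scopes are made up):
```python
# scopes -- abbreviated form used in role definitions
scopes = ["users:activity!user"]

# expanded scopes -- filters and subscopes resolved for a specific owner
expanded_scopes = {
    "users:activity!user=charlie",
    "read:users:activity!user=charlie",
}

# parsed scopes -- dictionary form used when evaluating access
parsed_scopes = {
    "users:activity": {"user": ["charlie"]},
    "read:users:activity": {"user": ["charlie"]},
}

# intersection -- expanded scopes common to two expanded sets
intersection = expanded_scopes & {"read:users:activity!user=charlie"}
```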
(resolving-roles-scopes-target)=
## Resolving roles and scopes
**Resolving roles** refers to determining which roles a user, service, token, or group has, extracting the list of scopes from each role and combining them into a single set of scopes.
**Resolving scopes** involves expanding scopes into all their possible subscopes (_expanded scopes_), parsing them into the format used for access evaluation (_parsed scopes_) and, if applicable, comparing two sets of scopes (_intersection_). All procedures take into account the scope hierarchy, {ref}`vertical <vertical-filtering-target>` and {ref}`horizontal filtering <horizontal-filtering-target>`, limiting or elevated permissions (`read:<resource>` or `admin:<resource>`, respectively), and metascopes.
Roles and scopes are resolved on several occasions, for example when requesting an API token with specific roles or making an API request. The following sections provide more details.
(requesting-api-token-target)=
### Requesting API token with specific roles
API tokens grant access to JupyterHub's APIs. The RBAC framework allows for requesting tokens with specific existing roles. To date, it is only possible to add roles to a token through the _POST /users/:name/tokens_ API where the roles can be specified in the token parameters body (see [](../reference/rest-api.rst)).
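A sketch of such a request (the Hub URL, username, and the caller's own token are placeholders):
```python
import requests

api_url = "http://127.0.0.1:8081/hub/api"  # placeholder Hub API URL
token = "abc123"                           # placeholder: the caller's existing API token

# request a new token for 'charlie' restricted to the existing 'server' role
r = requests.post(
    f"{api_url}/users/charlie/tokens",
    headers={"Authorization": f"token {token}"},
    json={"roles": ["server"], "note": "token limited to the server role"},
)
r.raise_for_status()
new_token = r.json()["token"]  # the token value is only returned at creation time
```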
RBAC adds several steps into the token issue flow.
If no roles are requested, the token is issued with the default `token` role (provided the requester is allowed to create the token).
If the token is requested with any roles, the permissions of the requesting entity are checked against the requested permissions to ensure the token would not grant its owner additional privileges.
If, due to modifications of roles or entities, at API request time a token has any scopes that its owner does not, those scopes are removed. The API request is resolved without additional errors using the scopes _intersection_, but the Hub logs a warning (see {ref}`Figure 2 <api-request-chart>`).
Resolving a token's roles (yellow box in {ref}`Figure 1 <token-request-chart>`) corresponds to resolving all the token's owner roles (including the roles associated with their groups) and the token's requested roles into a set of scopes. The two sets are compared (Resolve the scopes box in orange in {ref}`Figure 1 <token-request-chart>`), taking into account the scope hierarchy but, solely for role assignment, omitting any {ref}`horizontal filter <horizontal-filtering-target>` comparison. If the token's scopes are a subset of the token owner's scopes, the token is issued with the requested roles; if not, JupyterHub will raise an error.
{ref}`Figure 1 <token-request-chart>` below illustrates the steps involved. The orange rectangles highlight where in the process the roles and scopes are resolved.
```{figure} ../images/rbac-token-request-chart.png
:align: center
:name: token-request-chart
Figure 1. Resolving roles and scopes during API token request
```
### Making an API request
With the RBAC framework, each authenticated JupyterHub API request is guarded by a scope decorator that specifies which scopes are required to gain access to the API.
When an API request is performed, the requesting API token's roles are again resolved (yellow box in {ref}`Figure 2 <api-request-chart>`) to ensure the token does not grant more permissions than its owner has at request time (e.g., due to changing/losing roles).
If the owner's roles do not include some scopes of the token's scopes, only the _intersection_ of the token's and owner's scopes will be used. For example, using a token with scope `users` whose owner's role scope is `read:users:name` will result in only the `read:users:name` scope being passed on. In the case of no _intersection_, an empty set of scopes will be used.
The passed scopes are compared to the scopes required to access the API as follows:
- if the API scopes are present within the set of passed scopes, the access is granted and the API returns its "full" response
- if that is not the case, another check is utilized to determine if subscopes of the required API scopes can be found in the passed scope set:
- if found, the RBAC framework employs the {ref}`filtering <vertical-filtering-target>` procedures to refine the API response to access only resource attributes corresponding to the passed scopes. For example, providing a scope `read:users:activity!group=class-C` for the _GET /users_ API will return a list of user models from group `class-C` containing only the `last_activity` attribute for each user model
- if not found, the access to API is denied
{ref}`Figure 2 <api-request-chart>` illustrates this process highlighting the steps where the role and scope resolutions as well as filtering occur in orange.
```{figure} ../images/rbac-api-request-chart.png
:align: center
:name: api-request-chart
Figure 2. Resolving roles and scopes when an API request is made
```

View File

@@ -1,54 +0,0 @@
# Upgrading JupyterHub with RBAC framework
The RBAC framework requires a different database setup than previous JupyterHub versions because it eliminates the distinction between OAuth and API tokens (see {ref}`oauth-vs-api-tokens-target` for more details). This requires merging what were previously two different database tables into one. By doing so, all existing tokens created before the upgrade no longer comply with the new database version and must be replaced.
This is achieved by the Hub deleting all existing tokens during the database upgrade and recreating the tokens loaded via the `jupyterhub_config.py` file with updated structure. However, any manually issued or stored tokens are not recreated automatically and must be manually re-issued after the upgrade.
No other database records are affected.
(rbac-upgrade-steps-target)=
## Upgrade steps
1. All running **servers must be stopped** before proceeding with the upgrade.
2. To upgrade the Hub, follow the [Upgrading JupyterHub](../admin/upgrading.rst) instructions.
```{attention}
We advise against defining any new roles in the `jupyterhub_config.py` file right after the upgrade is completed and JupyterHub is restarted for the first time. This preserves the 'current' state of the Hub. You can define and assign new roles on any subsequent startup.
```
3. After restarting the Hub **re-issue all tokens that were previously issued manually** (i.e., not through the `jupyterhub_config.py` file).
When JupyterHub is restarted for the first time after the upgrade, all users, services and tokens stored in the database or re-loaded through the configuration file will be assigned their default role. Any entities added after that will be assigned their default role only if no other specific role is requested for them.
## Changing the permissions after the upgrade
Once all the {ref}`upgrade steps <rbac-upgrade-steps-target>` above are completed, the RBAC framework will be available for utilization. You can define new roles, modify default roles (apart from `admin`) and assign them to entities as described in the {ref}`define-role-target` section.
We recommend the following procedure to start with RBAC:
1. Identify which admin users and services you would like to grant only the permissions they need through the new roles.
2. Strip these users and services of their admin status via API or UI. This will change their roles from `admin` to `user`.
```{note}
Stripping entities of their roles is currently available only via `jupyterhub_config.py` (see {ref}`removing-roles-target`).
```
3. Define new roles that you would like to start using with appropriate scopes and assign them to these entities in `jupyterhub_config.py`.
4. Restart the JupyterHub for the new roles to take effect.
(oauth-vs-api-tokens-target)=
## OAuth vs API tokens
### Before RBAC
Previous JupyterHub versions use two types of tokens: the OAuth token and the API token.
OAuth token is issued by the Hub to a single-user server when the user logs in. The token is stored in the browser cookie and is used to identify the user who owns the server during the OAuth flow. This token by default expires when the cookie reaches its expiry time of 2 weeks (or after 1 hour in JupyterHub versions < 1.3.0).
API token is issued by the Hub to a single-user server when launched and is used to communicate with the Hub's APIs such as posting activity or completing the OAuth flow. This token has no expiry by default.
API tokens can also be issued to users via API ([_/hub/token_](../reference/urls.md) or [_POST /users/:username/tokens_](../reference/rest-api.rst)) and services via `jupyterhub_config.py` to perform API requests.
### With RBAC
The RBAC framework allows for granting tokens different levels of permissions via scopes attached to roles. The 'identify only' purpose of the separate OAuth tokens is no longer required. API tokens can be used for every action, including login and authentication, for which an API token with no role (i.e., no scopes in {ref}`available-scopes-target`) is used.
OAuth tokens are therefore dropped from the Hub upgraded with the RBAC framework.

View File

@@ -1,130 +0,0 @@
# Use Cases
To determine which scopes a role should have, one can follow these steps:
1. Determine which actions the role holder should and should not have access to
2. Match the actions against the [JupyterHub's APIs](../reference/rest-api.rst)
3. Check which scopes are required to access the APIs
4. Combine scopes and subscopes if applicable
5. Customize the scopes with filters if needed
6. Define the role with required scopes and assign to users/services/groups/tokens
Below, different use cases are presented on how to use the RBAC framework.
## Service to cull idle servers
Finding and shutting down idle servers can save a lot of computational resources.
We can make use of [jupyterhub-idle-culler](https://github.com/jupyterhub/jupyterhub-idle-culler) to manage this for us.
Below follows a short tutorial on how to add a cull-idle service in the RBAC system.
1. Install the cull-idle server script with `pip install jupyterhub-idle-culler`.
2. Define a new service `idle-culler` and a new role for this service:
```python
# in jupyterhub_config.py
c.JupyterHub.services = [
{
"name": "idle-culler",
"command": [
sys.executable, "-m",
"jupyterhub_idle_culler",
"--timeout=3600"
],
}
]
c.JupyterHub.load_roles = [
{
"name": "idle-culler",
"description": "Culls idle servers",
"scopes": ["read:users:name", "read:users:activity", "servers"],
"services": ["idle-culler"],
}
]
```
```{important}
Note that in the RBAC system the `admin` field in the `idle-culler` service definition is omitted. Instead, the `idle-culler` role provides the service with only the permissions it needs.
If the optional actions of deleting the idle servers and/or removing inactive users are desired, **change the following scopes** in the `idle-culler` role definition:
- `servers` to `admin:servers` for deleting servers
- `read:users:name`, `read:users:activity` to `admin:users` for deleting users.
```
3. Restart JupyterHub to complete the process.
## API launcher
A service capable of creating/removing users and launching multiple servers should have access to:
1. _POST_ and _DELETE /users_
2. _POST_ and _DELETE /users/:name/server_ or _/users/:name/servers/:server_name_
3. Creating/deleting servers
The scopes required to access the API endpoints:
1. `admin:users`
2. `servers`
3. `admin:servers`
From the above, the role definition is:
```python
# in jupyterhub_config.py
c.JupyterHub.load_roles = [
{
"name": "api-launcher",
"description": "Manages servers",
"scopes": ["admin:users", "admin:servers"],
"services": [<service_name>]
}
]
```
If needed, the scopes can be modified to limit the permissions to e.g. a particular group with `!group=groupname` filter.
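For example, a sketch of the same role limited to a single (made-up) group, with a placeholder service name:
```python
# in jupyterhub_config.py -- 'class-A' and 'launcher' are illustrative names
c.JupyterHub.load_roles = [
    {
        "name": "api-launcher",
        "description": "Manages users and servers of group class-A only",
        "scopes": ["admin:users!group=class-A", "admin:servers!group=class-A"],
        "services": ["launcher"],
    }
]
```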
## Group admin roles
Roles can be used to specify different group member privileges.
For example, a group of students `class-A` may have a role allowing all group members to access information about their group. Teacher `johan`, who is a student of `class-A` but a teacher of another group of students `class-B`, can have an additional role permitting him to access information about `class-B` students as well as start/stop their servers.
The roles can then be defined as follows:
```python
# in jupyterhub_config.py
c.JupyterHub.load_groups = {
'class-A': ['johan', 'student1', 'student2'],
'class-B': ['student3', 'student4']
}
c.JupyterHub.load_roles = [
{
'name': 'class-A-student',
'description': 'Grants access to information about the group',
'scopes': ['read:groups!group=class-A'],
'groups': ['class-A']
},
{
'name': 'class-B-student',
'description': 'Grants access to information about the group',
'scopes': ['read:groups!group=class-B'],
'groups': ['class-B']
},
{
'name': 'teacher',
'description': 'Allows for accessing information about teacher group members and starting/stopping their servers',
'scopes': [ 'read:users!group=class-B', 'servers!group=class-B'],
'users': ['johan']
}
]
```
In the above example, `johan` has privileges inherited from `class-A-student` role and the `teacher` role on top of those.
```{note}
The scope filters (`!group=`) limit the privileges only to the particular groups. `johan` can access the servers and information of `class-B` group members only.
```

View File

@@ -5,8 +5,8 @@ Hub and single user notebook servers.
## The default PAM Authenticator
JupyterHub ships with the default [PAM][]-based Authenticator, for
logging in with local user accounts via a username and password.
JupyterHub ships only with the default [PAM][]-based Authenticator,
for logging in with local user accounts via a username and password.
## The OAuthenticator
@@ -34,17 +34,12 @@ popular services:
A generic implementation, which you can use for OAuth authentication
with any provider, is also available.
## The Dummy Authenticator
When testing, it may be helpful to use the
{class}`jupyterhub.auth.DummyAuthenticator`. This allows for any username and
password unless if a global password has been set. Once set, any username will
still be accepted but the correct password will need to be provided.
## Additional Authenticators
A partial list of other authenticators is available on the
[JupyterHub wiki](https://github.com/jupyterhub/jupyterhub/wiki/Authenticators).
- ldapauthenticator for LDAP
- tmpauthenticator for temporary accounts
- For Shibboleth, [jhub_shibboleth_auth](https://github.com/gesiscss/jhub_shibboleth_auth)
and [jhub_remote_user_authenticator](https://github.com/cwaldbieser/jhub_remote_user_authenticator)
## Technical Overview of Authentication
@@ -75,6 +70,7 @@ Writing an Authenticator that looks up passwords in a dictionary
requires only overriding this one method:
```python
from tornado import gen
from IPython.utils.traitlets import Dict
from jupyterhub.auth import Authenticator
@@ -84,11 +80,13 @@ class DictionaryAuthenticator(Authenticator):
help="""dict of username:password for authentication"""
)
async def authenticate(self, handler, data):
@gen.coroutine
def authenticate(self, handler, data):
if self.passwords.get(data['username']) == data['password']:
return data['username']
```
#### Normalize usernames
Since the Authenticator and Spawner both use the same username,
@@ -105,15 +103,6 @@ c.Authenticator.username_map = {
}
```
When using `PAMAuthenticator`, you can set
`c.PAMAuthenticator.pam_normalize_username = True`, which will
normalize usernames using PAM (basically round-tripping them: username
to uid to username), which is useful in case you use some external
service that allows multiple usernames mapping to the same user (such
as ActiveDirectory, yes, this really happens). When
`pam_normalize_username` is on, usernames are _not_ normalized to
lowercase.
#### Validate usernames
In most cases, there is a very limited set of acceptable usernames.
@@ -130,6 +119,7 @@ To only allow usernames that start with 'w':
c.Authenticator.username_pattern = r'w.*'
```
### How to write a custom authenticator
You can use custom Authenticator subclasses to enable authentication
@@ -142,45 +132,12 @@ and [post_spawn_stop(user, spawner)][], are hooks that can be used to do
auth-related startup (e.g. opening PAM sessions) and cleanup
(e.g. closing PAM sessions).
See a list of custom Authenticators [on the wiki](https://github.com/jupyterhub/jupyterhub/wiki/Authenticators).
If you are interested in writing a custom authenticator, you can read
[this tutorial](http://jupyterhub-tutorial.readthedocs.io/en/latest/authenticators.html).
### Registering custom Authenticators via entry points
As of JupyterHub 1.0, custom authenticators can register themselves via
the `jupyterhub.authenticators` entry point metadata.
To do this, in your `setup.py` add:
```python
setup(
...
entry_points={
'jupyterhub.authenticators': [
'myservice = mypackage:MyAuthenticator',
],
},
)
```
If you have added this metadata to your package,
users can select your authenticator with the configuration:
```python
c.JupyterHub.authenticator_class = 'myservice'
```
instead of the full
```python
c.JupyterHub.authenticator_class = 'mypackage:MyAuthenticator'
```
previously required.
Additionally, configurable attributes for your authenticator will
appear in jupyterhub help output and auto-generated configuration files
via `jupyterhub --generate-config`.
### Authentication state
@@ -215,10 +172,12 @@ To store auth_state, two conditions must be met:
export JUPYTERHUB_CRYPT_KEY=$(openssl rand -hex 32)
```
JupyterHub uses [Fernet](https://cryptography.io/en/latest/fernet/) to encrypt auth_state.
To facilitate key-rotation, `JUPYTERHUB_CRYPT_KEY` may be a semicolon-separated list of encryption keys.
If there are multiple keys present, the **first** key is always used to persist any new auth_state.
#### Using auth_state
Typically, if `auth_state` is persisted it is desirable to affect the Spawner environment in some way.
@@ -228,9 +187,10 @@ to Spawner environment:
```python
class MyAuthenticator(Authenticator):
async def authenticate(self, handler, data=None):
username = await identify_user(handler, data)
upstream_token = await token_for_user(username)
@gen.coroutine
def authenticate(self, handler, data=None):
username = yield identify_user(handler, data)
upstream_token = yield token_for_user(username)
return {
'name': username,
'auth_state': {
@@ -238,9 +198,10 @@ class MyAuthenticator(Authenticator):
},
}
async def pre_spawn_start(self, user, spawner):
@gen.coroutine
def pre_spawn_start(self, user, spawner):
"""Pass upstream_token to spawner via environment variable"""
auth_state = await user.get_auth_state()
auth_state = yield user.get_auth_state()
if not auth_state:
# auth_state not enabled
return
@@ -259,10 +220,11 @@ PAM session.
Beginning with version 0.8, JupyterHub is an OAuth provider.
[authenticator]: https://github.com/jupyterhub/jupyterhub/blob/HEAD/jupyterhub/auth.py
[pam]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
[oauth]: https://en.wikipedia.org/wiki/OAuth
[github oauth]: https://developer.github.com/v3/oauth/
[oauthenticator]: https://github.com/jupyterhub/oauthenticator
[Authenticator]: https://github.com/jupyterhub/jupyterhub/blob/master/jupyterhub/auth.py
[PAM]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
[OAuth]: https://en.wikipedia.org/wiki/OAuth
[GitHub OAuth]: https://developer.github.com/v3/oauth/
[OAuthenticator]: https://github.com/jupyterhub/oauthenticator
[pre_spawn_start(user, spawner)]: https://jupyterhub.readthedocs.io/en/latest/api/auth.html#jupyterhub.auth.Authenticator.pre_spawn_start
[post_spawn_stop(user, spawner)]: https://jupyterhub.readthedocs.io/en/latest/api/auth.html#jupyterhub.auth.Authenticator.post_spawn_stop

View File

@@ -3,17 +3,18 @@
In this example, we show a configuration file for a fairly standard JupyterHub
deployment with the following assumptions:
- Running JupyterHub on a single cloud server
- Using SSL on the standard HTTPS port 443
- Using GitHub OAuth (using oauthenticator) for login
- Using the default spawner (to configure other spawners, uncomment and edit
* Running JupyterHub on a single cloud server
* Using SSL on the standard HTTPS port 443
* Using GitHub OAuth (using oauthenticator) for login
* Using the default spawner (to configure other spawners, uncomment and edit
`spawner_class` as well as follow the instructions for your desired spawner)
- Users exist locally on the server
- Users' notebooks to be served from `~/assignments` to allow users to browse
* Users exist locally on the server
* Users' notebooks to be served from `~/assignments` to allow users to browse
for notebooks within other users' home directories
- You want the landing page for each user to be a `Welcome.ipynb` notebook in
* You want the landing page for each user to be a `Welcome.ipynb` notebook in
their assignments directory.
- All runtime files are put into `/srv/jupyterhub` and log files in `/var/log`.
* All runtime files are put into `/srv/jupyterhub` and log files in `/var/log`.
The `jupyterhub_config.py` file would have these settings:
@@ -51,7 +52,7 @@ c.GitHubOAuthenticator.oauth_callback_url = os.environ['OAUTH_CALLBACK_URL']
c.LocalAuthenticator.create_system_users = True
# specify users and admin
c.Authenticator.allowed_users = {'rgbkrk', 'minrk', 'jhamrick'}
c.Authenticator.whitelist = {'rgbkrk', 'minrk', 'jhamrick'}
c.Authenticator.admin_users = {'jhamrick', 'rgbkrk'}
# uses the default spawner

View File

@@ -6,23 +6,21 @@ SSL port `443`. This could be useful if the JupyterHub server machine is also
hosting other domains or content on `443`. The goal in this example is to
satisfy the following:
- JupyterHub is running on a server, accessed _only_ via `HUB.DOMAIN.TLD:443`
- On the same machine, `NO_HUB.DOMAIN.TLD` strictly serves different content,
* JupyterHub is running on a server, accessed *only* via `HUB.DOMAIN.TLD:443`
* On the same machine, `NO_HUB.DOMAIN.TLD` strictly serves different content,
also on port `443`
- `nginx` or `apache` is used as the public access point (which means that
only nginx/apache will bind to `443`)
- After testing, the server in question should be able to score at least an A on the
* `nginx` or `apache` is used as the public access point (which means that
only nginx/apache will bind to `443`)
* After testing, the server in question should be able to score at least an A on the
Qualys SSL Labs [SSL Server Test](https://www.ssllabs.com/ssltest/)
Let's start out with needed JupyterHub configuration in `jupyterhub_config.py`:
```python
# Force the proxy to only listen to connections to 127.0.0.1 (on port 8000)
c.JupyterHub.bind_url = 'http://127.0.0.1:8000'
# Force the proxy to only listen to connections to 127.0.0.1
c.JupyterHub.ip = '127.0.0.1'
```
(For Jupyterhub < 0.9 use `c.JupyterHub.ip = '127.0.0.1'`.)
For high-quality SSL configuration, we also generate Diffie-Helman parameters.
This can take a few minutes:
@@ -83,12 +81,8 @@ server {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# websocket headers
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Scheme $scheme;
proxy_buffering off;
}
# Managing requests to verify letsencrypt host
@@ -143,21 +137,6 @@ Now restart `nginx`, restart the JupyterHub, and enjoy accessing
`https://HUB.DOMAIN.TLD` while serving other content securely on
`https://NO_HUB.DOMAIN.TLD`.
### SELinux permissions for nginx
On distributions with SELinux enabled (e.g. Fedora), one may encounter permission errors
when the nginx service is started.
We need to allow nginx to perform network relay and connect to the jupyterhub port. The
following commands do that:
```bash
semanage port -a -t http_port_t -p tcp 8000
setsebool -P httpd_can_network_relay 1
setsebool -P httpd_can_network_connect 1
```
Replace 8000 with the port the jupyterhub server is running from.
## Apache
@@ -211,25 +190,3 @@ Listen 443
</Location>
</VirtualHost>
```
In case of the need to run the jupyterhub under /jhub/ or other location please use the below configurations:
- JupyterHub running locally at http://127.0.0.1:8000/jhub/ or other location
httpd.conf amendments:
```bash
RewriteRule /jhub/(.*) ws://127.0.0.1:8000/jhub/$1 [NE,P,L]
RewriteRule /jhub/(.*) http://127.0.0.1:8000/jhub/$1 [NE,P,L]
ProxyPass /jhub/ http://127.0.0.1:8000/jhub/
ProxyPassReverse /jhub/ http://127.0.0.1:8000/jhub/
```
jupyterhub_config.py amendments:
```python
# The public facing URL of the whole JupyterHub application.
# This is the address on which the proxy will bind. Sets protocol, ip, base_url
c.JupyterHub.bind_url = 'http://127.0.0.1:8000/jhub/'
```

View File

@@ -1,30 +0,0 @@
==============================
Configuration Reference
==============================
.. important::
Make sure the version of JupyterHub for this documentation matches your
installation version, as the output of this command may change between versions.
JupyterHub configuration
------------------------
As explained in the `Configuration Basics <../getting-started/config-basics.html#generate-a-default-config-file>`_
section, the ``jupyterhub_config.py`` can be automatically generated via
.. code-block:: bash
jupyterhub --generate-config
The following contains the output of that command for reference.
.. jupyterhub-generate-config::
JupyterHub help command output
------------------------------
This section contains the output of the command ``jupyterhub --help-all``.
.. jupyterhub-help-all::

View File

@@ -9,7 +9,7 @@ Only do this if you are very sure you must.
There are many Authenticators and Spawners available for JupyterHub. Some, such
as DockerSpawner or OAuthenticator, do not need any elevated permissions. This
document describes how to get the full default behavior of JupyterHub while
running notebook servers as real system users on a shared system without
running notebook servers as real system users on a shared system without
running the Hub itself as root.
Since JupyterHub needs to spawn processes as other users, the simplest way
@@ -37,7 +37,7 @@ Next, you will need [sudospawner](https://github.com/jupyter/sudospawner)
to enable monitoring the single-user servers with sudo:
```bash
sudo python3 -m pip install sudospawner
sudo pip install sudospawner
```
Now we have to configure sudo to allow the Hub user (`rhea`) to launch
@@ -50,13 +50,14 @@ To do this we add to `/etc/sudoers` (use `visudo` for safe editing of sudoers):
- specify the list of users `JUPYTER_USERS` for whom `rhea` can spawn servers
- set the command `JUPYTER_CMD` that `rhea` can execute on behalf of users
- give `rhea` permission to run `JUPYTER_CMD` on behalf of `JUPYTER_USERS`
- give `rhea` permission to run `JUPYTER_CMD` on behalf of `JUPYTER_USERS`
without entering a password
For example:
```bash
# comma-separated list of users that can spawn single-user servers
# comma-separated whitelist of users that can spawn single-user servers
# this should include all of your Hub users
Runas_Alias JUPYTER_USERS = rhea, zoe, wash
@@ -90,16 +91,16 @@ $ adduser -G jupyterhub newuser
Test that the new user doesn't need to enter a password to run the sudospawner
command.
This should prompt for your password to switch to rhea, but _not_ prompt for
This should prompt for your password to switch to rhea, but *not* prompt for
any password for the second switch. It should show some help output about
logging options:
```bash
$ sudo -u rhea sudo -n -u $USER /usr/local/bin/sudospawner --help
Usage: /usr/local/bin/sudospawner [OPTIONS]
Options:
--help show this help information
...
```
@@ -119,11 +120,6 @@ the shadow password database.
### Shadow group (Linux)
**Note:** On Fedora based distributions there is no clear way to configure
the PAM database to allow sufficient access for authenticating with the target user's password
from JupyterHub. As a workaround we recommend using an
[alternative authentication method](https://github.com/jupyterhub/jupyterhub/wiki/Authenticators).
```bash
$ ls -l /etc/shadow
-rw-r----- 1 root shadow 2197 Jul 21 13:41 shadow
@@ -150,13 +146,12 @@ We want our new user to be able to read the shadow passwords, so add it to the s
$ sudo usermod -a -G shadow rhea
```
If you want jupyterhub to serve pages on a restricted port (such as port 80 for http),
If you want jupyterhub to serve pages on a restricted port (such as port 80 for http),
then you will need to give `node` permission to do so:
```bash
sudo setcap 'cap_net_bind_service=+ep' /usr/bin/node
```
However, you may want to further understand the consequences of this.
You may also be interested in limiting the amount of CPU any process can use
@@ -165,6 +160,7 @@ distributions' packaging system. This can be used to keep any user's process
from using too many CPU cycles. You can configure it according to [these
instructions](http://ubuntuforums.org/showthread.php?t=992706).
### Shadow group (FreeBSD)
**NOTE:** This has not been tested and may not work as expected.
@@ -185,7 +181,7 @@ $ sudo chgrp shadow /etc/master.passwd
$ sudo chmod g+r /etc/master.passwd
```
We want our new user to be able to read the shadow passwords, so add it to the
We want our new user to be able to read the shadow passwords, so add it to the
shadow group:
```bash
@@ -208,8 +204,8 @@ The simplest way to deal with this is to make a directory owned by your Hub user
and use that as the CWD when launching the server.
```bash
$ sudo mkdir /etc/jupyterhub
$ sudo chown rhea /etc/jupyterhub
$ sudo mkdir /etc/jupyterhub
$ sudo chown rhea /etc/jupyterhub
```
## Start jupyterhub
@@ -217,20 +213,20 @@ $ sudo chown rhea /etc/jupyterhub
Finally, start the server as our newly configured user, `rhea`:
```bash
$ cd /etc/jupyterhub
$ sudo -u rhea jupyterhub --JupyterHub.spawner_class=sudospawner.SudoSpawner
$ cd /etc/jupyterhub
$ sudo -u rhea jupyterhub --JupyterHub.spawner_class=sudospawner.SudoSpawner
```
And try logging in.
## Troubleshooting: SELinux
### Troubleshooting: SELinux
If you still get a generic `Permission denied` `PermissionError`, it's possible SELinux is blocking you.
Here's how you can make a module to allow this.
First, put this in a file named `sudo_exec_selinux.te`:
First, put this in a file sudo_exec_selinux.te:
```bash
module sudo_exec_selinux 1.1;
module sudo_exec 1.1;
require {
type unconfined_t;
@@ -250,9 +246,9 @@ $ semodule_package -o sudo_exec_selinux.pp -m sudo_exec_selinux.mod
$ semodule -i sudo_exec_selinux.pp
```
## Troubleshooting: PAM session errors
### Troubleshooting: PAM session errors
If the PAM authentication doesn't work and you see errors for
`login:session-auth`, or similar, consider updating to a more recent version
of jupyterhub and disabling the opening of PAM sessions with
`c.PAMAuthenticator.open_sessions=False`.
`login:session-auth`, or similar, consider updating to `master`
and/or incorporating this commit https://github.com/jupyter/jupyterhub/commit/40368b8f555f04ffdd662ffe99d32392a088b1d2
and configuration option, `c.PAMAuthenticator.open_sessions = False`.

Some files were not shown because too many files have changed in this diff.