Resolve merge conflicts with Vertical Filtering and improve tests

Author: IvanaH8
Date: 2021-03-24 13:39:59 +01:00
89 changed files with 1645 additions and 1711 deletions


@@ -24,7 +24,6 @@ jobs:
command: |
docker run --rm -it -v $PWD/dockerfiles:/io jupyterhub/jupyterhub python3 /io/test.py
# Tell CircleCI to use this workflow when it builds the site
workflows:
version: 2


@@ -51,6 +51,13 @@ jobs:
print("OK")
EOF
+# ref: https://github.com/actions/upload-artifact#readme
+- uses: actions/upload-artifact@v2
+with:
+name: jupyterhub-${{ github.sha }}
+path: "dist/*"
+if-no-files-found: error
- name: Publish to PyPI
if: startsWith(github.ref, 'refs/tags/')
env:


@@ -51,7 +51,6 @@ jobs:
echo "or after-the-fact on already committed files with"
echo " pre-commit run --all-files"
# Run "pytest jupyterhub/tests" in various configurations
pytest:
runs-on: ubuntu-20.04
@@ -131,7 +130,6 @@ jobs:
echo "JUPYTERHUB_SINGLEUSER_APP=jupyterhub.tests.mockserverapp.MockServerApp" >> $GITHUB_ENV
fi
- uses: actions/checkout@v2
# NOTE: actions/setup-node@v1 make use of a cache within the GitHub base
# environment and setup in a fraction of a second.
- name: Install Node v14


@@ -1,19 +1,24 @@
repos:
- repo: https://github.com/asottile/reorder_python_imports
rev: v1.9.0
hooks:
- id: reorder-python-imports
- repo: https://github.com/psf/black
rev: 20.8b1
hooks:
- id: black
-- repo: https://github.com/pre-commit/pre-commit-hooks
+- repo: https://github.com/pre-commit/mirrors-prettier
-rev: v2.4.0
+rev: v2.2.1
+hooks:
+- id: prettier
+- repo: https://gitlab.com/pycqa/flake8
+rev: "3.8.4"
+hooks:
+- id: flake8
+- repo: https://github.com/pre-commit/pre-commit-hooks
+rev: v3.4.0
hooks:
- id: end-of-file-fixer
+- id: check-json
+- id: check-yaml
- id: check-case-conflict
- id: check-executables-have-shebangs
- id: requirements-txt-fixer
-- id: flake8

.prettierignore (new file)

@@ -0,0 +1 @@
share/jupyterhub/templates/


@@ -18,7 +18,6 @@ JupyterHub requires Python >= 3.5 and nodejs.
As a Python project, a development install of JupyterHub follows standard practices for the basics (steps 1-2).
1. clone the repo
```bash
git clone https://github.com/jupyterhub/jupyterhub
@@ -29,17 +28,20 @@ As a Python project, a development install of JupyterHub follows standard practi
cd jupyterhub
python3 -m pip install --editable .
```
3. install the development requirements,
which include things like testing tools
```bash
python3 -m pip install -r dev-requirements.txt
```
4. install configurable-http-proxy with npm:
```bash
npm install -g configurable-http-proxy
```
5. set up pre-commit hooks for automatic code formatting, etc.
```bash


@@ -6,10 +6,8 @@
**[License](#license)** |
**[Help and Resources](#help-and-resources)**
# [JupyterHub](https://github.com/jupyterhub/jupyterhub)
[![Latest PyPI version](https://img.shields.io/pypi/v/jupyterhub?logo=pypi)](https://pypi.python.org/pypi/jupyterhub)
[![Latest conda-forge version](https://img.shields.io/conda/vn/conda-forge/jupyterhub?logo=conda-forge)](https://www.npmjs.com/package/jupyterhub)
[![Documentation build status](https://img.shields.io/readthedocs/jupyterhub?logo=read-the-docs)](https://jupyterhub.readthedocs.org/en/latest/)
@@ -53,17 +51,16 @@ for administration of the Hub and its users.
## Installation
### Check prerequisites
- A Linux/Unix based system
- [Python](https://www.python.org/downloads/) 3.5 or greater
- [nodejs/npm](https://www.npmjs.com/)
-* If you are using **`conda`**, the nodejs and npm dependencies will be installed for
+- If you are using **`conda`**, the nodejs and npm dependencies will be installed for
you by conda.
-* If you are using **`pip`**, install a recent version of
+- If you are using **`pip`**, install a recent version of
[nodejs/npm](https://docs.npmjs.com/getting-started/installing-node).
For example, install it on Linux (Debian/Ubuntu) using:
@@ -120,10 +117,10 @@ To start the Hub server, run the command:
Visit `https://localhost:8000` in your browser, and sign in with your unix
PAM credentials.
-*Note*: To allow multiple users to sign into the server, you will need to
+_Note_: To allow multiple users to sign into the server, you will need to
-run the `jupyterhub` command as a *privileged user*, such as root.
+run the `jupyterhub` command as a _privileged user_, such as root.
The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges)
-describes how to run the server as a *less privileged user*, which requires
+describes how to run the server as a _less privileged user_, which requires
more configuration of the system.
## Configuration
@@ -142,7 +139,7 @@ To generate a default config file with settings and descriptions:
### Start the Hub
-To start the Hub on a specific url and port ``10.0.1.2:443`` with **https**:
+To start the Hub on a specific url and port `10.0.1.2:443` with **https**:
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
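As a hedged aside (not part of this commit's diff), the same options shown on the command line above can also be placed in a `jupyterhub_config.py`; the address and certificate file names below are simply the placeholders reused from that line:

```python
# Sketch of the config-file equivalent of the CLI example above.
# 10.0.1.2, my_ssl.key and my_ssl.cert are placeholders, not real values.
c.JupyterHub.ip = '10.0.1.2'
c.JupyterHub.port = 443
c.JupyterHub.ssl_key = 'my_ssl.key'
c.JupyterHub.ssl_cert = 'my_ssl.cert'
```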


@@ -15,6 +15,7 @@ This should only be used for demo or testing purposes!
It shouldn't be used as a base image to build on.
### Try it
1. `cd` to the root of your jupyterhub repo.
2. Build the demo image with `docker build -t jupyterhub-demo demo-image`.


@@ -10,9 +10,9 @@ html5lib # needed for beautifulsoup
mock
notebook
pre-commit
-pytest>=3.3
pytest-asyncio
pytest-cov
+pytest>=3.3
requests-mock
# blacklist urllib3 releases affected by https://github.com/urllib3/urllib3/issues/1683
# I *think* this should only affect testing, not production


@@ -1,4 +1,5 @@
## What is Dockerfile.alpine
Dockerfile.alpine contains base image for jupyterhub. It does not work independently, but only as part of a full jupyterhub cluster
## How to use it?
@@ -7,14 +8,13 @@ Dockerfile.alpine contains base image for jupyterhub. It does not work independ
2. A jupyterhub_config file.
3. Authentication and other libraries required by the specific jupyterhub_config file.
## Steps to test it outside a cluster
-* start configurable-http-proxy in another container
+- start configurable-http-proxy in another container
-* specify CONFIGPROXY_AUTH_TOKEN env in both containers
+- specify CONFIGPROXY_AUTH_TOKEN env in both containers
-* put both containers on the same network (e.g. docker network create jupyterhub; docker run ... --net jupyterhub)
+- put both containers on the same network (e.g. docker network create jupyterhub; docker run ... --net jupyterhub)
-* tell jupyterhub where CHP is (e.g. c.ConfigurableHTTPProxy.api_url = 'http://chp:8001')
+- tell jupyterhub where CHP is (e.g. c.ConfigurableHTTPProxy.api_url = 'http://chp:8001')
-* tell jupyterhub not to start the proxy itself (c.ConfigurableHTTPProxy.should_start = False)
+- tell jupyterhub not to start the proxy itself (c.ConfigurableHTTPProxy.should_start = False)
-* Use dummy authenticator for ease of testing. Update following in jupyterhub_config file
+- Use dummy authenticator for ease of testing. Update following in jupyterhub_config file
- c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
- c.DummyAuthenticator.password = "your strong password"
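The bullet list above can be collected into a minimal `jupyterhub_config.py`. The sketch below is not part of the diff; `chp` stands in for whatever name the configurable-http-proxy container has on the shared Docker network, and the password is a placeholder:

```python
# Hedged sketch of a test config assembling the settings listed above.
c.ConfigurableHTTPProxy.api_url = 'http://chp:8001'   # where CHP is reachable
c.ConfigurableHTTPProxy.should_start = False          # do not launch the proxy from the Hub
c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
c.DummyAuthenticator.password = "your strong password"
```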


@@ -7,6 +7,6 @@ https://github.com/jupyterhub/autodoc-traits/archive/75885ee24636efbfebfceed1043
pydata-sphinx-theme
pytablewriter>=0.56
recommonmark>=0.6
-sphinx>=1.7
sphinx-copybutton
sphinx-jsonschema
+sphinx>=1.7


@@ -1,13 +1,12 @@
# see me at: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#/default
-swagger: '2.0'
+swagger: "2.0"
info:
title: JupyterHub
description: The REST API for JupyterHub
version: 1.2.0dev
license:
name: BSD-3-Clause
-schemes:
-[http, https]
+schemes: [http, https]
securityDefinitions:
token:
type: apiKey
@@ -16,8 +15,8 @@ securityDefinitions:
oauth2:
type: oauth2
flow: accessCode
-authorizationUrl: '/hub/api/oauth2/authorize' # what are the absolute URIs here? is oauth2 correct here or shall we use just authorizations?
+authorizationUrl: "/hub/api/oauth2/authorize" # what are the absolute URIs here? is oauth2 correct here or shall we use just authorizations?
-tokenUrl: '/hub/api/oauth2/token'
+tokenUrl: "/hub/api/oauth2/token"
scopes:
all: Everything a user can do
read:all: Read-only access to everything a user can read (also whoami handler)
@@ -61,7 +60,7 @@ paths:
This endpoint is not authenticated for the purpose of clients and user
to identify the JupyterHub version before setting up authentication.
responses:
-'200':
+"200":
description: The JupyterHub version
schema:
type: object
@@ -80,7 +79,7 @@ paths:
JupyterHub's version and executable path,
and which Authenticator and Spawner are active.
responses:
-'200':
+"200":
description: Detailed JupyterHub info
schema:
type: object
@@ -135,12 +134,12 @@ paths:
Added in JupyterHub 1.3
responses:
-'200':
+"200":
description: The Hub's user list
schema:
type: array
items:
-$ref: '#/definitions/User'
+$ref: "#/definitions/User"
post:
summary: Create multiple users
security:
@@ -162,13 +161,13 @@ paths:
description: whether the created users should be admins
type: boolean
responses:
-'201':
+"201":
description: The users have been created
schema:
type: array
description: The created users
items:
-$ref: '#/definitions/User'
+$ref: "#/definitions/User"
/users/{name}:
get:
summary: Get a user by name
@@ -184,10 +183,10 @@ paths:
required: true
type: string
responses:
-'200':
+"200":
description: The User model
schema:
-$ref: '#/definitions/User'
+$ref: "#/definitions/User"
post:
summary: Create a single user
security:
@@ -200,10 +199,10 @@ paths:
required: true
type: string
responses:
-'201':
+"201":
description: The user has been created
schema:
-$ref: '#/definitions/User'
+$ref: "#/definitions/User"
patch:
summary: Modify a user
description: Change a user's name or admin status
@@ -230,10 +229,10 @@ paths:
type: boolean
description: update admin (optional, if another key is updated i.e. name)
responses:
-'200':
+"200":
description: The updated user info
schema:
-$ref: '#/definitions/User'
+$ref: "#/definitions/User"
delete:
summary: Delete a user
security:
@@ -246,14 +245,12 @@ paths:
required: true
type: string
responses:
-'204':
+"204":
description: The user has been deleted
/users/{name}/activity:
post:
-summary:
-Notify Hub of activity for a given user.
-description:
-Notify the Hub of activity by the user,
+summary: Notify Hub of activity for a given user.
+description: Notify the Hub of activity by the user,
e.g. accessing a service or (more likely)
actively using a server.
security:
@@ -285,7 +282,7 @@ paths:
The default server has an empty name ('').
type: object
properties:
-'<server name>':
+"<server name>":
description: |
Activity for a single server.
type: object
@@ -298,16 +295,16 @@ paths:
description: |
Timestamp of last-seen activity on this server.
example:
-last_activity: '2019-02-06T12:54:14Z'
+last_activity: "2019-02-06T12:54:14Z"
servers:
-'':
+"":
-last_activity: '2019-02-06T12:54:14Z'
+last_activity: "2019-02-06T12:54:14Z"
gpu:
-last_activity: '2019-02-06T12:54:14Z'
+last_activity: "2019-02-06T12:54:14Z"
responses:
-'401':
+"401":
-$ref: '#/responses/Unauthorized'
+$ref: "#/responses/Unauthorized"
-'404':
+"404":
description: No such user
/users/{name}/server:
post:
@@ -336,9 +333,9 @@ paths:
type: object
responses:
-'201':
+"201":
description: The user's notebook server has started
-'202':
+"202":
description: The user's notebook server has not yet started, but has been requested
delete:
summary: Stop a user's server
@@ -353,9 +350,9 @@ paths:
required: true
type: string
responses:
-'204':
+"204":
description: The user's notebook server has stopped
-'202':
+"202":
description: The user's notebook server has not yet stopped as it is taking a while to stop
/users/{name}/servers/{server_name}:
post:
@@ -390,9 +387,9 @@ paths:
schema:
type: object
responses:
-'201':
+"201":
description: The user's notebook named-server has started
-'202':
+"202":
description: The user's notebook named-server has not yet started, but has been requested
delete:
summary: Stop a user's named-server
@@ -425,9 +422,9 @@ paths:
Removing a server deletes things like the state of the stopped server.
Default: false.
responses:
-'204':
+"204":
description: The user's notebook named-server has stopped
-'202':
+"202":
description: The user's notebook named-server has not yet stopped as it is taking a while to stop
/users/{name}/tokens:
parameters:
@@ -442,15 +439,15 @@ paths:
- oauth2:
- users:tokens
responses:
-'200':
+"200":
description: The list of tokens
schema:
type: array
items:
-$ref: '#/definitions/Token'
+$ref: "#/definitions/Token"
-'401':
+"401":
-$ref: '#/responses/Unauthorized'
+$ref: "#/responses/Unauthorized"
-'404':
+"404":
description: No such user
post:
summary: Create a new token for the user
@@ -474,13 +471,13 @@ paths:
type: list
description: A list of role names that the token should have
responses:
-'201':
+"201":
description: The newly created token
schema:
-$ref: '#/definitions/Token'
+$ref: "#/definitions/Token"
-'400':
+"400":
description: Body must be a JSON dict or empty
-'403':
+"403":
description: Requested role does not exist
/users/{name}/tokens/{token_id}:
parameters:
@@ -499,17 +496,17 @@ paths:
- oauth2:
- users:tokens
responses:
-'200':
+"200":
description: The info for the new token
schema:
-$ref: '#/definitions/Token'
+$ref: "#/definitions/Token"
delete:
summary: Delete (revoke) a token by id
security:
- oauth2:
- users:tokens
responses:
-'204':
+"204":
description: The token has been deleted
/user:
get:
@@ -519,10 +516,10 @@ paths:
- all
- read:all
responses:
-'200':
+"200":
description: The authenticated user's model is returned.
schema:
-$ref: '#/definitions/User'
+$ref: "#/definitions/User"
/groups:
get:
summary: List groups
@@ -531,12 +528,12 @@ paths:
- groups
- read:groups
responses:
-'200':
+"200":
description: The list of groups
schema:
type: array
items:
-$ref: '#/definitions/Group'
+$ref: "#/definitions/Group"
/groups/{name}:
get:
summary: Get a group by name
@@ -552,10 +549,10 @@ paths:
required: true
type: string
responses:
-'200':
+"200":
description: The group model
schema:
-$ref: '#/definitions/Group'
+$ref: "#/definitions/Group"
post:
summary: Create a group
security:
@@ -568,10 +565,10 @@ paths:
required: true
type: string
responses:
-'201':
+"201":
description: The group has been created
schema:
-$ref: '#/definitions/Group'
+$ref: "#/definitions/Group"
delete:
summary: Delete a group
security:
@@ -584,7 +581,7 @@ paths:
required: true
type: string
responses:
-'204':
+"204":
description: The group has been deleted
/groups/{name}/users:
post:
@@ -612,10 +609,10 @@ paths:
items:
type: string
responses:
-'200':
+"200":
description: The users have been added to the group
schema:
-$ref: '#/definitions/Group'
+$ref: "#/definitions/Group"
delete:
summary: Remove users from a group
security:
@@ -641,7 +638,7 @@ paths:
items:
type: string
responses:
-'200':
+"200":
description: The users have been removed from the group
/services:
get:
@@ -650,12 +647,12 @@ paths:
- oauth2:
- read:services
responses:
-'200':
+"200":
description: The service list
schema:
type: array
items:
-$ref: '#/definitions/Service'
+$ref: "#/definitions/Service"
/services/{name}:
get:
summary: Get a service by name
@@ -669,10 +666,10 @@ paths:
required: true
type: string
responses:
-'200':
+"200":
description: The Service model
schema:
-$ref: '#/definitions/Service'
+$ref: "#/definitions/Service"
/proxy:
get:
summary: Get the proxy's routing table
@@ -681,7 +678,7 @@ paths:
- oauth2:
- proxy
responses:
-'200':
+"200":
description: Routing table
schema:
type: object
@@ -692,7 +689,7 @@ paths:
- oauth2:
- proxy
responses:
-'200':
+"200":
description: Success
patch:
summary: Notify the Hub about a new proxy
@@ -721,7 +718,7 @@ paths:
type: string
description: CONFIGPROXY_AUTH_TOKEN for the new proxy
responses:
-'200':
+"200":
description: Success
/authorizations/token:
post:
@@ -746,7 +743,7 @@ paths:
password:
type: string
responses:
-'200':
+"200":
description: The new API token
schema:
type: object
@@ -754,7 +751,7 @@ paths:
token:
type: string
description: The new API token.
-'403':
+"403":
description: The user can not be authenticated.
/authorizations/token/{token}:
get:
@@ -768,9 +765,9 @@ paths:
required: true
type: string
responses:
-'200':
+"200":
description: The user or service identified by the API token
-'404':
+"404":
description: A user or service is not found.
/authorizations/cookie/{cookie_name}/{cookie_value}:
get:
@@ -786,16 +783,16 @@ paths:
required: true
type: string
responses:
-'200':
+"200":
description: The user identified by the cookie
schema:
-$ref: '#/definitions/User'
+$ref: "#/definitions/User"
-'404':
+"404":
description: A user is not found.
deprecated: true # minrk: lets not add a scope for this, lets remove it
/oauth2/authorize:
get:
-summary: 'OAuth 2.0 authorize endpoint'
+summary: "OAuth 2.0 authorize endpoint"
description: |
Redirect users to this URL to begin the OAuth process.
It is not an API endpoint.
@@ -821,9 +818,9 @@ paths:
required: true
type: string
responses:
-'200':
+"200":
description: Success
-'400':
+"400":
description: OAuth2Error
/oauth2/token:
post:
@@ -860,7 +857,7 @@ paths:
required: true
type: string
responses:
-'200':
+"200":
description: JSON response including the token
schema:
type: object
@@ -890,9 +887,9 @@ paths:
type: boolean
description: Whether users' notebook servers should be shutdown as well (default from Hub config)
responses:
-'202':
+"202":
description: Shutdown successful
-'400':
+"400":
description: Unexpeced value for proxy or servers
# Descriptions of common responses
responses:
@@ -935,7 +932,7 @@ definitions:
type: array
description: The active servers for this user.
items:
-$ref: '#/definitions/Server'
+$ref: "#/definitions/Server"
Server:
type: object
properties:
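As a reading aid (not part of the spec changes above), the hedged sketch below shows how a client might call one of the endpoints defined in this file. The Hub URL and token are placeholders, it assumes the `requests` package is installed, and it uses JupyterHub's usual `Authorization: token <value>` header convention for the spec's `token` (apiKey) security definition:

```python
import requests

HUB_API = "http://127.0.0.1:8081/hub/api"  # placeholder Hub API base URL
API_TOKEN = "<your-api-token>"             # placeholder API token

# The token is sent as an API key in the Authorization header.
headers = {"Authorization": f"token {API_TOKEN}"}

# GET /users returns an array of User models ($ref: #/definitions/User).
resp = requests.get(f"{HUB_API}/users", headers=headers)
resp.raise_for_status()
for user in resp.json():
    print(user["name"], "admin" if user.get("admin") else "")
```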

File diff suppressed because one or more lines are too long


@@ -6,8 +6,8 @@ the community of users, contributors, and maintainers.
The goal is to communicate priorities and upcoming release plans.
It is not a aimed at limiting contributions to what is listed here.
## Using the roadmap
### Sharing Feedback on the Roadmap
All of the community is encouraged to provide feedback as well as share new
@@ -22,17 +22,17 @@ maintainers will help identify what a good next step is for the issue.
When submitting an issue, think about what "next step" category best describes
your issue:
-* **now**, concrete/actionable step that is ready for someone to start work on.
+- **now**, concrete/actionable step that is ready for someone to start work on.
These might be items that have a link to an issue or more abstract like
"decrease typos and dead links in the documentation"
-* **soon**, less concrete/actionable step that is going to happen soon,
+- **soon**, less concrete/actionable step that is going to happen soon,
discussions around the topic are coming close to an end at which point it can
move into the "now" category
-* **later**, abstract ideas or tasks, need a lot of discussion or
+- **later**, abstract ideas or tasks, need a lot of discussion or
experimentation to shape the idea so that it can be executed. Can also
contain concrete/actionable steps that have been postponed on purpose
(these are steps that could be in "now" but the decision was taken to work on
them later)
### Reviewing and Updating the Roadmap
@@ -47,8 +47,8 @@ For those please create a
The roadmap should give the reader an idea of what is happening next, what needs
input and discussion before it can happen and what has been postponed.
## The roadmap proper
### Project vision
JupyterHub is a dependable tool used by humans that reduces the complexity of
@@ -58,8 +58,8 @@ creating the environment in which a piece of software can be executed.
These "Now" items are considered active areas of focus for the project:
-* HubShare - a sharing service for use with JupyterHub.
-* Users should be able to:
+- HubShare - a sharing service for use with JupyterHub.
+- Users should be able to:
- Push a project to other users.
- Get a checkout of a project from other users.
- Push updates to a published project.
@@ -72,19 +72,17 @@ These "Now" items are considered active areas of focus for the project:
- Adding/removing a user to/from a team gives/removes them access to all projects that team has access to.
- Build other services, such as static HTML publishing and dashboarding on top of these things.
### Soon
These "Soon" items are under discussion. Once an item reaches the point of an
actionable plan, the item will be moved to the "Now" section. Typically,
these will be moved at a future review of the roadmap.
-* resource monitoring and management:
+- resource monitoring and management:
- (prometheus?) API for resource monitoring
- tracking activity on single-user servers instead of the proxy
- notes and activity tracking per API token
### Later
The "Later" items are things that are at the back of the project's mind. At this


@@ -8,18 +8,20 @@ high performance computing.
Please submit pull requests to update information or to add new institutions or uses.
## Academic Institutions, Research Labs, and Supercomputer Centers
### University of California Berkeley
- [BIDS - Berkeley Institute for Data Science](https://bids.berkeley.edu/)
- [Teaching with Jupyter notebooks and JupyterHub](https://bids.berkeley.edu/resources/videos/teaching-ipythonjupyter-notebooks-and-jupyterhub)
- [Data 8](http://data8.org/)
- [GitHub organization](https://github.com/data-8)
- [NERSC](http://www.nersc.gov/)
- [Press release on Jupyter and Cori](http://www.nersc.gov/news-publications/nersc-news/nersc-center-news/2016/jupyter-notebooks-will-open-up-new-possibilities-on-nerscs-cori-supercomputer/)
- [Moving and sharing data](https://www.nersc.gov/assets/Uploads/03-MovingAndSharingData-Cholia.pdf)
@@ -67,6 +69,7 @@ easy to do with RStudio too.
### University of Colorado Boulder
- (CU Research Computing) CURC
- [JupyterHub User Guide](https://www.rc.colorado.edu/support/user-guide/jupyterhub.html)
- Slurm job dispatched on Crestone compute cluster
- log troubleshooting
@@ -125,6 +128,7 @@ easy to do with RStudio too.
### University of California San Diego
- San Diego Supercomputer Center - Andrea Zonca
- [Deploy JupyterHub on a Supercomputer with SSH](https://zonca.github.io/2017/05/jupyterhub-hpc-batchspawner-ssh.html)
- [Run Jupyterhub on a Supercomputer](https://zonca.github.io/2015/04/jupyterhub-hpc.html)
- [Deploy JupyterHub on a VM for a Workshop](https://zonca.github.io/2016/04/jupyterhub-sdsc-cloud.html)
@@ -143,9 +147,9 @@ easy to do with RStudio too.
- [Teaching with JupyterHub and nbgrader](http://kristenthyng.com/blog/2016/09/07/jupyterhub+nbgrader/)
### Elucidata
-- What's new in Jupyter Notebooks @[Elucidata](https://elucidata.io/):
-- Using Jupyter Notebooks with Jupyterhub on GCP, managed by GKE
-- https://medium.com/elucidata/why-you-should-be-using-a-jupyter-notebook-8385a4ccd93d
+- What's new in Jupyter Notebooks @[Elucidata](https://elucidata.io/):
+- Using Jupyter Notebooks with Jupyterhub on GCP, managed by GKE - https://medium.com/elucidata/why-you-should-be-using-a-jupyter-notebook-8385a4ccd93d
## Service Providers
@@ -175,7 +179,6 @@ easy to do with RStudio too.
- [Deploying JupyterHub on Hadoop](https://jupyterhub-on-hadoop.readthedocs.io)
## Miscellaneous
- https://medium.com/@ybarraud/setting-up-jupyterhub-with-sudospawner-and-anaconda-844628c0dbee#.rm3yt87e1


@@ -9,7 +9,6 @@ with an account and password on the system will be allowed to login.
You can restrict which users are allowed to login with a set,
`Authenticator.allowed_users`:
```python
c.Authenticator.allowed_users = {'mal', 'zoe', 'inara', 'kaylee'}
```
@@ -28,6 +27,7 @@ A set of initial admin users, `admin_users` can configured be as follows:
```python
c.Authenticator.admin_users = {'mal', 'zoe'}
```
Users in the admin set are automatically added to the user `allowed_users` set,
if they are not already present.
@@ -44,8 +44,8 @@ c.PAMAuthenticator.admin_groups = {'wheel'}
Since the default `JupyterHub.admin_access` setting is False, the admins
do not have permission to log in to the single user notebook servers
-owned by *other users*. If `JupyterHub.admin_access` is set to True,
+owned by _other users_. If `JupyterHub.admin_access` is set to True,
-then admins have permission to log in *as other users* on their
+then admins have permission to log in _as other users_ on their
respective machines, for debugging. **As a courtesy, you should make
sure your users know if admin_access is enabled.**
@@ -115,5 +115,5 @@ To set a global password, add this to the config file:
c.DummyAuthenticator.password = "some_password"
```
-[PAM]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
+[pam]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
-[OAuthenticator]: https://github.com/jupyterhub/oauthenticator
+[oauthenticator]: https://github.com/jupyterhub/oauthenticator
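Pulling the snippets above together, a config file using these settings might look like the hedged sketch below; the usernames, group name, and admin-access choice are just the illustrative values from this page:

```python
# Sketch only: combines the authenticator settings discussed above.
c.Authenticator.allowed_users = {'mal', 'zoe', 'inara', 'kaylee'}
c.Authenticator.admin_users = {'mal', 'zoe'}

# With the default PAMAuthenticator, members of a system group can also be admins.
c.PAMAuthenticator.admin_groups = {'wheel'}

# Off by default; if True, admins may log in to other users' servers
# (and, as noted above, you should tell your users).
c.JupyterHub.admin_access = True
```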


@@ -56,7 +56,7 @@ To display all command line options that are available for configuration:
```
Configuration using the command line options is done when launching JupyterHub.
-For example, to start JupyterHub on ``10.0.1.2:443`` with https, you
+For example, to start JupyterHub on `10.0.1.2:443` with https, you
would enter:
```bash
@@ -88,10 +88,10 @@ meant as illustration, are:
## Run the proxy separately
-This is *not* strictly necessary, but useful in many cases. If you
+This is _not_ strictly necessary, but useful in many cases. If you
use a custom proxy (e.g. Traefik), this also not needed.
-Connections to user servers go through the proxy, and *not* the hub
+Connections to user servers go through the proxy, and _not_ the hub
itself. If the proxy stays running when the hub restarts (for
maintenance, re-configuration, etc.), then use connections are not
interrupted. For simplicity, by default the hub starts the proxy


@@ -1,6 +1,5 @@
# Frequently asked questions
### How do I share links to notebooks?
In short, where you see `/user/name/notebooks/foo.ipynb` use `/hub/user-redirect/notebooks/foo.ipynb` (replace `/user/name` with `/hub/user-redirect`).
@@ -11,9 +10,9 @@ Your first instinct might be to copy the URL you see in the browser,
e.g. `hub.jupyter.org/user/yourname/notebooks/coolthing.ipynb`.
However, let's break down what this URL means:
-`hub.jupyter.org/user/yourname/` is the URL prefix handled by *your server*,
+`hub.jupyter.org/user/yourname/` is the URL prefix handled by _your server_,
which means that sharing this URL is asking the person you share the link with
-to come to *your server* and look at the exact same file.
+to come to _your server_ and look at the exact same file.
In most circumstances, this is forbidden by permissions because the person you share with does not have access to your server.
What actually happens when someone visits this URL will depend on whether your server is running and other factors.
@@ -22,7 +21,7 @@ A typical situation is that you have some shared or common filesystem,
such that the same path corresponds to the same document
(either the exact same document or another copy of it).
Typically, what folks want when they do sharing like this
-is for each visitor to open the same file *on their own server*,
+is for each visitor to open the same file _on their own server_,
so Breq would open `/user/breq/notebooks/foo.ipynb` and
Seivarden would open `/user/seivarden/notebooks/foo.ipynb`, etc.


@@ -18,14 +18,14 @@ to the use-cases of large organizations.
Here is a quick breakdown of these three tools:
-* **The Jupyter Notebook** is a document specification (the `.ipynb`) file that interweaves
+- **The Jupyter Notebook** is a document specification (the `.ipynb`) file that interweaves
narrative text with code cells and their outputs. It is also a graphical interface
that allows users to edit these documents. There are also several other graphical interfaces
that allow users to edit the `.ipynb` format (nteract, Jupyter Lab, Google Colab, Kaggle, etc).
-* **JupyterLab** is a flexible and extendible user interface for interactive computing. It
+- **JupyterLab** is a flexible and extendible user interface for interactive computing. It
has several extensions that are tailored for using Jupyter Notebooks, as well as extensions
for other parts of the data science stack.
-* **JupyterHub** is an application that manages interactive computing sessions for **multiple users**.
+- **JupyterHub** is an application that manages interactive computing sessions for **multiple users**.
It also connects them with infrastructure those users wish to access. It can provide
remote access to Jupyter Notebooks and Jupyter Lab for many people.
@@ -50,20 +50,20 @@ scalable infrastructure, large datasets, and high-performance computing.
JupyterHub is used at a variety of institutions in academia,
industry, and government research labs. It is most-commonly used by two kinds of groups:
-* Small teams (e.g., data science teams, research labs, or collaborative projects) to provide a
+- Small teams (e.g., data science teams, research labs, or collaborative projects) to provide a
shared resource for interactive computing, collaboration, and analytics.
-* Large teams (e.g., a department, a large class, or a large group of remote users) to provide
+- Large teams (e.g., a department, a large class, or a large group of remote users) to provide
access to organizational hardware, data, and analytics environments at scale.
Here are a sample of organizations that use JupyterHub:
-* **Universities and colleges**: UC Berkeley, UC San Diego, Cal Poly SLO, Harvard University, University of Chicago,
+- **Universities and colleges**: UC Berkeley, UC San Diego, Cal Poly SLO, Harvard University, University of Chicago,
University of Oslo, University of Sheffield, Université Paris Sud, University of Versailles
-* **Research laboratories**: NASA, NCAR, NOAA, the Large Synoptic Survey Telescope, Brookhaven National Lab,
+- **Research laboratories**: NASA, NCAR, NOAA, the Large Synoptic Survey Telescope, Brookhaven National Lab,
Minnesota Supercomputing Institute, ALCF, CERN, Lawrence Livermore National Laboratory
-* **Online communities**: Pangeo, Quantopian, mybinder.org, MathHub, Open Humans
+- **Online communities**: Pangeo, Quantopian, mybinder.org, MathHub, Open Humans
-* **Computing infrastructure providers**: NERSC, San Diego Supercomputing Center, Compute Canada
+- **Computing infrastructure providers**: NERSC, San Diego Supercomputing Center, Compute Canada
-* **Companies**: Capital One, SANDVIK code, Globus
+- **Companies**: Capital One, SANDVIK code, Globus
See the [Gallery of JupyterHub deployments](../gallery-jhub-deployments.md) for
a more complete list of JupyterHub deployments at institutions.
@@ -95,14 +95,13 @@ The most common way to set up a JupyterHub is to use a JupyterHub distribution,
and opinionated ways to set up a JupyterHub on particular kinds of infrastructure. The two distributions
that we currently suggest are:
-* [Zero to JupyterHub for Kubernetes](https://z2jh.jupyter.org) is a scalable JupyterHub deployment and
+- [Zero to JupyterHub for Kubernetes](https://z2jh.jupyter.org) is a scalable JupyterHub deployment and
guide that runs on Kubernetes. Better for larger or dynamic user groups (50-10,000) or more complex
compute/data needs.
-* [The Littlest JupyterHub](https://tljh.jupyter.org) is a lightweight JupyterHub that runs on a single
+- [The Littlest JupyterHub](https://tljh.jupyter.org) is a lightweight JupyterHub that runs on a single
single machine (in the cloud or under your desk). Better for smaller usergroups (4-80) or more
lightweight computational resources.
### Does JupyterHub run well in the cloud?
Yes - most deployments of JupyterHub are run via cloud infrastructure and on a variety of cloud providers.
@@ -123,9 +122,9 @@ The short answer: yes. JupyterHub as a standalone application has been battle-te
level for several years, and makes a number of "default" security decisions that are reasonable for most
users.
-* For security considerations in the base JupyterHub application,
+- For security considerations in the base JupyterHub application,
[see the JupyterHub security page](https://jupyterhub.readthedocs.io/en/stable/reference/websecurity.html)
-* For security considerations when deploying JupyterHub on Kubernetes, see the
+- For security considerations when deploying JupyterHub on Kubernetes, see the
[JupyterHub on Kubernetes security page](https://zero-to-jupyterhub.readthedocs.io/en/latest/security.html).
The longer answer: it depends on your deployment. Because JupyterHub is very flexible, it can be used
@@ -137,15 +136,13 @@ If you are worried about security, don't hesitate to reach out to the JupyterHub
[Jupyter Community Forum](https://discourse.jupyter.org/c/jupyterhub). This community of practice has many
individuals with experience running secure JupyterHub deployments.
### Does JupyterHub provide computing or data infrastructure?
-No - JupyterHub manages user sessions and can *control* computing infrastructure, but it does not provide these
+No - JupyterHub manages user sessions and can _control_ computing infrastructure, but it does not provide these
things itself. You are expected to run JupyterHub on your own infrastructure (local or in the cloud). Moreover,
JupyterHub has no internal concept of "data", but is designed to be able to communicate with data repositories
(again, either locally or remotely) for use within interactive computing sessions.
### How do I manage users?
JupyterHub offers a few options for managing your users. Upon setting up a JupyterHub, you can choose what
@@ -154,7 +151,7 @@ email address, or choose a username / password when they first log-in, or offloa
another service such as an organization's OAuth.
The users of a JupyterHub are stored locally, and can be modified manually by an administrator of the JupyterHub.
-Moreover, the *active* users on a JupyterHub can be found on the administrator's page. This page
+Moreover, the _active_ users on a JupyterHub can be found on the administrator's page. This page
gives you the abiltiy to stop or restart kernels, inspect user filesystems, and even take over user
sessions to assist them with debugging.
@@ -182,7 +179,6 @@ connect with other infrastructure tools (like Dask or Spark). This allows users
scalable or high-performance resources from within their JupyterHub sessions. The logic of
how those resources are controlled is taken care of by the non-JupyterHub application.
### Can JupyterHub be used with my high-performance computing resources?
Yes - JupyterHub can provide access to many kinds of computing infrastructure.
@@ -218,7 +214,6 @@ the technologies your JupyterHub will use (e.g., dev-ops knowledge with cloud co
In general, the base JupyterHub deployment is not the bottleneck for setup, it is connecting
your JupyterHub with the various services and tools that you wish to provide to your users.
### How well does JupyterHub scale? What are JupyterHub's limitations?
JupyterHub works well at both a small scale (e.g., a single VM or machine) as well as a
@@ -227,7 +222,6 @@ for user bases as large as 10,000. The scalability of JupyterHub largely depends
infrastructure on which it is deployed. JupyterHub has been designed to be lightweight and
flexible, so you can tailor your JupyterHub deployment to your needs.
### Is JupyterHub resilient? What happens when a machine goes down?
For JupyterHubs that are deployed in a containerized environment (e.g., Kubernetes), it is


@@ -11,7 +11,7 @@ This section will help you with basic proxy and network configuration to:
The Proxy's main IP address setting determines where JupyterHub is available to users.
By default, JupyterHub is configured to be available on all network interfaces
-(`''`) on port 8000. *Note*: Use of `'*'` is discouraged for IP configuration;
+(`''`) on port 8000. _Note_: Use of `'*'` is discouraged for IP configuration;
instead, use of `'0.0.0.0'` is preferred.
Changing the Proxy's main IP address and port can be done with the following
@@ -74,7 +74,7 @@ The Hub service listens only on `localhost` (port 8081) by default.
The Hub needs to be accessible from both the proxy and all Spawners.
When spawning local servers, an IP address setting of `localhost` is fine.
-If *either* the Proxy *or* (more likely) the Spawners will be remote or
+If _either_ the Proxy _or_ (more likely) the Spawners will be remote or
isolated in containers, the Hub must listen on an IP that is accessible.
```python

View File

@@ -1,347 +0,0 @@
# Install JupyterHub and JupyterLab from the ground up
The combination of [JupyterHub](https://jupyterhub.readthedocs.io) and [JupyterLab](https://jupyterlab.readthedocs.io)
is a great way to make shared computing resources available to a group.
These instructions are a guide for a manual, 'bare metal' install of [JupyterHub](https://jupyterhub.readthedocs.io)
and [JupyterLab](https://jupyterlab.readthedocs.io). This is ideal for running on a single server: build a beast
of a machine and share it within your lab, or use a virtual machine from any VPS or cloud provider.
This guide has similar goals to [The Littlest JupyterHub](https://the-littlest-jupyterhub.readthedocs.io) setup
script. However, instead of bundling all these steps into one installer, we will perform every step manually.
This makes it easy to customize any part (e.g. if you want to run other services on the same system and need to make them
work together), as well as giving you full control and understanding of your setup.
## Prerequisites
Your own server with administrator (root) access. This could be a local machine, a remotely hosted one, or a cloud instance
or VPS. Each user who will access JupyterHub should have a standard user account on the machine. The install will be done
through the command line - useful if you log into your machine remotely using SSH.
This tutorial was tested on **Ubuntu 18.04**. No other Linux distributions have been tested, but the instructions
should be reasonably straightforward to adapt.
## Goals
JupyterLab enables access to multiple 'kernels', each one being an environment for a given language. The most
common is a Python environment, which for scientific computing is usually one managed by the `conda` package manager.
This guide will set up JupyterHub and JupyterLab separately from the Python environment. In other words, we treat
JupyterHub+JupyterLab as an 'app' or web service, which will connect to the kernels available on the system. Specifically:
- We will create an installation of JupyterHub and JupyterLab using a virtualenv under `/opt` using the system Python.
- We will install conda globally.
- We will create a shared conda environment which can be used (but not modified) by all users.
- We will show how users can create their own private conda environments, where they can install whatever they like.
The default JupyterHub Authenticator uses PAM to authenticate system users with their username and password. One can
[choose the authenticator](https://jupyterhub.readthedocs.io/en/stable/reference/authenticators.html#authenticators)
that best suits their needs. In this guide we will use the default Authenticator because it makes it easy for everyone to manage data
in their home folder and to mix and match different services and access methods (e.g. SSH) which all work using the
Linux system user accounts. Therefore, each user of JupyterHub will need a standard system user account.
Another goal of this guide is to use system provided packages wherever possible. This has the advantage that these packages
get automatic patches and security updates (be sure to turn on automatic updates in Ubuntu). This means less maintenance
work and a more reliable system.
## Part 1: JupyterHub and JupyterLab
### Set up JupyterHub and JupyterLab in a virtual environment
First we create a virtual environment under '/opt/jupyterhub'. The '/opt' folder is where apps not belonging to the operating
system are [commonly installed](https://unix.stackexchange.com/questions/11544/what-is-the-difference-between-opt-and-usr-local).
Both jupyterlab and jupyterhub will be installed into this virtualenv. Create it with the command:
```sh
sudo python3 -m venv /opt/jupyterhub/
```
Now we use pip to install the required Python packages into the new virtual environment. Be sure to install
`wheel` first. Since we are separating the user interface from the computing kernels, we don't install
any Python scientific packages here. The only exception is `ipywidgets` because this is needed to allow connection
between interactive tools running in the kernel and the user interface.
Note that we use `/opt/jupyterhub/bin/python3 -m pip install` each time - this [makes sure](https://snarky.ca/why-you-should-use-python-m-pip/)
that the packages are installed to the correct virtual environment.
Perform the install using the following commands:
```sh
sudo /opt/jupyterhub/bin/python3 -m pip install wheel
sudo /opt/jupyterhub/bin/python3 -m pip install jupyterhub jupyterlab
sudo /opt/jupyterhub/bin/python3 -m pip install ipywidgets
```
JupyterHub also currently defaults to requiring `configurable-http-proxy`, which needs `nodejs` and `npm`. The versions
of these available in Ubuntu therefore need to be installed first (they are a bit old but this is ok for our needs):
```sh
sudo apt install nodejs npm
```
Then install `configurable-http-proxy`:
```sh
sudo npm install -g configurable-http-proxy
```
### Create the configuration for JupyterHub
Now we start creating configuration files. To keep everything together, we put all the configuration into the folder
created for the virtualenv, under `/opt/jupyterhub/etc/`. For each thing needing configuration, we will create a further
subfolder and necessary files.
First create the folder for the JupyterHub configuration and navigate to it:
```sh
sudo mkdir -p /opt/jupyterhub/etc/jupyterhub/
cd /opt/jupyterhub/etc/jupyterhub/
```
Then generate the default configuration file
```sh
sudo /opt/jupyterhub/bin/jupyterhub --generate-config
```
This will produce the default configuration file `/opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py`.
You will need to edit the configuration file to make the JupyterLab interface the default.
Set the following configuration option in your `jupyterhub_config.py` file:
```python
c.Spawner.default_url = '/lab'
```
Further configuration options may be found in the documentation.
### Setup Systemd service
We will set up JupyterHub to run as a system service using Systemd (which is responsible for managing all services and
servers that run on startup in Ubuntu). We will create a service file in a suitable location in the virtualenv folder
and then link it to the system services. First create the folder for the service file:
```sh
sudo mkdir -p /opt/jupyterhub/etc/systemd
```
Then create the following text file using your [favourite editor](https://micro-editor.github.io/) at
```sh
/opt/jupyterhub/etc/systemd/jupyterhub.service
```
Paste the following service unit definition into the file:
```
[Unit]
Description=JupyterHub
After=syslog.target network.target
[Service]
User=root
Environment="PATH=/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/jupyterhub/bin"
ExecStart=/opt/jupyterhub/bin/jupyterhub -f /opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py
[Install]
WantedBy=multi-user.target
```
This sets up the environment to use the virtual environment we created, tells Systemd how to start jupyterhub using
the configuration file we created, specifies that jupyterhub will be started as the `root` user (needed so that it can
start jupyter on behalf of other logged in users), and specifies that jupyterhub should start on boot after the network
is enabled.
Finally, we need to make systemd aware of our service file. First we symlink our file into systemd's directory:
```sh
sudo ln -s /opt/jupyterhub/etc/systemd/jupyterhub.service /etc/systemd/system/jupyterhub.service
```
Then tell systemd to reload its configuration files
```sh
sudo systemctl daemon-reload
```
And finally enable the service
```sh
sudo systemctl enable jupyterhub.service
```
The service will start on reboot, but we can start it straight away using:
```sh
sudo systemctl start jupyterhub.service
```
...and check that it's running using:
```sh
sudo systemctl status jupyterhub.service
```
You should now already be able to access JupyterHub at `<your server's ip>:8000` (assuming you haven't already set
up a firewall blocking it). However, when you log in, the Jupyter notebooks will try to use the Python virtualenv
that was created to install JupyterHub, which is not what we want. So on to part 2.
## Part 2: Conda environments
### Install conda for the whole system
We will use `conda` to manage Python environments. We will install the officially maintained `conda` packages for Ubuntu,
which means they will get automatic updates with the rest of the system. Set up the repo for the official Conda Debian packages;
the instructions are copied from [here](https://docs.conda.io/projects/conda/en/latest/user-guide/install/rpm-debian.html):
Install the Anaconda public GPG key to the trusted store:
```sh
curl https://repo.anaconda.com/pkgs/misc/gpgkeys/anaconda.asc | gpg --dearmor > conda.gpg
sudo install -o root -g root -m 644 conda.gpg /etc/apt/trusted.gpg.d/
```
Add Debian repo
```sh
echo "deb [arch=amd64] https://repo.anaconda.com/pkgs/misc/debrepo/conda stable main" | sudo tee /etc/apt/sources.list.d/conda.list
```
Install conda
```sh
sudo apt update
sudo apt install conda
```
This will install conda into the folder `/opt/conda/`, with the conda command available at `/opt/conda/bin/conda`.
Finally, we can make conda more easily available to users by symlinking the conda shell setup script to the profile
'drop in' folder so that it gets run on login
```sh
sudo ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh
```
### Install a default conda environment for all users
First create a folder for conda envs (might exist already):
```sh
sudo mkdir /opt/conda/envs/
```
Then create a conda environment to your liking within that folder. Here we have called it 'python' because it will
be the obvious default - call it whatever you like. You can install whatever you like into this environment, but you MUST at least install `ipykernel`.
```sh
sudo /opt/conda/bin/conda create --prefix /opt/conda/envs/python python=3.7 ipykernel
```
Once your env is set up as desired, make it visible to Jupyter by installing the kernel spec. There are two options here:
1) Install into the JupyterHub virtualenv - this ensures it overrides the default python version. It will only be visible
to the JupyterHub installation we have just created. This is useful to avoid conda environments appearing where they are not expected.
```sh
sudo /opt/conda/envs/python/bin/python -m ipykernel install --prefix=/opt/jupyterhub/ --name 'python' --display-name "Python (default)"
```
2) Install it system-wide by putting it into `/usr/local`. It will be visible to any parallel install of JupyterHub or
JupyterLab, and will persist even if you later delete or modify the JupyterHub installation. This is useful if the kernels
might be used by other services, or if you want to modify the JupyterHub installation independently from the conda environments.
```sh
sudo /opt/conda/envs/python/bin/python -m ipykernel install --prefix /usr/local/ --name 'python' --display-name "Python (default)"
```
### Setting up users' own conda environments
There is relatively little for the administrator to do here, as users will have to set up their own environments using the shell.
On login they should run `conda init` or `/opt/conda/bin/conda`. They can then use conda to set up their environment,
although they must also install `ipykernel`. Once done, they can enable their kernel using:
```sh
/path/to/kernel/env/bin/python -m ipykernel install --name 'python-my-env' --display-name "Python My Env"
```
This will place the kernel spec into their home folder, where Jupyter will look for it on startup.
## Setting up a reverse proxy
The guide so far results in JupyterHub running on port 8000. It is not generally advisable to run open web services in
this way - instead, use a reverse proxy running on standard HTTP/HTTPS ports.
> **Important**: Be aware of the security implications especially if you are running a server that is accessible from the open internet
> i.e. not protected within an institutional intranet or private home/office network. You should set up a firewall and
> HTTPS encryption, which is outside of the scope of this guide. For HTTPS consider using [LetsEncrypt](https://letsencrypt.org/)
> or setting up a [self-signed certificate](https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-18-04).
> Firewalls may be set up using `ufw` or `firewalld` and combined with `fail2ban`.
### Using Nginx
Nginx is a mature and established web server and reverse proxy and is easy to install using `sudo apt install nginx`.
Details on using Nginx as a reverse proxy can be found elsewhere. Here, we will only outline the additional steps needed
to set up JupyterHub with Nginx and host it at a given URL, e.g. `<your-server-ip-or-url>/jupyter`.
This could be useful, for example, if you are running several services or web pages on the same server.
Achieving this requires a few tweaks to both the JupyterHub configuration and the Nginx config. First, edit the
configuration file `/opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py` and add the line:
```python
c.JupyterHub.bind_url = 'http://:8000/jupyter'
```
where `/jupyter` will be the relative URL of the JupyterHub.
Now Nginx must be configured to pass all traffic from `/jupyter` to the local address `127.0.0.1:8000`.
Add the following snippet to your nginx configuration file (e.g. `/etc/nginx/sites-available/default`).
```
location /jupyter/ {
# NOTE important to also set base url of jupyterhub to /jupyter in its config
proxy_pass http://127.0.0.1:8000;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# websocket headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
```
Also add this snippet before the *server* block:
```
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
```
Nginx will not run if there are errors in the configuration; check your configuration using:
```sh
nginx -t
```
If there are no errors, you can restart the Nginx service for the new configuration to take effect.
```sh
sudo systemctl restart nginx.service
```
## Getting started using your new JupyterHub
Once you have set up JupyterHub and the Nginx proxy as described, you can browse to your JupyterHub IP or URL
(e.g. if your server IP address is `123.456.789.1` and you decided to host JupyterHub at the `/jupyter` URL, browse
to `123.456.789.1/jupyter`). You will find a login page where you enter your Linux username and password. On login
you will be presented with the JupyterLab interface, with the file browser pane showing the contents of your
home directory on the server.

View File

@@ -0,0 +1,6 @@
:orphan:
JupyterHub the hard way
=======================
This guide has moved to https://github.com/manics/jupyterhub-the-hard-way/blob/jupyterhub-alternative-doc/docs/installation-guide-hard.md

View File

@@ -11,4 +11,3 @@ running on your own infrastructure.
quickstart quickstart
quickstart-docker quickstart-docker
installation-basics installation-basics
installation-guide-hard

View File

@@ -12,10 +12,10 @@ Before installing JupyterHub, you will need:
- [nodejs/npm](https://www.npmjs.com/). [Install nodejs/npm](https://docs.npmjs.com/getting-started/installing-node), - [nodejs/npm](https://www.npmjs.com/). [Install nodejs/npm](https://docs.npmjs.com/getting-started/installing-node),
using your operating system's package manager. using your operating system's package manager.
* If you are using **`conda`**, the nodejs and npm dependencies will be installed for - If you are using **`conda`**, the nodejs and npm dependencies will be installed for
you by conda. you by conda.
* If you are using **`pip`**, install a recent version of - If you are using **`pip`**, install a recent version of
[nodejs/npm](https://docs.npmjs.com/getting-started/installing-node). [nodejs/npm](https://docs.npmjs.com/getting-started/installing-node).
For example, install it on Linux (Debian/Ubuntu) using: For example, install it on Linux (Debian/Ubuntu) using:
@@ -78,12 +78,12 @@ Visit `https://localhost:8000` in your browser, and sign in with your unix
credentials. credentials.
To **allow multiple users to sign in** to the Hub server, you must start To **allow multiple users to sign in** to the Hub server, you must start
`jupyterhub` as a *privileged user*, such as root: `jupyterhub` as a _privileged user_, such as root:
```bash ```bash
sudo jupyterhub sudo jupyterhub
``` ```
The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges) The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges)
describes how to run the server as a *less privileged user*. This requires describes how to run the server as a _less privileged user_. This requires
additional configuration of the system. additional configuration of the system.

View File

@@ -89,7 +89,6 @@ class DictionaryAuthenticator(Authenticator):
return data['username'] return data['username']
``` ```
#### Normalize usernames #### Normalize usernames
Since the Authenticator and Spawner both use the same username, Since the Authenticator and Spawner both use the same username,
@@ -112,10 +111,9 @@ normalize usernames using PAM (basically round-tripping them: username
to uid to username), which is useful in case you use some external to uid to username), which is useful in case you use some external
service that allows multiple usernames mapping to the same user (such service that allows multiple usernames mapping to the same user (such
as ActiveDirectory, yes, this really happens). When as ActiveDirectory, yes, this really happens). When
`pam_normalize_username` is on, usernames are *not* normalized to `pam_normalize_username` is on, usernames are _not_ normalized to
lowercase. lowercase.
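As a hedged illustration of the normalization hook this section describes (not taken from the diff), an Authenticator subclass can also override `normalize_username` directly; the domain-stripping policy below is purely an example:

```python
from jupyterhub.auth import Authenticator


class DomainStrippingAuthenticator(Authenticator):
    def normalize_username(self, username):
        # apply the default normalization (lowercasing, username_map),
        # then drop an email-style domain, e.g. 'alice@EXAMPLE.COM' -> 'alice'
        return super().normalize_username(username).split('@')[0]
```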
#### Validate usernames #### Validate usernames
In most cases, there is a very limited set of acceptable usernames. In most cases, there is a very limited set of acceptable usernames.
@@ -132,7 +130,6 @@ To only allow usernames that start with 'w':
c.Authenticator.username_pattern = r'w.*' c.Authenticator.username_pattern = r'w.*'
``` ```
### How to write a custom authenticator ### How to write a custom authenticator
You can use custom Authenticator subclasses to enable authentication You can use custom Authenticator subclasses to enable authentication
@@ -145,7 +142,6 @@ and [post_spawn_stop(user, spawner)][], are hooks that can be used to do
auth-related startup (e.g. opening PAM sessions) and cleanup auth-related startup (e.g. opening PAM sessions) and cleanup
(e.g. closing PAM sessions). (e.g. closing PAM sessions).
See a list of custom Authenticators [on the wiki](https://github.com/jupyterhub/jupyterhub/wiki/Authenticators). See a list of custom Authenticators [on the wiki](https://github.com/jupyterhub/jupyterhub/wiki/Authenticators).
If you are interested in writing a custom authenticator, you can read If you are interested in writing a custom authenticator, you can read
@@ -186,7 +182,6 @@ Additionally, configurable attributes for your authenticator will
appear in jupyterhub help output and auto-generated configuration files appear in jupyterhub help output and auto-generated configuration files
via `jupyterhub --generate-config`. via `jupyterhub --generate-config`.
### Authentication state ### Authentication state
JupyterHub 0.8 adds the ability to persist state related to authentication, JupyterHub 0.8 adds the ability to persist state related to authentication,
@@ -220,12 +215,10 @@ To store auth_state, two conditions must be met:
export JUPYTERHUB_CRYPT_KEY=$(openssl rand -hex 32) export JUPYTERHUB_CRYPT_KEY=$(openssl rand -hex 32)
``` ```
JupyterHub uses [Fernet](https://cryptography.io/en/latest/fernet/) to encrypt auth_state. JupyterHub uses [Fernet](https://cryptography.io/en/latest/fernet/) to encrypt auth_state.
To facilitate key-rotation, `JUPYTERHUB_CRYPT_KEY` may be a semicolon-separated list of encryption keys. To facilitate key-rotation, `JUPYTERHUB_CRYPT_KEY` may be a semicolon-separated list of encryption keys.
If there are multiple keys present, the **first** key is always used to persist any new auth_state. If there are multiple keys present, the **first** key is always used to persist any new auth_state.
#### Using auth_state #### Using auth_state
Typically, if `auth_state` is persisted it is desirable to affect the Spawner environment in some way. Typically, if `auth_state` is persisted it is desirable to affect the Spawner environment in some way.
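One common pattern, shown here as a hedged sketch rather than text from the diff, is to copy values from `auth_state` into the spawner's environment in `pre_spawn_start`; the `ACCESS_TOKEN` variable name is illustrative:

```python
from jupyterhub.auth import Authenticator


class MyAuthenticator(Authenticator):
    async def pre_spawn_start(self, user, spawner):
        auth_state = await user.get_auth_state()
        if not auth_state:
            # auth_state is not enabled or was not persisted
            return
        # pass an upstream credential to the single-user server (illustrative name)
        spawner.environment['ACCESS_TOKEN'] = auth_state.get('access_token', '')
```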
@@ -266,11 +259,10 @@ PAM session.
Beginning with version 0.8, JupyterHub is an OAuth provider. Beginning with version 0.8, JupyterHub is an OAuth provider.
[authenticator]: https://github.com/jupyterhub/jupyterhub/blob/master/jupyterhub/auth.py
[Authenticator]: https://github.com/jupyterhub/jupyterhub/blob/master/jupyterhub/auth.py [pam]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
[PAM]: https://en.wikipedia.org/wiki/Pluggable_authentication_module [oauth]: https://en.wikipedia.org/wiki/OAuth
[OAuth]: https://en.wikipedia.org/wiki/OAuth [github oauth]: https://developer.github.com/v3/oauth/
[GitHub OAuth]: https://developer.github.com/v3/oauth/ [oauthenticator]: https://github.com/jupyterhub/oauthenticator
[OAuthenticator]: https://github.com/jupyterhub/oauthenticator
[pre_spawn_start(user, spawner)]: https://jupyterhub.readthedocs.io/en/latest/api/auth.html#jupyterhub.auth.Authenticator.pre_spawn_start [pre_spawn_start(user, spawner)]: https://jupyterhub.readthedocs.io/en/latest/api/auth.html#jupyterhub.auth.Authenticator.pre_spawn_start
[post_spawn_stop(user, spawner)]: https://jupyterhub.readthedocs.io/en/latest/api/auth.html#jupyterhub.auth.Authenticator.post_spawn_stop [post_spawn_stop(user, spawner)]: https://jupyterhub.readthedocs.io/en/latest/api/auth.html#jupyterhub.auth.Authenticator.post_spawn_stop

View File

@@ -3,18 +3,17 @@
In this example, we show a configuration file for a fairly standard JupyterHub In this example, we show a configuration file for a fairly standard JupyterHub
deployment with the following assumptions: deployment with the following assumptions:
* Running JupyterHub on a single cloud server - Running JupyterHub on a single cloud server
* Using SSL on the standard HTTPS port 443 - Using SSL on the standard HTTPS port 443
* Using GitHub OAuth (using oauthenticator) for login - Using GitHub OAuth (using oauthenticator) for login
* Using the default spawner (to configure other spawners, uncomment and edit - Using the default spawner (to configure other spawners, uncomment and edit
`spawner_class` as well as follow the instructions for your desired spawner) `spawner_class` as well as follow the instructions for your desired spawner)
* Users exist locally on the server - Users exist locally on the server
* Users' notebooks to be served from `~/assignments` to allow users to browse - Users' notebooks to be served from `~/assignments` to allow users to browse
for notebooks within other users' home directories for notebooks within other users' home directories
* You want the landing page for each user to be a `Welcome.ipynb` notebook in - You want the landing page for each user to be a `Welcome.ipynb` notebook in
their assignments directory. their assignments directory.
* All runtime files are put into `/srv/jupyterhub` and log files in `/var/log`. - All runtime files are put into `/srv/jupyterhub` and log files in `/var/log`.
The `jupyterhub_config.py` file would have these settings: The `jupyterhub_config.py` file would have these settings:
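The settings themselves are truncated in this diff; the following is a hedged, partial sketch of what such a `jupyterhub_config.py` typically contains, with placeholder paths consistent with the assumptions listed above:

```python
# Partial sketch only; the full example in the docs covers more options.
c.JupyterHub.port = 443
c.JupyterHub.ssl_key = '/path/to/my.key'
c.JupyterHub.ssl_cert = '/path/to/my.cert'
c.JupyterHub.authenticator_class = 'oauthenticator.GitHubOAuthenticator'
c.Spawner.notebook_dir = '~/assignments'
c.Spawner.default_url = '/notebooks/Welcome.ipynb'
```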

View File

@@ -6,12 +6,12 @@ SSL port `443`. This could be useful if the JupyterHub server machine is also
hosting other domains or content on `443`. The goal in this example is to hosting other domains or content on `443`. The goal in this example is to
satisfy the following: satisfy the following:
* JupyterHub is running on a server, accessed *only* via `HUB.DOMAIN.TLD:443` - JupyterHub is running on a server, accessed _only_ via `HUB.DOMAIN.TLD:443`
* On the same machine, `NO_HUB.DOMAIN.TLD` strictly serves different content, - On the same machine, `NO_HUB.DOMAIN.TLD` strictly serves different content,
also on port `443` also on port `443`
* `nginx` or `apache` is used as the public access point (which means that - `nginx` or `apache` is used as the public access point (which means that
only nginx/apache will bind to `443`) only nginx/apache will bind to `443`)
* After testing, the server in question should be able to score at least an A on the - After testing, the server in question should be able to score at least an A on the
Qualys SSL Labs [SSL Server Test](https://www.ssllabs.com/ssltest/) Qualys SSL Labs [SSL Server Test](https://www.ssllabs.com/ssltest/)
Let's start out with needed JupyterHub configuration in `jupyterhub_config.py`: Let's start out with needed JupyterHub configuration in `jupyterhub_config.py`:
@@ -144,6 +144,7 @@ Now restart `nginx`, restart the JupyterHub, and enjoy accessing
`https://NO_HUB.DOMAIN.TLD`. `https://NO_HUB.DOMAIN.TLD`.
### SELinux permissions for nginx ### SELinux permissions for nginx
On distributions with SELinux enabled (e.g. Fedora), one may encounter permission errors On distributions with SELinux enabled (e.g. Fedora), one may encounter permission errors
when the nginx service is started. when the nginx service is started.
@@ -155,8 +156,8 @@ semanage port -a -t http_port_t -p tcp 8000
setsebool -P httpd_can_network_relay 1 setsebool -P httpd_can_network_relay 1
setsebool -P httpd_can_network_connect 1 setsebool -P httpd_can_network_connect 1
``` ```
Replace 8000 with the port the jupyterhub server is running from.
Replace 8000 with the port the jupyterhub server is running from.
## Apache ## Apache
@@ -211,22 +212,24 @@ Listen 443
</VirtualHost> </VirtualHost>
``` ```
If you need to run JupyterHub under /jhub/ or another location, please use the configurations below: If you need to run JupyterHub under /jhub/ or another location, please use the configurations below:
- JupyterHub running locally at http://127.0.0.1:8000/jhub/ or other location - JupyterHub running locally at http://127.0.0.1:8000/jhub/ or other location
httpd.conf amendments: httpd.conf amendments:
```bash ```bash
RewriteRule /jhub/(.*) ws://127.0.0.1:8000/jhub/$1 [NE,P,L] RewriteRule /jhub/(.*) ws://127.0.0.1:8000/jhub/$1 [NE,P,L]
RewriteRule /jhub/(.*) http://127.0.0.1:8000/jhub/$1 [NE,P,L] RewriteRule /jhub/(.*) http://127.0.0.1:8000/jhub/$1 [NE,P,L]
ProxyPass /jhub/ http://127.0.0.1:8000/jhub/ ProxyPass /jhub/ http://127.0.0.1:8000/jhub/
ProxyPassReverse /jhub/ http://127.0.0.1:8000/jhub/ ProxyPassReverse /jhub/ http://127.0.0.1:8000/jhub/
``` ```
jupyterhub_config.py amendments: jupyterhub_config.py amendments:
```bash
```bash
--The public facing URL of the whole JupyterHub application. --The public facing URL of the whole JupyterHub application.
--This is the address on which the proxy will bind. Sets protocol, ip, base_url --This is the address on which the proxy will bind. Sets protocol, ip, base_url
c.JupyterHub.bind_url = 'http://127.0.0.1:8000/jhub/' c.JupyterHub.bind_url = 'http://127.0.0.1:8000/jhub/'
``` ```

View File

@@ -53,7 +53,6 @@ To do this we add to `/etc/sudoers` (use `visudo` for safe editing of sudoers):
- give `rhea` permission to run `JUPYTER_CMD` on behalf of `JUPYTER_USERS` - give `rhea` permission to run `JUPYTER_CMD` on behalf of `JUPYTER_USERS`
without entering a password without entering a password
For example: For example:
```bash ```bash
@@ -91,7 +90,7 @@ $ adduser -G jupyterhub newuser
Test that the new user doesn't need to enter a password to run the sudospawner Test that the new user doesn't need to enter a password to run the sudospawner
command. command.
This should prompt for your password to switch to rhea, but *not* prompt for This should prompt for your password to switch to rhea, but _not_ prompt for
any password for the second switch. It should show some help output about any password for the second switch. It should show some help output about
logging options: logging options:
@@ -157,6 +156,7 @@ then you will need to give `node` permission to do so:
```bash ```bash
sudo setcap 'cap_net_bind_service=+ep' /usr/bin/node sudo setcap 'cap_net_bind_service=+ep' /usr/bin/node
``` ```
However, you may want to further understand the consequences of this. However, you may want to further understand the consequences of this.
You may also be interested in limiting the amount of CPU any process can use You may also be interested in limiting the amount of CPU any process can use
@@ -165,7 +165,6 @@ distributions' packaging system. This can be used to keep any user's process
from using too many CPU cycles. You can configure it according to [these from using too many CPU cycles. You can configure it according to [these
instructions](http://ubuntuforums.org/showthread.php?t=992706). instructions](http://ubuntuforums.org/showthread.php?t=992706).
### Shadow group (FreeBSD) ### Shadow group (FreeBSD)
**NOTE:** This has not been tested and may not work as expected. **NOTE:** This has not been tested and may not work as expected.

View File

@@ -22,20 +22,18 @@ This section will focus on user environments, including:
- Installing kernelspecs - Installing kernelspecs
- Using containers vs. multi-user hosts - Using containers vs. multi-user hosts
## Installing packages ## Installing packages
To make packages available to users, you generally will install packages To make packages available to users, you generally will install packages
system-wide or in a shared environment. system-wide or in a shared environment.
This installation location should always be in the same environment that This installation location should always be in the same environment that
`jupyterhub-singleuser` itself is installed in, and must be *readable and `jupyterhub-singleuser` itself is installed in, and must be _readable and
executable* by your users. If you want users to be able to install additional executable_ by your users. If you want users to be able to install additional
packages, it must also be *writable* by your users. packages, it must also be _writable_ by your users.
If you are using a standard system Python install, you would use: If you are using a standard system Python install, you would use:
```bash ```bash
sudo python3 -m pip install numpy sudo python3 -m pip install numpy
``` ```
@@ -47,7 +45,6 @@ You may also use conda to install packages. If you do, you should make sure
that the conda environment has appropriate permissions for users to be able to that the conda environment has appropriate permissions for users to be able to
run Python code in the env. run Python code in the env.
## Configuring Jupyter and IPython ## Configuring Jupyter and IPython
[Jupyter](https://jupyter-notebook.readthedocs.io/en/stable/config_overview.html) [Jupyter](https://jupyter-notebook.readthedocs.io/en/stable/config_overview.html)
@@ -64,6 +61,7 @@ users. It's generally more efficient to configure user environments "system-wide
and it's a good idea to avoid creating files in users' home directories. and it's a good idea to avoid creating files in users' home directories.
The typical locations for these config files are: The typical locations for these config files are:
- **system-wide** in `/etc/{jupyter|ipython}` - **system-wide** in `/etc/{jupyter|ipython}`
- **env-wide** (environment wide) in `{sys.prefix}/etc/{jupyter|ipython}`. - **env-wide** (environment wide) in `{sys.prefix}/etc/{jupyter|ipython}`.
@@ -91,7 +89,6 @@ c.MappingKernelManager.cull_idle_timeout = 20 * 60
c.MappingKernelManager.cull_interval = 2 * 60 c.MappingKernelManager.cull_interval = 2 * 60
``` ```
## Installing kernelspecs ## Installing kernelspecs
You may have multiple Jupyter kernels installed and want to make sure that You may have multiple Jupyter kernels installed and want to make sure that
@@ -119,7 +116,6 @@ sure are available, I can install their specs system-wide (in /usr/local) with:
/path/to/python2 -m IPython kernel install --prefix=/usr/local /path/to/python2 -m IPython kernel install --prefix=/usr/local
``` ```
## Multi-user hosts vs. Containers ## Multi-user hosts vs. Containers
There are two broad categories of user environments that depend on what There are two broad categories of user environments that depend on what
@@ -141,8 +137,8 @@ When JupyterHub uses **container-based** Spawners (e.g. KubeSpawner or
DockerSpawner), the 'system-wide' environment is really the container image DockerSpawner), the 'system-wide' environment is really the container image
which you are using for users. which you are using for users.
In both cases, you want to *avoid putting configuration in user home In both cases, you want to _avoid putting configuration in user home
directories* because users can change those configuration settings. Also, directories_ because users can change those configuration settings. Also,
home directories typically persist once they are created, so they are home directories typically persist once they are created, so they are
difficult for admins to update later. difficult for admins to update later.

View File

@@ -136,7 +136,7 @@ async def delete_route(self, routespec):
### Retrieving routes ### Retrieving routes
For retrieval, you only *need* to implement a single method that retrieves all For retrieval, you only _need_ to implement a single method that retrieves all
routes. The return value for this function should be a dictionary, keyed by routes. The return value for this function should be a dictionary, keyed by
`routespec`, of dicts whose keys are the same three arguments passed to `routespec`, of dicts whose keys are the same three arguments passed to
`add_route` (`routespec`, `target`, `data`) `add_route` (`routespec`, `target`, `data`)
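A hedged sketch of what that retrieval method (`get_all_routes` in the custom proxy API) can look like; the internal `self._routes` store below is hypothetical:

```python
async def get_all_routes(self):
    # self._routes is a hypothetical store: {routespec: {'target': ..., 'data': ...}}
    return {
        routespec: {
            'routespec': routespec,
            'target': route['target'],
            'data': route['data'],
        }
        for routespec, route in self._routes.items()
    }
```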

View File

@@ -187,6 +187,7 @@ hub:
``` ```
With that setting in place, a new named-server is activated like this: With that setting in place, a new named-server is activated like this:
```bash ```bash
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverA>" curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverA>"
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverB>" curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverB>"
@@ -201,7 +202,6 @@ will need to be able to handle the case of multiple servers per user and ensure
uniqueness of names, particularly if servers are spawned via docker containers uniqueness of names, particularly if servers are spawned via docker containers
or kubernetes pods. or kubernetes pods.
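For non-Helm deployments, a hedged `jupyterhub_config.py` equivalent of the `hub:` snippet referenced above might look like this; the per-user limit is illustrative:

```python
c.JupyterHub.allow_named_servers = True
c.JupyterHub.named_server_limit_per_user = 5  # illustrative limit
```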
## Learn more about the API ## Learn more about the API
You can see the full [JupyterHub REST API][] for details. This REST API Spec can You can see the full [JupyterHub REST API][] for details. This REST API Spec can
@@ -210,6 +210,6 @@ Both resources contain the same information and differ only in its display.
Note: The Swagger specification is being renamed the [OpenAPI Initiative][]. Note: The Swagger specification is being renamed the [OpenAPI Initiative][].
[interactive style on swagger's petstore]: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default [interactive style on swagger's petstore]: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default
[OpenAPI Initiative]: https://www.openapis.org/ [openapi initiative]: https://www.openapis.org/
[JupyterHub REST API]: ./rest-api [jupyterhub rest api]: ./rest-api
[Jupyter Notebook REST API]: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/notebook/master/notebook/services/api/api.yaml [jupyter notebook rest api]: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/notebook/master/notebook/services/api/api.yaml

View File

@@ -1,6 +1,5 @@
# Running proxy separately from the hub # Running proxy separately from the hub
## Background ## Background
The thing which users directly connect to is the proxy, by default The thing which users directly connect to is the proxy, by default
@@ -22,7 +21,6 @@ The default JupyterHub proxy is
and that page has some docs. If you are using a different proxy, such and that page has some docs. If you are using a different proxy, such
as Traefik, these instructions are probably not relevant to you. as Traefik, these instructions are probably not relevant to you.
## Configuration options ## Configuration options
`c.JupyterHub.cleanup_servers = False` should be set, which tells the `c.JupyterHub.cleanup_servers = False` should be set, which tells the
@@ -37,16 +35,12 @@ it yourself).
token for authenticating communication with the proxy. token for authenticating communication with the proxy.
`c.ConfigurableHTTPProxy.api_url = 'http://localhost:8001'` should be `c.ConfigurableHTTPProxy.api_url = 'http://localhost:8001'` should be
set to the URL which the hub uses to connect *to the proxy's API*. set to the URL which the hub uses to connect _to the proxy's API_.
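Collected in one place, the hub-side settings this section mentions look roughly like the hedged sketch below; the token value is a placeholder, and `should_start` is included as an assumption about the elided surrounding text:

```python
c.JupyterHub.cleanup_servers = False
c.ConfigurableHTTPProxy.should_start = False  # assumption: the hub does not launch the proxy
c.ConfigurableHTTPProxy.auth_token = 'shared-secret-token'  # placeholder; give the same token to the proxy
c.ConfigurableHTTPProxy.api_url = 'http://localhost:8001'
```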
## Proxy configuration ## Proxy configuration
You need to configure a service to start the proxy. An example You need to configure a service to start the proxy. An example
command line for this is `configurable-http-proxy --ip=127.0.0.1 command line for this is `configurable-http-proxy --ip=127.0.0.1 --port=8000 --api-ip=127.0.0.1 --api-port=8001 --default-target=http://localhost:8081 --error-target=http://localhost:8081/hub/error`. (Details for how to
--port=8000 --api-ip=127.0.0.1 --api-port=8001
--default-target=http://localhost:8081
--error-target=http://localhost:8081/hub/error`. (Details for how to
do this are out of scope for this tutorial - for example it might be a do this are out of scope for this tutorial - for example it might be a
systemd service or within another docker container). The proxy has no systemd service or within another docker container). The proxy has no
configuration files; all configuration is via the command line and configuration files; all configuration is via the command line and
@@ -54,7 +48,7 @@ environment variables.
`--api-ip` and `--api-port` (which tell the proxy where to listen) should match the hub's `ConfigurableHTTPProxy.api_url`. `--api-ip` and `--api-port` (which tell the proxy where to listen) should match the hub's `ConfigurableHTTPProxy.api_url`.
`--ip`, `--port`, and other options configure the *user* connections to the proxy. `--ip`, `--port`, and other options configure the _user_ connections to the proxy.
`--default-target` and `--error-target` should point to the hub, and are used when users navigate to the proxy originally. `--default-target` and `--error-target` should point to the hub, and are used when users navigate to the proxy originally.
@@ -67,14 +61,12 @@ what other options are needed, for example SSL options. Note that
these are configured in the hub if the hub is starting the proxy - you these are configured in the hub if the hub is starting the proxy - you
need to move the options to here. need to move the options to here.
## Docker image ## Docker image
You can use [jupyterhub configurable-http-proxy docker You can use [jupyterhub configurable-http-proxy docker
image](https://hub.docker.com/r/jupyterhub/configurable-http-proxy/) image](https://hub.docker.com/r/jupyterhub/configurable-http-proxy/)
to run the proxy. to run the proxy.
## See also ## See also
* [jupyterhub configurable-http-proxy](https://github.com/jupyterhub/configurable-http-proxy) - [jupyterhub configurable-http-proxy](https://github.com/jupyterhub/configurable-http-proxy)

View File

@@ -50,11 +50,8 @@ A Service may have the following properties:
If a service is also to be managed by the Hub, it has a few extra options: If a service is also to be managed by the Hub, it has a few extra options:
- `command: (str/Popen list)` - Command for JupyterHub to spawn the service. - `command: (str/Popen list)` - Command for JupyterHub to spawn the service. - Only use this if the service should be a subprocess. - If command is not specified, the Service is assumed to be managed
- Only use this if the service should be a subprocess. externally. - If a command is specified for launching the Service, the Service will
- If command is not specified, the Service is assumed to be managed
externally.
- If a command is specified for launching the Service, the Service will
be started and managed by the Hub. be started and managed by the Hub.
- `environment: dict` - additional environment variables for the Service. - `environment: dict` - additional environment variables for the Service.
- `user: str` - the name of a system user to manage the Service. If - `user: str` - the name of a system user to manage the Service. If
@@ -199,16 +196,16 @@ can be used by services. You may go beyond this reference implementation and
create custom hub-authenticating clients and services. We describe the process create custom hub-authenticating clients and services. We describe the process
below. below.
The reference, or base, implementation is the [`HubAuth`][HubAuth] class, The reference, or base, implementation is the [`HubAuth`][hubauth] class,
which implements the requests to the Hub. which implements the requests to the Hub.
To use HubAuth, you must set the `.api_token`, either programmatically when constructing the class, To use HubAuth, you must set the `.api_token`, either programmatically when constructing the class,
or via the `JUPYTERHUB_API_TOKEN` environment variable. or via the `JUPYTERHUB_API_TOKEN` environment variable.
Most of the logic for authentication implementation is found in the Most of the logic for authentication implementation is found in the
[`HubAuth.user_for_cookie`][HubAuth.user_for_cookie] [`HubAuth.user_for_cookie`][hubauth.user_for_cookie]
and in the and in the
[`HubAuth.user_for_token`][HubAuth.user_for_token] [`HubAuth.user_for_token`][hubauth.user_for_token]
methods, which make a request of the Hub, and return: methods, which make a request of the Hub, and return:
- None, if no user could be identified, or - None, if no user could be identified, or
@@ -285,11 +282,10 @@ def whoami(user):
) )
``` ```
### Authenticating tornado services with JupyterHub ### Authenticating tornado services with JupyterHub
Since most Jupyter services are written with tornado, Since most Jupyter services are written with tornado,
we include a mixin class, [`HubAuthenticated`][HubAuthenticated], we include a mixin class, [`HubAuthenticated`][hubauthenticated],
for quickly authenticating your own tornado services with JupyterHub. for quickly authenticating your own tornado services with JupyterHub.
Tornado's `@web.authenticated` method calls a Handler's `.get_current_user` Tornado's `@web.authenticated` method calls a Handler's `.get_current_user`
@@ -310,7 +306,6 @@ class MyHandler(HubAuthenticated, web.RequestHandler):
... ...
``` ```
The HubAuth will automatically load the desired configuration from the Service The HubAuth will automatically load the desired configuration from the Service
environment variables. environment variables.
@@ -320,13 +315,12 @@ username and user group list, respectively. If a user matches neither the user
list nor the group list, they will not be allowed access. If both are left list nor the group list, they will not be allowed access. If both are left
undefined, then any user will be allowed. undefined, then any user will be allowed.
### Implementing your own Authentication with JupyterHub ### Implementing your own Authentication with JupyterHub
If you don't want to use the reference implementation If you don't want to use the reference implementation
(e.g. you find the implementation a poor fit for your Flask app), (e.g. you find the implementation a poor fit for your Flask app),
you can implement authentication via the Hub yourself. you can implement authentication via the Hub yourself.
We recommend looking at the [`HubAuth`][HubAuth] class implementation for reference, We recommend looking at the [`HubAuth`][hubauth] class implementation for reference,
and taking note of the following process: and taking note of the following process:
1. retrieve the cookie `jupyterhub-services` from the request. 1. retrieve the cookie `jupyterhub-services` from the request.
@@ -356,8 +350,7 @@ and taking note of the following process:
```json ```json
{ {
"name": "inara", "name": "inara",
"groups": ["serenity", "guild"], "groups": ["serenity", "guild"]
} }
``` ```
@@ -367,12 +360,11 @@ and an example of its configuration is found [here](https://github.com/jupyter/n
nbviewer can also be run as a Hub-Managed Service as described in the [nbviewer README][nbviewer example] nbviewer can also be run as a Hub-Managed Service as described in the [nbviewer README][nbviewer example]
section on securing the notebook viewer. section on securing the notebook viewer.
[requests]: http://docs.python-requests.org/en/master/ [requests]: http://docs.python-requests.org/en/master/
[services_auth]: ../api/services.auth.html [services_auth]: ../api/services.auth.html
[HubAuth]: ../api/services.auth.html#jupyterhub.services.auth.HubAuth [hubauth]: ../api/services.auth.html#jupyterhub.services.auth.HubAuth
[HubAuth.user_for_cookie]: ../api/services.auth.html#jupyterhub.services.auth.HubAuth.user_for_cookie [hubauth.user_for_cookie]: ../api/services.auth.html#jupyterhub.services.auth.HubAuth.user_for_cookie
[HubAuth.user_for_token]: ../api/services.auth.html#jupyterhub.services.auth.HubAuth.user_for_token [hubauth.user_for_token]: ../api/services.auth.html#jupyterhub.services.auth.HubAuth.user_for_token
[HubAuthenticated]: ../api/services.auth.html#jupyterhub.services.auth.HubAuthenticated [hubauthenticated]: ../api/services.auth.html#jupyterhub.services.auth.HubAuthenticated
[nbviewer example]: https://github.com/jupyter/nbviewer#securing-the-notebook-viewer [nbviewer example]: https://github.com/jupyter/nbviewer#securing-the-notebook-viewer
[jupyterhub_idle_culler]: https://github.com/jupyterhub/jupyterhub-idle-culler [jupyterhub_idle_culler]: https://github.com/jupyterhub/jupyterhub-idle-culler

View File

@@ -8,18 +8,17 @@ and a custom Spawner needs to be able to take three actions:
- poll whether the process is still running - poll whether the process is still running
- stop the process - stop the process
## Examples ## Examples
Custom Spawners for JupyterHub can be found on the [JupyterHub wiki](https://github.com/jupyterhub/jupyterhub/wiki/Spawners). Custom Spawners for JupyterHub can be found on the [JupyterHub wiki](https://github.com/jupyterhub/jupyterhub/wiki/Spawners).
Some examples include: Some examples include:
- [DockerSpawner](https://github.com/jupyterhub/dockerspawner) for spawning user servers in Docker containers - [DockerSpawner](https://github.com/jupyterhub/dockerspawner) for spawning user servers in Docker containers
* `dockerspawner.DockerSpawner` for spawning identical Docker containers for - `dockerspawner.DockerSpawner` for spawning identical Docker containers for
each user each user
* `dockerspawner.SystemUserSpawner` for spawning Docker containers with an - `dockerspawner.SystemUserSpawner` for spawning Docker containers with an
environment and home directory for each user environment and home directory for each user
* both `DockerSpawner` and `SystemUserSpawner` also work with Docker Swarm for - both `DockerSpawner` and `SystemUserSpawner` also work with Docker Swarm for
launching containers on remote machines launching containers on remote machines
- [SudoSpawner](https://github.com/jupyterhub/sudospawner) enables JupyterHub to - [SudoSpawner](https://github.com/jupyterhub/sudospawner) enables JupyterHub to
run without being root, by spawning an intermediate process via `sudo` run without being root, by spawning an intermediate process via `sudo`
@@ -30,7 +29,6 @@ Some examples include:
- [SSHSpawner](https://github.com/NERSC/sshspawner) to spawn notebooks - [SSHSpawner](https://github.com/NERSC/sshspawner) to spawn notebooks
on a remote server using SSH on a remote server using SSH
## Spawner control methods ## Spawner control methods
### Spawner.start ### Spawner.start
@@ -41,7 +39,7 @@ an object encapsulating the user's name, authentication, and server info.
The return value of `Spawner.start` should be the (ip, port) of the running server. The return value of `Spawner.start` should be the (ip, port) of the running server.
**NOTE:** When writing coroutines, *never* `yield` in between a database change and a commit. **NOTE:** When writing coroutines, _never_ `yield` in between a database change and a commit.
Most `Spawner.start` functions will look similar to this example: Most `Spawner.start` functions will look similar to this example:
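The example itself is truncated in this diff; the following is a hedged reconstruction of its general shape, where `_start_process_somehow` is a placeholder for the backend-specific launch step:

```python
from jupyterhub.spawner import Spawner
from jupyterhub.utils import random_port


class MySpawner(Spawner):
    async def start(self):
        self.ip = '127.0.0.1'
        self.port = random_port()
        env = self.get_env()                    # required single-user server env vars
        cmd = list(self.cmd) + self.get_args()  # typically ['jupyterhub-singleuser', ...]
        await self._start_process_somehow(cmd, env)  # placeholder for the real launch
        return (self.ip, self.port)
```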
@@ -80,7 +78,6 @@ to check if the local process is still running. On Windows, it uses `psutil.pid_
`Spawner.stop` should stop the process. It must be a tornado coroutine, which should return when the process has finished exiting. `Spawner.stop` should stop the process. It must be a tornado coroutine, which should return when the process has finished exiting.
## Spawner state ## Spawner state
JupyterHub should be able to stop and restart without tearing down JupyterHub should be able to stop and restart without tearing down
@@ -112,7 +109,6 @@ def clear_state(self):
self.pid = 0 self.pid = 0
``` ```
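As a hedged companion to the `clear_state` fragment above (assuming the same pid-based spawner, and written in the same bare-method style), the usual `get_state`/`load_state` pair looks like this:

```python
def get_state(self):
    state = super().get_state()
    if self.pid:
        state['pid'] = self.pid
    return state


def load_state(self, state):
    super().load_state(state)
    if 'pid' in state:
        self.pid = state['pid']
```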
## Spawner options form ## Spawner options form
(new in 0.4) (new in 0.4)
@@ -170,8 +166,7 @@ which would return:
When `Spawner.start` is called, this dictionary is accessible as `self.user_options`. When `Spawner.start` is called, this dictionary is accessible as `self.user_options`.
[spawner]: https://github.com/jupyterhub/jupyterhub/blob/master/jupyterhub/spawner.py
[Spawner]: https://github.com/jupyterhub/jupyterhub/blob/master/jupyterhub/spawner.py
## Writing a custom spawner ## Writing a custom spawner
@@ -212,7 +207,6 @@ Additionally, configurable attributes for your spawner will
appear in jupyterhub help output and auto-generated configuration files appear in jupyterhub help output and auto-generated configuration files
via `jupyterhub --generate-config`. via `jupyterhub --generate-config`.
## Spawners, resource limits, and guarantees (Optional) ## Spawners, resource limits, and guarantees (Optional)
Some spawners of the single-user notebook servers allow setting limits or Some spawners of the single-user notebook servers allow setting limits or
@@ -224,10 +218,9 @@ support for them**. For example, LocalProcessSpawner, the default
spawner, does not support limits and guarantees. One of the spawners spawner, does not support limits and guarantees. One of the spawners
that supports limits and guarantees is the `systemdspawner`. that supports limits and guarantees is the `systemdspawner`.
### Memory Limits & Guarantees ### Memory Limits & Guarantees
`c.Spawner.mem_limit`: A **limit** specifies the *maximum amount of memory* `c.Spawner.mem_limit`: A **limit** specifies the _maximum amount of memory_
that may be allocated, though there is no promise that the maximum amount will that may be allocated, though there is no promise that the maximum amount will
be available. In supported spawners, you can set `c.Spawner.mem_limit` to be available. In supported spawners, you can set `c.Spawner.mem_limit` to
limit the total amount of memory that a single-user notebook server can limit the total amount of memory that a single-user notebook server can
@@ -235,8 +228,8 @@ allocate. Attempting to use more memory than this limit will cause errors. The
single-user notebook server can discover its own memory limit by looking at single-user notebook server can discover its own memory limit by looking at
the environment variable `MEM_LIMIT`, which is specified in absolute bytes. the environment variable `MEM_LIMIT`, which is specified in absolute bytes.
`c.Spawner.mem_guarantee`: Sometimes, a **guarantee** of a *minimum amount of `c.Spawner.mem_guarantee`: Sometimes, a **guarantee** of a _minimum amount of
memory* is desirable. In this case, you can set `c.Spawner.mem_guarantee` to memory_ is desirable. In this case, you can set `c.Spawner.mem_guarantee` to
to provide a guarantee that at minimum this much memory will always be to provide a guarantee that at minimum this much memory will always be
available for the single-user notebook server to use. The environment variable available for the single-user notebook server to use. The environment variable
`MEM_GUARANTEE` will also be set in the single-user notebook server. `MEM_GUARANTEE` will also be set in the single-user notebook server.
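A hedged sketch of the two settings in `jupyterhub_config.py`; the values are illustrative, and only spawners that support limits (such as `systemdspawner`) will enforce them:

```python
c.Spawner.mem_limit = '2G'        # maximum; exported to the server as MEM_LIMIT (in bytes)
c.Spawner.mem_guarantee = '512M'  # minimum reserved; exported as MEM_GUARANTEE (in bytes)
```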

View File

@@ -52,10 +52,7 @@ text about the server starting up, place this content in a file named
`JupyterHub.template_paths` configuration option. `JupyterHub.template_paths` configuration option.
```html ```html
{% extends "templates/spawn_pending.html" %} {% extends "templates/spawn_pending.html" %} {% block message %} {{ super() }}
{% block message %}
{{ super() }}
<p>Patience is a virtue.</p> <p>Patience is a virtue.</p>
{% endblock %} {% endblock %}
``` ```
@@ -69,8 +66,7 @@ To add announcements to be displayed on a page, you have two options:
### Announcement Configuration Variables ### Announcement Configuration Variables
If you set the configuration variable `JupyterHub.template_vars = If you set the configuration variable `JupyterHub.template_vars = {'announcement': 'some_text'}`, the given `some_text` will be placed on
{'announcement': 'some_text'}`, the given `some_text` will be placed on
the top of all pages. The more specific variables the top of all pages. The more specific variables
`announcement_login`, `announcement_spawn`, `announcement_home`, and `announcement_login`, `announcement_spawn`, `announcement_home`, and
`announcement_logout` only show on their `announcement_logout` only show on their
@@ -84,8 +80,7 @@ to update the messages without restarting. Set
template (for example, `login.html`) with: template (for example, `login.html`) with:
```html ```html
{% extends "templates/login.html" %} {% extends "templates/login.html" %} {% set announcement = 'some message' %}
{% set announcement = 'some message' %}
``` ```
Extending `page.html` puts the message on all pages, but note that Extending `page.html` puts the message on all pages, but note that

View File

@@ -11,8 +11,6 @@ All authenticated handlers redirect to `/hub/login` to login users
prior to being redirected back to the originating page. prior to being redirected back to the originating page.
The returned request should preserve all query parameters. The returned request should preserve all query parameters.
## `/` ## `/`
The top-level request is always a simple redirect to `/hub/`, The top-level request is always a simple redirect to `/hub/`,
@@ -61,7 +59,7 @@ for starting and stopping the user's server.
If named servers are enabled, there will be some additional If named servers are enabled, there will be some additional
tools for management of named servers. tools for management of named servers.
*Version added: 1.0* named server UI is new in 1.0. _Version added: 1.0_ named server UI is new in 1.0.
## `/hub/login` ## `/hub/login`
@@ -111,7 +109,7 @@ not the Hub.
The username is the first part and, if using named servers, The username is the first part and, if using named servers,
the server name is the second part. the server name is the second part.
If the user's server is *not* running, this will be redirected to `/hub/user/:username/...` If the user's server is _not_ running, this will be redirected to `/hub/user/:username/...`
## `/hub/user/:username[/:servername]` ## `/hub/user/:username[/:servername]`
@@ -146,7 +144,7 @@ without additional user action (i.e. clicking the link on the page)
![Visiting a URL for a server that's not running](../images/not-running.png) ![Visiting a URL for a server that's not running](../images/not-running.png)
*Version changed: 1.0* _Version changed: 1.0_
Prior to 1.0, this URL itself was responsible for spawning servers, Prior to 1.0, this URL itself was responsible for spawning servers,
and served the progress page if it was pending, and served the progress page if it was pending,
@@ -165,7 +163,7 @@ indicating how to spawn the server.
This is meant to help applications such as JupyterLab This is meant to help applications such as JupyterLab
that are connected to a server that has stopped. that are connected to a server that has stopped.
*Version changed: 1.0* _Version changed: 1.0_
JupyterHub 0.9 failed these API requests with status 404, JupyterHub 0.9 failed these API requests with status 404,
but 1.0 uses 503. but 1.0 uses 503.
@@ -207,12 +205,12 @@ and a POST request will trigger the actual spawn and redirect.
![The spawn form](../images/spawn-form.png) ![The spawn form](../images/spawn-form.png)
*Version added: 1.0* _Version added: 1.0_
1.0 adds the ability to specify username and servername. 1.0 adds the ability to specify username and servername.
Prior to 1.0, only `/hub/spawn` was recognized for the default server. Prior to 1.0, only `/hub/spawn` was recognized for the default server.
*Version changed: 1.0* _Version changed: 1.0_
Prior to 1.0, this page redirected back to `/hub/user/:username`, Prior to 1.0, this page redirected back to `/hub/user/:username`,
which was responsible for triggering spawn and rendering progress, etc. which was responsible for triggering spawn and rendering progress, etc.
@@ -221,7 +219,7 @@ which was responsible for triggering spawn and rendering progress, etc.
![The spawn pending page](../images/spawn-pending.png) ![The spawn pending page](../images/spawn-pending.png)
*Version added: 1.0* this URL is new in JupyterHub 1.0. _Version added: 1.0_ this URL is new in JupyterHub 1.0.
This page renders the progress view for the given spawn request. This page renders the progress view for the given spawn request.
Once the server is ready, Once the server is ready,

View File

@@ -12,17 +12,17 @@ works.
## Semi-trusted and untrusted users ## Semi-trusted and untrusted users
JupyterHub is designed to be a *simple multi-user server for modestly sized JupyterHub is designed to be a _simple multi-user server for modestly sized
groups* of **semi-trusted** users. While the design reflects serving semi-trusted groups_ of **semi-trusted** users. While the design reflects serving semi-trusted
users, JupyterHub is not necessarily unsuitable for serving **untrusted** users. users, JupyterHub is not necessarily unsuitable for serving **untrusted** users.
Using JupyterHub with **untrusted** users does mean more work by the Using JupyterHub with **untrusted** users does mean more work by the
administrator. Much care is required to secure a Hub, with extra caution on administrator. Much care is required to secure a Hub, with extra caution on
protecting users from each other as the Hub is serving untrusted users. protecting users from each other as the Hub is serving untrusted users.
One aspect of JupyterHub's *design simplicity* for **semi-trusted** users is that One aspect of JupyterHub's _design simplicity_ for **semi-trusted** users is that
the Hub and single-user servers are placed in a *single domain*, behind a the Hub and single-user servers are placed in a _single domain_, behind a
[*proxy*][configurable-http-proxy]. If the Hub is serving untrusted [_proxy_][configurable-http-proxy]. If the Hub is serving untrusted
users, many of the web's cross-site protections are not applied between users, many of the web's cross-site protections are not applied between
single-user servers and the Hub, or between single-user servers and each single-user servers and the Hub, or between single-user servers and each
other, since browsers see the whole thing (proxy, Hub, and single user other, since browsers see the whole thing (proxy, Hub, and single user
@@ -40,7 +40,7 @@ server.
To protect all users from each other, JupyterHub administrators must To protect all users from each other, JupyterHub administrators must
ensure that: ensure that:
* A user **does not have permission** to modify their single-user notebook server, - A user **does not have permission** to modify their single-user notebook server,
including: including:
- A user **may not** install new packages in the Python environment that runs - A user **may not** install new packages in the Python environment that runs
their single-user server. their single-user server.
@@ -49,11 +49,11 @@ ensure that:
directory that precedes the directory containing `jupyterhub-singleuser`. directory that precedes the directory containing `jupyterhub-singleuser`.
- A user may not modify environment variables (e.g. PATH, PYTHONPATH) for - A user may not modify environment variables (e.g. PATH, PYTHONPATH) for
their single-user server. their single-user server.
* A user **may not** modify the configuration of the notebook server - A user **may not** modify the configuration of the notebook server
(the `~/.jupyter` or `JUPYTER_CONFIG_DIR` directory). (the `~/.jupyter` or `JUPYTER_CONFIG_DIR` directory).
If any additional services are run on the same domain as the Hub, the services If any additional services are run on the same domain as the Hub, the services
**must never** display user-authored HTML that is neither *sanitized* nor *sandboxed* **must never** display user-authored HTML that is neither _sanitized_ nor _sandboxed_
(e.g. IFramed) to any user that lacks authentication as the author of a file. (e.g. IFramed) to any user that lacks authentication as the author of a file.
## Mitigate security issues ## Mitigate security issues
@@ -85,7 +85,7 @@ admin must enforce.
### Prevent spawners from evaluating shell configuration files ### Prevent spawners from evaluating shell configuration files
For most Spawners, `PATH` is not something users can influence, but care should For most Spawners, `PATH` is not something users can influence, but care should
be taken to ensure that the Spawner does *not* evaluate shell configuration be taken to ensure that the Spawner does _not_ evaluate shell configuration
files prior to launching the server. files prior to launching the server.
### Isolate packages using virtualenv ### Isolate packages using virtualenv
@@ -125,7 +125,6 @@ versions up to date.
A handy website for testing your deployment is A handy website for testing your deployment is
[Qualys' SSL analyzer tool](https://www.ssllabs.com/ssltest/analyze.html). [Qualys' SSL analyzer tool](https://www.ssllabs.com/ssltest/analyze.html).
[configurable-http-proxy]: https://github.com/jupyterhub/configurable-http-proxy [configurable-http-proxy]: https://github.com/jupyterhub/configurable-http-proxy
## Vulnerability reporting ## Vulnerability reporting

View File

@@ -4,17 +4,20 @@ When troubleshooting, you may see unexpected behaviors or receive an error
message. This section provides links for identifying the cause of the message. This section provides links for identifying the cause of the
problem and how to resolve it. problem and how to resolve it.
[*Behavior*](#behavior) [_Behavior_](#behavior)
- JupyterHub proxy fails to start - JupyterHub proxy fails to start
- sudospawner fails to run - sudospawner fails to run
- What is the default behavior when none of the lists (admin, allowed, - What is the default behavior when none of the lists (admin, allowed,
allowed groups) are set? allowed groups) are set?
- JupyterHub Docker container not accessible at localhost - JupyterHub Docker container not accessible at localhost
[*Errors*](#errors) [_Errors_](#errors)
- 500 error after spawning my single-user server - 500 error after spawning my single-user server
[*How do I...?*](#how-do-i) [_How do I...?_](#how-do-i)
- Use a chained SSL certificate - Use a chained SSL certificate
- Install JupyterHub without a network connection - Install JupyterHub without a network connection
- I want access to the whole filesystem, but still default users to their home directory - I want access to the whole filesystem, but still default users to their home directory
@@ -25,7 +28,7 @@ problem and how to resolve it.
- Toree integration with HDFS rack awareness script - Toree integration with HDFS rack awareness script
- Where do I find Docker images and Dockerfiles related to JupyterHub? - Where do I find Docker images and Dockerfiles related to JupyterHub?
[*Troubleshooting commands*](#troubleshooting-commands) [_Troubleshooting commands_](#troubleshooting-commands)
## Behavior ## Behavior
@@ -34,8 +37,8 @@ problem and how to resolve it.
If you have tried to start the JupyterHub proxy and it fails to start: If you have tried to start the JupyterHub proxy and it fails to start:
- check if the JupyterHub IP configuration setting is - check if the JupyterHub IP configuration setting is
``c.JupyterHub.ip = '*'``; if it is, try ``c.JupyterHub.ip = ''`` `c.JupyterHub.ip = '*'`; if it is, try `c.JupyterHub.ip = ''`
- Try starting with ``jupyterhub --ip=0.0.0.0`` - Try starting with `jupyterhub --ip=0.0.0.0`
**Note**: If this occurs on Ubuntu/Debian, check that you are using a **Note**: If this occurs on Ubuntu/Debian, check that you are using a
recent version of node. Some versions of Ubuntu/Debian come with a version recent version of node. Some versions of Ubuntu/Debian come with a version
@@ -132,11 +135,11 @@ There are two likely reasons for this:
1. The single-user server cannot connect to the Hub's API (networking 1. The single-user server cannot connect to the Hub's API (networking
configuration problems) configuration problems)
2. The single-user server cannot *authenticate* its requests (invalid token) 2. The single-user server cannot _authenticate_ its requests (invalid token)
#### Symptoms #### Symptoms
The main symptom is a failure to load *any* page served by the single-user The main symptom is a failure to load _any_ page served by the single-user
server, met with a 500 error. This is typically the first page at `/user/<your_name>` server, met with a 500 error. This is typically the first page at `/user/<your_name>`
after logging in or clicking "Start my server". When a single-user notebook server after logging in or clicking "Start my server". When a single-user notebook server
receives a request, the notebook server makes an API request to the Hub to receives a request, the notebook server makes an API request to the Hub to
@@ -198,15 +201,15 @@ your server again.
##### Proxy settings (403 GET) ##### Proxy settings (403 GET)
When your whole JupyterHub sits behind an organization proxy (*not* a reverse proxy like NGINX as part of your setup and *not* the configurable-http-proxy) the environment variables `HTTP_PROXY`, `HTTPS_PROXY`, `http_proxy` and `https_proxy` might be set. This confuses the jupyterhub-singleuser servers: when connecting to the Hub for authorization they connect via the proxy instead of directly connecting to the Hub on localhost. The proxy might deny the request (403 GET). This results in the singleuser server thinking it has an invalid auth token. To circumvent this, you should add `<hub_url>,<hub_ip>,localhost,127.0.0.1` to the environment variables `NO_PROXY` and `no_proxy`. When your whole JupyterHub sits behind an organization proxy (_not_ a reverse proxy like NGINX as part of your setup and _not_ the configurable-http-proxy) the environment variables `HTTP_PROXY`, `HTTPS_PROXY`, `http_proxy` and `https_proxy` might be set. This confuses the jupyterhub-singleuser servers: when connecting to the Hub for authorization they connect via the proxy instead of directly connecting to the Hub on localhost. The proxy might deny the request (403 GET). This results in the singleuser server thinking it has an invalid auth token. To circumvent this, you should add `<hub_url>,<hub_ip>,localhost,127.0.0.1` to the environment variables `NO_PROXY` and `no_proxy`.
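One way to apply this to the spawned single-user servers is through the spawner's environment configuration. A minimal sketch, assuming a standard `jupyterhub_config.py` and placeholder host values (substitute your actual Hub hostname and IP):

```python
# placeholders: replace yourhub.example.org / 10.0.0.5 with your Hub's hostname and IP
no_proxy_hosts = "yourhub.example.org,10.0.0.5,localhost,127.0.0.1"
c.Spawner.environment = {
    "NO_PROXY": no_proxy_hosts,
    "no_proxy": no_proxy_hosts,
}
```

The JupyterHub process itself may also need the same variables exported in its own environment before launch.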
### Launching Jupyter Notebooks to run as an externally managed JupyterHub service with the `jupyterhub-singleuser` command returns a `JUPYTERHUB_API_TOKEN` error ### Launching Jupyter Notebooks to run as an externally managed JupyterHub service with the `jupyterhub-singleuser` command returns a `JUPYTERHUB_API_TOKEN` error
[JupyterHub services](https://jupyterhub.readthedocs.io/en/stable/reference/services.html) allow processes to interact with JupyterHub's REST API. Example use-cases include: [JupyterHub services](https://jupyterhub.readthedocs.io/en/stable/reference/services.html) allow processes to interact with JupyterHub's REST API. Example use-cases include:
* **Secure Testing**: provide a canonical Jupyter Notebook for testing production data to reduce the number of entry points into production systems. - **Secure Testing**: provide a canonical Jupyter Notebook for testing production data to reduce the number of entry points into production systems.
* **Grading Assignments**: provide access to shared Jupyter Notebooks that may be used for management tasks such as grading assignments. - **Grading Assignments**: provide access to shared Jupyter Notebooks that may be used for management tasks such as grading assignments.
* **Private Dashboards**: share dashboards with certain group members. - **Private Dashboards**: share dashboards with certain group members.
If possible, try to run the Jupyter Notebook as an externally managed service with one of the provided [jupyter/docker-stacks](https://github.com/jupyter/docker-stacks). If possible, try to run the Jupyter Notebook as an externally managed service with one of the provided [jupyter/docker-stacks](https://github.com/jupyter/docker-stacks).
@@ -250,7 +253,6 @@ You would then set in your `jupyterhub_config.py` file the `ssl_key` and
c.JupyterHub.ssl_cert = your_host-chained.crt c.JupyterHub.ssl_cert = your_host-chained.crt
c.JupyterHub.ssl_key = your_host.key c.JupyterHub.ssl_key = your_host.key
#### Example #### Example
Your certificate provider gives you the following files: `example_host.crt`, Your certificate provider gives you the following files: `example_host.crt`,
@@ -402,8 +404,8 @@ SyntaxError: Missing parentheses in call to 'print'
In order to resolve this issue, there are two potential options. In order to resolve this issue, there are two potential options.
1. Update HDFS core-site.xml, so the parameter "net.topology.script.file.name" points to a custom 1. Update HDFS core-site.xml, so the parameter "net.topology.script.file.name" points to a custom
script (e.g. /etc/hadoop/conf/custom_topology_script.py). Copy the original script and change the first line to point script (e.g. /etc/hadoop/conf/custom_topology_script.py). Copy the original script and change the first line to point
to a Python 2 installation (e.g. /usr/bin/python), as sketched after this list. to a Python 2 installation (e.g. /usr/bin/python), as sketched after this list.
2. In spark-env.sh add a Python 2 installation to your path (e.g. export PATH=/opt/anaconda2/bin:$PATH). 2. In spark-env.sh add a Python 2 installation to your path (e.g. export PATH=/opt/anaconda2/bin:$PATH).
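For option 1 above, the edit amounts to replacing the script's first line; a sketch (the interpreter path is illustrative and should point at whichever Python 2 installation exists on your system):

```python
#!/usr/bin/python
# first line of the copied custom_topology_script.py, now pointing at a Python 2 interpreter;
# the remainder of the copied script stays unchanged
```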
### Where do I find Docker images and Dockerfiles related to JupyterHub? ### Where do I find Docker images and Dockerfiles related to JupyterHub?

View File

@@ -5,22 +5,22 @@ do some preparation work in a bootstrapping process.
Common use cases are: Common use cases are:
*Providing writeable storage for LDAP users* _Providing writeable storage for LDAP users_
Your Jupyterhub is configured to use the LDAPAuthenticator and DockerSpawner. Your Jupyterhub is configured to use the LDAPAuthenticator and DockerSpawner.
* The user has no file directory on the host since you are using LDAP. - The user has no file directory on the host since you are using LDAP.
* When a user has no directory and DockerSpawner wants to mount a volume, - When a user has no directory and DockerSpawner wants to mount a volume,
the spawner will use docker to create a directory. the spawner will use docker to create a directory.
Since the docker daemon is running as root, the generated directory for the volume Since the docker daemon is running as root, the generated directory for the volume
mount will not be writeable by the `jovyan` user inside of the container. mount will not be writeable by the `jovyan` user inside of the container.
For the directory to be useful to the user, the permissions on the directory For the directory to be useful to the user, the permissions on the directory
need to be modified for the user to have write access. need to be modified for the user to have write access.
*Prepopulating Content* _Prepopulating Content_
Another use would be to copy initial content, such as tutorial files or reference Another use would be to copy initial content, such as tutorial files or reference
material, into the user's space when a notebook server is newly spawned. material, into the user's space when a notebook server is newly spawned.
You can define your own bootstrap process by implementing a `pre_spawn_hook` on any spawner. You can define your own bootstrap process by implementing a `pre_spawn_hook` on any spawner.
The Spawner itself is passed as a parameter to your hook, and you can easily get the contextual information out of the spawning process. The Spawner itself is passed as a parameter to your hook, and you can easily get the contextual information out of the spawning process.
@@ -28,7 +28,7 @@ The Spawner itself is passed as parameter to your hook and you can easily get th
Similarly, there may be cases where you would like to clean up after a spawner stops. Similarly, there may be cases where you would like to clean up after a spawner stops.
You may implement a `post_stop_hook` that is always executed after the spawner stops. You may implement a `post_stop_hook` that is always executed after the spawner stops.
If you implement a hook, make sure that it is *idempotent*. It will be executed every time If you implement a hook, make sure that it is _idempotent_. It will be executed every time
a notebook server is spawned to the user. That means you should somehow a notebook server is spawned to the user. That means you should somehow
ensure that things which should run only once are not running again and again. ensure that things which should run only once are not running again and again.
For example, before you create a directory, check if it exists. For example, before you create a directory, check if it exists.
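As a concrete illustration of the hook mechanism described above, here is a minimal, idempotent `pre_spawn_hook` sketch for `jupyterhub_config.py`; the volume path is a placeholder and error handling is omitted:

```python
import os

def bootstrap_pre_spawn(spawner):
    # the Spawner carries the contextual information, e.g. the user being spawned
    username = spawner.user.name
    volume_path = os.path.join('/volumes/jupyterhub', username)
    if not os.path.exists(volume_path):
        # only runs the first time, which keeps the hook idempotent
        os.makedirs(volume_path, mode=0o755)

c.Spawner.pre_spawn_hook = bootstrap_pre_spawn
```

A cleanup function can be attached the same way via `c.Spawner.post_stop_hook`.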

View File

@@ -47,7 +47,6 @@ After logging in with your local-system credentials, you should see a JSON dump
} }
``` ```
The essential pieces for using JupyterHub as an OAuth provider are: The essential pieces for using JupyterHub as an OAuth provider are:
1. registering your service with jupyterhub: 1. registering your service with jupyterhub:

View File

@@ -4,14 +4,14 @@ This example shows how you can connect Jupyterhub to a Postgres database
instead of the default SQLite backend. instead of the default SQLite backend.
### Running Postgres with Jupyterhub on the host. ### Running Postgres with Jupyterhub on the host.
0. Uncomment and replace `ENV JPY_PSQL_PASSWORD arglebargle` with your own 0. Uncomment and replace `ENV JPY_PSQL_PASSWORD arglebargle` with your own
password in the Dockerfile for `examples/postgres/db`. (Alternatively, pass password in the Dockerfile for `examples/postgres/db`. (Alternatively, pass
-e `JPY_PSQL_PASSWORD=<password>` when you start the db container.) -e `JPY_PSQL_PASSWORD=<password>` when you start the db container.)
1. `cd` to the root of your jupyterhub repo. 1. `cd` to the root of your jupyterhub repo.
2. Build the postgres image with `docker build -t jupyterhub-postgres-db 2. Build the postgres image with `docker build -t jupyterhub-postgres-db examples/postgres/db`. This may take a minute or two the first time it's
examples/postgres/db`. This may take a minute or two the first time it's
run. run.
3. Run the db image with `docker run -d -p 5433:5432 jupyterhub-postgres-db`. 3. Run the db image with `docker run -d -p 5433:5432 jupyterhub-postgres-db`.
@@ -24,20 +24,18 @@ instead of the default SQLite backend.
5. Log in as the user running jupyterhub on your host machine. 5. Log in as the user running jupyterhub on your host machine.
### Running Postgres with Containerized Jupyterhub. ### Running Postgres with Containerized Jupyterhub.
0. Do steps 0-2 from the above section, ensuring that the values set/passed 0. Do steps 0-2 from the above section, ensuring that the values set/passed
for `JPY_PSQL_PASSWORD` match for the hub and db containers. for `JPY_PSQL_PASSWORD` match for the hub and db containers.
1. Build the hub image with `docker build -t jupyterhub-postgres-hub 1. Build the hub image with `docker build -t jupyterhub-postgres-hub examples/postgres/hub`. This may take a minute or two the first time it's
examples/postgres/hub`. This may take a minute or two the first time it's
run. run.
2. Run the db image with `docker run -d --name=jpy-db 2. Run the db image with `docker run -d --name=jpy-db jupyterhub-postgres-db`. Note that, unlike when connecting to a host machine
jupyterhub-postgres-db`. Note that, unlike when connecting to a host machine
jupyterhub, we don't specify a port-forwarding scheme here, but we do need jupyterhub, we don't specify a port-forwarding scheme here, but we do need
to specify a name for the container. to specify a name for the container.
3. Run the containerized hub with `docker run -it --link jpy-db:postgres 3. Run the containerized hub with `docker run -it --link jpy-db:postgres jupyterhub-postgres-hub`. This instructs docker to run the hub container
jupyterhub-postgres-hub`. This instructs docker to run the hub container
with a link to the already-running db container, which will forward with a link to the already-running db container, which will forward
environment and connection information from the DB to the hub. environment and connection information from the DB to the hub.
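Whichever variant you run, the hub ultimately needs its database URL pointed at the Postgres container. A hedged sketch for `jupyterhub_config.py` (user name, database name, and port are placeholders chosen to mirror the host-mode port mapping above):

```python
import os

# placeholder connection string; adjust user, database, host, and port to your setup
c.JupyterHub.db_url = 'postgresql://jupyterhub:{}@localhost:5433/jupyterhub'.format(
    os.environ['JPY_PSQL_PASSWORD']
)
```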

View File

@@ -1,4 +1,3 @@
# Simple Announcement Service Example # Simple Announcement Service Example
This is a simple service that allows administrators to manage announcements This is a simple service that allows administrators to manage announcements

View File

@@ -1,14 +1,9 @@
{% extends "templates/page.html" %} {% extends "templates/page.html" %} {% block announcement %}
{% block announcement %} <div class="container text-center announcement"></div>
<div class="container text-center announcement"> {% endblock %} {% block script %} {{ super() }}
</div>
{% endblock %}
{% block script %}
{{ super() }}
<script> <script>
$.get("/services/announcement/", function(data) { $.get("/services/announcement/", function (data) {
$(".announcement").html(data["announcement"]); $(".announcement").html(data["announcement"]);
}); });
</script> </script>
{% endblock %} {% endblock %}

View File

@@ -20,5 +20,5 @@ In the external example, some extra steps are required to set up supervisor:
1. select a system user to run the service. This is a user on the system, and does not need to be a Hub user. Add this to the user field in `shared-notebook.conf`, replacing `someuser`. 1. select a system user to run the service. This is a user on the system, and does not need to be a Hub user. Add this to the user field in `shared-notebook.conf`, replacing `someuser`.
2. generate a secret token for authentication, and replace the `super-secret` fields in `shared-notebook-service` and `jupyterhub_config.py` 2. generate a secret token for authentication, and replace the `super-secret` fields in `shared-notebook-service` and `jupyterhub_config.py`
3. install `shared-notebook-service` somewhere on your system, and update `/path/to/shared-notebook-service` to the absolute path of this destination 3. install `shared-notebook-service` somewhere on your system, and update `/path/to/shared-notebook-service` to the absolute path of this destination
3. copy `shared-notebook.conf` to `/etc/supervisor/conf.d/` 4. copy `shared-notebook.conf` to `/etc/supervisor/conf.d/`
4. `supervisorctl reload` 5. `supervisorctl reload`
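For orientation, the `jupyterhub_config.py` side of an externally managed service typically boils down to a single service entry; a sketch (the name, port, and token are placeholders to be replaced as described in the steps above):

```python
c.JupyterHub.services = [
    {
        'name': 'shared-notebook',
        'url': 'http://127.0.0.1:9999',
        'api_token': 'super-secret',  # replace with the token generated in step 2
    }
]
```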

View File

@@ -29,5 +29,4 @@ A similar service could be run externally, by setting the JupyterHub service env
JUPYTERHUB_API_TOKEN JUPYTERHUB_API_TOKEN
JUPYTERHUB_SERVICE_PREFIX JUPYTERHUB_SERVICE_PREFIX
[flask]: http://flask.pocoo.org [flask]: http://flask.pocoo.org

View File

@@ -1,7 +1,9 @@
"""Base API handlers""" """Base API handlers"""
# Copyright (c) Jupyter Development Team. # Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License. # Distributed under the terms of the Modified BSD License.
import functools
import json import json
import re
from datetime import datetime from datetime import datetime
from http.client import responses from http.client import responses
@@ -9,6 +11,7 @@ from sqlalchemy.exc import SQLAlchemyError
from tornado import web from tornado import web
from .. import orm from .. import orm
from .. import scopes
from ..handlers import BaseHandler from ..handlers import BaseHandler
from ..utils import isoformat from ..utils import isoformat
from ..utils import url_path_join from ..utils import url_path_join
@@ -62,6 +65,38 @@ class APIHandler(BaseHandler):
return False return False
return True return True
@functools.lru_cache()
def get_scope_filter(self, req_scope):
"""Produce a filter for `*ListAPIHandlers* so that GET method knows which models to return.
Filter is a callable that takes a resource name and outputs true or false"""
try:
sub_scope = self.parsed_scopes[req_scope]
except AttributeError:
raise web.HTTPError(
403,
"Resource scope %s (that was just accessed) not found in parsed scope model"
% req_scope,
)
def has_access(orm_resource, kind):
"""
param orm_resource: User or Service or Group
param kind: 'user' or 'service' or 'group'
"""
if sub_scope == scopes.Scope.ALL:
return True
else:
found_resource = orm_resource.name in sub_scope[kind]
if not found_resource: # Try group-based access
if 'group' in sub_scope and kind == 'user':
group_names = {group.name for group in orm_resource.groups}
user_in_group = bool(group_names & set(sub_scope['group']))
found_resource = user_in_group
return found_resource
return has_access
def get_current_user_cookie(self): def get_current_user_cookie(self):
"""Override get_user_cookie to check Referer header""" """Override get_user_cookie to check Referer header"""
cookie_user = super().get_current_user_cookie() cookie_user = super().get_current_user_cookie()
@@ -189,7 +224,6 @@ class APIHandler(BaseHandler):
"""Get the JSON model for a User object""" """Get the JSON model for a User object"""
if isinstance(user, orm.User): if isinstance(user, orm.User):
user = self.users[user.id] user = self.users[user.id]
model = { model = {
'kind': 'user', 'kind': 'user',
'name': user.name, 'name': user.name,
@@ -201,22 +235,41 @@ class APIHandler(BaseHandler):
'created': isoformat(user.created), 'created': isoformat(user.created),
'last_activity': isoformat(user.last_activity), 'last_activity': isoformat(user.last_activity),
} }
if '' in user.spawners: access_map = {
'read:users': set(model.keys()), # All available components
'read:users:name': {'kind', 'name'},
'read:users:groups': {'kind', 'name', 'groups'},
'read:users:activity': {'kind', 'name', 'last_activity'},
'read:users:servers': {'kind', 'name', 'servers'},
}
self.log.debug(
"Asking for user model of %s with scopes [%s]",
user.name,
", ".join(self.raw_scopes),
)
allowed_keys = set()
for scope in access_map:
if scope in self.parsed_scopes:
scope_filter = self.get_scope_filter(scope)
if scope_filter(user, kind='user'):
allowed_keys |= access_map[scope]
model = {key: model[key] for key in allowed_keys if key in model}
if model:
if '' in user.spawners and 'pending' in allowed_keys:
model['pending'] = user.spawners[''].pending model['pending'] = user.spawners[''].pending
if include_servers and 'servers' in allowed_keys:
if not include_servers: # Todo: Replace include_state with scope (read|admin):users:auth_state
model['servers'] = None
return model
servers = model['servers'] = {} servers = model['servers'] = {}
for name, spawner in user.spawners.items(): for name, spawner in user.spawners.items():
# include 'active' servers, not just ready # include 'active' servers, not just ready
# (this includes pending events) # (this includes pending events)
if spawner.active: if spawner.active:
servers[name] = self.server_model(spawner, include_state=include_state) servers[name] = self.server_model(
spawner, include_state=include_state
)
return model return model
def group_model(self, group): def group_model(self, group): # Todo: make consistent to do scope checking here
"""Get the JSON model for a Group object""" """Get the JSON model for a Group object"""
return { return {
'kind': 'group', 'kind': 'group',
@@ -225,7 +278,7 @@ class APIHandler(BaseHandler):
'roles': [r.name for r in group.roles], 'roles': [r.name for r in group.roles],
} }
def service_model(self, service): def service_model(self, service): # Todo: make consistent to do scope checking here
"""Get the JSON model for a Service object""" """Get the JSON model for a Service object"""
return { return {
'kind': 'service', 'kind': 'service',

View File

@@ -35,12 +35,11 @@ class _GroupAPIHandler(APIHandler):
class GroupListAPIHandler(_GroupAPIHandler): class GroupListAPIHandler(_GroupAPIHandler):
@needs_scope('read:groups') @needs_scope('read:groups')
def get(self, scope_filter=None): def get(self):
"""List groups""" """List groups"""
groups = self.db.query(orm.Group) groups = self.db.query(orm.Group)
if scope_filter is not None: scope_filter = self.get_scope_filter('read:groups')
groups = groups.filter(orm.Group.name.in_(scope_filter)) data = [self.group_model(g) for g in groups if scope_filter(g, kind='group')]
data = [self.group_model(g) for g in groups]
self.write(json.dumps(data)) self.write(json.dumps(data))
@needs_scope('admin:groups') @needs_scope('admin:groups')

View File

@@ -30,16 +30,19 @@ def service_model(service):
class ServiceListAPIHandler(APIHandler): class ServiceListAPIHandler(APIHandler):
@needs_scope('read:services') @needs_scope('read:services')
def get(self, scope_filter=None): def get(self):
data = {name: service_model(service) for name, service in self.services.items()} scope_filter = self.get_scope_filter('read:services')
if scope_filter is not None: data = {
data = dict(filter(lambda tup: tup[0] in scope_filter, data.items())) name: service_model(service)
for name, service in self.services.items()
if scope_filter(service, kind='service')
}
self.write(json.dumps(data)) self.write(json.dumps(data))
def admin_or_self(method): def admin_or_self(method):
"""Decorator for restricting access to either the target service or admin""" """Decorator for restricting access to either the target service or admin"""
"""***Deprecated in favor of RBAC, use scope-based decorator***""" """***Deprecated in favor of RBAC. Use scope-based decorator***"""
def decorated_method(self, name): def decorated_method(self, name):
current = self.current_user current = self.current_user

View File

@@ -53,8 +53,14 @@ class UserListAPIHandler(APIHandler):
user = self.users[orm_user] user = self.users[orm_user]
return any(spawner.ready for spawner in user.spawners.values()) return any(spawner.ready for spawner in user.spawners.values())
@needs_scope('read:users') @needs_scope(
def get(self, scope_filter=None): 'read:users',
'read:users:name',
'read:users:servers',
'read:users:groups',
'read:users:activity',
)
def get(self):
state_filter = self.get_argument("state", None) state_filter = self.get_argument("state", None)
# post_filter # post_filter
@@ -95,14 +101,14 @@ class UserListAPIHandler(APIHandler):
else: else:
# no filter, return all users # no filter, return all users
query = self.db.query(orm.User) query = self.db.query(orm.User)
if scope_filter is not None: data = []
query = query.filter(orm.User.name.in_(scope_filter)) for u in query:
if post_filter is None or post_filter(u):
data = [ user_model = self.user_model(
self.user_model(u, include_servers=True, include_state=True) u, include_servers=True, include_state=True
for u in query )
if (post_filter is None or post_filter(u)) if user_model:
] data.append(user_model)
self.write(json.dumps(data)) self.write(json.dumps(data))
@needs_scope('admin:users') @needs_scope('admin:users')

View File

@@ -185,6 +185,13 @@ class Authenticator(LoggingConfigurable):
""" """
) )
def get_custom_html(self, base_url):
"""Get custom HTML for the authenticator.
.. versionadded:: 1.4
"""
return self.custom_html
login_service = Unicode( login_service = Unicode(
help=""" help="""
Name of the login service that this authenticator is providing, used to authenticate users. Name of the login service that this authenticator is providing, used to authenticate users.

View File

@@ -19,9 +19,9 @@ description: |
2. Events are only recorded when an action succeeds. 2. Events are only recorded when an action succeeds.
type: object type: object
required: required:
- action - action
- username - username
- servername - servername
properties: properties:
action: action:
enum: enum:

View File

@@ -31,6 +31,7 @@ from tornado.web import RequestHandler
from .. import __version__ from .. import __version__
from .. import orm from .. import orm
from .. import roles from .. import roles
from .. import scopes
from ..metrics import PROXY_ADD_DURATION_SECONDS from ..metrics import PROXY_ADD_DURATION_SECONDS
from ..metrics import PROXY_DELETE_DURATION_SECONDS from ..metrics import PROXY_DELETE_DURATION_SECONDS
from ..metrics import ProxyDeleteStatus from ..metrics import ProxyDeleteStatus
@@ -80,13 +81,13 @@ class BaseHandler(RequestHandler):
The current user (None if not logged in) may be accessed The current user (None if not logged in) may be accessed
via the `self.current_user` property during the handling of any request. via the `self.current_user` property during the handling of any request.
""" """
self.scopes = set() self.raw_scopes = set()
try: try:
await self.get_current_user() await self.get_current_user()
except Exception: except Exception:
self.log.exception("Failed to get current user") self.log.exception("Failed to get current user")
self._jupyterhub_user = None self._jupyterhub_user = None
self._resolve_scopes()
return await maybe_future(super().prepare()) return await maybe_future(super().prepare())
@property @property
@@ -352,16 +353,20 @@ class BaseHandler(RequestHandler):
auth_info['auth_state'] = await user.get_auth_state() auth_info['auth_state'] = await user.get_auth_state()
return await self.auth_to_user(auth_info, user) return await self.auth_to_user(auth_info, user)
def get_current_user_token(self): def get_token(self):
"""get_current_user from Authorization header token""" """get token from authorization header"""
token = self.get_auth_token() token = self.get_auth_token()
if token is None: if token is None:
return None return None
orm_token = orm.APIToken.find(self.db, token) orm_token = orm.APIToken.find(self.db, token)
return orm_token
def get_current_user_token(self):
"""get_current_user from Authorization header token"""
# record token activity
orm_token = self.get_token()
if orm_token is None: if orm_token is None:
return None return None
# record token activity
now = datetime.utcnow() now = datetime.utcnow()
recorded = self._record_activity(orm_token, now) recorded = self._record_activity(orm_token, now)
if orm_token.user: if orm_token.user:
@@ -429,10 +434,27 @@ class BaseHandler(RequestHandler):
# don't let errors here raise more than once # don't let errors here raise more than once
self._jupyterhub_user = None self._jupyterhub_user = None
self.log.exception("Error getting current user") self.log.exception("Error getting current user")
if self._jupyterhub_user is not None or self.get_current_user_oauth_token():
self.scopes = roles.get_subscopes(*self._jupyterhub_user.roles)
return self._jupyterhub_user return self._jupyterhub_user
def _resolve_scopes(self):
self.raw_scopes = set()
app_log.debug("Loading and parsing scopes")
if not self.current_user:
# check for oauth tokens as long as #3380 not merged
user_from_oauth = self.get_current_user_oauth_token()
if user_from_oauth is not None:
self.raw_scopes = {f'read:users!user={user_from_oauth.name}'}
else:
app_log.debug("No user found, no scopes loaded")
else:
api_token = self.get_token()
if api_token:
self.raw_scopes = scopes.get_scopes_for(api_token)
else:
self.raw_scopes = scopes.get_scopes_for(self.current_user)
self.parsed_scopes = scopes.parse_scopes(self.raw_scopes)
app_log.debug("Found scopes [%s]", ",".join(self.raw_scopes))
@property @property
def current_user(self): def current_user(self):
"""Override .current_user accessor from tornado """Override .current_user accessor from tornado
@@ -989,6 +1011,7 @@ class BaseHandler(RequestHandler):
self.log.critical( self.log.critical(
"Aborting due to %i consecutive spawn failures", failure_count "Aborting due to %i consecutive spawn failures", failure_count
) )
# abort in 2 seconds to allow pending handlers to resolve # abort in 2 seconds to allow pending handlers to resolve
# mostly propagating errors for the current failures # mostly propagating errors for the current failures
def abort(): def abort():
@@ -1581,7 +1604,6 @@ class UserUrlHandler(BaseHandler):
if self.subdomain_host: if self.subdomain_host:
target = user.host + target target = user.host + target
referer = self.request.headers.get('Referer', '')
# record redirect count in query parameter # record redirect count in query parameter
if redirects: if redirects:
self.log.warning("Redirect loop detected on %s", self.request.uri) self.log.warning("Redirect loop detected on %s", self.request.uri)
@@ -1593,8 +1615,12 @@ class UserUrlHandler(BaseHandler):
query_parts['redirects'] = redirects + 1 query_parts['redirects'] = redirects + 1
url_parts = url_parts._replace(query=urlencode(query_parts, doseq=True)) url_parts = url_parts._replace(query=urlencode(query_parts, doseq=True))
target = urlunparse(url_parts) target = urlunparse(url_parts)
elif '/user/{}'.format(user.name) in referer or not referer: else:
# add first counter only if it's a redirect from /user/:name -> /hub/user/:name # Start redirect counter.
# This should only occur for redirects from /user/:name -> /hub/user/:name
# when the corresponding server is already ready.
# We don't check this explicitly (direct visits to /hub/user are technically possible),
# but that's now the only normal way to get here.
target = url_concat(target, {'redirects': 1}) target = url_concat(target, {'redirects': 1})
self.redirect(target) self.redirect(target)

View File

@@ -3,6 +3,7 @@
# Distributed under the terms of the Modified BSD License. # Distributed under the terms of the Modified BSD License.
import asyncio import asyncio
from jinja2 import Template
from tornado import web from tornado import web
from tornado.escape import url_escape from tornado.escape import url_escape
from tornado.httputil import url_concat from tornado.httputil import url_concat
@@ -90,17 +91,23 @@ class LoginHandler(BaseHandler):
"""Render the login page.""" """Render the login page."""
def _render(self, login_error=None, username=None): def _render(self, login_error=None, username=None):
return self.render_template( context = {
'login.html', "next": url_escape(self.get_argument('next', default='')),
next=url_escape(self.get_argument('next', default='')), "username": username,
username=username, "login_error": login_error,
login_error=login_error, "login_url": self.settings['login_url'],
custom_html=self.authenticator.custom_html, "authenticator_login_url": url_concat(
login_url=self.settings['login_url'],
authenticator_login_url=url_concat(
self.authenticator.login_url(self.hub.base_url), self.authenticator.login_url(self.hub.base_url),
{'next': self.get_argument('next', '')}, {'next': self.get_argument('next', '')},
), ),
}
custom_html = Template(
self.authenticator.get_custom_html(self.hub.base_url)
).render(**context)
return self.render_template(
'login.html',
**context,
custom_html=custom_html,
) )
async def get(self): async def get(self):
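A hedged illustration of what this rendering change enables: an authenticator whose `custom_html` references the login context assembled above (the class name and markup are hypothetical):

```python
# hypothetical authenticator sketch: custom_html is now rendered as a Jinja template,
# so it can reference context values such as authenticator_login_url
from jupyterhub.auth import Authenticator

class MyProviderAuthenticator(Authenticator):
    custom_html = """
    <a role="button" class="btn btn-jupyter" href="{{ authenticator_login_url }}">
      Sign in with MyProvider
    </a>
    """
```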

View File

@@ -626,8 +626,8 @@ class APIToken(Hashed, Base):
db.add(orm_token) db.add(orm_token)
# load default roles if they haven't been initiated # load default roles if they haven't been initiated
# correct to have this here? otherwise some tests fail # correct to have this here? otherwise some tests fail
user_role = Role.find(db, 'user') token_role = Role.find(db, 'token')
if not user_role: if not token_role:
default_roles = get_default_roles() default_roles = get_default_roles()
for role in default_roles: for role in default_roles:
add_role(db, role) add_role(db, role)

View File

@@ -15,8 +15,8 @@ def get_default_roles():
default_roles = [ default_roles = [
{ {
'name': 'user', 'name': 'user',
'description': 'Everything the user can do', 'description': 'Standard user privileges',
'scopes': ['all'], 'scopes': ['self'],
}, },
{ {
'name': 'admin', 'name': 'admin',
@@ -40,11 +40,48 @@ def get_default_roles():
'description': 'Post activity only', 'description': 'Post activity only',
'scopes': ['users:activity'], 'scopes': ['users:activity'],
}, },
{
'name': 'token',
'description': 'Token with same rights as token owner',
'scopes': ['all'],
},
{
'name': 'service',
'description': 'Temporary no scope role for services',
'scopes': [],
},
] ]
return default_roles return default_roles
def get_scopes(): def expand_self_scope(name, read_only=False):
"""
Users have a metascope 'self' that should be expanded to standard user privileges.
At the moment that is a user-filtered version (optional read) access to
users
users:name
users:groups
users:activity
users:servers
users:tokens
"""
scope_list = [
'users',
'users:name',
'users:groups',
'users:activity',
'users:servers',
'users:tokens',
]
read_scope_list = ['read:' + scope for scope in scope_list]
if read_only:
scope_list = read_scope_list
else:
scope_list.extend(read_scope_list)
return {"{}!user={}".format(scope, name) for scope in scope_list}
def get_scope_hierarchy():
""" """
Returns a dictionary of scopes: Returns a dictionary of scopes:
scopes.keys() = scopes of highest level and scopes that have their own subscopes scopes.keys() = scopes of highest level and scopes that have their own subscopes
@@ -52,7 +89,8 @@ def get_scopes():
""" """
scopes = { scopes = {
'all': ['read:all'], 'self': None,
'all': None, # Optional 'read:all' as subscope, not implemented at this stage
'users': ['read:users', 'users:activity', 'users:servers'], 'users': ['read:users', 'users:activity', 'users:servers'],
'read:users': [ 'read:users': [
'read:users:name', 'read:users:name',
@@ -93,7 +131,7 @@ def horizontal_filter(func):
def _expand_scope(scopename): def _expand_scope(scopename):
"""Returns a set of all subscopes""" """Returns a set of all subscopes"""
scopes = get_scopes() scopes = get_scope_hierarchy()
subscopes = [scopename] subscopes = [scopename]
def _expand_subscopes(index): def _expand_subscopes(index):
@@ -118,6 +156,29 @@ def _expand_scope(scopename):
return expanded_scope return expanded_scope
def expand_roles_to_scopes(orm_object):
"""Get the scopes listed in the roles of the User/Service/Group/Token
If User, take into account the user's group roles as well"""
pass_roles = orm_object.roles
if isinstance(orm_object, orm.User):
groups_roles = []
# groups_roles = [role for group.role in orm_object.groups for role in group.roles]
for group in orm_object.groups:
groups_roles.extend(group.roles)
pass_roles.extend(groups_roles)
# scopes = get_subscopes(*orm_object.roles)
scopes = get_subscopes(*pass_roles)
if 'self' in scopes:
if not (isinstance(orm_object, orm.User) or hasattr(orm_object, 'orm_user')):
raise ValueError(
"Metascope 'self' only valid for Users, got %s" % orm_object
)
scopes.remove('self')
scopes |= expand_self_scope(orm_object.name)
return scopes
def get_subscopes(*args): def get_subscopes(*args):
"""Returns a set of all available subscopes for a specified role or list of roles""" """Returns a set of all available subscopes for a specified role or list of roles"""
@@ -134,7 +195,7 @@ def get_subscopes(*args):
def _check_scopes(*args): def _check_scopes(*args):
"""Check if provided scopes exist""" """Check if provided scopes exist"""
allowed_scopes = get_scopes() allowed_scopes = get_scope_hierarchy()
allowed_filters = ['!user=', '!service=', '!group=', '!server='] allowed_filters = ['!user=', '!service=', '!group=', '!server=']
subscopes = set( subscopes = set(
chain.from_iterable([x for x in allowed_scopes.values() if x is not None]) chain.from_iterable([x for x in allowed_scopes.values() if x is not None])
@@ -195,7 +256,7 @@ def add_role(db, role_dict):
role = orm.Role(name=name, description=description, scopes=scopes) role = orm.Role(name=name, description=description, scopes=scopes)
db.add(role) db.add(role)
if role_dict not in default_roles: if role_dict not in default_roles:
app_log.info('Adding role %s to database', name) app_log.info('Role %s added to database', name)
else: else:
_overwrite_role(role, role_dict) _overwrite_role(role, role_dict)
@@ -274,6 +335,10 @@ def _switch_default_role(db, obj, kind, admin):
"""Switch between default user and admin roles for users/services""" """Switch between default user and admin roles for users/services"""
user_role = orm.Role.find(db, 'user') user_role = orm.Role.find(db, 'user')
# temporary fix of default service role
if kind == 'services':
user_role = orm.Role.find(db, 'service')
admin_role = orm.Role.find(db, 'admin') admin_role = orm.Role.find(db, 'admin')
def _add_and_remove(db, obj, kind, current_role, new_role): def _add_and_remove(db, obj, kind, current_role, new_role):
@@ -293,39 +358,34 @@ def _switch_default_role(db, obj, kind, admin):
def _token_allowed_role(db, token, role): def _token_allowed_role(db, token, role):
"""Returns True if token allowed to have requested role through """Returns True if token allowed to have requested role through
comparing the requested scopes with the set of token's owner scopes comparing the requested scopes with the set of token's owner scopes"""
from their roles and their group roles"""
standard_permissions = {'all', 'read:all'}
token_scopes = get_subscopes(role) token_scopes = get_subscopes(role)
owner = None extra_scopes = token_scopes - standard_permissions
roles_to_check = [] # ignore horizontal filters
raw_extra_scopes = {
scope.split('!', 1)[0] if '!' in scope else scope for scope in extra_scopes
}
# find the owner and their roles # find the owner and their roles
owner = None
if token.user_id: if token.user_id:
owner = db.query(orm.User).get(token.user_id) owner = db.query(orm.User).get(token.user_id)
roles_to_check.extend(owner.roles)
# if user is a member of any groups, include the groups' roles as well
for group in owner.groups:
roles_to_check.extend(group.roles)
elif token.service_id: elif token.service_id:
owner = db.query(orm.Service).get(token.service_id) owner = db.query(orm.Service).get(token.service_id)
roles_to_check = owner.roles if owner:
owner_scopes = expand_roles_to_scopes(owner)
owner_scopes = get_subscopes(*roles_to_check) # ignore horizontal filters
raw_owner_scopes = {
# ignore horizontal filters for comparison
t_scopes = {
scope.split('!', 1)[0] if '!' in scope else scope for scope in token_scopes
}
o_scopes = {
scope.split('!', 1)[0] if '!' in scope else scope for scope in owner_scopes scope.split('!', 1)[0] if '!' in scope else scope for scope in owner_scopes
} }
if (raw_extra_scopes).issubset(raw_owner_scopes):
if t_scopes.issubset(o_scopes):
return True return True
else: else:
return False return False
else:
raise ValueError('Owner of token %r not found' % token)
def update_roles(db, obj, kind, roles=None): def update_roles(db, obj, kind, roles=None):
@@ -333,9 +393,7 @@ def update_roles(db, obj, kind, roles=None):
assigns default if no roles specified""" assigns default if no roles specified"""
Class = orm.get_class(kind) Class = orm.get_class(kind)
user_role = orm.Role.find(db, 'user') default_token_role = orm.Role.find(db, 'token')
admin_role = orm.Role.find(db, 'admin')
if roles: if roles:
for rolename in roles: for rolename in roles:
if Class == orm.APIToken: if Class == orm.APIToken:
@@ -344,7 +402,6 @@ def update_roles(db, obj, kind, roles=None):
app_log.debug( app_log.debug(
'Checking token permissions against requested role %s', rolename 'Checking token permissions against requested role %s', rolename
) )
if _token_allowed_role(db, obj, role): if _token_allowed_role(db, obj, role):
role.tokens.append(obj) role.tokens.append(obj)
app_log.info( app_log.info(
@@ -352,8 +409,9 @@ def update_roles(db, obj, kind, roles=None):
) )
else: else:
raise ValueError( raise ValueError(
'Requested token role %r of %r with scopes %r cannot grant more permissions than its owner scopes' 'Requested token role %r of %r has more permissions than the token owner'
% (rolename, obj, role.scopes) % (rolename, obj)
) )
else: else:
raise NameError('Role %r does not exist' % rolename) raise NameError('Role %r does not exist' % rolename)
@@ -363,15 +421,14 @@ def update_roles(db, obj, kind, roles=None):
# groups can be without a role # groups can be without a role
if Class == orm.Group: if Class == orm.Group:
pass pass
# tokens can have only 'user' role as default # tokens can have only 'token' role as default
# assign the default only for user tokens # assign the default only for tokens
# service tokens with no specified role remain without any role (no default)
elif Class == orm.APIToken: elif Class == orm.APIToken:
app_log.debug('Assigning default roles to tokens') app_log.debug('Assigning default roles to tokens')
if len(obj.roles) < 1 and obj.user is not None: if not obj.roles and obj.user is not None:
user_role.tokens.append(obj) default_token_role.tokens.append(obj)
app_log.info('Added role %s to token %s', default_token_role.name, obj)
db.commit() db.commit()
app_log.info('Adding role %s to token %s', 'user', obj)
# users and services can have 'user' or 'admin' roles as default # users and services can have 'user' or 'admin' roles as default
else: else:
app_log.debug('Assigning default roles to %s', kind) app_log.debug('Assigning default roles to %s', kind)

View File

@@ -6,33 +6,37 @@ from tornado import web
from tornado.log import app_log from tornado.log import app_log
from . import orm from . import orm
from . import roles
class Scope(Enum): class Scope(Enum):
ALL = True ALL = True
def get_user_scopes(name): def get_scopes_for(orm_object):
""" """Find scopes for a given user or token and resolve permissions"""
Scopes have a metascope 'all' that should be expanded to everything a user can do. scopes = set()
At the moment that is a user-filtered version (optional read) access to if orm_object is None:
users return scopes
users:name elif isinstance(orm_object, orm.APIToken):
users:groups app_log.warning(f"Authenticated with token {orm_object}")
users:activity owner = orm_object.user or orm_object.service
users:servers token_scopes = roles.expand_roles_to_scopes(orm_object)
users:tokens owner_scopes = roles.expand_roles_to_scopes(owner)
""" if 'all' in token_scopes:
scope_list = [ token_scopes.remove('all')
'users', token_scopes |= owner_scopes
'users:name', scopes = token_scopes & owner_scopes
'users:groups', discarded_token_scopes = token_scopes - scopes
'users:activity', # Not taking symmetric difference here because token owner can naturally have more scopes than token
'users:servers', if discarded_token_scopes:
'users:tokens', app_log.warning(
] "discarding scopes [%s], not present in owner roles"
scope_list.extend(['read:' + scope for scope in scope_list]) % ", ".join(discarded_token_scopes)
return {"{}!user={}".format(scope, name) for scope in scope_list} )
else:
scopes = roles.expand_roles_to_scopes(orm_object)
return scopes
def _needs_scope_expansion(filter_, filter_value, sub_scope): def _needs_scope_expansion(filter_, filter_value, sub_scope):
@@ -54,65 +58,53 @@ def _check_user_in_expanded_scope(handler, user_name, scope_group_names):
user = handler.find_user(user_name) user = handler.find_user(user_name)
if user is None: if user is None:
raise web.HTTPError(404, "No access to resources or resources not found") raise web.HTTPError(404, "No access to resources or resources not found")
group_names = {group.name for group in user.groups} # Todo: Replace with SQL query group_names = {group.name for group in user.groups}
return bool(set(scope_group_names) & group_names) return bool(set(scope_group_names) & group_names)
def _get_scope_filter(db, req_scope, sub_scope): def _check_scope(api_handler, req_scope, **kwargs):
"""Produce a filter for `*ListAPIHandlers* so that get method knows which models to return"""
scope_translator = {
'read:users': 'users',
'read:services': 'services',
'read:groups': 'groups',
}
if req_scope not in scope_translator:
raise AttributeError("Internal error: inconsistent scope situation")
kind = scope_translator[req_scope]
Resource = orm.get_class(kind)
sub_scope_values = next(iter(sub_scope.values()))
query = db.query(Resource).filter(Resource.name.in_(sub_scope_values))
scope_filter = {entry.name for entry in query.all()}
if 'group' in sub_scope and kind == 'users':
groups = orm.Group.name.in_(sub_scope['group'])
users_in_groups = db.query(orm.User).join(orm.Group.users).filter(groups)
scope_filter |= {user.name for user in users_in_groups}
return scope_filter
def _check_scope(api_handler, req_scope, scopes, **kwargs):
"""Check if scopes satisfy requirements """Check if scopes satisfy requirements
Returns either Scope.ALL for unrestricted access, Scope.NONE for refused access or Returns True for (restricted) access, False for refused access
an iterable with a filter
""" """
# Parse user name and server name together # Parse user name and server name together
try:
api_name = api_handler.request.path
except AttributeError:
api_name = type(api_handler).__name__
if 'user' in kwargs and 'server' in kwargs: if 'user' in kwargs and 'server' in kwargs:
kwargs['server'] = "{}/{}".format(kwargs['user'], kwargs['server']) kwargs['server'] = "{}/{}".format(kwargs['user'], kwargs['server'])
if req_scope not in scopes: if req_scope not in api_handler.parsed_scopes:
app_log.debug("No scopes present to access %s" % api_name)
return False return False
if scopes[req_scope] == Scope.ALL: if api_handler.parsed_scopes[req_scope] == Scope.ALL:
app_log.debug("Unrestricted access to %s call", api_name)
return True return True
# Apply filters # Apply filters
sub_scope = scopes[req_scope] sub_scope = api_handler.parsed_scopes[req_scope]
if 'scope_filter' in kwargs:
scope_filter = _get_scope_filter(api_handler.db, req_scope, sub_scope)
return scope_filter
else:
if not kwargs: if not kwargs:
return False # Separated from 404 error below because in this case we don't leak information app_log.debug(
# Interface change: Now can have multiple filters "Client has restricted access to %s. Internal filtering may apply"
% api_name
)
return True
for (filter_, filter_value) in kwargs.items(): for (filter_, filter_value) in kwargs.items():
if filter_ in sub_scope and filter_value in sub_scope[filter_]: if filter_ in sub_scope and filter_value in sub_scope[filter_]:
app_log.debug(
"Restricted client access supported by endpoint %s" % api_name
)
return True return True
if _needs_scope_expansion(filter_, filter_value, sub_scope): if _needs_scope_expansion(filter_, filter_value, sub_scope):
group_names = sub_scope['group'] group_names = sub_scope['group']
if _check_user_in_expanded_scope( if _check_user_in_expanded_scope(api_handler, filter_value, group_names):
api_handler, filter_value, group_names app_log.debug("Restricted client access supported with group expansion")
):
return True return True
app_log.debug(
"Client access refused; filters do not match API endpoint %s request" % api_name
)
raise web.HTTPError(404, "No access to resources or resources not found") raise web.HTTPError(404, "No access to resources or resources not found")
def _parse_scopes(scope_list): def parse_scopes(scope_list):
""" """
Parses scopes and filters in something akin to JSON style Parses scopes and filters in something akin to JSON style
@@ -120,7 +112,7 @@ def _parse_scopes(scope_list):
would lead to scope model would lead to scope model
{ {
"users":scope.ALL, "users":scope.ALL,
"users:admin":{ "admin:users":{
"user":[ "user":[
"alice" "alice"
] ]
@@ -148,7 +140,7 @@ def _parse_scopes(scope_list):
return parsed_scopes return parsed_scopes
def needs_scope(scope): def needs_scope(*scopes):
"""Decorator to restrict access to users or services with the required scope""" """Decorator to restrict access to users or services with the required scope"""
def scope_decorator(func): def scope_decorator(func):
@@ -157,39 +149,36 @@ def needs_scope(scope):
sig = inspect.signature(func) sig = inspect.signature(func)
bound_sig = sig.bind(self, *args, **kwargs) bound_sig = sig.bind(self, *args, **kwargs)
bound_sig.apply_defaults() bound_sig.apply_defaults()
# Load scopes in case they haven't been loaded yet
if not hasattr(self, 'raw_scopes'):
self.raw_scopes = set()
self.parsed_scopes = {}
s_kwargs = {} s_kwargs = {}
for resource in {'user', 'server', 'group', 'service'}: for resource in {'user', 'server', 'group', 'service'}:
resource_name = resource + '_name' resource_name = resource + '_name'
if resource_name in bound_sig.arguments: if resource_name in bound_sig.arguments:
resource_value = bound_sig.arguments[resource_name] resource_value = bound_sig.arguments[resource_name]
s_kwargs[resource] = resource_value s_kwargs[resource] = resource_value
if 'scope_filter' in bound_sig.arguments: has_access = False
s_kwargs['scope_filter'] = None for scope in scopes:
if 'all' in self.scopes and self.current_user: has_access |= _check_scope(self, scope, **s_kwargs)
self.scopes |= get_user_scopes(self.current_user.name) if has_access:
parsed_scopes = _parse_scopes(self.scopes)
scope_filter = _check_scope(self, scope, parsed_scopes, **s_kwargs)
# todo: This checks if True/False or set of resource names. Can be improved
if isinstance(scope_filter, set):
kwargs['scope_filter'] = scope_filter
if scope_filter:
return func(self, *args, **kwargs) return func(self, *args, **kwargs)
else: else:
# catching attr error occurring for older_requirements test
# could be done more ellegantly?
try: try:
request_path = self.request.path end_point = self.request.path
except AttributeError: except AttributeError:
request_path = 'the requested API' end_point = self.__name__
app_log.warning( app_log.warning(
"Not authorizing access to {}. Requires scope {}, not derived from scopes [{}]".format( "Not authorizing access to {}. Requires any of [{}], not derived from scopes [{}]".format(
request_path, scope, ", ".join(self.scopes) end_point, ", ".join(scopes), ", ".join(self.raw_scopes)
) )
) )
raise web.HTTPError( raise web.HTTPError(
403, 403,
"Action is not authorized with current scopes; requires {}".format( "Action is not authorized with current scopes; requires any of [{}]".format(
scope ", ".join(scopes)
), ),
) )
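With the change from `needs_scope(scope)` to `needs_scope(*scopes)`, a handler method is authorized if any one of the listed scopes passes `_check_scope`. A hedged usage sketch modeled on the `MockAPIHandler` in the tests further down (the handler's `raw_scopes`/`parsed_scopes` are assumed to be populated by the surrounding auth machinery):

```python
from jupyterhub.scopes import needs_scope  # assumed import path

class ExampleUserHandler:  # hypothetical handler, for illustration only
    @needs_scope('users', 'read:users')  # access granted if either scope is held
    def get_user_model(self, user_name):
        # user_name is picked up by the decorator and matched against scope filters
        ...
```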

View File

@@ -19,6 +19,7 @@ import string
import time import time
import uuid import uuid
import warnings import warnings
from unittest import mock
from urllib.parse import quote from urllib.parse import quote
from urllib.parse import urlencode from urllib.parse import urlencode
@@ -832,8 +833,12 @@ class HubAuthenticated(object):
# add state argument to OAuth url # add state argument to OAuth url
state = self.hub_auth.set_state_cookie(self, next_url=self.request.uri) state = self.hub_auth.set_state_cookie(self, next_url=self.request.uri)
login_url = url_concat(login_url, {'state': state}) login_url = url_concat(login_url, {'state': state})
# temporary override at setting level,
# to allow any subclass overrides of get_login_url to preserve their effect
# for example, APIHandler raises 403 to prevent redirects
with mock.patch.dict(self.application.settings, {"login_url": login_url}):
app_log.debug("Redirecting to login url: %s", login_url) app_log.debug("Redirecting to login url: %s", login_url)
return login_url return super().get_login_url()
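The `mock.patch.dict` trick above is worth spelling out: it temporarily overrides a key in the application settings so any subclass override of `get_login_url` still takes effect, and the original value is restored on exit. A standalone sketch with illustrative values:

```python
from unittest import mock

settings = {"login_url": "/hub/login"}
with mock.patch.dict(settings, {"login_url": "/hub/login?state=abc"}):
    assert settings["login_url"] == "/hub/login?state=abc"  # override visible inside the block
assert settings["login_url"] == "/hub/login"                # restored afterwards
```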
def check_hub_user(self, model): def check_hub_user(self, model):
"""Check whether Hub-authenticated user or service should be allowed. """Check whether Hub-authenticated user or service should be allowed.

View File

@@ -9,11 +9,10 @@ with JupyterHub authentication mixins enabled.
# Copyright (c) Jupyter Development Team. # Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License. # Distributed under the terms of the Modified BSD License.
import asyncio import asyncio
import importlib
import json import json
import os import os
import random import random
from datetime import datetime import warnings
from datetime import timezone from datetime import timezone
from textwrap import dedent from textwrap import dedent
from urllib.parse import urlparse from urllib.parse import urlparse
@@ -23,7 +22,6 @@ from jinja2 import FunctionLoader
from tornado import ioloop from tornado import ioloop
from tornado.httpclient import AsyncHTTPClient from tornado.httpclient import AsyncHTTPClient
from tornado.httpclient import HTTPRequest from tornado.httpclient import HTTPRequest
from tornado.web import HTTPError
from tornado.web import RequestHandler from tornado.web import RequestHandler
from traitlets import Any from traitlets import Any
from traitlets import Bool from traitlets import Bool
@@ -94,9 +92,19 @@ class JupyterHubLoginHandlerMixin:
@staticmethod @staticmethod
def get_user(handler): def get_user(handler):
"""alternative get_current_user to query the Hub""" """alternative get_current_user to query the Hub
# patch in HubAuthenticated class for querying the Hub for cookie authentication
if HubAuthenticatedHandler not in handler.__class__.__bases__: This shouldn't be called anymore because HubAuthenticatedHandler
should have already overridden get_current_user().
Kept here to guard against the unlikely circumstance of losing auth.
"""
if HubAuthenticatedHandler not in handler.__class__.mro():
warnings.warn(
f"Expected to see HubAuthenticatedHandler in {handler.__class__}.mro()",
RuntimeWarning,
stacklevel=2,
)
handler.__class__ = type( handler.__class__ = type(
handler.__class__.__name__, handler.__class__.__name__,
(HubAuthenticatedHandler, handler.__class__), (HubAuthenticatedHandler, handler.__class__),
@@ -691,6 +699,7 @@ def make_singleuser_app(App):
""" """
empty_parent_app = App() empty_parent_app = App()
log = empty_parent_app.log
# detect base classes # detect base classes
LoginHandler = empty_parent_app.login_handler_class LoginHandler = empty_parent_app.login_handler_class
@@ -707,6 +716,26 @@ def make_singleuser_app(App):
"{}.base_handler_class must be defined".format(App.__name__) "{}.base_handler_class must be defined".format(App.__name__)
) )
# patch-in HubAuthenticatedHandler to BaseHandler,
# so anything inheriting from BaseHandler uses Hub authentication
if HubAuthenticatedHandler not in BaseHandler.__bases__:
new_bases = (HubAuthenticatedHandler,) + BaseHandler.__bases__
log.debug(
f"Patching {BaseHandler}{BaseHandler.__bases__} -> {BaseHandler}{new_bases}"
)
BaseHandler.__bases__ = new_bases
# We've now inserted our class as a parent of BaseHandler,
# but we also need to ensure BaseHandler *itself* doesn't
# override the public tornado API methods we have inserted.
# If they are defined in BaseHandler, explicitly replace them with our methods.
for name in ("get_current_user", "get_login_url"):
if name in BaseHandler.__dict__:
log.debug(
f"Overriding {BaseHandler}.{name} with HubAuthenticatedHandler.{name}"
)
method = getattr(HubAuthenticatedHandler, name)
setattr(BaseHandler, name, method)
# create Handler classes from mixins + bases # create Handler classes from mixins + bases
class JupyterHubLoginHandler(JupyterHubLoginHandlerMixin, LoginHandler): class JupyterHubLoginHandler(JupyterHubLoginHandlerMixin, LoginHandler):
pass pass
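The patching above splices `HubAuthenticatedHandler` into `BaseHandler.__bases__` at runtime so every handler inheriting from `BaseHandler` picks up Hub authentication. A standalone sketch of that pattern with made-up classes (not JupyterHub's real ones):

```python
class Handler:                 # stand-in for tornado's RequestHandler
    def get_current_user(self):
        return None

class AuthMixin(Handler):      # plays the role of HubAuthenticatedHandler
    def get_current_user(self):
        return "hub user"

class BaseHandler(Handler):
    pass

# splice the mixin in as a new first base class, as the diff does
if AuthMixin not in BaseHandler.__bases__:
    BaseHandler.__bases__ = (AuthMixin,) + BaseHandler.__bases__

assert BaseHandler().get_current_user() == "hub user"
# If BaseHandler defined get_current_user itself it would still win the MRO,
# which is why the diff also copies selected methods onto BaseHandler explicitly.
```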

View File

@@ -13,6 +13,7 @@ import sys
import warnings import warnings
from subprocess import Popen from subprocess import Popen
from tempfile import mkdtemp from tempfile import mkdtemp
from urllib.parse import urlparse
if os.name == 'nt': if os.name == 'nt':
import psutil import psutil
@@ -690,6 +691,19 @@ class Spawner(LoggingConfigurable):
""" """
).tag(config=True) ).tag(config=True)
hub_connect_url = Unicode(
None,
allow_none=True,
help="""
The URL the single-user server should use to connect to the Hub.
If the Hub URL set in your JupyterHub config is not reachable
from spawned notebooks, you can set a different URL with this config.
Leave it as None if you do not need to change the URL.
""",
).tag(config=True)
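A hedged example of the new trait in a deployment config; the resulting environment variables are what `test_hub_connect_url` at the end of this diff asserts:

```python
# jupyterhub_config.py — illustrative only; assumes the Spawner.hub_connect_url trait added above
# (`c` is provided by JupyterHub when it loads this file)
c.Spawner.hub_connect_url = "https://hub.internal.example.com"
# Spawned servers then get JUPYTERHUB_API_URL and JUPYTERHUB_ACTIVITY_URL rebuilt from
# this host plus the Hub API path (the test below uses https://example.com/ and
# expects https://example.com/api/...).
```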
def load_state(self, state): def load_state(self, state):
"""Restore state of spawner from database. """Restore state of spawner from database.
@@ -768,9 +782,15 @@ class Spawner(LoggingConfigurable):
# Info previously passed on args # Info previously passed on args
env['JUPYTERHUB_USER'] = self.user.name env['JUPYTERHUB_USER'] = self.user.name
env['JUPYTERHUB_SERVER_NAME'] = self.name env['JUPYTERHUB_SERVER_NAME'] = self.name
env['JUPYTERHUB_API_URL'] = self.hub.api_url if self.hub_connect_url is not None:
hub_api_url = url_path_join(
self.hub_connect_url, urlparse(self.hub.api_url).path
)
else:
hub_api_url = self.hub.api_url
env['JUPYTERHUB_API_URL'] = hub_api_url
env['JUPYTERHUB_ACTIVITY_URL'] = url_path_join( env['JUPYTERHUB_ACTIVITY_URL'] = url_path_join(
self.hub.api_url, hub_api_url,
'users', 'users',
# tolerate mocks defining only user.name # tolerate mocks defining only user.name
getattr(self.user, 'escaped_name', self.user.name), getattr(self.user, 'escaped_name', self.user.name),

View File

@@ -45,6 +45,7 @@ from . import mocking
from .. import crypto from .. import crypto
from .. import orm from .. import orm
from ..roles import mock_roles from ..roles import mock_roles
from ..roles import update_roles
from ..utils import random_port from ..utils import random_port
from .mocking import MockHub from .mocking import MockHub
from .test_services import mockservice_cmd from .test_services import mockservice_cmd
@@ -249,6 +250,8 @@ def _mockservice(request, app, url=False):
mock_roles(app, name, 'services') mock_roles(app, name, 'services')
assert name in app._service_map assert name in app._service_map
service = app._service_map[name] service = app._service_map[name]
token = service.orm.api_tokens[0]
update_roles(app.db, token, 'tokens', roles=['token'])
async def start(): async def start():
# wait for proxy to be updated before starting the service # wait for proxy to be updated before starting the service

View File

@@ -53,7 +53,6 @@ from ..spawner import SimpleLocalProcessSpawner
from ..utils import random_port from ..utils import random_port
from ..utils import url_path_join from ..utils import url_path_join
from .utils import async_requests from .utils import async_requests
from .utils import get_scopes
from .utils import public_host from .utils import public_host
from .utils import public_url from .utils import public_url
from .utils import ssl_setup from .utils import ssl_setup
@@ -305,7 +304,6 @@ class MockHub(JupyterHub):
super().init_tornado_application() super().init_tornado_application()
# reconnect tornado_settings so that mocks can update the real thing # reconnect tornado_settings so that mocks can update the real thing
self.tornado_settings = self.users.settings = self.tornado_application.settings self.tornado_settings = self.users.settings = self.tornado_application.settings
self.tornado_settings['mock_scopes'] = get_scopes()
def init_services(self): def init_services(self):
# explicitly expire services before reinitializing # explicitly expire services before reinitializing
@@ -411,9 +409,10 @@ class StubSingleUserSpawner(MockSpawner):
Should be: Should be:
- authenticated, so we are testing auth - authenticated, so we are testing auth
- always available (i.e. in base ServerApp and NotebookApp - always available (i.e. in mocked ServerApp and NotebookApp)
- *not* an API handler that raises 403 instead of redirecting
""" """
return "/api/status" return "/tree"
_thread = None _thread = None

View File

@@ -1707,7 +1707,7 @@ async def test_update_activity_403(app, user, admin_user):
async def test_update_activity_admin(app, user, admin_user): async def test_update_activity_admin(app, user, admin_user):
token = admin_user.new_api_token() token = admin_user.new_api_token(roles=['admin'])
r = await api_request( r = await api_request(
app, app,
"users/{}/activity".format(user.name), "users/{}/activity".format(user.name),

View File

@@ -26,7 +26,8 @@ def generate_old_db(env_dir, hub_version, db_url):
env_pip = os.path.join(env_dir, 'bin', 'pip') env_pip = os.path.join(env_dir, 'bin', 'pip')
env_py = os.path.join(env_dir, 'bin', 'python') env_py = os.path.join(env_dir, 'bin', 'python')
check_call([sys.executable, '-m', 'virtualenv', env_dir]) check_call([sys.executable, '-m', 'virtualenv', env_dir])
pkgs = ['jupyterhub==' + hub_version] # older jupyterhub needs an older sqlalchemy version
pkgs = ['jupyterhub==' + hub_version, 'sqlalchemy<1.4']
if 'mysql' in db_url: if 'mysql' in db_url:
pkgs.append('mysql-connector-python') pkgs.append('mysql-connector-python')
elif 'postgres' in db_url: elif 'postgres' in db_url:

View File

@@ -20,12 +20,16 @@ from .utils import api_request
def test_orm_roles(db): def test_orm_roles(db):
"""Test orm roles setup""" """Test orm roles setup"""
user_role = orm.Role.find(db, name='user') user_role = orm.Role.find(db, name='user')
token_role = orm.Role.find(db, name='token')
service_role = orm.Role.find(db, name='service')
if not user_role: if not user_role:
user_role = orm.Role(name='user', scopes=['self']) user_role = orm.Role(name='user', scopes=['self'])
db.add(user_role) db.add(user_role)
db.commit() if not token_role:
token_role = orm.Role(name='token', scopes=['all'])
service_role = orm.Role(name='service', scopes=['users:servers']) db.add(token_role)
if not service_role:
service_role = orm.Role(name='service', scopes=[])
db.add(service_role) db.add(service_role)
db.commit() db.commit()
@@ -67,11 +71,11 @@ def test_orm_roles(db):
assert group.roles == [group_role] assert group.roles == [group_role]
# check token creation without specifying its role # check token creation without specifying its role
# assigns it the default 'user' role # assigns it the default 'token' role
token = user.new_api_token() token = user.new_api_token()
user_token = orm.APIToken.find(db, token=token) user_token = orm.APIToken.find(db, token=token)
assert user_token in user_role.tokens assert user_token in token_role.tokens
assert user_role in user_token.roles assert token_role in user_token.roles
# check creating token with a specific role # check creating token with a specific role
token = service.new_api_token(roles=['service']) token = service.new_api_token(roles=['service'])
@@ -83,8 +87,8 @@ def test_orm_roles(db):
db.delete(user) db.delete(user)
db.commit() db.commit()
assert user_role.users == [] assert user_role.users == []
assert user_token not in user_role.tokens assert user_token not in token_role.tokens
# check deleting the service token removes it from service_role # check deleting the service token removes it from 'service' role
db.delete(service_token) db.delete(service_token)
db.commit() db.commit()
assert service_token not in service_role.tokens assert service_token not in service_role.tokens
@@ -222,15 +226,25 @@ async def test_load_default_roles(tmpdir, request):
db = hub.db db = hub.db
await hub.init_roles() await hub.init_roles()
# test default roles loaded to database # test default roles loaded to database
assert orm.Role.find(db, 'user') is not None default_roles = roles.get_default_roles()
assert orm.Role.find(db, 'admin') is not None for role in default_roles:
assert orm.Role.find(db, 'server') is not None assert orm.Role.find(db, role['name']) is not None
@mark.role @mark.role
@mark.parametrize( @mark.parametrize(
"role, role_def, response_type, response", "role, role_def, response_type, response",
[ [
(
'new-role',
{
'name': 'new-role',
'description': 'Some description',
'scopes': ['groups'],
},
'info',
app_log.info('Role new-role added to database'),
),
('no_name', {'scopes': ['users']}, 'error', KeyError), ('no_name', {'scopes': ['users']}, 'error', KeyError),
( (
'no_scopes', 'no_scopes',
@@ -278,6 +292,12 @@ async def test_adding_new_roles(
elif response_type == 'warning' or response_type == 'info': elif response_type == 'warning' or response_type == 'info':
with pytest.warns(response): with pytest.warns(response):
await hub.init_roles() await hub.init_roles()
role = orm.Role.find(db, role_def['name'])
assert role is not None
if 'description' in role_def.keys():
assert role.description == role_def['description']
if 'scopes' in role_def.keys():
assert role.scopes == role_def['scopes']
@mark.role @mark.role
@@ -373,11 +393,6 @@ async def test_load_roles_users(tmpdir, request):
'scopes': ['users', 'groups'], 'scopes': ['users', 'groups'],
'users': ['cyclops', 'gandalf'], 'users': ['cyclops', 'gandalf'],
}, },
{
'name': 'user',
'description': 'Read access to users names',
'scopes': ['read:users:name'],
},
] ]
kwargs = {'load_roles': roles_to_load} kwargs = {'load_roles': roles_to_load}
ssl_enabled = getattr(request.module, "ssl_enabled", False) ssl_enabled = getattr(request.module, "ssl_enabled", False)
@@ -391,13 +406,8 @@ async def test_load_roles_users(tmpdir, request):
await hub.init_users() await hub.init_users()
await hub.init_roles() await hub.init_roles()
# test if the 'user' role has been overwritten
user_role = orm.Role.find(db, 'user')
assert user_role is not None
assert user_role.description == roles_to_load[1]['description']
assert user_role.scopes == roles_to_load[1]['scopes']
admin_role = orm.Role.find(db, 'admin') admin_role = orm.Role.find(db, 'admin')
user_role = orm.Role.find(db, 'user')
# test if every user has a role (and no duplicates) # test if every user has a role (and no duplicates)
# and admins have admin role # and admins have admin role
for user in db.query(orm.User): for user in db.query(orm.User):
@@ -415,6 +425,10 @@ async def test_load_roles_users(tmpdir, request):
cyclops_user = orm.User.find(db, name='cyclops') cyclops_user = orm.User.find(db, name='cyclops')
assert teacher_role in cyclops_user.roles assert teacher_role in cyclops_user.roles
# delete the test roles
for role in roles_to_load:
roles.remove_role(db, role['name'])
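The `roles_to_load` lists these tests pass to the mocked hub mirror the deployment-facing config. A hedged sketch of the equivalent `jupyterhub_config.py` entry (the `c.JupyterHub.load_roles` name is assumed from the `load_roles` kwarg used above; the description is illustrative):

```python
# jupyterhub_config.py — illustrative only; `c` is provided when JupyterHub loads this file
c.JupyterHub.load_roles = [
    {
        'name': 'teacher',
        'description': 'Manage users and groups',  # illustrative description
        'scopes': ['users', 'groups'],
        'users': ['cyclops', 'gandalf'],
    },
]
```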
@mark.role @mark.role
async def test_load_roles_services(tmpdir, request): async def test_load_roles_services(tmpdir, request):
@@ -462,18 +476,29 @@ async def test_load_roles_services(tmpdir, request):
# test if every service has a role (and no duplicates) # test if every service has a role (and no duplicates)
admin_role = orm.Role.find(db, name='admin') admin_role = orm.Role.find(db, name='admin')
user_role = orm.Role.find(db, name='user') user_role = orm.Role.find(db, name='user')
for service in db.query(orm.Service): service_role = orm.Role.find(db, name='service')
assert len(service.roles) > 0
assert len(service.roles) == len(set(service.roles))
if service.admin:
assert admin_role in service.roles
assert user_role not in service.roles
# test if predefined roles loaded and assigned # test if predefined roles loaded and assigned
culler_role = orm.Role.find(db, name='idle-culler') culler_role = orm.Role.find(db, name='idle-culler')
cull_idle = orm.Service.find(db, name='idle-culler') culler_service = orm.Service.find(db, name='idle-culler')
assert culler_role in cull_idle.roles assert culler_role in culler_service.roles
assert user_role not in cull_idle.roles assert service_role not in culler_service.roles
# test if every service has a role (and no duplicates)
for service in db.query(orm.Service):
assert len(service.roles) > 0
assert len(service.roles) == len(set(service.roles))
# test default role assignment
if service.admin:
assert admin_role in service.roles
assert service_role not in service.roles
elif culler_role not in service.roles:
assert service_role in service.roles
assert service_role.scopes == []
assert admin_role not in service.roles
# make sure 'user' role not assigned to service
assert user_role not in service.roles
# delete the test services # delete the test services
for service in db.query(orm.Service): for service in db.query(orm.Service):
@@ -485,6 +510,10 @@ async def test_load_roles_services(tmpdir, request):
db.delete(token) db.delete(token)
db.commit() db.commit()
# delete the test roles
for role in roles_to_load:
roles.remove_role(db, role['name'])
@mark.role @mark.role
async def test_load_roles_groups(tmpdir, request): async def test_load_roles_groups(tmpdir, request):
@@ -530,6 +559,10 @@ async def test_load_roles_groups(tmpdir, request):
assert group2 in assist_role.groups assert group2 in assist_role.groups
assert group3 in head_role.groups assert group3 in head_role.groups
# delete the test roles
for role in roles_to_load:
roles.remove_role(db, role['name'])
@mark.role @mark.role
async def test_load_roles_user_tokens(tmpdir, request): async def test_load_roles_user_tokens(tmpdir, request):
@@ -541,9 +574,9 @@ async def test_load_roles_user_tokens(tmpdir, request):
roles_to_load = [ roles_to_load = [
{ {
'name': 'reader', 'name': 'reader',
'description': 'Read-only own model', 'description': 'Read all users models',
'scopes': ['read:all'], 'scopes': ['read:users'],
'tokens': ['secrety-token'], 'tokens': ['super-secret-token'],
}, },
] ]
kwargs = { kwargs = {
@@ -564,21 +597,25 @@ async def test_load_roles_user_tokens(tmpdir, request):
# test if gandalf's token has the 'reader' role # test if gandalf's token has the 'reader' role
reader_role = orm.Role.find(db, 'reader') reader_role = orm.Role.find(db, 'reader')
token = orm.APIToken.find(db, 'secrety-token') token = orm.APIToken.find(db, 'super-secret-token')
assert reader_role in token.roles assert reader_role in token.roles
# test if all other tokens have default 'user' role # test if all other tokens have default 'user' role
user_role = orm.Role.find(db, 'user') token_role = orm.Role.find(db, 'token')
sec_token = orm.APIToken.find(db, 'secret-token') secret_token = orm.APIToken.find(db, 'secret-token')
assert user_role in sec_token.roles assert token_role in secret_token.roles
s_sec_token = orm.APIToken.find(db, 'super-secret-token') secrety_token = orm.APIToken.find(db, 'secrety-token')
assert user_role in s_sec_token.roles assert token_role in secrety_token.roles
# delete the test tokens # delete the test tokens
for token in db.query(orm.APIToken): for token in db.query(orm.APIToken):
db.delete(token) db.delete(token)
db.commit() db.commit()
# delete the test roles
for role in roles_to_load:
roles.remove_role(db, role['name'])
@mark.role @mark.role
async def test_load_roles_user_tokens_not_allowed(tmpdir, request): async def test_load_roles_user_tokens_not_allowed(tmpdir, request):
@@ -587,9 +624,9 @@ async def test_load_roles_user_tokens_not_allowed(tmpdir, request):
} }
roles_to_load = [ roles_to_load = [
{ {
'name': 'user-reader', 'name': 'user-creator',
'description': 'Read-only any user model', 'description': 'Creates/deletes any user',
'scopes': ['read:users'], 'scopes': ['admin:users'],
'tokens': ['secret-token'], 'tokens': ['secret-token'],
}, },
] ]
@@ -610,18 +647,18 @@ async def test_load_roles_user_tokens_not_allowed(tmpdir, request):
response = 'allowed' response = 'allowed'
# bilbo has only default 'user' role # bilbo has only default 'user' role
# while bilbo's token is requesting role with higher permissions # while bilbo's token is requesting role with higher permissions
try: with pytest.raises(ValueError):
await hub.init_roles() await hub.init_roles()
except ValueError:
response = 'denied'
assert response == 'denied'
# delete the test tokens # delete the test tokens
for token in db.query(orm.APIToken): for token in db.query(orm.APIToken):
db.delete(token) db.delete(token)
db.commit() db.commit()
# delete the test roles
for role in roles_to_load:
roles.remove_role(db, role['name'])
@mark.role @mark.role
async def test_load_roles_service_tokens(tmpdir, request): async def test_load_roles_service_tokens(tmpdir, request):
@@ -657,7 +694,7 @@ async def test_load_roles_service_tokens(tmpdir, request):
await hub.init_api_tokens() await hub.init_api_tokens()
await hub.init_roles() await hub.init_roles()
# test if another-secret-token has culler role # test if another-secret-token has idle-culler role
service = orm.Service.find(db, 'idle-culler') service = orm.Service.find(db, 'idle-culler')
culler_role = orm.Role.find(db, 'idle-culler') culler_role = orm.Role.find(db, 'idle-culler')
token = orm.APIToken.find(db, 'another-secret-token') token = orm.APIToken.find(db, 'another-secret-token')
@@ -674,6 +711,10 @@ async def test_load_roles_service_tokens(tmpdir, request):
db.delete(token) db.delete(token)
db.commit() db.commit()
# delete the test roles
for role in roles_to_load:
roles.remove_role(db, role['name'])
@mark.role @mark.role
async def test_load_roles_service_tokens_not_allowed(tmpdir, request): async def test_load_roles_service_tokens_not_allowed(tmpdir, request):
@@ -688,11 +729,16 @@ async def test_load_roles_service_tokens_not_allowed(tmpdir, request):
'scopes': ['read:users'], 'scopes': ['read:users'],
'services': ['some-service'], 'services': ['some-service'],
}, },
# 'culler' role has higher permissions that the token's owner 'some-service' # 'idle-culler' role has higher permissions than the token's owner 'some-service'
{ {
'name': 'culler', 'name': 'idle-culler',
'description': 'Cull idle servers', 'description': 'Cull idle servers',
'scopes': ['users:servers', 'admin:users:servers'], 'scopes': [
'read:users:name',
'read:users:activity',
'read:users:servers',
'users:servers',
],
'tokens': ['secret-token'], 'tokens': ['secret-token'],
}, },
] ]
@@ -708,13 +754,8 @@ async def test_load_roles_service_tokens_not_allowed(tmpdir, request):
hub.init_db() hub.init_db()
db = hub.db db = hub.db
await hub.init_api_tokens() await hub.init_api_tokens()
response = 'allowed' with pytest.raises(ValueError):
try:
await hub.init_roles() await hub.init_roles()
except ValueError:
response = 'denied'
assert response == 'denied'
# delete the test services # delete the test services
for service in db.query(orm.Service): for service in db.query(orm.Service):
@@ -726,44 +767,47 @@ async def test_load_roles_service_tokens_not_allowed(tmpdir, request):
db.delete(token) db.delete(token)
db.commit() db.commit()
# delete the test roles
for role in roles_to_load:
roles.remove_role(db, role['name'])
@mark.role @mark.role
@mark.parametrize( @mark.parametrize(
"headers, role_list, status", "headers, rolename, scopes, status",
[ [
# no role requested - gets default 'user' role # no role requested - gets default 'token' role
({}, None, 200), ({}, None, None, 200),
# role scopes within the user's default 'user' role # role scopes within the user's default 'user' role
({}, ['self-reader'], 200), ({}, 'self-reader', ['read:users'], 200),
# role scopes outside of the user's role but within the group's role scopes of which the user is a member # role scopes outside of the user's role but within the group's role scopes of which the user is a member
({}, ['users-reader'], 200), ({}, 'groups-reader', ['read:groups'], 200),
# non-existing role request # non-existing role request
({}, ['non-existing'], 404), ({}, 'non-existing', [], 404),
# role scopes outside of both user's role and group's role scopes # role scopes outside of both user's role and group's role scopes
({}, ['users-creator'], 403), ({}, 'users-creator', ['admin:users'], 403),
], ],
) )
async def test_get_new_token_via_api(app, headers, role_list, status): async def test_get_new_token_via_api(app, headers, rolename, scopes, status):
"""Test requesting a token via API with and without roles"""
user = add_user(app.db, app, name='user') user = add_user(app.db, app, name='user')
if rolename and rolename != 'non-existing':
roles.add_role(app.db, {'name': 'self-reader', 'scopes': ['read:all']}) roles.add_role(app.db, {'name': rolename, 'scopes': scopes})
roles.add_role(app.db, {'name': 'users-reader', 'scopes': ['read:users']}) if rolename == 'groups-reader':
roles.add_role(app.db, {'name': 'users-creator', 'scopes': ['admin:users']})
# add role for a group # add role for a group
roles.add_role(app.db, {'name': 'group_role', 'scopes': ['read:users']}) roles.add_role(app.db, {'name': 'group-role', 'scopes': ['groups']})
# create a group and add the user and group_role # create a group and add the user and group_role
group = orm.Group.find(app.db, 'test_group') group = orm.Group.find(app.db, 'test-group')
if not group: if not group:
group = orm.Group(name='test_group') group = orm.Group(name='test-group')
app.db.add(group) app.db.add(group)
group.users.append(user) group_role = orm.Role.find(app.db, 'group-role')
group_role = orm.Role.find(app.db, 'group_role')
group.roles.append(group_role) group.roles.append(group_role)
user.groups.append(group)
app.db.commit() app.db.commit()
if rolename:
if role_list: body = json.dumps({'roles': [rolename]})
body = json.dumps({'roles': role_list})
else: else:
body = '' body = ''
# request a new token # request a new token
@@ -777,11 +821,10 @@ async def test_get_new_token_via_api(app, headers, role_list, status):
reply = r.json() reply = r.json()
assert 'token' in reply assert 'token' in reply
assert reply['user'] == user.name assert reply['user'] == user.name
if not role_list: if not rolename:
# token should have a default role assert reply['roles'] == ['token']
assert reply['roles'] == ['user']
else: else:
assert reply['roles'] == role_list assert reply['roles'] == [rolename]
token_id = reply['id'] token_id = reply['id']
# delete the token # delete the token

View File

@@ -9,15 +9,20 @@ from tornado.httputil import HTTPServerRequest
from .. import orm from .. import orm
from .. import roles from .. import roles
from ..handlers import BaseHandler
from ..scopes import _check_scope from ..scopes import _check_scope
from ..scopes import _parse_scopes
from ..scopes import needs_scope from ..scopes import needs_scope
from ..scopes import parse_scopes
from ..scopes import Scope from ..scopes import Scope
from .mocking import MockHub
from .utils import add_user from .utils import add_user
from .utils import api_request from .utils import api_request
from .utils import auth_header from .utils import auth_header
from .utils import public_url
def get_handler_with_scopes(scopes):
handler = mock.Mock(spec=BaseHandler)
handler.parsed_scopes = parse_scopes(scopes)
return handler
def test_scope_constructor(): def test_scope_constructor():
@@ -28,7 +33,7 @@ def test_scope_constructor():
'read:users!user={}'.format(user1), 'read:users!user={}'.format(user1),
'read:users!user={}'.format(user2), 'read:users!user={}'.format(user2),
] ]
parsed_scopes = _parse_scopes(scope_list) parsed_scopes = parse_scopes(scope_list)
assert 'read:users' in parsed_scopes assert 'read:users' in parsed_scopes
assert parsed_scopes['users'] assert parsed_scopes['users']
@@ -37,60 +42,49 @@ def test_scope_constructor():
def test_scope_precendence(): def test_scope_precendence():
scope_list = ['read:users!user=maeby', 'read:users'] scope_list = ['read:users!user=maeby', 'read:users']
parsed_scopes = _parse_scopes(scope_list) parsed_scopes = parse_scopes(scope_list)
assert parsed_scopes['read:users'] == Scope.ALL assert parsed_scopes['read:users'] == Scope.ALL
def test_scope_check_present(): def test_scope_check_present():
handler = None handler = get_handler_with_scopes(['read:users'])
scope_list = ['read:users'] assert _check_scope(handler, 'read:users')
parsed_scopes = _parse_scopes(scope_list) assert _check_scope(handler, 'read:users', user='maeby')
assert _check_scope(handler, 'read:users', parsed_scopes)
assert _check_scope(handler, 'read:users', parsed_scopes, user='maeby')
def test_scope_check_not_present(): def test_scope_check_not_present():
handler = None handler = get_handler_with_scopes(['read:users!user=maeby'])
scope_list = ['read:users!user=maeby'] assert _check_scope(handler, 'read:users')
parsed_scopes = _parse_scopes(scope_list)
assert not _check_scope(handler, 'read:users', parsed_scopes)
with pytest.raises(web.HTTPError): with pytest.raises(web.HTTPError):
_check_scope(handler, 'read:users', parsed_scopes, user='gob') _check_scope(handler, 'read:users', user='gob')
with pytest.raises(web.HTTPError): with pytest.raises(web.HTTPError):
_check_scope(handler, 'read:users', parsed_scopes, user='gob', server='server') _check_scope(handler, 'read:users', user='gob', server='server')
def test_scope_filters(): def test_scope_filters():
handler = None handler = get_handler_with_scopes(
scope_list = ['read:users', 'read:users!group=bluths', 'read:users!user=maeby'] ['read:users', 'read:users!group=bluths', 'read:users!user=maeby']
parsed_scopes = _parse_scopes(scope_list) )
assert _check_scope(handler, 'read:users', parsed_scopes, group='bluth') assert _check_scope(handler, 'read:users', group='bluth')
assert _check_scope(handler, 'read:users', parsed_scopes, user='maeby') assert _check_scope(handler, 'read:users', user='maeby')
def test_scope_multiple_filters(): def test_scope_multiple_filters():
handler = None handler = get_handler_with_scopes(['read:users!user=george_michael'])
assert _check_scope( assert _check_scope(handler, 'read:users', user='george_michael', group='bluths')
handler,
'read:users',
_parse_scopes(['read:users!user=george_michael']),
user='george_michael',
group='bluths',
)
def test_scope_parse_server_name(): def test_scope_parse_server_name():
handler = None handler = get_handler_with_scopes(
scope_list = ['users:servers!server=maeby/server1', 'read:users!user=maeby'] ['users:servers!server=maeby/server1', 'read:users!user=maeby']
parsed_scopes = _parse_scopes(scope_list)
assert _check_scope(
handler, 'users:servers', parsed_scopes, user='maeby', server='server1'
) )
assert _check_scope(handler, 'users:servers', user='maeby', server='server1')
class MockAPIHandler: class MockAPIHandler:
def __init__(self): def __init__(self):
self.scopes = {'users'} self.raw_scopes = {'users'}
self.parsed_scopes = {}
@needs_scope('users') @needs_scope('users')
def user_thing(self, user_name): def user_thing(self, user_name):
@@ -109,7 +103,8 @@ class MockAPIHandler:
return True return True
@needs_scope('users') @needs_scope('users')
def other_thing(self, other_name): def other_thing(self, non_filter_argument):
# Rely on inner vertical filtering
return True return True
@needs_scope('users') @needs_scope('users')
@@ -167,15 +162,16 @@ class MockAPIHandler:
), ),
(['users'], 'other_thing', ('gob',), True), (['users'], 'other_thing', ('gob',), True),
(['read:users'], 'other_thing', ('gob',), False), (['read:users'], 'other_thing', ('gob',), False),
(['users!user=gob'], 'other_thing', ('gob',), False), (['users!user=gob'], 'other_thing', ('gob',), True),
(['users!user=gob'], 'other_thing', ('maeby',), False), (['users!user=gob'], 'other_thing', ('maeby',), True),
], ],
) )
def test_scope_method_access(scopes, method, arguments, is_allowed): def test_scope_method_access(scopes, method, arguments, is_allowed):
obj = MockAPIHandler() obj = MockAPIHandler()
obj.current_user = mock.Mock(name=arguments[0]) obj.current_user = mock.Mock(name=arguments[0])
obj.request = mock.Mock(spec=HTTPServerRequest) obj.request = mock.Mock(spec=HTTPServerRequest)
obj.scopes = set(scopes) obj.raw_scopes = set(scopes)
obj.parsed_scopes = parse_scopes(obj.raw_scopes)
api_call = getattr(obj, method) api_call = getattr(obj, method)
if is_allowed: if is_allowed:
assert api_call(*arguments) assert api_call(*arguments)
@@ -188,7 +184,8 @@ def test_double_scoped_method_succeeds():
obj = MockAPIHandler() obj = MockAPIHandler()
obj.current_user = mock.Mock(name='lucille') obj.current_user = mock.Mock(name='lucille')
obj.request = mock.Mock(spec=HTTPServerRequest) obj.request = mock.Mock(spec=HTTPServerRequest)
obj.scopes = {'users', 'read:services'} obj.raw_scopes = {'users', 'read:services'}
obj.parsed_scopes = parse_scopes(obj.raw_scopes)
assert obj.secret_thing() assert obj.secret_thing()
@@ -196,7 +193,8 @@ def test_double_scoped_method_denials():
obj = MockAPIHandler() obj = MockAPIHandler()
obj.current_user = mock.Mock(name='lucille2') obj.current_user = mock.Mock(name='lucille2')
obj.request = mock.Mock(spec=HTTPServerRequest) obj.request = mock.Mock(spec=HTTPServerRequest)
obj.scopes = {'users', 'read:groups'} obj.raw_scopes = {'users', 'read:groups'}
obj.parsed_scopes = parse_scopes(obj.raw_scopes)
with pytest.raises(web.HTTPError): with pytest.raises(web.HTTPError):
obj.secret_thing() obj.secret_thing()
@@ -280,6 +278,73 @@ async def test_request_fake_user(app):
assert r.json()['message'] == err_message assert r.json()['message'] == err_message
async def test_refuse_exceeding_token_permissions(app):
user_name = 'abed'
user = add_user(app.db, name=user_name)
add_user(app.db, name='user')
api_token = user.new_api_token()
exceeding_role = generate_test_role(user_name, ['read:users'], 'exceeding_role')
roles.add_role(app.db, exceeding_role)
roles.add_obj(app.db, objname=api_token, kind='tokens', rolename='exceeding_role')
app.db.commit()
headers = {'Authorization': 'token %s' % api_token}
r = await api_request(app, 'users', headers=headers)
assert r.status_code == 200
result_names = {user['name'] for user in r.json()}
assert result_names == {user_name}
async def test_exceeding_user_permissions(app):
user_name = 'abed'
user = add_user(app.db, name=user_name)
add_user(app.db, name='user')
api_token = user.new_api_token()
orm_api_token = orm.APIToken.find(app.db, token=api_token)
reader_role = generate_test_role(user_name, ['read:users'], 'reader_role')
subreader_role = generate_test_role(
user_name, ['read:users:groups'], 'subreader_role'
)
roles.add_role(app.db, reader_role)
roles.add_role(app.db, subreader_role)
app.db.commit()
roles.update_roles(app.db, user, kind='users', roles=['reader_role'])
roles.update_roles(app.db, orm_api_token, kind='tokens', roles=['subreader_role'])
orm_api_token.roles.remove(orm.Role.find(app.db, name='token'))
app.db.commit()
headers = {'Authorization': 'token %s' % api_token}
r = await api_request(app, 'users', headers=headers)
assert r.status_code == 200
keys = {key for user in r.json() for key in user.keys()}
assert 'groups' in keys
assert 'last_activity' not in keys
roles.remove_obj(app.db, user_name, 'users', 'reader_role')
async def test_user_service_separation(app, mockservice_url):
name = mockservice_url.name
user = add_user(app.db, name=name)
reader_role = generate_test_role(name, ['read:users'], 'reader_role')
subreader_role = generate_test_role(name, ['read:users:groups'], 'subreader_role')
roles.add_role(app.db, reader_role)
roles.add_role(app.db, subreader_role)
app.db.commit()
roles.update_roles(app.db, user, kind='users', roles=['subreader_role'])
roles.update_roles(
app.db, mockservice_url.orm, kind='services', roles=['reader_role']
)
user.roles.remove(orm.Role.find(app.db, name='user'))
api_token = user.new_api_token()
app.db.commit()
headers = {'Authorization': 'token %s' % api_token}
r = await api_request(app, 'users', headers=headers)
assert r.status_code == 200
keys = {key for user in r.json() for key in user.keys()}
assert 'groups' in keys
assert 'last_activity' not in keys
async def test_request_user_outside_group(app): async def test_request_user_outside_group(app):
user_name = 'buster' user_name = 'buster'
fake_user = 'hello' fake_user = 'hello'
@@ -290,7 +355,6 @@ async def test_request_user_outside_group(app):
roles.add_obj(app.db, objname=user_name, kind='users', rolename='test') roles.add_obj(app.db, objname=user_name, kind='users', rolename='test')
roles.remove_obj(app.db, objname=user_name, kind='users', rolename='user') roles.remove_obj(app.db, objname=user_name, kind='users', rolename='user')
app.db.commit() app.db.commit()
print(orm.User.find(db=app.db, name=user_name).roles)
r = await api_request( r = await api_request(
app, 'users', fake_user, headers=auth_header(app.db, user_name) app, 'users', fake_user, headers=auth_header(app.db, user_name)
) )
@@ -394,3 +458,60 @@ async def test_group_scope_filter(app):
assert r.status_code == 200 assert r.status_code == 200
result_names = {user['name'] for user in r.json()} result_names = {user['name'] for user in r.json()}
assert result_names == {'sitwell', 'bluth'} assert result_names == {'sitwell', 'bluth'}
async def test_vertical_filter(app):
user_name = 'lindsey'
add_user(app.db, name=user_name)
test_role = generate_test_role(user_name, ['read:users:name'])
roles.add_role(app.db, test_role)
roles.add_obj(app.db, objname=user_name, kind='users', rolename='test')
roles.remove_obj(app.db, objname=user_name, kind='users', rolename='user')
app.db.commit()
r = await api_request(app, 'users', headers=auth_header(app.db, user_name))
assert r.status_code == 200
allowed_keys = {'name', 'kind'}
assert set([key for user in r.json() for key in user.keys()]) == allowed_keys
async def test_stacked_vertical_filter(app):
user_name = 'user'
test_role = generate_test_role(
user_name, ['read:users:activity', 'read:users:servers']
)
roles.add_role(app.db, test_role)
roles.add_obj(app.db, objname=user_name, kind='users', rolename='test')
roles.remove_obj(app.db, objname=user_name, kind='users', rolename='user')
app.db.commit()
r = await api_request(app, 'users', headers=auth_header(app.db, user_name))
assert r.status_code == 200
allowed_keys = {'name', 'kind', 'servers', 'last_activity'}
result_model = set([key for user in r.json() for key in user.keys()])
assert result_model == allowed_keys
async def test_cross_filter(app):
user_name = 'abed'
add_user(app.db, name=user_name)
test_role = generate_test_role(
user_name, ['read:users:activity', 'read:users!user=abed']
)
roles.add_role(app.db, test_role)
roles.add_obj(app.db, objname=user_name, kind='users', rolename='test')
roles.remove_obj(app.db, objname=user_name, kind='users', rolename='user')
app.db.commit()
new_users = {'britta', 'jeff', 'annie'}
for new_user_name in new_users:
add_user(app.db, name=new_user_name)
app.db.commit()
r = await api_request(app, 'users', headers=auth_header(app.db, user_name))
assert r.status_code == 200
restricted_keys = {'name', 'kind', 'last_activity'}
key_in_full_model = 'created'
for user in r.json():
if user['name'] == user_name:
assert key_in_full_model in user
else:
assert set(user.keys()) == restricted_keys
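The three tests above pin down vertical filtering: the user model returned to a client is cut down to the attributes its scopes allow, and a cross filter widens it only for the matching user. Not JupyterHub's implementation, just a sketch of the behaviour being asserted:

```python
def vertically_filter(model: dict, allowed_keys: set) -> dict:
    """Keep only the attributes the client's scopes permit (illustrative helper)."""
    return {key: value for key, value in model.items() if key in allowed_keys}

full_model = {"name": "lindsey", "kind": "user", "servers": {},
              "last_activity": None, "created": "2021-03-24"}
assert set(vertically_filter(full_model, {"name", "kind"})) == {"name", "kind"}
```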

View File

@@ -9,6 +9,7 @@ from subprocess import Popen
from async_generator import asynccontextmanager from async_generator import asynccontextmanager
from tornado.ioloop import IOLoop from tornado.ioloop import IOLoop
from ..roles import update_roles
from ..utils import maybe_future from ..utils import maybe_future
from ..utils import random_port from ..utils import random_port
from ..utils import url_path_join from ..utils import url_path_join
@@ -94,6 +95,8 @@ async def test_external_service(app):
await app.init_roles() await app.init_roles()
service = app._service_map[name] service = app._service_map[name]
api_token = service.orm.api_tokens[0]
update_roles(app.db, api_token, 'tokens', roles=['token'])
url = public_url(app, service) + '/api/users' url = public_url(app, service) + '/api/users'
r = await async_requests.get(url, allow_redirects=False) r = await async_requests.get(url, allow_redirects=False)
r.raise_for_status() r.raise_for_status()

View File

@@ -21,6 +21,7 @@ async def test_singleuser_auth(app):
user = app.users['nandy'] user = app.users['nandy']
if not user.running: if not user.running:
await user.spawn() await user.spawn()
await app.proxy.add_user(user)
url = public_url(app, user) url = public_url(app, user)
# no cookies, redirects to login page # no cookies, redirects to login page
@@ -28,6 +29,11 @@ async def test_singleuser_auth(app):
r.raise_for_status() r.raise_for_status()
assert '/hub/login' in r.url assert '/hub/login' in r.url
# unauthenticated /api/ should 403, not redirect
api_url = url_path_join(url, "api/status")
r = await async_requests.get(api_url, allow_redirects=False)
assert r.status_code == 403
# with cookies, login successful # with cookies, login successful
r = await async_requests.get(url, cookies=cookies) r = await async_requests.get(url, cookies=cookies)
r.raise_for_status() r.raise_for_status()
@@ -50,11 +56,9 @@ async def test_singleuser_auth(app):
assert urlparse(r.url).path.endswith('/oauth2/authorize') assert urlparse(r.url).path.endswith('/oauth2/authorize')
# submit the oauth form to complete authorization # submit the oauth form to complete authorization
r = await s.post(r.url, data={'scopes': ['identify']}, headers={'Referer': r.url}) r = await s.post(r.url, data={'scopes': ['identify']}, headers={'Referer': r.url})
assert ( final_url = urlparse(r.url).path.rstrip('/')
urlparse(r.url) final_path = url_path_join('/user/nandy', user.spawner.default_url or "/tree")
.path.rstrip('/') assert final_url.endswith(final_path)
.endswith(url_path_join('/user/nandy', user.spawner.default_url or "/tree"))
)
# user isn't authorized, should raise 403 # user isn't authorized, should raise 403
assert r.status_code == 403 assert r.status_code == 403
assert 'burgess' in r.text assert 'burgess' in r.text

View File

@@ -415,3 +415,14 @@ async def test_spawner_env(db):
for key, value in env_overrides.items(): for key, value in env_overrides.items():
assert key in env assert key in env
assert env[key] == value assert env[key] == value
async def test_hub_connect_url(db):
spawner = new_spawner(db, hub_connect_url="https://example.com/")
name = spawner.user.name
env = spawner.get_env()
assert env["JUPYTERHUB_API_URL"] == "https://example.com/api"
assert (
env["JUPYTERHUB_ACTIVITY_URL"]
== "https://example.com/api/users/%s/activity" % name
)

View File

@@ -206,37 +206,3 @@ def public_url(app, user_or_service=None, path=''):
return host + ujoin(prefix, path) return host + ujoin(prefix, path)
else: else:
return host + prefix return host + prefix
def get_scopes(role='admin'):
"""Get all scopes for a role. Default role is admin, alternatives are user and service"""
all_scopes = {
'admin': [
'all',
'users',
'users:name',
'users:groups',
'users:activity',
'users:servers',
'users:tokens',
'admin:users',
'admin:users:servers',
'groups',
'admin:groups',
'services',
'proxy',
'shutdown',
'read:hub',
],
'user': [
'all',
'users!user={username}',
'users:activity!user={username}',
'users:tokens!user={username}',
],
'server': ['users:activity'],
'service': ['services'],
}
scopes = all_scopes[role]
read_only = ["read:" + el for el in scopes]
return scopes + read_only

View File

@@ -13,7 +13,6 @@ python:
path: . path: .
- requirements: docs/requirements.txt - requirements: docs/requirements.txt
formats: formats:
- htmlzip - htmlzip
- epub - epub

View File

@@ -1,7 +1,7 @@
// Copyright (c) Jupyter Development Team. // Copyright (c) Jupyter Development Team.
// Distributed under the terms of the Modified BSD License. // Distributed under the terms of the Modified BSD License.
require(["jquery", "moment", "jhapi", "utils"], function( require(["jquery", "moment", "jhapi", "utils"], function (
$, $,
moment, moment,
JHAPI, JHAPI,
@@ -51,41 +51,41 @@ require(["jquery", "moment", "jhapi", "utils"], function(
window.location = window.location.pathname + "?" + query.join("&"); window.location = window.location.pathname + "?" + query.join("&");
} }
$("th").map(function(i, th) { $("th").map(function (i, th) {
th = $(th); th = $(th);
var col = th.data("sort"); var col = th.data("sort");
if (!col || col.length === 0) { if (!col || col.length === 0) {
return; return;
} }
var order = th.find("i").hasClass("fa-sort-desc") ? "asc" : "desc"; var order = th.find("i").hasClass("fa-sort-desc") ? "asc" : "desc";
th.find("a").click(function() { th.find("a").click(function () {
resort(col, order); resort(col, order);
}); });
}); });
$(".time-col").map(function(i, el) { $(".time-col").map(function (i, el) {
// convert ISO datestamps to nice momentjs ones // convert ISO datestamps to nice momentjs ones
el = $(el); el = $(el);
var m = moment(new Date(el.text().trim())); var m = moment(new Date(el.text().trim()));
el.text(m.isValid() ? m.fromNow() : "Never"); el.text(m.isValid() ? m.fromNow() : "Never");
}); });
$(".stop-server").click(function() { $(".stop-server").click(function () {
var el = $(this); var el = $(this);
var row = getRow(el); var row = getRow(el);
var serverName = row.data("server-name"); var serverName = row.data("server-name");
var user = row.data("user"); var user = row.data("user");
el.text("stopping..."); el.text("stopping...");
var stop = function(options) { var stop = function (options) {
return api.stop_server(user, options); return api.stop_server(user, options);
}; };
if (serverName !== "") { if (serverName !== "") {
stop = function(options) { stop = function (options) {
return api.stop_named_server(user, serverName, options); return api.stop_named_server(user, serverName, options);
}; };
} }
stop({ stop({
success: function() { success: function () {
el.text("stop " + serverName).addClass("hidden"); el.text("stop " + serverName).addClass("hidden");
row.find(".access-server").addClass("hidden"); row.find(".access-server").addClass("hidden");
row.find(".start-server").removeClass("hidden"); row.find(".start-server").removeClass("hidden");
@@ -93,20 +93,20 @@ require(["jquery", "moment", "jhapi", "utils"], function(
}); });
}); });
$(".delete-server").click(function() { $(".delete-server").click(function () {
var el = $(this); var el = $(this);
var row = getRow(el); var row = getRow(el);
var serverName = row.data("server-name"); var serverName = row.data("server-name");
var user = row.data("user"); var user = row.data("user");
el.text("deleting..."); el.text("deleting...");
api.delete_named_server(user, serverName, { api.delete_named_server(user, serverName, {
success: function() { success: function () {
row.remove(); row.remove();
}, },
}); });
}); });
$(".access-server").map(function(i, el) { $(".access-server").map(function (i, el) {
el = $(el); el = $(el);
var row = getRow(el); var row = getRow(el);
var user = row.data("user"); var user = row.data("user");
@@ -120,7 +120,7 @@ require(["jquery", "moment", "jhapi", "utils"], function(
if (admin_access && options_form) { if (admin_access && options_form) {
// if admin access and options form are enabled // if admin access and options form are enabled
// link to spawn page instead of making API requests // link to spawn page instead of making API requests
$(".start-server").map(function(i, el) { $(".start-server").map(function (i, el) {
el = $(el); el = $(el);
var row = getRow(el); var row = getRow(el);
var user = row.data("user"); var user = row.data("user");
@@ -134,22 +134,22 @@ require(["jquery", "moment", "jhapi", "utils"], function(
// since it would mean opening a bunch of tabs // since it would mean opening a bunch of tabs
$("#start-all-servers").addClass("hidden"); $("#start-all-servers").addClass("hidden");
} else { } else {
$(".start-server").click(function() { $(".start-server").click(function () {
var el = $(this); var el = $(this);
var row = getRow(el); var row = getRow(el);
var user = row.data("user"); var user = row.data("user");
var serverName = row.data("server-name"); var serverName = row.data("server-name");
el.text("starting..."); el.text("starting...");
var start = function(options) { var start = function (options) {
return api.start_server(user, options); return api.start_server(user, options);
}; };
if (serverName !== "") { if (serverName !== "") {
start = function(options) { start = function (options) {
return api.start_named_server(user, serverName, options); return api.start_named_server(user, serverName, options);
}; };
} }
start({ start({
success: function() { success: function () {
el.text("start " + serverName).addClass("hidden"); el.text("start " + serverName).addClass("hidden");
row.find(".stop-server").removeClass("hidden"); row.find(".stop-server").removeClass("hidden");
row.find(".access-server").removeClass("hidden"); row.find(".access-server").removeClass("hidden");
@@ -158,7 +158,7 @@ require(["jquery", "moment", "jhapi", "utils"], function(
}); });
} }
$(".edit-user").click(function() { $(".edit-user").click(function () {
var el = $(this); var el = $(this);
var row = getRow(el); var row = getRow(el);
var user = row.data("user"); var user = row.data("user");
@@ -172,7 +172,7 @@ require(["jquery", "moment", "jhapi", "utils"], function(
$("#edit-user-dialog") $("#edit-user-dialog")
.find(".save-button") .find(".save-button")
.click(function() { .click(function () {
var dialog = $("#edit-user-dialog"); var dialog = $("#edit-user-dialog");
var user = dialog.data("user"); var user = dialog.data("user");
var name = dialog.find(".username-input").val(); var name = dialog.find(".username-input").val();
@@ -184,14 +184,14 @@ require(["jquery", "moment", "jhapi", "utils"], function(
name: name, name: name,
}, },
{ {
success: function() { success: function () {
window.location.reload(); window.location.reload();
}, },
} }
); );
}); });
$(".delete-user").click(function() { $(".delete-user").click(function () {
var el = $(this); var el = $(this);
var row = getRow(el); var row = getRow(el);
var user = row.data("user"); var user = row.data("user");
@@ -202,18 +202,18 @@ require(["jquery", "moment", "jhapi", "utils"], function(
$("#delete-user-dialog") $("#delete-user-dialog")
.find(".delete-button") .find(".delete-button")
.click(function() { .click(function () {
var dialog = $("#delete-user-dialog"); var dialog = $("#delete-user-dialog");
var username = dialog.find(".delete-username").text(); var username = dialog.find(".delete-username").text();
console.log("deleting", username); console.log("deleting", username);
api.delete_user(username, { api.delete_user(username, {
success: function() { success: function () {
window.location.reload(); window.location.reload();
}, },
}); });
}); });
$("#add-users").click(function() { $("#add-users").click(function () {
var dialog = $("#add-users-dialog"); var dialog = $("#add-users-dialog");
dialog.find(".username-input").val(""); dialog.find(".username-input").val("");
dialog.find(".admin-checkbox").prop("checked", false); dialog.find(".admin-checkbox").prop("checked", false);
@@ -222,15 +222,12 @@ require(["jquery", "moment", "jhapi", "utils"], function(
$("#add-users-dialog") $("#add-users-dialog")
.find(".save-button") .find(".save-button")
.click(function() { .click(function () {
var dialog = $("#add-users-dialog"); var dialog = $("#add-users-dialog");
var lines = dialog var lines = dialog.find(".username-input").val().split("\n");
.find(".username-input")
.val()
.split("\n");
var admin = dialog.find(".admin-checkbox").prop("checked"); var admin = dialog.find(".admin-checkbox").prop("checked");
var usernames = []; var usernames = [];
lines.map(function(line) { lines.map(function (line) {
var username = line.trim(); var username = line.trim();
if (username.length) { if (username.length) {
usernames.push(username); usernames.push(username);
@@ -241,47 +238,45 @@ require(["jquery", "moment", "jhapi", "utils"], function(
usernames, usernames,
{ admin: admin }, { admin: admin },
{ {
success: function() { success: function () {
window.location.reload(); window.location.reload();
}, },
} }
); );
}); });
$("#stop-all-servers").click(function() { $("#stop-all-servers").click(function () {
$("#stop-all-servers-dialog").modal(); $("#stop-all-servers-dialog").modal();
}); });
$("#start-all-servers").click(function() { $("#start-all-servers").click(function () {
$("#start-all-servers-dialog").modal(); $("#start-all-servers-dialog").modal();
}); });
$("#stop-all-servers-dialog") $("#stop-all-servers-dialog")
.find(".stop-all-button") .find(".stop-all-button")
.click(function() { .click(function () {
// stop all clicks all the active stop buttons // stop all clicks all the active stop buttons
$(".stop-server") $(".stop-server").not(".hidden").click();
.not(".hidden")
.click();
}); });
function start(el) { function start(el) {
return function() { return function () {
$(el).click(); $(el).click();
}; };
} }
$("#start-all-servers-dialog") $("#start-all-servers-dialog")
.find(".start-all-button") .find(".start-all-button")
.click(function() { .click(function () {
$(".start-server") $(".start-server")
.not(".hidden") .not(".hidden")
.each(function(i) { .each(function (i) {
setTimeout(start(this), i * 500); setTimeout(start(this), i * 500);
}); });
}); });
$("#shutdown-hub").click(function() { $("#shutdown-hub").click(function () {
var dialog = $("#shutdown-hub-dialog"); var dialog = $("#shutdown-hub-dialog");
dialog.find("input[type=checkbox]").prop("checked", true); dialog.find("input[type=checkbox]").prop("checked", true);
dialog.modal(); dialog.modal();
@@ -289,7 +284,7 @@ require(["jquery", "moment", "jhapi", "utils"], function(
$("#shutdown-hub-dialog") $("#shutdown-hub-dialog")
.find(".shutdown-button") .find(".shutdown-button")
.click(function() { .click(function () {
var dialog = $("#shutdown-hub-dialog"); var dialog = $("#shutdown-hub-dialog");
var servers = dialog.find(".shutdown-servers-checkbox").prop("checked"); var servers = dialog.find(".shutdown-servers-checkbox").prop("checked");
var proxy = dialog.find(".shutdown-proxy-checkbox").prop("checked"); var proxy = dialog.find(".shutdown-proxy-checkbox").prop("checked");

View File

@@ -1,11 +1,7 @@
// Copyright (c) Jupyter Development Team. // Copyright (c) Jupyter Development Team.
// Distributed under the terms of the Modified BSD License. // Distributed under the terms of the Modified BSD License.
require(["jquery", "moment", "jhapi"], function( require(["jquery", "moment", "jhapi"], function ($, moment, JHAPI) {
$,
moment,
JHAPI
) {
"use strict"; "use strict";
var base_url = window.jhdata.base_url; var base_url = window.jhdata.base_url;
@@ -22,10 +18,7 @@ require(["jquery", "moment", "jhapi"], function(
} }
function disableRow(row) { function disableRow(row) {
row row.find(".btn").attr("disabled", true).off("click");
.find(".btn")
.attr("disabled", true)
.off("click");
} }
function enableRow(row, running) { function enableRow(row, running) {
@@ -68,7 +61,7 @@ require(["jquery", "moment", "jhapi"], function(
// request // request
api.stop_named_server(user, serverName, { api.stop_named_server(user, serverName, {
success: function() { success: function () {
enableRow(row, false); enableRow(row, false);
}, },
}); });
@@ -83,22 +76,22 @@ require(["jquery", "moment", "jhapi"], function(
// request // request
api.delete_named_server(user, serverName, { api.delete_named_server(user, serverName, {
success: function() { success: function () {
row.remove(); row.remove();
}, },
}); });
} }
// initial state: hook up click events // initial state: hook up click events
$("#stop").click(function() { $("#stop").click(function () {
$("#start") $("#start")
.attr("disabled", true) .attr("disabled", true)
.attr("title", "Your server is stopping") .attr("title", "Your server is stopping")
.click(function() { .click(function () {
return false; return false;
}); });
api.stop_server(user, { api.stop_server(user, {
success: function() { success: function () {
$("#stop").hide(); $("#stop").hide();
$("#start") $("#start")
.text("Start My Server") .text("Start My Server")
@@ -111,7 +104,7 @@ require(["jquery", "moment", "jhapi"], function(
}); });
$(".new-server-btn").click(startServer); $(".new-server-btn").click(startServer);
$(".new-server-name").on('keypress', function(e) { $(".new-server-name").on("keypress", function (e) {
if (e.which === 13) { if (e.which === 13) {
startServer.call(this); startServer.call(this);
} }
@@ -121,7 +114,7 @@ require(["jquery", "moment", "jhapi"], function(
$(".delete-server").click(deleteServer); $(".delete-server").click(deleteServer);
// render timestamps // render timestamps
$(".time-col").map(function(i, el) { $(".time-col").map(function (i, el) {
// convert ISO datestamps to nice momentjs ones // convert ISO datestamps to nice momentjs ones
el = $(el); el = $(el);
var m = moment(new Date(el.text().trim())); var m = moment(new Date(el.text().trim()));

View File

@@ -1,10 +1,10 @@
// Copyright (c) Jupyter Development Team. // Copyright (c) Jupyter Development Team.
// Distributed under the terms of the Modified BSD License. // Distributed under the terms of the Modified BSD License.
define(["jquery", "utils"], function($, utils) { define(["jquery", "utils"], function ($, utils) {
"use strict"; "use strict";
var JHAPI = function(base_url) { var JHAPI = function (base_url) {
this.base_url = base_url; this.base_url = base_url;
}; };
@@ -18,21 +18,21 @@ define(["jquery", "utils"], function($, utils) {
error: utils.ajax_error_dialog, error: utils.ajax_error_dialog,
}; };
var update = function(d1, d2) { var update = function (d1, d2) {
$.map(d2, function(i, key) { $.map(d2, function (i, key) {
d1[key] = d2[key]; d1[key] = d2[key];
}); });
return d1; return d1;
}; };
var ajax_defaults = function(options) { var ajax_defaults = function (options) {
var d = {}; var d = {};
update(d, default_options); update(d, default_options);
update(d, options); update(d, options);
return d; return d;
}; };
JHAPI.prototype.api_request = function(path, options) { JHAPI.prototype.api_request = function (path, options) {
options = options || {}; options = options || {};
options = ajax_defaults(options || {}); options = ajax_defaults(options || {});
var url = utils.url_path_join( var url = utils.url_path_join(
@@ -43,13 +43,13 @@ define(["jquery", "utils"], function($, utils) {
$.ajax(url, options); $.ajax(url, options);
}; };
JHAPI.prototype.start_server = function(user, options) { JHAPI.prototype.start_server = function (user, options) {
options = options || {}; options = options || {};
options = update(options, { type: "POST", dataType: null }); options = update(options, { type: "POST", dataType: null });
this.api_request(utils.url_path_join("users", user, "server"), options); this.api_request(utils.url_path_join("users", user, "server"), options);
}; };
JHAPI.prototype.start_named_server = function(user, server_name, options) { JHAPI.prototype.start_named_server = function (user, server_name, options) {
options = options || {}; options = options || {};
options = update(options, { type: "POST", dataType: null }); options = update(options, { type: "POST", dataType: null });
this.api_request( this.api_request(
@@ -58,13 +58,13 @@ define(["jquery", "utils"], function($, utils) {
); );
}; };
JHAPI.prototype.stop_server = function(user, options) { JHAPI.prototype.stop_server = function (user, options) {
options = options || {}; options = options || {};
options = update(options, { type: "DELETE", dataType: null }); options = update(options, { type: "DELETE", dataType: null });
this.api_request(utils.url_path_join("users", user, "server"), options); this.api_request(utils.url_path_join("users", user, "server"), options);
}; };
JHAPI.prototype.stop_named_server = function(user, server_name, options) { JHAPI.prototype.stop_named_server = function (user, server_name, options) {
options = options || {}; options = options || {};
options = update(options, { type: "DELETE", dataType: null }); options = update(options, { type: "DELETE", dataType: null });
this.api_request( this.api_request(
@@ -73,21 +73,21 @@ define(["jquery", "utils"], function($, utils) {
); );
}; };
JHAPI.prototype.delete_named_server = function(user, server_name, options) { JHAPI.prototype.delete_named_server = function (user, server_name, options) {
options = options || {}; options = options || {};
options.data = JSON.stringify({ remove: true }); options.data = JSON.stringify({ remove: true });
return this.stop_named_server(user, server_name, options); return this.stop_named_server(user, server_name, options);
}; };
JHAPI.prototype.list_users = function(options) { JHAPI.prototype.list_users = function (options) {
this.api_request("users", options); this.api_request("users", options);
}; };
JHAPI.prototype.get_user = function(user, options) { JHAPI.prototype.get_user = function (user, options) {
this.api_request(utils.url_path_join("users", user), options); this.api_request(utils.url_path_join("users", user), options);
}; };
JHAPI.prototype.add_users = function(usernames, userinfo, options) { JHAPI.prototype.add_users = function (usernames, userinfo, options) {
options = options || {}; options = options || {};
var data = update(userinfo, { usernames: usernames }); var data = update(userinfo, { usernames: usernames });
options = update(options, { options = update(options, {
@@ -99,7 +99,7 @@ define(["jquery", "utils"], function($, utils) {
this.api_request("users", options); this.api_request("users", options);
}; };
JHAPI.prototype.edit_user = function(user, userinfo, options) { JHAPI.prototype.edit_user = function (user, userinfo, options) {
options = options || {}; options = options || {};
options = update(options, { options = update(options, {
type: "PATCH", type: "PATCH",
@@ -110,7 +110,7 @@ define(["jquery", "utils"], function($, utils) {
this.api_request(utils.url_path_join("users", user), options); this.api_request(utils.url_path_join("users", user), options);
}; };
JHAPI.prototype.admin_access = function(user, options) { JHAPI.prototype.admin_access = function (user, options) {
options = options || {}; options = options || {};
options = update(options, { options = update(options, {
type: "POST", type: "POST",
@@ -123,13 +123,13 @@ define(["jquery", "utils"], function($, utils) {
); );
}; };
JHAPI.prototype.delete_user = function(user, options) { JHAPI.prototype.delete_user = function (user, options) {
options = options || {}; options = options || {};
options = update(options, { type: "DELETE", dataType: null }); options = update(options, { type: "DELETE", dataType: null });
this.api_request(utils.url_path_join("users", user), options); this.api_request(utils.url_path_join("users", user), options);
}; };
JHAPI.prototype.request_token = function(user, props, options) { JHAPI.prototype.request_token = function (user, props, options) {
options = options || {}; options = options || {};
options = update(options, { type: "POST" }); options = update(options, { type: "POST" });
if (props) { if (props) {
@@ -138,7 +138,7 @@ define(["jquery", "utils"], function($, utils) {
this.api_request(utils.url_path_join("users", user, "tokens"), options); this.api_request(utils.url_path_join("users", user, "tokens"), options);
}; };
JHAPI.prototype.revoke_token = function(user, token_id, options) { JHAPI.prototype.revoke_token = function (user, token_id, options) {
options = options || {}; options = options || {};
options = update(options, { type: "DELETE" }); options = update(options, { type: "DELETE" });
this.api_request( this.api_request(
@@ -147,7 +147,7 @@ define(["jquery", "utils"], function($, utils) {
); );
}; };
JHAPI.prototype.shutdown_hub = function(data, options) { JHAPI.prototype.shutdown_hub = function (data, options) {
options = options || {}; options = options || {};
options = update(options, { type: "POST" }); options = update(options, { type: "POST" });
if (data) { if (data) {

View File

@@ -1,11 +1,11 @@
// Copyright (c) Jupyter Development Team. // Copyright (c) Jupyter Development Team.
// Distributed under the terms of the Modified BSD License. // Distributed under the terms of the Modified BSD License.
require(["jquery", "utils"], function($, utils) { require(["jquery", "utils"], function ($, utils) {
"use strict"; "use strict";
var hash = utils.parse_url(window.location.href).hash; var hash = utils.parse_url(window.location.href).hash;
if (hash !== undefined && hash !== '') { if (hash !== undefined && hash !== "") {
var el = $("#start"); var el = $("#start");
var current_spawn_url = el.attr("href"); var current_spawn_url = el.attr("href");
el.attr("href", current_spawn_url + hash); el.attr("href", current_spawn_url + hash);

View File

@@ -1,21 +1,21 @@
// Copyright (c) Jupyter Development Team. // Copyright (c) Jupyter Development Team.
// Distributed under the terms of the Modified BSD License. // Distributed under the terms of the Modified BSD License.
require(["jquery", "jhapi", "moment"], function($, JHAPI, moment) { require(["jquery", "jhapi", "moment"], function ($, JHAPI, moment) {
"use strict"; "use strict";
var base_url = window.jhdata.base_url; var base_url = window.jhdata.base_url;
var user = window.jhdata.user; var user = window.jhdata.user;
var api = new JHAPI(base_url); var api = new JHAPI(base_url);
$(".time-col").map(function(i, el) { $(".time-col").map(function (i, el) {
// convert ISO datestamps to nice momentjs ones // convert ISO datestamps to nice momentjs ones
el = $(el); el = $(el);
var m = moment(new Date(el.text().trim())); var m = moment(new Date(el.text().trim()));
el.text(m.isValid() ? m.fromNow() : el.text()); el.text(m.isValid() ? m.fromNow() : el.text());
}); });
$("#request-token-form").submit(function() { $("#request-token-form").submit(function () {
var note = $("#token-note").val(); var note = $("#token-note").val();
if (!note.length) { if (!note.length) {
note = "Requested via token page"; note = "Requested via token page";
@@ -24,7 +24,7 @@ require(["jquery", "jhapi", "moment"], function($, JHAPI, moment) {
user, user,
{ note: note }, { note: note },
{ {
success: function(reply) { success: function (reply) {
$("#token-result").text(reply.token); $("#token-result").text(reply.token);
$("#token-area").show(); $("#token-area").show();
}, },
@@ -40,12 +40,12 @@ require(["jquery", "jhapi", "moment"], function($, JHAPI, moment) {
return element; return element;
} }
$(".revoke-token-btn").click(function() { $(".revoke-token-btn").click(function () {
var el = $(this); var el = $(this);
var row = get_token_row(el); var row = get_token_row(el);
el.attr("disabled", true); el.attr("disabled", true);
api.revoke_token(user, row.data("token-id"), { api.revoke_token(user, row.data("token-id"), {
success: function(reply) { success: function (reply) {
row.remove(); row.remove();
}, },
}); });

View File

@@ -5,10 +5,10 @@
// Modifications Copyright (c) Jupyter Development Team. // Modifications Copyright (c) Jupyter Development Team.
// Distributed under the terms of the Modified BSD License. // Distributed under the terms of the Modified BSD License.
define(["jquery"], function($) { define(["jquery"], function ($) {
"use strict"; "use strict";
var url_path_join = function() { var url_path_join = function () {
// join a sequence of url components with '/' // join a sequence of url components with '/'
var url = ""; var url = "";
for (var i = 0; i < arguments.length; i++) { for (var i = 0; i < arguments.length; i++) {
@@ -25,7 +25,7 @@ define(["jquery"], function($) {
return url; return url;
}; };
var parse_url = function(url) { var parse_url = function (url) {
// an `a` element with an href allows attr-access to the parsed segments of a URL // an `a` element with an href allows attr-access to the parsed segments of a URL
// a = parse_url("http://localhost:8888/path/name#hash") // a = parse_url("http://localhost:8888/path/name#hash")
// a.protocol = "http:" // a.protocol = "http:"
@@ -39,29 +39,24 @@ define(["jquery"], function($) {
return a; return a;
}; };
var encode_uri_components = function(uri) { var encode_uri_components = function (uri) {
// encode just the components of a multi-segment uri, // encode just the components of a multi-segment uri,
// leaving '/' separators // leaving '/' separators
return uri
  .split("/")
  .map(encodeURIComponent)
  .join("/"); return uri.split("/").map(encodeURIComponent).join("/");
}; };
var url_join_encode = function() { var url_join_encode = function () {
// join a sequence of url components with '/', // join a sequence of url components with '/',
// encoding each component with encodeURIComponent // encoding each component with encodeURIComponent
return encode_uri_components(url_path_join.apply(null, arguments)); return encode_uri_components(url_path_join.apply(null, arguments));
}; };
var escape_html = function(text) { var escape_html = function (text) {
// escape text to HTML // escape text to HTML
return $("<div/>")
  .text(text)
  .html(); return $("<div/>").text(text).html();
}; };
var get_body_data = function(key) { var get_body_data = function (key) {
// get a url-encoded item from body.data and decode it // get a url-encoded item from body.data and decode it
// we should never have any encoded URLs anywhere else in code // we should never have any encoded URLs anywhere else in code
// until we are building an actual request // until we are building an actual request
@@ -69,7 +64,7 @@ define(["jquery"], function($) {
}; };
// http://stackoverflow.com/questions/2400935/browser-detection-in-javascript // http://stackoverflow.com/questions/2400935/browser-detection-in-javascript
var browser = (function() { var browser = (function () {
if (typeof navigator === "undefined") { if (typeof navigator === "undefined") {
// navigator undefined in node // navigator undefined in node
return "None"; return "None";
@@ -86,7 +81,7 @@ define(["jquery"], function($) {
})(); })();
// http://stackoverflow.com/questions/11219582/how-to-detect-my-browser-version-and-operating-system-using-javascript // http://stackoverflow.com/questions/11219582/how-to-detect-my-browser-version-and-operating-system-using-javascript
var platform = (function() { var platform = (function () {
if (typeof navigator === "undefined") { if (typeof navigator === "undefined") {
// navigator undefined in node // navigator undefined in node
return "None"; return "None";
@@ -99,7 +94,7 @@ define(["jquery"], function($) {
return OSName; return OSName;
})(); })();
var ajax_error_msg = function(jqXHR) { var ajax_error_msg = function (jqXHR) {
// Return a JSON error message if there is one, // Return a JSON error message if there is one,
// otherwise the basic HTTP status text. // otherwise the basic HTTP status text.
if (jqXHR.responseJSON && jqXHR.responseJSON.message) { if (jqXHR.responseJSON && jqXHR.responseJSON.message) {
@@ -109,7 +104,7 @@ define(["jquery"], function($) {
} }
}; };
var log_ajax_error = function(jqXHR, status, error) { var log_ajax_error = function (jqXHR, status, error) {
// log ajax failures with informative messages // log ajax failures with informative messages
var msg = "API request failed (" + jqXHR.status + "): "; var msg = "API request failed (" + jqXHR.status + "): ";
console.log(jqXHR); console.log(jqXHR);
@@ -118,7 +113,7 @@ define(["jquery"], function($) {
return msg; return msg;
}; };
var ajax_error_dialog = function(jqXHR, status, error) { var ajax_error_dialog = function (jqXHR, status, error) {
console.log("ajax dialog", arguments); console.log("ajax dialog", arguments);
var msg = log_ajax_error(jqXHR, status, error); var msg = log_ajax_error(jqXHR, status, error);
var dialog = $("#error-dialog"); var dialog = $("#error-dialog");

View File

@@ -2,9 +2,9 @@
display: table; display: table;
height: 80vh; height: 80vh;
& #insecure-login-warning{ & #insecure-login-warning {
.bg-warning(); .bg-warning();
padding:10px; padding: 10px;
} }
.service-login { .service-login {
@@ -22,16 +22,19 @@
font-size: large; font-size: large;
} }
.input-group, input[type=text], button { .input-group,
input[type="text"],
button {
width: 100%; width: 100%;
} }
input[type=submit] { input[type="submit"] {
margin-top: 0px; margin-top: 0px;
} }
.form-control:focus, input[type=submit]:focus { .form-control:focus,
box-shadow: inset 0 1px 1px rgba(0,0,0,.075), 0 0 8px @jupyter-orange; input[type="submit"]:focus {
box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075), 0 0 8px @jupyter-orange;
border-color: @jupyter-orange; border-color: @jupyter-orange;
outline-color: @jupyter-orange; outline-color: @jupyter-orange;
} }

View File

@@ -49,7 +49,6 @@
// background: rgba(66, 165, 245, 0.2); // background: rgba(66, 165, 245, 0.2);
// } // }
.feedback { .feedback {
&-container { &-container {
margin-top: 16px; margin-top: 16px;
@@ -62,5 +61,4 @@
color: lightgrey; color: lightgrey;
} }
} }
} }

View File

@@ -4,8 +4,8 @@
@navbar-height: 40px; @navbar-height: 40px;
@grid-float-breakpoint: @screen-xs-min; @grid-float-breakpoint: @screen-xs-min;
@jupyter-orange: #F37524; @jupyter-orange: #f37524;
@jupyter-red: #E34F21; @jupyter-red: #e34f21;
// color blind-friendly alternative to red/green // color blind-friendly alternative to red/green
// from 5-class RdYlBu via colorbrewer.org // from 5-class RdYlBu via colorbrewer.org
// eliminate distinction between 'primary' and 'success' // eliminate distinction between 'primary' and 'success'

View File

@@ -130,7 +130,9 @@
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Services<span class="caret"></span></a> <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Services<span class="caret"></span></a>
<ul class="dropdown-menu"> <ul class="dropdown-menu">
{% for service in services %} {% for service in services %}
{% block service scoped %}
<li><a class="dropdown-item" href="{{service.prefix}}">{{service.name}}</a></li> <li><a class="dropdown-item" href="{{service.prefix}}">{{service.name}}</a></li>
{% endblock %}
{% endfor %} {% endfor %}
</ul> </ul>
</li> </li>

View File

@@ -31,6 +31,6 @@ This particular image runs as the `jovyan` user, with home directory at `/home/j
## Note on persistence ## Note on persistence
This home directory, `/home/jovyan`, is *not* persistent by default, This home directory, `/home/jovyan`, is _not_ persistent by default,
so some configuration is required unless the directory is to be used so some configuration is required unless the directory is to be used
with temporary or demonstration JupyterHub deployments. with temporary or demonstration JupyterHub deployments.
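
A minimal sketch of that configuration, assuming a plain `docker run` deployment (the volume name `jovyan-home` and the image name `my-jupyterhub-image` are placeholders, and any other options your setup needs are omitted), is to mount a named Docker volume over `/home/jovyan`:

```bash
# Sketch only: "jovyan-home" and "my-jupyterhub-image" are example names.
# Mounting a named volume over /home/jovyan keeps notebooks and settings
# on the Docker host instead of discarding them with the container.
docker volume create jovyan-home
docker run -d -v jovyan-home:/home/jovyan my-jupyterhub-image
```

In a fuller deployment the same idea is usually expressed through the spawner's volume configuration rather than by hand, but either way the home directory has to live outside the container to persist.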