Merge branch 'rbac' into read_roles

0mar
2021-06-08 15:31:30 +02:00
59 changed files with 968 additions and 681 deletions


@@ -80,7 +80,7 @@ jobs:
steps:
- name: Should we push this image to a public registry?
run: |
if [ "${{ startsWith(github.ref, 'refs/tags/') || (github.ref == 'refs/heads/master') }}" = "true" ]; then
if [ "${{ startsWith(github.ref, 'refs/tags/') || (github.ref == 'refs/heads/main') }}" = "true" ]; then
# Empty => Docker Hub
echo "REGISTRY=" >> $GITHUB_ENV
else
@@ -114,7 +114,7 @@ jobs:
# https://github.com/jupyterhub/jupyterhub/settings/secrets/actions
if: env.REGISTRY != 'localhost:5000/'
run: |
docker login -u "${{ secrets.DOCKER_USERNAME }}" -p "${{ secrets.DOCKERHUB_TOKEN }}"
docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" -p "${{ secrets.DOCKERHUB_TOKEN }}"
# https://github.com/jupyterhub/action-major-minor-tag-calculator
# If this is a tagged build this will return additional parent tags.

.gitignore vendored

@@ -29,3 +29,4 @@ htmlcov
pip-wheel-metadata
docs/source/reference/metrics.rst
oldest-requirements.txt
jupyterhub-proxy.pid


@@ -1 +1 @@
Please refer to [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md).
Please refer to [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md).


@@ -3,7 +3,7 @@
Welcome! As a [Jupyter](https://jupyter.org) project,
you can follow the [Jupyter contributor guide](https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html).
Make sure to also follow [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md)
Make sure to also follow [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md)
for a friendly and welcoming collaborative environment.
## Setting up a development environment


@@ -14,7 +14,7 @@
[![GitHub Workflow Status - Test](https://img.shields.io/github/workflow/status/jupyterhub/jupyterhub/Test?logo=github&label=tests)](https://github.com/jupyterhub/jupyterhub/actions)
[![DockerHub build status](https://img.shields.io/docker/build/jupyterhub/jupyterhub?logo=docker&label=build)](https://hub.docker.com/r/jupyterhub/jupyterhub/tags)
[![CircleCI build status](https://img.shields.io/circleci/build/github/jupyterhub/jupyterhub?logo=circleci)](https://circleci.com/gh/jupyterhub/jupyterhub)<!-- CircleCI Token: b5b65862eb2617b9a8d39e79340b0a6b816da8cc -->
[![Test coverage of code](https://codecov.io/gh/jupyterhub/jupyterhub/branch/master/graph/badge.svg)](https://codecov.io/gh/jupyterhub/jupyterhub)
[![Test coverage of code](https://codecov.io/gh/jupyterhub/jupyterhub/branch/main/graph/badge.svg)](https://codecov.io/gh/jupyterhub/jupyterhub)
[![GitHub](https://img.shields.io/badge/issue_tracking-github-blue?logo=github)](https://github.com/jupyterhub/jupyterhub/issues)
[![Discourse](https://img.shields.io/badge/help_forum-discourse-blue?logo=discourse)](https://discourse.jupyter.org/c/jupyterhub)
[![Gitter](https://img.shields.io/badge/social_chat-gitter-blue?logo=gitter)](https://gitter.im/jupyterhub/jupyterhub)
@@ -46,7 +46,7 @@ Basic principles for operation are:
servers.
JupyterHub also provides a
[REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default)
[REST API](https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/HEAD/docs/rest-api.yml#/default)
for administration of the Hub and its users.
## Installation
@@ -239,7 +239,7 @@ our JupyterHub [Gitter](https://gitter.im/jupyterhub/jupyterhub) channel.
- [Reporting Issues](https://github.com/jupyterhub/jupyterhub/issues)
- [JupyterHub tutorial](https://github.com/jupyterhub/jupyterhub-tutorial)
- [Documentation for JupyterHub](https://jupyterhub.readthedocs.io/en/latest/) | [PDF (latest)](https://media.readthedocs.org/pdf/jupyterhub/latest/jupyterhub.pdf) | [PDF (stable)](https://media.readthedocs.org/pdf/jupyterhub/stable/jupyterhub.pdf)
- [Documentation for JupyterHub's REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default)
- [Documentation for JupyterHub's REST API](https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/HEAD/docs/rest-api.yml#/default)
- [Documentation for Project Jupyter](http://jupyter.readthedocs.io/en/latest/index.html) | [PDF](https://media.readthedocs.org/pdf/jupyter/latest/jupyter.pdf)
- [Project Jupyter website](https://jupyter.org)
- [Project Jupyter community](https://jupyter.org/community)


@@ -1,4 +1,4 @@
# see me at: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#/default
# see me at: https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/HEAD/docs/rest-api.yml#/default
swagger: "2.0"
info:
title: JupyterHub
@@ -143,6 +143,22 @@ paths:
inactive: all users who have *no* active servers (complement of active)
Added in JupyterHub 1.3
- name: offset
in: query
required: false
type: number
description: |
Return a number of users starting at the given offset.
Can be used with limit to paginate.
If unspecified, return all users.
- name: limit
in: query
required: false
type: number
description: |
Return a finite number of users.
Can be used with offset to paginate.
If unspecified, use api_page_default_limit.
responses:
"200":
description: The Hub's user list
@@ -540,6 +556,23 @@ paths:
security:
- oauth2:
- read:groups
parameters:
- name: offset
in: query
required: false
type: number
description: |
Return a number of groups starting at the specified offset.
Can be used with limit to paginate.
If unspecified, return all groups.
- name: limit
in: query
required: false
type: number
description: |
Return a finite number of groups.
Can be used with offset to paginate.
If unspecified, use api_page_default_limit.
responses:
"200":
description: The list of groups
@@ -686,6 +719,23 @@ paths:
security:
- oauth2:
- proxy
parameters:
- name: offset
in: query
required: false
type: number
description: |
Return a number of routes starting at the given offset.
Can be used with limit to paginate.
If unspecified, return all routes.
- name: limit
in: query
required: false
type: number
description: |
Return a finite number of routes.
Can be used with offset to paginate.
If unspecified, use api_page_default_limit.
responses:
"200":
description: Routing table
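For illustration (not part of this changeset), a client request using the new `offset`/`limit` parameters might look like the following sketch. The Hub URL matches the example used elsewhere in these docs, and the token is assumed to be in `JUPYTERHUB_API_TOKEN` with the required scope:

```python
# Sketch: fetch the first 20 groups using the new pagination parameters.
import os

import requests

api_url = 'http://127.0.0.1:8081/hub/api'
token = os.environ['JUPYTERHUB_API_TOKEN']  # assumed to carry read:groups

r = requests.get(
    api_url + '/groups',
    params={'offset': 0, 'limit': 20},
    headers={'Authorization': 'token %s' % token},
)
r.raise_for_status()
print(r.json())  # at most 20 group models, starting at offset 0
```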


@@ -18,7 +18,7 @@ information on:
- learning more about JupyterHub's API
The same JupyterHub API spec, as found here, is available in an interactive form
`here (on swagger's petstore) <http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default>`__.
`here (on swagger's petstore) <https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/HEAD/docs/rest-api.yml#!/default>`__.
The `OpenAPI Initiative`_ (fka Swagger™) is a project used to describe
and document RESTful APIs.


@@ -985,7 +985,7 @@ Bugfixes on 0.6:
### [0.6.0] - 2016-04-25
- JupyterHub has moved to a new `jupyterhub` namespace on GitHub and Docker. What was `juptyer/jupyterhub` is now `jupyterhub/jupyterhub`, etc.
- JupyterHub has moved to a new `jupyterhub` namespace on GitHub and Docker. What was `jupyter/jupyterhub` is now `jupyterhub/jupyterhub`, etc.
- `jupyterhub/jupyterhub` image on DockerHub no longer loads the jupyterhub_config.py in an ONBUILD step. A new `jupyterhub/jupyterhub-onbuild` image does this
- Add statsd support, via `c.JupyterHub.statsd_{host,port,prefix}`
- Update to traitlets 4.1 `@default`, `@observe` APIs for traits


@@ -13,7 +13,7 @@ Building documentation locally
We use `sphinx <http://sphinx-doc.org>`_ to build our documentation. It takes
our documentation source files (written in `markdown
<https://daringfireball.net/projects/markdown/>`_ or `reStructuredText
<http://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_ &
<https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_ &
stored under the ``docs/source`` directory) and converts it into various
formats for people to read. To make sure the documentation you write or
change renders correctly, it is good practice to test it locally.


@@ -6,8 +6,8 @@ We want you to contribute to JupyterHub in ways that are most exciting
& useful to you. We value documentation, testing, bug reporting & code equally,
and are glad to have your contributions in whatever form you wish :)
Our `Code of Conduct <https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md>`_
(`reporting guidelines <https://github.com/jupyter/governance/blob/master/conduct/reporting_online.md>`_)
Our `Code of Conduct <https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md>`_
(`reporting guidelines <https://github.com/jupyter/governance/blob/HEAD/conduct/reporting_online.md>`_)
helps keep our community welcoming to as many people as possible.
.. toctree::


@@ -30,7 +30,7 @@ Please submit pull requests to update information or to add new institutions or
### University of California Davis
- [Spinning up multiple Jupyter Notebooks on AWS for a tutorial](https://github.com/mblmicdiv/course2017/blob/master/exercises/sourmash-setup.md)
- [Spinning up multiple Jupyter Notebooks on AWS for a tutorial](https://github.com/mblmicdiv/course2017/blob/HEAD/exercises/sourmash-setup.md)
Although not technically a JupyterHub deployment, this tutorial setup
may be helpful to others in the Jupyter community.


@@ -22,7 +22,7 @@ Admin users of JupyterHub, `admin_users`, can add and remove users from
the user `allowed_users` set. `admin_users` can take actions on other users'
behalf, such as stopping and restarting their servers.
A set of initial admin users, `admin_users` can configured be as follows:
A set of initial admin users, `admin_users` can be configured as follows:
```python
c.Authenticator.admin_users = {'mal', 'zoe'}
@@ -32,9 +32,9 @@ Users in the admin set are automatically added to the user `allowed_users` set,
if they are not already present.
Each authenticator may have different ways of determining whether a user is an
administrator. By default JupyterHub use the PAMAuthenticator which provide the
`admin_groups` option and can determine administrator status base on a user
groups. For example we can let any users in the `wheel` group be admin:
administrator. By default JupyterHub uses the PAMAuthenticator which provides the
`admin_groups` option and can set administrator status based on a user
group. For example we can let any user in the `wheel` group be admin:
```python
c.PAMAuthenticator.admin_groups = {'wheel'}
@@ -42,9 +42,9 @@ c.PAMAuthenticator.admin_groups = {'wheel'}
## Give admin access to other users' notebook servers (`admin_access`)
Since the default `JupyterHub.admin_access` setting is False, the admins
Since the default `JupyterHub.admin_access` setting is `False`, the admins
do not have permission to log in to the single user notebook servers
owned by _other users_. If `JupyterHub.admin_access` is set to True,
owned by _other users_. If `JupyterHub.admin_access` is set to `True`,
then admins have permission to log in _as other users_ on their
respective machines, for debugging. **As a courtesy, you should make
sure your users know if admin_access is enabled.**
@@ -53,8 +53,8 @@ sure your users know if admin_access is enabled.**
Users can be added to and removed from the Hub via either the admin
panel or the REST API. When a user is **added**, the user will be
automatically added to the allowed users set and database. Restarting the Hub
will not require manually updating the allowed users set in your config file,
automatically added to the `allowed_users` set and database. Restarting the Hub
will not require manually updating the `allowed_users` set in your config file,
as the users will be loaded from the database.
After starting the Hub once, it is not sufficient to **remove** a user
@@ -107,8 +107,8 @@ with any provider, is also available.
## Use DummyAuthenticator for testing
The :class:`~jupyterhub.auth.DummyAuthenticator` is a simple authenticator that
allows for any username/password unless if a global password has been set. If
The `DummyAuthenticator` is a simple authenticator that
allows for any username/password unless a global password has been set. If
set, it will allow for any username as long as the correct password is provided.
To set a global password, add this to the config file:
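The snippet referred to falls outside this hunk; for reference, it is typically along these lines (the password value is a placeholder):

```python
# jupyterhub_config.py -- shared password for DummyAuthenticator (testing only)
c.DummyAuthenticator.password = 'some_password'
```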


@@ -44,7 +44,7 @@ jupyterhub -f /etc/jupyterhub/jupyterhub_config.py
```
The IPython documentation provides additional information on the
[config system](http://ipython.readthedocs.io/en/stable/development/config)
[config system](http://ipython.readthedocs.io/en/stable/development/config.html)
that Jupyter uses.
## Configure using command line options
@@ -63,11 +63,11 @@ would enter:
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
```
All configurable options may technically be set on the command-line,
All configurable options may technically be set on the command line,
though some are inconvenient to type. To set a particular configuration
parameter, `c.Class.trait`, you would use the command line option,
`--Class.trait`, when starting JupyterHub. For example, to configure the
`c.Spawner.notebook_dir` trait from the command-line, use the
`c.Spawner.notebook_dir` trait from the command line, use the
`--Spawner.notebook_dir` option:
```bash
@@ -89,11 +89,11 @@ meant as illustration, are:
## Run the proxy separately
This is _not_ strictly necessary, but useful in many cases. If you
use a custom proxy (e.g. Traefik), this also not needed.
use a custom proxy (e.g. Traefik), this is also not needed.
Connections to user servers go through the proxy, and _not_ the hub
itself. If the proxy stays running when the hub restarts (for
maintenance, re-configuration, etc.), then use connections are not
maintenance, re-configuration, etc.), then user connections are not
interrupted. For simplicity, by default the hub starts the proxy
automatically, so if the hub restarts, the proxy restarts, and user
connections are interrupted. It is easy to run the proxy separately,


@@ -26,7 +26,7 @@ so Breq would open `/user/breq/notebooks/foo.ipynb` and
Seivarden would open `/user/seivarden/notebooks/foo.ipynb`, etc.
JupyterHub has a special URL that does exactly this!
It's called `/hub/user-redirect/...` and after the visitor logs in,
It's called `/hub/user-redirect/...`.
So if you replace `/user/yourname` in your URL bar
with `/hub/user-redirect` any visitor should get the same
URL on their own server, rather than visiting yours.


@@ -11,7 +11,7 @@ Yes! JupyterHub has been used at-scale for large pools of users, as well
as complex and high-performance computing. For example, UC Berkeley uses
JupyterHub for its Data Science Education Program courses (serving over
3,000 students). The Pangeo project uses JupyterHub to provide access
to scalable cloud computing with Dask. JupyterHub is stable customizable
to scalable cloud computing with Dask. JupyterHub is stable and customizable
to the use-cases of large organizations.
### I keep hearing about Jupyter Notebook, JupyterLab, and now JupyterHub. Whats the difference?
@@ -27,14 +27,14 @@ Here is a quick breakdown of these three tools:
for other parts of the data science stack.
- **JupyterHub** is an application that manages interactive computing sessions for **multiple users**.
It also connects them with infrastructure those users wish to access. It can provide
remote access to Jupyter Notebooks and Jupyter Lab for many people.
remote access to Jupyter Notebooks and JupyterLab for many people.
## For management
### Briefly, what problem does JupyterHub solve for us?
JupyterHub provides a shared platform for data science and collaboration.
It allows users to utilize familiar data science workflows (such as the scientific python stack,
It allows users to utilize familiar data science workflows (such as the scientific Python stack,
the R tidyverse, and Jupyter Notebooks) on institutional infrastructure. It also allows administrators
some control over access to resources, security, environments, and authentication.
@@ -55,7 +55,7 @@ industry, and government research labs. It is most-commonly used by two kinds of
- Large teams (e.g., a department, a large class, or a large group of remote users) to provide
access to organizational hardware, data, and analytics environments at scale.
Here are a sample of organizations that use JupyterHub:
Here is a sample of organizations that use JupyterHub:
- **Universities and colleges**: UC Berkeley, UC San Diego, Cal Poly SLO, Harvard University, University of Chicago,
University of Oslo, University of Sheffield, Université Paris Sud, University of Versailles
@@ -99,7 +99,7 @@ that we currently suggest are:
guide that runs on Kubernetes. Better for larger or dynamic user groups (50-10,000) or more complex
compute/data needs.
- [The Littlest JupyterHub](https://tljh.jupyter.org) is a lightweight JupyterHub that runs on a single
single machine (in the cloud or under your desk). Better for smaller usergroups (4-80) or more
single machine (in the cloud or under your desk). Better for smaller user groups (4-80) or more
lightweight computational resources.
### Does JupyterHub run well in the cloud?
@@ -123,7 +123,7 @@ level for several years, and makes a number of "default" security decisions that
users.
- For security considerations in the base JupyterHub application,
[see the JupyterHub security page](https://jupyterhub.readthedocs.io/en/stable/reference/websecurity.html)
[see the JupyterHub security page](https://jupyterhub.readthedocs.io/en/stable/reference/websecurity.html).
- For security considerations when deploying JupyterHub on Kubernetes, see the
[JupyterHub on Kubernetes security page](https://zero-to-jupyterhub.readthedocs.io/en/latest/security.html).
@@ -183,7 +183,7 @@ how those resources are controlled is taken care of by the non-JupyterHub applic
Yes - JupyterHub can provide access to many kinds of computing infrastructure.
Especially when combined with other open-source schedulers such as Dask, you can manage fairly
complex computing infrastructure from the interactive sessions of a JupyterHub. For example
complex computing infrastructures from the interactive sessions of a JupyterHub. For example
[see the Dask HPC page](https://docs.dask.org/en/latest/setup/hpc.html).
### How much resources do user sessions take?
@@ -192,7 +192,7 @@ This is highly configurable by the administrator. If you wish for your users to
data analytics environments for prototyping and light data exploring, you can restrict their
memory and CPU based on the resources that you have available. If you'd like your JupyterHub
to serve as a gateway to high-performance compute or data resources, you may increase the
resources available on user machines, or connect them with computing infrastructure elsewhere.
resources available on user machines, or connect them with computing infrastructures elsewhere.
### Can I customize the look and feel of a JupyterHub?
@@ -217,7 +217,7 @@ your JupyterHub with the various services and tools that you wish to provide to
### How well does JupyterHub scale? What are JupyterHub's limitations?
JupyterHub works well at both a small scale (e.g., a single VM or machine) as well as a
high scale (e.g., a scalable Kubernetes cluster). It can be used for teams as small a 2, and
high scale (e.g., a scalable Kubernetes cluster). It can be used for teams as small as 2, and
for user bases as large as 10,000. The scalability of JupyterHub largely depends on the
infrastructure on which it is deployed. JupyterHub has been designed to be lightweight and
flexible, so you can tailor your JupyterHub deployment to your needs.
@@ -249,7 +249,7 @@ share their results with one another.
JupyterHub also provides a computational framework to share computational narratives between
different levels of an organization. For example, data scientists can share Jupyter Notebooks
rendered as [voila dashboards](https://voila.readthedocs.io/en/stable/) with those who are not
rendered as [Voilà dashboards](https://voila.readthedocs.io/en/stable/) with those who are not
familiar with programming, or create publicly-available interactive analyses to allow others to
interact with your work.


@@ -43,7 +43,7 @@ port.
By default, this REST API listens on port 8001 of `localhost` only.
The Hub service talks to the proxy via a REST API on a secondary port. The
API URL can be configured separately and override the default settings.
API URL can be configured separately to override the default settings.
### Set api_url
@@ -82,13 +82,13 @@ c.JupyterHub.hub_ip = '10.0.1.4'
c.JupyterHub.hub_port = 54321
```
**Added in 0.8:** The `c.JupyterHub.hub_connect_ip` setting is the ip address or
**Added in 0.8:** The `c.JupyterHub.hub_connect_ip` setting is the IP address or
hostname that other services should use to connect to the Hub. A common
configuration for, e.g. docker, is:
```python
c.JupyterHub.hub_ip = '0.0.0.0' # listen on all interfaces
c.JupyterHub.hub_connect_ip = '10.0.1.4' # ip as seen on the docker network. Can also be a hostname.
c.JupyterHub.hub_connect_ip = '10.0.1.4' # IP as seen on the docker network. Can also be a hostname.
```
## Adjusting the hub's URL


@@ -2,7 +2,7 @@
When working with JupyterHub, a **Service** is defined as a process
that interacts with the Hub's REST API. A Service may perform a specific
or action or task. For example, shutting down individuals' single user
action or task. For example, shutting down individuals' single user
notebook servers that have been idle for some time is a good example of
a task that could be automated by a Service. Let's look at how the
[jupyterhub_idle_culler][] script can be used as a Service.
@@ -114,7 +114,7 @@ interact with it.
This will run the idle culler service manually. It can be run as a standalone
script anywhere with access to the Hub, and will periodically check for idle
servers and shut them down via the Hub's REST API. In order to shutdown the
servers, the token given to cull-idle must have admin privileges.
servers, the token given to `cull-idle` must have admin privileges.
Generate an API token and store it in the `JUPYTERHUB_API_TOKEN` environment
variable. Run `jupyterhub_idle_culler` manually.


@@ -1,8 +1,8 @@
# Spawners and single-user notebook servers
Since the single-user server is an instance of `jupyter notebook`, an entire separate
multi-process application, there are many aspect of that server can configure, and a lot of ways
to express that configuration.
multi-process application, there are many aspects of that server that can be configured, and a lot
of ways to express that configuration.
At the JupyterHub level, you can set some values on the Spawner. The simplest of these is
`Spawner.notebook_dir`, which lets you set the root directory for a user's server. This root
@@ -14,7 +14,7 @@ expanded to the user's home directory.
c.Spawner.notebook_dir = '~/notebooks'
```
You can also specify extra command-line arguments to the notebook server with:
You can also specify extra command line arguments to the notebook server with:
```python
c.Spawner.args = ['--debug', '--profile=PHYS131']


@@ -123,8 +123,8 @@ We want you to contribute to JupyterHub in ways that are most exciting
& useful to you. We value documentation, testing, bug reporting & code equally,
and are glad to have your contributions in whatever form you wish :)
Our `Code of Conduct <https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md>`_
(`reporting guidelines <https://github.com/jupyter/governance/blob/master/conduct/reporting_online.md>`_)
Our `Code of Conduct <https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md>`_
(`reporting guidelines <https://github.com/jupyter/governance/blob/HEAD/conduct/reporting_online.md>`_)
helps keep our community welcoming to as many people as possible.
.. toctree::
@@ -155,4 +155,4 @@ Questions? Suggestions?
.. _JupyterHub: https://github.com/jupyterhub/jupyterhub
.. _Jupyter notebook: https://jupyter-notebook.readthedocs.io/en/latest/
.. _REST API: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default
.. _REST API: https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/HEAD/docs/rest-api.yml#!/default


@@ -3,4 +3,4 @@
JupyterHub the hard way
=======================
This guide has moved to https://github.com/jupyterhub/jupyterhub-the-hard-way/blob/master/docs/installation-guide-hard.md
This guide has moved to https://github.com/jupyterhub/jupyterhub-the-hard-way/blob/HEAD/docs/installation-guide-hard.md


@@ -37,7 +37,7 @@ with any provider, is also available.
## The Dummy Authenticator
When testing, it may be helpful to use the
:class:`~jupyterhub.auth.DummyAuthenticator`. This allows for any username and
{class}`jupyterhub.auth.DummyAuthenticator`. This allows for any username and
password unless if a global password has been set. Once set, any username will
still be accepted but the correct password will need to be provided.
@@ -259,7 +259,7 @@ PAM session.
Beginning with version 0.8, JupyterHub is an OAuth provider.
[authenticator]: https://github.com/jupyterhub/jupyterhub/blob/master/jupyterhub/auth.py
[authenticator]: https://github.com/jupyterhub/jupyterhub/blob/HEAD/jupyterhub/auth.py
[pam]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
[oauth]: https://en.wikipedia.org/wiki/OAuth
[github oauth]: https://developer.github.com/v3/oauth/


@@ -158,6 +158,27 @@ provided by notebook servers managed by JupyterHub if one of the following is tr
1. The token is for the same user as the owner of the notebook
2. The token is tied to an admin user or service **and** `c.JupyterHub.admin_access` is set to `True`
## Paginating API requests
Pagination is available through the `offset` and `limit` query parameters on
certain endpoints, which can be used to return ideally sized windows of results.
Here's example code demonstrating pagination on the `GET /users`
endpoint to fetch the first 20 records.
```python
import requests
api_url = 'http://127.0.0.1:8081/hub/api'
r = requests.get(api_url + '/users?offset=0&limit=20')
r.raise_for_status()
r.json()
```
By default, the pagination limit is set by the `JupyterHub.api_page_default_limit` configuration option.
Pagination is enabled on the `GET /users`, `GET /groups`, and `GET /proxy` REST endpoints.
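To retrieve more than one page, a client can advance `offset` by `limit` until a short (or empty) page is returned. A minimal sketch, assuming an API token in `JUPYTERHUB_API_TOKEN` with permission to read users:

```python
# Sketch: walk the full user list one page at a time.
import os

import requests

api_url = 'http://127.0.0.1:8081/hub/api'
headers = {'Authorization': 'token %s' % os.environ['JUPYTERHUB_API_TOKEN']}

users = []
offset, limit = 0, 20
while True:
    r = requests.get(
        api_url + '/users',
        params={'offset': offset, 'limit': limit},
        headers=headers,
    )
    r.raise_for_status()
    page = r.json()
    users.extend(page)
    if len(page) < limit:
        break  # last (possibly empty) page reached
    offset += limit

print('%d users total' % len(users))
```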
## Enabling users to spawn multiple named-servers via the API
With JupyterHub version 0.8, support for multiple servers per user has landed.
@@ -209,7 +230,7 @@ be viewed in a more [interactive style on swagger's petstore][].
Both resources contain the same information and differ only in its display.
Note: The Swagger specification is being renamed the [OpenAPI Initiative][].
[interactive style on swagger's petstore]: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default
[interactive style on swagger's petstore]: https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/HEAD/docs/rest-api.yml#!/default
[openapi initiative]: https://www.openapis.org/
[jupyterhub rest api]: ./rest-api
[jupyter notebook rest api]: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/notebook/master/notebook/services/api/api.yaml
[jupyter notebook rest api]: https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyter/notebook/HEAD/notebook/services/api/api.yaml


@@ -203,8 +203,6 @@ To use HubAuth, you must set the `.api_token`, either programmatically when cons
or via the `JUPYTERHUB_API_TOKEN` environment variable.
Most of the logic for authentication implementation is found in the
[`HubAuth.user_for_cookie`][hubauth.user_for_cookie]
and in the
[`HubAuth.user_for_token`][hubauth.user_for_token]
methods, which makes a request of the Hub, and returns:
@@ -230,7 +228,7 @@ configurable by the `cookie_cache_max_age` setting (default: five minutes).
For example, you have a Flask service that returns information about a user.
JupyterHub's HubAuth class can be used to authenticate requests to the Flask
service. See the `service-whoami-flask` example in the
[JupyterHub GitHub repo](https://github.com/jupyterhub/jupyterhub/tree/master/examples/service-whoami-flask)
[JupyterHub GitHub repo](https://github.com/jupyterhub/jupyterhub/tree/HEAD/examples/service-whoami-flask)
for more details.
```python
@@ -241,11 +239,11 @@ from urllib.parse import quote
from flask import Flask, redirect, request, Response
from jupyterhub.services.auth import HubAuth
from jupyterhub.services.auth import HubOAuth
prefix = os.environ.get('JUPYTERHUB_SERVICE_PREFIX', '/')
auth = HubAuth(
auth = HubOAuth(
api_token=os.environ['JUPYTERHUB_API_TOKEN'],
cache_max_age=60,
)
@@ -257,11 +255,8 @@ def authenticated(f):
"""Decorator for authenticating with the Hub"""
@wraps(f)
def decorated(*args, **kwargs):
cookie = request.cookies.get(auth.cookie_name)
token = request.headers.get(auth.auth_header_name)
if cookie:
user = auth.user_for_cookie(cookie)
elif token:
if token:
user = auth.user_for_token(token)
else:
user = None


@@ -37,14 +37,13 @@ Some examples include:
Information about the user can be retrieved from `self.user`,
an object encapsulating the user's name, authentication, and server info.
The return value of `Spawner.start` should be the (ip, port) of the running server.
**NOTE:** When writing coroutines, _never_ `yield` in between a database change and a commit.
The return value of `Spawner.start` should be the `(ip, port)` of the running server,
or a full URL as a string.
Most `Spawner.start` functions will look similar to this example:
```python
def start(self):
async def start(self):
self.ip = '127.0.0.1'
self.port = random_port()
# get environment variables,
@@ -56,8 +55,10 @@ def start(self):
cmd.extend(self.cmd)
cmd.extend(self.get_args())
yield self._actually_start_server_somehow(cmd, env)
return (self.ip, self.port)
await self._actually_start_server_somehow(cmd, env)
# url may not match self.ip:self.port, but it could!
url = self._get_connectable_url()
return url
```
When `Spawner.start` returns, the single-user server process should actually be running,
@@ -65,6 +66,48 @@ not just requested. JupyterHub can handle `Spawner.start` being very slow
(such as PBS-style batch queues, or instantiating whole AWS instances)
via relaxing the `Spawner.start_timeout` config value.
#### Note on IPs and ports
`Spawner.ip` and `Spawner.port` attributes set the _bind_ url,
which the single-user server should listen on
(passed to the single-user process via the `JUPYTERHUB_SERVICE_URL` environment variable).
The _return_ value is the ip and port (or full url) the Hub should _connect to_.
These are not necessarily the same, and usually won't be in any Spawner that works with remote resources or containers.
The default for Spawner.ip and Spawner.port is `127.0.0.1:{random}`,
which is appropriate for Spawners that launch local processes,
where everything is on localhost and each server needs its own port.
For remote or container Spawners, it will often make sense to use a different value,
such as `ip = '0.0.0.0'` and a fixed port, e.g. `8888`.
The defaults can be changed in the class,
preserving configuration with traitlets:
```python
from traitlets import default
from jupyterhub.spawner import Spawner
class MySpawner(Spawner):
@default("ip")
def _default_ip(self):
return '0.0.0.0'
@default("port")
def _default_port(self):
return 8888
async def start(self):
env = self.get_env()
cmd = []
# get jupyterhub command to run,
# typically ['jupyterhub-singleuser']
cmd.extend(self.cmd)
cmd.extend(self.get_args())
remote_server_info = await self._actually_start_server_somehow(cmd, env)
url = self.get_public_url_from(remote_server_info)
return url
```
### Spawner.poll
`Spawner.poll` should check if the spawner is still running.
@@ -125,7 +168,7 @@ If the `Spawner.options_form` is defined, when a user tries to start their serve
If `Spawner.options_form` is undefined, the user's server is spawned directly, and no spawn page is rendered.
See [this example](https://github.com/jupyterhub/jupyterhub/blob/master/examples/spawn-form/jupyterhub_config.py) for a form that allows custom CLI args for the local spawner.
See [this example](https://github.com/jupyterhub/jupyterhub/blob/HEAD/examples/spawn-form/jupyterhub_config.py) for a form that allows custom CLI args for the local spawner.
### `Spawner.options_from_form`
@@ -166,7 +209,7 @@ which would return:
When `Spawner.start` is called, this dictionary is accessible as `self.user_options`.
[spawner]: https://github.com/jupyterhub/jupyterhub/blob/master/jupyterhub/spawner.py
[spawner]: https://github.com/jupyterhub/jupyterhub/blob/HEAD/jupyterhub/spawner.py
## Writing a custom spawner
@@ -207,6 +250,73 @@ Additionally, configurable attributes for your spawner will
appear in jupyterhub help output and auto-generated configuration files
via `jupyterhub --generate-config`.
## Environment variables and command-line arguments
Spawners mainly do one thing: launch a command in an environment.
The command-line is constructed from user configuration:
- Spawner.cmd (default: `['jupyterhub-singleuser']`)
- Spawner.args (cli args to pass to the cmd, default: empty)
where the configuration:
```python
c.Spawner.cmd = ["my-singleuser-wrapper"]
c.Spawner.args = ["--debug", "--flag"]
```
would result in spawning the command:
```bash
my-singleuser-wrapper --debug --flag
```
The `Spawner.get_args()` method is how Spawner.args is accessed,
and can be used by Spawners to customize/extend user-provided arguments.
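For instance (a sketch, not from this changeset; the appended flag is made up), a custom Spawner might extend the user-provided arguments like this:

```python
from jupyterhub.spawner import LocalProcessSpawner


class MySpawner(LocalProcessSpawner):
    def get_args(self):
        # start from the configured Spawner.args, then append our own flag
        args = super().get_args()
        args.append('--MyExtension.enabled=True')  # hypothetical flag
        return args
```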
Prior to 2.0, JupyterHub unconditionally added certain options _if specified_ to the command-line,
such as `--ip={Spawner.ip}` and `--port={Spawner.port}`.
These have now all been moved to environment variables,
and from JupyterHub 2.0,
the command-line launched by JupyterHub is fully specified by overridable configuration `Spawner.cmd + Spawner.args`.
Most process configuration is passed via environment variables.
Additional variables can be specified via the `Spawner.environment` configuration.
The process environment is returned by `Spawner.get_env`, which specifies the following environment variables:
- JUPYTERHUB_SERVICE_URL - the _bind_ url where the server should launch its http server (`http://127.0.0.1:12345`).
This includes Spawner.ip and Spawner.port; _new in 2.0, prior to 2.0 ip,port were on the command-line and only if specified_
- JUPYTERHUB_SERVICE_PREFIX - the URL prefix the service will run on (e.g. `/user/name/`)
- JUPYTERHUB_USER - the JupyterHub user's username
- JUPYTERHUB_SERVER_NAME - the server's name, if using named servers (default server has an empty name)
- JUPYTERHUB_API_URL - the full url for the JupyterHub API (http://127.0.0.1:8081/hub/api)
- JUPYTERHUB_BASE_URL - the base url of the whole jupyterhub deployment, i.e. the bit before `hub/` or `user/`,
as set by c.JupyterHub.base_url (default: `/`)
- JUPYTERHUB_API_TOKEN - the API token the server can use to make requests to the Hub.
This is also the OAuth client secret.
- JUPYTERHUB_CLIENT_ID - the OAuth client ID for authenticating visitors.
- JUPYTERHUB_OAUTH_CALLBACK_URL - the callback URL to use in oauth, typically `/user/:name/oauth_callback`
Optional environment variables, depending on configuration:
- JUPYTERHUB_SSL_[KEYFILE|CERTFILE|CLIENT_CA] - SSL configuration, when internal_ssl is enabled
- JUPYTERHUB_ROOT_DIR - the root directory of the server (notebook directory), when Spawner.notebook_dir is defined (new in 2.0)
- JUPYTERHUB_DEFAULT_URL - the default URL for the server (for redirects from /user/:name/),
if Spawner.default_url is defined
(new in 2.0, previously passed via cli)
- JUPYTERHUB_DEBUG=1 - generic debug flag, sets maximum log level when Spawner.debug is True
(new in 2.0, previously passed via cli)
- JUPYTERHUB_DISABLE_USER_CONFIG=1 - disable loading user config,
set when Spawner.disable_user_config is True (new in 2.0,
previously passed via cli)
- JUPYTERHUB_[MEM|CPU]_[LIMIT|GUARANTEE] - the values of cpu and memory limits and guarantees.
These are not expected to be enforced by the process,
but are made available as a hint,
e.g. for resource monitoring extensions.
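As a rough illustration of how these variables reach the single-user process (a sketch; `MY_ORG_DATA_DIR` is a made-up name used only for this example):

```python
# In jupyterhub_config.py, extra variables ride along with the JUPYTERHUB_* set:
#
#     c.Spawner.environment = {'MY_ORG_DATA_DIR': '/srv/data'}
#
# Inside the spawned single-user process they are all ordinary env vars:
import os

bind_url = os.environ['JUPYTERHUB_SERVICE_URL']    # bind url chosen by the Spawner
prefix = os.environ['JUPYTERHUB_SERVICE_PREFIX']   # e.g. /user/name/
data_dir = os.environ.get('MY_ORG_DATA_DIR')       # custom, from Spawner.environment
```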
## Spawners, resource limits, and guarantees (Optional)
Some spawners of the single-user notebook servers allow setting limits or


@@ -10,7 +10,7 @@ appearance.
JupyterHub will look for custom templates in all of the paths in the
`JupyterHub.template_paths` configuration option, falling back on the
[default templates](https://github.com/jupyterhub/jupyterhub/tree/master/share/jupyterhub/templates)
[default templates](https://github.com/jupyterhub/jupyterhub/tree/HEAD/share/jupyterhub/templates)
if no custom template with that name is found. This fallback
behavior is new in version 0.9; previous versions searched only those paths
explicitly included in `template_paths`. You may override as many
@@ -21,7 +21,7 @@ or as few templates as you desire.
Jinja provides a mechanism to [extend templates](http://jinja.pocoo.org/docs/2.10/templates/#template-inheritance).
A base template can define a `block`, and child templates can replace or
supplement the material in the block. The
[JupyterHub templates](https://github.com/jupyterhub/jupyterhub/tree/master/share/jupyterhub/templates)
[JupyterHub templates](https://github.com/jupyterhub/jupyterhub/tree/HEAD/share/jupyterhub/templates)
make extensive use of blocks, which allows you to customize parts of the
interface easily.


@@ -234,7 +234,7 @@ With a docker container, pass in the environment variable with the run command:
-e JUPYTERHUB_API_TOKEN=my_secret_token \
jupyter/datascience-notebook:latest
[This example](https://github.com/jupyterhub/jupyterhub/tree/master/examples/service-notebook/external) demonstrates how to combine the use of the `jupyterhub-singleuser` environment variables when launching a Notebook as an externally managed service.
[This example](https://github.com/jupyterhub/jupyterhub/tree/HEAD/examples/service-notebook/external) demonstrates how to combine the use of the `jupyterhub-singleuser` environment variables when launching a Notebook as an externally managed service.
## How do I...?


@@ -148,9 +148,9 @@ else
echo "...initial content loading for user ..."
mkdir $USER_DIRECTORY/tutorials
cd $USER_DIRECTORY/tutorials
wget https://github.com/jakevdp/PythonDataScienceHandbook/archive/master.zip
unzip -o master.zip
rm master.zip
wget https://github.com/jakevdp/PythonDataScienceHandbook/archive/HEAD.zip
unzip -o HEAD.zip
rm HEAD.zip
fi
exit 0


@@ -40,9 +40,9 @@ else
echo "...initial content loading for user ..."
mkdir $USER_DIRECTORY/tutorials
cd $USER_DIRECTORY/tutorials
wget https://github.com/jakevdp/PythonDataScienceHandbook/archive/master.zip
unzip -o master.zip
rm master.zip
wget https://github.com/jakevdp/PythonDataScienceHandbook/archive/HEAD.zip
unzip -o HEAD.zip
rm HEAD.zip
fi
exit 0


@@ -13,7 +13,7 @@ if not api_token:
c.JupyterHub.services = [
{
'name': 'external-oauth',
'oauth_client_id': "whoami-oauth-client-test",
'oauth_client_id': "service-oauth-client-test",
'api_token': api_token,
'oauth_redirect_uri': 'http://127.0.0.1:5555/oauth_callback',
}


@@ -9,7 +9,7 @@ if [[ -z "${JUPYTERHUB_API_TOKEN}" ]]; then
fi
# 2. oauth client ID
export JUPYTERHUB_CLIENT_ID='whoami-oauth-client-test'
export JUPYTERHUB_CLIENT_ID='service-oauth-client-test'
# 3. where the Hub is
export JUPYTERHUB_URL='http://127.0.0.1:8000'


@@ -9,7 +9,7 @@ if [[ -z "${JUPYTERHUB_API_TOKEN}" ]]; then
fi
# 2. oauth client ID
export JUPYTERHUB_CLIENT_ID="whoami-oauth-client-test"
export JUPYTERHUB_CLIENT_ID="service-oauth-client-test"
# 3. what URL to run on
export JUPYTERHUB_SERVICE_PREFIX='/'
export JUPYTERHUB_SERVICE_URL='http://127.0.0.1:5555'


@@ -1,6 +1,6 @@
# Authenticating a flask service with JupyterHub
Uses `jupyterhub.services.HubAuth` to authenticate requests with the Hub in a [flask][] application.
Uses `jupyterhub.services.HubOAuth` to authenticate requests with the Hub in a [flask][] application.
## Run
@@ -8,7 +8,7 @@ Uses `jupyterhub.services.HubAuth` to authenticate requests with the Hub in a [f
jupyterhub --ip=127.0.0.1
2. Visit http://127.0.0.1:8000/services/whoami/ or http://127.0.0.1:8000/services/whoami-oauth/
2. Visit http://127.0.0.1:8000/services/whoami/
After logging in with your local-system credentials, you should see a JSON dump of your user info:


@@ -5,10 +5,12 @@ c.JupyterHub.services = [
'command': ['flask', 'run', '--port=10101'],
'environment': {'FLASK_APP': 'whoami-flask.py'},
},
{
'name': 'whoami-oauth',
'url': 'http://127.0.0.1:10201',
'command': ['flask', 'run', '--port=10201'],
'environment': {'FLASK_APP': 'whoami-oauth.py'},
},
]
# dummy auth and simple spawner for testing
# any username and password will work
c.JupyterHub.spawner_class = 'simple'
c.JupyterHub.authenticator_class = 'dummy'
# listen only on localhost while testing with wide-open auth
c.JupyterHub.ip = '127.0.0.1'


@@ -4,42 +4,48 @@ whoami service authentication with the Hub
"""
import json
import os
import secrets
from functools import wraps
from urllib.parse import quote
from flask import Flask
from flask import make_response
from flask import redirect
from flask import request
from flask import Response
from flask import session
from jupyterhub.services.auth import HubAuth
from jupyterhub.services.auth import HubOAuth
prefix = os.environ.get('JUPYTERHUB_SERVICE_PREFIX', '/')
auth = HubAuth(api_token=os.environ['JUPYTERHUB_API_TOKEN'], cache_max_age=60)
auth = HubOAuth(api_token=os.environ['JUPYTERHUB_API_TOKEN'], cache_max_age=60)
app = Flask(__name__)
# encryption key for session cookies
app.secret_key = secrets.token_bytes(32)
def authenticated(f):
"""Decorator for authenticating with the Hub"""
"""Decorator for authenticating with the Hub via OAuth"""
@wraps(f)
def decorated(*args, **kwargs):
cookie = request.cookies.get(auth.cookie_name)
token = request.headers.get(auth.auth_header_name)
if cookie:
user = auth.user_for_cookie(cookie)
elif token:
token = session.get("token")
if token:
user = auth.user_for_token(token)
else:
user = None
if user:
return f(user, *args, **kwargs)
else:
# redirect to login url on failed auth
return redirect(auth.login_url + '?next=%s' % quote(request.path))
state = auth.generate_state(next_url=request.path)
response = make_response(redirect(auth.login_url + '&state=%s' % state))
response.set_cookie(auth.state_cookie_name, state)
return response
return decorated
@@ -50,3 +56,24 @@ def whoami(user):
return Response(
json.dumps(user, indent=1, sort_keys=True), mimetype='application/json'
)
@app.route(prefix + 'oauth_callback')
def oauth_callback():
code = request.args.get('code', None)
if code is None:
return 403
# validate state field
arg_state = request.args.get('state', None)
cookie_state = request.cookies.get(auth.state_cookie_name)
if arg_state is None or arg_state != cookie_state:
# state doesn't match
return 403
token = auth.token_for_code(code)
# store token in session cookie
session["token"] = token
next_url = auth.get_next_url(cookie_state) or prefix
response = make_response(redirect(next_url))
return response


@@ -1,72 +0,0 @@
#!/usr/bin/env python3
"""
whoami service authentication with the Hub
"""
import json
import os
from functools import wraps
from flask import Flask
from flask import make_response
from flask import redirect
from flask import request
from flask import Response
from jupyterhub.services.auth import HubOAuth
prefix = os.environ.get('JUPYTERHUB_SERVICE_PREFIX', '/')
auth = HubOAuth(api_token=os.environ['JUPYTERHUB_API_TOKEN'], cache_max_age=60)
app = Flask(__name__)
def authenticated(f):
"""Decorator for authenticating with the Hub via OAuth"""
@wraps(f)
def decorated(*args, **kwargs):
token = request.cookies.get(auth.cookie_name)
if token:
user = auth.user_for_token(token)
else:
user = None
if user:
return f(user, *args, **kwargs)
else:
# redirect to login url on failed auth
state = auth.generate_state(next_url=request.path)
response = make_response(redirect(auth.login_url + '&state=%s' % state))
response.set_cookie(auth.state_cookie_name, state)
return response
return decorated
@app.route(prefix)
@authenticated
def whoami(user):
return Response(
json.dumps(user, indent=1, sort_keys=True), mimetype='application/json'
)
@app.route(prefix + 'oauth_callback')
def oauth_callback():
code = request.args.get('code', None)
if code is None:
return 403
# validate state field
arg_state = request.args.get('state', None)
cookie_state = request.cookies.get(auth.state_cookie_name)
if arg_state is None or arg_state != cookie_state:
# state doesn't match
return 403
token = auth.token_for_code(code)
next_url = auth.get_next_url(cookie_state) or prefix
response = make_response(redirect(next_url))
response.set_cookie(auth.cookie_name, token)
return response


@@ -14,7 +14,6 @@ from tornado import web
from .. import orm
from .. import scopes
from ..user import User
from ..utils import token_authenticated
from .base import APIHandler
from .base import BaseHandler
@@ -24,7 +23,7 @@ class TokenAPIHandler(APIHandler):
@token_authenticated
def get(self, token):
# FIXME: deprecate this API for oauth token resolution, in favor of using /api/user
# TODO: require specific scope for this deprecated API, applied to oauth client secrets only?
# TODO: require specific scope for this deprecated API, applied to service tokens only?
self.log.warning(
"/authorizations/token/:token endpoint is deprecated in JupyterHub 2.0. Use /api/user"
)
@@ -55,53 +54,20 @@ class TokenAPIHandler(APIHandler):
self.write(json.dumps(model))
async def post(self):
warn_msg = (
"Using deprecated token creation endpoint %s."
" Use /hub/api/users/:user/tokens instead."
) % self.request.uri
self.log.warning(warn_msg)
requester = user = self.current_user
if user is None:
# allow requesting a token with username and password
# for authenticators where that's possible
data = self.get_json_body()
try:
requester = user = await self.login_user(data)
except Exception as e:
self.log.error("Failure trying to authenticate with form data: %s" % e)
user = None
if user is None:
raise web.HTTPError(403)
else:
data = self.get_json_body()
# admin users can request tokens for other users
if data and data.get('username'):
user = self.find_user(data['username'])
if user is not requester and not requester.admin:
raise web.HTTPError(
403, "Only admins can request tokens for other users."
)
if requester.admin and user is None:
raise web.HTTPError(400, "No such user '%s'" % data['username'])
note = (data or {}).get('note')
if not note:
note = "Requested via deprecated api"
if requester is not user:
kind = 'user' if isinstance(user, User) else 'service'
note += " by %s %s" % (kind, requester.name)
api_token = user.new_api_token(note=note)
self.write(
json.dumps(
{'token': api_token, 'warning': warn_msg, 'user': self.user_model(user)}
)
404,
"Deprecated endpoint /hub/api/authorizations/token is removed in JupyterHub 2.0."
" Use /hub/api/users/:user/tokens instead.",
)
class CookieAPIHandler(APIHandler):
@token_authenticated
def get(self, cookie_name, cookie_value=None):
self.log.warning(
"/authorizations/cookie endpoint is deprecated in JupyterHub 2.0. Use /api/user with OAuth tokens."
)
cookie_name = quote(cookie_name, safe='')
if cookie_value is None:
self.log.warning(


@@ -368,6 +368,22 @@ class APIHandler(BaseHandler):
400, ("group names must be str, not %r", type(groupname))
)
def get_api_pagination(self):
default_limit = self.settings["app"].api_page_default_limit
max_limit = self.settings["app"].api_page_max_limit
offset = self.get_argument("offset", None)
limit = self.get_argument("limit", default_limit)
try:
offset = abs(int(offset)) if offset is not None else 0
limit = abs(int(limit))
if limit > max_limit:
limit = max_limit
except Exception as e:
raise web.HTTPError(
400, "Invalid argument type, offset and limit must be integers"
)
return offset, limit
def options(self, *args, **kwargs):
self.finish()


@@ -37,9 +37,11 @@ class GroupListAPIHandler(_GroupAPIHandler):
@needs_scope('read:groups')
def get(self):
"""List groups"""
groups = self.db.query(orm.Group)
query = self.db.query(orm.Group)
offset, limit = self.get_api_pagination()
query = query.offset(offset).limit(limit)
scope_filter = self.get_scope_filter('read:groups')
data = [self.group_model(g) for g in groups if scope_filter(g, kind='group')]
data = [self.group_model(g) for g in query if scope_filter(g, kind='group')]
self.write(json.dumps(data))
@needs_scope('admin:groups')


@@ -17,7 +17,16 @@ class ProxyAPIHandler(APIHandler):
This is the same as fetching the routing table directly from the proxy,
but without clients needing to maintain separate
"""
offset, limit = self.get_api_pagination()
routes = await self.proxy.get_all_routes()
routes = {
key: routes[key]
for key in list(routes.keys())[offset:limit]
if key in routes
}
self.write(json.dumps(routes))
@needs_scope('proxy')


@@ -44,6 +44,11 @@ class SelfAPIHandler(APIHandler):
self.raw_scopes.update(scopes.identify_scopes(user.orm_user))
self.parsed_scopes = scopes.parse_scopes(self.raw_scopes)
model = self.user_model(user)
# validate return, should have at least kind and name,
# otherwise our filters did something wrong
for key in ("kind", "name"):
if key not in model:
raise ValueError(f"Missing identify model for {user}: {model}")
self.write(json.dumps(model))
@@ -66,6 +71,7 @@ class UserListAPIHandler(APIHandler):
)
def get(self):
state_filter = self.get_argument("state", None)
offset, limit = self.get_api_pagination()
# post_filter
post_filter = None
@@ -105,12 +111,16 @@ class UserListAPIHandler(APIHandler):
else:
# no filter, return all users
query = self.db.query(orm.User)
query = query.offset(offset).limit(limit)
data = []
for u in query:
if post_filter is None or post_filter(u):
user_model = self.user_model(u)
if user_model:
data.append(user_model)
self.write(json.dumps(data))
@needs_scope('admin:users')
@@ -239,11 +249,7 @@ class UserAPIHandler(APIHandler):
await maybe_future(self.authenticator.delete_user(user))
# allow the spawner to cleanup any persistent resources associated with the user
try:
await user.spawner.delete_forever()
except Exception as e:
self.log.error("Error cleaning up persistent resources: %s" % e)
await user.delete_spawners()
# remove from registry
self.users.delete(user)
@@ -485,10 +491,18 @@ class UserServerAPIHandler(APIHandler):
options = self.get_json_body()
remove = (options or {}).get('remove', False)
def _remove_spawner(f=None):
if f and f.exception():
return
async def _remove_spawner(f=None):
"""Remove the spawner object
only called after it stops successfully
"""
if f:
# await f, stop on error,
# leaving resources in the db in case of failure to stop
await f
self.log.info("Deleting spawner %s", spawner._log_name)
await maybe_future(user._delete_spawner(spawner))
self.db.delete(spawner.orm_spawner)
user.spawners.pop(server_name, None)
self.db.commit()
@@ -509,7 +523,8 @@ class UserServerAPIHandler(APIHandler):
self.set_header('Content-Type', 'text/plain')
self.set_status(202)
if remove:
spawner._stop_future.add_done_callback(_remove_spawner)
# schedule remove when stop completes
asyncio.ensure_future(_remove_spawner(spawner._stop_future))
return
if spawner.pending:
@@ -527,9 +542,10 @@ class UserServerAPIHandler(APIHandler):
if remove:
if stop_future:
stop_future.add_done_callback(_remove_spawner)
# schedule remove when stop completes
asyncio.ensure_future(_remove_spawner(spawner._stop_future))
else:
_remove_spawner()
await _remove_spawner()
status = 202 if spawner._stop_pending else 204
self.set_header('Content-Type', 'text/plain')


@@ -1021,6 +1021,15 @@ class JupyterHub(Application):
""",
).tag(config=True)
api_page_default_limit = Integer(
50,
help="The default amount of records returned by a paginated endpoint",
).tag(config=True)
api_page_max_limit = Integer(
200, help="The maximum amount of records that can be returned at once"
)
authenticate_prometheus = Bool(
True, help="Authentication for prometheus metrics"
).tag(config=True)
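For illustration (not part of this changeset), the new default page size could be adjusted in `jupyterhub_config.py`; note that `api_page_max_limit` is not tagged as configurable in this diff:

```python
# Sketch: return smaller pages from the paginated list endpoints by default.
c.JupyterHub.api_page_default_limit = 25
```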
@@ -2386,10 +2395,6 @@ class JupyterHub(Application):
for user in self.users.values():
for spawner in user.spawners.values():
oauth_client_ids.add(spawner.oauth_client_id)
# avoid deleting clients created by 0.8
# 0.9 uses `jupyterhub-user-...` for the client id, while
# 0.8 uses just `user-...`
oauth_client_ids.add(spawner.oauth_client_id.split('-', 1)[1])
for i, oauth_client in enumerate(self.db.query(orm.OAuthClient)):
if oauth_client.identifier not in oauth_client_ids:


@@ -3,6 +3,7 @@
# Distributed under the terms of the Modified BSD License.
import enum
import json
import warnings
from base64 import decodebytes
from base64 import encodebytes
from datetime import datetime
@@ -674,18 +675,29 @@ class APIToken(Hashed, Base):
orm_token.service = service
if expires_in is not None:
orm_token.expires_at = cls.now() + timedelta(seconds=expires_in)
db.add(orm_token)
# load default roles if they haven't been initiated
# correct to have this here? otherwise some tests fail
token_role = Role.find(db, 'token')
if not token_role:
# FIXME: remove this.
# Creating a token before the db has roles defined should raise an error.
# PR #3460 should let us fix it by ensuring default roles are defined
warnings.warn(
"Token created before default roles!", RuntimeWarning, stacklevel=2
)
default_roles = get_default_roles()
for role in default_roles:
create_role(db, role)
try:
if roles is not None:
update_roles(db, entity=orm_token, roles=roles)
else:
assign_default_roles(db, entity=orm_token)
except Exception:
db.delete(orm_token)
db.commit()
raise
db.commit()
return token


@@ -24,6 +24,7 @@ import time
from functools import wraps
from subprocess import Popen
from urllib.parse import quote
from weakref import WeakKeyDictionary
from tornado.httpclient import AsyncHTTPClient
from tornado.httpclient import HTTPError
@@ -44,7 +45,6 @@ from .metrics import CHECK_ROUTES_DURATION_SECONDS
from .metrics import PROXY_POLL_DURATION_SECONDS
from .objects import Server
from .utils import exponential_backoff
from .utils import make_ssl_context
from .utils import url_path_join
from jupyterhub.traitlets import Command
@@ -55,11 +55,18 @@ def _one_at_a_time(method):
If multiple concurrent calls to this method are made,
queue them instead of allowing them to be concurrently outstanding.
"""
method._lock = asyncio.Lock()
# use weak dict for locks
# so that the lock is always acquired within the current asyncio loop
# should only be relevant in testing, where eventloops are created and destroyed often
method._locks = WeakKeyDictionary()
@wraps(method)
async def locked_method(*args, **kwargs):
async with method._lock:
loop = asyncio.get_event_loop()
lock = method._locks.get(loop, None)
if lock is None:
lock = method._locks[loop] = asyncio.Lock()
async with lock:
return await method(*args, **kwargs)
return locked_method


@@ -437,11 +437,11 @@ def assign_default_roles(db, entity):
"""Assigns the default roles to an entity:
users and services get 'user' role, or admin role if they have admin flag
Tokens get 'token' role"""
default_token_role = orm.Role.find(db, 'token')
if isinstance(entity, orm.Group):
pass
elif isinstance(entity, orm.APIToken):
app_log.debug('Assigning default roles to tokens')
default_token_role = orm.Role.find(db, 'token')
if not entity.roles and (entity.user or entity.service) is not None:
default_token_role.tokens.append(entity)
app_log.info('Added role %s to token %s', default_token_role.name, entity)


@@ -1,5 +1,7 @@
"""General scope definitions and utilities"""
import functools
import inspect
import warnings
from enum import Enum
from tornado import web
@@ -13,60 +15,85 @@ class Scope(Enum):
ALL = True
def _intersect_scopes(token_scopes, owner_scopes):
"""Compares the permissions of token and its owner including horizontal filters
Returns the intersection of the two sets of scopes
def _intersect_scopes(scopes_a, scopes_b):
"""Intersect two sets of scopes
Compares the permissions of two sets of scopes,
including horizontal filters,
and returns the intersected scopes.
Note: Intersects correctly with ALL and exact filter matches
(i.e. users!user=x & read:users:name -> read:users:name!user=x)
Does not currently intersect with containing filters
(i.e. users!group=x & users!user=y even if user y is in group x,
same for users:servers!user=x & users:servers!server=y)
(i.e. users!group=x & users!user=y even if user y is in group x)
"""
owner_parsed_scopes = parse_scopes(owner_scopes)
token_parsed_scopes = parse_scopes(token_scopes)
parsed_scopes_a = parse_scopes(scopes_a)
parsed_scopes_b = parse_scopes(scopes_b)
common_bases = owner_parsed_scopes.keys() & token_parsed_scopes.keys()
common_bases = parsed_scopes_a.keys() & parsed_scopes_b.keys()
common_filters = {}
warn = False
warned = False
for base in common_bases:
if owner_parsed_scopes[base] == Scope.ALL:
common_filters[base] = token_parsed_scopes[base]
elif token_parsed_scopes[base] == Scope.ALL:
common_filters[base] = owner_parsed_scopes[base]
filters_a = parsed_scopes_a[base]
filters_b = parsed_scopes_b[base]
if filters_a == Scope.ALL:
common_filters[base] = filters_b
elif filters_b == Scope.ALL:
common_filters[base] = filters_a
else:
common_entities = (
owner_parsed_scopes[base].keys() & token_parsed_scopes[base].keys()
# warn *if* there are non-overlapping user= and group= filters
common_entities = filters_a.keys() & filters_b.keys()
all_entities = filters_a.keys() | filters_b.keys()
if (
not warned
and 'group' in all_entities
and ('user' in all_entities or 'server' in all_entities)
):
# this could resolve incorrectly if there's a user or server only on one side and a group only on the other
# check both directions: A has group X not in B group list AND B has user Y not in A user list
for a, b in [(filters_a, filters_b), (filters_b, filters_a)]:
for b_key in ('user', 'server'):
if (
not warned
and "group" in a
and b_key in b
and set(a["group"]).difference(b.get("group", []))
and set(b[b_key]).difference(a.get(b_key, []))
):
warnings.warn(
f"{base}[!{b_key}={b[b_key]}, !group={a['group']}] combinations of filters present,"
" intersection between not considered. May result in lower than intended permissions.",
UserWarning,
)
all_entities = (
owner_parsed_scopes[base].keys() | token_parsed_scopes[base].keys()
)
if 'user' in all_entities and ('group' or 'server' in all_entities):
warn = True
warned = True
common_filters[base] = {
entity: set(owner_parsed_scopes[base][entity])
& set(token_parsed_scopes[base][entity])
entity: set(parsed_scopes_a[base][entity])
& set(parsed_scopes_b[base][entity])
for entity in common_entities
}
if warn:
app_log.warning(
"[!user=, !group=] or [!user=, !server=] combinations of filters present, intersection between not considered. May result in lower than intended permissions."
)
if 'server' in all_entities and 'user' in all_entities:
if filters_a.get('server') == filters_b.get('server'):
continue
scopes = set()
for base in common_filters:
if common_filters[base] == Scope.ALL:
scopes.add(base)
else:
for entity, names_list in common_filters[base].items():
for name in names_list:
scopes.add(f'{base}!{entity}={name}')
additional_servers = set()
# resolve user/server hierarchy in both directions
for a, b in [(filters_a, filters_b), (filters_b, filters_a)]:
if 'server' in a and 'user' in b:
for server in a['server']:
username, _, servername = server.partition("/")
if username in b['user']:
additional_servers.add(server)
return scopes
if additional_servers:
if "server" not in common_filters[base]:
common_filters[base]["server"] = set()
common_filters[base]["server"].update(additional_servers)
return unparse_scopes(common_filters)
def get_scopes_for(orm_object):
@@ -176,7 +203,7 @@ def parse_scopes(scope_list):
"""
Parses scopes and filters in something akin to JSON style
For instance, scope list ["users", "groups!group=foo", "users:servers!server=bar", "users:servers!server=baz"]
For instance, scope list ["users", "groups!group=foo", "users:servers!server=user/bar", "users:servers!server=user/baz"]
would lead to scope model
{
"users":scope.ALL,
@@ -187,8 +214,8 @@ def parse_scopes(scope_list):
},
"users:servers":{
"server":[
"bar",
"baz"
"user/bar",
"user/baz"
]
}
}
@@ -208,6 +235,19 @@ def parse_scopes(scope_list):
return parsed_scopes
def unparse_scopes(parsed_scopes):
"""Turn a parsed_scopes dictionary back into a scopes set"""
scopes = set()
for base, filters in parsed_scopes.items():
if filters == Scope.ALL:
scopes.add(base)
else:
for entity, names_list in filters.items():
for name in names_list:
scopes.add(f'{base}!{entity}={name}')
return scopes
def needs_scope(*scopes):
"""Decorator to restrict access to users or services with the required scope"""
@@ -269,6 +309,9 @@ def identify_scopes(obj):
for field in {"name", "roles", "groups"}
}
elif isinstance(obj, orm.Service):
# FIXME: need sub-scopes for services
# until then, we have just one service scope:
return {f"read:services!service={obj.name}"}
return {
f"read:services:{field}!service={obj.name}" for field in {"name", "roles"}
}
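
A small sketch of how these helpers compose; the import path is assumed and `_intersect_scopes` is private. The expected results follow the tests added later in this diff:

from jupyterhub.scopes import _intersect_scopes, parse_scopes, unparse_scopes

# a server filter intersects with its owning user's filter
assert _intersect_scopes(
    {"read:users:servers!user=abc"},
    {"read:users:servers!server=abc/xyz"},
) == {"read:users:servers!server=abc/xyz"}

# parse_scopes / unparse_scopes round-trip a scope set
scopes = {"users", "groups!group=foo", "users:servers!server=user/bar"}
assert unparse_scopes(parse_scopes(scopes)) == scopes
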

View File

@@ -1,7 +1,7 @@
"""Authenticating services with JupyterHub.
Cookies are sent to the Hub for verification. The Hub replies with a JSON
model describing the authenticated user.
Tokens are sent to the Hub for verification.
The Hub replies with a JSON model describing the authenticated user.
``HubAuth`` can be used in any application, even outside tornado.
@@ -10,6 +10,7 @@ authenticate with the Hub.
"""
import base64
import hashlib
import json
import os
import random
@@ -20,7 +21,6 @@ import time
import uuid
import warnings
from unittest import mock
from urllib.parse import quote
from urllib.parse import urlencode
import requests
@@ -113,9 +113,15 @@ class HubAuth(SingletonConfigurable):
This can be used by any application.
Use this base class only for direct, token-authenticated applications
(web APIs).
For applications that support direct visits from browsers,
use HubOAuth to enable OAuth redirect-based authentication.
If using tornado, use via :class:`HubAuthenticated` mixin.
If using manually, use the ``.user_for_cookie(cookie_value)`` method
to identify the user corresponding to a given cookie value.
If using manually, use the ``.user_for_token(token_value)`` method
to identify the user owning a given token.
The following config must be set:
@@ -129,9 +135,6 @@ class HubAuth(SingletonConfigurable):
- cookie_cache_max_age: the number of seconds responses
from the Hub should be cached.
- login_url (the *public* ``/hub/login`` URL of the Hub).
- cookie_name: the name of the cookie I should be using,
if different from the default (unlikely).
"""
hub_host = Unicode(
@@ -239,10 +242,6 @@ class HubAuth(SingletonConfigurable):
""",
).tag(config=True)
cookie_name = Unicode(
'jupyterhub-services', help="""The name of the cookie I should be looking for"""
).tag(config=True)
cookie_options = Dict(
help="""Additional options to pass when setting cookies.
@@ -286,12 +285,12 @@ class HubAuth(SingletonConfigurable):
def _default_cache(self):
return _ExpiringDict(self.cache_max_age)
def _check_hub_authorization(self, url, cache_key=None, use_cache=True):
def _check_hub_authorization(self, url, api_token, cache_key=None, use_cache=True):
"""Identify a user with the Hub
Args:
url (str): The API URL to check the Hub for authorization
(e.g. http://127.0.0.1:8081/hub/api/authorizations/token/abc-def)
(e.g. http://127.0.0.1:8081/hub/api/user)
cache_key (str): The key for checking the cache
use_cache (bool): Specify use_cache=False to skip cached cookie values (default: True)
@@ -309,7 +308,12 @@ class HubAuth(SingletonConfigurable):
except KeyError:
app_log.debug("HubAuth cache miss: %s", cache_key)
data = self._api_request('GET', url, allow_404=True)
data = self._api_request(
'GET',
url,
headers={"Authorization": "token " + api_token},
allow_403=True,
)
if data is None:
app_log.warning("No Hub user identified for request")
else:
@@ -321,7 +325,7 @@ class HubAuth(SingletonConfigurable):
def _api_request(self, method, url, **kwargs):
"""Make an API request"""
allow_404 = kwargs.pop('allow_404', False)
allow_403 = kwargs.pop('allow_403', False)
headers = kwargs.setdefault('headers', {})
headers.setdefault('Authorization', 'token %s' % self.api_token)
if "cert" not in kwargs and self.certfile and self.keyfile:
@@ -345,7 +349,7 @@ class HubAuth(SingletonConfigurable):
raise HTTPError(500, msg)
data = None
if r.status_code == 404 and allow_404:
if r.status_code == 403 and allow_403:
pass
elif r.status_code == 403:
app_log.error(
@@ -389,26 +393,9 @@ class HubAuth(SingletonConfigurable):
return data
def user_for_cookie(self, encrypted_cookie, use_cache=True, session_id=''):
"""Ask the Hub to identify the user for a given cookie.
Args:
encrypted_cookie (str): the cookie value (not decrypted, the Hub will do that)
use_cache (bool): Specify use_cache=False to skip cached cookie values (default: True)
Returns:
user_model (dict): The user model, if a user is identified, None if authentication fails.
The 'name' field contains the user's name.
"""
return self._check_hub_authorization(
url=url_path_join(
self.api_url,
"authorizations/cookie",
self.cookie_name,
quote(encrypted_cookie, safe=''),
),
cache_key='cookie:{}:{}'.format(session_id, encrypted_cookie),
use_cache=use_cache,
"""Deprecated and removed. Use HubOAuth to authenticate browsers."""
raise RuntimeError(
"Identifying users by shared cookie is removed in JupyterHub 2.0. Use OAuth tokens."
)
def user_for_token(self, token, use_cache=True, session_id=''):
@@ -425,14 +412,19 @@ class HubAuth(SingletonConfigurable):
"""
return self._check_hub_authorization(
url=url_path_join(
self.api_url, "authorizations/token", quote(token, safe='')
self.api_url,
"user",
),
api_token=token,
cache_key='token:{}:{}'.format(
session_id,
hashlib.sha256(token.encode("utf8", "replace")).hexdigest(),
),
cache_key='token:{}:{}'.format(session_id, token),
use_cache=use_cache,
)
auth_header_name = 'Authorization'
auth_header_pat = re.compile(r'token\s+(.+)', re.IGNORECASE)
auth_header_pat = re.compile(r'(?:token|bearer)\s+(.+)', re.IGNORECASE)
def get_token(self, handler):
"""Get the user token from a request
@@ -453,10 +445,8 @@ class HubAuth(SingletonConfigurable):
def _get_user_cookie(self, handler):
"""Get the user model from a cookie"""
encrypted_cookie = handler.get_cookie(self.cookie_name)
session_id = self.get_session_id(handler)
if encrypted_cookie:
return self.user_for_cookie(encrypted_cookie, session_id=session_id)
# overridden in HubOAuth to store the access token after oauth
return None
def get_session_id(self, handler):
"""Get the jupyterhub session id
@@ -509,6 +499,9 @@ class HubAuth(SingletonConfigurable):
class HubOAuth(HubAuth):
"""HubAuth using OAuth for login instead of cookies set by the Hub.
Use this class if you want users to be able to visit your service with a browser.
They will be authenticated via OAuth with the Hub.
.. versionadded: 0.8
"""

View File

@@ -10,9 +10,11 @@ with JupyterHub authentication mixins enabled.
# Distributed under the terms of the Modified BSD License.
import asyncio
import json
import logging
import os
import random
import secrets
import sys
import warnings
from datetime import datetime
from datetime import timezone
@@ -50,6 +52,17 @@ from ..utils import make_ssl_context
from ..utils import url_path_join
def _bool_env(key):
"""Cast an environment variable to bool
0, empty, or unset is False; all other values are True.
"""
if os.environ.get(key, "") in {"", "0"}:
return False
else:
return True
# Authenticate requests with the Hub
@@ -99,19 +112,26 @@ class JupyterHubLoginHandlerMixin:
Thus shouldn't be called anymore because HubAuthenticatedHandler
should have already overridden get_current_user().
Keep here to prevent unlikely circumstance from losing auth.
Keep here to protect against the uncommon circumstance of multiple BaseHandlers
missing auth.
e.g. when multiple BaseHandler classes are used.
"""
if HubAuthenticatedHandler not in handler.__class__.mro():
warnings.warn(
f"Expected to see HubAuthenticatedHandler in {handler.__class__}.mro()",
f"Expected to see HubAuthenticatedHandler in {handler.__class__}.mro(),"
" patching in at call time. Hub authentication is still applied.",
RuntimeWarning,
stacklevel=2,
)
# patch HubAuthenticated into the instance
handler.__class__ = type(
handler.__class__.__name__,
(HubAuthenticatedHandler, handler.__class__),
{},
)
# patch into the class itself so this doesn't happen again for the same class
patch_base_handler(handler.__class__)
return handler.get_current_user()
@classmethod
@@ -141,7 +161,6 @@ class OAuthCallbackHandlerMixin(HubOAuthCallbackHandler):
aliases = {
'user': 'SingleUserNotebookApp.user',
'group': 'SingleUserNotebookApp.group',
'cookie-name': 'HubAuth.cookie_name',
'hub-prefix': 'SingleUserNotebookApp.hub_prefix',
'hub-host': 'SingleUserNotebookApp.hub_host',
'hub-api-url': 'SingleUserNotebookApp.hub_api_url',
@@ -270,6 +289,10 @@ class SingleUserNotebookAppMixin(Configurable):
def _user_changed(self, change):
self.log.name = change.new
@default("default_url")
def _default_url(self):
return os.environ.get("JUPYTERHUB_DEFAULT_URL", "/tree/")
hub_host = Unicode().tag(config=True)
hub_prefix = Unicode('/hub/').tag(config=True)
@@ -352,7 +375,26 @@ class SingleUserNotebookAppMixin(Configurable):
""",
).tag(config=True)
@validate('notebook_dir')
@default("disable_user_config")
def _default_disable_user_config(self):
return _bool_env("JUPYTERHUB_DISABLE_USER_CONFIG")
@default("root_dir")
def _default_root_dir(self):
if os.environ.get("JUPYTERHUB_ROOT_DIR"):
proposal = {"value": os.environ["JUPYTERHUB_ROOT_DIR"]}
# explicitly call validator, not called on default values
return self._notebook_dir_validate(proposal)
else:
return os.getcwd()
# notebook_dir is used by the classic notebook server
# root_dir is the future in jupyter server
@default("notebook_dir")
def _default_notebook_dir(self):
return self._default_root_dir()
@validate("notebook_dir", "root_dir")
def _notebook_dir_validate(self, proposal):
value = os.path.expanduser(proposal['value'])
# Strip any trailing slashes
@@ -368,6 +410,13 @@ class SingleUserNotebookAppMixin(Configurable):
raise TraitError("No such notebook dir: %r" % value)
return value
@default('log_level')
def _log_level_default(self):
if _bool_env("JUPYTERHUB_DEBUG"):
return logging.DEBUG
else:
return logging.INFO
@default('log_datefmt')
def _log_datefmt_default(self):
"""Exclude date from default date format"""
@@ -683,6 +732,97 @@ def detect_base_package(App):
return None
def _nice_cls_repr(cls):
"""Nice repr of classes, e.g. 'module.submod.Class'
Also accepts tuples of classes
"""
return f"{cls.__module__}.{cls.__name__}"
def patch_base_handler(BaseHandler, log=None):
"""Patch HubAuthenticated into a base handler class
so anything inheriting from BaseHandler uses Hub authentication.
This works *even after* subclasses have imported and inherited from BaseHandler.
.. versionadded: 1.5
Made available as an importable utility
"""
if log is None:
log = logging.getLogger()
if HubAuthenticatedHandler not in BaseHandler.__bases__:
new_bases = (HubAuthenticatedHandler,) + BaseHandler.__bases__
log.info(
"Patching auth into {mod}.{name}({old_bases}) -> {name}({new_bases})".format(
mod=BaseHandler.__module__,
name=BaseHandler.__name__,
old_bases=', '.join(
_nice_cls_repr(cls) for cls in BaseHandler.__bases__
),
new_bases=', '.join(_nice_cls_repr(cls) for cls in new_bases),
)
)
BaseHandler.__bases__ = new_bases
# We've now inserted our class as a parent of BaseHandler,
# but we also need to ensure BaseHandler *itself* doesn't
# override the public tornado API methods we have inserted.
# If they are defined in BaseHandler, explicitly replace them with our methods.
for name in ("get_current_user", "get_login_url"):
if name in BaseHandler.__dict__:
log.debug(
f"Overriding {BaseHandler}.{name} with HubAuthenticatedHandler.{name}"
)
method = getattr(HubAuthenticatedHandler, name)
setattr(BaseHandler, name, method)
return BaseHandler
def _patch_app_base_handlers(app):
"""Patch Hub Authentication into the base handlers of an app
Patches HubAuthenticatedHandler into:
- App.base_handler_class (if defined)
- jupyter_server's JupyterHandler (if already imported)
- notebook's IPythonHandler (if already imported)
"""
BaseHandler = app_base_handler = getattr(app, "base_handler_class", None)
base_handlers = []
if BaseHandler is not None:
base_handlers.append(BaseHandler)
# patch jupyter_server and notebook handlers if they have been imported
for base_handler_name in [
"jupyter_server.base.handlers.JupyterHandler",
"notebook.base.handlers.IPythonHandler",
]:
modname, _ = base_handler_name.rsplit(".", 1)
if modname in sys.modules:
base_handlers.append(import_item(base_handler_name))
if not base_handlers:
pkg = detect_base_package(app.__class__)
if pkg == "jupyter_server":
BaseHandler = import_item("jupyter_server.base.handlers.JupyterHandler")
elif pkg == "notebook":
BaseHandler = import_item("notebook.base.handlers.IPythonHandler")
else:
raise ValueError(
"{}.base_handler_class must be defined".format(app.__class__.__name__)
)
base_handlers.append(BaseHandler)
# patch-in HubAuthenticatedHandler to base handler classes
for BaseHandler in base_handlers:
patch_base_handler(BaseHandler)
# return the first entry
return base_handlers[0]
def make_singleuser_app(App):
"""Make and return a singleuser notebook app
@@ -706,37 +846,7 @@ def make_singleuser_app(App):
# detect base classes
LoginHandler = empty_parent_app.login_handler_class
LogoutHandler = empty_parent_app.logout_handler_class
BaseHandler = getattr(empty_parent_app, "base_handler_class", None)
if BaseHandler is None:
pkg = detect_base_package(App)
if pkg == "jupyter_server":
BaseHandler = import_item("jupyter_server.base.handlers.JupyterHandler")
elif pkg == "notebook":
BaseHandler = import_item("notebook.base.handlers.IPythonHandler")
else:
raise ValueError(
"{}.base_handler_class must be defined".format(App.__name__)
)
# patch-in HubAuthenticatedHandler to BaseHandler,
# so anything inheriting from BaseHandler uses Hub authentication
if HubAuthenticatedHandler not in BaseHandler.__bases__:
new_bases = (HubAuthenticatedHandler,) + BaseHandler.__bases__
log.debug(
f"Patching {BaseHandler}{BaseHandler.__bases__} -> {BaseHandler}{new_bases}"
)
BaseHandler.__bases__ = new_bases
# We've now inserted our class as a parent of BaseHandler,
# but we also need to ensure BaseHandler *itself* doesn't
# override the public tornado API methods we have inserted.
# If they are defined in BaseHandler, explicitly replace them with our methods.
for name in ("get_current_user", "get_login_url"):
if name in BaseHandler.__dict__:
log.debug(
f"Overriding {BaseHandler}.{name} with HubAuthenticatedHandler.{name}"
)
method = getattr(HubAuthenticatedHandler, name)
setattr(BaseHandler, name, method)
BaseHandler = _patch_app_base_handlers(empty_parent_app)
# create Handler classes from mixins + bases
class JupyterHubLoginHandler(JupyterHubLoginHandlerMixin, LoginHandler):
@@ -766,4 +876,11 @@ def make_singleuser_app(App):
logout_handler_class = JupyterHubLogoutHandler
oauth_callback_handler_class = OAuthCallbackHandler
def initialize(self, *args, **kwargs):
result = super().initialize(*args, **kwargs)
# run patch again after initialize, so extensions have already been loaded
# probably a no-op most of the time
_patch_app_base_handlers(self)
return result
return SingleUserNotebookApp
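
A hedged sketch of how the factory above is typically wired to a Jupyter Server application class (jupyter_server is assumed to be installed; the import path for the mixins module is as shown in this diff):

from jupyter_server.serverapp import ServerApp

from jupyterhub.singleuser.mixins import make_singleuser_app

# build a Hub-authenticated single-user app class from jupyter_server's ServerApp
SingleUserServerApp = make_singleuser_app(ServerApp)

if __name__ == "__main__":
    SingleUserServerApp.launch_instance()
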

View File

@@ -39,7 +39,6 @@ from .traitlets import ByteSpecification
from .traitlets import Callable
from .traitlets import Command
from .utils import exponential_backoff
from .utils import iterate_until
from .utils import maybe_future
from .utils import random_port
from .utils import url_path_join
@@ -246,11 +245,22 @@ class Spawner(LoggingConfigurable):
)
ip = Unicode(
'',
'127.0.0.1',
help="""
The IP address (or hostname) the single-user server should listen on.
Usually either '127.0.0.1' (default) or '0.0.0.0'.
The JupyterHub proxy implementation should be able to send packets to this interface.
Subclasses which launch remotely or in containers
should override the default to '0.0.0.0'.
.. versionchanged:: 2.0
Default changed to '127.0.0.1', from ''.
In most cases, this does not result in a change in behavior,
as '' was interpreted as 'unspecified',
which used the subprocess's own default, itself usually '127.0.0.1'.
""",
).tag(config=True)
@@ -811,8 +821,20 @@ class Spawner(LoggingConfigurable):
'activity',
)
env['JUPYTERHUB_BASE_URL'] = self.hub.base_url[:-4]
if self.server:
base_url = self.server.base_url
if self.ip or self.port:
self.server.ip = self.ip
self.server.port = self.port
env['JUPYTERHUB_SERVICE_PREFIX'] = self.server.base_url
else:
# this should only occur in mock/testing scenarios
base_url = '/'
proto = 'https' if self.internal_ssl else 'http'
bind_url = f"{proto}://{self.ip}:{self.port}{base_url}"
env["JUPYTERHUB_SERVICE_URL"] = bind_url
# Put in limit and guarantee info if they exist.
# Note that this is for use by the humans / notebook extensions in the
@@ -832,6 +854,20 @@ class Spawner(LoggingConfigurable):
env['JUPYTERHUB_SSL_CERTFILE'] = self.cert_paths['certfile']
env['JUPYTERHUB_SSL_CLIENT_CA'] = self.cert_paths['cafile']
if self.notebook_dir:
notebook_dir = self.format_string(self.notebook_dir)
env["JUPYTERHUB_ROOT_DIR"] = notebook_dir
if self.default_url:
default_url = self.format_string(self.default_url)
env["JUPYTERHUB_DEFAULT_URL"] = default_url
if self.debug:
env["JUPYTERHUB_DEBUG"] = "1"
if self.disable_user_config:
env["JUPYTERHUB_DISABLE_USER_CONFIG"] = "1"
# env overrides from config. If the value is a callable, it will be called with
# one parameter - the current spawner instance - and the return value
# will be assigned to the environment variable. This will be called at
@@ -843,7 +879,6 @@ class Spawner(LoggingConfigurable):
env[key] = value(self)
else:
env[key] = value
return env
async def get_url(self):
@@ -1010,35 +1045,16 @@ class Spawner(LoggingConfigurable):
"""Return the arguments to be passed after self.cmd
Doesn't expect shell expansion to happen.
.. versionchanged:: 2.0
Prior to 2.0, JupyterHub passed some options such as
ip, port, and default_url to the command-line.
JupyterHub 2.0 no longer builds any CLI args
other than `Spawner.cmd` and `Spawner.args`.
All values that come from jupyterhub itself
will be passed via environment variables.
"""
args = []
if self.ip:
args.append('--ip=%s' % _quote_safe(self.ip))
if self.port:
args.append('--port=%i' % self.port)
elif self.server and self.server.port:
self.log.warning(
"Setting port from user.server is deprecated as of JupyterHub 0.7."
)
args.append('--port=%i' % self.server.port)
if self.notebook_dir:
notebook_dir = self.format_string(self.notebook_dir)
args.append('--notebook-dir=%s' % _quote_safe(notebook_dir))
if self.default_url:
default_url = self.format_string(self.default_url)
args.append(
'--SingleUserNotebookApp.default_url=%s' % _quote_safe(default_url)
)
if self.debug:
args.append('--debug')
if self.disable_user_config:
args.append('--disable-user-config')
args.extend(self.args)
return args
return self.args
def run_pre_spawn_hook(self):
"""Run the pre_spawn_hook if defined"""
@@ -1155,6 +1171,18 @@ class Spawner(LoggingConfigurable):
"""
raise NotImplementedError("Override in subclass. Must be a coroutine.")
def delete_forever(self):
"""Called when a user or server is deleted.
This can do things like request removal of resources such as persistent storage.
Only called on stopped spawners, and is usually the last action ever taken for the user.
Will only be called once on each Spawner, immediately prior to removal.
Stopping a server does *not* call this method.
"""
pass
def add_poll_callback(self, callback, *args, **kwargs):
"""Add a callback to fire when the single-user server stops"""
if args or kwargs:
@@ -1470,6 +1498,7 @@ class LocalProcessSpawner(Spawner):
async def start(self):
"""Start the single-user server."""
if self.port == 0:
self.port = random_port()
cmd = []
env = self.get_env()
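
A hypothetical use of the new delete_forever hook introduced above: a spawner subclass that removes a user's persistent directory when the user (not just a server) is deleted. The data path is an assumption, not part of JupyterHub:

import shutil

from jupyterhub.spawner import LocalProcessSpawner

class CleanupSpawner(LocalProcessSpawner):
    # hypothetical example; only called once the user is deleted and the spawner is stopped
    def delete_forever(self):
        user_dir = f"/srv/jupyterhub/data/{self.user.name}"
        self.log.info("Removing %s", user_dir)
        shutil.rmtree(user_dir, ignore_errors=True)
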

View File

@@ -94,16 +94,6 @@ class MockSpawner(SimpleLocalProcessSpawner):
def _cmd_default(self):
return [sys.executable, '-m', 'jupyterhub.tests.mocksu']
async def delete_forever(self):
"""Called when a user is deleted.
This can do things like request removal of resources such as persistent storage.
Only called on stopped spawners, and is likely the last action ever taken for the user.
Will only be called once on the user's default Spawner.
"""
pass
use_this_api_token = None
def start(self):

View File

@@ -11,10 +11,10 @@ Handlers and their purpose include:
- ArgsHandler: allowing retrieval of `sys.argv`.
"""
import argparse
import json
import os
import sys
from urllib.parse import urlparse
from tornado import httpserver
from tornado import ioloop
@@ -36,7 +36,8 @@ class ArgsHandler(web.RequestHandler):
self.write(json.dumps(sys.argv))
def main(args):
def main():
url = urlparse(os.environ["JUPYTERHUB_SERVICE_URL"])
options.logging = 'debug'
log.enable_pretty_logging()
app = web.Application(
@@ -50,10 +51,11 @@ def main(args):
if key and cert and ca:
ssl_context = make_ssl_context(key, cert, cafile=ca, check_hostname=False)
assert url.scheme == "https"
server = httpserver.HTTPServer(app, ssl_options=ssl_context)
log.app_log.info("Starting mock singleuser server at 127.0.0.1:%s", args.port)
server.listen(args.port, '127.0.0.1')
log.app_log.info(f"Starting mock singleuser server at {url.hostname}:{url.port}")
server.listen(url.port, url.hostname)
try:
ioloop.IOLoop.instance().start()
except KeyboardInterrupt:
@@ -61,7 +63,4 @@ def main(args):
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--port', type=int)
args, extra = parser.parse_known_args()
main(args)
main()

View File

@@ -192,6 +192,38 @@ async def test_get_users(app):
r_user_model = r.json()[0]
assert r_user_model['name'] == user_model['name']
# Tests offset for pagination
r = await api_request(app, 'users?offset=1')
assert r.status_code == 200
users = sorted(r.json(), key=lambda d: d['name'])
users = [normalize_user(u) for u in users]
assert users == [
fill_user(
{'name': 'user', 'admin': False, 'auth_state': None, 'roles': ['user']}
)
]
r = await api_request(app, 'users?offset=20')
assert r.status_code == 200
assert r.json() == []
# Test limit for pagination
r = await api_request(app, 'users?limit=1')
assert r.status_code == 200
users = sorted(r.json(), key=lambda d: d['name'])
users = [normalize_user(u) for u in users]
assert users == [
fill_user(
{'name': 'admin', 'admin': True, 'auth_state': None, 'roles': ['admin']}
)
]
r = await api_request(app, 'users?limit=0')
assert r.status_code == 200
assert r.json() == []
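
As the tests above exercise, offset and limit are plain query parameters on the list endpoints. A usage sketch against a running Hub (URL and token source are assumptions):

import os

import requests

api_url = "http://127.0.0.1:8081/hub/api"  # assumption: default Hub API address
headers = {"Authorization": f"token {os.environ['JUPYTERHUB_API_TOKEN']}"}

# fetch the second page of users, 50 at a time (page size is an example value)
r = requests.get(
    f"{api_url}/users", params={"offset": 50, "limit": 50}, headers=headers
)
r.raise_for_status()
users = r.json()
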
@mark.user
@mark.parametrize(
@@ -611,10 +643,17 @@ async def test_spawn(app):
r = await async_requests.get(ujoin(url, 'args'), **kwargs)
assert r.status_code == 200
argv = r.json()
assert '--port' in ' '.join(argv)
assert '--port' not in ' '.join(argv)
# we pass no CLI args anymore:
assert len(argv) == 1
r = await async_requests.get(ujoin(url, 'env'), **kwargs)
env = r.json()
for expected in ['JUPYTERHUB_USER', 'JUPYTERHUB_BASE_URL', 'JUPYTERHUB_API_TOKEN']:
for expected in [
'JUPYTERHUB_USER',
'JUPYTERHUB_BASE_URL',
'JUPYTERHUB_API_TOKEN',
'JUPYTERHUB_SERVICE_URL',
]:
assert expected in env
if app.subdomain_host:
assert env['JUPYTERHUB_HOST'] == app.subdomain_host
@@ -1176,76 +1215,13 @@ async def test_check_token(app):
assert r.status_code == 404
@mark.parametrize("headers, status", [({}, 200), ({'Authorization': 'token bad'}, 403)])
@mark.parametrize("headers, status", [({}, 404), ({'Authorization': 'token bad'}, 404)])
async def test_get_new_token_deprecated(app, headers, status):
# request a new token
r = await api_request(
app, 'authorizations', 'token', method='post', headers=headers
)
assert r.status_code == status
if status != 200:
return
reply = r.json()
assert 'token' in reply
r = await api_request(app, 'authorizations', 'token', reply['token'])
r.raise_for_status()
reply = r.json()
assert reply['name'] == 'admin'
async def test_token_formdata_deprecated(app):
"""Create a token for a user with formdata and no auth header"""
data = {'username': 'fake', 'password': 'fake'}
r = await api_request(
app,
'authorizations',
'token',
method='post',
data=json.dumps(data) if data else None,
noauth=True,
)
assert r.status_code == 200
reply = r.json()
assert 'token' in reply
r = await api_request(app, 'authorizations', 'token', reply['token'])
r.raise_for_status()
reply = r.json()
assert reply['name'] == data['username']
@mark.parametrize(
"as_user, for_user, status",
[
('admin', 'other', 200),
('admin', 'missing', 400),
('user', 'other', 403),
('user', 'user', 200),
],
)
async def test_token_as_user_deprecated(app, as_user, for_user, status):
# ensure both users exist
u = add_user(app.db, app, name=as_user)
if for_user != 'missing':
for_user_obj = add_user(app.db, app, name=for_user)
data = {'username': for_user}
headers = {'Authorization': 'token %s' % u.new_api_token()}
r = await api_request(
app,
'authorizations',
'token',
method='post',
data=json.dumps(data),
headers=headers,
)
assert r.status_code == status
reply = r.json()
if status != 200:
return
assert 'token' in reply
r = await api_request(app, 'authorizations', 'token', reply['token'])
r.raise_for_status()
reply = r.json()
assert reply['name'] == data['username']
@mark.parametrize(
@@ -1447,16 +1423,45 @@ async def test_groups_list(app):
reply = r.json()
assert reply == []
# create a group
# create two groups
group = orm.Group(name='alphaflight')
group_2 = orm.Group(name='betaflight')
app.db.add(group)
app.db.add(group_2)
app.db.commit()
r = await api_request(app, 'groups')
r.raise_for_status()
reply = r.json()
assert reply == [
{'kind': 'group', 'name': 'alphaflight', 'users': [], 'roles': []},
{'kind': 'group', 'name': 'betaflight', 'users': [], 'roles': []},
]
# Test offset for pagination
r = await api_request(app, "groups?offset=1")
r.raise_for_status()
reply = r.json()
assert r.status_code == 200
assert reply == [{'kind': 'group', 'name': 'betaflight', 'users': [], 'roles': []}]
r = await api_request(app, "groups?offset=10")
r.raise_for_status()
reply = r.json()
assert reply == []
# Test limit for pagination
r = await api_request(app, "groups?limit=1")
r.raise_for_status()
reply = r.json()
assert r.status_code == 200
assert reply == [{'kind': 'group', 'name': 'alphaflight', 'users': [], 'roles': []}]
r = await api_request(app, "groups?limit=0")
r.raise_for_status()
reply = r.json()
assert reply == []
@mark.group
async def test_add_multi_group(app):

View File

@@ -1,5 +1,4 @@
"""Test scopes for API handlers"""
import json
from unittest import mock
import pytest
@@ -11,6 +10,7 @@ from .. import orm
from .. import roles
from ..handlers import BaseHandler
from ..scopes import _check_scope
from ..scopes import _intersect_scopes
from ..scopes import get_scopes_for
from ..scopes import needs_scope
from ..scopes import parse_scopes
@@ -806,3 +806,81 @@ async def test_roles_access(app, create_service_with_scopes, create_user_with_sc
assert r.status_code == 200
model_keys = {'kind', 'name', 'roles'}
assert model_keys == r.json().keys()
@pytest.mark.parametrize(
"left, right, expected, should_warn",
[
(set(), set(), set(), False),
(set(), set(["users"]), set(), False),
# no warning if users and groups only on the same side
(
set(["users!user=x", "users!group=y"]),
set([]),
set([]),
False,
),
# no warning if users are on both sides
(
set(["users!user=x", "users!user=y", "users!group=y"]),
set(["users!user=x"]),
set(["users!user=x"]),
False,
),
# no warning if users and groups are both defined
# on both sides
(
set(["users!user=x", "users!group=y"]),
set(["users!user=x", "users!group=y", "users!user=z"]),
set(["users!user=x", "users!group=y"]),
False,
),
# warn if there's a user on one side and a group on the other
# which *may* intersect
(
set(["users!group=y", "users!user=z"]),
set(["users!user=x"]),
set([]),
True,
),
# same for group->server
(
set(["users!group=y", "users!user=z"]),
set(["users!server=x/y"]),
set([]),
True,
),
# this one actually shouldn't warn because server=x/y is under user=x,
# but we don't need to overcomplicate things just for a warning
(
set(["users!group=y", "users!user=x"]),
set(["users!server=x/y"]),
set(["users!server=x/y"]),
True,
),
# resolves server under user, without warning
(
set(["read:users:servers!user=abc"]),
set(["read:users:servers!server=abc/xyz"]),
set(["read:users:servers!server=abc/xyz"]),
False,
),
# user->server, no match
(
set(["read:users:servers!user=abc"]),
set(["read:users:servers!server=abcd/xyz"]),
set([]),
False,
),
],
)
def test_intersect_scopes(left, right, expected, should_warn, recwarn):
# run every test in both directions, to ensure symmetry of the inputs
for a, b in [(left, right), (right, left)]:
intersection = _intersect_scopes(set(a), set(b))
assert intersection == set(expected)
if should_warn:
assert len(recwarn) == 1
else:
assert len(recwarn) == 0

View File

@@ -26,8 +26,8 @@ from tornado.web import RequestHandler
from .. import orm
from ..services.auth import _ExpiringDict
from ..services.auth import HubAuth
from ..services.auth import HubAuthenticated
from ..services.auth import HubOAuth
from ..services.auth import HubOAuthenticated
from ..utils import url_path_join
from .mocking import public_host
from .mocking import public_url
@@ -76,178 +76,6 @@ def test_expiring_dict():
assert cache.get('key', 'default') == 'cached value'
def test_hub_auth():
auth = HubAuth(cookie_name='foo')
mock_model = {'name': 'onyxia'}
url = url_path_join(auth.api_url, "authorizations/cookie/foo/bar")
with requests_mock.Mocker() as m:
m.get(url, text=json.dumps(mock_model))
user_model = auth.user_for_cookie('bar')
assert user_model == mock_model
# check cache
user_model = auth.user_for_cookie('bar')
assert user_model == mock_model
with requests_mock.Mocker() as m:
m.get(url, status_code=404)
user_model = auth.user_for_cookie('bar', use_cache=False)
assert user_model is None
# invalidate cache with timer
mock_model = {'name': 'willow'}
with monotonic_future, requests_mock.Mocker() as m:
m.get(url, text=json.dumps(mock_model))
user_model = auth.user_for_cookie('bar')
assert user_model == mock_model
with requests_mock.Mocker() as m:
m.get(url, status_code=500)
with raises(HTTPError) as exc_info:
user_model = auth.user_for_cookie('bar', use_cache=False)
assert exc_info.value.status_code == 502
with requests_mock.Mocker() as m:
m.get(url, status_code=400)
with raises(HTTPError) as exc_info:
user_model = auth.user_for_cookie('bar', use_cache=False)
assert exc_info.value.status_code == 500
def test_hub_authenticated(request):
auth = HubAuth(cookie_name='jubal')
mock_model = {'name': 'jubalearly', 'groups': ['lions']}
cookie_url = url_path_join(auth.api_url, "authorizations/cookie", auth.cookie_name)
good_url = url_path_join(cookie_url, "early")
bad_url = url_path_join(cookie_url, "late")
class TestHandler(HubAuthenticated, RequestHandler):
hub_auth = auth
@authenticated
def get(self):
self.finish(self.get_current_user())
# start hub-authenticated service in a thread:
port = 50505
q = Queue()
def run():
asyncio.set_event_loop(asyncio.new_event_loop())
app = Application([('/*', TestHandler)], login_url=auth.login_url)
http_server = HTTPServer(app)
http_server.listen(port)
loop = IOLoop.current()
loop.add_callback(lambda: q.put(loop))
loop.start()
t = Thread(target=run)
t.start()
def finish_thread():
loop.add_callback(loop.stop)
t.join(timeout=30)
assert not t.is_alive()
request.addfinalizer(finish_thread)
# wait for thread to start
loop = q.get(timeout=10)
with requests_mock.Mocker(real_http=True) as m:
# no cookie
r = requests.get('http://127.0.0.1:%i' % port, allow_redirects=False)
r.raise_for_status()
assert r.status_code == 302
assert auth.login_url in r.headers['Location']
# wrong cookie
m.get(bad_url, status_code=404)
r = requests.get(
'http://127.0.0.1:%i' % port,
cookies={'jubal': 'late'},
allow_redirects=False,
)
r.raise_for_status()
assert r.status_code == 302
assert auth.login_url in r.headers['Location']
# clear the cache because we are going to request
# the same url again with a different result
auth.cache.clear()
# upstream 403
m.get(bad_url, status_code=403)
r = requests.get(
'http://127.0.0.1:%i' % port,
cookies={'jubal': 'late'},
allow_redirects=False,
)
assert r.status_code == 500
m.get(good_url, text=json.dumps(mock_model))
# no specific allowed user
r = requests.get(
'http://127.0.0.1:%i' % port,
cookies={'jubal': 'early'},
allow_redirects=False,
)
r.raise_for_status()
assert r.status_code == 200
# pass allowed user
TestHandler.hub_users = {'jubalearly'}
r = requests.get(
'http://127.0.0.1:%i' % port,
cookies={'jubal': 'early'},
allow_redirects=False,
)
r.raise_for_status()
assert r.status_code == 200
# no pass allowed user
TestHandler.hub_users = {'kaylee'}
r = requests.get(
'http://127.0.0.1:%i' % port,
cookies={'jubal': 'early'},
allow_redirects=False,
)
assert r.status_code == 403
# pass allowed group
TestHandler.hub_groups = {'lions'}
r = requests.get(
'http://127.0.0.1:%i' % port,
cookies={'jubal': 'early'},
allow_redirects=False,
)
r.raise_for_status()
assert r.status_code == 200
# no pass allowed group
TestHandler.hub_groups = {'tigers'}
r = requests.get(
'http://127.0.0.1:%i' % port,
cookies={'jubal': 'early'},
allow_redirects=False,
)
assert r.status_code == 403
async def test_hubauth_cookie(app, mockservice_url):
"""Test HubAuthenticated service with user cookies"""
cookies = await app.login_user('badger')
r = await async_requests.get(
public_url(app, mockservice_url) + '/whoami/', cookies=cookies
)
r.raise_for_status()
print(r.text)
reply = r.json()
sub_reply = {key: reply.get(key, 'missing') for key in ['name', 'admin']}
assert sub_reply == {'name': 'badger', 'admin': False}
async def test_hubauth_token(app, mockservice_url):
"""Test HubAuthenticated service with user API tokens"""
u = add_user(app.db, name='river')
@@ -295,8 +123,10 @@ async def test_hubauth_service_token(app, mockservice_url):
r = await async_requests.get(
public_url(app, mockservice_url) + '/whoami/',
headers={'Authorization': 'token %s' % token},
allow_redirects=False,
)
r.raise_for_status()
assert r.status_code == 200
reply = r.json()
service_model = {'kind': 'service', 'name': name, 'admin': False, 'roles': []}
assert service_model.items() <= reply.items()

View File

@@ -1,4 +1,5 @@
import asyncio
import inspect
import os
from concurrent.futures import ThreadPoolExecutor
@@ -80,14 +81,26 @@ def check_db_locks(func):
"""
def new_func(app, *args, **kwargs):
retval = func(app, *args, **kwargs)
maybe_future = func(app, *args, **kwargs)
def _check(_=None):
temp_session = app.session_factory()
try:
temp_session.execute('CREATE TABLE dummy (foo INT)')
temp_session.execute('DROP TABLE dummy')
finally:
temp_session.close()
return retval
async def await_then_check():
result = await maybe_future
_check()
return result
if inspect.isawaitable(maybe_future):
return await_then_check()
else:
_check()
return maybe_future
return new_func

View File

@@ -252,6 +252,35 @@ class User:
await self.save_auth_state(auth_state)
return auth_state
async def delete_spawners(self):
"""Call spawner cleanup methods
Allows the spawner to cleanup persistent resources
"""
for name in self.orm_user.orm_spawners.keys():
await self._delete_spawner(name)
async def _delete_spawner(self, name_or_spawner):
"""Delete a single spawner"""
# always ensure full Spawner
# this may instantiate the Spawner if it wasn't already running,
# just to delete it
if isinstance(name_or_spawner, str):
spawner = self.spawners[name_or_spawner]
else:
spawner = name_or_spawner
if spawner.active:
raise RuntimeError(
f"Spawner {spawner._log_name} is active and cannot be deleted."
)
try:
await maybe_future(spawner.delete_forever())
except Exception as e:
self.log.exception(
f"Error cleaning up persistent resources on {spawner._log_name}"
)
def all_spawners(self, include_default=True):
"""Generator yielding all my spawners
@@ -810,14 +839,8 @@ class User:
if orm_token:
self.db.delete(orm_token)
# remove oauth client as well
# handle upgrades from 0.8, where client id will be `user-USERNAME`,
# not just `jupyterhub-user-USERNAME`
client_ids = (
spawner.oauth_client_id,
spawner.oauth_client_id.split('-', 1)[1],
)
for oauth_client in self.db.query(orm.OAuthClient).filter(
orm.OAuthClient.identifier.in_(client_ids)
for oauth_client in self.db.query(orm.OAuthClient).filter_by(
identifier=spawner.oauth_client_id,
):
self.log.debug("Deleting oauth client %s", oauth_client.identifier)
self.db.delete(oauth_client)

View File

@@ -6,7 +6,7 @@ FROM $BASE_IMAGE
MAINTAINER Project Jupyter <jupyter@googlegroups.com>
ADD install_jupyterhub /tmp/install_jupyterhub
ARG JUPYTERHUB_VERSION=master
ARG JUPYTERHUB_VERSION=main
# install pinned jupyterhub and ensure notebook is installed
RUN python3 /tmp/install_jupyterhub && \
python3 -m pip install notebook

View File

@@ -5,7 +5,7 @@ Built from the `jupyter/base-notebook` base image.
This image contains a single user notebook server for use with
[JupyterHub](https://github.com/jupyterhub/jupyterhub). In particular, it is meant
to be used with the
[DockerSpawner](https://github.com/jupyterhub/dockerspawner/blob/master/dockerspawner/dockerspawner.py)
[DockerSpawner](https://github.com/jupyterhub/dockerspawner/blob/HEAD/dockerspawner/dockerspawner.py)
class to launch user notebook servers within docker containers.
The only thing this image accomplishes is pinning the jupyterhub version on top of base-notebook.

View File

@@ -13,7 +13,7 @@ function get_hub_version() {
hub_xy="${hub_xy}.${split[3]}"
fi
}
# tag e.g. 0.9 with master
# tag e.g. 0.9 with main
get_hub_version
docker tag $DOCKER_REPO:$DOCKER_TAG $DOCKER_REPO:$hub_xy
docker push $DOCKER_REPO:$hub_xy

View File

@@ -9,8 +9,8 @@ pip_install = [
sys.executable, '-m', 'pip', 'install', '--no-cache', '--upgrade',
'--upgrade-strategy', 'only-if-needed',
]
if V == 'master':
req = 'https://github.com/jupyterhub/jupyterhub/archive/master.tar.gz'
if V in {'main', 'HEAD'}:
req = 'https://github.com/jupyterhub/jupyterhub/archive/HEAD.tar.gz'
else:
version_info = [ int(part) for part in V.split('.') ]
version_info[-1] += 1