Compare commits

...

690 Commits
1.1.0 ... 1.4.2

Author SHA1 Message Date
Min RK
909b3ad4d7 Merge pull request #3538 from consideRatio/pr/release-1.4.2
Release 1.4.2
2021-07-16 10:57:54 +00:00
Erik Sundell
114493be9b release 1.4.2 2021-07-15 16:57:54 +02:00
Erik Sundell
4c0ac5ba91 changelog for 1.4.2 2021-07-15 16:57:52 +02:00
Erik Sundell
52793d65bd Backport PR #3531: Fix regression where external services api_token became required
Issue background

Registering an external service means it won't be run as a process managed by JupyterHub; as I understand it, such external services may be registered only to get a /services/<service_name> route registered with JupyterHub's configured proxy, rather than to actually use an api_token and speak with JupyterHub.

In the past it was okay to register an external service without an api_token, but now it isn't. This PR fixes that.

I run into this when I register grafana as an external service like this (in reality via a z2jh config with slightly different syntax).

```python
c.JupyterHub.services = [
    {
        "name": "grafana",
        "url": "http://grafana",
    }
]
```

JupyterHub has [documentation about Services](https://jupyterhub.readthedocs.io/en/stable/reference/services.html#properties-of-a-service), where one can see that the default value of api_token is None.

Issue details

This is an error GeorgianaElena and I have run into using JupyterHub 1.4.1; I'm not sure at what point the regression was introduced, only that it is present in 1.4.1.

I wrote some notes while tracking this issue down; this is the summary.

```
    This test was made to reproduce an error like this:

        ValueError: Tokens must be at least 8 characters, got ''

    The error had the following stack trace in 1.4.1:

        jupyterhub/app.py:2213: in init_api_tokens
            await self._add_tokens(self.service_tokens, kind='service')
        jupyterhub/app.py:2182: in _add_tokens
            obj.new_api_token(
        jupyterhub/orm.py:424: in new_api_token
            return APIToken.new(token=token, service=self, **kwargs)
        jupyterhub/orm.py:699: in new
            cls.check_token(db, token)

    This test also makes _add_tokens receive a token_dict that is buggy:

        {"": "external_2"}

    It turned out that whatever passes token_dict to _add_tokens failed to
    ignore services' api_tokens that were None, and instead passed them as
    blank strings.

    It turned out that init_api_tokens was passing self.service_tokens, and that
    self.service_tokens had been populated with blank string tokens for external
    services registered with JupyterHub.
```
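
To illustrate the fix described in the summary above, here is a minimal sketch (the function name and structure are illustrative, not JupyterHub's actual code) of skipping external services that have no api_token instead of registering a blank-string token:

```python
def collect_service_tokens(services):
    """Build the api_token -> service name mapping, skipping services
    that have no token (illustrative sketch, not the actual hub code)."""
    tokens = {}
    for service in services:
        token = service.get("api_token")
        if not token:
            # External services registered only for a proxy route have
            # api_token=None; they must not contribute a blank token.
            continue
        tokens[token] = service["name"]
    return tokens


# An external service like the grafana example above yields no token entry:
print(collect_service_tokens([{"name": "grafana", "url": "http://grafana"}]))  # {}
```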

Signed-off-by: Erik Sundell <erik.i.sundell@gmail.com>
2021-07-15 10:16:18 +02:00
passer
320e1924a7 Backport PR #3521: Fix contributor documentation's link
Clicking the contributor documentation's link [https://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html](https://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html) results in an error.

This link needs to be replaced with [https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html](https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html)

Signed-off-by: Erik Sundell <erik.i.sundell@gmail.com>
2021-07-15 10:16:16 +02:00
Min RK
2c90715c8d Backport PR #3510: bump autodoc-traits
for sphinx compatibility fix, to get docs building again

Signed-off-by: Erik Sundell <erik.i.sundell@gmail.com>
2021-07-15 10:16:13 +02:00
David Brochart
c99bb32e12 Backport PR #3494: Fix typo
Signed-off-by: Erik Sundell <erik.i.sundell@gmail.com>
2021-07-15 10:16:11 +02:00
Igor Beliakov
fee4ee23c0 Backport PR #3484: Bug: save_bearer_token (provider.py) passes a float value to the expires_at field (int)
**Environment**

* image: k8s-hub (`jupyterhub/k8s-hub:0.11.1`);
* `authenticator_class: dummy`;
* db: cockroachdb (`sqlalchemy-cockroachdb`).

**Description:**

`save_bearer_token` method (`provider.py`) passes a float value to the `expires_at` field (int).

A user can create a notebook and it gets scheduled successfully, but once the pod is up and ready, the user is unable to enter the notebook because JupyterHub cannot save a token. In the logs, we can see the following:

```
[I 2021-05-29 14:45:04.302 JupyterHub log:181] 302 GET /hub/api/oauth2/authorize?client_id=jupyterhub-user-user2&redirect_uri=%2Fuser%2Fuser2%2Foauth_callback&response_type=code&state=[secret] -> /user/user2/oauth_callback?code=[secret]&state=[secret] (user2 40.113.125.116) 73.98ms
[E 2021-05-29 14:45:04.424 JupyterHub web:1789] Uncaught exception POST /hub/api/oauth2/token (10.42.80.10)
    HTTPServerRequest(protocol='http', host='hub:8081', method='POST', uri='/hub/api/oauth2/token', version='HTTP/1.1', remote_ip='10.42.80.10')
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/tornado/web.py", line 1702, in _execute
        result = method(*self.path_args, **self.path_kwargs)
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/apihandlers/auth.py", line 324, in post
        headers, body, status = self.oauth_provider.create_token_response(
      File "/usr/local/lib/python3.8/dist-packages/oauthlib/oauth2/rfc6749/endpoints/base.py", line 116, in wrapper
        return f(endpoint, uri, *args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/oauthlib/oauth2/rfc6749/endpoints/token.py", line 118, in create_token_response
        return grant_type_handler.create_token_response(
      File "/usr/local/lib/python3.8/dist-packages/oauthlib/oauth2/rfc6749/grant_types/authorization_code.py", line 313, in create_token_response
        self.request_validator.save_token(token, request)
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/oauth/provider.py", line 281, in save_token
        return self.save_bearer_token(token, request, *args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/oauth/provider.py", line 354, in save_bearer_token
        self.db.commit()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/session.py", line 1042, in commit
        self.transaction.commit()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/session.py", line 504, in commit
        self._prepare_impl()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/session.py", line 483, in _prepare_impl
        self.session.flush()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/session.py", line 2536, in flush
        self._flush(objects)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/session.py", line 2678, in _flush
        transaction.rollback(_capture_exception=True)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
        compat.raise_(
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/compat.py", line 182, in raise_
        raise exception
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/session.py", line 2638, in _flush
        flush_context.execute()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/unitofwork.py", line 422, in execute
        rec.execute(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/unitofwork.py", line 586, in execute
        persistence.save_obj(
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/persistence.py", line 239, in save_obj
        _emit_insert_statements(
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/persistence.py", line 1135, in _emit_insert_statements
        result = cached_connections[connection].execute(
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 1011, in execute
        return meth(self, multiparams, params)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
        return connection._execute_clauseelement(self, multiparams, params)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 1124, in _execute_clauseelement
        ret = self._execute_context(
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context
        self._handle_dbapi_exception(
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception
        util.raise_(
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/compat.py", line 182, in raise_
        raise exception
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
        self.dialect.do_execute(
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 593, in do_execute
        cursor.execute(statement, parameters)
    sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DatatypeMismatch) value type decimal doesn't match type int of column "expires_at"
    HINT:  you will need to rewrite or cast the expression

    [SQL: INSERT INTO oauth_access_tokens (client_id, grant_type, expires_at, refresh_token, refresh_expires_at, user_id, session_id, hashed, prefix, created, last_activity) VALUES (%(client_id)s, %(grant_type)s, %(expires_at)s, %(refresh_token)s, %(refresh_expires_at)s, %(user_id)s, %(session_id)s, %(hashed)s, %(prefix)s, %(created)s, %(last_activity)s) RETURNING oauth_access_tokens.id]
    [parameters: {'client_id': 'jupyterhub-user-user2', 'grant_type': 'authorization_code', 'expires_at': 1622303104.418992, 'refresh_token': 'FVJ8S4is0367LlEMnxIiEIoTOeoxhf', 'refresh_expires_at': None, 'user_id': 662636890939424770, 'session_id': '4e041a2bfcb34a34a00033a281bc1236', 'hashed': 'sha512:1:3b18deae37fbf50a:03df035736960af14e19196e1d13fd74f55c21f17405119f80e75817ff37c7567fab089a3d40b97a57f94b54065ee56f7260895352516b9facb989d656f05be8', 'prefix': 't11z', 'created': datetime.datetime(2021, 5, 29, 14, 45, 4, 421305), 'last_activity': None}]
    (Background on this error at: http://sqlalche.me/e/13/f405)

[W 2021-05-29 14:45:04.430 JupyterHub base:110] Rolling back session due to database error (psycopg2.errors.DatatypeMismatch) value type decimal doesn't match type int of column "expires_at"
    HINT:  you will need to rewrite or cast the expression

    [SQL: INSERT INTO oauth_access_tokens (client_id, grant_type, expires_at, refresh_token, refresh_expires_at, user_id, session_id, hashed, prefix, created, last_activity) VALUES (%(client_id)s, %(grant_type)s, %(expires_at)s, %(refresh_token)s, %(refresh_expires_at)s, %(user_id)s, %(session_id)s, %(hashed)s, %(prefix)s, %(created)s, %(last_activity)s) RETURNING oauth_access_tokens.id]
    [parameters: {'client_id': 'jupyterhub-user-user2', 'grant_type': 'authorization_code', 'expires_at': 1622303104.418992, 'refresh_token': 'FVJ8S4is0367LlEMnxIiEIoTOeoxhf', 'refresh_expires_at': None, 'user_id': 662636890939424770, 'session_id': '4e041a2bfcb34a34a00033a281bc1236', 'hashed': 'sha512:1:3b18deae37fbf50a:03df035736960af14e19196e1d13fd74f55c21f17405119f80e75817ff37c7567fab089a3d40b97a57f94b54065ee56f7260895352516b9facb989d656f05be8', 'prefix': 't11z', 'created': datetime.datetime(2021, 5, 29, 14, 45, 4, 421305), 'last_activity': None}]
    (Background on this error at: http://sqlalche.me/e/13/f405)
[E 2021-05-29 14:45:04.443 JupyterHub log:173] {
      "Host": "hub:8081",
      "User-Agent": "python-requests/2.25.1",
      "Accept-Encoding": "gzip, deflate",
      "Accept": "*/*",
      "Connection": "keep-alive",
      "Content-Type": "application/x-www-form-urlencoded",
      "Authorization": "token [secret]",
      "Content-Length": "190"
    }
[E 2021-05-29 14:45:04.444 JupyterHub log:181] 500 POST /hub/api/oauth2/token (user2 10.42.80.10) 63.28ms
```

Everything went well when I changed:
`expires_at=orm.OAuthAccessToken.now() + token['expires_in'],`
to:
`expires_at=int(orm.OAuthAccessToken.now() + token['expires_in']),`
That's what this PR is about.
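
For context on why the value was a float in the first place: epoch timestamps in Python are floats, so adding an integer `expires_in` still yields a float. A tiny illustrative sketch (assuming `now()` behaves like `time.time()`, which matches the float value seen in the logged parameters above):

```python
import time

def compute_expires_at(expires_in: int) -> int:
    # time.time() returns a float, so the sum must be coerced to int
    # before it can be stored in an integer "expires_at" column.
    return int(time.time() + expires_in)

print(compute_expires_at(3600))
```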

As a sidenote, `black` formatter adjusted the `orm_client = orm.OAuthClient(identifier=client_id,)` line, but I guess it should be fine. Please, feel free to revert this change if needed.

(Upd): added the missing `int` conversion.

Signed-off-by: Erik Sundell <erik.i.sundell@gmail.com>
2021-07-15 10:16:08 +02:00
Min RK
2c8b29b6bb Merge pull request #3467 from minrk/1.4.x
Prepare for 1.4.1
2021-05-12 17:16:58 +02:00
Min RK
a53178a92b Backport PR #3462: prepare to rename default branch to main
- update references to default branch name in docs, workflows
- use HEAD in github urls, which always works regardless of default branch name
- fix petstore URLs since the old petstore links seem to have stopped working

to merge, in order:

- [x] approve this PR
- [x] rename the default branch to main in settings
- [x] merge this PR

Related tangent: I've been using [this git default-branch](https://github.com/minrk/git-stuff/blob/main/bin/git-default-branch) to help with my aliases and friends working with repos with different branch names.

Signed-off-by: Min RK <benjaminrk@gmail.com>
2021-05-12 15:51:06 +02:00
Min RK
e032cda638 release 1.4.1 2021-05-12 15:45:27 +02:00
Min RK
40820b3489 changelog for 1.4.1 2021-05-12 15:42:21 +02:00
Min RK
80f4454371 Backport PR #3457: ci: fix typo in environment variable
When I set up the release workflow I made a typo in an environment variable, so signing in to Docker Hub now fails.

Observed in https://github.com/jupyterhub/jupyterhub/pull/3456#issuecomment-832923798.

Signed-off-by: Min RK <benjaminrk@gmail.com>
2021-05-12 15:36:39 +02:00
Erik Sundell
4d0005b0b7 Backport PR #3454: define Spawner.delete_forever on base Spawner
...where I thought it already was! Instead of on the test class.

and fix the logic for when it is called a bit:

- call on *all* Spawners, not just the default
- call on named server deletion when remove=True

closes #3451, finishes #3337

Signed-off-by: Min RK <benjaminrk@gmail.com>
2021-05-12 15:36:36 +02:00
Erik Sundell
86761ff0d4 Backport PR #3456: avoid re-using asyncio.Locks across event loops
should never occur in real applications where only one loop is run, but may occur in tests if the Proxy object lives longer than the loop that is running when it's created (imported?).

I *suspect* this is the source of our intermittent test failures with:

> got Future <Future pending> attached to a different loop

But since they are intermittent, it's hard to be sure, even if this PR passes.

The issue: we were allocating an asyncio.Lock(), which in turn grabs a handle on the current event loop, at *method definition time* in the decorator, instead of *call time*.

The solution: allocate the lock at call time *and* double-check to ensure we never use a lock across event loops by storing the locks per-loop.

This should change nothing for 'real' hub instances, where only one loop is ever running, only tests where we start and stop loops a bunch.
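
A rough sketch of the per-loop lock idea described above (a simplified stand-in, not the actual decorator in JupyterHub's proxy code):

```python
import asyncio
from functools import wraps

def serialized_per_loop(method):
    """Serialize calls to an async method, creating the asyncio.Lock lazily
    at call time and keeping one lock per running event loop."""
    locks = {}  # event loop -> asyncio.Lock

    @wraps(method)
    async def wrapped(*args, **kwargs):
        loop = asyncio.get_running_loop()
        if loop not in locks:
            # created inside the running loop, so it is bound to this loop
            locks[loop] = asyncio.Lock()
        async with locks[loop]:
            return await method(*args, **kwargs)

    return wrapped
```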

Signed-off-by: Min RK <benjaminrk@gmail.com>
2021-05-12 15:36:34 +02:00
Min RK
32a2a3031c Backport PR #3437: patch base handlers from both jupyter_server and notebook
and clarify warning when a base handler isn't patched that auth is still being applied

- reorganize patch steps into functions for easier re-use
- patch notebook and jupyter_server handlers if they are already imported
- run patch after initialize to ensure extensions have done their importing before we check what's present
- apply class-level patch even when instance-level patch is happening to avoid triggering patch on every request

This change isn't as big as it looks, because it's mostly moving some re-used code to a couple of functions.

closes https://github.com/jupyter-server/jupyter_server/issues/488

Signed-off-by: Min RK <benjaminrk@gmail.com>
2021-05-12 15:36:31 +02:00
Min RK
16352496da Backport PR #3452: Fix documentation
Signed-off-by: Min RK <benjaminrk@gmail.com>
2021-05-12 15:36:28 +02:00
Min RK
2259f57772 Backport PR #3436: ci: github workflow security, pin action to sha etc
Pin references to github actions we rely on in workflows with jobs that reference GitHub secrets that could get exposed.

Signed-off-by: Min RK <benjaminrk@gmail.com>
2021-05-12 15:36:26 +02:00
Min RK
574d343881 release 1.4.0 2021-04-19 13:41:28 +02:00
Yuvi Panda
c205385023 Merge pull request #3424 from minrk/changelog-1.4
more changelog for 1.4
2021-04-19 17:06:23 +05:30
Min RK
9e0ac1594c more changelog for 1.4 2021-04-19 13:13:29 +02:00
Min RK
2fd434f511 Merge pull request #3430 from yuvipanda/additional_routes
Support Proxy.extra_routes
2021-04-19 13:12:11 +02:00
YuviPanda
af39f39082 Mark extra proxy routes properly 2021-04-19 16:27:05 +05:30
YuviPanda
ab751bda5c Accommodate host-based routing 2021-04-19 16:26:09 +05:30
YuviPanda
f84078627f Add a little more documentation to extra_routes 2021-04-19 16:16:03 +05:30
YuviPanda
3ec3dc5195 Support Proxy.extra_routes
When the hub is running in API-only mode, it's
very useful to have the proxy know where to send
URLs that would normally be serviced by the hub.
For example, / might go to a service that renders
a home page, while `/user` might go to a service that
tells the user their server is dead.

Right now, this happens 'out of band', with a process
that has to talk to the proxy directly. This is a
bit messy - the routes need to be re-added when the
proxy restarts, the hub might try to remove them, etc.
By adding support for this in the hub itself, all
this complexity is now removed and the hub continues
to own all the routes in the proxy
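
As a sketch of how such a deployment might be configured (the exact value format for `extra_routes` is defined in this PR; the target URLs below are made-up service addresses):

```python
# jupyterhub_config.py (illustrative)
c.Proxy.extra_routes = {
    # custom landing page instead of routing / to the hub
    "/": "http://homepage-service:8000",
    # service that tells users their server is stopped
    "/user/": "http://placeholder-service:8000",
}
```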
2021-04-19 16:14:28 +05:30
Simon Li
73102e7aeb Merge pull request #3429 from minrk/push-auth
typos in onbuild, demo images for push
2021-04-19 09:19:57 +01:00
Min RK
b039e2985b typos in onbuild, demo images for push
it's jupyterhub/jupyterhub-onbuild not jupyterthub-onbuild/jupyterhub
2021-04-19 09:09:49 +02:00
Min RK
6d7863d56a Merge pull request #3428 from Carreau/doc-1
DOC: Conform to numpydoc.
2021-04-19 08:56:42 +02:00
Min RK
aba32e7200 Merge pull request #3425 from manics/docker-arm64
Disable docker jupyterhub-demo arm64 build
2021-04-19 08:33:45 +02:00
Matthias Bussonnier
a71823c5ab DOC: Conform to numpydoc.
Minor syntax update
2021-04-18 21:23:03 -07:00
Simon Li
fcf9122519 jupyterhub/action-major-minor-tag-calculator@v1
Co-authored-by: Erik Sundell <erik.i.sundell@gmail.com>
2021-04-15 20:35:21 +01:00
Simon Li
6c3fc41176 jupyterhub/action-major-minor-tag-calculator@main 2021-04-15 16:14:51 +01:00
Simon Li
0bdb1bac4d GHW docker: use default tag for PRs
This allows testing with a localhost:5000 registry
2021-04-15 11:11:12 +01:00
Simon Li
35c76221fe Disable linux/arm64 jupyterhub-demo build
Installing notebook requires additional compilation dependencies
2021-04-15 10:21:32 +01:00
Simon Li
ffb092721c GHW docker: push to localhost if not releasing 2021-04-15 10:19:06 +01:00
Min RK
8758b3af27 Merge pull request #3422 from olifre/login-page-customization
login-template: Add a "login_container" block inside the div-container.
2021-04-15 09:31:23 +02:00
Min RK
5202cdff8c Merge pull request #3421 from manics/docker-arm64
Docker arm64 builds
2021-04-15 09:31:04 +02:00
Simon Li
ce0cb95282 docker release: fix build-arg BASE_IMAGE tag 2021-04-14 23:17:16 +01:00
Simon Li
ee421f6427 GHW: Remove unnecessary echo, add docker test timeout 2021-04-14 22:47:17 +01:00
Simon Li
268da21bbf GH workflow docker: 'input device is not a TTY' 2021-04-14 22:44:34 +01:00
Simon Li
4ad5f61bc7 Bump onbuild/README.md example version 2021-04-14 22:28:27 +01:00
Simon Li
3df3850b3a Remove Docker hub automated build hooks 2021-04-14 22:28:07 +01:00
Simon Li
50733efa1b Move circleci docker test to gh workflow 2021-04-14 22:27:28 +01:00
Simon Li
98230ee770 docker release: jupyterhub-onbuild jupyterhub-demo 2021-04-14 22:26:25 +01:00
Simon Li
37f250b4d7 Push some branches, use variable to determine whether to push 2021-04-14 22:26:21 +01:00
Oliver Freyermuth
869661bf25 login-template: Add a "login_container" block inside the div-container.
This allows for more flexible customization of the login page,
since the login form can be reused in an extending template
via the new block.

This was not cleanly possible before since the main container
was part of the very same block as the form code.

fixes #3414
2021-04-14 20:11:04 +02:00
Simon Li
009fa955ed Add Docker multi-arch publish 2021-04-13 15:35:03 +01:00
Simon Li
7c8f7e9fcb Don't pin Dockerfile parent hash 2021-04-13 15:34:14 +01:00
Yuvi Panda
14539c4e0f Merge pull request #3373 from minrk/only-hub-route
allow the hub to not be the default route
2021-04-13 17:12:21 +05:30
Min RK
a11a292cd9 test custom hub routespecs 2021-04-13 13:16:59 +02:00
Min RK
5890064191 duplicate metrics, health handlers on /api/
these should probably have been on `/api/` all along,
but must be on /api/ for api-only hub routing
2021-04-13 13:16:59 +02:00
Min RK
1f30e693ad allow overriding JupyterHub.hub_routespec
Rare, but can make sense for api-only deployments

allows easier override of the default route,
e.g. for mybinder.org custom error pages
2021-04-13 13:16:59 +02:00
Min RK
32976f3d42 Merge pull request #3403 from kafonek/fastapi-example
Fastapi example
2021-04-13 12:58:43 +02:00
Min RK
30bc23f102 Merge pull request #3418 from jiajunjie/log-exception
Log the exception raised in Spawner.post_stop_hook instead of raising it
2021-04-13 12:56:38 +02:00
Jia Junjie
786c7039d6 Log the exception raised in Spawner.post_stop_hook instead of raising it 2021-04-13 08:01:59 +00:00
Erik Sundell
19c3b02155 Merge pull request #3417 from manics/fix-hard-way-link
Fix link to jupyterhub/jupyterhub-the-hard-way
2021-04-13 07:49:33 +02:00
Simon Li
1a80524772 Fix link to jupyterhub/jupyterhub-the-hard-way 2021-04-12 21:49:59 +01:00
Erik Sundell
699a1cc01b Merge pull request #3415 from minrk/changelog-1.4
Changelog for 1.4
2021-04-12 17:26:33 +02:00
Min RK
29ae04c921 Changelog for 1.4 2021-04-12 16:57:26 +02:00
Matt Kafonek
62a1652cc9 Add files via upload 2021-04-11 21:41:45 -04:00
Kafonek, Matt
290e031034 updating gif 2021-04-11 21:40:11 -04:00
Kafonek, Matt
7642302d17 docs 2021-04-09 15:01:59 +00:00
Kafonek, Matt
aebf833530 Hit /user instead of /authorizations/token/<token> 2021-04-09 15:01:48 +00:00
Kafonek, Matt
86b51804c1 comment update 2021-04-09 15:01:22 +00:00
Kafonek, Matt
aa12afa34d User groups is List[str] not List[Group] 2021-04-09 15:01:03 +00:00
Yuvi Panda
2ff6d2b36c Merge pull request #3411 from minrk/oauth-token-expiry-config
make oauth token expiry configurable
2021-04-09 18:14:56 +05:30
Min RK
e5f7aa6c2a default oauth token expiry to cookie_max_age_days
so changing cookie age changes oauth token expiry,
since these are what are stored in those cookies anyway,
it makes sense for them to expire at the same time
2021-04-09 14:35:09 +02:00
Min RK
e3811edd87 make oauth token expiry configurable
and default to 1 day instead of 1 hour
2021-04-09 14:06:38 +02:00
Min RK
55cd9d806b Merge pull request #3407 from yuvipanda/upsert-oauth-clients
Don't delete all oauth clients on startup
2021-04-09 09:26:54 +02:00
YuviPanda
96789f5945 Add oauth client to orm only when it's new
- Existing orm_client objects are updated automatically
  in the session.
- Add some logging
- Remove TODO about safety in doing updates without upsert
  in JupyterHub, per @minrk:
  https://github.com/jupyterhub/jupyterhub/pull/3407#discussion_r610390785
2021-04-09 12:50:02 +05:30
kafonek
81d481a110 pre-commit run -a 2021-04-08 09:28:46 -04:00
YuviPanda
054c7f276e Don't delete all oauth clients on startup
When an oauth client changes, we delete all the tokens
associated with that client. This invalidates all user sessions
for that oauth client, and the oauth client's users will need to
go through the OAuth workflow again after the cache period (specified
by cache_max_age in HubAuth, 5min by default). This is fine in theory,
since oauth client information doesn't change frequently.

However, we were deleting and re-adding all oauth clients each time
the hub started! This was unnecessary, since the data was going to
be the same 99% of the time. The rest of the time, we should just update,
preventing unnecessary churn.

This PR does that.

Ref https://github.com/yuvipanda/jupyterhub-configurator/issues/2
Ref https://github.com/berkeley-dsep-infra/datahub/issues/2284
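
A self-contained sketch of the upsert idea using plain SQLAlchemy with 1.4-style imports (the model and session here are stand-ins for JupyterHub's own orm objects):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class OAuthClient(Base):
    __tablename__ = "oauth_clients"
    id = Column(Integer, primary_key=True)
    identifier = Column(String, unique=True)
    redirect_uri = Column(String)

def upsert_client(db, identifier, redirect_uri):
    """Add the client if it is new, otherwise update it in place.

    Updating instead of delete + re-add keeps the tokens associated with
    the client, so user sessions survive a hub restart.
    """
    client = db.query(OAuthClient).filter_by(identifier=identifier).one_or_none()
    if client is None:
        db.add(OAuthClient(identifier=identifier, redirect_uri=redirect_uri))
    else:
        client.redirect_uri = redirect_uri
    db.commit()

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as db:
    upsert_client(db, "jupyterhub-user-alice", "/user/alice/oauth_callback")
    upsert_client(db, "jupyterhub-user-alice", "/user/alice/oauth_callback")  # update, no churn
```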
2021-04-08 17:55:28 +05:30
Matt Kafonek
1220673e61 Add files via upload 2021-04-07 14:34:10 -04:00
Kafonek, Matt
815274e966 please to be deleted old gif. 2021-04-07 18:33:32 +00:00
Kafonek, Matt
f1503b5a21 trying to get this new gif up
Merge branch 'fastapi-example' of github.com:kafonek/jupyterhub into fastapi-example
2021-04-07 18:31:30 +00:00
Kafonek, Matt
4dcdf84d32 remove old gif 2021-04-07 18:27:40 +00:00
Matt Kafonek
dda0b611e2 Add files via upload 2021-04-07 14:26:09 -04:00
Kafonek, Matt
a23bfd1769 raise warning if PUBLIC_HOST is not set 2021-04-07 18:18:02 +00:00
Kafonek, Matt
a55ccce64e Use Pydantic models 2021-04-07 18:17:25 +00:00
Kafonek, Matt
42c5030b0e Add models, remove cookie auth
get_current_user returns a User model instead of a dict.
using cookies for Hub auth is deprecated, so removed
that option and refactored get_current_user
2021-04-07 18:15:48 +00:00
Kafonek, Matt
be3df52b4f Add Pydantic models for Hub objects and exceptions 2021-04-07 18:15:26 +00:00
Kafonek, Matt
0ca5eb4997 updated docs 2021-04-07 18:15:10 +00:00
Yuvi Panda
9eeb84158e Merge pull request #3401 from maxshowarth/master
Added Azure AD as a supported authenticator.
2021-04-07 17:37:32 +05:30
Kafonek, Matt
37c2be778c pre-commit formatting 2021-04-07 02:14:54 +00:00
Kafonek, Matt
dc1b2c810d review 2021-04-07 02:13:12 +00:00
Kafonek, Matt
88c7f188e0 Merge branch 'fastapi-example' of github.com:kafonek/jupyterhub into fastapi-example 2021-04-07 02:06:45 +00:00
Kafonek, Matt
4181cc7065 add gif 2021-04-07 02:05:07 +00:00
Matt Kafonek
69e3fc2016 demo.gif 2021-04-06 22:00:42 -04:00
Kafonek, Matt
56269f0226 fastapi service example 2021-04-07 01:55:43 +00:00
Max
e446eff311 Added Azure AD as a supported authenticator. 2021-04-06 09:48:37 -07:00
Max
00042de04c remove 2021-04-06 09:41:29 -07:00
Max
82e0af763d Added AzureAD to list of supported authenticators. 2021-04-06 09:40:07 -07:00
Tim Head
c5bfd28005 Merge pull request #3394 from yuvipanda/secreter-secret 2021-03-31 13:47:07 +02:00
YuviPanda
0ffa5715fd Fix formatting to make pre-commit happy 2021-03-30 12:59:52 +05:30
Yuvi Panda
139312149e Merge pull request #3392 from minrk/deprecated-tablenames 2021-03-29 17:09:23 +05:30
Yuvi Panda
29740b0af6 Merge branch 'master' into secreter-secret 2021-03-29 17:08:17 +05:30
YuviPanda
9f6467be05 Use 'secrets' module to generate secrets
Python 3.6+ has this
2021-03-29 17:07:03 +05:30
Min RK
caae99aa09 avoid deprecated engine.table_names
deprecated in sqlalchemy 1.4

use recommended inspect(engine).get_table_names() instead
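
The replacement in a nutshell (a minimal runnable illustration):

```python
from sqlalchemy import create_engine, inspect

engine = create_engine("sqlite://")

# engine.table_names() is deprecated in SQLAlchemy 1.4;
# the recommended replacement is:
print(inspect(engine).get_table_names())
```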
2021-03-26 12:54:40 +01:00
Min RK
8f2b14429f Merge pull request #3386 from minrk/bump-alpine
alpine dockerfile: avoid compilation by getting some deps from apk
2021-03-23 09:28:48 +01:00
Min RK
af0d81436d alpine dockerfile: avoid compilation by getting some deps from apk
cryptography is the big one, which needs rust and is a huge pain
2021-03-22 12:17:47 +01:00
Min RK
477ee23ad3 Merge pull request #3383 from IvanaH8/fix-sqlalchemy-interfaces-deprecation 2021-03-18 14:25:01 +01:00
IvanaH8
27bcac5e8b Fix sqlalchemy.interfaces.PoolListener deprecation for testing older JupyterHub versions 2021-03-18 14:13:10 +01:00
Erik Sundell
6535cc6bab Merge pull request #3377 from minrk/count-redirects-differently
always start redirect count at 1 when redirecting /hub/user/:name -> /user/:name
2021-03-09 14:04:16 +01:00
Min RK
8173bbbf75 always start redirect count at 1 when redirecting /hub/user/:name -> /user/:name
/hub/user/:name is now only reasonably visited as a result of redirect from /user/:name
2021-03-09 09:57:04 +01:00
Min RK
2146eef150 Merge pull request #3375 from manics/remove-hard-way
Remove the hard way guide
2021-03-08 13:28:34 +01:00
Simon Li
97b7ccbee4 Mark installation-guide-hard orphan 2021-03-05 19:13:55 +00:00
Simon Li
8eb98409d5 Remove installation-guide-hard 2021-03-05 19:08:26 +00:00
Min RK
a4390a1f4f Merge pull request #3370 from minrk/raise-failed-tokens
Always raise on failed token creation
2021-03-05 11:02:03 +01:00
Min RK
f42f7dd01f raise on failed token creation
the logic was there but at the wrong indentation level
causing it to only raise sometimes
2021-03-02 14:32:33 +01:00
Min RK
0ca2ef68f0 Merge pull request #3326 from dtaniwaki/docker-host
Allow to set spawner-specific hub connect URL
2021-02-26 12:57:22 +01:00
Min RK
c3ca924ba8 Merge pull request #3362 from consideRatio/pr/pre-commit-maintenance
Update pre-commit hooks versions
2021-02-17 13:11:40 +00:00
Erik Sundell
0155e6dc34 Run pre-commit requirements-txt-fixer 2021-02-12 19:24:22 +01:00
Erik Sundell
727f9a0d49 Update pre-commit hook versions 2021-02-12 19:23:46 +01:00
Erik Sundell
d31af27888 Merge pull request #3360 from minrk/prettier
add (and run) prettier pre-commit hook
2021-02-12 19:21:29 +01:00
Min RK
9331dd13da run pre-commit (prettier) 2021-02-12 15:25:58 +01:00
Min RK
3c7203741f add prettier pre-commit hook
will autoformat md, js, yaml, etc.
2021-02-12 15:22:26 +01:00
Erik Sundell
4e79360567 Merge pull request #3359 from minrk/move-custom-html
move get_custom_html to base Authenticator class
2021-02-11 22:41:17 +01:00
Min RK
529273d105 move get_custom_html to base Authenticator class
so it's always available

it was accidentally added to PAM instead of the base
2021-02-11 21:42:02 +01:00
Min RK
2e198396c1 Merge pull request #3347 from minrk/mixin-get-user
make_singleuser_app: patch-in HubAuthenticatedHandler at lower priority
2021-02-04 13:41:39 +00:00
Daisuke Taniwaki
259c7512b8 Fix a lint issue 2021-02-02 00:30:59 +09:00
Daisuke Taniwaki
59b29f4c42 Refactor the code 2021-02-02 00:27:34 +09:00
Daisuke Taniwaki
bf3615aa96 Fix path 2021-02-02 00:11:43 +09:00
Daisuke Taniwaki
06a505f6df Fix comment 2021-02-02 00:09:25 +09:00
Daisuke Taniwaki
c8d6c6aaa8 Fix spawner hub connect URL 2021-02-02 00:04:42 +09:00
Daisuke Taniwaki
cc2859a826 Merge remote-tracking branch 'upstream/master' into docker-host 2021-02-01 22:35:46 +09:00
Daisuke Taniwaki
26ccf6fd57 Fix hub_connect_url 2021-02-01 22:29:43 +09:00
Min RK
f220bbca84 Merge pull request #3315 from dtaniwaki/improve-handler
Make Authenticator Custom HTML Flexible
2021-02-01 11:42:27 +00:00
Min RK
4fb3f02870 Merge pull request #3349 from minrk/pr-artifacts
publish release outputs as artifacts
2021-02-01 11:20:03 +00:00
Min RK
471d1f0a2f simplify and clarify override of methods that could be defined on BaseHandler 2021-02-01 11:40:11 +01:00
Min RK
1b12107c54 specify that mock.patch is temporary
Co-authored-by: Erik Sundell <erik.i.sundell@gmail.com>
2021-02-01 07:05:24 +00:00
Min RK
b3a4adcbdd add link to action
Co-authored-by: Erik Sundell <erik.i.sundell@gmail.com>
2021-02-01 07:03:31 +00:00
Min RK
12c69c6a94 publish release outputs as artifacts
makes testing a PR even easier since we build an sdist and wheel for every PR and push

since artifacts are double-archived, it's not quite as simple as giving a URL to install from,
but this at least makes it available. To use:

- download and unpack zip
- `pip install path/to/whl`
2021-01-29 14:32:18 +01:00
Min RK
d3147f3fb7 make_singleuser_app: patch-in HubAuthenticatedHandler at lower priority
apply patch directly to BaseHandler instead of each handler instance
so that overrides can still take effect (i.e. APIHandler raising 403 instead of redirecting)
2021-01-29 14:07:05 +01:00
Daisuke Taniwaki
47265786e3 Add versionadded 2021-01-27 20:49:47 +09:00
Min RK
1d9795c577 Merge pull request #3345 from stv0g/service-template
Allow customization of service menu via templates
2021-01-27 11:39:55 +00:00
Steffen Vogel
e35b84b419 convert tabs to whitespaces 2021-01-26 17:42:35 +01:00
Steffen Vogel
5a57b03b61 allow customization of service menu via templates 2021-01-26 17:39:48 +01:00
Min RK
e526f36b81 Merge pull request #3344 from minrk/no-auth-header-create
[TST] Do not implicitly create users in auth_header
2021-01-26 13:42:32 +00:00
Min RK
d289cd1e02 Merge pull request #3343 from consideRatio/pr/cookie-secret-as-hex
Allow cookie_secret to be set to a hexadecimal string
2021-01-26 12:11:10 +00:00
Erik Sundell
4c3a32b51f Apply suggestions from code review
Co-authored-by: Min RK <benjaminrk@gmail.com>
2021-01-26 12:44:17 +01:00
Min RK
6c65624942 [TST] Do not implicitly create users in auth_header
implicit user creation results in surprising behavior when the user shouldn't exist
2021-01-26 11:54:47 +01:00
Erik Sundell
cba22751b4 Test setting cookie_secret to a hexadecimal string 2021-01-25 22:29:48 +01:00
Erik Sundell
c5d0265984 Allow cookie_secret to be a hexadecimal string
With this, we coerce hexadecimal strings into Bytes. This can be helpful
as YAML/JSON cannot represent raw bytes.
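
The coercion amounts to something like this sketch (illustrative; the real logic lives in JupyterHub's cookie_secret configuration handling):

```python
def coerce_cookie_secret(value):
    """Accept raw bytes or a hex-encoded string (e.g. from YAML/JSON config)."""
    if isinstance(value, bytes):
        return value
    return bytes.fromhex(value)  # raises ValueError if not valid hex

print(coerce_cookie_secret("deadbeef0102"))  # b'\xde\xad\xbe\xef\x01\x02'
```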
2021-01-25 22:28:50 +01:00
Daisuke Taniwaki
fc772e1c39 Fix a lint issue 2021-01-25 23:33:17 +09:00
Daisuke Taniwaki
d70157e72a Fix the spawner test 2021-01-25 23:30:11 +09:00
Min RK
91359bcaa7 Merge pull request #3337 from nsshah1288/feature/shahn3_pvcDeletion
Add Spawner.delete_forever
2021-01-25 13:59:54 +00:00
Min RK
22fc580275 Merge pull request #3341 from dtaniwaki/clear-cookie
Clear tornado xsrf cookie on logout
2021-01-25 13:58:36 +00:00
Daisuke Taniwaki
2f304bffcc Clear tornado cookie on logout 2021-01-24 20:21:17 +09:00
SHAHN3
162076c5dd added docstring 2021-01-23 15:58:32 -05:00
SHAHN3
9bd97db90b added try except, also changed to await and async 2021-01-21 16:21:18 -05:00
Daisuke Taniwaki
3a25b32ce6 Update Spawner.hub_connect_url help message 2021-01-21 10:32:37 +09:00
SHAHN3
8fcc4b48a5 removed await 2021-01-20 14:44:03 -05:00
SHAHN3
289dee5996 new method delete_forever 2021-01-20 14:34:32 -05:00
Min RK
b1b7954e93 Merge pull request #3338 from minrk/log-slow-responses
always log slow requests at least at info-level
2021-01-20 09:18:41 +00:00
Erik Sundell
35a55c6cbf Merge pull request #3339 from minrk/alembic-min
specify minimum alembic 1.4
2021-01-20 09:50:24 +01:00
Min RK
cd06f3fb12 specify minimum alembic
this gets us *older* alembic in the old-dependencies test

since alembic 1.5 doesn't support sqlalchemy 1.1
2021-01-20 09:34:42 +01:00
Min RK
796d22d0d8 Merge pull request #3335 from rcthomas/pagination-named-servers
Fix pagination with named servers
2021-01-20 08:29:44 +00:00
Min RK
be4357ad7a Merge pull request #3332 from jiajunjie/fix-help
Fix the help related to the proxy check
2021-01-20 08:27:25 +00:00
Min RK
202d6f93d4 always log slow requests at least at info-level
if health or static responses are taking longer than 1s, it's useful to know
2021-01-20 09:23:26 +01:00
SHAHN3
8b9b69ce22 trying to mock 2021-01-19 17:40:59 -05:00
SHAHN3
c40b3a4ad6 reformatted code 2021-01-19 16:32:59 -05:00
SHAHN3
c7f1b89f6c delete user's PVC when delete user is called 2021-01-19 16:08:33 -05:00
Rollin Thomas
dcff08ae13 Add back outerjoin that made spawner sorts work 2021-01-16 09:15:34 -08:00
Rollin Thomas
b0bf348908 Need to format as subquery 2021-01-15 22:53:12 -08:00
Rollin Thomas
b73eca91ca Fix pagination with named servers 2021-01-15 11:19:57 -08:00
Jia Junjie
3db5eae9a9 Run pre-commit 2021-01-14 20:52:59 +08:00
Min RK
adb5f6ab2a Merge pull request #3333 from trallard/trallard-patch-1
📝 Fix telemetry section
2021-01-14 12:01:24 +01:00
Min RK
2a84353a51 Merge pull request #3329 from Zsailer/docs-jupyter_server
Mention Jupyter Server as optional single-user backend in documentation
2021-01-13 15:04:48 +01:00
Jia Junjie
ca4fb3187f Fix the help related to the proxy check 2021-01-13 21:59:38 +08:00
Tania Allard
8ab25e7c3d 📝 Fix telemetry section 2021-01-13 11:43:05 +00:00
Zsailer
f69ef9f846 add docs describing jupyter_server 2021-01-12 09:11:23 -08:00
Daisuke Taniwaki
ba2608c643 Allow to set spawner-specific hub connect URL 2021-01-08 23:39:05 +09:00
Erik Sundell
c3f5ad8b6d Merge pull request #3325 from andrewisplinghoff/master
Fix mixup in comment regarding the sync parameter
2021-01-08 11:46:37 +01:00
Andre Wisplinghoff
4dbe5490f8 Fix mixup in comment regarding the sync parameter 2021-01-08 11:39:09 +01:00
Erik Sundell
711080616e Merge pull request #3324 from consideRatio/pr/manually-trigger-tests-and-readme-badge
ci: github actions, allow for manual test runs and fix badge in readme
2021-01-08 01:28:27 +01:00
Erik Sundell
8e603e5212 docs: update README.md badge for github actions 2021-01-08 01:16:29 +01:00
Erik Sundell
147167e589 ci: allow tests to be run manually through github UI 2021-01-08 01:16:06 +01:00
Erik Sundell
cebb1f3e22 Merge pull request #3314 from timgates42/bugfix_typo_function
docs: fix simple typo, funciton -> function
2020-12-23 10:24:50 +01:00
Daisuke Taniwaki
0b085a91b6 Fix format issues 2020-12-23 13:50:27 +09:00
Daisuke Taniwaki
ca3ceac4f3 Add comment 2020-12-23 13:42:51 +09:00
Daisuke Taniwaki
c833fae901 Allow to use base URL in custom HTML 2020-12-23 13:39:59 +09:00
Daisuke Taniwaki
8d3a7b704c Render custom html 2020-12-23 13:03:27 +09:00
Tim Gates
1e53fd1f8c docs: fix simple typo, funciton -> function
There is a small typo in jupyterhub/orm.py.

Should read `function` rather than `funciton`.
2020-12-23 11:54:51 +11:00
Erik Sundell
166b00867f Merge pull request #3305 from minrk/github-release
publish releases from github actions
2020-12-11 16:39:42 +01:00
Min RK
7c474396f1 publish releases from github actions 2020-12-11 12:27:34 +01:00
Min RK
f6f6b3afa3 back to dev 2020-12-11 12:08:22 +01:00
Min RK
a91197635a release 1.3.0 2020-12-11 12:07:55 +01:00
Min RK
88706d4c27 final changelog edits for 1.3.0 2020-12-11 12:07:06 +01:00
Min RK
29fac11bfe Merge pull request #3295 from minrk/changelog-1.3
begin changelog for 1.3
2020-12-11 12:02:15 +01:00
Erik Sundell
947ef67184 Merge pull request #3303 from Sangarshanan/patch-1
Remove the extra parenthesis in service.md
2020-12-11 09:39:28 +01:00
sangarshanan
8ede924956 Remove extra paranthesis 2020-12-11 13:15:13 +05:30
sangarshanan
55c2d3648e Add the missing parenthesis in service.md 2020-12-11 01:53:35 +05:30
Min RK
2cf8e48fb5 start changelog for 1.3
I noticed that our jinja async feature is new in 2.9, and matured in 2.11, so explicitly require that
2020-12-09 14:31:10 +01:00
Min RK
ae77038a64 Merge pull request #3293 from minrk/services-whoami
allow services to call /api/user to identify themselves
2020-12-09 13:25:46 +01:00
Min RK
ffed8f67a0 Merge pull request #3294 from minrk/paginate-per-page
fix increasing pagination limits
2020-12-08 10:03:51 +01:00
Erik Sundell
1efd7da6ee Merge pull request #3300 from mxjeff/fixed-doc-services
Fixed idle-culler references.
2020-12-04 11:46:04 +01:00
Geoffroy Youri Berret
6e161d0140 Fixed idle-culler references.
Merge request #3257 fixed #3256 only on getting-started/services-basics.md
There is still a reference to jupyterhub example cull-idle in reference/services.md
2020-12-04 09:28:02 +01:00
Min RK
5f4144cc98 Merge pull request #3298 from coffeebenzene/master
Fix asyncio deprecation asyncio.Task.all_tasks
2020-12-03 11:16:46 +01:00
coffeebenzene
f866bbcf45 Use variable instead of monkey patching asyncio 2020-12-02 19:50:49 +00:00
coffeebenzene
ed6231d3aa Fix asyncio deprecation asyncio.Task.all_tasks 2020-12-02 17:57:28 +00:00
Min RK
9d38259ad7 fix increasing pagination limits
setting per_page in the constructor resolves before the max_per_page limit is updated from config,
preventing max_per_page from being increased beyond the default limit

we already loaded these values anyway in the first instance,
so remove the redundant Pagination object
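
One way to picture the ordering problem: if a value derived from `max_per_page` is frozen at construction time, later config changes to `max_per_page` never take effect. A hedged traitlets sketch (names illustrative, not the hub's actual Pagination class) that defers the clamp instead:

```python
from traitlets import Integer, validate
from traitlets.config import Configurable

class PaginationSketch(Configurable):
    max_per_page = Integer(250).tag(config=True)  # admin-configurable ceiling
    per_page = Integer(100).tag(config=True)

    @validate("per_page")
    def _clamp_per_page(self, proposal):
        # clamp when per_page is set, not in __init__, so a configured
        # max_per_page larger than the default is honored
        return min(proposal.value, self.max_per_page)
```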
2020-12-02 12:52:42 +01:00
Min RK
4b254fe5ed Merge pull request #3243 from agp8x/master
[Metrics] Add prefix to prometheus metrics to group all jupyterhub metrics
2020-12-02 12:22:32 +01:00
Min RK
f8040209b0 allow services to call /api/user to identify themselves 2020-12-02 12:21:25 +01:00
Min RK
e59ee33a6e note versionchanged in metrics module docstring 2020-12-02 11:36:13 +01:00
Min RK
ff15ced3ce Merge pull request #3225 from cbanek/configurable_options_from_form
Allow options_from_form to be configurable
2020-12-02 11:32:24 +01:00
Min RK
75acd6a67b Merge pull request #3264 from tlvu/add-user-agreement-to-login-screen
Add optional user agreement to login screen
2020-12-02 11:31:23 +01:00
Min RK
73ac6207af Merge pull request #3244 from mhwasil/fix-https-redirect-issues
[Docs] Fix https reverse proxy redirect issues
2020-12-02 11:30:09 +01:00
Min RK
e435fe66a5 Merge pull request #3292 from minrk/oldest-metrics
bump oldest-required prometheus-client
2020-12-02 11:27:27 +01:00
Min RK
d7569d6f8e bump oldest-required prometheus-client
oldest-dependency tests caught an error with our base required version
2020-12-02 11:20:30 +01:00
Min RK
ba6c2cf854 Merge pull request #3266 from 0mar/reduce_ssl_testing
Test internal_ssl separately
2020-12-02 10:59:39 +01:00
0mar
970b25d017 Added docstrings 2020-12-01 10:49:10 +01:00
0mar
671ef0d5ef Moved ssl options to proxy 2020-12-01 10:30:44 +01:00
Erik Sundell
77220d6662 Merge pull request #3289 from minrk/user-count
fix and test TOTAL_USERS count
2020-11-30 15:21:48 +01:00
Min RK
7e469f911d fix and test TOTAL_USERS count
Don't assume UserDict contains all users

an assumption which led to double-counting when a user in the db was loaded into the dict cache
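
A trivial illustration of the double-counting pitfall (how the metric is actually computed in the hub differs; this only shows why a user present both in the db and the cache must be counted once):

```python
def total_users(db_usernames, cached_usernames):
    # count each user once, whether or not they are also in the cache
    return len(set(db_usernames) | set(cached_usernames))

print(total_users(["alice", "bob"], ["alice"]))  # 2, not 3
```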
2020-11-30 13:27:52 +01:00
Erik Sundell
18393ec6b4 Merge pull request #3287 from minrk/bump-black
bump black pre-commit hook to 20.8
2020-11-30 10:26:55 +01:00
Min RK
28fdbeb0c0 update black pre-commit hook
specify minimum target_version as py36

results in some churn
2020-11-30 10:13:10 +01:00
Tim Head
5664e4d318 Merge pull request #3286 from Sangarshanan/patch-1
Fix curl in jupyter announcements
2020-11-30 07:47:27 +01:00
sangarshanan
24c83e721f Fix curl in jupyter announcements
Running the curl as-is returns a 500 with `json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)`. Converting the payload to proper JSON fixes this.
2020-11-28 17:50:44 +05:30
0mar
cc73ab711e Disabled ssl testing 2020-11-27 17:50:47 +01:00
0mar
2cfe4474ac Submitting reason for skiptest 2020-11-27 17:26:44 +01:00
0mar
74766e4786 Resolving merge conflicts 2020-11-27 17:18:40 +01:00
0mar
ed461ff4a7 Merge branch 'tmp' into reduce_ssl_testing
# Conflicts:
#	jupyterhub/tests/test_proxy.py
2020-11-27 17:05:26 +01:00
0mar
184d87ff2a Skip SSL-free tests if not on SSL matrix 2020-11-27 17:00:09 +01:00
Min RK
06ed7dc0cf Merge pull request #3284 from minrk/12-cl
Changelog for 1.2.2
2020-11-27 14:41:08 +01:00
Min RK
a0b229431c Update docs/source/changelog.md
Co-authored-by: Erik Sundell <erik.i.sundell@gmail.com>
2020-11-27 14:40:59 +01:00
0mar
2a06c8a94c WIP: Attempt to access SSL parameters, failing due to self-signed certificate error 2020-11-27 13:26:32 +01:00
Min RK
91159d08d3 Changelog for 1.2.2 2020-11-27 10:09:54 +01:00
Erik Sundell
06a83f146b Merge pull request #3281 from olifre/patch-1
CONTRIBUTING: Fix contributor guide URL
2020-11-27 09:53:41 +01:00
Oliver Freyermuth
7b66d1656b CONTRIBUTING: Fix contributor guide URL
The link has been changed.
2020-11-27 09:39:29 +01:00
0mar
40176a667f Attempt to patch proxy, unsuccessful 2020-11-26 12:22:43 +01:00
Omar Richardson
e02345a4e8 WIP: Moved ssl options to new method 2020-11-26 09:24:44 +01:00
Long Vu
1408e9f5f4 Merge remote-tracking branch 'origin/master' into add-user-agreement-to-login-screen 2020-11-25 10:31:38 -05:00
Long Vu
b66d204d69 login page: no javascript needed for the optional accept terms and conditions feature
Bonus: the user gets a pop-up notification to check the checkbox.

Tested on Mozilla Firefox
(https://user-images.githubusercontent.com/11966697/100246404-18115e00-2f07-11eb-9061-d35434ace3aa.gif)
and Google Chrome.

Feedback from @minrk.
2020-11-25 10:30:22 -05:00
Omar Richardson
164447717f Fix formulation 2020-11-20 15:30:23 +01:00
Omar Richardson
0472ef0533 Central internal_ssl switch 2020-11-20 15:27:50 +01:00
Erik Sundell
202efae6d8 Merge pull request #3177 from minrk/user-state-filter
add ?state= filter for GET /users
2020-11-20 11:06:15 +01:00
Min RK
2e043241fb Merge pull request #3261 from minrk/next-append-query
Only preserve params when ?next= is unspecified
2020-11-20 09:47:20 +01:00
Min RK
fa61f06fed Merge pull request #3237 from alexweav/cleanup-leftover-proxy
[proxy.py] Improve robustness when detecting and closing existing proxy processes
2020-11-20 09:45:53 +01:00
Min RK
8b19413fa1 Merge pull request #3242 from consideRatio/pr/py36-async-await
Assume py36 and remove @gen.coroutine etc.
2020-11-20 09:31:43 +01:00
Min RK
7c2e7692b0 Merge pull request #3265 from ideonate/master
Fix RootHandler when default_url is a callable
2020-11-20 09:14:46 +01:00
Tim Head
ce11959b1a Merge pull request #3267 from slemonide/patch-1
Update services.md
2020-11-19 14:07:56 +01:00
fyrzbavqr
097974d57d Update services.md
Fix small typo
2020-11-19 04:14:54 -08:00
Omar Richardson
09ff03ca4f Superfluous import statement 2020-11-19 13:10:48 +01:00
Omar Richardson
313f050c42 Reduced ssl on for active tests only 2020-11-19 12:58:38 +01:00
Omar Richardson
4862831f71 Trying with different configuration 2020-11-19 12:08:10 +01:00
Omar Richardson
c46beb976a Moving ssl tests to testing matrix 2020-11-19 11:59:03 +01:00
Long Vu
11a85d1dc5 login page: allow full override of the optional accept terms and conditions feature
The text was already overridable but the endblock was at the wrong
location.

Now the javascript can also be overridden.
2020-11-18 14:25:49 -05:00
Dan Lester
67c4a86376 Fix RootHandler when default_url is a callable 2020-11-18 12:55:44 +00:00
Long Vu
e00ef1aef1 Merge remote-tracking branch 'origin/master' into add-user-agreement-to-login-screen 2020-11-17 17:27:30 -05:00
Long Vu
fb5f98f2fa login page: add optional feature to accept terms and conditions in order to login
The feature is disabled by default.

If enabled (by setting `login_term_url`), the user will have to check the
checkbox to accept the terms and conditions in order to log in.
2020-11-17 17:24:38 -05:00
Alex Weaver
82a1ba8402 Import psutil and perform cmdline check on Windows only 2020-11-17 13:02:35 -06:00
Alex Weaver
7f53ad52fb Assume that permission errors when getting process metadata indicate a non-running proxy 2020-11-17 12:55:34 -06:00
agp8x
73cdd687e9 fix formatting 2020-11-17 15:36:30 +01:00
agp8x
af09bc547a change metric prefix to jupyterhub 2020-11-17 15:29:37 +01:00
Min RK
3ddc796068 verify that tornado gen.coroutine and run_on_executor are awaitable
- our APIs require that methods return 'awaitables'
- make sure that the older ways to create tornado 'yieldables' still produce 'awaitables'
2020-11-17 12:38:42 +01:00
Min RK
3c071467bb require tornado 5.1, async_generator 1.9
- maybe_future relies on changes in 5.1, not in 5.0
- async_generator.asynccontextmanager is new in 1.9
2020-11-17 12:23:39 +01:00
Min RK
0c43feee1b run tests with oldest-supported versions
to catch any cases where we make assumptions about more recent versions than we claim to support
2020-11-17 12:22:46 +01:00
Min RK
5bcbc8b328 Merge pull request #3252 from cmd-ntrf/signin
Standardize "Sign in" capitalization on the login page
2020-11-17 11:59:26 +01:00
Min RK
87e4f458fb only preserve params when ?next= is not specified 2020-11-17 11:58:28 +01:00
Min RK
808e8711e1 Merge pull request #3176 from yuvipanda/async_template
Enable async support in jinja2 templates
2020-11-17 11:46:23 +01:00
YuviPanda
19935254a7 Fix pre-commit errors 2020-11-17 15:58:38 +05:30
YuviPanda
a499940309 Remove extraneous coroutine creation
You can 'pass through' coroutines like this without
yield.
2020-11-17 15:41:40 +05:30
YuviPanda
74544009ca Remove extraneous print statement
Was a debugging aid
2020-11-17 15:41:22 +05:30
YuviPanda
665f9fa693 Drop Python 3.5 support
See https://github.com/jupyterhub/jupyterhub/pull/3176#issuecomment-694315759

For Travis, I push the version cascade down one step.
Should preserve our test coverage while conserving test
duration
2020-11-17 15:39:55 +05:30
YuviPanda
24b555185a Revert "Run templates synchronously for Python 3.5"
This reverts commit f1155d6c2afbcbd875c7addc88784313c77da8e9.

Instead, let's stop supporting 3.5!
2020-11-17 15:39:26 +05:30
YuviPanda
24f4b7b6b6 Run templates synchronously for Python 3.5
jinja2's async support requires Python 3.6+. That should
be an implementation detail - so we render it in the main
thread (current behavior) but pretend we did not
2020-11-17 15:39:26 +05:30
YuviPanda
217dffa845 Fix typo in format string 2020-11-17 15:39:26 +05:30
YuviPanda
a7b796fa57 Autoformat with black 2020-11-17 15:39:21 +05:30
YuviPanda
6c5fb5fe97 F-strings are Python 3.6, not 3.5 2020-11-17 15:38:29 +05:30
Yuvi Panda
20ea322e25 Fix typo
Co-authored-by: Tim Head <betatim@gmail.com>
2020-11-17 15:38:29 +05:30
YuviPanda
4f9664cfe2 Provide sync versions of render_template too
write_error is a synchronous method called by an async
method from inside the event loop. This means we can't just
schedule an async render_templates in the same loop and wait
for it - that would deadlock.

jinja2 compiles templates differently based on whether you
enable async support or not. Templates compiled with async
support can't be used in cases like ours, where we already
have an event loop running and calling a sync function. So
we maintain two almost identical jinja2 environments
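
A minimal sketch of the two-environment approach (illustrative, using a DictLoader rather than the hub's actual template loader):

```python
import asyncio
from jinja2 import DictLoader, Environment

templates = {"page.html": "Hello {{ name }}"}

# async environment for handlers that can await rendering
async_env = Environment(loader=DictLoader(templates), enable_async=True)
# near-identical sync twin for code paths like write_error that cannot await
sync_env = Environment(loader=DictLoader(templates), enable_async=False)

print(sync_env.get_template("page.html").render(name="sync"))
print(asyncio.run(async_env.get_template("page.html").render_async(name="async")))
```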
2020-11-17 15:38:29 +05:30
YuviPanda
be211a48ef Enable async jinja2 template rendering
Follows https://jinja.palletsprojects.com/en/2.11.x/api/#async-support

- This blocks the main thread fewer times
- We can use async methods inside templates too
2020-11-17 15:38:29 +05:30
Min RK
553ee26312 preserve url params in ?next from root page 2020-11-17 10:45:11 +01:00
Erik Sundell
7e6111448a Merge pull request #3253 from minrk/wait-admin-form
wait for pending spawns in spawn_form_admin_access
2020-11-16 02:39:11 +01:00
Erik Sundell
ccc0294f2e Merge pull request #3257 from manics/jupyterhub_idle_culler
Update services-basics.md to use jupyterhub_idle_culler
2020-11-14 17:37:17 +01:00
Simon Li
3232ad61aa Update services-basics.md to use jupyterhub_idle_culler
Closes https://github.com/jupyterhub/jupyterhub/issues/3256
2020-11-14 15:59:56 +00:00
Min RK
202a5bf9a5 Merge pull request #3255 from fcollonval/patch-1
Environment marker on pamela
2020-11-13 10:28:28 +01:00
Frédéric Collonval
47136f6a3c Environment marker on pamela 2020-11-13 09:57:20 +01:00
Min RK
5d3161c6ef wait for pending spawns in spawn_form_admin_access
copy logic from test_spawn_admin_access
2020-11-12 10:16:48 +01:00
Félix-Antoine Fortin
9da4aa236e Standardize Sign in capitalization on the login page 2020-11-11 13:01:14 -05:00
Erik Sundell
d581cf54cb Retain an assertion and update comments 2020-11-11 15:40:54 +01:00
Erik Sundell
fca2528332 Retain explicit pytest mark asyncio of our coroutines 2020-11-11 14:47:41 +01:00
Erik Sundell
5edd246474 Replace @async_generator/yield_ with async/yield 2020-11-11 14:47:29 +01:00
Erik Sundell
77ed2faf31 Replace gen.multi(futures) with asyncio.gather(*futures) 2020-11-11 14:47:24 +01:00
Erik Sundell
4a17441e5a Replace gen.sleep with asyncio.sleep 2020-11-11 14:40:59 +01:00
Erik Sundell
e1166ec834 Replace @gen.coroutine/yield with async/await 2020-11-11 14:36:56 +01:00
Erik Sundell
2a1d341586 Merge pull request #3250 from minrk/test-condition
remove push-branch conditions for CI
2020-11-11 12:21:52 +01:00
Min RK
55a59a2e43 remove push-branch conditions for CI
testing other branches is useful, and there's little cost to removing the conditions:

- we don't run PRs from our repo, so test runs aren't duplicated on the repo
- testing on a fork without opening a PR is still useful (I use this often)
- if we push a branch, it should probably be tested (e.g. backport branch), and filters make this extra work
- the cost of running a few extra tests is low, especially given actions' current quotas and parallelism
2020-11-11 09:12:58 +01:00
Min RK
e019a33509 Merge pull request #3246 from consideRatio/pr/migrate-to-gh-actions-from-travis
Migrate from travis to GitHub actions
2020-11-11 09:06:58 +01:00
Erik Sundell
737dcf65eb Fix mysql/postgresql auth and comment struggles 2020-11-10 19:20:47 +01:00
Erik Sundell
9deaeb1fa9 Final variable name update 2020-11-10 16:19:22 +01:00
Erik Sundell
bcfc2c1b0d Cleanup use of database related environment variables 2020-11-10 16:16:28 +01:00
Erik Sundell
f71bacc998 Apply suggestions from code review
Co-authored-by: Min RK <benjaminrk@gmail.com>
2020-11-10 15:39:46 +01:00
Erik Sundell
ff14b1aa71 CI: use --maxfail=2 2020-11-10 11:14:59 +01:00
Erik Sundell
ebbbdcb2b1 Refactor ci/docker-db and ci/init-db 2020-11-10 11:14:40 +01:00
Erik Sundell
d0fca9e56b Reword comment 2020-11-10 10:03:53 +01:00
Erik Sundell
517737aa0b Add notes about not needing "set -e" etc. 2020-11-10 02:17:44 +01:00
Erik Sundell
5dadd34a87 Help GitHub UI present the job parameterization + inline comments 2020-11-10 02:17:40 +01:00
Erik Sundell
df134fefd0 Refactor pre-commit to its own job 2020-11-10 01:17:30 +01:00
Erik Sundell
47cec97e63 Let pytest fail on first error 2020-11-10 01:16:12 +01:00
Erik Sundell
0b8b87d7d0 Remove debugging trigger 2020-11-09 07:43:42 +01:00
Erik Sundell
3bf1d72905 Test in Ubuntu 20.04 2020-11-09 07:42:45 +01:00
Erik Sundell
8cdd449cca Unpin mysql-connector-python and resolve errors 2020-11-09 07:42:12 +01:00
Erik Sundell
6fc3c19763 For CI readability, exit on first failure 2020-11-09 07:41:05 +01:00
Erik Sundell
265dc07c78 Remove .travis.yml, add GitHub workflow 2020-11-09 07:40:15 +01:00
Erik Sundell
1ae039ddef Remove py3.7+ breaking test variation (has~x)
The jupyterhub/tests/test_spawner.py::test_spawner_routing[has~x] test
failed in py37+ but not in py36, and I think this is due to a change in
Python's underlying socket library.

This is a stacktrace from Python/3.7.9/x64/lib/python3.7/site-packages/urllib3/util/connection.py:61

```
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -2] Name or service not known
```

Here is relevant documentation about socket.getaddrinfo.

https://docs.python.org/3.7/library/socket.html#socket.getaddrinfo
2020-11-09 07:32:11 +01:00
Erik Sundell
378d34b213 Don't ignore outer env vars 2020-11-09 07:31:16 +01:00
Mohammad Wasil
9657430cac Fix reverse proxy redirect from https 2020-11-04 17:59:28 +01:00
Mohammad Wasil
6271535f46 Merge pull request #1 from jupyterhub/master
Merge from jupyterhub/jupyterhub master
2020-11-04 17:02:28 +01:00
agp8x
2bef5ba981 Add prefix to prometheus metrics to group all jupyter metrics (see #1585) 2020-11-04 13:54:31 +01:00
Alex Weaver
efb1f3c824 Run precommit hooks, fix formatting issue 2020-10-30 12:35:01 -05:00
Alex Weaver
53050a5836 Merge branch 'master' of https://github.com/jupyterhub/jupyterhub into cleanup-leftover-proxy 2020-10-30 12:14:08 -05:00
Alex Weaver
6428ad9f0b Check proxy cmd before shutting down, cleaner shutdown on Windows 2020-10-30 12:13:50 -05:00
Min RK
9068ff2239 back to dev 2020-10-30 13:22:14 +01:00
Min RK
fc6cd33ce0 release 1.2.1 2020-10-30 13:20:43 +01:00
Erik Sundell
b0b8e2d058 Merge pull request #3235 from minrk/changelog-1.2.1
Changelog for 1.2.1
2020-10-30 13:19:52 +01:00
Erik Sundell
6bfa402bfa Apply suggestions from code review 2020-10-30 13:19:18 +01:00
Min RK
b51a0bba92 Changelog for 1.2.1 2020-10-30 13:15:19 +01:00
Erik Sundell
2d3f962a1d Merge pull request #3234 from gesiscss/master
Make external JupyterHub services' oauth_no_confirm configuration work as intended
2020-10-30 13:07:39 +01:00
Kenan Erdogan
625242136a fix checking if oauth confirm is needed 2020-10-30 10:39:02 +01:00
Min RK
f92560fed0 back to dev 2020-10-29 14:06:20 +01:00
Min RK
8249ef69f0 release jupyterhub 1.2.0 2020-10-29 14:03:34 +01:00
Min RK
c63605425f Merge pull request #3233 from minrk/1.2.0-final
latest changelog since 1.2.0b1
2020-10-29 14:03:01 +01:00
Min RK
5b57900c0b 1.2.0 heading in changelog
Co-authored-by: Erik Sundell <erik.i.sundell@gmail.com>
2020-10-29 14:02:35 +01:00
Erik Sundell
d0afdabd4c order changelog entries systematically 2020-10-29 13:13:02 +01:00
Min RK
618746fa00 latest changelog since 1.2.0b1 2020-10-29 13:02:04 +01:00
Min RK
e7bc6c2ba9 Merge pull request #3229 from minrk/configurable-pagination
make pagination configurable
2020-10-29 10:53:29 +01:00
Min RK
e9f86cd602 make pagination configurable
add some unittests for pagination

reorganize pagination a bit to make it easier to configure
2020-10-29 09:24:34 +01:00
Erik Sundell
6e8517f795 Merge pull request #3232 from consideRatio/pr/travis-badge
Update travis-ci badge in README.md
2020-10-28 23:01:04 +01:00
Erik Sundell
5fa540bea1 Update travis-ci badge in README.md 2020-10-28 22:59:44 +01:00
Min RK
99f597887c Merge pull request #3223 from consideRatio/pr/proxy-api_request-retries
Make api_request to CHP's REST API more reliable
2020-10-28 15:21:23 +01:00
Erik Sundell
352526c36a Merge pull request #3226 from xlotlu/patch-1
Fix typo in documentation
2020-10-28 08:09:11 +01:00
Ionuț Ciocîrlan
cbbed04eed fix typo 2020-10-28 03:00:31 +02:00
Christine Banek
b2e7b474ff Allow options_from_form to be configurable 2020-10-27 12:11:48 -07:00
Erik Sundell
b2756fb18c Retry on >=500 errors on hub to proxy REST API requests 2020-10-27 16:53:53 +01:00
Erik Sundell
37b88029e4 Revert improved logging attempt 2020-10-27 16:28:56 +01:00
Erik Sundell
4b7413184e Adjust hub to proxy REST API requests' timeouts 2020-10-27 16:23:40 +01:00
Min RK
41ef0da180 Merge pull request #3219 from elgalu/patch-3
Fix #2284 must be sent from authorization page
2020-10-27 15:41:05 +01:00
Erik Sundell
a4a8b3fa2c Fix scope mistake 2020-10-27 13:38:34 +01:00
Erik Sundell
02e5984f34 Let API requests to CHP retry on 429,500,503,504 as well 2020-10-27 12:52:14 +01:00
Erik Sundell
b91c5a489c Rely on HTTPError over pycurl assumed CurlError 2020-10-26 20:39:20 +01:00
Erik Sundell
c47c3b2f9e Make api_request to CHP's REST API more reliable 2020-10-25 02:35:36 +01:00
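A minimal sketch of the retry idea, assuming tornado's AsyncHTTPClient; the helper name and backoff are illustrative rather than JupyterHub's actual api_request implementation:

```python
import asyncio

from tornado.httpclient import AsyncHTTPClient, HTTPError

RETRY_STATUSES = {429, 500, 503, 504}

async def api_request(url, retries=3):
    # Fetch url, retrying transient proxy errors with a simple backoff.
    client = AsyncHTTPClient()
    for attempt in range(retries):
        try:
            return await client.fetch(url)
        except HTTPError as e:
            if e.code not in RETRY_STATUSES or attempt == retries - 1:
                raise
            await asyncio.sleep(2 ** attempt)
```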
Min RK
eaa1353dcd typos in use of partition 2020-10-23 14:16:46 +02:00
Leo Gallucci
b9a3b0a66a Fix #2284 must be sent from authorization page
Update jupyterhub/apihandlers/auth.py
Co-authored-by: Min RK <benjaminrk@gmail.com>
2020-10-22 11:36:15 +02:00
Leo Gallucci
929b805fae Fix #2284 must be sent from authorization page
Fix #2284 Authorization form must be sent from authorization page
2020-10-21 17:57:14 +02:00
Min RK
082f6516a1 1.2.0b1 2020-10-16 10:14:32 +02:00
Erik Sundell
1aa21f1d6c Merge pull request #3192 from consideRatio/pr/changelog-for-1.2.0b1
changelog for 1.2.0b1
2020-10-15 15:30:30 +02:00
Erik Sundell
cec9702796 changelog for 1.2.0b1 updated 2020-10-15 14:56:43 +02:00
Erik Sundell
f8cbda9c3c Merge pull request #3208 from minrk/traitlets-list-allow-none
avoid specifying default_value=None in Command traits
2020-10-15 14:47:36 +02:00
Min RK
71aee05bc0 use /api/status to test server
workaround 404 issue with /api/spec.yaml in jupyter-server 1.0.4
2020-10-15 13:23:02 +02:00
Erik Sundell
772de55a0d Merge pull request #3209 from minrk/rtd-docs
stop building docs on circleci
2020-10-15 12:14:40 +02:00
Min RK
e6f92238b1 stop building docs on circleci
RTD CI is enabled now
2020-10-15 11:41:11 +02:00
Min RK
db76b52e35 avoid specifying default_value=None in Command traits
causes issues with traitlets dev where 'unspecified' should be Undefined, not specified-None

Best to leave it out if it's really unspecified
2020-10-15 11:38:08 +02:00
Min RK
e6e994e843 add changelog highlights for 1.2.0 2020-10-15 11:01:26 +02:00
Min RK
284e379341 Merge pull request #3204 from kreuzert/exponential_backoff_overflow_exception
Prevent OverflowErrors in exponential_backoff()
2020-10-15 10:39:28 +02:00
Erik Sundell
3ce1cc63af Merge pull request #3207 from kinow/patch-2
[docs] Remove duplicate line in changelog for 1.1.0
2020-10-15 00:34:56 +02:00
Bruno P. Kinoshita
9945a7f7be Update changelog.md
Remove duplicate changelog from 1.1.0
2020-10-15 09:59:04 +13:00
Tim Kreuzer
004c964cc1 Update utils.py 2020-10-13 10:37:31 +02:00
Tim Kreuzer
0f0d6d12d3 Update jupyterhub/utils.py
Co-authored-by: Erik Sundell <erik.i.sundell@gmail.com>
2020-10-13 10:30:05 +02:00
Tim Kreuzer
c97e4d4e2f Update utils.py
Prevent exponential_backoff() from crashing with an OverflowError
2020-10-12 17:25:25 +02:00
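A hedged sketch of one way to guard against the overflow (names are illustrative, not the exact jupyterhub.utils change):

```python
import random

def backoff_interval(attempt, start_wait=0.2, scale_factor=2, max_wait=5):
    # Cap the exponent so a very large attempt count cannot overflow
    # before the result is clamped to max_wait.
    scale = scale_factor ** min(attempt, 32)
    return random.uniform(0, min(max_wait, start_wait * scale))
```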
Erik Sundell
53d496aff5 changelog for 1.2.0b1 2020-10-04 07:04:42 +02:00
Min RK
032ae29066 Merge pull request #3184 from rainwoodman/patch-1
Mention the PAM pitfall on fedora.
2020-10-02 10:50:17 +02:00
Yu Feng
21caa57e7b remove sshauthenticator reference. 2020-10-01 09:13:37 -07:00
Yu Feng
37ee104afa Update docs/source/reference/config-sudo.md
Co-authored-by: Erik Sundell <erik.i.sundell@gmail.com>
2020-10-01 09:11:15 -07:00
Erik Sundell
dac75ff996 Merge pull request #3019 from stv0g/remove-unused-imports
Remove unused imports
2020-10-01 13:17:36 +02:00
Erik Sundell
67e06e5a18 Fix order of imports 2020-10-01 12:44:51 +02:00
Erik Sundell
4cbc0bad34 Merge branch 'master' into remove-unused-imports 2020-10-01 12:07:37 +02:00
Erik Sundell
9f8c1decc4 Merge pull request #2891 from rajat404/auto-gen-docs
Generate prometheus metrics docs
2020-10-01 11:40:05 +02:00
Erik Sundell
1244533387 Merge pull request #3185 from rainwoodman/patch-2
Add SELinux configuration for nginx
2020-10-01 11:15:32 +02:00
Erik Sundell
8c30724f17 monitoring docs: fixes following monitoring section relocation 2020-10-01 10:45:11 +02:00
Erik Sundell
50868f5bb5 monitoring docs: relocate monitoring section under technical reference 2020-10-01 10:36:19 +02:00
Erik Sundell
e15b6ad52e Makefile: let make html depend on generated metrics.rst 2020-10-01 10:13:31 +02:00
Rajat Goyal
b194135a0f Generate list of prometheus metrics in reStructuredText rather than markdown 2020-09-30 23:52:29 +05:30
Rajat Goyal
5b8a7fd191 Remove unused dependency 2020-09-30 23:25:22 +05:30
Rajat Goyal
be272ffb2a Formatted text for better readability 2020-09-30 23:14:21 +05:30
Rajat Goyal
8ee60ce0c7 Add metrics documentation generation step in CircleCI & RTD configs
Also rename the generated metrics documentation directory from `gen` to `_gen`
2020-09-30 22:57:46 +05:30
Rajat Goyal
e553bcb7e2 Unpin dependencies from their patch versions 2020-09-30 22:08:50 +05:30
Rajat Goyal
c0288ec6f6 Update docs/source/monitoring/index.rst
- Fixes typo (eolving -> evolving)
- Re-uses the word "current" instead of "momentary" for comprehensibility
- References JupyterHub's current state with "its" instead of "the" for comprehensibility

Co-authored-by: Erik Sundell <erik.i.sundell@gmail.com>
2020-09-30 22:08:50 +05:30
Rajat Goyal
65b83f5f00 Update docs/source/monitoring/index.rst
Co-authored-by: Erik Sundell <erik.i.sundell@gmail.com>
2020-09-30 22:08:50 +05:30
Rajat Goyal
dcd520179c Made changes in monitoring docs as per the feedback on PR review 2020-09-30 22:08:50 +05:30
Rajat Goyal
c830d964d5 Apply suggestions from code review
Co-authored-by: Min RK <benjaminrk@gmail.com>
2020-09-30 22:08:50 +05:30
rajat404
9e5993f1da Docs: Fix typo; Add generate task as sub-task in html 2020-09-30 22:08:50 +05:30
rajat404
7ed3e0506b Extract doc generation logic in separate method 2020-09-30 22:08:50 +05:30
rajat404
7045e1116c Inspect metrics and generate metric list in docs; Add monitoring section in Docs 2020-09-30 22:08:50 +05:30
Yu Feng
fb56fd406f Add SELinux configuration for nginx
On a Fedora workstation these steps are needed.
2020-09-22 22:08:42 -07:00
Yu Feng
5489395272 Mention the PAM pitfall on fedora. 2020-09-22 21:51:08 -07:00
Yuvi Panda
6ecda96dd6 Merge pull request #3174 from AngelOnFira/upgrade-jquery-dep
Upgraded Jquery dep
2020-09-17 22:42:26 +05:30
Min RK
30b8bc3664 add ?state= filter for GET /users
allows selecting users based on the 'ready', 'active', or 'inactive' states of their servers

- ready: users who have any servers in the 'ready' state
- active: users who have any servers in the 'active' state (i.e. ready OR pending)
- inactive: users who have *no* servers in the 'active' state (inactive + active = all users, no overlap)

Does not change the user model, so a user with *any* ready servers will still return all their servers
2020-09-17 12:31:16 +02:00
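A hedged usage sketch of the new filter (hub URL and token are placeholders):

```python
import requests

# List users who currently have at least one server in the 'ready' state.
r = requests.get(
    "http://127.0.0.1:8081/hub/api/users",
    params={"state": "ready"},
    headers={"Authorization": "token YOUR_API_TOKEN"},
)
r.raise_for_status()
print([user["name"] for user in r.json()])
```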
Forest Anderson
80ad455fc7 Upgraded jquery dep 2020-09-14 13:01:27 -04:00
Min RK
21eaf0dd9f Merge pull request #3077 from kinow/add-config-reference
Add Configuration Reference section to docs
2020-09-08 16:40:10 +02:00
Min RK
84d2524025 jupyterhub_config.py filename typo 2020-09-08 16:39:51 +02:00
Min RK
959dfb145a Merge pull request #3121 from rkdarst/clear-state-after-post-stop-hook
jupyterhub/user: clear spawner state after post_stop_hook
2020-09-08 16:38:18 +02:00
Min RK
998c18df42 Merge pull request #3133 from ideonate/master
Allow JupyterHub.default_url to be a callable
2020-09-08 16:36:52 +02:00
Richard Darst
88b10aa2f5 jupyterhub/user: Remember to save the state in the database 2020-09-08 13:48:27 +03:00
Dan Lester
d8f5758e08 Fix rst in default_url docstring 2020-09-08 09:55:03 +01:00
Min RK
47e45a4d3f Merge pull request #3136 from pabepadu/add_footer_block
Add a footer block + wrap the admin footer in this block
2020-09-08 09:38:21 +02:00
Min RK
3e31ff4ac7 Merge pull request #3160 from rcthomas/control-service-display
Control service display
2020-09-08 09:37:12 +02:00
Min RK
ff30396a8e Merge pull request #3028 from possiblyMikeB/ui-feedback-onsubmit
UI Feedback on Submit
2020-09-08 09:36:39 +02:00
Min RK
196a7fbc65 Merge pull request #3072 from minrk/purge-expired
synchronize implementation of expiring values
2020-09-08 09:35:25 +02:00
Richard Darst
c66e8bb4c9 jupyterhub/user: remove extraneous = {}
- Thanks to review from @minrk
2020-09-07 17:21:23 +03:00
Min RK
5595146fe2 Merge pull request #3147 from jgwerner/fix/api-request-error
Get error description from error key vs error_description key
2020-09-07 16:18:21 +02:00
Min RK
76b688e574 Merge pull request #3137 from lydian/sort_on_spawner_last_activity
admin page sorts on spawner last_activity instead of user last_activity
2020-09-07 16:14:12 +02:00
Min RK
f00d0be4d6 Merge pull request #3156 from manics/docker-py38
Update Dockerfile to ubuntu:focal (Python 3.8)
2020-09-07 16:13:18 +02:00
Min RK
f9d815676f verify static files in docker tests 2020-09-07 16:06:48 +02:00
Min RK
94612d09a6 build wheel with setup.py bdist_wheel
pip wheel from scratch may not include files generated during build
2020-09-07 15:03:13 +02:00
Dan Lester
76ed65ed82 default_url takes handler object instead of user 2020-08-31 18:36:57 +01:00
Greg
560bab395b update based on pr suggestion
Signed-off-by: Greg <werner.greg@gmail.com>
2020-08-27 11:16:57 -04:00
Greg
c68b846eef get error key or error_description key if not available
Signed-off-by: Greg <werner.greg@gmail.com>
2020-08-27 11:12:18 -04:00
Greg
5896b2c9f7 get error description from error key vs error_description key
Signed-off-by: Greg <werner.greg@gmail.com>
2020-08-27 11:12:18 -04:00
Min RK
0317fd63fa Merge pull request #3103 from kinow/responsive-issues
Hide hamburger button menu in mobile/responsive mode and fix other minor issues
2020-08-27 11:15:50 +02:00
Min RK
7f6886c60f Merge pull request #3104 from cmd-ntrf/rest-api-version
Update version in docs/rest-api.yaml
2020-08-27 11:00:14 +02:00
Min RK
10bdca8901 Merge pull request #3142 from snickell/document-external-service-api-tokens-better
Document external service api_tokens better
2020-08-27 09:52:31 +02:00
Min RK
66cb2c0f3e Merge pull request #3128 from minrk/mix-it-in
Implement singleuser with mixins
2020-08-27 09:51:19 +02:00
Min RK
0152e29946 Merge pull request #3159 from synchronizing/patch-1
Added extra documentation for endpoint /users/{name}/servers/{server_name}.
2020-08-27 09:51:01 +02:00
Min RK
c6f0c07931 Merge pull request #3157 from manics/python-traitlets-latest
Don't allow 'python:3.8 + master dependencies' to fail
2020-08-27 09:45:50 +02:00
Min RK
51ceab9f6f Merge pull request #3149 from betatim/simplifiy-health-checks
Simplify code of the health check handler
2020-08-27 09:44:02 +02:00
Rollin Thomas
46ead8cd9d Add display variable to tests 2020-08-26 21:43:16 -07:00
Rollin Thomas
bfb3d50936 Reformat! 2020-08-26 21:29:28 -07:00
Rollin Thomas
962307475e Add service display to service API model 2020-08-26 19:15:21 -07:00
Rollin Thomas
80f4edcd20 Omit service if it is not OK to display 2020-08-26 18:57:17 -07:00
Rollin Thomas
1ad4035943 Control whether service is listed in UI or not 2020-08-26 18:56:03 -07:00
Felipe Faria
5ab735fea3 Added extra documentation for endpoint /users/{name}/servers/{server_name}. 2020-08-26 19:07:57 -04:00
Simon Li
e79cb0d376 Don't allow 'python:3.8 + master dependencies' to fail 2020-08-26 22:40:57 +01:00
Simon Li
f728cf89c6 Update Dockerfile to ubuntu:focal (Python 3.8) 2020-08-26 22:24:14 +01:00
Tim Head
8f719e21d2 Simplify code of the health check handler 2020-08-26 14:07:30 +02:00
Min RK
29de00ee3c Merge pull request #3140 from chancez/fix_ssl_http_client_master
jupyterhub/utils: Load system default CA certificates in make_ssl_context
2020-08-26 14:05:29 +02:00
Chance Zibolski
52291b0012 jupyterhub/utils: Load system default CA certificates in make_ssl_context
Fixes issues with OAuth flows when internal_ssl is enabled.
When internal_ssl was enabled requests to non-internal endpoints failed
because the system CAs were not being loaded.

This caused failures with public OAuth providers with public CAs since
they would fail to validate.
2020-08-25 09:09:58 -07:00
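A minimal sketch of the idea, assuming Python's standard ssl module (not JupyterHub's exact make_ssl_context implementation):

```python
import ssl

def make_ssl_context(keyfile, certfile, cafile=None):
    # Trust the internal CA for hub-internal traffic...
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=cafile)
    ctx.load_cert_chain(certfile, keyfile)
    # ...and also load the system default CAs so public OAuth providers
    # with public certificates still validate.
    ctx.load_default_certs()
    return ctx
```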
Georgiana Elena
e58c341290 Merge pull request #3150 from yhal-nesi/master
update prometheus metrics for server spawn when it fails with exception
2020-08-22 00:01:53 +03:00
yhal-nesi
f988a4939e Update jupyterhub/handlers/base.py
Ah makes sense, I was wondering why the tests fail.

Co-authored-by: Georgiana Elena <georgiana.dolocan@gmail.com>
2020-08-22 08:47:15 +12:00
Yuriy Halytskyy
60ee2bfc35 update prometheus metrics for server spawn when it fails with exception 2020-08-20 08:18:39 +12:00
Erik Sundell
42601c52cc Merge pull request #3151 from consideRatio/docs/move-cert-docstring
docs: please docs linter (move_cert docstring)
2020-08-19 14:10:54 +02:00
Erik Sundell
0679586b2c docs: please docs linter properly
We are users of the napoleon sphinx extension, which helps us parse our
Google Style Python Docstrings, and its syntax suggests we should use
indentation when we use more than one string for an entry in an
Arguments: or Returns: list.

For more details, see: https://github.com/jupyterhub/jupyterhub/pull/3151#issuecomment-676186565
2020-08-19 13:49:28 +02:00
Erik Sundell
be4201f7ee docs: please docs linter (move_cert docstring) 2020-08-19 13:14:46 +02:00
Min RK
11a73b5630 Merge pull request #3131 from rkevin-arch/healthcheck-head-request
Allow head requests for the health endpoint
2020-08-18 10:57:09 +02:00
Tim Head
f1efac41bf Merge pull request #3143 from basvandervlies/apache_reverse_proxy_doc
Needed NoEscape (NE) option for apache
2020-08-14 14:54:23 +02:00
Bas van der Vlies
aa6921dd5a Needed NoEscape (NE) option for apache
Otherwise %20 is escaped to %25%20 and we cannot rename "Untitled Folder",
and opening files with spaces or other special chars fails.
2020-08-14 11:24:27 +02:00
Seth Nickell
e94da17c3c Document external service api_tokens better
- Explicitly mention min-8-char constraint
- Connect the api_token in the configuration with the one mentioned in auth requests

Co-authored-by: Mike Situ <msitu@ceresimaging.net>
2020-08-13 12:28:17 -10:00
Min RK
e2ee18fa86 Merge pull request #3123 from alexweav/tornado-py38
app.py: Work around incompatibility between Tornado 6 and asyncio proactor event loop in python 3.8 on Windows
2020-08-10 09:18:24 +02:00
Lydian Lee
c5ec8ceba3 admin page sorts on spawner last_activity instead of user last_activity 2020-08-07 16:37:47 -07:00
pabepadu
3458c742cb Add a footer block + wrap the admin footer in this block 2020-08-07 02:19:21 +02:00
Georgiana Elena
d1a85e53dc Merge pull request #3132 from pabepadu/fix_services_dropdown_in_admin_page
Fix the services dropdown on the admin page
2020-08-07 00:13:37 +03:00
Dan Lester
d915cc3ff2 Allow JupyterHub.default_url to be a callable based on user 2020-08-05 11:59:25 +01:00
Georgiana Elena
b11c02c6e0 Merge pull request #3118 from minrk/tag-from-singleuser
only build tagged versions on docker tags
2020-08-05 12:44:23 +03:00
pabepadu
49f3bb53f4 Fix the services dropdown in the admin page 2020-08-05 05:29:21 +02:00
rkevin
9b7a94046b Allow head requests for the health endpoint 2020-08-03 00:20:17 -07:00
Min RK
62ef5ca2fe test with /api/spec.yaml
because /api/status is currently broken in jupyter_server
2020-07-31 12:44:42 +02:00
Min RK
028e0b0b77 include JUPYTERHUB_SINGLEUSER_APP in env_keep
since the child process is the one that inherits it anyway
2020-07-31 12:12:38 +02:00
Min RK
d2a42a69b0 simplify app mixin
get handler classes from instance attributes, rather than arguments

simplifies API
2020-07-31 12:12:11 +02:00
Min RK
1f21f283df Merge pull request #3127 from mriedem/3126-slow-spawn-timeout-warning
Don't log a warning when slow_spawn_timeout is disabled
2020-07-31 12:07:38 +02:00
Alex Weaver
7f35158575 Also apply patch before creating new event loop in atexit, just in case 2020-07-29 11:03:05 -05:00
Min RK
d0da677813 infer default mixins from $JUPYTERHUB_SINGLEUSER_APP
set to e.g. JUPYTERHUB_SINGLEUSER_APP=jupyterlab.labapp.LabApp for JupyterLab
2020-07-24 13:06:35 +02:00
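A hedged sketch of the selection logic (the fallback class and the lookup itself are illustrative):

```python
import importlib
import os

# Pick the single-user application class from $JUPYTERHUB_SINGLEUSER_APP,
# e.g. "jupyterlab.labapp.LabApp", falling back to the classic NotebookApp.
app_path = os.environ.get(
    "JUPYTERHUB_SINGLEUSER_APP", "notebook.notebookapp.NotebookApp"
)
module_name, _, class_name = app_path.rpartition(".")
App = getattr(importlib.import_module(module_name), class_name)
```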
Min RK
a0a02688c5 create singleuser app with mixins
for easier reuse with jupyter_server

mixins have a lot of assumptions about the NotebookApp structure.
Need to make sure these are met by jupyter_server (that's what tests are for!)
2020-07-24 12:57:05 +02:00
Min RK
2372842b8a Merge remote-tracking branch 'origin/master' into mix-it-in
# Conflicts:
#	.travis.yml
2020-07-24 09:53:02 +02:00
Matt Riedemann
7e205a9751 Don't log a warning when slow_spawn_timeout is disabled
When using the `KubeSpawner` it is typical to disable the
`slow_spawn_timeout` by setting it to 0. `zero-to-jupyterhub-k8s`
does this by default [1]. However, this causes an immediate `TimeoutError`
which gets logged as a warning like this:

>User hub-stress-test-123 is slow to start (timeout=0)

This avoids the warning by checking the value and, if disabled, simply
returning without logging the warning.

[1] https://github.com/jupyterhub/zero-to-jupyterhub-k8s/commit/b4738edc5

Closes #3126
2020-07-23 16:09:19 -05:00
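A hedged sketch of the guard (function and argument names are illustrative, not the actual handler code):

```python
import asyncio

async def warn_if_slow_spawn(spawn_future, slow_spawn_timeout, log, user_name):
    if slow_spawn_timeout <= 0:
        # Timeout disabled (e.g. zero-to-jupyterhub-k8s sets it to 0):
        # don't wait and don't log a spurious warning.
        return
    try:
        await asyncio.wait_for(asyncio.shield(spawn_future), slow_spawn_timeout)
    except asyncio.TimeoutError:
        log.warning(
            "User %s is slow to start (timeout=%s)", user_name, slow_spawn_timeout
        )
```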
Alex Weaver
e7fab5c304 Format and lint 2020-07-22 15:16:11 -05:00
Alex Weaver
8b8b512d06 Apply asyncio patch 2020-07-22 15:04:16 -05:00
Richard Darst
714072dbd8 jupyterhub/user: clear spawner state after post_stop_hook
- Related issue: #3120.  Closes: #3120.

- I realized that spawner.clear_state() is called before
  spawner.post_stop_hook().  This was a bit surprising to me,
  and caused some issues.

- I tried the naive strategy of moving clear_state to later and
  setting the orm_state to `{}` at the point where it used to be
  clear.

- This tries to maintain the exception behavior of clear_state and
  post_stop_hook, but is not exactly identical.

- To review:

  - I'm not sure this is a good idea!

  - Carefully consider the implications of this.  I am not at all sure
    about unintended side-effects or what intended semantics are.
2020-07-22 10:06:21 +03:00
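A rough sketch of the reordering being proposed (treat this as pseudocode for the call order, not the actual User.stop implementation):

```python
async def stop(user, spawner, db):
    await spawner.stop()
    spawner.run_post_stop_hook()    # hook still sees the old spawner state
    spawner.clear_state()           # state is cleared only afterwards
    spawner.orm_spawner.state = spawner.get_state()
    db.commit()                     # remember to save the cleared state
```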
Min RK
6e8f39c22d only build tagged versions on docker tags
instead of building 'stable' from master
2020-07-20 10:14:35 +02:00
Erik Sundell
f3c3225124 Merge pull request #3114 from yuvipanda/no-cull-idle
Remove idle culler example
2020-07-14 17:03:12 +02:00
Georgiana Elena
614bfe77d8 Update examples/cull-idle/README.md 2020-07-14 14:22:51 +03:00
YuviPanda
1beea06ce5 Remove idle culler example
Has been moved to its own repo.

See https://github.com/jupyterhub/the-littlest-jupyterhub/pull/559
for more info
2020-07-12 17:14:14 +05:30
Erik Sundell
42adb44153 Merge pull request #3111 from mriedem/log-slow-stop-timeout
Log slow_stop_timeout when hit like slow_spawn_timeout
2020-07-11 02:56:13 +02:00
Matt Riedemann
d5a0202106 Log slow_stop_timeout when hit like slow_spawn_timeout
When `slow_spawn_timeout` is hit the configured timeout value
gets logged [1]. This does the same thing when `slow_stop_timeout`
is hit.

[1] https://github.com/jupyterhub/jupyterhub/blob/1.1.0/jupyterhub/handlers/base.py#L947
2020-07-10 11:38:26 -05:00
Georgiana Elena
3d524f2092 Merge pull request #3109 from kxiao-fn/proper_named_server_deletion
fix for stopping named server deleting default server and tests
2020-07-07 15:41:43 +03:00
Katherine Xiao
409835303e formatting 2020-07-06 17:45:08 -07:00
Katherine Xiao
acc8d15fec fixed test 2020-07-06 17:23:42 -07:00
Katherine Xiao
608cad6404 fix in base.py 2020-07-06 12:53:50 -07:00
Katherine Xiao
571a428375 fix deletion of default server when stopping named server and added corresponding test 2020-07-06 12:48:41 -07:00
Chris Holdgraf
1575adf272 Merge pull request #3107 from consideRatio/docs-logo-rem-unused-stuff
docs: unsqueeze logo, remove unused CSS and templates
2020-07-06 08:00:14 -07:00
Erik Sundell
4bc6d869f3 docs: unsqueeze logo, remove unused CSS and templates 2020-07-05 03:12:18 +02:00
Min RK
e5a6119505 Merge pull request #3090 from minrk/words-matter 2020-07-03 12:27:08 +02:00
Félix-Antoine Fortin
d80dab284d Update version in docs/rest-api.yaml 2020-06-30 08:59:29 -04:00
Bruno P. Kinoshita
9d556728bb Add padding for the span with user name and logout button (responsive mode only) 2020-06-25 23:31:54 +12:00
Bruno P. Kinoshita
4369e2cbfa Adjust jupyterhub logo margin-left in responsive mode 2020-06-25 23:31:54 +12:00
Bruno P. Kinoshita
ef4455bb67 Closes #2182 display hamburger menu only if user variable is present (in responsive mode) 2020-06-25 23:31:54 +12:00
Min RK
76c9111d80 Merge pull request #3089 from kinow/redirect-with-parameters 2020-06-25 11:08:17 +02:00
Bruno P. Kinoshita
946ed844c5 Update jupyterhub/handlers/base.py
Co-authored-by: Min RK <benjaminrk@gmail.com>
2020-06-25 19:41:46 +12:00
Min RK
cceb652039 TODO is TODONE
Co-authored-by: Georgiana Elena <georgiana.dolocan@gmail.com>
2020-06-24 20:19:44 +02:00
Min RK
6e988bf587 call it allowed_users
be clearer since it's users vs groups, etc.
2020-06-24 13:29:42 +02:00
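An example of the renamed configuration in jupyterhub_config.py (usernames and group are placeholders):

```python
c.Authenticator.allowed_users = {"alice", "bob"}  # was: whitelist
c.Authenticator.admin_users = {"alice"}
# For authenticators with group support, group_whitelist becomes allowed_groups:
# c.Authenticator.allowed_groups = {"analysts"}
```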
Simon Li
dbc6998375 Merge pull request #3102 from minrk/unpin-telemetry
loosen jupyter-telemetry pin
2020-06-23 14:18:40 +01:00
Bruno P. Kinoshita
1bdc9aa297 Escape/encode parameters with the next URL, add more tests 2020-06-24 00:18:55 +12:00
Bruno P. Kinoshita
73f1211286 Update append_query_parameters to have exclude=["none"] by default,
and avoid using dicts with url_concat, to have consistent tests
as otherwise in Python 3.5 the generated URLs could have parameters
in random order.
2020-06-23 22:06:57 +12:00
Min RK
3fece09dda loosen jupyter-telemetry pin
we don't want strict pinning in package dependencies
2020-06-23 10:13:31 +02:00
Min RK
7ad4b0c7cb update allowed/blocked language in docs
our words matter, let's be more mindful
2020-06-23 10:10:07 +02:00
Min RK
252015f50d Merge pull request #3071 from minrk/userdict-get 2020-06-23 10:03:13 +02:00
Min RK
b3cc235c8a Merge pull request #3087 from fcollonval/patch-1 2020-06-23 10:02:34 +02:00
Min RK
47d7af8f48 Merge pull request #3100 from mriedem/remove-old-print 2020-06-23 09:58:00 +02:00
Matt Riedemann
8528684dc4 Remove old context-less print statement
This was added in PR #2721 and by default results in just printing
out "10" without any context when starting the hub service. This
simply removes the orphan print statement.

I'm open to changing this to a debug log statement with context if
someone finds that useful, e.g.:

`self.log.debug('Effective init_spawners_timeout: %s', init_spawners_timeout)`
2020-06-22 15:35:15 -05:00
Bruno P. Kinoshita
d4ce3aa731 Add unit tests 2020-06-20 22:51:16 +12:00
Min RK
ec710f4d90 test subclass priority when overriding old methods 2020-06-18 11:50:44 +02:00
Bruno P. Kinoshita
14378f4cc2 Include the query string parameters when redirecting to a new URL 2020-06-17 22:37:20 +12:00
Min RK
cc8e780653 rename white/blacklist allowed/blocked
- group_whitelist -> allowed_groups

still todo: handle deprecated signatures in check_whitelist methods while preserving subclass overrides
2020-06-15 14:40:44 +02:00
Frédéric Collonval
5bbf584cb7 Make delete_invalid_users configurable 2020-06-13 15:58:46 +02:00
Erik Sundell
b5defabf49 Merge pull request #3086 from manics/sshspawner
Replace zonca/remotespawner with NERSC/sshspawner
2020-06-13 14:05:05 +02:00
Simon Li
2d1f91e527 Replace zonca/remotespawner with NERSC/sshspawner
https://github.com/zonca/remotespawner is archived, the readme recommends https://github.com/jupyterhub/batchspawner
2020-06-13 11:47:34 +01:00
Tim Head
1653ee77ed Merge pull request #3084 from elgalu/patch-2
Remove already done named servers from roadmap
2020-06-13 09:55:26 +02:00
Leo Gallucci
10f09f4f70 Remove already done named servers from roadmap
Remove already done "UI for managing named servers" from the roadmap
2020-06-12 18:00:00 +02:00
Min RK
b7f277147b Merge pull request #3057 from GeorgianaElena/add_config_warn 2020-06-12 17:21:26 +02:00
Min RK
f3be735eeb Merge pull request #3082 from ChameleonCloud/fix-missing-static-files 2020-06-12 17:19:35 +02:00
Georgiana Elena
3e855eb1be Merge pull request #3083 from minrk/docker-demo-build
build jupyterhub/jupyterhub-demo image on docker hub
2020-06-12 12:10:27 +03:00
Min RK
98dc1f71db build jupyterhub/jupyterhub-demo image on docker hub 2020-06-12 10:03:34 +02:00
Jason Anderson
703703a648 Ensure client dependencies build before wheel
Bug #2852 describes an issue where templates cannot be found by
JupyterHub when using the Docker images built out of this repo. The
issue turned out to be due to missing node_modules at the time of build.

There is a hook in the `package.json` that causes node_modules to be
copied to the static/components directory post-install. If this is not
run, those components are not in the static directory and thus are not
included in the wheel when it is built.

Fix #2905 fixed one problem--the `bower-lite` hook script wasn't copied
to the Docker image, and so the hook couldn't run, but the other issue
is that the client dependencies are never explicitly built. They must be
built prior to the wheel build, and the hook script must have run so
they are copied to the ./static folder, which is included in the wheel
build thanks to [MANIFEST.in][1]

.. note::

   This removes the verbose flag from the wheel build command. The
   reason is that it generates a lot of writes to stdout. It seems that
   wheel sometimes (or always) switches to non-blocking mode, which can cause
   EAGAIN to be raised, which leads to fun errors like:

     BlockingIOError(.., 'write could not complete without blocking', ..)

   The wheels fail to build if this error is raised. Removing the verbosity
   flag is a quick solution (it drastically reduces writes to STDOUT), but
   comes at the cost of more trouble debugging a failed wheel build. Adding
   the "-v" back in the Dockerfile when debugging a build failure is still
   possible. [Credit: @vbraun][2]

.. note::

   This commit also removes some extraneous COPY operations during the
   Docker build, in particular the /src/jupyterhub/share directory is
   not used unless users have explicitly overridden their
   jupyterhub_config.py to include it somehow. If the default
   data_files_path behavior is used, JupyterHub should find the proper
   static directory when the application loads.

Fixes: #2852

[1]: https://packaging.python.org/guides/using-manifest-in/
[2]:
https://github.com/travis-ci/travis-ci/issues/4704#issuecomment-348435959
2020-06-11 15:15:56 -05:00
Yuvi Panda
8db8df6d7a Merge pull request #3081 from minrk/env-config-priority 2020-06-11 18:23:37 +05:30
Min RK
744430ba76 Merge pull request #3059 from GeorgianaElena/jh-demo-img 2020-06-11 10:45:01 +02:00
Min RK
45b858c5af Merge pull request #3055 from minrk/document-admin-service 2020-06-11 10:43:23 +02:00
Min RK
d4b5373c05 synchronize implementation of expiring values
- base Expiring class
- ensures expiring values (OAuthCode, OAuthAccessToken, APIToken) are not returned from `find`
- all expire appropriately via purge_expired
2020-06-11 10:40:06 +02:00
Min RK
aba55cc093 implement UserDict.get
behaves more like one would expect (same as try get-key, except: return default)
without relying on cache presence or underlying key type (integer only)
2020-06-11 10:32:55 +02:00
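A hedged sketch of the described semantics (UserDict's real __getitem__ also accepts ids and ORM users, which a plain dict.get would not):

```python
def userdict_get(users, key, default=None):
    # Equivalent to item access wrapped in try/except KeyError,
    # returning a default instead of raising.
    try:
        return users[key]
    except KeyError:
        return default
```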
Min RK
5957a37933 Merge pull request #3079 from manics/allow_fail-masterdeps 2020-06-11 10:31:19 +02:00
Min RK
d20a33a0e4 Merge pull request #3078 from gatoniel/patch-1 2020-06-11 10:30:21 +02:00
Min RK
df35268bfe make Spawner.environment config highest priority
so that it can override 'default' env variables like JUPYTERHUB_API_URL

use with caution!
2020-06-11 09:45:18 +02:00
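An example jupyterhub_config.py snippet (the URL is a placeholder):

```python
# With Spawner.environment now taking highest priority, even "default"
# variables such as JUPYTERHUB_API_URL can be overridden. Use with caution!
c.Spawner.environment = {
    "JUPYTERHUB_API_URL": "http://hub.internal.example.test/hub/api",
}
```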
Simon Li
c357d02b56 Allow python:3.8 + master dependencies to fail
Follow-up from https://github.com/jupyterhub/jupyterhub/pull/3076
2020-06-10 14:53:58 +01:00
niklas netter
4eb22821f2 no_proxy does work 2020-06-10 14:51:37 +02:00
niklas netter
b92ea54eda proxy settings might cause authentication errors 2020-06-10 14:16:36 +02:00
Bruno P. Kinoshita
522ef3daea Add Configuration Reference 2020-06-08 23:49:31 +12:00
Tim Head
77edffd695 Merge pull request #3076 from Carreau/traitlets-master
Test with some master dependencies.
2020-06-07 09:33:23 +02:00
Matthias Bussonnier
a8bc4f8a4a Test with some master dependencies.
This does some of the test with the latest traitlets.
We are looking into making a 5.0 release and would like to have some
confidence that it does not break too many things.
2020-06-05 15:05:10 -07:00
Georgiana Elena
66c3760b02 Update jupyterhub/app.py
Co-authored-by: Erik Sundell <erik.i.sundell@gmail.com>
2020-06-03 14:30:51 +03:00
Erik Sundell
fd28e224f2 Merge pull request #3067 from Zsailer/telemetry-dependency
pin jupyter_telemetry dependency
2020-06-01 23:03:30 +02:00
Zsailer
da3fedb5aa pin jupyter_telemetry dependency 2020-06-01 12:41:22 -07:00
GeorgianaElena
e4e4d472b8 Add JupyterHub Demo docker image 2020-05-28 17:35:42 +03:00
GeorgianaElena
bcbc68dd82 Warn if both bind_url and ip/port/base_url are set 2020-05-27 21:05:01 +03:00
Simon Li
c7df0587d2 Merge pull request #3056 from GeorgianaElena/remove_issue_templates
Use the issue templates from the central repo
2020-05-26 15:56:11 +01:00
GeorgianaElena
cd36733858 Remove the issue templates so that the ones from the central repo jupyterhub/.github take effect 2020-05-26 14:49:36 +03:00
Min RK
6bf4f3b2aa document upgrading from api_tokens to services config 2020-05-26 13:40:21 +02:00
Simon Li
12d81ac07a Merge pull request #3054 from jtpio/black-github-link
Update links to the black GitHub repository
2020-05-25 18:49:27 +01:00
Jeremy Tuloup
d60fa9a400 Update links to the black GitHub repository 2020-05-25 16:10:08 +02:00
Min RK
81d423d6c6 Merge pull request #3040 from romainx/#1018 2020-05-19 15:06:37 +02:00
Min RK
069b477ff3 Merge pull request #3013 from twalcari/feature/spawn_query_arguments 2020-05-19 15:06:17 +02:00
Min RK
cf9046ea47 Merge pull request #3046 from ceocoder/patch-1 2020-05-19 14:58:14 +02:00
Min RK
71a25d4514 Merge pull request #3042 from rabsr/2925_start_my_server_fails 2020-05-19 14:57:44 +02:00
Tim Head
2ff7d05b15 Merge pull request #3047 from consideRatio/health-endpoint-debug-log 2020-05-15 14:39:08 +02:00
Tim Head
bdb29df82a Merge pull request #3048 from mhwasil/disable-proxy-buffering 2020-05-15 14:38:28 +02:00
Mohammad Wasil
0dbad9bd99 Disable proxy_buffering to make the progress bar work when using an nginx reverse proxy 2020-05-15 13:51:50 +02:00
Erik Sundell
2991d2d1f1 Log successful /health requests on the debug level
They are less relevant than other requests and could very well end up
cluttering the logs. It is not uncommon for these requests to be made
every second or every other second.
2020-05-15 11:48:48 +02:00
dp
a36a56b4ff docs: add proxy_http_version 1.1
add proxy_http_version 1.1 as it is required for KeepAlive connections
2020-05-14 16:16:07 -07:00
romainx
0e59ab003a Readme updated according to review 2020-05-11 14:54:32 +02:00
ragar64
d67b71b7ae #2925: Changing start my server button link to spawn url once server is stopped 2020-05-08 21:47:41 +05:30
romainx
8859bf8842 #1018 PAM added in prerequisites 2020-05-08 06:06:42 +02:00
Thijs Walcarius
4e29342711 Test error path when parsing query arguments 2020-05-06 11:27:04 +02:00
Min RK
8a3790b01f Adding pagination in the admin panel (#2929)
Adding pagination in the admin panel
2020-05-06 11:09:54 +02:00
Min RK
0d245fe4e4 move pagination info next to pagination links
at the bottom
2020-05-06 10:47:08 +02:00
Min RK
da34c6cb34 remove hardcoded path from pagination links
allows pagination of other pages
2020-05-06 10:44:53 +02:00
Min RK
9c0e5ba9c2 Merge pull request #2971 from mriedem/issues/2970-singleuser-version-logging
Only log hub / singleuser version mismatch once
2020-05-06 09:23:04 +02:00
Tim Head
289c3bc3c1 Merge pull request #3035 from vilhelmen/server_ver 2020-05-05 10:00:27 +02:00
Yuvi Panda
3adfec0693 Merge pull request #3020 from stv0g/ipv6-spawner-ip
Support kubespawner running on an IPv6-only cluster
2020-05-04 08:14:01 +05:30
Will Starms
137591f458 remove fixed position, causes Z ordering issues with the bottom of the users list 2020-05-01 19:09:19 -05:00
Michael Blackmon
debd297494 restrict submit handler to only operate on targeted form 2020-04-20 11:33:38 -04:00
Michael Blackmon
10bb5ef3c0 wrap button & widget in feedback-container; add js block with onsubmit handler 2020-04-20 11:00:40 -04:00
Michael Blackmon
42e7d1a3fb put submit button & widget in feedback-container; extend template to include script block with form onsubmit handler 2020-04-20 10:59:34 -04:00
Michael Blackmon
5fbd2838c9 add style class for feedback, widget and container 2020-04-20 10:39:57 -04:00
Michael Blackmon
17dde3a2a9 remove margin styling from submit button 2020-04-20 10:38:19 -04:00
Tim Head
8d50554849 Merge pull request #3022 from joshmeek/docs/index_verbage 2020-04-18 21:17:21 +02:00
Tim Head
493eb03345 Merge branch 'master' into docs/index_verbage 2020-04-18 21:04:11 +02:00
Tim Head
1beac49f4a Merge pull request #3015 from jtpio/admin-template 2020-04-18 19:01:39 +02:00
Tim Head
f230be5ede Merge branch 'master' into admin-template 2020-04-18 15:44:05 +02:00
Steffen Vogel
6283e7ec83 support kubespawner running on an IPv6-only cluster 2020-04-17 19:36:56 +02:00
Thijs Walcarius
2438766418 Show error message when spawning via query-arguments failed. Add options_from_query function 2020-04-17 16:47:55 +02:00
Thijs Walcarius
6f2e409fb9 Allow bypassing of spawn form by calling options in query arguments of /spawn 2020-04-17 16:47:55 +02:00
Carol Willing
aa459aeb39 Merge pull request #3021 from rkdarst/fix-docs
Fix docs CI test failure: duplicate object description
2020-04-17 07:35:24 -07:00
Richard Darst
9d6e8e6b6f Temporary patch autodoc-traits to fix build error [temporary]
- This commit should be removed later after autodoc-traits is fixed upstream
2020-04-17 11:43:49 +03:00
Richard Darst
e882e7954c docs: use recommonmark as an extension
- source_parsers deprecated in sphinx 3.0
- Since sphinx 1.4, it can (should) be used as a direct extension:
  https://github.com/readthedocs/recommonmark/pull/43
2020-04-17 11:11:24 +03:00
Richard Darst
c234463a67 sphinx conf.py: update add_stylesheet -> add_css_file
- Seems to be added in 1.0:
  https://www.sphinx-doc.org/en/latest/changes.html#release-1-0-jul-23-2010
2020-04-17 11:11:24 +03:00
Georgiana Elena
391320a590 Merge pull request #3025 from twalcari/patch-1
Fix broken test due to BeautifulSoup 4.9.0 behavior change
2020-04-17 11:10:25 +03:00
Thijs Walcarius
8648285375 Fix broken test due to BeautifulSoup 4.9.0 behavior change
cfr. https://bugs.launchpad.net/beautifulsoup/+bug/1871335
2020-04-17 10:00:25 +02:00
Josh Meek
485c7b72c2 Fix use of auxiliary verb on index.rst 2020-04-16 09:36:52 -04:00
Steffen Vogel
e93cc83d58 remove unused imports 2020-04-16 12:12:22 +02:00
Jeremy Tuloup
39b9f592b6 Fix user_row endblock in admin template 2020-04-08 17:22:25 +02:00
Tim Head
1f515464fe Merge pull request #3010 from GeorgianaElena/pip_for_docs
Use pip instead of conda for building the docs on RTD
2020-04-02 13:50:27 +02:00
GeorgianaElena
854d0cbb86 Add package requirements to docs build 2020-04-02 10:32:11 +03:00
GeorgianaElena
87212a7414 Remove comment referencing conda environment 2020-04-02 08:55:04 +03:00
GeorgianaElena
2338035df2 Use latest rtd docker image 2020-04-01 14:25:08 +03:00
GeorgianaElena
ea132ff88d Downgrade bootprint 2020-04-01 14:23:35 +03:00
GeorgianaElena
78c14c05f3 Switch to pip on rtd 2020-04-01 14:23:35 +03:00
Erik Sundell
1d2b36e9b0 Merge pull request #3001 from GeorgianaElena/update_issue_templates
Update issue templates
2020-03-26 19:16:40 +01:00
Georgiana Elena
a929ff84c7 Update .github/ISSUE_TEMPLATE/config.yml
Co-Authored-By: Simon Li <orpheus+devel@gmail.com>
2020-03-26 20:03:02 +02:00
GeorgianaElena
0d5bbc16cf Hide comments 2020-03-26 18:30:50 +02:00
GeorgianaElena
ee1fd5a469 Have less issue templates 2020-03-26 18:14:26 +02:00
GeorgianaElena
a702f36524 Update issue templates 2020-03-26 18:14:26 +02:00
GeorgianaElena
59edc6d369 Redirect support questions to Discourse 2020-03-26 18:14:26 +02:00
Georgiana Elena
907b77788d Merge pull request #2978 from danlester/master
SpawnHandler POST with user form options displays the spawn-pending page
2020-03-26 17:13:08 +02:00
Georgiana Elena
914a3eaba5 Merge pull request #2997 from thuvh/fix_typo_installation_guide_hard
fix docs firewall instructions
2020-03-26 16:32:09 +02:00
Hoai-Thu Vuong
b1f048f2ef fix wrong name on firewall 2020-03-24 00:03:26 +07:00
Carol Willing
53d76ad3a2 Merge pull request #2995 from jupyterhub/choldgraf-patch-1
updating docs theme
2020-03-23 08:49:26 -07:00
Chris Holdgraf
7af70b92e9 Update conf.py 2020-03-23 08:29:52 -07:00
Chris Holdgraf
3425eca4ff updating docs theme 2020-03-23 08:10:49 -07:00
Carol Willing
9e0bf9cd9f Merge pull request #2944 from minrk/one-to-one
make spawner:server relationship explicitly one to one
2020-03-22 09:22:46 -07:00
Carol Willing
3118918098 Update jupyterhub/app.py
Minor comment edit
2020-03-22 09:09:49 -07:00
Carol Willing
6a995c822c Merge pull request #2972 from mriedem/contributor-docs
Update contributor docs
2020-03-22 09:04:29 -07:00
Matt Riedemann
a09f535e8f Log hub/singleuser version mismatch once per combo
In case there are multiple singleuser notebooks at different
versions we want to log each of those mismatches as a warning
so this changes the global _version_mismatch_warning_logged flag
from a bool to a dict keyed by the hub/singleuser version mismatch
combination. A test wrinkle is added for that scenario.

Part of #2970
2020-03-16 11:10:13 -04:00
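A hedged sketch of the per-combination guard (module-level names are illustrative):

```python
import logging

_version_mismatch_warning_logged = {}

def log_version_mismatch(hub_version, singleuser_version, log=logging.getLogger(__name__)):
    # Warn once per (hub, singleuser) version combination instead of
    # once per active server or only once globally.
    key = (hub_version, singleuser_version)
    if hub_version != singleuser_version and not _version_mismatch_warning_logged.get(key):
        log.warning(
            "jupyterhub version %s != jupyterhub-singleuser version %s",
            hub_version,
            singleuser_version,
        )
        _version_mismatch_warning_logged[key] = True
```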
Dan Lester
a60ac53c87 black formatting 2020-03-12 12:44:34 +00:00
Min RK
d2c81bc1d0 Merge pull request #2966 from mriedem/issues/2965-doc-user-options
api-ref: document user_options for server resource
2020-03-12 13:04:25 +01:00
Dan Lester
3908c6d041 SpawnHandler POST with user form options displays the spawn-pending page just like the GET handler always did 2020-03-10 16:17:01 +00:00
Matt Riedemann
c50e1f9852 Update contributor doc wording around sqlite
sqlite3 should be available from the python standard library
so there shouldn't be a need to install native packages.
2020-03-09 15:11:45 -04:00
Matt Riedemann
6954e03bb4 Update contributor docs
As a new contributor to jupyterhub it took a while to get
up and running locally, mainly because I didn't have sqlite
installed, but also because I was flipping between README,
CONTRIBUTING and the actual contributing docs, which are all
a little bit different.

This does a few things:

- Updates the contributor sphinx docs to mention that how
  one chooses to isolate their development environment is
  up to them with a link to the detailed forum thread on
  that topic.
- Updates the contributor sphinx docs to mention sqlite and
  database setup in general. While in here some trailing
  whitespaces are cleaned up.
- Leave a comment in CONTRIBUTING.md about the redundant
  information in the docs on getting a development environment
  setup. Long-term we should really get those merged so there
  is a single authoritative document on how to get a dev env
  setup for contributing to jupyterhub.
- Link to the jupyterhub gitter channel for asking questions.
2020-03-04 13:09:48 -05:00
Matt Riedemann
08eee9309e Only log hub / singleuser version mismatch once
If your jupyterhub and jupyterhub-singleuser instances
are running at different minor or greater versions a
warning gets logged per active server which can be a lot
when you have hundreds of active servers.

This adds a flag to that version mismatch logging logic
such that the warning is only logged once per restart
of the hub server.

Closes issue #2970
2020-03-04 11:40:23 -05:00
Juan Cruz-Benito
6ed41b38ed Improving pagination for last pages, always show the last page 2020-03-03 14:50:06 +01:00
Matt Riedemann
6b521e0b86 api-ref: document user_options for server resource
APIHandler.server_model unconditionally returns the Spawner's
user_options dict but it wasn't mentioned in the API reference
so it's added here. The description is taken from the docstring
on Spawner.user_options.

Closes issue #2965
2020-03-02 12:12:29 -05:00
Tim Head
1bdc66c75b Merge pull request #2960 from jtpio/named-servers-enter
Start named servers by pressing the Enter key
2020-03-02 10:11:32 +01:00
Jeremy Tuloup
e30b2ca875 Remove unused variables in home.js 2020-03-02 10:02:38 +01:00
Juan Cruz-Benito
1f3ed58570 Fixing pagination numbers. We begin at page 1 not 0 2020-02-28 18:03:29 +01:00
Juan Cruz-Benito
6a31b640c1 Removing more f-strings 2020-02-28 17:56:13 +01:00
Juan Cruz-Benito
ed97150311 Fixing check 2020-02-28 17:53:54 +01:00
Juan Cruz-Benito
78eb77f157 Enforcing checks of page number 2020-02-28 17:47:12 +01:00
Juan Cruz-Benito
f152288d76 Replacing f strings 2020-02-28 17:46:50 +01:00
Juan Cruz-Benito
492c5072b7 Removing print statements 🤦‍♂️ 2020-02-28 17:31:19 +01:00
Juan Cruz-Benito
534e251f97 Adding links generation inside the Pagination class 2020-02-28 17:15:19 +01:00
Jeremy Tuloup
cfcd85a188 Start named servers by pressing the Enter key 2020-02-28 15:24:37 +01:00
Erik Sundell
fd3b5ebbad Merge pull request #2959 from jtpio/patch-1
Add .vscode to gitignore
2020-02-28 15:05:34 +01:00
Jeremy Tuloup
1a2d5913eb Add .vscode to gitignore 2020-02-28 14:55:41 +01:00
Juan Cruz-Benito
8f46d89ac0 Adding info method to pagination and related items in admin template 2020-02-28 13:19:53 +01:00
Juan Cruz-Benito
e82c06cf93 Removing display_msg and record name since they can be coded directly where they're needed in the templates 2020-02-28 12:31:53 +01:00
Juan Cruz-Benito
392525571f Documenting get_page_args method 2020-02-28 12:14:59 +01:00
Juan Cruz-Benito
53927f0490 Pre-commit fixes 2020-02-28 12:05:50 +01:00
Juan Cruz-Benito
ede71db11a Moving Pagination class to its own file 2020-02-28 12:04:53 +01:00
Juan Cruz-Benito
a2e2b1d512 As pointed out in the PR, Pagination isn't a Handler 2020-02-28 12:01:56 +01:00
Tim Head
cff18992ad Merge pull request #2953 from minrk/auth-bearer
preserve auth type when logging obfuscated auth header
2020-02-28 11:48:10 +01:00
Tim Head
b2c0b5024c Merge pull request #2956 from manics/pin-sphinx-theme
[MRG] Pin sphinx theme
2020-02-28 11:28:21 +01:00
Simon Li
996483de94 Pin sphinx theme (https://github.com/jupyterhub/binderhub/pull/1070)
Closes https://github.com/jupyterhub/jupyterhub/issues/2955
2020-02-27 17:35:52 +00:00
Min RK
f4b7b85b02 preserve auth type when logging obfuscated auth header
Authorization header has the form "<type> <credentials>"

rather than checking for "token" only, preserve type value, which could be Bearer, Basic, etc.
2020-02-27 13:49:47 +01:00
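A hedged sketch of the idea (not the exact handler code):

```python
def obfuscate_auth_header(value):
    # Keep the auth type ("token", "Bearer", "Basic", ...) and hide only
    # the credentials part.
    auth_type, _, _credentials = value.partition(" ")
    return auth_type + " [secret]"

print(obfuscate_auth_header("Bearer abc123"))  # -> "Bearer [secret]"
```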
Min RK
b4391d0f79 Merge pull request #2952 from kinow/fix-spawn-url
Keep the URL fragments after spawning an application
2020-02-26 14:05:45 +01:00
Juan Cruz-Benito
f49cc1fcf0 Improving description of potential parameters 2020-02-26 10:40:44 +01:00
Juan Cruz-Benito
18205fbf4a Fixing black formatting issues 2020-02-26 10:36:36 +01:00
Bruno P. Kinoshita
2f6ea71106 Add not_running.js to modify button spawn_url 2020-02-26 09:28:12 +13:00
Juan Cruz-Benito
7b6ac158cc Removing python-paginate package and adding minimal Pagination class to enable a pagination API for AdminHandler 2020-02-25 19:11:09 +01:00
Juan Cruz-Benito
facf52f117 Removing unneeded pass of request to the template 2020-02-25 17:03:01 +01:00
Juan Cruz-Benito
f36796dd85 Merge branch 'master' into add_pagination_admin 2020-02-25 17:01:01 +01:00
Min RK
0427f8090f Merge pull request #2773 from kinow/fix-ssl-url-message
Handle the protocol when ssl is enabled and log the right URL
2020-02-25 13:35:32 +01:00
Tim Head
da86eaad97 Merge pull request #2951 from kinow/typos-2
[doc] Fix couple typos in the documentation
2020-02-24 09:21:31 +01:00
Bruno P. Kinoshita
3b05135f11 Fix couple typos 2020-02-24 20:48:42 +13:00
Bruno P. Kinoshita
76afec8adb Update app.bind_url and proxy.public_url when (external) SSL is enabled 2020-02-24 15:51:09 +13:00
Tim Head
06da90ac76 Merge pull request #2950 from alexdriedger/patch-2
Docs: Fixed grammar on landing page
2020-02-23 09:11:43 +01:00
Alex Driedger
7e3caf7f48 Fixed grammar on landing page 2020-02-22 16:37:34 -08:00
Tim Head
e08552eb99 Merge pull request #2941 from minrk/allow-implicit-spawn
Allow implicit spawn via javascript redirect
2020-02-22 07:27:17 +01:00
Tim Head
5fb403af4b Merge pull request #2946 from minrk/user-redirect-faq
add general faq
2020-02-22 07:24:24 +01:00
Min RK
84acdd5a7f handle uselist=False in our relationship expiry 2020-02-21 14:10:36 +01:00
Min RK
3e6abb7a5e add general faq
and put a first q about user-redirect
2020-02-21 13:52:03 +01:00
Min RK
0315f986db Merge pull request #2940 from kinow/add-more-docs-for-cookies
[doc] Add more docs about Cookies used for authentication in JupyterHub
2020-02-21 10:18:29 +01:00
Min RK
7735c7ddd4 make spawner:server backref explicitly one-to-one
using backref(uselist=False), single_parent=True
2020-02-21 10:09:08 +01:00
Bruno P. Kinoshita
239a4c63a2 Add note that not all proxy implementations use an auth token 2020-02-21 10:35:30 +13:00
Bruno P. Kinoshita
f5bd5b7751 Incorporate review feedback 2020-02-21 10:32:11 +13:00
Bruno P. Kinoshita
287b0302d9 Add more docs about authentication and cookies, using text posted by MinRK on Discourse 2020-02-21 10:22:10 +13:00
Tim Head
44e23aad78 Merge pull request #2936 from minrk/make-it-fast-break-everything-maybe
make init_spawners check O(running servers) not O(total users)
2020-02-20 17:06:24 +01:00
Tim Head
606775f72d Remove unused variable 2020-02-20 16:56:03 +01:00
Min RK
9a6308f8d9 docs: use metachannel for faster environment solve (#2943)
docs: use metachannel for faster environment solve
2020-02-20 15:55:36 +01:00
Min RK
0c4db2d99f docs: use metachannel for faster environment solve
rtd is having memory issues with conda-forge, which should hopefully be fixed by metachannel

this should also make things quicker for everyone
2020-02-20 15:54:43 +01:00
Min RK
938970817c update docs environments (#2942)
update docs environments
2020-02-20 15:36:10 +01:00
Min RK
d2a1b8e349 update docs environments
- python 3.7
- node 12
- sync recommonmark 0.6
2020-02-20 15:32:55 +01:00
Min RK
4477506345 Merge pull request #2930 from JohnPaton/patch-1
Add favicon to the base page template
2020-02-20 14:23:06 +01:00
Min RK
0787489e1b maybe_future needs a future! 2020-02-20 12:53:15 +01:00
Min RK
436757dd55 handle implicit spawn with a javascript redirect
less dangerous than using a Location redirect, so remove conflicts

delay is a user-configurable timer (0 = no implicit spawn, default)
2020-02-20 12:43:39 +01:00
Min RK
a0b6d8ec6f add allow_implicit_spawn setting
- warn that there are known issues associated with enabling it
- it is inherently incompatible with named servers
2020-02-20 12:12:55 +01:00
Min RK
b92efcd7b0 spawner test assumed app.users is populated 2020-02-20 09:37:08 +01:00
Erik Sundell
3e17b47ec3 Merge pull request #2939 from kinow/fix-services-link
[doc] Use fixed commit plus line number in github link
2020-02-19 01:09:51 +01:00
Bruno P. Kinoshita
31c0788bd9 Move cookies to the end of the list (ssl, proxy, and then cookies) 2020-02-19 12:56:02 +13:00
Bruno P. Kinoshita
dec3244758 Use fixed commit plus line number in github link 2020-02-19 12:39:23 +13:00
Erik Sundell
91e385efa7 Merge pull request #2938 from kinow/fix-link-to-ssl-doc
[doc] Fix link to SSL encryption from troubleshooting page
2020-02-18 22:55:07 +01:00
Bruno P. Kinoshita
13313abb37 Fix link to SSL encryption from troubleshooting page 2020-02-19 10:46:49 +13:00
Min RK
79a51dfdce make init_spawners check O(running servers) not O(total users)
query on Server objects instead of User objects

avoids lots of ORM work on startup since there are typically a small number of running servers
relative to the total number of users

this also means that the users dict is not fully populated. Is that okay? I hope so.
2020-02-18 17:10:19 +01:00
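A hedged sketch of the startup-query change (column and session names follow jupyterhub.orm loosely; treat it as illustrative, not the actual init_spawners code):

```python
from jupyterhub import orm

def running_spawners(db):
    # Query only spawner rows that currently reference a server,
    # instead of loading every user at startup.
    return db.query(orm.Spawner).filter(orm.Spawner.server_id != None).all()  # noqa: E711
```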
JohnPaton
a999ac8f07 Use only rel="icon"
This is the officially recommended method from MDN
https://developer.mozilla.org/en-US/docs/Learn/HTML/Introduction_to_HTML/The_head_metadata_in_HTML
2020-02-14 16:51:27 +01:00
John Paton
a3e3f24d2d Add favicon to the base page template
This was missing before. Giving it its own named block will let users customize it if they wish.
2020-02-14 16:35:48 +01:00
Juan Cruz-Benito
b2b85eb548 Improving comments 2020-02-14 11:47:43 +01:00
Juan Cruz-Benito
95c5ebb090 Fixing pre-commit errors 2020-02-14 11:14:07 +01:00
Juan Cruz-Benito
3d0da4f25a Adding python-paginate package and using it to paginate admin panel 2020-02-13 18:35:17 +01:00
Tim Head
bc7bb5076f Merge pull request #2914 from jgwerner/trouble-shooting
[MRG] Add troubleshooting topics
2020-02-13 08:06:20 +01:00
Greg
a80561bfc8 updates based on pr comments
Signed-off-by: Greg <werner.greg@gmail.com>
2020-02-05 16:13:15 -05:00
Erik Sundell
22f86ad76c Merge pull request #2917 from minrk/doc-remove
rest api: fix schema for remove parameter in rest api
2020-01-31 17:31:31 +01:00
Min RK
0ae9cfa42f fix schema for remove parameter in rest api
it wasn't showing up properly since it's a *property* of the body, not the body itself
2020-01-31 17:18:30 +01:00
Min RK
ff8c4ca8a3 update bootprint to v4 2020-01-31 17:16:57 +01:00
Greg
ed4ed4de9d simplify text
Signed-off-by: Greg <werner.greg@gmail.com>
2020-01-29 12:49:52 -05:00
Greg
d177b99f3a add trouble shooting topics
Signed-off-by: Greg <werner.greg@gmail.com>
2020-01-29 12:42:42 -05:00
Erik Sundell
65de8c4916 Merge pull request #2904 from reneluria/patch-doc
Several fixes to the doc
2020-01-24 17:25:36 +01:00
Min RK
178f9d4c51 Merge pull request #2905 from consideRatio/solve-docker-template-issue
Add what we need with some margin to Dockerfile's build stage
2020-01-23 09:57:12 +01:00
Min RK
9433564c5b bump reorder-imports hook (#2899)
bump reorder-imports hook
2020-01-23 09:54:46 +01:00
Erik Sundell
5deba0c4ba Copy all files to Dockerfile's build stage
Not exactly all though, as some will be ignored by the .dockerignore
file. This change ensures we don't get future issues, like we've had
recently, caused by a failure to update what needs to be copied to the
build stage.
2020-01-23 07:20:53 +01:00
Erik Sundell
5234d4c7ae Add bower-lite script to Dockerfile
This fixes #2852 by adding a script that is part of package.json. But is this
enough? Should we perhaps look in MANIFEST.in and copy some more files
listed there?

This is all thanks to people coming together and helping out figuring
out the issue in https://github.com/jupyterhub/jupyterhub/issues/2852.
Thank you @shingo78 for spotting that we missed bower-lite and its role
and all others who reported and helped debug this!
2020-01-23 07:20:40 +01:00
Erik Sundell
1bea28026e Merge pull request #2907 from consideRatio/fix-generate-config-bug
Fix --generate-config bug when specifying a filename
2020-01-23 07:19:11 +01:00
Erik Sundell
9a5c8ff058 Fix --generate-config bug when specifying a filename
This commit fixes #2906 that was introduced due to #2824. See analysis
of issue in
https://github.com/jupyterhub/jupyterhub/issues/2906#issuecomment-577303510.
2020-01-22 19:30:16 +01:00
Rene Luria
2b183c9773 Several fixes to the doc
* sudo for configurable-http-proxy install
* fix sudo command for apt source
* fix $connection_upgrade variable in nginx configuration
2020-01-21 17:02:23 +01:00
Tim Head
5dee864afd fix: 'Non-ASCII character '\xc3' (#2901)
fix: 'Non-ASCII character '\xc3'
2020-01-20 09:15:56 +01:00
Greg
6fdf931515 update prometheus_log_method comments
Signed-off-by: Greg <werner.greg@gmail.com>
2020-01-17 12:32:50 -05:00
Greg
d126baa443 remove diaeresis
Signed-off-by: Greg <werner.greg@gmail.com>
2020-01-17 09:43:46 -05:00
Min RK
d1e2d593ff back to dev 2020-01-17 12:55:42 +01:00
Min RK
800b6a6bc5 bump reorder-imports
removes (hopefully) unnecessarily specified language version
2020-01-17 12:48:17 +01:00
yuvipanda
e9bc25cce0 Run all tests for jupyter_server regardless of failure 2019-06-04 14:42:49 +02:00
yuvipanda
8f7e25f9a1 Don't make pip uninstall wait for human input 2019-06-04 14:24:30 +02:00
yuvipanda
399def182b Actually run jupyter_server test on Travis 2019-06-04 13:57:26 +02:00
yuvipanda
f830b2a417 Try to test notebook package is still uninstalled 2019-06-04 13:45:57 +02:00
yuvipanda
cab1bca6fb Use jupyter_server if notebook package isn't available 2019-06-04 13:42:52 +02:00
yuvipanda
5eb7a14a33 [WIP] Add support for Jupyter Server 2019-06-04 13:30:28 +02:00
202 changed files with 5925 additions and 3181 deletions


@@ -1,75 +0,0 @@
# Python CircleCI 2.0 configuration file
# Updating CircleCI configuration from v1 to v2
# Check https://circleci.com/docs/2.0/language-python/ for more details
#
version: 2
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run:
          name: build images
          command: |
            docker build -t jupyterhub/jupyterhub .
            docker build -t jupyterhub/jupyterhub-onbuild onbuild
            docker build -t jupyterhub/jupyterhub:alpine -f dockerfiles/Dockerfile.alpine .
            docker build -t jupyterhub/singleuser singleuser
      - run:
          name: smoke test jupyterhub
          command: |
            docker run --rm -it jupyterhub/jupyterhub jupyterhub --help
  docs:
    # This is the base environment that Circle will use
    docker:
      - image: circleci/python:3.6-stretch
    steps:
      # Get our data and merge with upstream
      - run: sudo apt-get update
      - checkout
      # Update our path
      - run: echo "export PATH=~/.local/bin:$PATH" >> $BASH_ENV
      # Restore cached files to speed things up
      - restore_cache:
          keys:
            - cache-pip
      # Install the packages needed to build our documentation
      - run:
          name: Install NodeJS
          command: |
            # From https://github.com/nodesource/distributions/blob/master/README.md#debinstall
            curl -sL https://deb.nodesource.com/setup_13.x | sudo -E bash -
            sudo apt-get install -y nodejs
      - run:
          name: Install dependencies
          command: |
            python3 -m pip install --user -r dev-requirements.txt
            python3 -m pip install --user -r docs/requirements.txt
            sudo npm install -g configurable-http-proxy
            sudo python3 -m pip install --editable .
      # Cache some files for a speedup in subsequent builds
      - save_cache:
          key: cache-pip
          paths:
            - ~/.cache/pip
      # Build the docs
      - run:
          name: Build docs to store
          command: |
            cd docs
            make html
      # Tell Circle to store the documentation output in a folder that we can access later
      - store_artifacts:
          path: docs/build/html/
          destination: html
# Tell CircleCI to use this workflow when it builds the site
workflows:
  version: 2
  default:
    jobs:
      - build
      - docs


@@ -1,39 +0,0 @@
---
name: Issue report
about: Create a report to help us improve
---
<!---
Hi! Thanks for using JupyterHub.
If you are reporting an issue with JupyterHub, please use the GitHub search feature to check if your issue has been asked already. If it has, please add your comments to the existing issue.
Some tips:
- Running `jupyter troubleshoot` from the command line, if possible, and posting
its output would also be helpful.
- Running JupyterHub in `--debug` mode (`jupyterhub --debug`) can also be helpful for troubleshooting.
--->
**Describe the bug**
A clear and concise description of what the bug is.
<!---Add description here--->
**To Reproduce**
<!---
Please share the steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
--->
**Expected behavior**
<!---
A clear and concise description of what you expected to happen.
--->
**Compute Information**
- Operating System
- JupyterHub Version [e.g. 22]


@@ -1,22 +0,0 @@
---
name: Installation and configuration questions
about: Installation and configuration assistance
---
<!---
If you are reading this message, you have probably already searched the existing
GitHub issues for JupyterHub. If you haven't tried a search, we encourage you to do so.
If you are unsure where to ask your question (Jupyter, JupyterHub, JupyterLab, etc.),
please ask on our [Discourse Q&A channel](https://discourse.jupyter.org/c/questions).
If you have a quick question about JupyterHub installation or configuration, you
may ask on the [JupyterHub gitter channel](https://gitter.im/jupyterhub/jupyterhub).
:sunny: Please be patient. We are volunteers and will address your question when we are able. :sunny:
If after trying the above steps, you still have an in-depth installation or
configuration question, such as a possible bug, please file an issue below and include
any relevant details.
--->

.github/workflows/release.yml

@@ -0,0 +1,185 @@
# Build releases and (on tags) publish to PyPI
name: Release
# always build releases (to make sure wheel-building works)
# but only publish to PyPI on tags
on:
push:
pull_request:
jobs:
build-release:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.8
- uses: actions/setup-node@v1
with:
node-version: "14"
- name: install build package
run: |
pip install --upgrade pip
pip install build
pip freeze
- name: build release
run: |
python -m build --sdist --wheel .
ls -l dist
- name: verify wheel
run: |
cd dist
pip install ./*.whl
# verify data-files are installed where they are found
cat <<EOF | python
import os
from jupyterhub._data import DATA_FILES_PATH
print(f"DATA_FILES_PATH={DATA_FILES_PATH}")
assert os.path.exists(DATA_FILES_PATH), DATA_FILES_PATH
for subpath in (
"templates/page.html",
"static/css/style.min.css",
"static/components/jquery/dist/jquery.js",
):
path = os.path.join(DATA_FILES_PATH, subpath)
assert os.path.exists(path), path
print("OK")
EOF
# ref: https://github.com/actions/upload-artifact#readme
- uses: actions/upload-artifact@v2
with:
name: jupyterhub-${{ github.sha }}
path: "dist/*"
if-no-files-found: error
- name: Publish to PyPI
if: startsWith(github.ref, 'refs/tags/')
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
run: |
pip install twine
twine upload --skip-existing dist/*
publish-docker:
runs-on: ubuntu-20.04
services:
# So that we can test this in PRs/branches
local-registry:
image: registry:2
ports:
- 5000:5000
steps:
- name: Should we push this image to a public registry?
run: |
if [ "${{ startsWith(github.ref, 'refs/tags/') || (github.ref == 'refs/heads/main') }}" = "true" ]; then
# Empty => Docker Hub
echo "REGISTRY=" >> $GITHUB_ENV
else
echo "REGISTRY=localhost:5000/" >> $GITHUB_ENV
fi
- uses: actions/checkout@v2
# Setup docker to build for multiple platforms, see:
# https://github.com/docker/build-push-action/tree/v2.4.0#usage
# https://github.com/docker/build-push-action/blob/v2.4.0/docs/advanced/multi-platform.md
- name: Set up QEMU (for docker buildx)
uses: docker/setup-qemu-action@25f0500ff22e406f7191a2a8ba8cda16901ca018 # associated tag: v1.0.2
- name: Set up Docker Buildx (for multi-arch builds)
uses: docker/setup-buildx-action@2a4b53665e15ce7d7049afb11ff1f70ff1610609 # associated tag: v1.1.2
with:
# Allows pushing to registry on localhost:5000
driver-opts: network=host
- name: Setup push rights to Docker Hub
# This was set up by...
# 1. Creating a Docker Hub service account "jupyterhubbot"
# 2. Creating an access token for the service account specific to this
# repository: https://hub.docker.com/settings/security
# 3. Making the account part of the "bots" team, and granting that team
# permissions to push to the relevant images:
# https://hub.docker.com/orgs/jupyterhub/teams/bots/permissions
# 4. Registering the username and token as a secret for this repo:
# https://github.com/jupyterhub/jupyterhub/settings/secrets/actions
if: env.REGISTRY != 'localhost:5000/'
run: |
docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" -p "${{ secrets.DOCKERHUB_TOKEN }}"
# https://github.com/jupyterhub/action-major-minor-tag-calculator
# If this is a tagged build this will return additional parent tags.
# E.g. 1.2.3 is expanded to Docker tags
# [{prefix}:1.2.3, {prefix}:1.2, {prefix}:1, {prefix}:latest] unless
# this is a backported tag in which case the newer tags aren't updated.
# For branches this will return the branch name.
# If GITHUB_TOKEN isn't available (e.g. in PRs) returns no tags [].
- name: Get list of jupyterhub tags
id: jupyterhubtags
uses: jupyterhub/action-major-minor-tag-calculator@v1
with:
githubToken: ${{ secrets.GITHUB_TOKEN }}
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub:"
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub:noref"
- name: Build and push jupyterhub
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f # associated tag: v2.4.0
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
# tags parameter must be a string input so convert `gettags` JSON
# array into a comma separated list of tags
tags: ${{ join(fromJson(steps.jupyterhubtags.outputs.tags)) }}
# jupyterhub-onbuild
- name: Get list of jupyterhub-onbuild tags
id: onbuildtags
uses: jupyterhub/action-major-minor-tag-calculator@v1
with:
githubToken: ${{ secrets.GITHUB_TOKEN }}
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub-onbuild:"
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub-onbuild:noref"
- name: Build and push jupyterhub-onbuild
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f # associated tag: v2.4.0
with:
build-args: |
BASE_IMAGE=${{ fromJson(steps.jupyterhubtags.outputs.tags)[0] }}
context: onbuild
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ join(fromJson(steps.onbuildtags.outputs.tags)) }}
# jupyterhub-demo
- name: Get list of jupyterhub-demo tags
id: demotags
uses: jupyterhub/action-major-minor-tag-calculator@v1
with:
githubToken: ${{ secrets.GITHUB_TOKEN }}
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub-demo:"
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub-demo:noref"
- name: Build and push jupyterhub-demo
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f # associated tag: v2.4.0
with:
build-args: |
BASE_IMAGE=${{ fromJson(steps.onbuildtags.outputs.tags)[0] }}
context: demo-image
# linux/arm64 currently fails:
# ERROR: Could not build wheels for argon2-cffi which use PEP 517 and cannot be installed directly
# ERROR: executor failed running [/bin/sh -c python3 -m pip install notebook]: exit code: 1
platforms: linux/amd64
push: true
tags: ${{ join(fromJson(steps.demotags.outputs.tags)) }}

.github/workflows/test.yml

@@ -0,0 +1,246 @@
# This is a GitHub workflow defining a set of jobs with a set of steps.
# ref: https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions
#
name: Test
# Trigger the workflow on all PRs, but only on pushed tags or commits to the
# main/master branch, so that pushes to a PR's dedicated branch in a GitHub
# fork don't also trigger it.
on:
pull_request:
push:
workflow_dispatch:
defaults:
run:
# Declare that bash is used by default in this workflow's "run" steps.
#
# NOTE: bash will by default run with:
# --noprofile: Ignore ~/.profile etc.
# --norc: Ignore ~/.bashrc etc.
# -e: Exit directly on errors
# -o pipefail: Don't mask errors from a command piped into another command
shell: bash
env:
# UTF-8 content may be interpreted as ASCII and cause errors without this.
LANG: C.UTF-8
jobs:
# Run "pre-commit run --all-files"
pre-commit:
runs-on: ubuntu-20.04
timeout-minutes: 2
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.8
# ref: https://github.com/pre-commit/action
- uses: pre-commit/action@v2.0.0
- name: Help message if pre-commit fail
if: ${{ failure() }}
run: |
echo "You can install pre-commit hooks to automatically run formatting"
echo "on each commit with:"
echo " pre-commit install"
echo "or you can run by hand on staged files with"
echo " pre-commit run"
echo "or after-the-fact on already committed files with"
echo " pre-commit run --all-files"
# Run "pytest jupyterhub/tests" in various configurations
pytest:
runs-on: ubuntu-20.04
timeout-minutes: 10
strategy:
# Keep running even if one variation of the job fails
fail-fast: false
matrix:
# We run this job multiple times with the different parameterizations
# specified below; these parameters have no meaning on their own and
# only gain meaning from how the job steps use them.
#
# subdomain:
# Tests everything when JupyterHub is configured to add routes for
# users with dedicated subdomains like user1.jupyter.example.com
# rather than jupyter.example.com/user/user1.
#
# db: [mysql/postgres]
# Tests everything when JupyterHub works against a dedicated mysql or
# postgresql server.
#
# jupyter_server:
# Tests everything when the user instances are started with
# jupyter_server instead of notebook.
#
# ssl:
# Tests everything using internal SSL connections instead of
# unencrypted HTTP
#
# main_dependencies:
# Tests everything when we use the latest available dependencies
# from the traitlets development branch (ipython/traitlets).
#
# NOTE: Since only the values of these parameters are presented in the
# GitHub UI when the workflow runs, we avoid using true/false as
# values and instead duplicate the name to signal true.
include:
- python: "3.6"
oldest_dependencies: oldest_dependencies
- python: "3.6"
subdomain: subdomain
- python: "3.7"
db: mysql
- python: "3.7"
ssl: ssl
- python: "3.8"
db: postgres
- python: "3.8"
jupyter_server: jupyter_server
- python: "3.9"
main_dependencies: main_dependencies
steps:
# NOTE: In GitHub workflows, environment variables are set by writing
# assignment statements to a file. They will be set in the following
# steps as if one had used `export MY_ENV=my-value`.
- name: Configure environment variables
run: |
if [ "${{ matrix.subdomain }}" != "" ]; then
echo "JUPYTERHUB_TEST_SUBDOMAIN_HOST=http://localhost.jovyan.org:8000" >> $GITHUB_ENV
fi
if [ "${{ matrix.db }}" == "mysql" ]; then
echo "MYSQL_HOST=127.0.0.1" >> $GITHUB_ENV
echo "JUPYTERHUB_TEST_DB_URL=mysql+mysqlconnector://root@127.0.0.1:3306/jupyterhub" >> $GITHUB_ENV
fi
if [ "${{ matrix.ssl }}" == "ssl" ]; then
echo "SSL_ENABLED=1" >> $GITHUB_ENV
fi
if [ "${{ matrix.db }}" == "postgres" ]; then
echo "PGHOST=127.0.0.1" >> $GITHUB_ENV
echo "PGUSER=test_user" >> $GITHUB_ENV
echo "PGPASSWORD=hub[test/:?" >> $GITHUB_ENV
echo "JUPYTERHUB_TEST_DB_URL=postgresql://test_user:hub%5Btest%2F%3A%3F@127.0.0.1:5432/jupyterhub" >> $GITHUB_ENV
fi
if [ "${{ matrix.jupyter_server }}" != "" ]; then
echo "JUPYTERHUB_SINGLEUSER_APP=jupyterhub.tests.mockserverapp.MockServerApp" >> $GITHUB_ENV
fi
- uses: actions/checkout@v2
# NOTE: actions/setup-node@v1 makes use of a cache within the GitHub base
# environment and sets up in a fraction of a second.
- name: Install Node v14
uses: actions/setup-node@v1
with:
node-version: "14"
- name: Install Node dependencies
run: |
npm install
npm install -g configurable-http-proxy
npm list
# NOTE: actions/setup-python@v2 makes use of a cache within the GitHub base
# environment and sets up in a fraction of a second.
- name: Install Python ${{ matrix.python }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python }}
- name: Install Python dependencies
run: |
pip install --upgrade pip
pip install --upgrade . -r dev-requirements.txt
if [ "${{ matrix.oldest_dependencies }}" != "" ]; then
# take any dependencies in requirements.txt such as tornado>=5.0
# and transform them to tornado==5.0 so we can run tests with
# the earliest-supported versions
cat requirements.txt | grep '>=' | sed -e 's@>=@==@g' > oldest-requirements.txt
pip install -r oldest-requirements.txt
fi
if [ "${{ matrix.main_dependencies }}" != "" ]; then
pip install git+https://github.com/ipython/traitlets#egg=traitlets --force
fi
if [ "${{ matrix.jupyter_server }}" != "" ]; then
pip uninstall notebook --yes
pip install jupyter_server
fi
if [ "${{ matrix.db }}" == "mysql" ]; then
pip install mysql-connector-python
fi
if [ "${{ matrix.db }}" == "postgres" ]; then
pip install psycopg2-binary
fi
pip freeze
# NOTE: If you need to debug this DB setup step, consider the following.
#
# 1. mysql/postgresql are database servers we start as docker containers,
# and we use clients named mysql/psql.
#
# 2. When we start a database server we need to pass environment variables
# explicitly as part of the `docker run` command. These environment
# variables are named differently from the similarly named environment
# variables used by the clients.
#
# - mysql server ref: https://hub.docker.com/_/mysql/
# - mysql client ref: https://dev.mysql.com/doc/refman/5.7/en/environment-variables.html
# - postgres server ref: https://hub.docker.com/_/postgres/
# - psql client ref: https://www.postgresql.org/docs/9.5/libpq-envars.html
#
# 3. When the clients connect, they should use 127.0.0.1 rather than the
# default way of connecting, which otherwise leads to errors like the one
# below for both mysql and postgresql unless we set MYSQL_HOST/PGHOST to 127.0.0.1.
#
# - ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
#
- name: Start a database server (${{ matrix.db }})
if: ${{ matrix.db }}
run: |
if [ "${{ matrix.db }}" == "mysql" ]; then
sudo apt-get update
sudo apt-get install -y mysql-client
DB=mysql bash ci/docker-db.sh
DB=mysql bash ci/init-db.sh
fi
if [ "${{ matrix.db }}" == "postgres" ]; then
sudo apt-get update
sudo apt-get install -y postgresql-client
DB=postgres bash ci/docker-db.sh
DB=postgres bash ci/init-db.sh
fi
- name: Run pytest
# FIXME: --color=yes explicitly set because:
# https://github.com/actions/runner/issues/241
run: |
pytest -v --maxfail=2 --color=yes --cov=jupyterhub jupyterhub/tests
- name: Submit codecov report
run: |
codecov
docker-build:
runs-on: ubuntu-20.04
timeout-minutes: 10
steps:
- uses: actions/checkout@v2
- name: build images
run: |
docker build -t jupyterhub/jupyterhub .
docker build -t jupyterhub/jupyterhub-onbuild onbuild
docker build -t jupyterhub/jupyterhub:alpine -f dockerfiles/Dockerfile.alpine .
docker build -t jupyterhub/singleuser singleuser
- name: smoke test jupyterhub
run: |
docker run --rm -t jupyterhub/jupyterhub jupyterhub --help
- name: verify static files
run: |
docker run --rm -t -v $PWD/dockerfiles:/io jupyterhub/jupyterhub python3 /io/test.py

.gitignore

@@ -24,5 +24,8 @@ MANIFEST
 .coverage.*
 htmlcov
 .idea/
+.vscode/
 .pytest_cache
 pip-wheel-metadata
+docs/source/reference/metrics.rst
+oldest-requirements.txt


@@ -1,20 +1,24 @@
 repos:
   - repo: https://github.com/asottile/reorder_python_imports
-    rev: v1.8.0
+    rev: v1.9.0
     hooks:
       - id: reorder-python-imports
-        language_version: python3.6
-  - repo: https://github.com/ambv/black
-    rev: 19.10b0
+  - repo: https://github.com/psf/black
+    rev: 20.8b1
     hooks:
       - id: black
-  - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v2.4.0
+  - repo: https://github.com/pre-commit/mirrors-prettier
+    rev: v2.2.1
+    hooks:
+      - id: prettier
+  - repo: https://gitlab.com/pycqa/flake8
+    rev: "3.8.4"
+    hooks:
+      - id: flake8
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v3.4.0
     hooks:
       - id: end-of-file-fixer
-      - id: check-json
-      - id: check-yaml
       - id: check-case-conflict
       - id: check-executables-have-shebangs
       - id: requirements-txt-fixer
-      - id: flake8

.prettierignore

@@ -0,0 +1 @@
share/jupyterhub/templates/


@@ -1,94 +0,0 @@
dist: bionic
language: python
cache:
- pip
env:
global:
- MYSQL_HOST=127.0.0.1
- MYSQL_TCP_PORT=13306
# request additional services for the jobs to access
services:
- postgresql
- docker
# install dependencies for running pytest (but not linting)
before_install:
- set -e
- nvm install 6; nvm use 6
- npm install
- npm install -g configurable-http-proxy
- |
# setup database
if [[ $JUPYTERHUB_TEST_DB_URL == mysql* ]]; then
unset MYSQL_UNIX_PORT
DB=mysql bash ci/docker-db.sh
DB=mysql bash ci/init-db.sh
# FIXME: mysql-connector-python 8.0.16 incorrectly decodes bytes to str
# ref: https://bugs.mysql.com/bug.php?id=94944
pip install 'mysql-connector-python==8.0.11'
elif [[ $JUPYTERHUB_TEST_DB_URL == postgresql* ]]; then
psql -c "CREATE USER $PGUSER WITH PASSWORD '$PGPASSWORD';" -U postgres
DB=postgres bash ci/init-db.sh
pip install psycopg2-binary
fi
# install general dependencies
install:
- pip install --upgrade pip
- pip install --upgrade --pre -r dev-requirements.txt .
- pip freeze
# run tests
script:
- pytest -v --maxfail=2 --cov=jupyterhub jupyterhub/tests
# collect test coverage information
after_success:
- codecov
# list the jobs
jobs:
include:
- name: autoformatting check
python: 3.6
# NOTE: It does not suffice to override to: null, [], or [""]. Travis will
# fall back to the default if we do.
before_install: echo "Do nothing before install."
script:
- pre-commit run --all-files
after_success: echo "Do nothing after success."
after_failure:
- |
echo "You can install pre-commit hooks to automatically run formatting"
echo "on each commit with:"
echo " pre-commit install"
echo "or you can run by hand on staged files with"
echo " pre-commit run"
echo "or after-the-fact on already committed files with"
echo " pre-commit run --all-files"
# When we run pytest, we want to run it with python>=3.5 as well as with
# various configurations. We increment the python version at the same time
# as we test new configurations in order to reduce the number of test jobs.
- name: python:3.5 + dist:xenial
python: 3.5
dist: xenial
- name: python:3.6 + subdomain
python: 3.6
env: JUPYTERHUB_TEST_SUBDOMAIN_HOST=http://localhost.jovyan.org:8000
- name: python:3.7 + mysql
python: 3.7
env:
- JUPYTERHUB_TEST_DB_URL=mysql+mysqlconnector://root@127.0.0.1:$MYSQL_TCP_PORT/jupyterhub
- name: python:3.8 + postgresql
python: 3.8
env:
- PGUSER=jupyterhub
- PGPASSWORD=hub[test/:?
# The password in url below is url-encoded with: urllib.parse.quote($PGPASSWORD, safe='')
- JUPYTERHUB_TEST_DB_URL=postgresql://jupyterhub:hub%5Btest%2F%3A%3F@127.0.0.1/jupyterhub
- name: python:nightly
python: nightly
allow_failures:
- name: python:nightly
fast_finish: true


@@ -2,24 +2,24 @@
- [ ] Upgrade Docs prior to Release
- [ ] Change log
- [ ] New features documented
- [ ] Update the contributor list - thank you page
- [ ] Upgrade and test Reference Deployments
- [ ] Release software
- [ ] Make sure 0 issues in milestone
- [ ] Follow release process steps
- [ ] Send builds to PyPI (Warehouse) and Conda Forge
- [ ] Blog post and/or release note
- [ ] Notify users of release
- [ ] Email Jupyter and Jupyter In Education mailing lists
- [ ] Tweet (optional)
- [ ] Increment the version number for the next release


@@ -1 +1 @@
-Please refer to [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md).
+Please refer to [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md).


@@ -1,50 +1,58 @@
 # Contributing to JupyterHub
 Welcome! As a [Jupyter](https://jupyter.org) project,
-you can follow the [Jupyter contributor guide](https://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html).
+you can follow the [Jupyter contributor guide](https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html).
-Make sure to also follow [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md)
+Make sure to also follow [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md)
 for a friendly and welcoming collaborative environment.
 ## Setting up a development environment
+<!--
+https://jupyterhub.readthedocs.io/en/stable/contributing/setup.html
+contains a lot of the same information. Should we merge the docs and
+just have this page link to that one?
+-->
 JupyterHub requires Python >= 3.5 and nodejs.
 As a Python project, a development install of JupyterHub follows standard practices for the basics (steps 1-2).
 1. clone the repo
    ```bash
    git clone https://github.com/jupyterhub/jupyterhub
    ```
 2. do a development install with pip
    ```bash
    cd jupyterhub
    python3 -m pip install --editable .
    ```
 3. install the development requirements,
    which include things like testing tools
    ```bash
    python3 -m pip install -r dev-requirements.txt
    ```
 4. install configurable-http-proxy with npm:
    ```bash
    npm install -g configurable-http-proxy
    ```
 5. set up pre-commit hooks for automatic code formatting, etc.
    ```bash
    pre-commit install
    ```
    You can also invoke the pre-commit hook manually at any time with
    ```bash
    pre-commit run
    ```
 ## Contributing
@@ -60,12 +68,12 @@ pre-commit run
 which should run any autoformatting on your code
 and tell you about any errors it couldn't fix automatically.
-You may also install [black integration](https://github.com/ambv/black#editor-integration)
+You may also install [black integration](https://github.com/psf/black#editor-integration)
 into your text editor to format code automatically.
 If you have already committed files before setting up the pre-commit
 hook with `pre-commit install`, you can fix everything up using
 `pre-commit run --all-files`. You need to make the fixing commit
 yourself after that.
 ## Testing
@@ -128,4 +136,4 @@ To read more about fixtures check out the
 [pytest docs](https://docs.pytest.org/en/latest/fixture.html)
 for how to use the existing fixtures, and how to create new ones.
-When in doubt, feel free to ask.
+When in doubt, feel free to [ask](https://gitter.im/jupyterhub/jupyterhub).


@@ -24,7 +24,7 @@ software without specific prior written permission.
 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
 ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
 WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
 FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
@@ -46,8 +46,8 @@ Jupyter uses a shared copyright model. Each contributor maintains copyright
 over their contributions to Jupyter. But, it is important to note that these
 contributions are typically only changes to the repositories. Thus, the Jupyter
 source code, in its entirety is not the copyright of any single person or
 institution. Instead, it is the collective copyright of the entire Jupyter
 Development Team. If individual contributors want to maintain a record of what
 changes/contributions they have specific copyright on, they should indicate
 their copyright in the commit message of the change, when they commit the
 change to one of the Jupyter repositories.


@@ -21,8 +21,7 @@
 # your jupyterhub_config.py will be added automatically
 # from your docker directory.
-# https://github.com/tianon/docker-brew-ubuntu-core/commit/d4313e13366d24a97bd178db4450f63e221803f1
-ARG BASE_IMAGE=ubuntu:bionic-20191029@sha256:6e9f67fa63b0323e9a1e587fd71c561ba48a034504fb804fd26fd8800039835d
+ARG BASE_IMAGE=ubuntu:focal-20200729
 FROM $BASE_IMAGE AS builder
 USER root
@@ -41,19 +40,18 @@ RUN apt-get update \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/*
-# copy only what we need to avoid unnecessary rebuilds
-COPY package.json \
-     pyproject.toml \
-     README.md \
-     requirements.txt \
-     setup.py \
-     /src/jupyterhub/
-COPY jupyterhub/ /src/jupyterhub/jupyterhub
-COPY share/ /src/jupyterhub/share
-WORKDIR /src/jupyterhub
 RUN python3 -m pip install --upgrade setuptools pip wheel
-RUN python3 -m pip wheel -v --wheel-dir wheelhouse .
+# copy everything except whats in .dockerignore, its a
+# compromise between needing to rebuild and maintaining
+# what needs to be part of the build
+COPY . /src/jupyterhub/
+WORKDIR /src/jupyterhub
+# Build client component packages (they will be copied into ./share and
+# packaged with the built wheel.)
+RUN python3 setup.py bdist_wheel
+RUN python3 -m pip wheel --wheel-dir wheelhouse dist/*.whl
 FROM $BASE_IMAGE
@@ -90,7 +88,6 @@ RUN npm install -g configurable-http-proxy@^4.2.0 \
 # install the wheels we built in the first stage
 COPY --from=builder /src/jupyterhub/wheelhouse /tmp/wheelhouse
-COPY --from=builder /src/jupyterhub/share /src/jupyterhub/share
 RUN python3 -m pip install --no-cache /tmp/wheelhouse/*
 RUN mkdir -p /srv/jupyterhub/


@@ -6,17 +6,15 @@
**[License](#license)** | **[License](#license)** |
**[Help and Resources](#help-and-resources)** **[Help and Resources](#help-and-resources)**
# [JupyterHub](https://github.com/jupyterhub/jupyterhub) # [JupyterHub](https://github.com/jupyterhub/jupyterhub)
[![Latest PyPI version](https://img.shields.io/pypi/v/jupyterhub?logo=pypi)](https://pypi.python.org/pypi/jupyterhub) [![Latest PyPI version](https://img.shields.io/pypi/v/jupyterhub?logo=pypi)](https://pypi.python.org/pypi/jupyterhub)
[![Latest conda-forge version](https://img.shields.io/conda/vn/conda-forge/jupyterhub?logo=conda-forge)](https://www.npmjs.com/package/jupyterhub) [![Latest conda-forge version](https://img.shields.io/conda/vn/conda-forge/jupyterhub?logo=conda-forge)](https://www.npmjs.com/package/jupyterhub)
[![Documentation build status](https://img.shields.io/readthedocs/jupyterhub?logo=read-the-docs)](https://jupyterhub.readthedocs.org/en/latest/) [![Documentation build status](https://img.shields.io/readthedocs/jupyterhub?logo=read-the-docs)](https://jupyterhub.readthedocs.org/en/latest/)
[![TravisCI build status](https://img.shields.io/travis/jupyterhub/jupyterhub/master?logo=travis)](https://travis-ci.org/jupyterhub/jupyterhub) [![GitHub Workflow Status - Test](https://img.shields.io/github/workflow/status/jupyterhub/jupyterhub/Test?logo=github&label=tests)](https://github.com/jupyterhub/jupyterhub/actions)
[![DockerHub build status](https://img.shields.io/docker/build/jupyterhub/jupyterhub?logo=docker&label=build)](https://hub.docker.com/r/jupyterhub/jupyterhub/tags) [![DockerHub build status](https://img.shields.io/docker/build/jupyterhub/jupyterhub?logo=docker&label=build)](https://hub.docker.com/r/jupyterhub/jupyterhub/tags)
[![CircleCI build status](https://img.shields.io/circleci/build/github/jupyterhub/jupyterhub?logo=circleci)](https://circleci.com/gh/jupyterhub/jupyterhub)<!-- CircleCI Token: b5b65862eb2617b9a8d39e79340b0a6b816da8cc --> [![CircleCI build status](https://img.shields.io/circleci/build/github/jupyterhub/jupyterhub?logo=circleci)](https://circleci.com/gh/jupyterhub/jupyterhub)<!-- CircleCI Token: b5b65862eb2617b9a8d39e79340b0a6b816da8cc -->
[![Test coverage of code](https://codecov.io/gh/jupyterhub/jupyterhub/branch/master/graph/badge.svg)](https://codecov.io/gh/jupyterhub/jupyterhub) [![Test coverage of code](https://codecov.io/gh/jupyterhub/jupyterhub/branch/main/graph/badge.svg)](https://codecov.io/gh/jupyterhub/jupyterhub)
[![GitHub](https://img.shields.io/badge/issue_tracking-github-blue?logo=github)](https://github.com/jupyterhub/jupyterhub/issues) [![GitHub](https://img.shields.io/badge/issue_tracking-github-blue?logo=github)](https://github.com/jupyterhub/jupyterhub/issues)
[![Discourse](https://img.shields.io/badge/help_forum-discourse-blue?logo=discourse)](https://discourse.jupyter.org/c/jupyterhub) [![Discourse](https://img.shields.io/badge/help_forum-discourse-blue?logo=discourse)](https://discourse.jupyter.org/c/jupyterhub)
[![Gitter](https://img.shields.io/badge/social_chat-gitter-blue?logo=gitter)](https://gitter.im/jupyterhub/jupyterhub) [![Gitter](https://img.shields.io/badge/social_chat-gitter-blue?logo=gitter)](https://gitter.im/jupyterhub/jupyterhub)
@@ -48,22 +46,21 @@ Basic principles for operation are:
servers. servers.
JupyterHub also provides a JupyterHub also provides a
[REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default) [REST API](https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/HEAD/docs/rest-api.yml#/default)
for administration of the Hub and its users. for administration of the Hub and its users.
## Installation ## Installation
### Check prerequisites ### Check prerequisites
- A Linux/Unix based system - A Linux/Unix based system
- [Python](https://www.python.org/downloads/) 3.5 or greater - [Python](https://www.python.org/downloads/) 3.5 or greater
- [nodejs/npm](https://www.npmjs.com/) - [nodejs/npm](https://www.npmjs.com/)
* If you are using **`conda`**, the nodejs and npm dependencies will be installed for - If you are using **`conda`**, the nodejs and npm dependencies will be installed for
you by conda. you by conda.
* If you are using **`pip`**, install a recent version of - If you are using **`pip`**, install a recent version of
[nodejs/npm](https://docs.npmjs.com/getting-started/installing-node). [nodejs/npm](https://docs.npmjs.com/getting-started/installing-node).
For example, install it on Linux (Debian/Ubuntu) using: For example, install it on Linux (Debian/Ubuntu) using:
@@ -74,6 +71,7 @@ for administration of the Hub and its users.
The `nodejs-legacy` package installs the `node` executable and is currently The `nodejs-legacy` package installs the `node` executable and is currently
required for npm to work on Debian/Ubuntu. required for npm to work on Debian/Ubuntu.
- If using the default PAM Authenticator, a [pluggable authentication module (PAM)](https://en.wikipedia.org/wiki/Pluggable_authentication_module).
- TLS certificate and key for HTTPS communication - TLS certificate and key for HTTPS communication
- Domain name - Domain name
@@ -119,10 +117,10 @@ To start the Hub server, run the command:
Visit `https://localhost:8000` in your browser, and sign in with your unix Visit `https://localhost:8000` in your browser, and sign in with your unix
PAM credentials. PAM credentials.
*Note*: To allow multiple users to sign into the server, you will need to _Note_: To allow multiple users to sign into the server, you will need to
run the `jupyterhub` command as a *privileged user*, such as root. run the `jupyterhub` command as a _privileged user_, such as root.
The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges) The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges)
describes how to run the server as a *less privileged user*, which requires describes how to run the server as a _less privileged user_, which requires
more configuration of the system. more configuration of the system.
## Configuration ## Configuration
@@ -141,7 +139,7 @@ To generate a default config file with settings and descriptions:
### Start the Hub ### Start the Hub
To start the Hub on a specific url and port ``10.0.1.2:443`` with **https**: To start the Hub on a specific url and port `10.0.1.2:443` with **https**:
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
@@ -203,7 +201,7 @@ These accounts will be used for authentication in JupyterHub's default configura
## Contributing ## Contributing
If you would like to contribute to the project, please read our If you would like to contribute to the project, please read our
[contributor documentation](http://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html) [contributor documentation](https://jupyter.readthedocs.io/en/latest/contributing/content-contributor.html)
and the [`CONTRIBUTING.md`](CONTRIBUTING.md). The `CONTRIBUTING.md` file and the [`CONTRIBUTING.md`](CONTRIBUTING.md). The `CONTRIBUTING.md` file
explains how to set up a development installation, how to run the test suite, explains how to set up a development installation, how to run the test suite,
and how to contribute to documentation. and how to contribute to documentation.
@@ -241,7 +239,7 @@ our JupyterHub [Gitter](https://gitter.im/jupyterhub/jupyterhub) channel.
- [Reporting Issues](https://github.com/jupyterhub/jupyterhub/issues) - [Reporting Issues](https://github.com/jupyterhub/jupyterhub/issues)
- [JupyterHub tutorial](https://github.com/jupyterhub/jupyterhub-tutorial) - [JupyterHub tutorial](https://github.com/jupyterhub/jupyterhub-tutorial)
- [Documentation for JupyterHub](https://jupyterhub.readthedocs.io/en/latest/) | [PDF (latest)](https://media.readthedocs.org/pdf/jupyterhub/latest/jupyterhub.pdf) | [PDF (stable)](https://media.readthedocs.org/pdf/jupyterhub/stable/jupyterhub.pdf) - [Documentation for JupyterHub](https://jupyterhub.readthedocs.io/en/latest/) | [PDF (latest)](https://media.readthedocs.org/pdf/jupyterhub/latest/jupyterhub.pdf) | [PDF (stable)](https://media.readthedocs.org/pdf/jupyterhub/stable/jupyterhub.pdf)
- [Documentation for JupyterHub's REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default) - [Documentation for JupyterHub's REST API](https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/HEAD/docs/rest-api.yml#/default)
- [Documentation for Project Jupyter](http://jupyter.readthedocs.io/en/latest/index.html) | [PDF](https://media.readthedocs.org/pdf/jupyter/latest/jupyter.pdf) - [Documentation for Project Jupyter](http://jupyter.readthedocs.io/en/latest/index.html) | [PDF](https://media.readthedocs.org/pdf/jupyter/latest/jupyter.pdf)
- [Project Jupyter website](https://jupyter.org) - [Project Jupyter website](https://jupyter.org)
- [Project Jupyter community](https://jupyter.org/community) - [Project Jupyter community](https://jupyter.org/community)


@@ -1,59 +1,60 @@
 #!/usr/bin/env bash
-# source this file to setup postgres and mysql
-# for local testing (as similar as possible to docker)
+# The goal of this script is to start a database server as a docker container.
+#
+# Required environment variables:
+#  - DB: The database server to start, either "postgres" or "mysql".
+#
+#  - PGUSER/PGPASSWORD: For the creation of a postgresql user with associated
+#    password.
 set -eu
-export MYSQL_HOST=127.0.0.1
-export MYSQL_TCP_PORT=${MYSQL_TCP_PORT:-13306}
-export PGHOST=127.0.0.1
-NAME="hub-test-$DB"
-DOCKER_RUN="docker run -d --name $NAME"
-docker rm -f "$NAME" 2>/dev/null || true
+# Stop and remove any existing database container
+DOCKER_CONTAINER="hub-test-$DB"
+docker rm -f "$DOCKER_CONTAINER" 2>/dev/null || true
-case "$DB" in
-"mysql")
-    RUN_ARGS="-e MYSQL_ALLOW_EMPTY_PASSWORD=1 -p $MYSQL_TCP_PORT:3306 mysql:5.7"
-    CHECK="mysql --host $MYSQL_HOST --port $MYSQL_TCP_PORT --user root -e \q"
-    ;;
-"postgres")
-    RUN_ARGS="-p 5432:5432 postgres:9.5"
-    CHECK="psql --user postgres -c \q"
-    ;;
-*)
-    echo '$DB must be mysql or postgres'
-    exit 1
-esac
-$DOCKER_RUN $RUN_ARGS
+# Prepare environment variables to startup and await readiness of either a mysql
+# or postgresql server.
+if [[ "$DB" == "mysql" ]]; then
+    # Environment variables can influence both the mysql server in the docker
+    # container and the mysql client.
+    #
+    # ref server: https://hub.docker.com/_/mysql/
+    # ref client: https://dev.mysql.com/doc/refman/5.7/en/setting-environment-variables.html
+    #
+    DOCKER_RUN_ARGS="-p 3306:3306 --env MYSQL_ALLOW_EMPTY_PASSWORD=1 mysql:5.7"
+    READINESS_CHECK="mysql --user root --execute \q"
+elif [[ "$DB" == "postgres" ]]; then
+    # Environment variables can influence both the postgresql server in the
+    # docker container and the postgresql client (psql).
+    #
+    # ref server: https://hub.docker.com/_/postgres/
+    # ref client: https://www.postgresql.org/docs/9.5/libpq-envars.html
+    #
+    # POSTGRES_USER / POSTGRES_PASSWORD will create a user on startup of the
+    # postgres server, but PGUSER and PGPASSWORD are the environment variables
+    # used by the postgresql client psql, so we configure the user based on how
+    # we want to connect.
+    #
+    DOCKER_RUN_ARGS="-p 5432:5432 --env "POSTGRES_USER=${PGUSER}" --env "POSTGRES_PASSWORD=${PGPASSWORD}" postgres:9.5"
+    READINESS_CHECK="psql --command \q"
+else
+    echo '$DB must be mysql or postgres'
+    exit 1
+fi
+# Start the database server
+docker run --detach --name "$DOCKER_CONTAINER" $DOCKER_RUN_ARGS
+# Wait for the database server to start
 echo -n "waiting for $DB "
 for i in {1..60}; do
-    if $CHECK; then
+    if $READINESS_CHECK; then
         echo 'done'
         break
     else
         echo -n '.'
         sleep 1
     fi
 done
-$CHECK
-case "$DB" in
-"mysql")
-    ;;
-"postgres")
-    # create the user
-    psql --user postgres -c "CREATE USER $PGUSER WITH PASSWORD '$PGPASSWORD';"
-    ;;
-*)
-esac
-echo -e "
-Set these environment variables:
-    export MYSQL_HOST=127.0.0.1
-    export MYSQL_TCP_PORT=$MYSQL_TCP_PORT
-    export PGHOST=127.0.0.1
-"
+$READINESS_CHECK


@@ -1,27 +1,26 @@
 #!/usr/bin/env bash
-# initialize jupyterhub databases for testing
+# The goal of this script is to initialize a running database server with clean
+# databases for use during tests.
+#
+# Required environment variables:
+#  - DB: The database server to start, either "postgres" or "mysql".
 set -eu
-MYSQL="mysql --user root --host $MYSQL_HOST --port $MYSQL_TCP_PORT -e "
-PSQL="psql --user postgres -c "
-case "$DB" in
-"mysql")
-    EXTRA_CREATE='CHARACTER SET utf8 COLLATE utf8_general_ci'
-    SQL="$MYSQL"
-    ;;
-"postgres")
-    SQL="$PSQL"
-    ;;
-*)
-    echo '$DB must be mysql or postgres'
-    exit 1
-esac
+# Prepare env vars SQL_CLIENT and EXTRA_CREATE_DATABASE_ARGS
+if [[ "$DB" == "mysql" ]]; then
+    SQL_CLIENT="mysql --user root --execute "
+    EXTRA_CREATE_DATABASE_ARGS='CHARACTER SET utf8 COLLATE utf8_general_ci'
+elif [[ "$DB" == "postgres" ]]; then
+    SQL_CLIENT="psql --command "
+else
+    echo '$DB must be mysql or postgres'
+    exit 1
+fi
+# Configure a set of databases in the database server for upgrade tests
 set -x
 for SUFFIX in '' _upgrade_072 _upgrade_081 _upgrade_094; do
-    $SQL "DROP DATABASE jupyterhub${SUFFIX};" 2>/dev/null || true
-    $SQL "CREATE DATABASE jupyterhub${SUFFIX} ${EXTRA_CREATE:-};"
+    $SQL_CLIENT "DROP DATABASE jupyterhub${SUFFIX};" 2>/dev/null || true
+    $SQL_CLIENT "CREATE DATABASE jupyterhub${SUFFIX} ${EXTRA_CREATE_DATABASE_ARGS:-};"
 done

demo-image/Dockerfile

@@ -0,0 +1,16 @@
# Demo JupyterHub Docker image
#
# This should only be used for demo or testing and not as a base image to build on.
#
# It includes the notebook package and it uses the DummyAuthenticator and the SimpleLocalProcessSpawner.
ARG BASE_IMAGE=jupyterhub/jupyterhub-onbuild
FROM ${BASE_IMAGE}
# Install the notebook package
RUN python3 -m pip install notebook
# Create a demo user
RUN useradd --create-home demo
RUN chown demo .
USER demo

demo-image/README.md

@@ -0,0 +1,26 @@
## Demo Dockerfile
This is a demo JupyterHub Docker image to help you get a quick overview of what
JupyterHub is and how it works.
It uses the SimpleLocalProcessSpawner to spawn new user servers and
DummyAuthenticator for authentication.
The DummyAuthenticator allows you to log in with any username & password and the
SimpleLocalProcessSpawner allows starting servers without having to create a
local user for each JupyterHub user.
### Important!
This should only be used for demo or testing purposes!
It shouldn't be used as a base image to build on.
### Try it
1. `cd` to the root of your jupyterhub repo.
2. Build the demo image with `docker build -t jupyterhub-demo demo-image`.
3. Run the demo image with `docker run -d -p 8000:8000 jupyterhub-demo`.
4. Visit http://localhost:8000 and log in with any username and password.
5. Happy demo-ing :tada:!
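
Put together, the steps above amount to roughly the following (a sketch; the `jupyterhub-demo` image tag comes from the steps, while the container name is illustrative):

```bash
# From the root of the jupyterhub repo
docker build -t jupyterhub-demo demo-image
docker run -d --name jupyterhub-demo -p 8000:8000 jupyterhub-demo

# Then visit http://localhost:8000 and log in with any username and password.
# Clean up afterwards:
docker rm -f jupyterhub-demo
```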


@@ -0,0 +1,7 @@
# Configuration file for jupyterhub-demo
c = get_config()
# Use DummyAuthenticator and SimpleSpawner
c.JupyterHub.spawner_class = "simple"
c.JupyterHub.authenticator_class = "dummy"


@@ -10,9 +10,9 @@ html5lib # needed for beautifulsoup
 mock
 notebook
 pre-commit
-pytest>=3.3
 pytest-asyncio
 pytest-cov
+pytest>=3.3
 requests-mock
 # blacklist urllib3 releases affected by https://github.com/urllib3/urllib3/issues/1683
 # I *think* this should only affect testing, not production


@@ -1,9 +1,14 @@
-FROM python:3.6.3-alpine3.6
-ARG JUPYTERHUB_VERSION=0.8.1
-RUN pip3 install --no-cache jupyterhub==${JUPYTERHUB_VERSION}
+FROM alpine:3.13
 ENV LANG=en_US.UTF-8
+RUN apk add --no-cache \
+    python3 \
+    py3-pip \
+    py3-ruamel.yaml \
+    py3-cryptography \
+    py3-sqlalchemy
+ARG JUPYTERHUB_VERSION=1.3.0
+RUN pip3 install --no-cache jupyterhub==${JUPYTERHUB_VERSION}
 USER nobody
 CMD ["jupyterhub"]


@@ -1,5 +1,6 @@
## What is Dockerfile.alpine

Dockerfile.alpine contains the base image for jupyterhub. It does not work independently, but only as part of a full jupyterhub cluster.

## How to use it?

@@ -7,14 +8,13 @@ Dockerfile.alpine contains base image for jupyterhub. It does not work independ
2. A jupyterhub_config file.
3. Authentication and other libraries required by the specific jupyterhub_config file.

## Steps to test it outside a cluster

- start configurable-http-proxy in another container
- specify CONFIGPROXY_AUTH_TOKEN env in both containers
- put both containers on the same network (e.g. docker network create jupyterhub; docker run ... --net jupyterhub)
- tell jupyterhub where CHP is (e.g. c.ConfigurableHTTPProxy.api_url = 'http://chp:8001')
- tell jupyterhub not to start the proxy itself (c.ConfigurableHTTPProxy.should_start = False)
- use the dummy authenticator for ease of testing, and update the following in the jupyterhub_config file (see the sketch after this list)
  - c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
  - c.DummyAuthenticator.password = "your strong password"
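
A rough sketch of those steps, assuming an image built from Dockerfile.alpine tagged `jupyterhub-alpine` and a `jupyterhub_config.py` with the settings above in the current directory (the image tag, token value, container names, and mount path are illustrative, not from this repo):

```bash
# Illustrative names/values; adjust to your setup
docker network create jupyterhub

# Start configurable-http-proxy in another container, sharing CONFIGPROXY_AUTH_TOKEN
docker run -d --name chp --net jupyterhub \
  -e CONFIGPROXY_AUTH_TOKEN=super-secret-token \
  -p 8000:8000 \
  jupyterhub/configurable-http-proxy --api-ip=0.0.0.0

# Start the alpine-based hub with the same token and a config that points at CHP
# (c.ConfigurableHTTPProxy.api_url = 'http://chp:8001', should_start = False, dummy authenticator)
docker run -d --name hub --net jupyterhub \
  -e CONFIGPROXY_AUTH_TOKEN=super-secret-token \
  -v "$PWD/jupyterhub_config.py:/srv/jupyterhub/jupyterhub_config.py" \
  jupyterhub-alpine \
  jupyterhub -f /srv/jupyterhub/jupyterhub_config.py
```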

dockerfiles/test.py

@@ -0,0 +1,9 @@
import os
from jupyterhub._data import DATA_FILES_PATH
print(f"DATA_FILES_PATH={DATA_FILES_PATH}")
for sub_path in ("templates", "static/components", "static/css/style.min.css"):
path = os.path.join(DATA_FILES_PATH, sub_path)
assert os.path.exists(path), path


@@ -48,6 +48,7 @@ help:
@echo " doctest to run all doctests embedded in the documentation (if enabled)" @echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)" @echo " coverage to run coverage check of the documentation (if enabled)"
@echo " spelling to run spell check on documentation" @echo " spelling to run spell check on documentation"
@echo " metrics to generate documentation for metrics by inspecting the source code"
clean: clean:
rm -rf $(BUILDDIR)/* rm -rf $(BUILDDIR)/*
@@ -60,7 +61,12 @@ rest-api: source/_static/rest-api/index.html
source/_static/rest-api/index.html: rest-api.yml node_modules source/_static/rest-api/index.html: rest-api.yml node_modules
npm run rest-api npm run rest-api
html: rest-api metrics: source/reference/metrics.rst
source/reference/metrics.rst: generate-metrics.py
python3 generate-metrics.py
html: rest-api metrics
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo @echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html." @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."


@@ -1,20 +0,0 @@
# ReadTheDocs uses the `environment.yaml` so make sure to update that as well
# if you change the dependencies of JupyterHub in the various `requirements.txt`
name: jhub_docs
channels:
- conda-forge
dependencies:
- pip
- nodejs
- python=3.6
- alembic
- jinja2
- pamela
- recommonmark==0.6.0
- requests
- sqlalchemy>=1
- tornado>=5.0
- traitlets>=4.1
- sphinx>=1.7
- pip:
- -r requirements.txt

docs/generate-metrics.py

@@ -0,0 +1,57 @@
import os
from os.path import join
from pytablewriter import RstSimpleTableWriter
from pytablewriter.style import Style
import jupyterhub.metrics
HERE = os.path.abspath(os.path.dirname(__file__))
class Generator:
@classmethod
def create_writer(cls, table_name, headers, values):
writer = RstSimpleTableWriter()
writer.table_name = table_name
writer.headers = headers
writer.value_matrix = values
writer.margin = 1
[writer.set_style(header, Style(align="center")) for header in headers]
return writer
def _parse_metrics(self):
table_rows = []
for name in dir(jupyterhub.metrics):
obj = getattr(jupyterhub.metrics, name)
if obj.__class__.__module__.startswith('prometheus_client.'):
for metric in obj.describe():
table_rows.append([metric.type, metric.name, metric.documentation])
return table_rows
def prometheus_metrics(self):
generated_directory = f"{HERE}/source/reference"
if not os.path.exists(generated_directory):
os.makedirs(generated_directory)
filename = f"{generated_directory}/metrics.rst"
table_name = ""
headers = ["Type", "Name", "Description"]
values = self._parse_metrics()
writer = self.create_writer(table_name, headers, values)
title = "List of Prometheus Metrics"
underline = "============================"
content = f"{title}\n{underline}\n{writer.dumps()}"
with open(filename, 'w') as f:
f.write(content)
print(f"Generated {filename}.")
def main():
doc_generator = Generator()
doc_generator.prometheus_metrics()
if __name__ == "__main__":
main()


@@ -1,10 +1,12 @@
-# ReadTheDocs uses the `environment.yaml` so make sure to update that as well
-# if you change this file
 -r ../requirements.txt
 alabaster_jupyterhub
-autodoc-traits
-git+https://github.com/pandas-dev/pandas-sphinx-theme.git@master
-recommonmark==0.5.0
+# Temporary fix of #3021. Revert back to released autodoc-traits when
+# 0.1.0 released.
+https://github.com/jupyterhub/autodoc-traits/archive/d22282c1c18c6865436e06d8b329c06fe12a07f8.zip
+pydata-sphinx-theme
+pytablewriter>=0.56
+recommonmark>=0.6
+sphinx>=1.7
 sphinx-copybutton
 sphinx-jsonschema
-sphinx>=1.7


@@ -1,13 +1,12 @@
-# see me at: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#/default
+# see me at: https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/HEAD/docs/rest-api.yml#/default
-swagger: '2.0'
+swagger: "2.0"
 info:
   title: JupyterHub
   description: The REST API for JupyterHub
-  version: 0.9.0dev
+  version: 1.4.0
   license:
     name: BSD-3-Clause
-schemes:
-  [http, https]
+schemes: [http, https]
 securityDefinitions:
   token:
     type: apiKey
@@ -28,7 +27,7 @@ paths:
This endpoint is not authenticated for the purpose of clients and user This endpoint is not authenticated for the purpose of clients and user
to identify the JupyterHub version before setting up authentication. to identify the JupyterHub version before setting up authentication.
responses: responses:
'200': "200":
description: The JupyterHub version description: The JupyterHub version
schema: schema:
type: object type: object
@@ -44,7 +43,7 @@ paths:
JupyterHub's version and executable path, JupyterHub's version and executable path,
and which Authenticator and Spawner are active. and which Authenticator and Spawner are active.
responses: responses:
'200': "200":
description: Detailed JupyterHub info description: Detailed JupyterHub info
schema: schema:
type: object type: object
@@ -79,13 +78,28 @@ paths:
/users: /users:
get: get:
summary: List users summary: List users
parameters:
- name: state
in: query
required: false
type: string
enum: ["inactive", "active", "ready"]
description: |
Return only users who have servers in the given state.
If unspecified, return all users.
active: all users with any active servers (ready OR pending)
ready: all users who have any ready servers (running, not pending)
inactive: all users who have *no* active servers (complement of active)
Added in JupyterHub 1.3
responses: responses:
'200': "200":
description: The Hub's user list description: The Hub's user list
schema: schema:
type: array type: array
items: items:
$ref: '#/definitions/User' $ref: "#/definitions/User"
post: post:
summary: Create multiple users summary: Create multiple users
parameters: parameters:
@@ -104,13 +118,13 @@ paths:
description: whether the created users should be admins description: whether the created users should be admins
type: boolean type: boolean
responses: responses:
'201': "201":
description: The users have been created description: The users have been created
schema: schema:
type: array type: array
description: The created users description: The created users
items: items:
$ref: '#/definitions/User' $ref: "#/definitions/User"
/users/{name}: /users/{name}:
get: get:
summary: Get a user by name summary: Get a user by name
@@ -121,10 +135,10 @@ paths:
required: true required: true
type: string type: string
responses: responses:
'200': "200":
description: The User model description: The User model
schema: schema:
$ref: '#/definitions/User' $ref: "#/definitions/User"
post: post:
summary: Create a single user summary: Create a single user
parameters: parameters:
@@ -134,10 +148,10 @@ paths:
required: true required: true
type: string type: string
responses: responses:
'201': "201":
description: The user has been created description: The user has been created
schema: schema:
$ref: '#/definitions/User' $ref: "#/definitions/User"
patch: patch:
summary: Modify a user summary: Modify a user
description: Change a user's name or admin status description: Change a user's name or admin status
@@ -161,10 +175,10 @@ paths:
type: boolean type: boolean
description: update admin (optional, if another key is updated i.e. name) description: update admin (optional, if another key is updated i.e. name)
responses: responses:
'200': "200":
description: The updated user info description: The updated user info
schema: schema:
$ref: '#/definitions/User' $ref: "#/definitions/User"
delete: delete:
summary: Delete a user summary: Delete a user
parameters: parameters:
@@ -174,14 +188,12 @@ paths:
required: true required: true
type: string type: string
responses: responses:
'204': "204":
description: The user has been deleted description: The user has been deleted
/users/{name}/activity: /users/{name}/activity:
post: post:
summary: summary: Notify Hub of activity for a given user.
Notify Hub of activity for a given user. description: Notify the Hub of activity by the user,
description:
Notify the Hub of activity by the user,
e.g. accessing a service or (more likely) e.g. accessing a service or (more likely)
actively using a server. actively using a server.
parameters: parameters:
@@ -209,7 +221,7 @@ paths:
The default server has an empty name (''). The default server has an empty name ('').
type: object type: object
properties: properties:
'<server name>': "<server name>":
description: | description: |
Activity for a single server. Activity for a single server.
type: object type: object
@@ -222,16 +234,16 @@ paths:
description: | description: |
Timestamp of last-seen activity on this server. Timestamp of last-seen activity on this server.
example: example:
last_activity: '2019-02-06T12:54:14Z' last_activity: "2019-02-06T12:54:14Z"
servers: servers:
'': "":
last_activity: '2019-02-06T12:54:14Z' last_activity: "2019-02-06T12:54:14Z"
gpu: gpu:
last_activity: '2019-02-06T12:54:14Z' last_activity: "2019-02-06T12:54:14Z"
responses: responses:
'401': "401":
$ref: '#/responses/Unauthorized' $ref: "#/responses/Unauthorized"
'404': "404":
description: No such user description: No such user
/users/{name}/server: /users/{name}/server:
post: post:
@@ -248,14 +260,17 @@ paths:
when spawning via the API instead of spawn form. when spawning via the API instead of spawn form.
The structure of the options The structure of the options
will depend on the Spawner's configuration. will depend on the Spawner's configuration.
The body itself will be available as `user_options` for the
Spawner.
in: body in: body
required: false required: false
schema: schema:
type: object type: object
responses: responses:
'201': "201":
description: The user's notebook server has started description: The user's notebook server has started
'202': "202":
description: The user's notebook server has not yet started, but has been requested description: The user's notebook server has not yet started, but has been requested
delete: delete:
summary: Stop a user's server summary: Stop a user's server
@@ -266,9 +281,9 @@ paths:
required: true required: true
type: string type: string
responses: responses:
'204': "204":
description: The user's notebook server has stopped description: The user's notebook server has stopped
'202': "202":
description: The user's notebook server has not yet stopped as it is taking a while to stop description: The user's notebook server has not yet stopped as it is taking a while to stop
/users/{name}/servers/{server_name}: /users/{name}/servers/{server_name}:
post: post:
@@ -280,7 +295,10 @@ paths:
required: true required: true
type: string type: string
- name: server_name - name: server_name
description: name given to a named-server description: |
name given to a named-server.
Note that depending on your JupyterHub infrastructure there are character size limitations on `server_name`. The default spawner with a K8s pod will not allow Jupyter Notebooks to be spawned with a name that contains more than 253 characters (keep in mind that the pod will be spawned with extra characters to identify the user and hub).
in: path in: path
required: true required: true
type: string type: string
@@ -295,9 +313,9 @@ paths:
schema: schema:
type: object type: object
responses: responses:
'201': "201":
description: The user's notebook named-server has started description: The user's notebook named-server has started
'202': "202":
description: The user's notebook named-server has not yet started, but has been requested description: The user's notebook named-server has not yet started, but has been requested
delete: delete:
summary: Stop a user's named-server summary: Stop a user's named-server
@@ -312,18 +330,22 @@ paths:
in: path in: path
required: true required: true
type: string type: string
- name: remove - name: body
description: |
Whether to fully remove the server, rather than just stop it.
Removing a server deletes things like the state of the stopped server.
in: body in: body
required: false required: false
schema: schema:
type: boolean type: object
properties:
remove:
type: boolean
description: |
Whether to fully remove the server, rather than just stop it.
Removing a server deletes things like the state of the stopped server.
Default: false.
responses: responses:
'204': "204":
description: The user's notebook named-server has stopped description: The user's notebook named-server has stopped
'202': "202":
description: The user's notebook named-server has not yet stopped as it is taking a while to stop description: The user's notebook named-server has not yet stopped as it is taking a while to stop
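The body schema above lets a stop request also remove the named server. A minimal sketch (URL, names, and token are placeholders):

```python
# Sketch: stop a named server and delete its stored state via {"remove": true}.
import requests

hub_api = "http://127.0.0.1:8081/hub/api"
headers = {"Authorization": "token abc123"}

r = requests.delete(
    f"{hub_api}/users/breq/servers/gpu",
    headers=headers,
    json={"remove": True},  # omit or set False to merely stop the server
)
print(r.status_code)  # 204 = stopped/removed, 202 = still stopping
```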
/users/{name}/tokens: /users/{name}/tokens:
parameters: parameters:
@@ -335,15 +357,15 @@ paths:
get: get:
summary: List tokens for the user summary: List tokens for the user
responses: responses:
'200': "200":
description: The list of tokens description: The list of tokens
schema: schema:
type: array type: array
items: items:
$ref: '#/definitions/Token' $ref: "#/definitions/Token"
'401': "401":
$ref: '#/responses/Unauthorized' $ref: "#/responses/Unauthorized"
'404': "404":
description: No such user description: No such user
post: post:
summary: Create a new token for the user summary: Create a new token for the user
@@ -361,11 +383,11 @@ paths:
type: string type: string
description: A note attached to the token for future bookkeeping description: A note attached to the token for future bookkeeping
responses: responses:
'201': "201":
description: The newly created token description: The newly created token
schema: schema:
$ref: '#/definitions/Token' $ref: "#/definitions/Token"
'400': "400":
description: Body must be a JSON dict or empty description: Body must be a JSON dict or empty
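A minimal sketch of creating a token with a bookkeeping note via the endpoint above; all identifiers are placeholders, and the 201 response carries the new Token model.

```python
# Sketch: request a new API token for a user, attaching a note.
import requests

hub_api = "http://127.0.0.1:8081/hub/api"
headers = {"Authorization": "token abc123"}

r = requests.post(
    f"{hub_api}/users/breq/tokens",
    headers=headers,
    json={"note": "token for the nightly report job"},  # hypothetical note
)
r.raise_for_status()
print(r.json())  # the newly created Token model
```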
/users/{name}/tokens/{token_id}: /users/{name}/tokens/{token_id}:
parameters: parameters:
@@ -381,33 +403,33 @@ paths:
get: get:
summary: Get the model for a token by id summary: Get the model for a token by id
responses: responses:
'200': "200":
description: The info for the new token description: The info for the new token
schema: schema:
$ref: '#/definitions/Token' $ref: "#/definitions/Token"
delete: delete:
summary: Delete (revoke) a token by id summary: Delete (revoke) a token by id
responses: responses:
'204': "204":
description: The token has been deleted description: The token has been deleted
/user: /user:
get: get:
summary: Return authenticated user's model summary: Return authenticated user's model
responses: responses:
'200': "200":
description: The authenticated user's model is returned. description: The authenticated user's model is returned.
schema: schema:
$ref: '#/definitions/User' $ref: "#/definitions/User"
/groups: /groups:
get: get:
summary: List groups summary: List groups
responses: responses:
'200': "200":
description: The list of groups description: The list of groups
schema: schema:
type: array type: array
items: items:
$ref: '#/definitions/Group' $ref: "#/definitions/Group"
/groups/{name}: /groups/{name}:
get: get:
summary: Get a group by name summary: Get a group by name
@@ -418,10 +440,10 @@ paths:
required: true required: true
type: string type: string
responses: responses:
'200': "200":
description: The group model description: The group model
schema: schema:
$ref: '#/definitions/Group' $ref: "#/definitions/Group"
post: post:
summary: Create a group summary: Create a group
parameters: parameters:
@@ -431,10 +453,10 @@ paths:
required: true required: true
type: string type: string
responses: responses:
'201': "201":
description: The group has been created description: The group has been created
schema: schema:
$ref: '#/definitions/Group' $ref: "#/definitions/Group"
delete: delete:
summary: Delete a group summary: Delete a group
parameters: parameters:
@@ -444,7 +466,7 @@ paths:
required: true required: true
type: string type: string
responses: responses:
'204': "204":
description: The group has been deleted description: The group has been deleted
/groups/{name}/users: /groups/{name}/users:
post: post:
@@ -468,10 +490,10 @@ paths:
items: items:
type: string type: string
responses: responses:
'200': "200":
description: The users have been added to the group description: The users have been added to the group
schema: schema:
$ref: '#/definitions/Group' $ref: "#/definitions/Group"
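A minimal sketch of the add-users call above. The `users` key in the request body follows the JupyterHub REST API; the group name, usernames, and token are placeholders.

```python
# Sketch: add users to a group; the response is the updated Group model.
import requests

hub_api = "http://127.0.0.1:8081/hub/api"
headers = {"Authorization": "token abc123"}

r = requests.post(
    f"{hub_api}/groups/instructors/users",
    headers=headers,
    json={"users": ["mal", "zoe"]},
)
r.raise_for_status()
print(r.json())
```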
delete: delete:
summary: Remove users from a group summary: Remove users from a group
parameters: parameters:
@@ -493,18 +515,18 @@ paths:
items: items:
type: string type: string
responses: responses:
'200': "200":
description: The users have been removed from the group description: The users have been removed from the group
/services: /services:
get: get:
summary: List services summary: List services
responses: responses:
'200': "200":
description: The service list description: The service list
schema: schema:
type: array type: array
items: items:
$ref: '#/definitions/Service' $ref: "#/definitions/Service"
/services/{name}: /services/{name}:
get: get:
summary: Get a service by name summary: Get a service by name
@@ -515,16 +537,16 @@ paths:
required: true required: true
type: string type: string
responses: responses:
'200': "200":
description: The Service model description: The Service model
schema: schema:
$ref: '#/definitions/Service' $ref: "#/definitions/Service"
/proxy: /proxy:
get: get:
summary: Get the proxy's routing table summary: Get the proxy's routing table
description: A convenience alias for getting the routing table directly from the proxy description: A convenience alias for getting the routing table directly from the proxy
responses: responses:
'200': "200":
description: Routing table description: Routing table
schema: schema:
type: object type: object
@@ -532,7 +554,7 @@ paths:
post: post:
summary: Force the Hub to sync with the proxy summary: Force the Hub to sync with the proxy
responses: responses:
'200': "200":
description: Success description: Success
patch: patch:
summary: Notify the Hub about a new proxy summary: Notify the Hub about a new proxy
@@ -558,7 +580,7 @@ paths:
type: string type: string
description: CONFIGPROXY_AUTH_TOKEN for the new proxy description: CONFIGPROXY_AUTH_TOKEN for the new proxy
responses: responses:
'200': "200":
description: Success description: Success
/authorizations/token: /authorizations/token:
post: post:
@@ -580,7 +602,7 @@ paths:
password: password:
type: string type: string
responses: responses:
'200': "200":
description: The new API token description: The new API token
schema: schema:
type: object type: object
@@ -588,7 +610,7 @@ paths:
token: token:
type: string type: string
description: The new API token. description: The new API token.
'403': "403":
description: The user can not be authenticated. description: The user can not be authenticated.
/authorizations/token/{token}: /authorizations/token/{token}:
get: get:
@@ -599,9 +621,9 @@ paths:
required: true required: true
type: string type: string
responses: responses:
'200': "200":
description: The user or service identified by the API token description: The user or service identified by the API token
'404': "404":
description: A user or service is not found. description: A user or service is not found.
/authorizations/cookie/{cookie_name}/{cookie_value}: /authorizations/cookie/{cookie_name}/{cookie_value}:
get: get:
@@ -617,15 +639,15 @@ paths:
required: true required: true
type: string type: string
responses: responses:
'200': "200":
description: The user identified by the cookie description: The user identified by the cookie
schema: schema:
$ref: '#/definitions/User' $ref: "#/definitions/User"
'404': "404":
description: A user is not found. description: A user is not found.
/oauth2/authorize: /oauth2/authorize:
get: get:
summary: 'OAuth 2.0 authorize endpoint' summary: "OAuth 2.0 authorize endpoint"
description: | description: |
Redirect users to this URL to begin the OAuth process. Redirect users to this URL to begin the OAuth process.
It is not an API endpoint. It is not an API endpoint.
@@ -651,9 +673,9 @@ paths:
required: true required: true
type: string type: string
responses: responses:
'200': "200":
description: Success description: Success
'400': "400":
description: OAuth2Error description: OAuth2Error
/oauth2/token: /oauth2/token:
post: post:
@@ -690,7 +712,7 @@ paths:
required: true required: true
type: string type: string
responses: responses:
'200': "200":
description: JSON response including the token description: JSON response including the token
schema: schema:
type: object type: object
@@ -717,9 +739,9 @@ paths:
type: boolean type: boolean
description: Whether users' notebook servers should be shutdown as well (default from Hub config) description: Whether users' notebook servers should be shutdown as well (default from Hub config)
responses: responses:
'202': "202":
description: Shutdown successful description: Shutdown successful
'400': "400":
description: Unexpected value for proxy or servers description: Unexpected value for proxy or servers
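A minimal sketch of invoking the shutdown operation this hunk documents. The endpoint path (`/shutdown` under the Hub API) is taken from the JupyterHub REST API rather than from the lines shown here, the URL and token are placeholders, and admin rights are required.

```python
# Sketch: ask the Hub to shut down, taking the proxy and user servers with it.
import requests

hub_api = "http://127.0.0.1:8081/hub/api"
headers = {"Authorization": "token abc123"}

r = requests.post(f"{hub_api}/shutdown", headers=headers,
                  json={"proxy": True, "servers": True})
print(r.status_code)  # 202 = shutdown accepted, 400 = unexpected value
```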
# Descriptions of common responses # Descriptions of common responses
responses: responses:
@@ -757,7 +779,7 @@ definitions:
type: array type: array
description: The active servers for this user. description: The active servers for this user.
items: items:
$ref: '#/definitions/Server' $ref: "#/definitions/Server"
Server: Server:
type: object type: object
properties: properties:
@@ -795,6 +817,9 @@ definitions:
state: state:
type: object type: object
description: Arbitrary internal state from this server's spawner. Only available on the hub's users list or get-user-by-name method, and only if a hub admin. None otherwise. description: Arbitrary internal state from this server's spawner. Only available on the hub's users list or get-user-by-name method, and only if a hub admin. None otherwise.
user_options:
type: object
description: User specified options for the user's spawned instance of a single-user server.
Group: Group:
type: object type: object
properties: properties:
@@ -848,7 +873,7 @@ definitions:
description: The user that owns a token (undefined if owned by a service) description: The user that owns a token (undefined if owned by a service)
service: service:
type: string type: string
description: The service that owns the token (undefined of owned by a user) description: The service that owns the token (undefined if owned by a user)
note: note:
type: string type: string
description: A note about the token, typically describing what it was created for. description: A note about the token, typically describing what it was created for.

View File

@@ -1,106 +1,4 @@
div#helm-chart-schema h2, /* Added to avoid logo being too squeezed */
div#helm-chart-schema h3, .navbar-brand {
div#helm-chart-schema h4, height: 4rem !important;
div#helm-chart-schema h5,
div#helm-chart-schema h6 {
font-family: courier new;
}
h3, h3 ~ * {
margin-left: 3% !important;
}
h4, h4 ~ * {
margin-left: 6% !important;
}
h5, h5 ~ * {
margin-left: 9% !important;
}
h6, h6 ~ * {
margin-left: 12% !important;
}
h7, h7 ~ * {
margin-left: 15% !important;
}
img.logo {
width:100%
}
.right-next {
float: right;
max-width: 45%;
overflow: auto;
text-overflow: ellipsis;
white-space: nowrap;
}
.right-next::after{
content: ' »';
}
.left-prev {
float: left;
max-width: 45%;
overflow: auto;
text-overflow: ellipsis;
white-space: nowrap;
}
.left-prev::before{
content: '« ';
}
.prev-next-bottom {
margin-top: 3em;
}
.prev-next-top {
margin-bottom: 1em;
}
/* Sidebar TOC and headers */
div.sphinxsidebarwrapper div {
margin-bottom: .8em;
}
div.sphinxsidebar h3 {
font-size: 1.3em;
padding-top: 0px;
font-weight: 800;
margin-left: 0px !important;
}
div.sphinxsidebar p.caption {
font-size: 1.2em;
margin-bottom: 0px;
margin-left: 0px !important;
font-weight: 900;
color: #767676;
}
div.sphinxsidebar ul {
font-size: .8em;
margin-top: 0px;
padding-left: 3%;
margin-left: 0px !important;
}
div.relations ul {
font-size: 1em;
margin-left: 0px !important;
}
div#searchbox form {
margin-left: 0px !important;
}
/* body elements */
.toctree-wrapper span.caption-text {
color: #767676;
font-style: italic;
font-weight: 300;
} }

View File

@@ -1,16 +0,0 @@
{# Custom template for navigation.html
alabaster theme does not provide blocks for titles to
be overridden so this custom theme handles title and
toctree for sidebar
#}
<h3>{{ _('Table of Contents') }}</h3>
{{ toctree(includehidden=theme_sidebar_includehidden, collapse=theme_sidebar_collapse) }}
{% if theme_extra_nav_links %}
<hr />
<ul>
{% for text, uri in theme_extra_nav_links.items() %}
<li class="toctree-l1"><a href="{{ uri }}">{{ text }}</a></li>
{% endfor %}
</ul>
{% endif %}

View File

@@ -1,17 +0,0 @@
{# Custom template for relations.html
alabaster theme does not provide previous/next page by default
#}
<div class="relations">
<h3>Navigation</h3>
<ul>
<li><a href="{{ pathto(master_doc) }}">Documentation Home</a><ul>
{%- if prev %}
<li><a href="{{ prev.link|e }}" title="Previous">Previous topic</a></li>
{%- endif %}
{%- if next %}
<li><a href="{{ next.link|e }}" title="Next">Next topic</a></li>
{%- endif %}
</ul>
</ul>
</div>

View File

@@ -18,7 +18,7 @@ information on:
- learning more about JupyterHub's API - learning more about JupyterHub's API
The same JupyterHub API spec, as found here, is available in an interactive form The same JupyterHub API spec, as found here, is available in an interactive form
`here (on swagger's petstore) <http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default>`__. `here (on swagger's petstore) <https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/HEAD/docs/rest-api.yml#!/default>`__.
The `OpenAPI Initiative`_ (fka Swagger™) is a project used to describe The `OpenAPI Initiative`_ (fka Swagger™) is a project used to describe
and document RESTful APIs. and document RESTful APIs.

File diff suppressed because one or more lines are too long

View File

@@ -1,7 +1,6 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
# #
import os import os
import shlex
import sys import sys
# Set paths # Set paths
@@ -20,10 +19,9 @@ extensions = [
'autodoc_traits', 'autodoc_traits',
'sphinx_copybutton', 'sphinx_copybutton',
'sphinx-jsonschema', 'sphinx-jsonschema',
'recommonmark',
] ]
templates_path = ['_templates']
# The master toctree document. # The master toctree document.
master_doc = 'index' master_doc = 'index'
@@ -59,22 +57,74 @@ default_role = 'literal'
import recommonmark import recommonmark
from recommonmark.transform import AutoStructify from recommonmark.transform import AutoStructify
# -- Config -------------------------------------------------------------
from jupyterhub.app import JupyterHub
from docutils import nodes
from sphinx.directives.other import SphinxDirective
from contextlib import redirect_stdout
from io import StringIO
# create a temp instance of JupyterHub just to get the output of the generate-config
# and help --all commands.
jupyterhub_app = JupyterHub()
class ConfigDirective(SphinxDirective):
"""Generate the configuration file output for use in the documentation."""
has_content = False
required_arguments = 0
optional_arguments = 0
final_argument_whitespace = False
option_spec = {}
def run(self):
# The generated configuration file for this version
generated_config = jupyterhub_app.generate_config_file()
# post-process output
home_dir = os.environ['HOME']
generated_config = generated_config.replace(home_dir, '$HOME', 1)
par = nodes.literal_block(text=generated_config)
return [par]
class HelpAllDirective(SphinxDirective):
"""Print the output of jupyterhub help --all for use in the documentation."""
has_content = False
required_arguments = 0
optional_arguments = 0
final_argument_whitespace = False
option_spec = {}
def run(self):
# The output of the help command for this version
buffer = StringIO()
with redirect_stdout(buffer):
jupyterhub_app.print_help('--help-all')
all_help = buffer.getvalue()
# post-process output
home_dir = os.environ['HOME']
all_help = all_help.replace(home_dir, '$HOME', 1)
par = nodes.literal_block(text=all_help)
return [par]
def setup(app): def setup(app):
app.add_config_value('recommonmark_config', {'enable_eval_rst': True}, True) app.add_config_value('recommonmark_config', {'enable_eval_rst': True}, True)
app.add_stylesheet('custom.css') app.add_css_file('custom.css')
app.add_transform(AutoStructify) app.add_transform(AutoStructify)
app.add_directive('jupyterhub-generate-config', ConfigDirective)
app.add_directive('jupyterhub-help-all', HelpAllDirective)
source_parsers = {'.md': 'recommonmark.parser.CommonMarkParser'}
source_suffix = ['.rst', '.md'] source_suffix = ['.rst', '.md']
# source_encoding = 'utf-8-sig' # source_encoding = 'utf-8-sig'
# -- Options for HTML output ---------------------------------------------- # -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. # The theme to use for HTML and HTML Help pages.
html_theme = 'pandas_sphinx_theme' html_theme = 'pydata_sphinx_theme'
html_logo = '_static/images/logo/logo.png' html_logo = '_static/images/logo/logo.png'
html_favicon = '_static/images/logo/favicon.ico' html_favicon = '_static/images/logo/favicon.ico'
@@ -166,10 +216,10 @@ intersphinx_mapping = {'https://docs.python.org/3/': None}
on_rtd = os.environ.get('READTHEDOCS', None) == 'True' on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
if on_rtd: if on_rtd:
# readthedocs.org uses their theme by default, so no need to specify it # readthedocs.org uses their theme by default, so no need to specify it
# build rest-api, since RTD doesn't run make # build both metrics and rest-api, since RTD doesn't run make
from subprocess import check_call as sh from subprocess import check_call as sh
sh(['make', 'rest-api'], cwd=docs) sh(['make', 'metrics', 'rest-api'], cwd=docs)
# -- Spell checking ------------------------------------------------------- # -- Spell checking -------------------------------------------------------

View File

@@ -13,7 +13,7 @@ Building documentation locally
We use `sphinx <http://sphinx-doc.org>`_ to build our documentation. It takes We use `sphinx <http://sphinx-doc.org>`_ to build our documentation. It takes
our documentation source files (written in `markdown our documentation source files (written in `markdown
<https://daringfireball.net/projects/markdown/>`_ or `reStructuredText <https://daringfireball.net/projects/markdown/>`_ or `reStructuredText
<http://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_ & <https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_ &
stored under the ``docs/source`` directory) and converts it into various stored under the ``docs/source`` directory) and converts it into various
formats for people to read. To make sure the documentation you write or formats for people to read. To make sure the documentation you write or
change renders correctly, it is good practice to test it locally. change renders correctly, it is good practice to test it locally.

View File

@@ -6,8 +6,8 @@ We want you to contribute to JupyterHub in ways that are most exciting
& useful to you. We value documentation, testing, bug reporting & code equally, & useful to you. We value documentation, testing, bug reporting & code equally,
and are glad to have your contributions in whatever form you wish :) and are glad to have your contributions in whatever form you wish :)
Our `Code of Conduct <https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md>`_ Our `Code of Conduct <https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md>`_
(`reporting guidelines <https://github.com/jupyter/governance/blob/master/conduct/reporting_online.md>`_) (`reporting guidelines <https://github.com/jupyter/governance/blob/HEAD/conduct/reporting_online.md>`_)
helps keep our community welcoming to as many people as possible. helps keep our community welcoming to as many people as possible.
.. toctree:: .. toctree::

View File

@@ -6,8 +6,8 @@ the community of users, contributors, and maintainers.
The goal is to communicate priorities and upcoming release plans. The goal is to communicate priorities and upcoming release plans.
It is not aimed at limiting contributions to what is listed here. It is not aimed at limiting contributions to what is listed here.
## Using the roadmap ## Using the roadmap
### Sharing Feedback on the Roadmap ### Sharing Feedback on the Roadmap
All of the community is encouraged to provide feedback as well as share new All of the community is encouraged to provide feedback as well as share new
@@ -22,17 +22,17 @@ maintainers will help identify what a good next step is for the issue.
When submitting an issue, think about what "next step" category best describes When submitting an issue, think about what "next step" category best describes
your issue: your issue:
* **now**, concrete/actionable step that is ready for someone to start work on. - **now**, concrete/actionable step that is ready for someone to start work on.
These might be items that have a link to an issue or more abstract like These might be items that have a link to an issue or more abstract like
"decrease typos and dead links in the documentation" "decrease typos and dead links in the documentation"
* **soon**, less concrete/actionable step that is going to happen soon, - **soon**, less concrete/actionable step that is going to happen soon,
discussions around the topic are coming close to an end at which point it can discussions around the topic are coming close to an end at which point it can
move into the "now" category move into the "now" category
* **later**, abstract ideas or tasks, need a lot of discussion or - **later**, abstract ideas or tasks, need a lot of discussion or
experimentation to shape the idea so that it can be executed. Can also experimentation to shape the idea so that it can be executed. Can also
contain concrete/actionable steps that have been postponed on purpose contain concrete/actionable steps that have been postponed on purpose
(these are steps that could be in "now" but the decision was taken to work on (these are steps that could be in "now" but the decision was taken to work on
them later) them later)
### Reviewing and Updating the Roadmap ### Reviewing and Updating the Roadmap
@@ -47,8 +47,8 @@ For those please create a
The roadmap should give the reader an idea of what is happening next, what needs The roadmap should give the reader an idea of what is happening next, what needs
input and discussion before it can happen and what has been postponed. input and discussion before it can happen and what has been postponed.
## The roadmap proper ## The roadmap proper
### Project vision ### Project vision
JupyterHub is a dependable tool used by humans that reduces the complexity of JupyterHub is a dependable tool used by humans that reduces the complexity of
@@ -58,20 +58,19 @@ creating the environment in which a piece of software can be executed.
These "Now" items are considered active areas of focus for the project: These "Now" items are considered active areas of focus for the project:
* HubShare - a sharing service for use with JupyterHub. - HubShare - a sharing service for use with JupyterHub.
* Users should be able to: - Users should be able to:
- Push a project to other users. - Push a project to other users.
- Get a checkout of a project from other users. - Get a checkout of a project from other users.
- Push updates to a published project. - Push updates to a published project.
- Pull updates from a published project. - Pull updates from a published project.
- Manage conflicts/merges by simply picking a version (our/theirs) - Manage conflicts/merges by simply picking a version (our/theirs)
- Get a checkout of a project from the internet. These steps are completely different from saving notebooks/files. - Get a checkout of a project from the internet. These steps are completely different from saving notebooks/files.
- Have directories that are managed by git completely separately from our stuff. - Have directories that are managed by git completely separately from our stuff.
- Look at pushed content that they have access to without an explicit pull. - Look at pushed content that they have access to without an explicit pull.
- Define and manage teams of users. - Define and manage teams of users.
- Adding/removing a user to/from a team gives/removes them access to all projects that team has access to. - Adding/removing a user to/from a team gives/removes them access to all projects that team has access to.
- Build other services, such as static HTML publishing and dashboarding on top of these things. - Build other services, such as static HTML publishing and dashboarding on top of these things.
### Soon ### Soon
@@ -79,12 +78,10 @@ These "Soon" items are under discussion. Once an item reaches the point of an
actionable plan, the item will be moved to the "Now" section. Typically, actionable plan, the item will be moved to the "Now" section. Typically,
these will be moved at a future review of the roadmap. these will be moved at a future review of the roadmap.
* resource monitoring and management: - resource monitoring and management:
- (prometheus?) API for resource monitoring - (prometheus?) API for resource monitoring
- tracking activity on single-user servers instead of the proxy - tracking activity on single-user servers instead of the proxy
- notes and activity tracking per API token - notes and activity tracking per API token
- UI for managing named servers
### Later ### Later
@@ -93,6 +90,6 @@ time there is no active plan for an item. The project would like to find the
resources and time to discuss these ideas. resources and time to discuss these ideas.
- real-time collaboration - real-time collaboration
- Enter into real-time collaboration mode for a project that starts a shared execution context. - Enter into real-time collaboration mode for a project that starts a shared execution context.
- Once the single-user notebook package supports realtime collaboration, - Once the single-user notebook package supports realtime collaboration,
implement sharing mechanism integrated into the Hub. implement sharing mechanism integrated into the Hub.

View File

@@ -45,6 +45,12 @@ When developing JupyterHub, you need to make changes to the code & see
their effects quickly. You need to do a developer install to make that their effects quickly. You need to do a developer install to make that
happen. happen.
.. note:: This guide does not attempt to dictate *how* development
environments should be isolated since that is a personal preference and can
be achieved in many ways, for example `tox`, `conda`, `docker`, etc. See this
`forum thread <https://discourse.jupyter.org/t/thoughts-on-using-tox/3497>`_ for
a more detailed discussion.
1. Clone the `JupyterHub git repository <https://github.com/jupyterhub/jupyterhub>`_ 1. Clone the `JupyterHub git repository <https://github.com/jupyterhub/jupyterhub>`_
to your computer. to your computer.
@@ -93,7 +99,14 @@ happen.
python3 -m pip install -r dev-requirements.txt python3 -m pip install -r dev-requirements.txt
python3 -m pip install -r requirements.txt python3 -m pip install -r requirements.txt
5. Install the development version of JupyterHub. This lets you edit 5. Setup a database.
The default database engine is ``sqlite`` so if you are just trying
to get up and running quickly for local development that should be
available via `python <https://docs.python.org/3.5/library/sqlite3.html>`__.
See :doc:`/reference/database` for details on other supported databases.
6. Install the development version of JupyterHub. This lets you edit
JupyterHub code in a text editor & restart the JupyterHub process to JupyterHub code in a text editor & restart the JupyterHub process to
see your code changes immediately. see your code changes immediately.
@@ -101,13 +114,13 @@ happen.
python3 -m pip install --editable . python3 -m pip install --editable .
6. You are now ready to start JupyterHub! 7. You are now ready to start JupyterHub!
.. code:: bash .. code:: bash
jupyterhub jupyterhub
7. You can access JupyterHub from your browser at 8. You can access JupyterHub from your browser at
``http://localhost:8000`` now. ``http://localhost:8000`` now.
Happy developing! Happy developing!

View File

@@ -64,5 +64,5 @@ Troubleshooting Test Failures
All the tests are failing All the tests are failing
------------------------- -------------------------
Make sure you have completed all the steps in :ref:`contributing/setup` sucessfully, and Make sure you have completed all the steps in :ref:`contributing/setup` successfully, and
can launch ``jupyterhub`` from the terminal. can launch ``jupyterhub`` from the terminal.

View File

@@ -1,10 +1,7 @@
Eventlogging and Telemetry Eventlogging and Telemetry
========================== ==========================
JupyterHub can be configured to record structured events from a running server using Jupyter's `Telemetry System`_. The types of events that JupyterHub emits are defined by `JSON schemas`_ listed below_ JupyterHub can be configured to record structured events from a running server using Jupyter's `Telemetry System`_. The types of events that JupyterHub emits are defined by `JSON schemas`_ listed at the bottom of this page_.
emitted as JSON data, defined and validated by the JSON schemas listed below.
.. _logging: https://docs.python.org/3/library/logging.html .. _logging: https://docs.python.org/3/library/logging.html
.. _`Telemetry System`: https://github.com/jupyter/telemetry .. _`Telemetry System`: https://github.com/jupyter/telemetry
@@ -38,13 +35,12 @@ Here's a basic example:
The output is a file, ``"event.log"``, with events recorded as JSON data. The output is a file, ``"event.log"``, with events recorded as JSON data.
.. _page:
.. _below:
Event schemas Event schemas
------------- -------------
.. toctree:: .. toctree::
:maxdepth: 2 :maxdepth: 2
server-actions.rst server-actions.rst

View File

@@ -8,27 +8,29 @@ high performance computing.
Please submit pull requests to update information or to add new institutions or uses. Please submit pull requests to update information or to add new institutions or uses.
## Academic Institutions, Research Labs, and Supercomputer Centers ## Academic Institutions, Research Labs, and Supercomputer Centers
### University of California Berkeley ### University of California Berkeley
- [BIDS - Berkeley Institute for Data Science](https://bids.berkeley.edu/) - [BIDS - Berkeley Institute for Data Science](https://bids.berkeley.edu/)
- [Teaching with Jupyter notebooks and JupyterHub](https://bids.berkeley.edu/resources/videos/teaching-ipythonjupyter-notebooks-and-jupyterhub)
- [Teaching with Jupyter notebooks and JupyterHub](https://bids.berkeley.edu/resources/videos/teaching-ipythonjupyter-notebooks-and-jupyterhub)
- [Data 8](http://data8.org/) - [Data 8](http://data8.org/)
- [GitHub organization](https://github.com/data-8)
- [GitHub organization](https://github.com/data-8)
- [NERSC](http://www.nersc.gov/) - [NERSC](http://www.nersc.gov/)
- [Press release on Jupyter and Cori](http://www.nersc.gov/news-publications/nersc-news/nersc-center-news/2016/jupyter-notebooks-will-open-up-new-possibilities-on-nerscs-cori-supercomputer/)
- [Moving and sharing data](https://www.nersc.gov/assets/Uploads/03-MovingAndSharingData-Cholia.pdf) - [Press release on Jupyter and Cori](http://www.nersc.gov/news-publications/nersc-news/nersc-center-news/2016/jupyter-notebooks-will-open-up-new-possibilities-on-nerscs-cori-supercomputer/)
- [Moving and sharing data](https://www.nersc.gov/assets/Uploads/03-MovingAndSharingData-Cholia.pdf)
- [Research IT](http://research-it.berkeley.edu) - [Research IT](http://research-it.berkeley.edu)
- [JupyterHub server supports campus research computation](http://research-it.berkeley.edu/blog/17/01/24/free-fully-loaded-jupyterhub-server-supports-campus-research-computation) - [JupyterHub server supports campus research computation](http://research-it.berkeley.edu/blog/17/01/24/free-fully-loaded-jupyterhub-server-supports-campus-research-computation)
### University of California Davis ### University of California Davis
- [Spinning up multiple Jupyter Notebooks on AWS for a tutorial](https://github.com/mblmicdiv/course2017/blob/master/exercises/sourmash-setup.md) - [Spinning up multiple Jupyter Notebooks on AWS for a tutorial](https://github.com/mblmicdiv/course2017/blob/HEAD/exercises/sourmash-setup.md)
Although not technically a JupyterHub deployment, this tutorial setup Although not technically a JupyterHub deployment, this tutorial setup
may be helpful to others in the Jupyter community. may be helpful to others in the Jupyter community.
@@ -62,20 +64,21 @@ easy to do with RStudio too.
### Clemson University ### Clemson University
- Advanced Computing - Advanced Computing
- [Palmetto cluster and JupyterHub](http://citi.sites.clemson.edu/2016/08/18/JupyterHub-for-Palmetto-Cluster.html) - [Palmetto cluster and JupyterHub](http://citi.sites.clemson.edu/2016/08/18/JupyterHub-for-Palmetto-Cluster.html)
### University of Colorado Boulder ### University of Colorado Boulder
- (CU Research Computing) CURC - (CU Research Computing) CURC
- [JupyterHub User Guide](https://www.rc.colorado.edu/support/user-guide/jupyterhub.html)
- Slurm job dispatched on Crestone compute cluster - [JupyterHub User Guide](https://www.rc.colorado.edu/support/user-guide/jupyterhub.html)
- log troubleshooting - Slurm job dispatched on Crestone compute cluster
- Profiles in IPython Clusters tab - log troubleshooting
- [Parallel Processing with JupyterHub tutorial](https://www.rc.colorado.edu/support/examples-and-tutorials/parallel-processing-with-jupyterhub.html) - Profiles in IPython Clusters tab
- [Parallel Programming with JupyterHub document](https://www.rc.colorado.edu/book/export/html/833) - [Parallel Processing with JupyterHub tutorial](https://www.rc.colorado.edu/support/examples-and-tutorials/parallel-processing-with-jupyterhub.html)
- [Parallel Programming with JupyterHub document](https://www.rc.colorado.edu/book/export/html/833)
- Earth Lab at CU - Earth Lab at CU
- [Tutorial on Parallel R on JupyterHub](https://earthdatascience.org/tutorials/parallel-r-on-jupyterhub/) - [Tutorial on Parallel R on JupyterHub](https://earthdatascience.org/tutorials/parallel-r-on-jupyterhub/)
### George Washington University ### George Washington University
@@ -112,7 +115,7 @@ easy to do with RStudio too.
### Paderborn University ### Paderborn University
- [Data Science (DICE) group](https://dice.cs.uni-paderborn.de/) - [Data Science (DICE) group](https://dice.cs.uni-paderborn.de/)
- [nbgraderutils](https://github.com/dice-group/nbgraderutils): Use JupyterHub + nbgrader + iJava kernel for online Java exercises. Used in lecture Statistical Natural Language Processing. - [nbgraderutils](https://github.com/dice-group/nbgraderutils): Use JupyterHub + nbgrader + iJava kernel for online Java exercises. Used in lecture Statistical Natural Language Processing.
### Penn State University ### Penn State University
@@ -125,27 +128,28 @@ easy to do with RStudio too.
### University of California San Diego ### University of California San Diego
- San Diego Supercomputer Center - Andrea Zonca - San Diego Supercomputer Center - Andrea Zonca
- [Deploy JupyterHub on a Supercomputer with SSH](https://zonca.github.io/2017/05/jupyterhub-hpc-batchspawner-ssh.html)
- [Run Jupyterhub on a Supercomputer](https://zonca.github.io/2015/04/jupyterhub-hpc.html) - [Deploy JupyterHub on a Supercomputer with SSH](https://zonca.github.io/2017/05/jupyterhub-hpc-batchspawner-ssh.html)
- [Deploy JupyterHub on a VM for a Workshop](https://zonca.github.io/2016/04/jupyterhub-sdsc-cloud.html) - [Run Jupyterhub on a Supercomputer](https://zonca.github.io/2015/04/jupyterhub-hpc.html)
- [Customize your Python environment in Jupyterhub](https://zonca.github.io/2017/02/customize-python-environment-jupyterhub.html) - [Deploy JupyterHub on a VM for a Workshop](https://zonca.github.io/2016/04/jupyterhub-sdsc-cloud.html)
- [Jupyterhub deployment on multiple nodes with Docker Swarm](https://zonca.github.io/2016/05/jupyterhub-docker-swarm.html) - [Customize your Python environment in Jupyterhub](https://zonca.github.io/2017/02/customize-python-environment-jupyterhub.html)
- [Sample deployment of Jupyterhub in HPC on SDSC Comet](https://zonca.github.io/2017/02/sample-deployment-jupyterhub-hpc.html) - [Jupyterhub deployment on multiple nodes with Docker Swarm](https://zonca.github.io/2016/05/jupyterhub-docker-swarm.html)
- [Sample deployment of Jupyterhub in HPC on SDSC Comet](https://zonca.github.io/2017/02/sample-deployment-jupyterhub-hpc.html)
- Educational Technology Services - Paul Jamason - Educational Technology Services - Paul Jamason
- [jupyterhub.ucsd.edu](https://jupyterhub.ucsd.edu) - [jupyterhub.ucsd.edu](https://jupyterhub.ucsd.edu)
### TACC University of Texas ### TACC University of Texas
### Texas A&M ### Texas A&M
- Kristen Thyng - Oceanography - Kristen Thyng - Oceanography
- [Teaching with JupyterHub and nbgrader](http://kristenthyng.com/blog/2016/09/07/jupyterhub+nbgrader/) - [Teaching with JupyterHub and nbgrader](http://kristenthyng.com/blog/2016/09/07/jupyterhub+nbgrader/)
### Elucidata ### Elucidata
- What's new in Jupyter Notebooks @[Elucidata](https://elucidata.io/):
- Using Jupyter Notebooks with Jupyterhub on GCP, managed by GKE - What's new in Jupyter Notebooks @[Elucidata](https://elucidata.io/):
- https://medium.com/elucidata/why-you-should-be-using-a-jupyter-notebook-8385a4ccd93d - Using Jupyter Notebooks with Jupyterhub on GCP, managed by GKE - https://medium.com/elucidata/why-you-should-be-using-a-jupyter-notebook-8385a4ccd93d
## Service Providers ## Service Providers
@@ -175,7 +179,6 @@ easy to do with RStudio too.
- [Deploying JupyterHub on Hadoop](https://jupyterhub-on-hadoop.readthedocs.io) - [Deploying JupyterHub on Hadoop](https://jupyterhub-on-hadoop.readthedocs.io)
## Miscellaneous ## Miscellaneous
- https://medium.com/@ybarraud/setting-up-jupyterhub-with-sudospawner-and-anaconda-844628c0dbee#.rm3yt87e1 - https://medium.com/@ybarraud/setting-up-jupyterhub-with-sudospawner-and-anaconda-844628c0dbee#.rm3yt87e1

View File

@@ -4,37 +4,37 @@ The default Authenticator uses [PAM][] to authenticate system users with
their username and password. With the default Authenticator, any user their username and password. With the default Authenticator, any user
with an account and password on the system will be allowed to login. with an account and password on the system will be allowed to login.
## Create a whitelist of users ## Create a set of allowed users
You can restrict which users are allowed to login with a whitelist,
`Authenticator.whitelist`:
You can restrict which users are allowed to login with a set,
`Authenticator.allowed_users`:
```python ```python
c.Authenticator.whitelist = {'mal', 'zoe', 'inara', 'kaylee'} c.Authenticator.allowed_users = {'mal', 'zoe', 'inara', 'kaylee'}
``` ```
Users in the whitelist are added to the Hub database when the Hub is Users in the `allowed_users` set are added to the Hub database when the Hub is
started. started.
## Configure admins (`admin_users`) ## Configure admins (`admin_users`)
Admin users of JupyterHub, `admin_users`, can add and remove users from Admin users of JupyterHub, `admin_users`, can add and remove users from
the user `whitelist`. `admin_users` can take actions on other users' the user `allowed_users` set. `admin_users` can take actions on other users'
behalf, such as stopping and restarting their servers. behalf, such as stopping and restarting their servers.
A set of initial admin users, `admin_users` can configured be as follows: A set of initial admin users, `admin_users` can be configured as follows:
```python ```python
c.Authenticator.admin_users = {'mal', 'zoe'} c.Authenticator.admin_users = {'mal', 'zoe'}
``` ```
Users in the admin list are automatically added to the user `whitelist`,
Users in the admin set are automatically added to the user `allowed_users` set,
if they are not already present. if they are not already present.
Each authenticator may have different ways of determining whether a user is an Each authenticator may have different ways of determining whether a user is an
administrator. By default JupyterHub use the PAMAuthenticator which provide the administrator. By default JupyterHub uses the PAMAuthenticator which provides the
`admin_groups` option and can determine administrator status base on a user `admin_groups` option and can set administrator status based on a user
groups. For example we can let any users in the `wheel` group be admin: group. For example we can let any user in the `wheel` group be admin:
```python ```python
c.PAMAuthenticator.admin_groups = {'wheel'} c.PAMAuthenticator.admin_groups = {'wheel'}
@@ -42,10 +42,10 @@ c.PAMAuthenticator.admin_groups = {'wheel'}
## Give admin access to other users' notebook servers (`admin_access`) ## Give admin access to other users' notebook servers (`admin_access`)
Since the default `JupyterHub.admin_access` setting is False, the admins Since the default `JupyterHub.admin_access` setting is `False`, the admins
do not have permission to log in to the single user notebook servers do not have permission to log in to the single user notebook servers
owned by *other users*. If `JupyterHub.admin_access` is set to True, owned by _other users_. If `JupyterHub.admin_access` is set to `True`,
then admins have permission to log in *as other users* on their then admins have permission to log in _as other users_ on their
respective machines, for debugging. **As a courtesy, you should make respective machines, for debugging. **As a courtesy, you should make
sure your users know if admin_access is enabled.** sure your users know if admin_access is enabled.**
@@ -53,12 +53,12 @@ sure your users know if admin_access is enabled.**
Users can be added to and removed from the Hub via either the admin Users can be added to and removed from the Hub via either the admin
panel or the REST API. When a user is **added**, the user will be panel or the REST API. When a user is **added**, the user will be
automatically added to the whitelist and database. Restarting the Hub automatically added to the `allowed_users` set and database. Restarting the Hub
will not require manually updating the whitelist in your config file, will not require manually updating the `allowed_users` set in your config file,
as the users will be loaded from the database. as the users will be loaded from the database.
After starting the Hub once, it is not sufficient to **remove** a user After starting the Hub once, it is not sufficient to **remove** a user
from the whitelist in your config file. You must also remove the user from the allowed users set in your config file. You must also remove the user
from the Hub's database, either by deleting the user from JupyterHub's from the Hub's database, either by deleting the user from JupyterHub's
admin page, or you can clear the `jupyterhub.sqlite` database and start admin page, or you can clear the `jupyterhub.sqlite` database and start
fresh. fresh.
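As a hedged illustration of the REST API route mentioned above, adding a user can look like the sketch below (Hub URL, username, and admin token are placeholders); sending DELETE to the same URL removes the user again.

```python
# Sketch: add a user via the Hub REST API instead of the admin panel.
import requests

hub_api = "http://127.0.0.1:8081/hub/api"
headers = {"Authorization": "token abc123"}  # token of an admin user

r = requests.post(f"{hub_api}/users/kaylee", headers=headers)
r.raise_for_status()  # the user is now in the allowed set and the database
```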
@@ -91,6 +91,7 @@ JupyterHub's [OAuthenticator][] currently supports the following
popular services: popular services:
- Auth0 - Auth0
- Azure AD
- Bitbucket - Bitbucket
- CILogon - CILogon
- GitHub - GitHub
@@ -106,8 +107,8 @@ with any provider, is also available.
## Use DummyAuthenticator for testing ## Use DummyAuthenticator for testing
The :class:`~jupyterhub.auth.DummyAuthenticator` is a simple authenticator that The `DummyAuthenticator` is a simple authenticator that
allows for any username/password unless if a global password has been set. If allows for any username/password unless a global password has been set. If
set, it will allow for any username as long as the correct password is provided. set, it will allow for any username as long as the correct password is provided.
To set a global password, add this to the config file: To set a global password, add this to the config file:
@@ -115,5 +116,5 @@ To set a global password, add this to the config file:
c.DummyAuthenticator.password = "some_password" c.DummyAuthenticator.password = "some_password"
``` ```
[PAM]: https://en.wikipedia.org/wiki/Pluggable_authentication_module [pam]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
[OAuthenticator]: https://github.com/jupyterhub/oauthenticator [oauthenticator]: https://github.com/jupyterhub/oauthenticator

View File

@@ -44,7 +44,7 @@ jupyterhub -f /etc/jupyterhub/jupyterhub_config.py
``` ```
The IPython documentation provides additional information on the The IPython documentation provides additional information on the
[config system](http://ipython.readthedocs.io/en/stable/development/config) [config system](http://ipython.readthedocs.io/en/stable/development/config.html)
that Jupyter uses. that Jupyter uses.
## Configure using command line options ## Configure using command line options
@@ -56,18 +56,18 @@ To display all command line options that are available for configuration:
``` ```
Configuration using the command line options is done when launching JupyterHub. Configuration using the command line options is done when launching JupyterHub.
For example, to start JupyterHub on ``10.0.1.2:443`` with https, you For example, to start JupyterHub on `10.0.1.2:443` with https, you
would enter: would enter:
```bash ```bash
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
``` ```
All configurable options may technically be set on the command-line, All configurable options may technically be set on the command line,
though some are inconvenient to type. To set a particular configuration though some are inconvenient to type. To set a particular configuration
parameter, `c.Class.trait`, you would use the command line option, parameter, `c.Class.trait`, you would use the command line option,
`--Class.trait`, when starting JupyterHub. For example, to configure the `--Class.trait`, when starting JupyterHub. For example, to configure the
`c.Spawner.notebook_dir` trait from the command-line, use the `c.Spawner.notebook_dir` trait from the command line, use the
`--Spawner.notebook_dir` option: `--Spawner.notebook_dir` option:
```bash ```bash
@@ -88,13 +88,13 @@ meant as illustration, are:
## Run the proxy separately ## Run the proxy separately
This is *not* strictly necessary, but useful in many cases. If you This is _not_ strictly necessary, but useful in many cases. If you
use a custom proxy (e.g. Traefik), this also not needed. use a custom proxy (e.g. Traefik), this is also not needed.
Connections to user servers go through the proxy, and *not* the hub Connections to user servers go through the proxy, and _not_ the hub
itself. If the proxy stays running when the hub restarts (for itself. If the proxy stays running when the hub restarts (for
maintenance, re-configuration, etc.), then use connections are not maintenance, re-configuration, etc.), then user connections are not
interrupted. For simplicity, by default the hub starts the proxy interrupted. For simplicity, by default the hub starts the proxy
automatically, so if the hub restarts, the proxy restarts, and user automatically, so if the hub restarts, the proxy restarts, and user
connections are interrupted. It is easy to run the proxy separately, connections are interrupted. It is easy to run the proxy separately,
for information see [the separate proxy page](../reference/separate-proxy). for information see [the separate proxy page](../reference/separate-proxy).

View File

@@ -0,0 +1,35 @@
# Frequently asked questions
### How do I share links to notebooks?
In short, where you see `/user/name/notebooks/foo.ipynb` use `/hub/user-redirect/notebooks/foo.ipynb` (replace `/user/name` with `/hub/user-redirect`).
Sharing links to notebooks is a common activity,
and can look different based on what you mean.
Your first instinct might be to copy the URL you see in the browser,
e.g. `hub.jupyter.org/user/yourname/notebooks/coolthing.ipynb`.
However, let's break down what this URL means:
`hub.jupyter.org/user/yourname/` is the URL prefix handled by _your server_,
which means that sharing this URL is asking the person you share the link with
to come to _your server_ and look at the exact same file.
In most circumstances, this is forbidden by permissions because the person you share with does not have access to your server.
What actually happens when someone visits this URL will depend on whether your server is running and other factors.
But what is our actual goal?
A typical situation is that you have some shared or common filesystem,
such that the same path corresponds to the same document
(either the exact same document or another copy of it).
Typically, what folks want when they do sharing like this
is for each visitor to open the same file _on their own server_,
so Breq would open `/user/breq/notebooks/foo.ipynb` and
Seivarden would open `/user/seivarden/notebooks/foo.ipynb`, etc.
JupyterHub has a special URL that does exactly this!
It's called `/hub/user-redirect/...`.
So if you replace `/user/yourname` in your URL bar
with `/hub/user-redirect` any visitor should get the same
URL on their own server, rather than visiting yours.
In JupyterLab 2.0, this should also be the result of the "Copy Shareable Link"
action in the file browser.
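A tiny sketch of the substitution this answer describes, using the example URL above:

```python
# Replace the personal /user/<name> prefix with /hub/user-redirect
# to get a link that opens the same path on each visitor's own server.
personal = "https://hub.jupyter.org/user/yourname/notebooks/coolthing.ipynb"
shareable = personal.replace("/user/yourname", "/hub/user-redirect", 1)
print(shareable)
# -> https://hub.jupyter.org/hub/user-redirect/notebooks/coolthing.ipynb
```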

View File

@@ -15,4 +15,5 @@ own JupyterHub.
authenticators-users-basics authenticators-users-basics
spawners-basics spawners-basics
services-basics services-basics
faq
institutional-faq institutional-faq

View File

@@ -11,30 +11,30 @@ Yes! JupyterHub has been used at-scale for large pools of users, as well
as complex and high-performance computing. For example, UC Berkeley uses as complex and high-performance computing. For example, UC Berkeley uses
JupyterHub for its Data Science Education Program courses (serving over JupyterHub for its Data Science Education Program courses (serving over
3,000 students). The Pangeo project uses JupyterHub to provide access 3,000 students). The Pangeo project uses JupyterHub to provide access
to scalable cloud computing with Dask. JupyterHub is stable customizable to scalable cloud computing with Dask. JupyterHub is stable and customizable
to the use-cases of large organizations. to the use-cases of large organizations.
### I keep hearing about Jupyter Notebook, JupyterLab, and now JupyterHub. What's the difference? ### I keep hearing about Jupyter Notebook, JupyterLab, and now JupyterHub. What's the difference?
Here is a quick breakdown of these three tools: Here is a quick breakdown of these three tools:
* **The Jupyter Notebook** is a document specification (the `.ipynb`) file that interweaves - **The Jupyter Notebook** is a document specification (the `.ipynb`) file that interweaves
narrative text with code cells and their outputs. It is also a graphical interface narrative text with code cells and their outputs. It is also a graphical interface
that allows users to edit these documents. There are also several other graphical interfaces that allows users to edit these documents. There are also several other graphical interfaces
that allow users to edit the `.ipynb` format (nteract, Jupyer Lab, Google Colab, Kaggle, etc). that allow users to edit the `.ipynb` format (nteract, Jupyter Lab, Google Colab, Kaggle, etc).
* **JupyterLab** is a flexible and extendible user interface for interactive computing. It - **JupyterLab** is a flexible and extendible user interface for interactive computing. It
has several extensions that are tailored for using Jupyter Notebooks, as well as extensions has several extensions that are tailored for using Jupyter Notebooks, as well as extensions
for other parts of the data science stack. for other parts of the data science stack.
* **JupyterHub** is an application that manages interactive computing sessions for **multiple users**. - **JupyterHub** is an application that manages interactive computing sessions for **multiple users**.
It also connects them with infrastructure those users wish to access. It can provide It also connects them with infrastructure those users wish to access. It can provide
remote access to Jupyter Notebooks and Jupyter Lab for many people. remote access to Jupyter Notebooks and JupyterLab for many people.
## For management ## For management
### Briefly, what problem does JupyterHub solve for us? ### Briefly, what problem does JupyterHub solve for us?
JupyterHub provides a shared platform for data science and collaboration. JupyterHub provides a shared platform for data science and collaboration.
It allows users to utilize familiar data science workflows (such as the scientific python stack, It allows users to utilize familiar data science workflows (such as the scientific Python stack,
the R tidyverse, and Jupyter Notebooks) on institutional infrastructure. It also allows administrators the R tidyverse, and Jupyter Notebooks) on institutional infrastructure. It also allows administrators
some control over access to resources, security, environments, and authentication. some control over access to resources, security, environments, and authentication.
@@ -50,20 +50,20 @@ scalable infrastructure, large datasets, and high-performance computing.
JupyterHub is used at a variety of institutions in academia,
industry, and government research labs. It is most commonly used by two kinds of groups:
- Small teams (e.g., data science teams, research labs, or collaborative projects) to provide a
shared resource for interactive computing, collaboration, and analytics.
- Large teams (e.g., a department, a large class, or a large group of remote users) to provide
access to organizational hardware, data, and analytics environments at scale.
Here is a sample of organizations that use JupyterHub:
- **Universities and colleges**: UC Berkeley, UC San Diego, Cal Poly SLO, Harvard University, University of Chicago,
University of Oslo, University of Sheffield, Université Paris Sud, University of Versailles
- **Research laboratories**: NASA, NCAR, NOAA, the Large Synoptic Survey Telescope, Brookhaven National Lab,
Minnesota Supercomputing Institute, ALCF, CERN, Lawrence Livermore National Laboratory
- **Online communities**: Pangeo, Quantopian, mybinder.org, MathHub, Open Humans
- **Computing infrastructure providers**: NERSC, San Diego Supercomputing Center, Compute Canada
- **Companies**: Capital One, SANDVIK code, Globus
See the [Gallery of JupyterHub deployments](../gallery-jhub-deployments.md) for
a more complete list of JupyterHub deployments at institutions.
@@ -95,14 +95,13 @@ The most common way to set up a JupyterHub is to use a JupyterHub distribution,
and opinionated ways to set up a JupyterHub on particular kinds of infrastructure. The two distributions
that we currently suggest are:
- [Zero to JupyterHub for Kubernetes](https://z2jh.jupyter.org) is a scalable JupyterHub deployment and
guide that runs on Kubernetes. Better for larger or dynamic user groups (50-10,000) or more complex
compute/data needs.
- [The Littlest JupyterHub](https://tljh.jupyter.org) is a lightweight JupyterHub that runs on a
single machine (in the cloud or under your desk). Better for smaller user groups (4-80) or more
lightweight computational resources.
### Does JupyterHub run well in the cloud?
Yes - most deployments of JupyterHub are run via cloud infrastructure and on a variety of cloud providers.
@@ -123,9 +122,9 @@ The short answer: yes. JupyterHub as a standalone application has been battle-te
level for several years, and makes a number of "default" security decisions that are reasonable for most
users.
- For security considerations in the base JupyterHub application,
[see the JupyterHub security page](https://jupyterhub.readthedocs.io/en/stable/reference/websecurity.html).
- For security considerations when deploying JupyterHub on Kubernetes, see the
[JupyterHub on Kubernetes security page](https://zero-to-jupyterhub.readthedocs.io/en/latest/security.html).
The longer answer: it depends on your deployment. Because JupyterHub is very flexible, it can be used
@@ -137,15 +136,13 @@ If you are worried about security, don't hesitate to reach out to the JupyterHub
[Jupyter Community Forum](https://discourse.jupyter.org/c/jupyterhub). This community of practice has many
individuals with experience running secure JupyterHub deployments.
### Does JupyterHub provide computing or data infrastructure?
No - JupyterHub manages user sessions and can _control_ computing infrastructure, but it does not provide these
things itself. You are expected to run JupyterHub on your own infrastructure (local or in the cloud). Moreover,
JupyterHub has no internal concept of "data", but is designed to be able to communicate with data repositories
(again, either locally or remotely) for use within interactive computing sessions.
### How do I manage users?
JupyterHub offers a few options for managing your users. Upon setting up a JupyterHub, you can choose what
@@ -154,7 +151,7 @@ email address, or choose a username / password when they first log-in, or offloa
another service such as an organization's OAuth.
The users of a JupyterHub are stored locally, and can be modified manually by an administrator of the JupyterHub.
Moreover, the _active_ users on a JupyterHub can be found on the administrator's page. This page
gives you the ability to stop or restart kernels, inspect user filesystems, and even take over user
sessions to assist them with debugging.
@@ -182,12 +179,11 @@ connect with other infrastructure tools (like Dask or Spark). This allows users
scalable or high-performance resources from within their JupyterHub sessions. The logic of
how those resources are controlled is taken care of by the non-JupyterHub application.
### Can JupyterHub be used with my high-performance computing resources?
Yes - JupyterHub can provide access to many kinds of computing infrastructure.
Especially when combined with other open-source schedulers such as Dask, you can manage fairly
complex computing infrastructures from the interactive sessions of a JupyterHub. For example,
[see the Dask HPC page](https://docs.dask.org/en/latest/setup/hpc.html).
### How many resources do user sessions take?
@@ -196,7 +192,7 @@ This is highly configurable by the administrator. If you wish for your users to
data analytics environments for prototyping and light data exploration, you can restrict their
memory and CPU based on the resources that you have available. If you'd like your JupyterHub
to serve as a gateway to high-performance compute or data resources, you may increase the
resources available on user machines, or connect them with computing infrastructures elsewhere.
### Can I customize the look and feel of a JupyterHub?
@@ -218,16 +214,14 @@ the technologies your JupyterHub will use (e.g., dev-ops knowledge with cloud co
In general, the base JupyterHub deployment is not the bottleneck for setup; it is connecting
your JupyterHub with the various services and tools that you wish to provide to your users.
### How well does JupyterHub scale? What are JupyterHub's limitations?
JupyterHub works well at both a small scale (e.g., a single VM or machine) and a
high scale (e.g., a scalable Kubernetes cluster). It can be used for teams as small as 2, and
for user bases as large as 10,000. The scalability of JupyterHub largely depends on the
infrastructure on which it is deployed. JupyterHub has been designed to be lightweight and
flexible, so you can tailor your JupyterHub deployment to your needs.
### Is JupyterHub resilient? What happens when a machine goes down?
For JupyterHubs that are deployed in a containerized environment (e.g., Kubernetes), it is
@@ -255,7 +249,7 @@ share their results with one another.
JupyterHub also provides a computational framework to share computational narratives between
different levels of an organization. For example, data scientists can share Jupyter Notebooks
rendered as [Voilà dashboards](https://voila.readthedocs.io/en/stable/) with those who are not
familiar with programming, or create publicly-available interactive analyses to allow others to
interact with their work.

View File

@@ -11,7 +11,7 @@ This section will help you with basic proxy and network configuration to:
The Proxy's main IP address setting determines where JupyterHub is available to users.
By default, JupyterHub is configured to be available on all network interfaces
(`''`) on port 8000. _Note_: Use of `'*'` is discouraged for IP configuration;
instead, use of `'0.0.0.0'` is preferred.
Changing the Proxy's main IP address and port can be done with the following
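A minimal sketch of such a configuration in `jupyterhub_config.py` (the address and port values here are only illustrative):
```python
c.JupyterHub.ip = '10.0.1.2'  # public-facing IP for the proxy to listen on
c.JupyterHub.port = 443       # public-facing port
```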
@@ -43,7 +43,7 @@ port.
By default, this REST API listens on port 8001 of `localhost` only.
The Hub service talks to the proxy via a REST API on a secondary port. The
API URL can be configured separately to override the default settings.
### Set api_url
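As a hedged sketch, the proxy's API endpoint can be moved to another port with a setting along these lines (the port shown is illustrative):
```python
c.ConfigurableHTTPProxy.api_url = 'http://127.0.0.1:5432'
```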
@@ -74,7 +74,7 @@ The Hub service listens only on `localhost` (port 8081) by default.
The Hub needs to be accessible from both the proxy and all Spawners.
When spawning local servers, an IP address setting of `localhost` is fine.
If _either_ the Proxy _or_ (more likely) the Spawners will be remote or
isolated in containers, the Hub must listen on an IP that is accessible.
```python
@@ -82,20 +82,20 @@ c.JupyterHub.hub_ip = '10.0.1.4'
c.JupyterHub.hub_port = 54321
```
**Added in 0.8:** The `c.JupyterHub.hub_connect_ip` setting is the IP address or
hostname that other services should use to connect to the Hub. A common
configuration, e.g. for Docker, is:
```python
c.JupyterHub.hub_ip = '0.0.0.0'  # listen on all interfaces
c.JupyterHub.hub_connect_ip = '10.0.1.4'  # IP as seen on the docker network. Can also be a hostname.
```
## Adjusting the hub's URL
The hub will most commonly be running on a hostname of its own. If it
is not (for example, if the hub is being reverse-proxied and being
exposed at a URL such as `https://proxy.example.org/jupyter/`), then
you will need to tell JupyterHub the base URL of the service. In such
a case, it is both necessary and sufficient to set
`c.JupyterHub.base_url = '/jupyter/'` in the configuration.
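A minimal sketch of that setting in `jupyterhub_config.py`:
```python
c.JupyterHub.base_url = '/jupyter/'  # hub is exposed at https://proxy.example.org/jupyter/
```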

View File

@@ -80,6 +80,49 @@ To achieve this, simply omit the configuration settings
``c.JupyterHub.ssl_key`` and ``c.JupyterHub.ssl_cert``
(setting them to ``None`` does not have the same effect, and is an error).
.. _authentication-token:
Proxy authentication token
--------------------------
The Hub authenticates its requests to the Proxy using a secret token that
the Hub and Proxy agree upon. Note that this applies to the default
``ConfigurableHTTPProxy`` implementation. Not all proxy implementations
use an auth token.
The value of this token should be a random string (for example, generated by
``openssl rand -hex 32``). You can store it in the configuration file or an
environment variable.
Generating and storing token in the configuration file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can set the value in the configuration file, ``jupyterhub_config.py``:
.. code-block:: python
c.ConfigurableHTTPProxy.api_token = 'abc123...' # any random string
Generating and storing as an environment variable
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can pass this value of the proxy authentication token to the Hub and Proxy
using the ``CONFIGPROXY_AUTH_TOKEN`` environment variable:
.. code-block:: bash
export CONFIGPROXY_AUTH_TOKEN=$(openssl rand -hex 32)
This environment variable needs to be visible to the Hub and Proxy.
Default if token is not set
~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you don't set the Proxy authentication token, the Hub will generate a random
key itself, which means that any time you restart the Hub you **must also
restart the Proxy**. If the proxy is a subprocess of the Hub, this should happen
automatically (this is the default configuration).
.. _cookie-secret:
Cookie secret
-------------
@@ -146,41 +189,73 @@ itself, ``jupyterhub_config.py``, as a binary string:
If the cookie secret value changes for the Hub, all single-user notebook
servers must also be restarted.
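A common approach (a sketch; the path is illustrative) is to keep the secret in a dedicated file and point the Hub at it:
.. code-block:: python
   c.JupyterHub.cookie_secret_file = '/srv/jupyterhub/jupyterhub_cookie_secret'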
.. _cookies:
Cookies used by JupyterHub authentication
-----------------------------------------
The following cookies are used by the Hub for handling user authentication.
This section was created based on this post_ from Discourse.
.. _post: https://discourse.jupyter.org/t/how-to-force-re-login-for-users/1998/6
jupyterhub-hub-login
~~~~~~~~~~~~~~~~~~~~
This is the login token used when visiting Hub-served pages that are
protected by authentication, such as the main home page, the spawn form, etc.
If this cookie is set, then the user is logged in.
Resetting the Hub cookie secret effectively revokes this cookie.
This cookie is restricted to the path ``/hub/``.
jupyterhub-user-<username>
~~~~~~~~~~~~~~~~~~~~~~~~~~
This is the cookie used for authenticating with a single-user server.
It is set by the single-user server after OAuth with the Hub.
Effectively the same as ``jupyterhub-hub-login``, but for the
single-user server instead of the Hub. It contains an OAuth access token,
which is checked with the Hub to authenticate the browser.
Each OAuth access token is associated with a session id (see the ``jupyterhub-session-id`` section
below).
To avoid hitting the Hub on every request, the authentication response
is cached. To avoid a stale cache, the cache key is composed of both
the token and the session id.
Resetting the Hub cookie secret effectively revokes this cookie.
This cookie is restricted to the path ``/user/<username>``, so that
only the user's server receives it.
jupyterhub-session-id
~~~~~~~~~~~~~~~~~~~~~
This is a random string, meaningless in itself, and the only cookie
shared by the Hub and single-user servers.
Its sole purpose is to coordinate logout of the multiple OAuth cookies.
This cookie is set to ``/`` so all endpoints can receive it, clear it, etc.
jupyterhub-user-<username>-oauth-state
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A short-lived cookie, used solely to store and validate OAuth state.
It is only set while OAuth between the single-user server and the Hub
is in progress.
If you use your browser's development tools, you should see this cookie
for a very brief moment before you are logged in,
with an expiration date shorter than that of ``jupyterhub-hub-login`` or
``jupyterhub-user-<username>``.
This cookie should not exist after you have successfully logged in.
This cookie is restricted to the path ``/user/<username>``, so that only
the user's server receives it.

View File

@@ -2,10 +2,10 @@
When working with JupyterHub, a **Service** is defined as a process
that interacts with the Hub's REST API. A Service may perform a specific
action or task. For example, shutting down individuals' single-user
notebook servers that have been idle for some time is a good example of
a task that could be automated by a Service. Let's look at how the
[jupyterhub_idle_culler][] script can be used as a Service.
## Real-world example to cull idle servers
@@ -15,11 +15,11 @@ document will:
- explain some basic information about API tokens
- clarify that API tokens can be used to authenticate to
single-user servers as of [version 0.8.0](../changelog)
- show how the [jupyterhub_idle_culler][] script can be:
  - used in a Hub-managed service
  - run as a standalone script
Both examples for `jupyterhub_idle_culler` will communicate tasks to the
Hub via the REST API.
## API Token basics
@@ -78,17 +78,23 @@ single-user servers, and only cookies can be used for authentication.
0.8 supports using JupyterHub API tokens to authenticate to single-user
servers.
## Configure the idle culler to run as a Hub-Managed Service
Install the idle culler:
```
pip install jupyterhub-idle-culler
```
In `jupyterhub_config.py`, add the following dictionary for the
`idle-culler` Service to the `c.JupyterHub.services` list:
```python
c.JupyterHub.services = [
    {
        'name': 'idle-culler',
        'admin': True,
        'command': [sys.executable, '-m', 'jupyterhub_idle_culler', '--timeout=3600'],
    }
]
```
@@ -101,21 +107,21 @@ where:
## Run `cull-idle` manually as a standalone script
Now you can run your script by providing it
the API token and it will authenticate through the REST API to
interact with it.
This will run the idle culler service manually. It can be run as a standalone
script anywhere with access to the Hub, and will periodically check for idle
servers and shut them down via the Hub's REST API. In order to shut down the
servers, the token given to `cull-idle` must have admin privileges.
Generate an API token and store it in the `JUPYTERHUB_API_TOKEN` environment
variable. Run `jupyterhub_idle_culler` manually.
```bash
export JUPYTERHUB_API_TOKEN='token'
python -m jupyterhub_idle_culler [--timeout=900] [--url=http://127.0.0.1:8081/hub/api]
```
[jupyterhub_idle_culler]: https://github.com/jupyterhub/jupyterhub-idle-culler

View File

@@ -1,8 +1,8 @@
# Spawners and single-user notebook servers
Since the single-user server is an instance of `jupyter notebook`, an entire separate
multi-process application, there are many aspects of that server that can be configured, and a lot
of ways to express that configuration.
At the JupyterHub level, you can set some values on the Spawner. The simplest of these is
`Spawner.notebook_dir`, which lets you set the root directory for a user's server. This root
@@ -14,7 +14,7 @@ expanded to the user's home directory.
c.Spawner.notebook_dir = '~/notebooks'
```
You can also specify extra command line arguments to the notebook server with:
```python
c.Spawner.args = ['--debug', '--profile=PHYS131']

View File

@@ -3,11 +3,11 @@ JupyterHub
==========
`JupyterHub`_ is the best way to serve `Jupyter notebook`_ for multiple users.
It can be used in a class of students, a corporate data science group or scientific
research group. It is a multi-user **Hub** that spawns, manages, and proxies multiple
instances of the single-user `Jupyter notebook`_ server.
To make life easier, JupyterHub has distributions. Be sure to
take a look at them before continuing with the configuration of the broader,
original `JupyterHub`_ system. Today, you can find two main cases:
@@ -115,8 +115,8 @@ We want you to contribute to JupyterHub in ways that are most exciting
& useful to you. We value documentation, testing, bug reporting & code equally,
and are glad to have your contributions in whatever form you wish :)
Our `Code of Conduct <https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md>`_
(`reporting guidelines <https://github.com/jupyter/governance/blob/HEAD/conduct/reporting_online.md>`_)
helps keep our community welcoming to as many people as possible.
.. toctree::
@@ -147,4 +147,4 @@ Questions? Suggestions?
.. _JupyterHub: https://github.com/jupyterhub/jupyterhub
.. _Jupyter notebook: https://jupyter-notebook.readthedocs.io/en/latest/
.. _REST API: https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/HEAD/docs/rest-api.yml#!/default

View File

@@ -1,338 +0,0 @@
# Install JupyterHub and JupyterLab from the ground up
The combination of [JupyterHub](https://jupyterhub.readthedocs.io) and [JupyterLab](https://jupyterlab.readthedocs.io)
is a great way to make shared computing resources available to a group.
These instructions are a guide for a manual, 'bare metal' install of [JupyterHub](https://jupyterhub.readthedocs.io)
and [JupyterLab](https://jupyterlab.readthedocs.io). This is ideal for running on a single server: build a beast
of a machine and share it within your lab, or use a virtual machine from any VPS or cloud provider.
This guide has similar goals to [The Littlest JupyterHub](https://the-littlest-jupyterhub.readthedocs.io) setup
script. However, instead of bundling all these steps for you into one installer, we will perform every step manually.
This makes it easy to customize any part (e.g. if you want to run other services on the same system and need to make them
work together), as well as giving you full control and understanding of your setup.
## Prerequisites
Your own server with administrator (root) access. This could be a local machine, a remotely hosted one, or a cloud instance
or VPS. Each user who will access JupyterHub should have a standard user account on the machine. The install will be done
through the command line - useful if you log into your machine remotely using SSH.
This tutorial was tested on **Ubuntu 18.04**. No other Linux distributions have been tested, but the instructions
should be reasonably straightforward to adapt.
## Goals
JupyterLab enables access to multiple 'kernels', each one being a given environment for a given language. The most
common is a Python environment, which for scientific computing is usually one managed by the `conda` package manager.
This guide will set up JupyterHub and JupyterLab separately from the Python environment. In other words, we treat
JupyterHub+JupyterLab as an 'app' or web service, which will connect to the kernels available on the system. Specifically:
- We will create an installation of JupyterHub and JupyterLab using a virtualenv under `/opt` using the system Python.
- We will install conda globally.
- We will create a shared conda environment which can be used (but not modified) by all users.
- We will show how users can create their own private conda environments, where they can install whatever they like.
The default JupyterHub Authenticator uses PAM to authenticate system users with their username and password. One can
[choose the authenticator](https://jupyterhub.readthedocs.io/en/stable/reference/authenticators.html#authenticators)
that best suits their needs. In this guide we will use the default Authenticator because it makes it easy for everyone to manage data
in their home folder and to mix and match different services and access methods (e.g. SSH) which all work using the
Linux system user accounts. Therefore, each user of JupyterHub will need a standard system user account.
Another goal of this guide is to use system provided packages wherever possible. This has the advantage that these packages
get automatic patches and security updates (be sure to turn on automatic updates in Ubuntu). This means less maintenance
work and a more reliable system.
## Part 1: JupyterHub and JupyterLab
### Setup the JupyterHub and JupyterLab in a virtual environment
First we create a virtual environment under '/opt/jupyterhub'. The '/opt' folder is where apps not belonging to the operating
system are [commonly installed](https://unix.stackexchange.com/questions/11544/what-is-the-difference-between-opt-and-usr-local).
Both jupyterlab and jupyterhub will be installed into this virtualenv. Create it with the command:
```sh
sudo python3 -m venv /opt/jupyterhub/
```
Now we use pip to install the required Python packages into the new virtual environment. Be sure to install
`wheel` first. Since we are separating the user interface from the computing kernels, we don't install
any Python scientific packages here. The only exception is `ipywidgets` because this is needed to allow connection
between interactive tools running in the kernel and the user interface.
Note that we use `/opt/jupyterhub/bin/python3 -m pip install` each time - this [makes sure](https://snarky.ca/why-you-should-use-python-m-pip/)
that the packages are installed to the correct virtual environment.
Perform the install using the following commands:
```sh
sudo /opt/jupyterhub/bin/python3 -m pip install wheel
sudo /opt/jupyterhub/bin/python3 -m pip install jupyterhub jupyterlab
sudo /opt/jupyterhub/bin/python3 -m pip install ipywidgets
```
JupyterHub also currently defaults to requiring `configurable-http-proxy`, which needs `nodejs` and `npm`. The versions
of these available in Ubuntu therefore need to be installed first (they are a bit old but this is ok for our needs):
```sh
sudo apt install nodejs npm
```
Then install `configurable-http-proxy`:
```sh
sudo npm install -g configurable-http-proxy
```
### Create the configuration for JupyterHub
Now we start creating configuration files. To keep everything together, we put all the configuration into the folder
created for the virtualenv, under `/opt/jupyterhub/etc/`. For each thing needing configuration, we will create a further
subfolder and necessary files.
First create the folder for the JupyterHub configuration and navigate to it:
```sh
sudo mkdir -p /opt/jupyterhub/etc/jupyterhub/
cd /opt/jupyterhub/etc/jupyterhub/
```
Then generate the default configuration file
```sh
sudo /opt/jupyterhub/bin/jupyterhub --generate-config
```
This will produce the default configuration file `/opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py`
You will need to edit the configuration file to make the JupyterLab interface the default.
Set the following configuration option in your `jupyterhub_config.py` file:
```python
c.Spawner.default_url = '/lab'
```
Further configuration options may be found in the documentation.
### Set up the Systemd service
We will set up JupyterHub to run as a system service using Systemd (which is responsible for managing all services and
servers that run on startup in Ubuntu). We will create a service file in a suitable location in the virtualenv folder
and then link it to the system services. First create the folder for the service file:
```sh
sudo mkdir -p /opt/jupyterhub/etc/systemd
```
Then create the following text file using your [favourite editor](https://micro-editor.github.io/) at
```sh
/opt/jupyterhub/etc/systemd/jupyterhub.service
```
Paste the following service unit definition into the file:
```
[Unit]
Description=JupyterHub
After=syslog.target network.target
[Service]
User=root
Environment="PATH=/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/jupyterhub/bin"
ExecStart=/opt/jupyterhub/bin/jupyterhub -f /opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py
[Install]
WantedBy=multi-user.target
```
This sets up the environment to use the virtual environment we created, tells Systemd how to start jupyterhub using
the configuration file we created, specifies that jupyterhub will be started as the `root` user (needed so that it can
start jupyter on behalf of other logged in users), and specifies that jupyterhub should start on boot after the network
is enabled.
Finally, we need to make systemd aware of our service file. First we symlink our file into systemd's directory:
```sh
sudo ln -s /opt/jupyterhub/etc/systemd/jupyterhub.service /etc/systemd/system/jupyterhub.service
```
Then tell systemd to reload its configuration files
```sh
sudo systemctl daemon-reload
```
And finally enable the service
```sh
sudo systemctl enable jupyterhub.service
```
The service will start on reboot, but we can start it straight away using:
```sh
sudo systemctl start jupyterhub.service
```
...and check that it's running using:
```sh
sudo systemctl status jupyterhub.service
```
You should now be able to access JupyterHub at `<your server's ip>:8000` (assuming you haven't already set
up a firewall or similar). However, when you log in, the Jupyter notebooks will be trying to use the Python virtualenv
that was created to install JupyterHub, which is not what we want. So, on to part 2.
## Part 2: Conda environments
### Install conda for the whole system
We will use `conda` to manage Python environments. We will install the officially maintained `conda` packages for Ubuntu,
which means they will get automatic updates with the rest of the system. Set up the repo for the official Conda Debian packages;
the instructions are copied from [here](https://docs.conda.io/projects/conda/en/latest/user-guide/install/rpm-debian.html):
Install the Anaconda public GPG key to the trusted store
```sh
curl https://repo.anaconda.com/pkgs/misc/gpgkeys/anaconda.asc | gpg --dearmor > conda.gpg
sudo install -o root -g root -m 644 conda.gpg /etc/apt/trusted.gpg.d/
```
Add Debian repo
```sh
echo "deb [arch=amd64] https://repo.anaconda.com/pkgs/misc/debrepo/conda stable main" | sudo tee /etc/apt/sources.list.d/conda.list
```
Install conda
```sh
sudo apt update
sudo apt install conda
```
This will install conda into the folder `/opt/conda/`, with the conda command available at `/opt/conda/bin/conda`.
Finally, we can make conda more easily available to users by symlinking the conda shell setup script to the profile
'drop in' folder so that it gets run on login
```sh
sudo ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh
```
### Install a default conda environment for all users
First create a folder for conda envs (might exist already):
```sh
sudo mkdir /opt/conda/envs/
```
Then create a conda environment to your liking within that folder. Here we have called it 'python' because it will
be the obvious default - call it whatever you like. You can install whatever you like into this environment, but you MUST at least install `ipykernel`.
```sh
sudo /opt/conda/bin/conda create --prefix /opt/conda/envs/python python=3.7 ipykernel
```
Once your env is set up as desired, make it visible to Jupyter by installing the kernel spec. There are two options here:
1) Install into the JupyterHub virtualenv - this ensures it overrides the default Python version. It will only be visible
to the JupyterHub installation we have just created. This is useful to avoid conda environments appearing where they are not expected.
```sh
sudo /opt/conda/envs/python/bin/python -m ipykernel install --prefix=/opt/jupyterhub/ --name 'python' --display-name "Python (default)"
```
2) Install it system-wide by putting it into `/usr/local`. It will be visible to any parallel install of JupyterHub or
JupyterLab, and will persist even if you later delete or modify the JupyterHub installation. This is useful if the kernels
might be used by other services, or if you want to modify the JupyterHub installation independently from the conda environments.
```sh
sudo /opt/conda/envs/python/bin/python -m ipykernel install --prefix /usr/local/ --name 'python' --display-name "Python (default)"
````
### Setting up users' own conda environments
There is relatively little for the administrator to do here, as users will have to set up their own environments using the shell.
On login they should run `conda init` or `/opt/conda/bin/conda`. They can then use conda to set up their environment,
although they must also install `ipykernel`. Once done, they can enable their kernel using:
```sh
/path/to/kernel/env/bin/python -m ipykernel install --name 'python-my-env' --display-name "Python My Env"
```
This will place the kernel spec into their home folder, where Jupyter will look for it on startup.
## Setting up a reverse proxy
The guide so far results in JupyterHub running on port 8000. It is not generally advisable to run open web services in
this way - instead, use a reverse proxy running on standard HTTP/HTTPS ports.
> **Important**: Be aware of the security implications especially if you are running a server that is accessible from the open internet
> i.e. not protected within an institutional intranet or private home/office network. You should set up a firewall and
> HTTPS encryption, which is outside of the scope of this guide. For HTTPS consider using [LetsEncrypt](https://letsencrypt.org/)
> or setting up a [self-signed certificate](https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-18-04).
> Firewalls may be set up using `ufw` or `firewalld` and combined with `fail2ban`.
### Using Nginx
Nginx is a mature and established web server and reverse proxy and is easy to install using `sudo apt install nginx`.
Details on using Nginx as a reverse proxy can be found elsewhere. Here, we will only outline the additional steps needed
to setup JupyterHub with Nginx and host it at a given URL e.g. `<your-server-ip-or-url>/jupyter`.
This could be useful for example if you are running several services or web pages on the same server.
Achieving this needs a few tweaks to both the JupyterHub configuration and the Nginx config. First, edit the
configuration file `/opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py` and add the line:
```python
c.JupyterHub.bind_url = 'http://:8000/jupyter'
```
where `/jupyter` will be the relative URL of the JupyterHub.
Now Nginx must be configured to pass all traffic from `/jupyter` to the local address `127.0.0.1:8000`.
Add the following snippet to your nginx configuration file (e.g. `/etc/nginx/sites-available/default`).
```
location /jupyter/ {
# NOTE important to also set base url of jupyterhub to /jupyter in its config
proxy_pass http://127.0.0.1:8000;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# websocket headers
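# note: $connection_upgrade is not a built-in Nginx variable; it is typically
# defined with a "map $http_upgrade $connection_upgrade { ... }" block in the http context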
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
```
Nginx will not run if there are errors in the configuration; check your configuration using:
```sh
nginx -t
```
If there are no errors, you can restart the Nginx service for the new configuration to take effect.
```sh
sudo systemctl restart nginx.service
```
## Getting started using your new JupyterHub
Once you have setup JupyterHub and Nginx proxy as described, you can browse to your JupyterHub IP or URL
(e.g. if your server IP address is `123.456.789.1` and you decided to host JupyterHub at the `/jupyter` URL, browse
to `123.456.789.1/jupyter`). You will find a login page where you enter your Linux username and password. On login
you will be presented with the JupyterLab interface, with the file browser pane showing the contents of your
home directory on the server.

View File

@@ -0,0 +1,6 @@
:orphan:
JupyterHub the hard way
=======================
This guide has moved to https://github.com/jupyterhub/jupyterhub-the-hard-way/blob/HEAD/docs/installation-guide-hard.md

View File

@@ -11,4 +11,3 @@ running on your own infrastructure.
quickstart
quickstart-docker
installation-basics

View File

@@ -12,10 +12,10 @@ Before installing JupyterHub, you will need:
- [nodejs/npm](https://www.npmjs.com/). [Install nodejs/npm](https://docs.npmjs.com/getting-started/installing-node),
using your operating system's package manager.
- If you are using **`conda`**, the nodejs and npm dependencies will be installed for
you by conda.
- If you are using **`pip`**, install a recent version of
[nodejs/npm](https://docs.npmjs.com/getting-started/installing-node).
For example, install it on Linux (Debian/Ubuntu) using:
@@ -26,6 +26,10 @@ Before installing JupyterHub, you will need:
The `nodejs-legacy` package installs the `node` executable and is currently
required for npm to work on Debian/Ubuntu.
- A [pluggable authentication module (PAM)](https://en.wikipedia.org/wiki/Pluggable_authentication_module)
to use the [default Authenticator](./getting-started/authenticators-users-basics.md).
PAM is often available by default on most distributions; if this is not the case, it can be installed
using the operating system's package manager.
- TLS certificate and key for HTTPS communication
- Domain name
@@ -74,12 +78,12 @@ Visit `https://localhost:8000` in your browser, and sign in with your unix
credentials.
To **allow multiple users to sign in** to the Hub server, you must start
`jupyterhub` as a _privileged user_, such as root:
```bash
sudo jupyterhub
```
The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges)
describes how to run the server as a _less privileged user_. This requires
additional configuration of the system.

View File

@@ -89,7 +89,6 @@ class DictionaryAuthenticator(Authenticator):
        return data['username']
```
#### Normalize usernames
Since the Authenticator and Spawner both use the same username,
@@ -111,11 +110,10 @@ When using `PAMAuthenticator`, you can set
normalize usernames using PAM (basically round-tripping them: username
to uid to username), which is useful in case you use some external
service that allows multiple usernames mapping to the same user (such
as ActiveDirectory, yes, this really happens). When
`pam_normalize_username` is on, usernames are _not_ normalized to
lowercase.
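As a minimal sketch (assuming the default `PAMAuthenticator` is in use), this behaviour is turned on with:
```python
c.PAMAuthenticator.pam_normalize_username = True
```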
#### Validate usernames
In most cases, there is a very limited set of acceptable usernames.
@@ -132,7 +130,6 @@ To only allow usernames that start with 'w':
c.Authenticator.username_pattern = r'w.*'
```
### How to write a custom authenticator
You can use custom Authenticator subclasses to enable authentication
@@ -145,7 +142,6 @@ and [post_spawn_stop(user, spawner)][], are hooks that can be used to do
auth-related startup (e.g. opening PAM sessions) and cleanup
(e.g. closing PAM sessions).
See a list of custom Authenticators [on the wiki](https://github.com/jupyterhub/jupyterhub/wiki/Authenticators).
If you are interested in writing a custom authenticator, you can read
@@ -186,7 +182,6 @@ Additionally, configurable attributes for your authenticator will
appear in jupyterhub help output and auto-generated configuration files
via `jupyterhub --generate-config`.
### Authentication state
JupyterHub 0.8 adds the ability to persist state related to authentication,
@@ -220,12 +215,10 @@ To store auth_state, two conditions must be met:
export JUPYTERHUB_CRYPT_KEY=$(openssl rand -hex 32)
```
JupyterHub uses [Fernet](https://cryptography.io/en/latest/fernet/) to encrypt auth_state.
To facilitate key-rotation, `JUPYTERHUB_CRYPT_KEY` may be a semicolon-separated list of encryption keys.
If there are multiple keys present, the **first** key is always used to persist any new auth_state.
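For reference, persisting auth_state also has to be switched on in the configuration; a minimal sketch:
```python
c.Authenticator.enable_auth_state = True
```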
#### Using auth_state
Typically, if `auth_state` is persisted it is desirable to affect the Spawner environment in some way.
@@ -235,10 +228,9 @@ to Spawner environment:
```python
class MyAuthenticator(Authenticator):
    async def authenticate(self, handler, data=None):
        username = await identify_user(handler, data)
        upstream_token = await token_for_user(username)
        return {
            'name': username,
            'auth_state': {
@@ -246,10 +238,9 @@ class MyAuthenticator(Authenticator):
            },
        }
    async def pre_spawn_start(self, user, spawner):
        """Pass upstream_token to spawner via environment variable"""
        auth_state = await user.get_auth_state()
        if not auth_state:
            # auth_state not enabled
            return
@@ -268,11 +259,10 @@ PAM session.
Beginning with version 0.8, JupyterHub is an OAuth provider.
[authenticator]: https://github.com/jupyterhub/jupyterhub/blob/HEAD/jupyterhub/auth.py
[pam]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
[oauth]: https://en.wikipedia.org/wiki/OAuth
[github oauth]: https://developer.github.com/v3/oauth/
[oauthenticator]: https://github.com/jupyterhub/oauthenticator
[pre_spawn_start(user, spawner)]: https://jupyterhub.readthedocs.io/en/latest/api/auth.html#jupyterhub.auth.Authenticator.pre_spawn_start
[post_spawn_stop(user, spawner)]: https://jupyterhub.readthedocs.io/en/latest/api/auth.html#jupyterhub.auth.Authenticator.post_spawn_stop

View File

@@ -3,18 +3,17 @@
In this example, we show a configuration file for a fairly standard JupyterHub
deployment with the following assumptions:
- Running JupyterHub on a single cloud server
- Using SSL on the standard HTTPS port 443
- Using GitHub OAuth (using oauthenticator) for login
- Using the default spawner (to configure other spawners, uncomment and edit
`spawner_class` as well as follow the instructions for your desired spawner)
- Users exist locally on the server
- Users' notebooks to be served from `~/assignments` to allow users to browse
for notebooks within other users' home directories
- You want the landing page for each user to be a `Welcome.ipynb` notebook in
their assignments directory.
- All runtime files are put into `/srv/jupyterhub` and log files in `/var/log`.
The `jupyterhub_config.py` file would have these settings:
@@ -52,7 +51,7 @@ c.GitHubOAuthenticator.oauth_callback_url = os.environ['OAUTH_CALLBACK_URL']
c.LocalAuthenticator.create_system_users = True
# specify users and admin
c.Authenticator.allowed_users = {'rgbkrk', 'minrk', 'jhamrick'}
c.Authenticator.admin_users = {'jhamrick', 'rgbkrk'}
# uses the default spawner

View File

@@ -6,12 +6,12 @@ SSL port `443`. This could be useful if the JupyterHub server machine is also
hosting other domains or content on `443`. The goal in this example is to
satisfy the following:
- JupyterHub is running on a server, accessed _only_ via `HUB.DOMAIN.TLD:443`
- On the same machine, `NO_HUB.DOMAIN.TLD` strictly serves different content,
also on port `443`
- `nginx` or `apache` is used as the public access point (which means that
only nginx/apache will bind to `443`)
- After testing, the server in question should be able to score at least an A on the
Qualys SSL Labs [SSL Server Test](https://www.ssllabs.com/ssltest/)
Let's start out with the needed JupyterHub configuration in `jupyterhub_config.py`:
@@ -83,8 +83,12 @@ server {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# websocket headers
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Scheme $scheme;
proxy_buffering off;
}
# Managing requests to verify letsencrypt host # Managing requests to verify letsencrypt host
@@ -139,6 +143,21 @@ Now restart `nginx`, restart the JupyterHub, and enjoy accessing
`https://HUB.DOMAIN.TLD` while serving other content securely on
`https://NO_HUB.DOMAIN.TLD`.
### SELinux permissions for nginx
On distributions with SELinux enabled (e.g. Fedora), one may encounter permission errors
when the nginx service is started.
We need to allow nginx to perform network relay and connect to the jupyterhub port. The
following commands do that:
```bash
semanage port -a -t http_port_t -p tcp 8000
setsebool -P httpd_can_network_relay 1
setsebool -P httpd_can_network_connect 1
```
Replace 8000 with the port on which your JupyterHub server is running.
## Apache
@@ -193,22 +212,24 @@ Listen 443
</VirtualHost>
```
If you need to run JupyterHub under `/jhub/` or another location, use the configurations below:
- JupyterHub running locally at http://127.0.0.1:8000/jhub/ or other location
httpd.conf amendments:
```bash
RewriteRule /jhub/(.*) ws://127.0.0.1:8000/jhub/$1 [NE,P,L]
RewriteRule /jhub/(.*) http://127.0.0.1:8000/jhub/$1 [NE,P,L]
ProxyPass /jhub/ http://127.0.0.1:8000/jhub/
ProxyPassReverse /jhub/ http://127.0.0.1:8000/jhub/
```
jupyterhub_config.py amendments:
```python
# The public facing URL of the whole JupyterHub application.
# This is the address on which the proxy will bind. Sets protocol, ip, base_url
c.JupyterHub.bind_url = 'http://127.0.0.1:8000/jhub/'
```

View File

@@ -0,0 +1,30 @@
==============================
Configuration Reference
==============================

.. important::

    Make sure the version of JupyterHub for this documentation matches your
    installation version, as the output of this command may change between versions.

JupyterHub configuration
------------------------

As explained in the `Configuration Basics <../getting-started/config-basics.html#generate-a-default-config-file>`_
section, the ``jupyterhub_config.py`` can be automatically generated via

.. code-block:: bash

    jupyterhub --generate-config

The following contains the output of that command for reference.

.. jupyterhub-generate-config::

JupyterHub help command output
------------------------------

This section contains the output of the command ``jupyterhub --help-all``.

.. jupyterhub-help-all::

View File

@@ -53,11 +53,10 @@ To do this we add to `/etc/sudoers` (use `visudo` for safe editing of sudoers):
- give `rhea` permission to run `JUPYTER_CMD` on behalf of `JUPYTER_USERS`
  without entering a password
For example:
```bash
# comma-separated list of users that can spawn single-user servers
# this should include all of your Hub users
Runas_Alias JUPYTER_USERS = rhea, zoe, wash
@@ -91,7 +90,7 @@ $ adduser -G jupyterhub newuser
Test that the new user doesn't need to enter a password to run the sudospawner
command.
This should prompt for your password to switch to rhea, but _not_ prompt for
any password for the second switch. It should show some help output about
logging options:
@@ -120,6 +119,11 @@ the shadow password database.
### Shadow group (Linux)
**Note:** On Fedora based distributions there is no clear way to configure
the PAM database to allow sufficient access for authenticating with the target user's password
from JupyterHub. As a workaround we recommend using an
[alternative authentication method](https://github.com/jupyterhub/jupyterhub/wiki/Authenticators).
```bash
$ ls -l /etc/shadow
-rw-r----- 1 root shadow 2197 Jul 21 13:41 shadow
@@ -152,6 +156,7 @@ then you will need to give `node` permission to do so:
```bash
sudo setcap 'cap_net_bind_service=+ep' /usr/bin/node
```
However, you may want to further understand the consequences of this.
You may also be interested in limiting the amount of CPU any process can use You may also be interested in limiting the amount of CPU any process can use
@@ -160,7 +165,6 @@ distributions' packaging system. This can be used to keep any user's process
from using too many CPU cycles. You can configure it according to [these
instructions](http://ubuntuforums.org/showthread.php?t=992706).
### Shadow group (FreeBSD)
**NOTE:** This has not been tested and may not work as expected.
@@ -223,7 +227,7 @@ And try logging in.
If you still get a generic `Permission denied` `PermissionError`, it's possible SELinux is blocking you.
Here's how you can make a module to allow this.
First, put this in a file named `sudo_exec_selinux.te`:
```bash
module sudo_exec_selinux 1.1;

View File

@@ -22,20 +22,18 @@ This section will focus on user environments, including:
- Installing kernelspecs
- Using containers vs. multi-user hosts
## Installing packages
To make packages available to users, you generally will install packages
system-wide or in a shared environment.
This installation location should always be in the same environment that
`jupyterhub-singleuser` itself is installed in, and must be _readable and
executable_ by your users. If you want users to be able to install additional
packages, it must also be _writable_ by your users.
If you are using a standard system Python install, you would use:
```bash
sudo python3 -m pip install numpy
```
@@ -47,7 +45,6 @@ You may also use conda to install packages. If you do, you should make sure
that the conda environment has appropriate permissions for users to be able to
run Python code in the env.
## Configuring Jupyter and IPython
[Jupyter](https://jupyter-notebook.readthedocs.io/en/stable/config_overview.html)
@@ -64,6 +61,7 @@ users. It's generally more efficient to configure user environments "system-wide
and it's a good idea to avoid creating files in users' home directories.
The typical locations for these config files are:
- **system-wide** in `/etc/{jupyter|ipython}`
- **env-wide** (environment wide) in `{sys.prefix}/etc/{jupyter|ipython}`.
@@ -91,7 +89,6 @@ c.MappingKernelManager.cull_idle_timeout = 20 * 60
c.MappingKernelManager.cull_interval = 2 * 60
```
## Installing kernelspecs
You may have multiple Jupyter kernels installed and want to make sure that
@@ -119,7 +116,6 @@ sure are available, I can install their specs system-wide (in /usr/local) with:
/path/to/python2 -m IPython kernel install --prefix=/usr/local
```
## Multi-user hosts vs. Containers
There are two broad categories of user environments that depend on what
@@ -141,8 +137,8 @@ When JupyterHub uses **container-based** Spawners (e.g. KubeSpawner or
DockerSpawner), the 'system-wide' environment is really the container image
which you are using for users.
In both cases, you want to _avoid putting configuration in user home
directories_ because users can change those configuration settings. Also,
home directories typically persist once they are created, so they are
difficult for admins to update later.
@@ -179,3 +175,13 @@ The number of named servers per user can be limited by setting
```python
c.JupyterHub.named_server_limit_per_user = 5
```
## Switching to Jupyter Server
[Jupyter Server](https://jupyter-server.readthedocs.io/en/latest/) is a new Tornado Server backend for Jupyter web applications (e.g. JupyterLab 3.0 uses this package as its default backend).
By default, the single-user notebook server uses the (old) `NotebookApp` from the [notebook](https://github.com/jupyter/notebook) package. You can switch to using Jupyter Server's `ServerApp` backend (this will likely become the default in future releases) by setting the `JUPYTERHUB_SINGLEUSER_APP` environment variable to:
```bash
export JUPYTERHUB_SINGLEUSER_APP='jupyter_server.serverapp.ServerApp'
```
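If you would rather configure this from the Hub side than in each user's shell environment, a minimal sketch (assuming the standard `Spawner.environment` setting; adjust for your spawner) could go in `jupyterhub_config.py`:

```python
# jupyterhub_config.py -- sketch: have the Hub set the variable for every spawned server
c.Spawner.environment = {
    "JUPYTERHUB_SINGLEUSER_APP": "jupyter_server.serverapp.ServerApp",
}
```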

View File

@@ -47,7 +47,7 @@ additional configuration required for MySQL that is not needed for PostgreSQL.
- You should use the `pymysql` sqlalchemy provider (the other one, MySQLdb,
  isn't available for py3).
- You also need to set `pool_recycle` to some value (typically 60 - 300)
  which depends on your MySQL setup. This is necessary since MySQL kills
  connections serverside if they've been idle for a while, and the connection
  from the hub will be idle for longer than most connections. This behavior
  will lead to frustrating 'the connection has gone away' errors from
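Putting this together, a minimal `jupyterhub_config.py` sketch for MySQL (the hostname, credentials, and database name are placeholders; tune `pool_recycle` to your server's idle timeout):

```python
# sketch: connect the Hub to a MySQL database via the pymysql driver
c.JupyterHub.db_url = "mysql+pymysql://jupyterhub:PASSWORD@db.example.org:3306/jupyterhub"
# recycle pooled connections before MySQL closes them server-side
c.JupyterHub.db_kwargs = {"pool_recycle": 300}
```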

View File

@@ -16,6 +16,7 @@ what happens under-the-hood when you deploy and configure your JupyterHub.
proxy
separate-proxy
rest
monitoring
database
templates
../events/index
@@ -24,3 +25,4 @@ what happens under-the-hood when you deploy and configure your JupyterHub.
config-ghoauth
config-proxy
config-sudo
config-reference

View File

@@ -0,0 +1,20 @@
Monitoring
==========

This section covers details on monitoring the state of your JupyterHub installation.

JupyterHub exposes the ``/metrics`` endpoint, which returns text describing its current
operational state in a format that `Prometheus <https://prometheus.io/docs/introduction/overview/>`_ understands.
Prometheus is a separate open source tool that can be configured to repeatedly poll
JupyterHub's ``/metrics`` endpoint to parse and save its current state.
By doing so, Prometheus can describe JupyterHub's evolving state over time.
This evolving state can then be queried through Prometheus, which exposes its underlying
storage to those allowed to access it, and visualized with dashboards by a
tool like `Grafana <https://grafana.com/docs/grafana/latest/getting-started/what-is-grafana/>`_.
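As a quick check that the endpoint is reachable, you can poll it yourself; a minimal
sketch with the ``requests`` library (assuming the default URLs, with the endpoint
served under the Hub's ``/hub/`` prefix):

.. code-block:: python

    import requests

    # the metrics page is plain text in the Prometheus exposition format
    resp = requests.get("http://127.0.0.1:8000/hub/metrics")
    resp.raise_for_status()
    print("\n".join(resp.text.splitlines()[:5]))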
.. toctree::
   :maxdepth: 2

   metrics

View File

@@ -54,7 +54,7 @@ class MyProxy(Proxy):
"""Stop the proxy""" """Stop the proxy"""
``` ```
These methods **may** be coroutines. These methods **may** be coroutines.
`c.Proxy.should_start` is a configurable flag that determines whether the `c.Proxy.should_start` is a configurable flag that determines whether the
Hub should call these methods when the Hub itself starts and stops. Hub should call these methods when the Hub itself starts and stops.
@@ -103,7 +103,7 @@ route to be proxied, such as `/user/name/`. A routespec will:
When adding a route, JupyterHub may pass a JSON-serializable dict as a `data`
argument that should be attached to the proxy route. When that route is
retrieved, the `data` argument should be returned as well. If your proxy
implementation doesn't support storing data attached to routes, then your
Python wrapper may have to handle storing the `data` piece itself, e.g. in a
simple file or database.
@@ -136,7 +136,7 @@ async def delete_route(self, routespec):
### Retrieving routes
For retrieval, you only _need_ to implement a single method that retrieves all
routes. The return value for this function should be a dictionary, keyed by
`routespec`, of dicts whose keys are the same three arguments passed to
`add_route` (`routespec`, `target`, `data`)
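For instance, a sketch of `get_all_routes` for a hypothetical proxy whose API returns its routing table as JSON (the `_api_request` helper is illustrative, not part of JupyterHub):

```python
async def get_all_routes(self):
    """Sketch: reshape the proxy's own route listing into JupyterHub's expected format."""
    proxy_routes = await self._api_request("GET", "/routes")  # hypothetical helper
    all_routes = {}
    for routespec, info in proxy_routes.items():
        all_routes[routespec] = {
            "routespec": routespec,
            "target": info["target"],
            "data": info.get("data", {}),
        }
    return all_routes
```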

View File

@@ -57,6 +57,9 @@ generating an API token is available from the JupyterHub user interface:
## Add API tokens to the config file
**This is deprecated. We are in no rush to remove this feature,
but please consider if service tokens are right for you.**
You may also add a dictionary of API tokens and usernames to the hub's
configuration file, `jupyterhub_config.py` (note that
the **key** is the 'secret-token' while the **value** is the 'username'):
@@ -67,6 +70,41 @@ c.JupyterHub.api_tokens = {
}
```
### Updating to admin services
The `api_tokens` configuration has been softly deprecated since the introduction of services.
We have no plans to remove it,
but users are encouraged to use service configuration instead.
If you have been using `api_tokens` to create an admin user
and a token for that user to perform some automations,
the services mechanism may be a better fit.
If you have the following configuration:
```python
c.JupyterHub.admin_users = {"service-admin",}
c.JupyterHub.api_tokens = {
"secret-token": "service-admin",
}
```
This can be updated to create an admin service, with the following configuration:
```python
c.JupyterHub.services = [
{
"name": "service-token",
"admin": True,
"api_token": "secret-token",
},
]
```
The token will have the same admin permissions,
but there will no longer be a user account created to house it.
The main noticeable difference is that there will be no notebook server associated with the account
and the service will not show up in the various user list pages and APIs.
## Make an API request
To authenticate your requests, pass the API token in the request's
@@ -131,7 +169,7 @@ curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/us
```
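The same kind of request can be made from Python; a minimal sketch using the `requests` library (the Hub API URL, token, and endpoint are placeholders for your own values):

```python
import requests

api_url = "http://127.0.0.1:8081/hub/api"  # placeholder: your Hub's API URL
token = "<token>"                          # an API token you generated earlier

# list users, authenticating with the token in the Authorization header
r = requests.get(f"{api_url}/users", headers={"Authorization": f"token {token}"})
r.raise_for_status()
print(r.json())
```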
With the named-server functionality, it's now possible to launch more than one
specifically named server for a given user. This could be used, for instance,
to launch each server based on a different image.
First you must enable named-servers by including the following setting in the `jupyterhub_config.py` file.
@@ -149,6 +187,7 @@ hub:
```
With that setting in place, a new named-server is activated like this:
```bash
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverA>"
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverB>"
@@ -163,7 +202,6 @@ will need to be able to handle the case of multiple servers per user and ensure
uniqueness of names, particularly if servers are spawned via docker containers
or kubernetes pods.
## Learn more about the API
You can see the full [JupyterHub REST API][] for details. This REST API Spec can
@@ -171,7 +209,7 @@ be viewed in a more [interactive style on swagger's petstore][].
Both resources contain the same information and differ only in their display.
Note: The Swagger specification is being renamed the [OpenAPI Initiative][].
[interactive style on swagger's petstore]: https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/HEAD/docs/rest-api.yml#!/default
[openapi initiative]: https://www.openapis.org/
[jupyterhub rest api]: ./rest-api
[jupyter notebook rest api]: https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyter/notebook/HEAD/notebook/services/api/api.yaml

View File

@@ -1,28 +1,26 @@
# Running proxy separately from the hub
## Background
The thing which users directly connect to is the proxy, by default
`configurable-http-proxy`. The proxy either redirects users to the
hub (for login and managing servers), or to their own single-user
servers. Thus, as long as the proxy stays running, access to existing
servers continues, even if the hub itself restarts or goes down.
When you first configure the hub, you may not even realize this
because the proxy is automatically managed by the hub. This is great
for getting started and even most use, but every time you restart the
hub, all user connections also get restarted. But it's also simple to
run the proxy as a service separate from the hub, so that you are free
to reconfigure the hub while only interrupting users who are currently
actively starting the hub.
The default JupyterHub proxy is
[configurable-http-proxy](https://github.com/jupyterhub/configurable-http-proxy),
and that page has some docs. If you are using a different proxy, such
as Traefik, these instructions are probably not relevant to you.
## Configuration options
`c.JupyterHub.cleanup_servers = False` should be set, which tells the
@@ -37,24 +35,20 @@ it yourself).
token for authenticating communication with the proxy.
`c.ConfigurableHTTPProxy.api_url = 'http://localhost:8001'` should be
set to the URL which the hub uses to connect _to the proxy's API_.
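Putting the hub-side settings together, a minimal `jupyterhub_config.py` sketch for running the proxy separately (the token value is a placeholder; generate your own long random secret):

```python
# sketch: hub-side configuration when configurable-http-proxy runs as its own service
c.JupyterHub.cleanup_servers = False          # leave the proxy and servers running when the Hub exits
c.ConfigurableHTTPProxy.should_start = False  # the Hub must not try to start the proxy itself
c.ConfigurableHTTPProxy.auth_token = "GENERATE-A-LONG-RANDOM-TOKEN"  # placeholder
c.ConfigurableHTTPProxy.api_url = "http://localhost:8001"
```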
## Proxy configuration
You need to configure a service to start the proxy. An example
command line for this is `configurable-http-proxy --ip=127.0.0.1 --port=8000 --api-ip=127.0.0.1 --api-port=8001 --default-target=http://localhost:8081 --error-target=http://localhost:8081/hub/error`. (Details for how to
do this are out of scope for this tutorial - for example it might be a
systemd service or within another docker container). The proxy has no
configuration files; all configuration is via the command line and
environment variables.
`--api-ip` and `--api-port` (which tell the proxy where to listen) should match the hub's `ConfigurableHTTPProxy.api_url`.
`--ip`, `--port`, and other options configure the _user_ connections to the proxy.
`--default-target` and `--error-target` should point to the hub, and are used when users navigate to the proxy originally.
@@ -63,18 +57,16 @@ match the token given to `c.ConfigurableHTTPProxy.auth_token`.
You should check the [configurable-http-proxy
options](https://github.com/jupyterhub/configurable-http-proxy) to see
what other options are needed, for example SSL options. Note that
these are configured in the hub if the hub is starting the proxy - you
need to move those options here.
## Docker image
You can use [jupyterhub configurable-http-proxy docker
image](https://hub.docker.com/r/jupyterhub/configurable-http-proxy/)
to run the proxy.
## See also
- [jupyterhub configurable-http-proxy](https://github.com/jupyterhub/configurable-http-proxy)

View File

@@ -50,12 +50,9 @@ A Service may have the following properties:
If a service is also to be managed by the Hub, it has a few extra options:
- `command: (str/Popen list)` - Command for JupyterHub to spawn the service.
  - Only use this if the service should be a subprocess.
  - If command is not specified, the Service is assumed to be managed externally.
  - If a command is specified for launching the Service, the Service will
    be started and managed by the Hub.
- `environment: dict` - additional environment variables for the Service.
- `user: str` - the name of a system user to manage the Service. If
  unspecified, run as the same user as the Hub.
@@ -91,9 +88,9 @@ This example would be configured as follows in `jupyterhub_config.py`:
```python
c.JupyterHub.services = [
    {
        'name': 'idle-culler',
        'admin': True,
        'command': [sys.executable, '-m', 'jupyterhub_idle_culler', '--timeout=3600']
    }
]
```
@@ -103,9 +100,9 @@ parameters, which describe the environment needed to start the Service process:
- `environment: dict` - additional environment variables for the Service.
- `user: str` - name of the user to run the server if different from the Hub.
  Requires Hub to be root.
- `cwd: path` directory in which to run the Service, if different from the
  Hub directory.
The Hub will pass the following environment variables to launch the Service:
@@ -123,15 +120,14 @@ For the previous 'cull idle' Service example, these environment variables
would be passed to the Service when the Hub starts the 'cull idle' Service:
```bash
JUPYTERHUB_SERVICE_NAME: 'idle-culler'
JUPYTERHUB_API_TOKEN: API token assigned to the service
JUPYTERHUB_API_URL: http://127.0.0.1:8080/hub/api
JUPYTERHUB_BASE_URL: https://mydomain[:port]
JUPYTERHUB_SERVICE_PREFIX: /services/idle-culler/
```
See the GitHub repo for additional information about the [jupyterhub_idle_culler][].
## Externally-Managed Services
@@ -151,6 +147,8 @@ c.JupyterHub.services = [
    {
        'name': 'my-web-service',
        'url': 'https://10.0.1.1:1984',
        # any secret >8 characters, you'll use api_token to
        # authenticate api requests to the hub from your service
        'api_token': 'super-secret',
    }
]
@@ -198,16 +196,16 @@ can be used by services. You may go beyond this reference implementation and
create custom hub-authenticating clients and services. We describe the process
below.
The reference, or base, implementation is the [`HubAuth`][hubauth] class,
which implements the requests to the Hub.
To use HubAuth, you must set the `.api_token`, either programmatically when constructing the class,
or via the `JUPYTERHUB_API_TOKEN` environment variable.
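For example, a minimal sketch of constructing `HubAuth` directly (the cache age is an example value; in a Hub-managed service the environment variable is already set for you):

```python
import os
from jupyterhub.services.auth import HubAuth

auth = HubAuth(
    api_token=os.environ["JUPYTERHUB_API_TOKEN"],  # set by the Hub for managed services
    cache_max_age=60,                              # seconds to cache auth responses
)
```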
Most of the logic for authentication implementation is found in the
[`HubAuth.user_for_cookie`][hubauth.user_for_cookie]
and in the
[`HubAuth.user_for_token`][hubauth.user_for_token]
methods, which make a request of the Hub, and return:
- None, if no user could be identified, or
@@ -232,7 +230,7 @@ configurable by the `cookie_cache_max_age` setting (default: five minutes).
For example, you have a Flask service that returns information about a user.
JupyterHub's HubAuth class can be used to authenticate requests to the Flask
service. See the `service-whoami-flask` example in the
[JupyterHub GitHub repo](https://github.com/jupyterhub/jupyterhub/tree/HEAD/examples/service-whoami-flask)
for more details.
```python
@@ -284,11 +282,10 @@ def whoami(user):
)
```
### Authenticating tornado services with JupyterHub
Since most Jupyter services are written with tornado,
we include a mixin class, [`HubAuthenticated`][hubauthenticated],
for quickly authenticating your own tornado services with JupyterHub.
Tornado's `@web.authenticated` method calls a Handler's `.get_current_user`
@@ -309,66 +306,65 @@ class MyHandler(HubAuthenticated, web.RequestHandler):
    ...
```
The HubAuth will automatically load the desired configuration from the Service
environment variables.
If you want to limit user access, you can specify allowed users through either the
`.hub_users` attribute or `.hub_groups`. These are sets that check against the
username and user group list, respectively. If a user matches neither the user
list nor the group list, they will not be allowed access. If both are left
undefined, then any user will be allowed.
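A small sketch of restricting access with `hub_users` on a tornado handler (the usernames are placeholders):

```python
from jupyterhub.services.auth import HubAuthenticated
from tornado import web


class WhoAmIHandler(HubAuthenticated, web.RequestHandler):
    # only these usernames are allowed through; everyone else gets a 403
    hub_users = {"inara", "mal"}

    @web.authenticated
    def get(self):
        # get_current_user() returns the user model dict provided by the Hub
        self.write(self.get_current_user())
```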
### Implementing your own Authentication with JupyterHub
If you don't want to use the reference implementation
(e.g. you find the implementation a poor fit for your Flask app),
you can implement authentication via the Hub yourself.
We recommend looking at the [`HubAuth`][hubauth] class implementation for reference,
and taking note of the following process:
1. retrieve the cookie `jupyterhub-services` from the request.
2. Make an API request `GET /hub/api/authorizations/cookie/jupyterhub-services/cookie-value`,
   where cookie-value is the url-encoded value of the `jupyterhub-services` cookie.
   This request must be authenticated with a Hub API token in the `Authorization` header,
   for example using the `api_token` from your [external service's configuration](#externally-managed-services).
   For example, with [requests][]:
   ```python
   r = requests.get(
       '/'.join(["http://127.0.0.1:8081/hub/api",
                 "authorizations/cookie/jupyterhub-services",
                 quote(encrypted_cookie, safe=''),
       ]),
       headers = {
           'Authorization' : 'token %s' % api_token,
       },
   )
   r.raise_for_status()
   user = r.json()
   ```
3. On success, the reply will be a JSON model describing the user:
   ```json
   {
       "name": "inara",
       "groups": ["serenity", "guild"]
   }
   ```
An example of using an Externally-Managed Service and authentication is
in the [nbviewer README][nbviewer example] section on securing the notebook viewer,
and an example of its configuration is found [here](https://github.com/jupyter/nbviewer/blob/ed942b10a52b6259099e2dd687930871dc8aac22/nbviewer/providers/base.py#L95).
nbviewer can also be run as a Hub-Managed Service as described in the [nbviewer README][nbviewer example]
section on securing the notebook viewer.
[requests]: http://docs.python-requests.org/en/master/
[services_auth]: ../api/services.auth.html
[hubauth]: ../api/services.auth.html#jupyterhub.services.auth.HubAuth
[hubauth.user_for_cookie]: ../api/services.auth.html#jupyterhub.services.auth.HubAuth.user_for_cookie
[hubauth.user_for_token]: ../api/services.auth.html#jupyterhub.services.auth.HubAuth.user_for_token
[hubauthenticated]: ../api/services.auth.html#jupyterhub.services.auth.HubAuthenticated
[nbviewer example]: https://github.com/jupyter/nbviewer#securing-the-notebook-viewer
[jupyterhub_idle_culler]: https://github.com/jupyterhub/jupyterhub-idle-culler

View File

@@ -8,18 +8,17 @@ and a custom Spawner needs to be able to take three actions:
- poll whether the process is still running
- stop the process
## Examples
Custom Spawners for JupyterHub can be found on the [JupyterHub wiki](https://github.com/jupyterhub/jupyterhub/wiki/Spawners).
Some examples include:
- [DockerSpawner](https://github.com/jupyterhub/dockerspawner) for spawning user servers in Docker containers
  - `dockerspawner.DockerSpawner` for spawning identical Docker containers for
    each user
  - `dockerspawner.SystemUserSpawner` for spawning Docker containers with an
    environment and home directory for each user
  - both `DockerSpawner` and `SystemUserSpawner` also work with Docker Swarm for
    launching containers on remote machines
- [SudoSpawner](https://github.com/jupyterhub/sudospawner) enables JupyterHub to
  run without being root, by spawning an intermediate process via `sudo`
@@ -27,9 +26,8 @@ Some examples include:
  servers using batch systems
- [YarnSpawner](https://github.com/jupyterhub/yarnspawner) for spawning notebook
  servers in YARN containers on a Hadoop cluster
- [SSHSpawner](https://github.com/NERSC/sshspawner) to spawn notebooks
  on a remote server using SSH
## Spawner control methods
@@ -41,7 +39,7 @@ an object encapsulating the user's name, authentication, and server info.
The return value of `Spawner.start` should be the (ip, port) of the running server.
**NOTE:** When writing coroutines, _never_ `yield` in between a database change and a commit.
Most `Spawner.start` functions will look similar to this example:
@@ -80,7 +78,6 @@ to check if the local process is still running. On Windows, it uses `psutil.pid_
`Spawner.stop` should stop the process. It must be a tornado coroutine, which should return when the process has finished exiting.
## Spawner state
JupyterHub should be able to stop and restart without tearing down
@@ -112,7 +109,6 @@ def clear_state(self):
    self.pid = 0
```
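The `clear_state` hook above usually has `get_state` and `load_state` counterparts; a minimal sketch for the PID-based example (the method bodies are illustrative):

```python
def get_state(self):
    """Save the PID so the Hub can reconnect to the server after a restart."""
    state = super().get_state()
    if self.pid:
        state['pid'] = self.pid
    return state

def load_state(self, state):
    """Restore the PID previously saved by get_state."""
    super().load_state(state)
    if 'pid' in state:
        self.pid = state['pid']
```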
## Spawner options form
(new in 0.4)
@@ -129,7 +125,7 @@ If the `Spawner.options_form` is defined, when a user tries to start their serve
If `Spawner.options_form` is undefined, the user's server is spawned directly, and no spawn page is rendered.
See [this example](https://github.com/jupyterhub/jupyterhub/blob/HEAD/examples/spawn-form/jupyterhub_config.py) for a form that allows custom CLI args for the local spawner.
### `Spawner.options_from_form`
@@ -170,8 +166,7 @@ which would return:
When `Spawner.start` is called, this dictionary is accessible as `self.user_options`.
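For completeness, a sketch of an `options_from_form` implementation that would produce such a dictionary (the form field names are illustrative):

```python
def options_from_form(self, formdata):
    """Sketch: form values arrive as lists of strings; pick out and coerce what you need."""
    options = {}
    options['image'] = formdata.get('image', [''])[0].strip()
    options['cpu'] = int(formdata.get('cpu', ['1'])[0])
    return options
```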
[spawner]: https://github.com/jupyterhub/jupyterhub/blob/HEAD/jupyterhub/spawner.py
## Writing a custom spawner
@@ -212,7 +207,6 @@ Additionally, configurable attributes for your spawner will
appear in jupyterhub help output and auto-generated configuration files
via `jupyterhub --generate-config`.
## Spawners, resource limits, and guarantees (Optional)
Some spawners of the single-user notebook servers allow setting limits or
@@ -224,10 +218,9 @@ support for them**. For example, LocalProcessSpawner, the default
spawner, does not support limits and guarantees. One of the spawners
that supports limits and guarantees is the `systemdspawner`.
### Memory Limits & Guarantees
`c.Spawner.mem_limit`: A **limit** specifies the _maximum amount of memory_
that may be allocated, though there is no promise that the maximum amount will
be available. In supported spawners, you can set `c.Spawner.mem_limit` to
limit the total amount of memory that a single-user notebook server can
@@ -235,8 +228,8 @@ allocate. Attempting to use more memory than this limit will cause errors. The
single-user notebook server can discover its own memory limit by looking at
the environment variable `MEM_LIMIT`, which is specified in absolute bytes.
`c.Spawner.mem_guarantee`: Sometimes, a **guarantee** of a _minimum amount of
memory_ is desirable. In this case, you can set `c.Spawner.mem_guarantee`
to provide a guarantee that at minimum this much memory will always be
available for the single-user notebook server to use. The environment variable
`MEM_GUARANTEE` will also be set in the single-user notebook server.
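In a supported spawner, both settings could be applied in `jupyterhub_config.py`; a minimal sketch (the values are examples):

```python
# sketch: only has an effect with spawners that implement limits/guarantees
c.Spawner.mem_limit = '2G'        # hard cap; allocations beyond this fail
c.Spawner.mem_guarantee = '512M'  # minimum amount reserved for each server
```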
@@ -271,7 +264,7 @@ utilize these certs, there are two methods of interest on the base `Spawner`
class: `.create_certs` and `.move_certs`.
The first method, `.create_certs` will sign a key-cert pair using an internally
trusted authority for notebooks. During this process, `.create_certs` can
apply `ip` and `dns` name information to the cert via an `alt_names` `kwarg`.
This is used for certificate authentication (verification). Without proper
verification, the `Notebook` will be unable to communicate with the `Hub` and

View File

@@ -1,8 +1,8 @@
# Working with templates and UI
The pages of the JupyterHub application are generated from
[Jinja](http://jinja.pocoo.org/) templates. These allow the header, for
example, to be defined once and incorporated into all pages. By providing
your own templates, you can have complete control over JupyterHub's
appearance.
@@ -10,7 +10,7 @@ appearance.
JupyterHub will look for custom templates in all of the paths in the
`JupyterHub.template_paths` configuration option, falling back on the
[default templates](https://github.com/jupyterhub/jupyterhub/tree/HEAD/share/jupyterhub/templates)
if no custom template with that name is found. This fallback
behavior is new in version 0.9; previous versions searched only those paths
explicitly included in `template_paths`. You may override as many
@@ -20,8 +20,8 @@ or as few templates as you desire.
Jinja provides a mechanism to [extend templates](http://jinja.pocoo.org/docs/2.10/templates/#template-inheritance).
A base template can define a `block`, and child templates can replace or
supplement the material in the block. The
[JupyterHub templates](https://github.com/jupyterhub/jupyterhub/tree/HEAD/share/jupyterhub/templates)
make extensive use of blocks, which allows you to customize parts of the
interface easily.
@@ -32,8 +32,8 @@ In general, a child template can extend a base template, `page.html`, by beginni
```
This works, unless you are trying to extend the default template for the same
file name. Starting in version 0.9, you may refer to the base file with a
`templates/` prefix. Thus, if you are writing a custom `page.html`, start the
file with this block:
```html
@@ -41,7 +41,7 @@ file with this block:
```
By defining `block`s with the same name as in the base template, child templates
can replace those sections with custom content. The content from the base
template can be included with the `{{ super() }}` directive.
### Example
@@ -52,10 +52,7 @@ text about the server starting up, place this content in a file named
`JupyterHub.template_paths` configuration option.
```html
{% extends "templates/spawn_pending.html" %}
{% block message %}
{{ super() }}
<p>Patience is a virtue.</p>
{% endblock %}
```
@@ -69,9 +66,8 @@ To add announcements to be displayed on a page, you have two options:
### Announcement Configuration Variables
If you set the configuration variable `JupyterHub.template_vars =
{'announcement': 'some_text'}`, the given `some_text` will be placed on
the top of all pages. The more specific variables
`announcement_login`, `announcement_spawn`, `announcement_home`, and
`announcement_logout` only show on their
respective pages (overriding the global `announcement` variable).
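For instance, a minimal sketch combining a custom template directory with a global announcement (the path and message are placeholders):

```python
# jupyterhub_config.py -- sketch
c.JupyterHub.template_paths = ['/etc/jupyterhub/templates']  # placeholder path
c.JupyterHub.template_vars = {'announcement': 'Maintenance window Saturday 02:00 UTC'}
```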
@@ -79,13 +75,12 @@ Note that changing these variables require a restart, unlike direct
template extension.
You can get the same effect by extending templates, which allows you
to update the messages without restarting. Set
`c.JupyterHub.template_paths` as mentioned above, and then create a
template (for example, `login.html`) with:
```html
{% extends "templates/login.html" %}
{% set announcement = 'some message' %}
```
Extending `page.html` puts the message on all pages, but note that

View File

@@ -11,8 +11,6 @@ All authenticated handlers redirect to `/hub/login` to login users
prior to being redirected back to the originating page.
The returned request should preserve all query parameters.
## `/`
The top-level request is always a simple redirect to `/hub/`,
@@ -61,7 +59,7 @@ for starting and stopping the user's server.
If named servers are enabled, there will be some additional
tools for management of named servers.
_Version added: 1.0_ named server UI is new in 1.0.
## `/hub/login`
@@ -111,7 +109,7 @@ not the Hub.
The username is the first part and, if using named servers,
the server name is the second part.
If the user's server is _not_ running, this will be redirected to `/hub/user/:username/...`
## `/hub/user/:username[/:servername]`
@@ -123,8 +121,8 @@ Handling this URL is the most complicated condition in JupyterHub,
because there can be many states:
1. server is not active
   a. user matches
   b. user doesn't match
2. server is ready
3. server is pending, but not ready
@@ -146,7 +144,7 @@ without additional user action (i.e. clicking the link on the page)
![Visiting a URL for a server that's not running](../images/not-running.png)
_Version changed: 1.0_
Prior to 1.0, this URL itself was responsible for spawning servers,
and served the progress page if it was pending,
@@ -165,7 +163,7 @@ indicating how to spawn the server.
This is meant to help applications such as JupyterLab
that are connected to a server that has stopped.
_Version changed: 1.0_
JupyterHub 0.9 failed these API requests with status 404,
but 1.0 uses 503.
@@ -207,12 +205,12 @@ and a POST request will trigger the actual spawn and redirect.
![The spawn form](../images/spawn-form.png)
_Version added: 1.0_
1.0 adds the ability to specify username and servername.
Prior to 1.0, only `/hub/spawn` was recognized for the default server.
_Version changed: 1.0_
Prior to 1.0, this page redirected back to `/hub/user/:username`,
which was responsible for triggering spawn and rendering progress, etc.
@@ -221,7 +219,7 @@ which was responsible for triggering spawn and rendering progress, etc.
![The spawn pending page](../images/spawn-pending.png)
_Version added: 1.0_ this URL is new in JupyterHub 1.0.
This page renders the progress view for the given spawn request.
Once the server is ready,

View File

@@ -12,17 +12,17 @@ works.
## Semi-trusted and untrusted users
JupyterHub is designed to be a _simple multi-user server for modestly sized
groups_ of **semi-trusted** users. While the design reflects serving semi-trusted
users, JupyterHub is not necessarily unsuitable for serving **untrusted** users.
Using JupyterHub with **untrusted** users does mean more work by the
administrator. Much care is required to secure a Hub, with extra caution on
protecting users from each other as the Hub is serving untrusted users.
One aspect of JupyterHub's _design simplicity_ for **semi-trusted** users is that
the Hub and single-user servers are placed in a _single domain_, behind a
[_proxy_][configurable-http-proxy]. If the Hub is serving untrusted
users, many of the web's cross-site protections are not applied between
single-user servers and the Hub, or between single-user servers and each
other, since browsers see the whole thing (proxy, Hub, and single user
@@ -40,7 +40,7 @@ server.
To protect all users from each other, JupyterHub administrators must To protect all users from each other, JupyterHub administrators must
ensure that: ensure that:
* A user **does not have permission** to modify their single-user notebook server, - A user **does not have permission** to modify their single-user notebook server,
including: including:
- A user **may not** install new packages in the Python environment that runs - A user **may not** install new packages in the Python environment that runs
their single-user server. their single-user server.
@@ -49,11 +49,11 @@ ensure that:
directory that precedes the directory containing `jupyterhub-singleuser`. directory that precedes the directory containing `jupyterhub-singleuser`.
- A user may not modify environment variables (e.g. PATH, PYTHONPATH) for - A user may not modify environment variables (e.g. PATH, PYTHONPATH) for
their single-user server. their single-user server.
* A user **may not** modify the configuration of the notebook server - A user **may not** modify the configuration of the notebook server
(the `~/.jupyter` or `JUPYTER_CONFIG_DIR` directory). (the `~/.jupyter` or `JUPYTER_CONFIG_DIR` directory).
If any additional services are run on the same domain as the Hub, the services If any additional services are run on the same domain as the Hub, the services
**must never** display user-authored HTML that is neither *sanitized* nor *sandboxed* **must never** display user-authored HTML that is neither _sanitized_ nor _sandboxed_
(e.g. IFramed) to any user that lacks authentication as the author of a file. (e.g. IFramed) to any user that lacks authentication as the author of a file.
## Mitigate security issues ## Mitigate security issues
@@ -85,7 +85,7 @@ admin must enforce.
### Prevent spawners from evaluating shell configuration files ### Prevent spawners from evaluating shell configuration files
For most Spawners, `PATH` is not something users can influence, but care should For most Spawners, `PATH` is not something users can influence, but care should
be taken to ensure that the Spawner does *not* evaluate shell configuration be taken to ensure that the Spawner does _not_ evaluate shell configuration
files prior to launching the server. files prior to launching the server.
### Isolate packages using virtualenv ### Isolate packages using virtualenv
@@ -125,7 +125,6 @@ versions up to date.
A handy website for testing your deployment is A handy website for testing your deployment is
[Qualsys' SSL analyzer tool](https://www.ssllabs.com/ssltest/analyze.html). [Qualsys' SSL analyzer tool](https://www.ssllabs.com/ssltest/analyze.html).
[configurable-http-proxy]: https://github.com/jupyterhub/configurable-http-proxy [configurable-http-proxy]: https://github.com/jupyterhub/configurable-http-proxy
## Vulnerability reporting ## Vulnerability reporting

View File

@@ -4,17 +4,20 @@ When troubleshooting, you may see unexpected behaviors or receive an error
message. This section provides links for identifying the cause of the message. This section provides links for identifying the cause of the
problem and how to resolve it. problem and how to resolve it.
[*Behavior*](#behavior) [_Behavior_](#behavior)
- JupyterHub proxy fails to start - JupyterHub proxy fails to start
- sudospawner fails to run - sudospawner fails to run
- What is the default behavior when none of the lists (admin, whitelist, - What is the default behavior when none of the lists (admin, allowed,
group whitelist) are set? allowed groups) are set?
- JupyterHub Docker container not accessible at localhost - JupyterHub Docker container not accessible at localhost
[*Errors*](#errors) [_Errors_](#errors)
- 500 error after spawning my single-user server - 500 error after spawning my single-user server
[*How do I...?*](#how-do-i) [_How do I...?_](#how-do-i)
- Use a chained SSL certificate - Use a chained SSL certificate
- Install JupyterHub without a network connection - Install JupyterHub without a network connection
- I want access to the whole filesystem, but still default users to their home directory - I want access to the whole filesystem, but still default users to their home directory
@@ -25,7 +28,7 @@ problem and how to resolve it.
- Toree integration with HDFS rack awareness script - Toree integration with HDFS rack awareness script
- Where do I find Docker images and Dockerfiles related to JupyterHub? - Where do I find Docker images and Dockerfiles related to JupyterHub?
[*Troubleshooting commands*](#troubleshooting-commands) [_Troubleshooting commands_](#troubleshooting-commands)
## Behavior ## Behavior
@@ -34,8 +37,8 @@ problem and how to resolve it.
If you have tried to start the JupyterHub proxy and it fails to start: If you have tried to start the JupyterHub proxy and it fails to start:
- check if the JupyterHub IP configuration setting is - check if the JupyterHub IP configuration setting is
``c.JupyterHub.ip = '*'``; if it is, try ``c.JupyterHub.ip = ''`` `c.JupyterHub.ip = '*'`; if it is, try `c.JupyterHub.ip = ''`
- Try starting with ``jupyterhub --ip=0.0.0.0`` - Try starting with `jupyterhub --ip=0.0.0.0`
**Note**: If this occurs on Ubuntu/Debian, check that you are using a **Note**: If this occurs on Ubuntu/Debian, check that you are using a
recent version of node. Some versions of Ubuntu/Debian come with a version recent version of node. Some versions of Ubuntu/Debian come with a version
@@ -55,14 +58,14 @@ or add:
to the config file, `jupyterhub_config.py`. to the config file, `jupyterhub_config.py`.
### What is the default behavior when none of the lists (admin, whitelist, group whitelist) are set? ### What is the default behavior when none of the lists (admin, allowed, allowed groups) are set?
When nothing is given for these lists, there will be no admins, and all users When nothing is given for these lists, there will be no admins, and all users
who can authenticate on the system (i.e. all the unix users on the server with who can authenticate on the system (i.e. all the unix users on the server with
a password) will be allowed to start a server. The whitelist lets you limit a password) will be allowed to start a server. The allowed username set lets you limit
this to a particular set of users, and the admin_users lets you specify who this to a particular set of users, and admin_users lets you specify who
among them may use the admin interface (not necessary, unless you need to do among them may use the admin interface (not necessary, unless you need to do
things like inspect other users' servers, or modify the userlist at runtime). things like inspect other users' servers, or modify the user list at runtime).
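For illustration, a minimal `jupyterhub_config.py` sketch of those two settings (the usernames are placeholders):

```python
# jupyterhub_config.py -- minimal sketch; usernames are placeholders
c.Authenticator.allowed_users = {"alice", "bob"}  # only these users may log in
c.Authenticator.admin_users = {"alice"}           # of those, only alice gets the admin interface
```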
### JupyterHub Docker container not accessible at localhost ### JupyterHub Docker container not accessible at localhost
@@ -75,6 +78,50 @@ tell Jupyterhub to start at `0.0.0.0` which is visible to everyone. Try this
command: command:
`docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub --ip 0.0.0.0 --port 8000` `docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub --ip 0.0.0.0 --port 8000`
### How can I kill ports from JupyterHub managed services that have been orphaned?
I started JupyterHub + nbgrader on the same host without containers. When I try to restart JupyterHub + nbgrader with this configuration, errors appear saying the service accounts cannot start because the ports are already in use.
How can I kill the processes that are using these ports?
Run the following command:
sudo kill -9 $(sudo lsof -t -i:<service_port>)
Where `<service_port>` is the port used by the nbgrader course service. This configuration is specified in `jupyterhub_config.py`.
### Why am I getting a Spawn failed error message?
After successfully logging in to JupyterHub with a compatible authenticator, I get a 'Spawn failed' error message in the browser. The JupyterHub logs have `jupyterhub KeyError: "getpwnam(): name not found: <my_user_name>"`.
This issue occurs when the authenticator requires a local system user to exist. In these cases, you need to use a spawner
that does not require an existing system user account, such as `DockerSpawner` or `KubeSpawner`.
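For example, a rough sketch of switching to a container-based spawner (assumes the separate `dockerspawner` package is installed):

```python
# jupyterhub_config.py -- sketch; requires the dockerspawner package
c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
# user servers then run in containers, so no local system account is needed per user
```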
### How can I run JupyterHub with sudo but use my current env vars and virtualenv location?
When launching JupyterHub with `sudo jupyterhub` I get import errors and my environment variables don't work.
When launching services with `sudo ...` the shell won't have the same environment variables or `PATH`s in place. The most direct way to solve this issue is to use the full path to your python environment and add environment variables. For example:
```bash
sudo MY_ENV=abc123 \
/home/foo/venv/bin/python3 \
/srv/jupyterhub/jupyterhub
```
### How can I view the logs for JupyterHub or the user's Notebook servers when using the DockerSpawner?
Use `docker logs <container>` where `<container>` is the container name defined within `docker-compose.yml`. For example, to view the logs of the JupyterHub container use:
docker logs hub
By default, the user's notebook server is named `jupyter-<username>` where `username` is the user's username within JupyterHub's db. So if you wanted to see the logs for user `foo` you would use:
docker logs jupyter-foo
You can also tail logs to view them in real time using the `-f` option:
docker logs -f hub
## Errors ## Errors
@@ -88,11 +135,11 @@ There are two likely reasons for this:
1. The single-user server cannot connect to the Hub's API (networking 1. The single-user server cannot connect to the Hub's API (networking
configuration problems) configuration problems)
2. The single-user server cannot *authenticate* its requests (invalid token) 2. The single-user server cannot _authenticate_ its requests (invalid token)
#### Symptoms #### Symptoms
The main symptom is a failure to load *any* page served by the single-user The main symptom is a failure to load _any_ page served by the single-user
server, met with a 500 error. This is typically the first page at `/user/<your_name>` server, met with a 500 error. This is typically the first page at `/user/<your_name>`
after logging in or clicking "Start my server". When a single-user notebook server after logging in or clicking "Start my server". When a single-user notebook server
receives a request, the notebook server makes an API request to the Hub to receives a request, the notebook server makes an API request to the Hub to
@@ -108,7 +155,7 @@ You should see a similar 200 message, as above, in the Hub log when you first
visit your single-user notebook server. If you don't see this message in the log, it visit your single-user notebook server. If you don't see this message in the log, it
may mean that your single-user notebook server isn't connecting to your Hub. may mean that your single-user notebook server isn't connecting to your Hub.
If you see 403 (forbidden) like this, it's a token problem: If you see 403 (forbidden) like this, it's likely a token problem:
``` ```
403 GET /hub/api/authorizations/cookie/jupyterhub-token-name/[secret] (@10.0.1.4) 4.14ms 403 GET /hub/api/authorizations/cookie/jupyterhub-token-name/[secret] (@10.0.1.4) 4.14ms
@@ -152,6 +199,42 @@ After this, when you start your server via JupyterHub, it will build a
new container. If this was the underlying cause of the issue, you should see new container. If this was the underlying cause of the issue, you should see
your server again. your server again.
##### Proxy settings (403 GET)
When your whole JupyterHub sits behind an organization proxy (_not_ a reverse proxy like NGINX as part of your setup and _not_ the configurable-http-proxy), the environment variables `HTTP_PROXY`, `HTTPS_PROXY`, `http_proxy` and `https_proxy` might be set. This confuses the jupyterhub-singleuser servers: when connecting to the Hub for authorization, they connect via the proxy instead of directly to the Hub on localhost. The proxy may then deny the request (403 GET), which makes the singleuser server conclude its auth token is invalid. To circumvent this, add `<hub_url>,<hub_ip>,localhost,127.0.0.1` to the environment variables `NO_PROXY` and `no_proxy`.
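One way to do that (a sketch only; where you set the variables depends on your deployment, and `<hub_ip>` is a placeholder) is to pass them to every single-user server via `Spawner.environment`:

```python
# jupyterhub_config.py -- sketch; replace <hub_ip> with your Hub's address
no_proxy_hosts = "localhost,127.0.0.1,<hub_ip>"
c.Spawner.environment = {
    "NO_PROXY": no_proxy_hosts,
    "no_proxy": no_proxy_hosts,
}
```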
### Launching Jupyter Notebooks to run as an externally managed JupyterHub service with the `jupyterhub-singleuser` command returns a `JUPYTERHUB_API_TOKEN` error
[JupyterHub services](https://jupyterhub.readthedocs.io/en/stable/reference/services.html) allow processes to interact with JupyterHub's REST API. Example use-cases include:
- **Secure Testing**: provide a canonical Jupyter Notebook for testing production data to reduce the number of entry points into production systems.
- **Grading Assignments**: provide access to shared Jupyter Notebooks that may be used for management tasks such as grading assignments.
- **Private Dashboards**: share dashboards with certain group members.
If possible, try to run the Jupyter Notebook as an externally managed service with one of the provided [jupyter/docker-stacks](https://github.com/jupyter/docker-stacks).
Standard JupyterHub installations include a [jupyterhub-singleuser](https://github.com/jupyterhub/jupyterhub/blob/9fdab027daa32c9017845572ad9d5ba1722dbc53/setup.py#L116) command which is built from the `jupyterhub.singleuser:main` method. The `jupyterhub-singleuser` command is the default command when JupyterHub launches single-user Jupyter Notebooks. One of the goals of this command is to make sure the version of JupyterHub installed within the Jupyter Notebook coincides with the version of the JupyterHub server itself.
If you launch a Jupyter Notebook with the `jupyterhub-singleuser` command directly from the command line, the Jupyter Notebook won't have access to the `JUPYTERHUB_API_TOKEN` and will return:
```
JUPYTERHUB_API_TOKEN env is required to run jupyterhub-singleuser.
Did you launch it manually?
```
If you plan on testing `jupyterhub-singleuser` independently from JupyterHub, then you can set the API token environment variable. For example, if you were to run the single-user Jupyter Notebook on the host, then:
export JUPYTERHUB_API_TOKEN=my_secret_token
jupyterhub-singleuser
With a docker container, pass in the environment variable with the run command:
docker run -d \
-p 8888:8888 \
-e JUPYTERHUB_API_TOKEN=my_secret_token \
jupyter/datascience-notebook:latest
[This example](https://github.com/jupyterhub/jupyterhub/tree/HEAD/examples/service-notebook/external) demonstrates how to combine the use of the `jupyterhub-singleuser` environment variables when launching a Notebook as an externally managed service.
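For reference, a hedged sketch of the Hub-side registration such a setup typically pairs with (service name, URL, and token below are placeholders, not values from the linked example):

```python
# jupyterhub_config.py -- sketch; name, url, and api_token are placeholders
c.JupyterHub.services = [
    {
        "name": "shared-notebook",
        "url": "http://127.0.0.1:9999",
        "api_token": "my_secret_token",  # the same value exported as JUPYTERHUB_API_TOKEN
    }
]
```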
## How do I...? ## How do I...?
@@ -170,7 +253,6 @@ You would then set in your `jupyterhub_config.py` file the `ssl_key` and
c.JupyterHub.ssl_cert = your_host-chained.crt c.JupyterHub.ssl_cert = your_host-chained.crt
c.JupyterHub.ssl_key = your_host.key c.JupyterHub.ssl_key = your_host.key
#### Example #### Example
Your certificate provider gives you the following files: `example_host.crt`, Your certificate provider gives you the following files: `example_host.crt`,
@@ -193,7 +275,7 @@ where `ssl_cert` is example-chained.crt and ssl_key to your private key.
Then restart JupyterHub. Then restart JupyterHub.
See also [JupyterHub SSL encryption](getting-started.md#ssl-encryption). See also [JupyterHub SSL encryption](./getting-started/security-basics.html#ssl-encryption).
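Putting the example together, the resulting configuration might look like this sketch (paths and the key filename are assumptions; use the files your provider issued):

```python
# jupyterhub_config.py -- sketch; adjust paths to your actual files
c.JupyterHub.ssl_cert = "/path/to/example-chained.crt"
c.JupyterHub.ssl_key = "/path/to/example_host.key"
```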
### Install JupyterHub without a network connection ### Install JupyterHub without a network connection
@@ -252,8 +334,7 @@ notebook servers to default to JupyterLab:
### How do I set up JupyterHub for a workshop (when users are not known ahead of time)? ### How do I set up JupyterHub for a workshop (when users are not known ahead of time)?
1. Set up JupyterHub using OAuthenticator for GitHub authentication 1. Set up JupyterHub using OAuthenticator for GitHub authentication
2. Configure whitelist to be an empty list in` jupyterhub_config.py` 2. Configure admin list to have workshop leaders be listed with administrator privileges.
3. Configure admin list to have workshop leaders be listed with administrator privileges.
Users will need a GitHub account to login and be authenticated by the Hub. Users will need a GitHub account to login and be authenticated by the Hub.
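As a rough sketch of the two steps above (assumes the separate `oauthenticator` package is installed; every value below is a placeholder):

```python
# jupyterhub_config.py -- sketch; all values are placeholders
c.JupyterHub.authenticator_class = "oauthenticator.github.GitHubOAuthenticator"
c.GitHubOAuthenticator.oauth_callback_url = "https://hub.example.com/hub/oauth_callback"
c.GitHubOAuthenticator.client_id = "your-github-oauth-app-id"
c.GitHubOAuthenticator.client_secret = "your-github-oauth-app-secret"
c.Authenticator.admin_users = {"workshop-leader"}  # GitHub handles of the workshop leaders
```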
@@ -281,7 +362,6 @@ Or use syslog:
jupyterhub | logger -t jupyterhub jupyterhub | logger -t jupyterhub
## Troubleshooting commands ## Troubleshooting commands
The following commands provide additional detail about installed packages, The following commands provide additional detail about installed packages,
@@ -324,8 +404,8 @@ SyntaxError: Missing parentheses in call to 'print'
In order to resolve this issue, there are two potential options. In order to resolve this issue, there are two potential options.
1. Update HDFS core-site.xml, so the parameter "net.topology.script.file.name" points to a custom 1. Update HDFS core-site.xml, so the parameter "net.topology.script.file.name" points to a custom
script (e.g. /etc/hadoop/conf/custom_topology_script.py). Copy the original script and change the first line point script (e.g. /etc/hadoop/conf/custom_topology_script.py). Copy the original script and change the first line point
to a python two installation (e.g. /usr/bin/python). to a python two installation (e.g. /usr/bin/python).
2. In spark-env.sh add a Python 2 installation to your path (e.g. export PATH=/opt/anaconda2/bin:$PATH). 2. In spark-env.sh add a Python 2 installation to your path (e.g. export PATH=/opt/anaconda2/bin:$PATH).
### Where do I find Docker images and Dockerfiles related to JupyterHub? ### Where do I find Docker images and Dockerfiles related to JupyterHub?

View File

@@ -5,22 +5,22 @@ do some preparation work in a bootstrapping process.
Common use cases are: Common use cases are:
*Providing writeable storage for LDAP users* _Providing writeable storage for LDAP users_
Your Jupyterhub is configured to use the LDAPAuthenticator and DockerSpawner. Your Jupyterhub is configured to use the LDAPAuthenticator and DockerSpawner.
* The user has no file directory on the host since you are using LDAP. - The user has no file directory on the host since you are using LDAP.
* When a user has no directory and DockerSpawner wants to mount a volume, - When a user has no directory and DockerSpawner wants to mount a volume,
the spawner will use docker to create a directory. the spawner will use docker to create a directory.
Since the docker daemon is running as root, the generated directory for the volume Since the docker daemon is running as root, the generated directory for the volume
mount will not be writeable by the `jovyan` user inside of the container. mount will not be writeable by the `jovyan` user inside of the container.
For the directory to be useful to the user, the permissions on the directory For the directory to be useful to the user, the permissions on the directory
need to be modified for the user to have write access. need to be modified for the user to have write access.
*Prepopulating Content* _Prepopulating Content_
Another use would be to copy initial content, such as tutorial files or reference Another use would be to copy initial content, such as tutorial files or reference
material, into the user's space when a notebook server is newly spawned. material, into the user's space when a notebook server is newly spawned.
You can define your own bootstrap process by implementing a `pre_spawn_hook` on any spawner. You can define your own bootstrap process by implementing a `pre_spawn_hook` on any spawner.
The Spawner itself is passed as a parameter to your hook, and you can easily get the contextual information out of the spawning process. The Spawner itself is passed as a parameter to your hook, and you can easily get the contextual information out of the spawning process.
@@ -28,7 +28,7 @@ The Spawner itself is passed as parameter to your hook and you can easily get th
Similarly, there may be cases where you would like to clean up after a spawner stops. Similarly, there may be cases where you would like to clean up after a spawner stops.
You may implement a `post_stop_hook` that is always executed after the spawner stops. You may implement a `post_stop_hook` that is always executed after the spawner stops.
If you implement a hook, make sure that it is *idempotent*. It will be executed every time If you implement a hook, make sure that it is _idempotent_. It will be executed every time
a notebook server is spawned for the user. That means you should somehow a notebook server is spawned for the user. That means you should somehow
ensure that things which should run only once are not running again and again. ensure that things which should run only once are not running again and again.
For example, before you create a directory, check if it exists. For example, before you create a directory, check if it exists.
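A minimal sketch of such an idempotent hook (the base directory is made up for illustration):

```python
# jupyterhub_config.py -- sketch; the base path is a placeholder
import os

def create_dir_hook(spawner):
    user_dir = os.path.join("/volumes/jupyterhub", spawner.user.name)
    if not os.path.exists(user_dir):
        # idempotent: the directory is only created the first time this user spawns
        os.makedirs(user_dir)

c.Spawner.pre_spawn_hook = create_dir_hook
```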
@@ -148,9 +148,9 @@ else
echo "...initial content loading for user ..." echo "...initial content loading for user ..."
mkdir $USER_DIRECTORY/tutorials mkdir $USER_DIRECTORY/tutorials
cd $USER_DIRECTORY/tutorials cd $USER_DIRECTORY/tutorials
wget https://github.com/jakevdp/PythonDataScienceHandbook/archive/master.zip wget https://github.com/jakevdp/PythonDataScienceHandbook/archive/HEAD.zip
unzip -o master.zip unzip -o HEAD.zip
rm master.zip rm HEAD.zip
fi fi
exit 0 exit 0

View File

@@ -40,9 +40,9 @@ else
echo "...initial content loading for user ..." echo "...initial content loading for user ..."
mkdir $USER_DIRECTORY/tutorials mkdir $USER_DIRECTORY/tutorials
cd $USER_DIRECTORY/tutorials cd $USER_DIRECTORY/tutorials
wget https://github.com/jakevdp/PythonDataScienceHandbook/archive/master.zip wget https://github.com/jakevdp/PythonDataScienceHandbook/archive/HEAD.zip
unzip -o master.zip unzip -o HEAD.zip
rm master.zip rm HEAD.zip
fi fi
exit 0 exit 0

View File

@@ -1,41 +1,4 @@
# `cull-idle` Example # idle-culler example
The `cull_idle_servers.py` file provides a script to cull and shut down idle The idle culler has been moved to its own repository at
single-user notebook servers. This script is used when `cull-idle` is run as [jupyterhub/jupyterhub-idle-culler](https://github.com/jupyterhub/jupyterhub-idle-culler).
a Service or when it is run manually as a standalone script.
## Configure `cull-idle` to run as a Hub-Managed Service
In `jupyterhub_config.py`, add the following dictionary for the `cull-idle`
Service to the `c.JupyterHub.services` list:
```python
c.JupyterHub.services = [
{
'name': 'cull-idle',
'admin': True,
'command': [sys.executable, 'cull_idle_servers.py', '--timeout=3600'],
}
]
```
where:
- `'admin': True` indicates that the Service has 'admin' permissions, and
- `'command'` indicates that the Service will be managed by the Hub.
## Run `cull-idle` manually as a standalone script
This will run `cull-idle` manually. `cull-idle` can be run as a standalone
script anywhere with access to the Hub, and will periodically check for idle
servers and shut them down via the Hub's REST API. In order to shutdown the
servers, the token given to cull-idle must have admin privileges.
Generate an API token and store it in the `JUPYTERHUB_API_TOKEN` environment
variable. Run `cull_idle_servers.py` manually.
```bash
export JUPYTERHUB_API_TOKEN=$(jupyterhub token)
python3 cull_idle_servers.py [--timeout=900] [--url=http://127.0.0.1:8081/hub/api]
```

View File

@@ -1,401 +0,0 @@
#!/usr/bin/env python3
"""script to monitor and cull idle single-user servers
Caveats:
last_activity is not updated with high frequency,
so cull timeout should be greater than the sum of:
- single-user websocket ping interval (default: 30s)
- JupyterHub.last_activity_interval (default: 5 minutes)
You can run this as a service managed by JupyterHub with this in your config::
c.JupyterHub.services = [
{
'name': 'cull-idle',
'admin': True,
'command': [sys.executable, 'cull_idle_servers.py', '--timeout=3600'],
}
]
Or run it manually by generating an API token and storing it in `JUPYTERHUB_API_TOKEN`:
export JUPYTERHUB_API_TOKEN=$(jupyterhub token)
python3 cull_idle_servers.py [--timeout=900] [--url=http://127.0.0.1:8081/hub/api]
This script uses the same ``--timeout`` and ``--max-age`` values for
culling users and users' servers. If you want a different value for
users and servers, you should add this script to the services list
twice, just with different ``name``s, different values, and one with
the ``--cull-users`` option.
"""
import json
import os
from datetime import datetime
from datetime import timezone
from functools import partial
try:
from urllib.parse import quote
except ImportError:
from urllib import quote
import dateutil.parser
from tornado.gen import coroutine, multi
from tornado.locks import Semaphore
from tornado.log import app_log
from tornado.httpclient import AsyncHTTPClient, HTTPRequest
from tornado.ioloop import IOLoop, PeriodicCallback
from tornado.options import define, options, parse_command_line
def parse_date(date_string):
"""Parse a timestamp
If it doesn't have a timezone, assume utc
Returned datetime object will always be timezone-aware
"""
dt = dateutil.parser.parse(date_string)
if not dt.tzinfo:
# assume naïve timestamps are UTC
dt = dt.replace(tzinfo=timezone.utc)
return dt
def format_td(td):
"""
Nicely format a timedelta object
as HH:MM:SS
"""
if td is None:
return "unknown"
if isinstance(td, str):
return td
seconds = int(td.total_seconds())
h = seconds // 3600
seconds = seconds % 3600
m = seconds // 60
seconds = seconds % 60
return "{h:02}:{m:02}:{seconds:02}".format(h=h, m=m, seconds=seconds)
@coroutine
def cull_idle(
url, api_token, inactive_limit, cull_users=False, max_age=0, concurrency=10
):
"""Shutdown idle single-user servers
If cull_users, inactive *users* will be deleted as well.
"""
auth_header = {'Authorization': 'token %s' % api_token}
req = HTTPRequest(url=url + '/users', headers=auth_header)
now = datetime.now(timezone.utc)
client = AsyncHTTPClient()
if concurrency:
semaphore = Semaphore(concurrency)
@coroutine
def fetch(req):
"""client.fetch wrapped in a semaphore to limit concurrency"""
yield semaphore.acquire()
try:
return (yield client.fetch(req))
finally:
yield semaphore.release()
else:
fetch = client.fetch
resp = yield fetch(req)
users = json.loads(resp.body.decode('utf8', 'replace'))
futures = []
@coroutine
def handle_server(user, server_name, server, max_age, inactive_limit):
"""Handle (maybe) culling a single server
"server" is the entire server model from the API.
Returns True if server is now stopped (user removable),
False otherwise.
"""
log_name = user['name']
if server_name:
log_name = '%s/%s' % (user['name'], server_name)
if server.get('pending'):
app_log.warning(
"Not culling server %s with pending %s", log_name, server['pending']
)
return False
# jupyterhub < 0.9 defined 'server.url' once the server was ready
# as an *implicit* signal that the server was ready.
# 0.9 adds a dedicated, explicit 'ready' field.
# By current (0.9) definitions, servers that have no pending
# events and are not ready shouldn't be in the model,
# but let's check just to be safe.
if not server.get('ready', bool(server['url'])):
app_log.warning(
"Not culling not-ready not-pending server %s: %s", log_name, server
)
return False
if server.get('started'):
age = now - parse_date(server['started'])
else:
# started may be undefined on jupyterhub < 0.9
age = None
# check last activity
# last_activity can be None in 0.9
if server['last_activity']:
inactive = now - parse_date(server['last_activity'])
else:
# no activity yet, use start date
# last_activity may be None with jupyterhub 0.9,
# which introduces the 'started' field which is never None
# for running servers
inactive = age
# CUSTOM CULLING TEST CODE HERE
# Add in additional server tests here. Return False to mean "don't
# cull", True means "cull immediately", or, for example, update some
# other variables like inactive_limit.
#
# Here, server['state'] is the result of the get_state method
# on the spawner. This does *not* contain the below by
# default, you may have to modify your spawner to make this
# work. The `user` variable is the user model from the API.
#
# if server['state']['profile_name'] == 'unlimited'
# return False
# inactive_limit = server['state']['culltime']
should_cull = (
inactive is not None and inactive.total_seconds() >= inactive_limit
)
if should_cull:
app_log.info(
"Culling server %s (inactive for %s)", log_name, format_td(inactive)
)
if max_age and not should_cull:
# only check started if max_age is specified
# so that we can still be compatible with jupyterhub 0.8
# which doesn't define the 'started' field
if age is not None and age.total_seconds() >= max_age:
app_log.info(
"Culling server %s (age: %s, inactive for %s)",
log_name,
format_td(age),
format_td(inactive),
)
should_cull = True
if not should_cull:
app_log.debug(
"Not culling server %s (age: %s, inactive for %s)",
log_name,
format_td(age),
format_td(inactive),
)
return False
if server_name:
# culling a named server
delete_url = url + "/users/%s/servers/%s" % (
quote(user['name']),
quote(server['name']),
)
else:
delete_url = url + '/users/%s/server' % quote(user['name'])
req = HTTPRequest(url=delete_url, method='DELETE', headers=auth_header)
resp = yield fetch(req)
if resp.code == 202:
app_log.warning("Server %s is slow to stop", log_name)
# return False to prevent culling user with pending shutdowns
return False
return True
@coroutine
def handle_user(user):
"""Handle one user.
Create a list of their servers, and async exec them. Wait for
that to be done, and if all servers are stopped, possibly cull
the user.
"""
# shutdown servers first.
# Hub doesn't allow deleting users with running servers.
# jupyterhub 0.9 always provides a 'servers' model.
# 0.8 only does this when named servers are enabled.
if 'servers' in user:
servers = user['servers']
else:
# jupyterhub < 0.9 without named servers enabled.
# create servers dict with one entry for the default server
# from the user model.
# only if the server is running.
servers = {}
if user['server']:
servers[''] = {
'last_activity': user['last_activity'],
'pending': user['pending'],
'url': user['server'],
}
server_futures = [
handle_server(user, server_name, server, max_age, inactive_limit)
for server_name, server in servers.items()
]
results = yield multi(server_futures)
if not cull_users:
return
# some servers are still running, cannot cull users
still_alive = len(results) - sum(results)
if still_alive:
app_log.debug(
"Not culling user %s with %i servers still alive",
user['name'],
still_alive,
)
return False
should_cull = False
if user.get('created'):
age = now - parse_date(user['created'])
else:
# created may be undefined on jupyterhub < 0.9
age = None
# check last activity
# last_activity can be None in 0.9
if user['last_activity']:
inactive = now - parse_date(user['last_activity'])
else:
# no activity yet, use start date
# last_activity may be None with jupyterhub 0.9,
# which introduces the 'created' field which is never None
inactive = age
should_cull = (
inactive is not None and inactive.total_seconds() >= inactive_limit
)
if should_cull:
app_log.info("Culling user %s (inactive for %s)", user['name'], inactive)
if max_age and not should_cull:
# only check created if max_age is specified
# so that we can still be compatible with jupyterhub 0.8
# which doesn't define the 'started' field
if age is not None and age.total_seconds() >= max_age:
app_log.info(
"Culling user %s (age: %s, inactive for %s)",
user['name'],
format_td(age),
format_td(inactive),
)
should_cull = True
if not should_cull:
app_log.debug(
"Not culling user %s (created: %s, last active: %s)",
user['name'],
format_td(age),
format_td(inactive),
)
return False
req = HTTPRequest(
url=url + '/users/%s' % user['name'], method='DELETE', headers=auth_header
)
yield fetch(req)
return True
for user in users:
futures.append((user['name'], handle_user(user)))
for (name, f) in futures:
try:
result = yield f
except Exception:
app_log.exception("Error processing %s", name)
else:
if result:
app_log.debug("Finished culling %s", name)
if __name__ == '__main__':
define(
'url',
default=os.environ.get('JUPYTERHUB_API_URL'),
help="The JupyterHub API URL",
)
define('timeout', default=600, help="The idle timeout (in seconds)")
define(
'cull_every',
default=0,
help="The interval (in seconds) for checking for idle servers to cull",
)
define(
'max_age',
default=0,
help="The maximum age (in seconds) of servers that should be culled even if they are active",
)
define(
'cull_users',
default=False,
help="""Cull users in addition to servers.
This is for use in temporary-user cases such as tmpnb.""",
)
define(
'concurrency',
default=10,
help="""Limit the number of concurrent requests made to the Hub.
Deleting a lot of users at the same time can slow down the Hub,
so limit the number of API requests we have outstanding at any given time.
""",
)
parse_command_line()
if not options.cull_every:
options.cull_every = options.timeout // 2
api_token = os.environ['JUPYTERHUB_API_TOKEN']
try:
AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")
except ImportError as e:
app_log.warning(
"Could not load pycurl: %s\n"
"pycurl is recommended if you have a large number of users.",
e,
)
loop = IOLoop.current()
cull = partial(
cull_idle,
url=options.url,
api_token=api_token,
inactive_limit=options.timeout,
cull_users=options.cull_users,
max_age=options.max_age,
concurrency=options.concurrency,
)
# schedule first cull immediately
# because PeriodicCallback doesn't start until the end of the first interval
loop.add_callback(cull)
# schedule periodic cull
pc = PeriodicCallback(cull, 1e3 * options.cull_every)
pc.start()
try:
loop.start()
except KeyboardInterrupt:
pass

View File

@@ -1,11 +0,0 @@
import sys
# run cull-idle as a service
c.JupyterHub.services = [
{
'name': 'cull-idle',
'admin': True,
'command': [sys.executable, 'cull_idle_servers.py', '--timeout=3600'],
}
]

View File

@@ -16,63 +16,62 @@ implementations in other web servers or languages.
## Run the example ## Run the example
1. generate an API token: 1. generate an API token:
export JUPYTERHUB_API_TOKEN=$(openssl rand -hex 32) export JUPYTERHUB_API_TOKEN=$(openssl rand -hex 32)
2. launch a version of the whoami service. 2. launch a version of the whoami service.
For `whoami-oauth`: For `whoami-oauth`:
bash launch-service.sh & bash launch-service.sh &
or for `whoami-oauth-basic`: or for `whoami-oauth-basic`:
bash launch-service-basic.sh & bash launch-service-basic.sh &
3. Launch JupyterHub: 3. Launch JupyterHub:
jupyterhub jupyterhub
4. Visit http://127.0.0.1:5555/ 4. Visit http://127.0.0.1:5555/
After logging in with your local-system credentials, you should see a JSON dump of your user info: After logging in with your local-system credentials, you should see a JSON dump of your user info:
```json ```json
{ {
"admin": false, "admin": false,
"last_activity": "2016-05-27T14:05:18.016372", "last_activity": "2016-05-27T14:05:18.016372",
"name": "queequeg", "name": "queequeg",
"pending": null, "pending": null,
"server": "/user/queequeg" "server": "/user/queequeg"
} }
``` ```
The essential pieces for using JupyterHub as an OAuth provider are: The essential pieces for using JupyterHub as an OAuth provider are:
1. registering your service with jupyterhub: 1. registering your service with jupyterhub:
```python ```python
c.JupyterHub.services = [ c.JupyterHub.services = [
{ {
# the name of your service # the name of your service
# should be simple and unique. # should be simple and unique.
# mostly used to identify your service in logging # mostly used to identify your service in logging
"name": "my-service", "name": "my-service",
# the oauth client id of your service # the oauth client id of your service
# must be unique but isn't private # must be unique but isn't private
# can be randomly generated or hand-written # can be randomly generated or hand-written
"oauth_client_id": "abc123", "oauth_client_id": "abc123",
# the API token and client secret of the service # the API token and client secret of the service
# should be generated securely, # should be generated securely,
# e.g. via `openssl rand -hex 32` # e.g. via `openssl rand -hex 32`
"api_token": "abc123...", "api_token": "abc123...",
# the redirect target for jupyterhub to send users # the redirect target for jupyterhub to send users
# after successful authentication # after successful authentication
"oauth_redirect_uri": "https://service-host/oauth_callback" "oauth_redirect_uri": "https://service-host/oauth_callback"
} }
] ]
``` ```
2. Telling your service how to authenticate with JupyterHub. 2. Telling your service how to authenticate with JupyterHub.
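For step 2, a hedged sketch of what the service side can look like with the helpers in `jupyterhub.services.auth` (the handler, routes, and port below are invented for illustration; the whoami scripts launched above are the authoritative versions):

```python
# sketch: a tornado handler that authenticates requests against JupyterHub via OAuth
import os

from jupyterhub.services.auth import HubOAuthCallbackHandler, HubOAuthenticated
from tornado import ioloop, web


class WhoAmIHandler(HubOAuthenticated, web.RequestHandler):
    @web.authenticated
    def get(self):
        # get_current_user() is the JupyterHub user model (a dict)
        self.write(self.get_current_user())


app = web.Application(
    [
        (r"/services/my-service/oauth_callback", HubOAuthCallbackHandler),
        (r"/services/my-service/", WhoAmIHandler),
    ],
    cookie_secret=os.urandom(32),
)

if __name__ == "__main__":
    app.listen(5555)
    ioloop.IOLoop.current().start()
```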

View File

@@ -5,13 +5,11 @@ so all URLs and requests necessary for OAuth with JupyterHub should be in one pl
""" """
import json import json
import os import os
import sys
from urllib.parse import urlencode from urllib.parse import urlencode
from urllib.parse import urlparse from urllib.parse import urlparse
from tornado import log from tornado import log
from tornado import web from tornado import web
from tornado.auth import OAuth2Mixin
from tornado.httpclient import AsyncHTTPClient from tornado.httpclient import AsyncHTTPClient
from tornado.httpclient import HTTPRequest from tornado.httpclient import HTTPRequest
from tornado.httputil import url_concat from tornado.httputil import url_concat

View File

@@ -4,14 +4,14 @@ This example shows how you can connect Jupyterhub to a Postgres database
instead of the default SQLite backend. instead of the default SQLite backend.
### Running Postgres with Jupyterhub on the host. ### Running Postgres with Jupyterhub on the host.
0. Uncomment and replace `ENV JPY_PSQL_PASSWORD arglebargle` with your own 0. Uncomment and replace `ENV JPY_PSQL_PASSWORD arglebargle` with your own
password in the Dockerfile for `examples/postgres/db`. (Alternatively, pass password in the Dockerfile for `examples/postgres/db`. (Alternatively, pass
-e `JPY_PSQL_PASSWORD=<password>` when you start the db container.) -e `JPY_PSQL_PASSWORD=<password>` when you start the db container.)
1. `cd` to the root of your jupyterhub repo. 1. `cd` to the root of your jupyterhub repo.
2. Build the postgres image with `docker build -t jupyterhub-postgres-db 2. Build the postgres image with `docker build -t jupyterhub-postgres-db examples/postgres/db`. This may take a minute or two the first time it's
examples/postgres/db`. This may take a minute or two the first time it's
run. run.
3. Run the db image with `docker run -d -p 5433:5432 jupyterhub-postgres-db`. 3. Run the db image with `docker run -d -p 5433:5432 jupyterhub-postgres-db`.
@@ -24,24 +24,22 @@ instead of the default SQLite backend.
5. Log in as the user running jupyterhub on your host machine. 5. Log in as the user running jupyterhub on your host machine.
### Running Postgres with Containerized Jupyterhub. ### Running Postgres with Containerized Jupyterhub.
0. Do steps 0-2 from the above section, ensuring that the values set/passed 0. Do steps 0-2 from the above section, ensuring that the values set/passed
for `JPY_PSQL_PASSWORD` match for the hub and db containers. for `JPY_PSQL_PASSWORD` match for the hub and db containers.
1. Build the hub image with `docker build -t jupyterhub-postgres-hub 1. Build the hub image with `docker build -t jupyterhub-postgres-hub examples/postgres/hub`. This may take a minute or two the first time it's
examples/postgres/hub`. This may take a minute or two the first time it's
run. run.
2. Run the db image with `docker run -d --name=jpy-db 2. Run the db image with `docker run -d --name=jpy-db jupyterhub-postgres`. Note that, unlike when connecting to a host machine
jupyterhub-postgres`. Note that, unlike when connecting to a host machine
jupyterhub, we don't specify a port-forwarding scheme here, but we do need jupyterhub, we don't specify a port-forwarding scheme here, but we do need
to specify a name for the container. to specify a name for the container.
3. Run the containerized hub with `docker run -it --link jpy-db:postgres 3. Run the containerized hub with `docker run -it --link jpy-db:postgres jupyterhub-postgres-hub`. This instructs docker to run the hub container
jupyterhub-postgres-hub`. This instructs docker to run the hub container
with a link to the already-running db container, which will forward with a link to the already-running db container, which will forward
environment and connection information from the DB to the hub. environment and connection information from the DB to the hub.
4. Log in as one of the users defined in the `examples/postgres/hub/` 4. Log in as one of the users defined in the `examples/postgres/hub/`
Dockerfile. By default `rhea` is the server's admin user, `io` and Dockerfile. By default `rhea` is the server's admin user, `io` and
`ganymede` are non-admin users, and all users' passwords are their `ganymede` are non-admin users, and all users' passwords are their
usernames. usernames.
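In both setups, the Hub ultimately points at Postgres through its `db_url` setting. A hedged sketch of what that can look like for the host case (user, password, and database name are placeholders, not the values baked into the example Dockerfiles; 5433 matches the port-forward above):

```python
# jupyterhub_config.py -- sketch; credentials and database name are placeholders
c.JupyterHub.db_url = "postgresql://jupyterhub:<password>@127.0.0.1:5433/jupyterhub"
```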

View File

@@ -1,4 +1,3 @@
# Simple Announcement Service Example # Simple Announcement Service Example
This is a simple service that allows administrators to manage announcements This is a simple service that allows administrators to manage announcements
@@ -16,10 +15,10 @@ configuration file something like:
] ]
This starts the announcements service up at `/services/announcement` when This starts the announcements service up at `/services/announcement` when
JupyterHub launches. By default the announcement text is empty. JupyterHub launches. By default the announcement text is empty.
The `announcement` module has a configurable port (default 8888) and an API The `announcement` module has a configurable port (default 8888) and an API
prefix setting. By default the API prefix is `JUPYTERHUB_SERVICE_PREFIX` if prefix setting. By default the API prefix is `JUPYTERHUB_SERVICE_PREFIX` if
that environment variable is set or `/` if it is not. that environment variable is set or `/` if it is not.
## Managing the Announcement ## Managing the Announcement
@@ -27,7 +26,7 @@ that environment variable is set or `/` if it is not.
Admin users can set the announcement text with an API token: Admin users can set the announcement text with an API token:
$ curl -X POST -H "Authorization: token <token>" \ $ curl -X POST -H "Authorization: token <token>" \
-d "{'announcement':'JupyterHub will be upgraded on August 14!'}" \ -d '{"announcement":"JupyterHub will be upgraded on August 14!"}' \
https://.../services/announcement https://.../services/announcement
Anyone can read the announcement: Anyone can read the announcement:
@@ -42,7 +41,7 @@ Anyone can read the announcement:
The time the announcement was posted is recorded in the `timestamp` field and The time the announcement was posted is recorded in the `timestamp` field and
the user who posted the announcement is recorded in the `user` field. the user who posted the announcement is recorded in the `user` field.
To clear the announcement text, just DELETE. Only admin users can do this. To clear the announcement text, just DELETE. Only admin users can do this.
$ curl -X DELETE -H "Authorization: token <token>" \ $ curl -X DELETE -H "Authorization: token <token>" \
https://.../services/announcement https://.../services/announcement
@@ -50,11 +49,11 @@ To clear the announcement text, just DELETE. Only admin users can do this.
## Seeing the Announcement in JupyterHub ## Seeing the Announcement in JupyterHub
To be able to render the announcement, include the provided `page.html` template To be able to render the announcement, include the provided `page.html` template
that extends the base `page.html` template. Set `c.JupyterHub.template_paths` that extends the base `page.html` template. Set `c.JupyterHub.template_paths`
in JupyterHub's configuration to include the path to the extending template. in JupyterHub's configuration to include the path to the extending template.
The template changes the `announcement` element and does a jQuery `$.get()` call The template changes the `announcement` element and does a jQuery `$.get()` call
to retrieve the announcement text. to retrieve the announcement text.
JupyterHub's configurable announcement template variables can be set for various JupyterHub's configurable announcement template variables can be set for various
pages like login, logout, spawn, and home. Including the template provided in pages like login, logout, spawn, and home. Including the template provided in
this example overrides all of those. this example overrides all of those.
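For instance, the `template_paths` setting mentioned above is a one-liner (the directory is a placeholder for wherever you put the extending `page.html`):

```python
# jupyterhub_config.py -- sketch; the directory is a placeholder
c.JupyterHub.template_paths = ["/srv/jupyterhub/announcement-templates"]
```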

View File

@@ -4,7 +4,6 @@ import json
import os import os
from tornado import escape from tornado import escape
from tornado import gen
from tornado import ioloop from tornado import ioloop
from tornado import web from tornado import web

View File

@@ -1,14 +1,9 @@
{% extends "templates/page.html" %} {% extends "templates/page.html" %} {% block announcement %}
{% block announcement %} <div class="container text-center announcement"></div>
<div class="container text-center announcement"> {% endblock %} {% block script %} {{ super() }}
</div>
{% endblock %}
{% block script %}
{{ super() }}
<script> <script>
$.get("/services/announcement/", function(data) { $.get("/services/announcement/", function (data) {
$(".announcement").html(data["announcement"]); $(".announcement").html(data["announcement"]);
}); });
</script> </script>
{% endblock %} {% endblock %}

View File

@@ -0,0 +1,13 @@
FROM jupyterhub/jupyterhub
# Create test user (PAM auth) and install single-user Jupyter
RUN useradd testuser --create-home --shell /bin/bash
RUN echo 'testuser:passwd' | chpasswd
RUN pip install jupyter
COPY app ./app
COPY jupyterhub_config.py .
COPY requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt
CMD ["jupyterhub", "--ip", "0.0.0.0"]

View File

@@ -0,0 +1,107 @@
# Fastapi
[FastAPI](https://fastapi.tiangolo.com/) is a popular new web framework attractive for its type hinting, async support, automatic doc generation (Swagger), and more. Their [Feature highlights](https://fastapi.tiangolo.com/features/) sum it up nicely.
# Swagger UI with OAuth demo
![Fastapi Service Example](./fastapi_example.gif)
# Try it out locally
1. Install `fastapi` and other dependencies, then launch Jupyterhub
```
pip install -r requirements.txt
jupyterhub --ip=127.0.0.1
```
2. Visit http://127.0.0.1:8000/services/fastapi or http://127.0.0.1:8000/services/fastapi/docs
3. Try interacting programmatically. If you create a new token in your control panel or pull out the `JUPYTERHUB_API_TOKEN` in the single user environment, you can skip the third step here.
```
$ curl -X GET http://127.0.0.1:8000/services/fastapi/
{"Hello":"World"}
$ curl -X GET http://127.0.0.1:8000/services/fastapi/me
{"detail":"Must login with token parameter, cookie, or header"}
$ curl -X POST http://127.0.0.1:8000/hub/api/authorizations/token \
-d '{"username": "myname", "password": "mypasswd!"}' \
| jq '.token'
"3fee13ce6d2845da9bd5f2c2170d3428"
$ curl -X GET http://127.0.0.1:8000/services/fastapi/me \
-H "Authorization: Bearer 3fee13ce6d2845da9bd5f2c2170d3428" \
| jq .
{
"name": "myname",
"admin": false,
"groups": [],
"server": null,
"pending": null,
"last_activity": "2021-04-07T18:05:11.587638+00:00",
"servers": null
}
```
# Try it out in Docker
1. Build and run the Docker image locally
```bash
sudo docker build . -t service-fastapi
sudo docker run -it -p 8000:8000 service-fastapi
```
2. Visit http://127.0.0.1:8000/services/fastapi/docs. When going through the OAuth flow or getting a token from the control panel, you can log in with `testuser` / `passwd`.
# PUBLIC_HOST
If you are running your service behind a proxy, or on a Docker / Kubernetes infrastructure, you might run into an error during OAuth that says `Mismatching redirect URI`. In the JupyterHub logs, there will be a warning along the lines of: `[W 2021-04-06 23:40:06.707 JupyterHub provider:498] Redirect uri https://jupyterhub.my.cloud/services/fastapi/oauth_callback != /services/fastapi/oauth_callback`. This happens because Swagger UI adds the request host, as seen in the browser, to the Authorization URL.
To solve that problem, the `oauth_redirect_uri` value in the service initialization needs to match what Swagger will auto-generate and what the service will use when POST'ing to `/oauth2/token`. In this example, setting the `PUBLIC_HOST` environment variable to your public-facing Hub domain (e.g. `https://jupyterhub.my.cloud`) should make it work.
# Notes on security.py
FastAPI has a concept of a [dependency injection](https://fastapi.tiangolo.com/tutorial/dependencies) using a `Depends` object (and a subclass `Security`) that is automatically instantiated/executed when it is a parameter for your endpoint routes. You can utilize a `Depends` object for reusable common parameters or authentication mechanisms like the [`get_user`](https://fastapi.tiangolo.com/tutorial/security/get-current-user) pattern.
JupyterHub OAuth has three ways to authenticate: a `token` URL parameter; an `Authorization: Bearer <token>` header; and a (deprecated) `jupyterhub-services` cookie. FastAPI has helper functions that let us create `Security` (dependency injection) objects for each of those. When you need to allow multiple / optional authentication dependencies (`Security` objects), you can use the argument `auto_error=False` and it will return `None` instead of raising an `HTTPException`.
Endpoints that need authentication (`/me` and `/debug` in this example) can leverage the `get_user` pattern and effectively pull the user model from the Hub API when a request has been authenticated with a cookie / token / header, all using this simple syntax:
```python
from .security import get_current_user
from .models import User
@router.get("/new_endpoint")
async def new_endpoint(user: User = Depends(get_current_user)):
"Function that needs to work with an authenticated user"
return {"Hello": user.name}
```
# Notes on client.py
FastAPI is designed to be an asynchronous web server, so the interactions with the Hub API should be made asynchronously as well. Instead of using `requests` to get user information from a token/cookie, this example uses [`httpx`](https://www.python-httpx.org/). `client.py` defines a small function that creates a `Client` (equivalent of `requests.Session`) with the Hub API URL as its `base_url` and adds the `JUPYTERHUB_API_TOKEN` to every header.
Consider this a very minimal alternative to using `jupyterhub.services.auth.HubOAuth`
```python
# client.py
import os
def get_client():
base_url = os.environ["JUPYTERHUB_API_URL"]
token = os.environ["JUPYTERHUB_API_TOKEN"]
headers = {"Authorization": "Bearer %s" % token}
return httpx.AsyncClient(base_url=base_url, headers=headers)
```
```python
# other modules
from .client import get_client
async with get_client() as client:
resp = await client.get('/endpoint')
...
```

View File

@@ -0,0 +1 @@
from .app import app

View File

@@ -0,0 +1,25 @@
import os
from fastapi import FastAPI
from .service import router
### When managed by Jupyterhub, the actual endpoints
### will be served out prefixed by /services/:name.
### One way to handle this with FastAPI is to use an APIRouter.
### All routes are defined in service.py
app = FastAPI(
title="Example FastAPI Service",
version="0.1",
### Serve out Swagger from the service prefix (<hub>/services/:name/docs)
openapi_url=router.prefix + "/openapi.json",
docs_url=router.prefix + "/docs",
redoc_url=router.prefix + "/redoc",
### Add our service client id to the /docs Authorize form automatically
swagger_ui_init_oauth={"clientId": os.environ["JUPYTERHUB_CLIENT_ID"]},
### Default /docs/oauth2 redirect will cause Hub
### to raise oauth2 redirect uri mismatch errors
swagger_ui_oauth2_redirect_url=os.environ["JUPYTERHUB_OAUTH_CALLBACK_URL"],
)
app.include_router(router)

View File

@@ -0,0 +1,11 @@
import os
import httpx
# a minimal alternative to using HubOAuth class
def get_client():
base_url = os.environ["JUPYTERHUB_API_URL"]
token = os.environ["JUPYTERHUB_API_TOKEN"]
headers = {"Authorization": "Bearer %s" % token}
return httpx.AsyncClient(base_url=base_url, headers=headers)

View File

@@ -0,0 +1,46 @@
from datetime import datetime
from typing import Any
from typing import List
from typing import Optional
from pydantic import BaseModel
# https://jupyterhub.readthedocs.io/en/stable/_static/rest-api/index.html
class Server(BaseModel):
name: str
ready: bool
pending: Optional[str]
url: str
progress_url: str
started: datetime
last_activity: datetime
state: Optional[Any]
user_options: Optional[Any]
class User(BaseModel):
name: str
admin: bool
groups: List[str]
server: Optional[str]
pending: Optional[str]
last_activity: datetime
servers: Optional[List[Server]]
# https://stackoverflow.com/questions/64501193/fastapi-how-to-use-httpexception-in-responses
class AuthorizationError(BaseModel):
detail: str
class HubResponse(BaseModel):
msg: str
request_url: str
token: str
response_code: int
hub_response: dict
class HubApiError(BaseModel):
detail: HubResponse

View File

@@ -0,0 +1,61 @@
import os
from fastapi import HTTPException
from fastapi import Security
from fastapi import status
from fastapi.security import OAuth2AuthorizationCodeBearer
from fastapi.security.api_key import APIKeyQuery
from .client import get_client
from .models import User
### Endpoints can require authentication using Depends(get_current_user)
### get_current_user will look for a token in url params or
### Authorization: bearer token (header).
### Hub technically supports cookie auth too, but it is deprecated so
### not being included here.
auth_by_param = APIKeyQuery(name="token", auto_error=False)
auth_url = os.environ["PUBLIC_HOST"] + "/hub/api/oauth2/authorize"
auth_by_header = OAuth2AuthorizationCodeBearer(
authorizationUrl=auth_url, tokenUrl="get_token", auto_error=False
)
### ^^ The flow for OAuth2 in Swagger is that the "authorize" button
### will redirect user (browser) to "auth_url", which is the Hub login page.
### After logging in, the browser will POST to our internal /get_token endpoint
### with the auth code. That endpoint POST's to Hub /oauth2/token with
### our client_secret (JUPYTERHUB_API_TOKEN) and that code to get an
### access_token, which it returns to browser, which places in Authorization header.
### For consideration: optimize performance with a cache instead of
### always hitting the Hub api?
async def get_current_user(
auth_by_param: str = Security(auth_by_param),
auth_by_header: str = Security(auth_by_header),
):
token = auth_by_param or auth_by_header
if token is None:
raise HTTPException(
status.HTTP_401_UNAUTHORIZED,
detail="Must login with token parameter or Authorization bearer header",
)
async with get_client() as client:
endpoint = "/user"
# normally we auth to Hub API with service api token,
# but this time auth as the user token to get user model
headers = {"Authorization": f"Bearer {token}"}
resp = await client.get(endpoint, headers=headers)
if resp.is_error:
raise HTTPException(
status.HTTP_400_BAD_REQUEST,
detail={
"msg": "Error getting user info from token",
"request_url": str(resp.request.url),
"token": token,
"response_code": resp.status_code,
"hub_response": resp.json(),
},
)
user = User(**resp.json())
return user

View File

@@ -0,0 +1,70 @@
import os

from fastapi import APIRouter
from fastapi import Depends
from fastapi import Form
from fastapi import Request

from .client import get_client
from .models import AuthorizationError
from .models import HubApiError
from .models import User
from .security import get_current_user

# APIRouter prefix cannot end in /
service_prefix = os.getenv("JUPYTERHUB_SERVICE_PREFIX", "").rstrip("/")
router = APIRouter(prefix=service_prefix)


@router.post("/get_token", include_in_schema=False)
async def get_token(code: str = Form(...)):
    "Callback function for OAuth2AuthorizationCodeBearer scheme"
    # The only thing we need in this form post is the code
    # Everything else we can hardcode / pull from env
    async with get_client() as client:
        redirect_uri = (
            os.environ["PUBLIC_HOST"] + os.environ["JUPYTERHUB_OAUTH_CALLBACK_URL"]
        )
        data = {
            "client_id": os.environ["JUPYTERHUB_CLIENT_ID"],
            "client_secret": os.environ["JUPYTERHUB_API_TOKEN"],
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": redirect_uri,
        }
        resp = await client.post("/oauth2/token", data=data)
        ### resp.json() is {'access_token': <token>, 'token_type': 'Bearer'}
        return resp.json()


@router.get("/")
async def index():
    "Non-authenticated function that returns {'Hello': 'World'}"
    return {"Hello": "World"}


# response_model and responses dict translate to OpenAPI (Swagger) hints
# compare and contrast what the /me endpoint looks like in Swagger vs /debug
@router.get(
    "/me",
    response_model=User,
    responses={401: {'model': AuthorizationError}, 400: {'model': HubApiError}},
)
async def me(user: User = Depends(get_current_user)):
    "Authenticated function that returns the User model"
    return user


@router.get("/debug")
async def debug(request: Request, user: User = Depends(get_current_user)):
    """
    Authenticated function that returns a few pieces of debug info:
    * Environ of the service process
    * Request headers
    * User model
    """
    return {
        "env": dict(os.environ),
        "headers": dict(request.headers),
        "user": user,
    }
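The application object that the `uvicorn app:app` command in the config below points at is not part of this excerpt. A minimal sketch of that wiring, assuming the modules above live in a package importable as `app`, could look like:

```python
# app/__init__.py (hypothetical layout): expose a FastAPI app using the router above
from fastapi import FastAPI

from .service import router

app = FastAPI(title="fastapi-jupyterhub-service")
app.include_router(router)
```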

Binary file not shown (image, 2.7 MiB).

@@ -0,0 +1,31 @@
import os
import warnings

# When Swagger performs OAuth2 in the browser, it will set
# the request host + relative path as the redirect uri, causing a
# uri mismatch if the oauth_redirect_uri is just the relative path
# set in the c.JupyterHub.services entry (as per default).
# Therefore we need to know the request host ahead of time.
if "PUBLIC_HOST" not in os.environ:
    msg = (
        "env PUBLIC_HOST is not set, defaulting to http://127.0.0.1:8000. "
        "This can cause problems with OAuth. "
        "Set PUBLIC_HOST to your public (browser accessible) host."
    )
    warnings.warn(msg)
    public_host = "http://127.0.0.1:8000"
else:
    public_host = os.environ["PUBLIC_HOST"].rstrip('/')

service_name = "fastapi"
oauth_redirect_uri = f"{public_host}/services/{service_name}/oauth_callback"

c.JupyterHub.services = [
    {
        "name": service_name,
        "url": "http://127.0.0.1:10202",
        "command": ["uvicorn", "app:app", "--port", "10202"],
        "admin": True,
        "oauth_redirect_uri": oauth_redirect_uri,
        "environment": {"PUBLIC_HOST": public_host},
    }
]
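As a sanity check (not part of the example), the environment a managed service started from this config sees can be inspected from inside the service process; the first variable comes from the `environment` entry above, the rest are injected by JupyterHub:

```python
import os

# hedged sketch: print the variables this example relies on
for var in (
    "PUBLIC_HOST",                  # set via the "environment" entry above
    "JUPYTERHUB_SERVICE_PREFIX",    # set by JupyterHub for managed services
    "JUPYTERHUB_API_URL",
    "JUPYTERHUB_API_TOKEN",
    "JUPYTERHUB_CLIENT_ID",
    "JUPYTERHUB_OAUTH_CALLBACK_URL",
):
    print(f"{var}={os.environ.get(var, '<missing>')}")
```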


@@ -0,0 +1,4 @@
fastapi
httpx
python-multipart
uvicorn


@@ -17,8 +17,8 @@ and the name of the shared-notebook service.
 In the external example, some extra steps are required to set up supervisor:
 1. select a system user to run the service. This is a user on the system, and does not need to be a Hub user. Add this to the user field in `shared-notebook.conf`, replacing `someuser`.
 2. generate a secret token for authentication, and replace the `super-secret` fields in `shared-notebook-service` and `jupyterhub_config.py`
 3. install `shared-notebook-service` somewhere on your system, and update `/path/to/shared-notebook-service` to the absolute path of this destination
-3. copy `shared-notebook.conf` to `/etc/supervisor/conf.d/`
-4. `supervisorctl reload`
+4. copy `shared-notebook.conf` to `/etc/supervisor/conf.d/`
+5. `supervisorctl reload`


@@ -4,21 +4,21 @@ Uses `jupyterhub.services.HubAuth` to authenticate requests with the Hub in a [f
## Run

1. Launch JupyterHub and the `whoami service` with

       jupyterhub --ip=127.0.0.1

2. Visit http://127.0.0.1:8000/services/whoami/ or http://127.0.0.1:8000/services/whoami-oauth/

After logging in with your local-system credentials, you should see a JSON dump of your user info:

```json
{
  "admin": false,
  "last_activity": "2016-05-27T14:05:18.016372",
  "name": "queequeg",
  "pending": null,
  "server": "/user/queequeg"
}
```
@@ -29,5 +29,4 @@ A similar service could be run externally, by setting the JupyterHub service env
    JUPYTERHUB_API_TOKEN
    JUPYTERHUB_SERVICE_PREFIX

[flask]: http://flask.pocoo.org


@@ -1,6 +1,3 @@
-import os
-import sys
 c.JupyterHub.services = [
     {
         'name': 'whoami',


@@ -6,21 +6,21 @@ There is an implementation each of cookie-based `HubAuthenticated` and OAuth-bas
## Run

1. Launch JupyterHub and the `whoami service` with

       jupyterhub --ip=127.0.0.1

2. Visit http://127.0.0.1:8000/services/whoami or http://127.0.0.1:8000/services/whoami-oauth

After logging in with your local-system credentials, you should see a JSON dump of your user info:

```json
{
  "admin": false,
  "last_activity": "2016-05-27T14:05:18.016372",
  "name": "queequeg",
  "pending": null,
  "server": "/user/queequeg"
}
```

Some files were not shown because too many files have changed in this diff.