Compare commits

...

127 Commits

Author SHA1 Message Date
Min RK
cdc2151f75 Bump to 5.1.0 2024-07-31 11:10:39 +02:00
Min RK
b4a06ea53f add 4.1.6 changelog 2024-07-31 10:53:39 +02:00
Min RK
5fcaaac331 Merge pull request #4848 from minrk/prep-510
changelog for 5.1.0
2024-07-31 10:47:34 +02:00
Min RK
4ea8fcb031 regen rest-api 2024-07-31 10:38:27 +02:00
Min RK
ca7df636cb Merge commit from fork
only admins can modify admins
2024-07-31 10:28:14 +02:00
Min RK
759a4f0624 update 5.1 changelog 2024-07-30 20:30:03 +02:00
Min RK
2a89495323 Merge pull request #4856 from jfrost-mo/secure_context_for_login
Show insecure login warning when not in a secure context
2024-07-30 10:22:37 +02:00
Min RK
671c8ab78d Merge pull request #4860 from krassowski/pass-kwargs-to-server-initialize
Pass `kwargs` down to `initialize()` call of the server
2024-07-29 15:55:54 +02:00
Michał Krassowski
49aaf5050f Pass kwargs down to initialize() call of the server 2024-07-27 10:38:23 +01:00
James Frost
0c20f3e867 Show insecure login warning when not in a secure context
Secure contexts are a more robust way of checking that a browsing context
is authenticated and confidential. Compared to checking the scheme alone, this
covers cases where the connection is encrypted but uses a broken algorithm.

Notably, localhost is considered a secure context, even over HTTP.

For more detail on secure contexts, see:
https://developer.mozilla.org/en-US/docs/Web/Security/Secure_Contexts
2024-07-23 11:41:00 +01:00
Min RK
db7d0920cd add some docs on groups permissions 2024-07-03 09:27:10 +02:00
Min RK
ff2db557a8 only admins can modify admins
- if not admin, cannot set admin=True anywhere
- if not admin, cannot modify any user where admin=True
2024-07-02 11:55:54 +02:00
Min RK
0cd5e51dd4 Merge pull request #4849 from jupyterhub/pre-commit-ci-update-config
[pre-commit.ci] pre-commit autoupdate
2024-07-02 09:07:53 +02:00
pre-commit-ci[bot]
b0fbf6a61e [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.4.7 → v0.5.0](https://github.com/astral-sh/ruff-pre-commit/compare/v0.4.7...v0.5.0)
2024-07-02 00:12:05 +00:00
Min RK
9c810b1436 changelog for 5.1.0
small release, a few nice things and one performance regression fix
2024-07-01 15:03:11 +02:00
Min RK
3d1f936a46 Merge pull request #4844 from minrk/allow-stop-during-start
allow stop while start is pending
2024-07-01 14:36:36 +02:00
dependabot[bot]
2c609d0936 Merge pull request #4847 from jupyterhub/dependabot/npm_and_yarn/jsx/braces-3.0.3 2024-07-01 09:07:04 +00:00
dependabot[bot]
8c3025dc4f Bump braces from 3.0.2 to 3.0.3 in /jsx
Bumps [braces](https://github.com/micromatch/braces) from 3.0.2 to 3.0.3.
- [Changelog](https://github.com/micromatch/braces/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/braces/compare/3.0.2...3.0.3)

---
updated-dependencies:
- dependency-name: braces
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-01 08:53:53 +00:00
Simon Li
d51f9f8998 Merge pull request #4846 from jupyterhub/dependabot/github_actions/docker/build-push-action-6
Bump docker/build-push-action from 5 to 6
2024-07-01 09:52:43 +01:00
dependabot[bot]
41583c1322 Bump docker/build-push-action from 5 to 6
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-01 05:20:37 +00:00
Min RK
c65e48b2b6 allow stop while start is pending
cancels start rather than waiting for it to finish or timeout

also fixes cancellation when start_timeout is reached, which was previously left running forever
2024-06-25 10:16:13 +02:00
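The pattern this commit describes — cancelling a pending start instead of waiting for it to finish or time out — can be sketched with plain asyncio (names here are illustrative, not JupyterHub's actual internals):

```python
import asyncio

async def slow_start():
    await asyncio.sleep(60)  # simulate a spawn that never becomes ready
    return "started"

async def stop_during_start():
    start_task = asyncio.ensure_future(slow_start())
    await asyncio.sleep(0)  # let the start task begin; it is now pending
    # a stop request arrives: cancel the pending start rather than waiting
    start_task.cancel()
    try:
        await start_task
    except asyncio.CancelledError:
        return "start cancelled"
    return "start finished"

print(asyncio.run(stop_during_start()))  # start cancelled
```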
dependabot[bot]
01aeb84a13 Merge pull request #4839 from jupyterhub/dependabot/npm_and_yarn/braces-3.0.3 2024-06-18 07:03:06 +00:00
dependabot[bot]
4c2e3f176a Bump braces from 3.0.2 to 3.0.3
Bumps [braces](https://github.com/micromatch/braces) from 3.0.2 to 3.0.3.
- [Changelog](https://github.com/micromatch/braces/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/braces/compare/3.0.2...3.0.3)

---
updated-dependencies:
- dependency-name: braces
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-18 06:49:47 +00:00
Simon Li
554248b083 Merge pull request #4838 from jupyterhub/dependabot/npm_and_yarn/jsx/ws-8.17.1
Bump ws from 8.13.0 to 8.17.1 in /jsx
2024-06-18 07:49:15 +01:00
dependabot[bot]
4a859664da Bump ws from 8.13.0 to 8.17.1 in /jsx
Bumps [ws](https://github.com/websockets/ws) from 8.13.0 to 8.17.1.
- [Release notes](https://github.com/websockets/ws/releases)
- [Commits](https://github.com/websockets/ws/compare/8.13.0...8.17.1)

---
updated-dependencies:
- dependency-name: ws
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-18 06:36:09 +00:00
Yuvi Panda
00b37c9415 Merge pull request #4837 from yuvipanda/docs-2
Provide consistent myst references to documentation pages - part 1
2024-06-11 09:51:17 -07:00
YuviPanda
3a9c631526 Provide consistent myst references to documentation pages
While doing https://github.com/jupyterhub/jupyterhub/pull/2726,
I realized we don't have a consistent way to format references
inside the docs. I now have them be formatted to match the name
of the file, but using `:` to separate them instead of `/` or `-`.
`/` makes it ambiguous when used with markdown link syntax, as
it could be a reference or a file. And using `-` is ambiguous, as
that can be the name of the file itself.

This PR does about half, I can do the other half later (unless
someone else does).
2024-06-10 19:11:51 -07:00
Yuvi Panda
4c868cdfb6 Merge pull request #2726 from rkdarst/conceptual-intro
Jupyter(Hub) conceptual intro
2024-06-10 18:57:13 -07:00
YuviPanda
96e75bb4ac Point old service references to correct place 2024-06-10 18:50:10 -07:00
YuviPanda
f09fdf4761 Explicitly reference the services document 2024-06-10 18:45:32 -07:00
YuviPanda
7ef70eb74f Add note about crosslinks 2024-06-10 18:34:05 -07:00
YuviPanda
5c4eab0c15 Link to idle culler 2024-06-10 18:33:17 -07:00
YuviPanda
8ca8750b04 Remove reference to hubshare 2024-06-10 18:32:49 -07:00
YuviPanda
eb1bf1dc58 Remove TODO 2024-06-10 18:31:55 -07:00
YuviPanda
7852dbc1dc Cleanup references 2024-06-10 18:31:36 -07:00
pre-commit-ci[bot]
3caea2a463 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-06-10 18:25:53 -07:00
Richard Darst
6679c389b5 docs/source/getting-started/what-is-jupyterhub: Suggestions from code review 2024-06-10 18:25:53 -07:00
Richard Darst
954bbbe7d9 Apply suggestions from code review
Co-authored-by: Chris Holdgraf <choldgraf@gmail.com>
2024-06-10 18:25:53 -07:00
Richard Darst
3338de2619 docs/.../what-is-jupyterhub: Updates based on code review
- Thanks to @betatim for the suggestions.
2024-06-10 18:25:53 -07:00
Richard Darst
33c09daf5b Apply suggestions from code review
Thanks to @betatim

Co-authored-by: Tim Head <betatim@gmail.com>
2024-06-10 18:25:53 -07:00
Richard Darst
f3cc79e453 what-is-jupyterhub: Fix links
- Apparently recommonmark intelligently uses links like
  sphinx+rst, and you shouldn't use `.html` on the links.
2024-06-10 18:25:53 -07:00
Richard Darst
cc0bc531d3 what-is-jupyterhub: Full revision 2024-06-10 18:25:53 -07:00
Richard Darst
fd2919b36f what-is-jupyterhub: clarifications (single-user and kernels)
- Single-user servers are same you get with `jupyter notebook`.
- Kernels by default in single-user server environment but don't have
  to be.
2024-06-10 18:25:53 -07:00
Richard Darst
b6e4225482 what-is-jupyterhub initial draft 2024-06-10 18:25:47 -07:00
Yuvi Panda
18d7003580 Merge pull request #4835 from minrk/metrics-cost
reduce cost of event_loop_interval metric
2024-06-10 13:18:44 -07:00
Yuvi Panda
873f60781c Merge pull request #4836 from minrk/group-docstrings
fix formatting of group_overrides docstring
2024-06-10 08:33:19 -07:00
Min RK
d1d8c02cb9 fix formatting of group_overrides docstring
- literals for dict keys
- use example header section
- missing `::` for code block
2024-06-10 14:36:38 +02:00
Min RK
67dd7742ef event_loop_interval: measure only delay, not tick duration
use resolution as lower bound, don't report delays lower than resolution,
since we don't really know their value
2024-06-10 11:48:27 +02:00
Min RK
3ee808e35c reduce cost of event_loop_interval metric
- use single coroutine to reduce cost of each check
- reduce default interval to 50ms and remove metric buckets below 50ms
2024-06-10 09:40:39 +02:00
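The measurement idea — report only the scheduling delay, not the tick duration, and discard values below the timer resolution — can be sketched as a simplified one-shot version (not the actual metric code):

```python
import asyncio
import time

async def measure_loop_delay(interval=0.05, resolution=0.001):
    before = time.perf_counter()
    await asyncio.sleep(interval)
    # delay = how much later than requested the wakeup actually happened
    delay = (time.perf_counter() - before) - interval
    # below the resolution we don't really know the value; report 0
    return delay if delay >= resolution else 0.0

delay = asyncio.run(measure_loop_delay())
print(f"event loop delay: {delay:.4f}s")
```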
Min RK
78369901b2 make sure metrics configuration is in docs 2024-06-10 09:29:39 +02:00
Simon Li
d7a7589821 Merge pull request #4831 from minrk/token-max-age
Add token_expires_in_max_seconds configuration
2024-06-04 09:07:37 +01:00
Min RK
8437e66db9 require token_expires_in_max_seconds setting 2024-06-04 08:16:06 +02:00
Erik Sundell
6ea07a7dd0 Merge pull request #4832 from jupyterhub/pre-commit-ci-update-config
[pre-commit.ci] pre-commit autoupdate
2024-06-04 08:44:07 +03:00
pre-commit-ci[bot]
fc184c4ec7 [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.4.3 → v0.4.7](https://github.com/astral-sh/ruff-pre-commit/compare/v0.4.3...v0.4.7)
2024-06-03 22:08:28 +00:00
Min RK
df4f96eaf9 Add token_expires_in_max_seconds configuration
Allows limiting max expiration of tokens created via the API

Only affects the POST /api/tokens endpoint, not tokens issued by other means or created prior to config
2024-06-03 13:04:14 +02:00
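A sketch of what this setting might look like in `jupyterhub_config.py` (the one-day cap is an arbitrary example value):

```python
# jupyterhub_config.py
# Cap the lifetime of tokens requested via the tokens API;
# tokens issued by other means or created before this config
# took effect are unaffected.
c.JupyterHub.token_expires_in_max_seconds = 86400  # 1 day (example value)
```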
Min RK
d8bb3f4402 Merge pull request #4822 from yuvipanda/group-override
Allow overriding spawner config based on user group membership
2024-05-31 09:28:05 +02:00
Yuvi Panda
4082c2ddbc Reorder log messages
Co-authored-by: Min RK <benjaminrk@gmail.com>
2024-05-30 07:22:28 -07:00
YuviPanda
300f49d1ab Change names of groups in examples 2024-05-29 23:05:51 -07:00
Erik Sundell
6abc096cbc Merge pull request #4829 from manics/read-users-description
Fix wording for `read:users` scope description
2024-05-30 06:55:44 +02:00
Simon Li
a6aba9a7e1 python docs/source/rbac/generate-scope-table.py 2024-05-29 23:49:53 +01:00
Simon Li
8c3ff64511 Fix wording for read:users scope description 2024-05-29 23:05:45 +01:00
Simon Li
104593b9ec Merge pull request #4828 from minrk/admin_users_doc
further emphasize that admin_users config only grants permission
2024-05-29 13:48:40 +01:00
Min RK
495ebe406c further emphasize that admin_users config only grants permission 2024-05-29 10:37:16 +02:00
YuviPanda
5100c60831 add example config 2024-05-24 08:59:48 -07:00
Yuvi Panda
bec737bf27 Fix typo
Co-authored-by: Simon Li <orpheus+devel@gmail.com>
2024-05-24 08:24:48 -07:00
YuviPanda
2bb27653e2 Apply group overrides before pre_spawn hook 2024-05-24 08:23:12 -07:00
pre-commit-ci[bot]
e8fbe84ac8 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-05-24 14:56:48 +00:00
YuviPanda
8564ff015c Add test for dict merging 2024-05-24 07:56:20 -07:00
YuviPanda
fb85cfb118 Better wording for group_overrides help
Co-authored-by: ryanlovett <rylo@berkeley.edu>
2024-05-24 07:27:51 -07:00
pre-commit-ci[bot]
25384051aa [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-05-24 14:23:19 +00:00
YuviPanda
2623aa5e46 Reduce amount of logging when applying group overrides 2024-05-24 07:22:41 -07:00
YuviPanda
30ebf84bd4 Remove direct reference to KubeSpawner 2024-05-24 07:22:30 -07:00
Min RK
50466843ee Bump to 5.1.0.dev 2024-05-24 12:45:49 +02:00
Min RK
c616ab284d Bump to 5.0.0 2024-05-24 12:45:26 +02:00
Min RK
41090ceb55 Merge pull request #4820 from minrk/rel5
final changelog for 5.0.0
2024-05-24 12:31:02 +02:00
Min RK
d7939c1721 one last patch 2024-05-24 11:00:46 +02:00
Min RK
d93ca55b11 update nginx ssl url 2024-05-24 10:57:36 +02:00
Min RK
9ff11e6fa4 Merge pull request #4821 from yuvipanda/fix-bootstrap
Fix missing `form-control` classes & some padding
2024-05-24 10:54:16 +02:00
YuviPanda
5f3833bc95 Allow overriding spawner config based on user group membership
Similar to 'kubespawner_override' in KubeSpawner, this allows
admins to selectively override spawner configuration based on
groups a user belongs to. This allows for low maintenance but
extremely powerful customization based on group membership.
This is particularly powerful when combined with
https://github.com/jupyterhub/oauthenticator/pull/735

## Dictionary vs List

Ordering is important here, but I still chose to implement this
configuration as a dictionary of dictionaries rather than a list. This is
primarily to allow for easy overriding in z2jh (and similar places),
where Lists are just really hard to override. Ordering is provided
by lexicographically sorting the keys, similar to how we do it in z2jh.

## Merging config

The merging code is literally copied from KubeSpawner, and provides
the exact same behavior. Documentation of how it acts is also copied.
2024-05-23 19:48:25 -07:00
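A hedged sketch of the resulting configuration shape (group names and overridden attributes are invented for illustration; see the trait's help text for the authoritative format):

```python
# jupyterhub_config.py
# Keys are applied in lexicographic order, so prefixes like "01-"
# control which overrides win on conflict (later keys win).
c.Spawner.group_overrides = {
    "01-more-memory": {
        "groups": ["researchers"],
        "spawner_override": {"mem_limit": "4G"},
    },
    "02-custom-image": {
        "groups": ["gpu-users"],
        "spawner_override": {"image": "example/gpu-notebook:latest"},
    },
}
```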
pre-commit-ci[bot]
66ddaebf26 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-05-24 01:55:12 +00:00
YuviPanda
2598ac2c1a Fix missing form-control classes & some padding
- Missing `form-control` on a textbox gave it weird padding,
  this fixes it.
- Add new server is set up as a [button addon](https://getbootstrap.com/docs/5.3/forms/input-group/#button-addons)
- Add a little right margin to the username in the navbar,
  just before the logout button. Otherwise they were 'stuck'
  to each other
2024-05-23 18:53:32 -07:00
Min RK
4ab36e3da6 final changelog for 5.0.0 2024-05-23 13:10:58 +02:00
Min RK
282cc020b6 Merge pull request #4815 from minrk/admin-test
admin: don't use state change to update offset
2024-05-16 08:48:22 +02:00
Min RK
6912a5a752 Merge pull request #4817 from minrk/share-code-full-url
add full URLs to share modes
2024-05-16 08:45:08 +02:00
Min RK
cedf237852 avoid offset race cycle in groups as well 2024-05-15 10:42:58 +02:00
Min RK
9ff8f3e6ec update server model docstring 2024-05-15 10:29:09 +02:00
Erik Sundell
abc9581a75 Merge pull request #4816 from minrk/share-codes
DOC: /share-codes/ url typo
2024-05-15 10:01:53 +02:00
Min RK
02df033227 add full URLs to share modes
- full_url for SharedServer
- full_accept_url for ShareCode
2024-05-15 00:02:47 +02:00
Min RK
f82097bf2e /share-codes/ typo 2024-05-14 23:47:01 +02:00
Min RK
2af252c4c3 admin: don't use state change to update offset
set offset -> request page -> response sets offset is a recipe for races

instead, send request with new offset and only update offset state

made easier by consolidating page update requests into single loadPageData
2024-05-14 15:23:46 +02:00
Min RK
06c8d22087 Merge pull request #4814 from minrk/activity-warning
quieter logging in activity-reporting when hub is temporarily unavailable
2024-05-13 10:32:48 +02:00
Min RK
95d479af88 Merge pull request #4812 from minrk/setup-python-cache
ci: enable pip cache
2024-05-13 10:31:58 +02:00
Min RK
aee92985ac set cache-dependency-path 2024-05-13 09:49:18 +02:00
Min RK
ea73931ad0 quieter logging in activity-reporting when hub is temporarily unavailable 2024-05-13 09:36:19 +02:00
Min RK
bbc3870803 Bump to 5.0.0b2 2024-05-09 09:03:55 +02:00
Min RK
212d618978 Merge pull request #4811 from minrk/5b2
Update changelog for 5.0b2
2024-05-09 09:03:39 +02:00
Min RK
b0494c203f ci: enable pip cache 2024-05-09 09:03:05 +02:00
Min RK
75673fc268 beta 2 2024-05-09 08:04:09 +02:00
Min RK
332a393083 Update changelog for 5.0b2 2024-05-08 20:15:57 +02:00
Simon Li
fa538cfc65 Merge pull request #4807 from minrk/jupyter-events
switch from jupyter-telemetry to jupyter-events
2024-05-08 11:31:11 +02:00
Min RK
29ae082399 Merge pull request #4808 from jupyterhub/pre-commit-ci-update-config
Update string formatting - from %s to f-strings
2024-05-07 15:05:59 +02:00
Min RK
463960edaf schemas are not published...YET 2024-05-07 11:40:45 +02:00
Min RK
d9c6e43508 remove unused version number from test events 2024-05-07 11:39:56 +02:00
Min RK
961d2fe878 allows_schemas config is not in jupyter-events 2024-05-07 11:38:24 +02:00
Min RK
5636472ebf apply ruff fixes for UP031 2024-05-07 11:33:59 +02:00
Erik Sundell
fc02f9e2e6 Merge pull request #4809 from consideRatio/pr/fix-internal-ref
docs: fix internal reference typo
2024-05-07 09:16:59 +02:00
Erik Sundell
fd21b2fe94 docs: fix internal reference typo 2024-05-07 09:10:28 +02:00
pre-commit-ci[bot]
6051dc9fa7 [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.3.5 → v0.4.3](https://github.com/astral-sh/ruff-pre-commit/compare/v0.3.5...v0.4.3)
- [github.com/pre-commit/pre-commit-hooks: v4.5.0 → v4.6.0](https://github.com/pre-commit/pre-commit-hooks/compare/v4.5.0...v4.6.0)
2024-05-06 22:03:18 +00:00
Simon Li
4ee5ee4e02 Merge pull request #4806 from minrk/pam-grouplist
use os.getgrouplist to check group membership in allowed_groups
2024-05-04 17:25:19 +02:00
Min RK
745cad5058 ignore unpublished schema URLs 2024-05-03 12:32:40 +02:00
Min RK
335803d19f switch from jupyter-telemetry to jupyter-events
- id must be a URL
- change `record_event` to `emit`
2024-05-03 12:00:42 +02:00
Min RK
3924295650 use getgrouplist to check group membership in allowed_groups
gr_mem check is less reliable
2024-05-03 09:21:10 +02:00
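The difference can be sketched as follows (Unix only; `user_in_group` is an illustrative helper, not JupyterHub code). `gr_mem` lists only the supplementary members recorded in the group database, while `os.getgrouplist()` also includes the user's primary group and memberships from other NSS sources:

```python
import grp
import os
import pwd

def user_in_group(username: str, groupname: str) -> bool:
    user = pwd.getpwnam(username)
    group = grp.getgrnam(groupname)
    # getgrouplist resolves the full membership set, including the
    # primary group, which gr_mem typically omits
    return group.gr_gid in os.getgrouplist(username, user.pw_gid)
```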
Min RK
c135e109ab Merge pull request #4805 from minrk/user-redirect-domain
include domain in PrefixRedirectHandler
2024-05-03 09:02:28 +02:00
Min RK
7e098fa09f include domain in PrefixRedirectHandler
redirects user.domain/user/foo -> hub.domain/hub/user/foo when server is not running

ensures right cookies, etc. are available
2024-05-01 15:43:46 +02:00
Erik Sundell
49f88450d5 Merge pull request #4804 from minrk/doc-redirect_uri
document conditions for oauth_redirect_url more clearly
2024-04-30 17:57:23 +02:00
Min RK
de20379933 document conditions for oauth_redirect_url more clearly 2024-04-30 15:22:18 +02:00
Min RK
8d406c398b Merge pull request #4799 from lahwaacz/async_generator
Relax dependency on async_generator
2024-04-26 11:04:04 +02:00
Jakub Klinkovský
dbd3813a1c async_generator is needed only for python<3.10
- the asynccontextmanager object is available in the standard contextlib
  module since Python 3.7
- the aclosing object is available in the standard contextlib
  module since Python 3.10
- JupyterHub currently requires Python 3.8 or newer
2024-04-24 23:11:10 +02:00
Simon Li
df04596172 Merge pull request #4798 from minrk/use_public_url
add full_url, full_progress_url to server models
2024-04-24 19:24:13 +02:00
Min RK
12f96df4eb fix condition for adding public_url to full_url
check directly if it is just a path, instead of trying to check other config that means it ought to be
2024-04-24 16:18:37 +02:00
Min RK
aecb95cd26 add full_url, full_progress_url to server models
if public_url is defined
2024-04-24 14:38:00 +02:00
Min RK
5fecb71265 Merge pull request #4797 from minrk/raise-not-redirect-loop
403 instead of redirect for token-only HubAuth
2024-04-24 11:08:00 +02:00
Min RK
e0157ff5eb don't try to login redirect with token-only HubAuth class
login via redirect is an artifact of the old services cookie,
removed in 2.0
2024-04-24 09:43:36 +02:00
Min RK
5ae250506b service-whoami: don't advertise link that won't work
whoami-api is API-only; it shouldn't be in the services dropdown
2024-04-23 10:09:05 +02:00
Min RK
8d298922e5 Merge pull request #4796 from manics/fix-redoc
Fix rest API djlint auto-formatting
2024-04-23 09:38:57 +02:00
Simon Li
18707e24b3 Forcibly disable djlint-reformat-jinja for redoc.html
Possible bug, djlint doesn't respect `djlint off` comments
2024-04-22 20:10:48 +01:00
Simon Li
3580904e8a redoc.html: revert djlint changes (breaks handlebar template) 2024-04-22 20:08:46 +01:00
112 changed files with 1812 additions and 495 deletions

View File

@@ -36,6 +36,7 @@ jobs:
- uses: actions/setup-python@v5
with:
python-version: "3.11"
cache: pip
- uses: actions/setup-node@v4
with:
@@ -148,7 +149,7 @@ jobs:
branchRegex: ^\w[\w-.]*$
- name: Build and push jupyterhub
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
@@ -171,7 +172,7 @@ jobs:
branchRegex: ^\w[\w-.]*$
- name: Build and push jupyterhub-onbuild
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
build-args: |
BASE_IMAGE=${{ fromJson(steps.jupyterhubtags.outputs.tags)[0] }}
@@ -194,7 +195,7 @@ jobs:
branchRegex: ^\w[\w-.]*$
- name: Build and push jupyterhub-demo
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
build-args: |
BASE_IMAGE=${{ fromJson(steps.onbuildtags.outputs.tags)[0] }}
@@ -220,7 +221,7 @@ jobs:
branchRegex: ^\w[\w-.]*$
- name: Build and push jupyterhub/singleuser
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
build-args: |
JUPYTERHUB_VERSION=${{ github.ref_type == 'tag' && github.ref_name || format('git:{0}', github.sha) }}

View File

@@ -61,6 +61,10 @@ jobs:
- uses: actions/setup-python@v5
with:
python-version: "3.11"
cache: pip
cache-dependency-path: |
requirements.txt
docs/requirements.txt
- name: Install requirements
run: |

View File

@@ -158,6 +158,11 @@ jobs:
uses: actions/setup-python@v5
with:
python-version: "${{ matrix.python }}"
cache: pip
cache-dependency-path: |
pyproject.toml
requirements.txt
ci/oldest-dependencies/requirements.old
- name: Install Python dependencies
run: |

View File

@@ -16,7 +16,7 @@ ci:
repos:
# autoformat and lint Python code
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.3.5
rev: v0.5.0
hooks:
- id: ruff
types_or:
@@ -42,13 +42,14 @@ repos:
- id: djlint-reformat-jinja
files: ".*templates/.*.html"
types_or: ["html"]
exclude: redoc.html
- id: djlint-jinja
files: ".*templates/.*.html"
types_or: ["html"]
# Autoformat and linting, misc. details
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
rev: v4.6.0
hooks:
- id: end-of-file-fixer
exclude: share/jupyterhub/static/js/admin-react.js

View File

@@ -7,7 +7,7 @@ info:
license:
name: BSD-3-Clause
identifier: BSD-3-Clause
version: 5.0.0b1
version: 5.1.0
servers:
- url: /hub/api
security:
@@ -1176,8 +1176,16 @@ paths:
example: abc123
accept_url:
type: string
description: The URL for accepting the code
description: The URL path for accepting the code
example: /hub/accept-share?code=abc123
full_accept_url:
type:
- string
- "null"
description: |
The full URL for accepting the code,
if JupyterHub.public_url configuration is defined.
example: https://hub.example.org/hub/accept-share?code=abc123
security:
- oauth2:
- shares
@@ -1714,12 +1722,31 @@ components:
url:
type: string
description: |
The URL where the server can be accessed
The URL path where the server can be accessed
(typically /user/:name/:server.name/).
Will be a full URL if subdomains are configured.
progress_url:
type: string
description: |
The URL for an event-stream to retrieve events during a spawn.
The URL path for an event-stream to retrieve events during a spawn.
full_url:
type:
- string
- "null"
description: |
The full URL of the server (`https://hub.example.org/user/:name/:servername`).
`null` unless JupyterHub.public_url or subdomains are configured.
Added in 5.0.
full_progress_url:
type:
- string
- "null"
description: |
The full URL for the progress events (`https://hub.example.org/hub/api/users/:name/servers/:servername/progress`).
`null` unless JupyterHub.public_url is configured.
Added in 5.0.
started:
type: string
description: UTC timestamp when the server was last started.
@@ -1858,7 +1885,14 @@ components:
description: the server name. '' for the default server.
url:
type: string
description: the server's URL
description: the server's URL (path only when not using subdomains)
full_url:
type:
- string
- "null"
description: |
The full URL of the server (`https://hub.example.org/user/:name/:servername`).
`null` unless JupyterHub.public_url or subdomains are configured.
ready:
type: boolean
description: whether the server is ready
@@ -2082,8 +2116,9 @@ components:
Access the admin page. Permission to take actions via the admin
page granted separately.
admin:users:
Read, write, create and delete users and their authentication
state, not including their servers or tokens.
Read, modify, create, and delete users and their authentication
state, not including their servers or tokens. This is an extremely privileged
scope and should be considered tantamount to superuser.
admin:auth_state: Read a user's authentication state.
users:
Read and write permissions to user models (excluding servers, tokens
@@ -2091,8 +2126,8 @@ components:
delete:users: Delete users.
list:users: List users, including at least their names.
read:users:
Read user models (excluding including servers, tokens and
authentication state).
Read user models (including servers, tokens and authentication
state).
read:users:name: Read names of users.
read:users:groups: Read users group membership.
read:users:activity: Read time of last user activity.
@@ -2114,8 +2149,8 @@ components:
read:tokens: Read user tokens.
admin:groups: Read and write group information, create and delete groups.
groups:
Read and write group information, including adding/removing users
to/from groups.
"Read and write group information, including adding/removing any
users to/from groups. Note: adding users to groups may affect permissions."
list:groups: List groups, including at least their names.
read:groups: Read group models.
read:groups:name: Read group names.

View File

@@ -1,3 +1,4 @@
{# djlint: off #}
{%- extends "!layout.html" %}
{# not sure why, but theme CSS prevents scrolling within redoc content
# If this were fixed, we could keep the navbar and footer
@@ -8,9 +9,7 @@
{% endblock docs_navbar %}
{% block footer %}
{% endblock footer %}
{# djlint: off #}
{%- block body_tag -%}<body>{%- endblock body_tag %}
{# djlint: on #}
{%- block extrahead %}
{{ super() }}
<link href="{{ pathto('_static/redoc-fonts.css', 1) }}" rel="stylesheet" />
@@ -23,14 +22,11 @@
document.body.innerText = "Rendered API specification doesn't work with file: protocol. Use sphinx-autobuild to do local builds of the docs, served over HTTP."
} else {
Redoc.init(
"{{ pathto('_static/rest-api.yml', 1) }}", {
{
meta.redoc_options |
default ({})
}
},
"{{ pathto('_static/rest-api.yml', 1) }}",
{{ meta.redoc_options | default ({}) }},
document.getElementById("redoc-spec"),
);
}
</script>
{%- endblock content %}
{# djlint: on #}

View File

@@ -289,6 +289,7 @@ linkcheck_ignore = [
r"https://github.com/[^/]*$", # too many github usernames / searches in changelog
"https://github.com/jupyterhub/jupyterhub/pull/", # too many PRs in changelog
"https://github.com/jupyterhub/jupyterhub/compare/", # too many comparisons in changelog
"https://schema.jupyter.org/jupyterhub/.*", # schemas are not published yet
r"https?://(localhost|127.0.0.1).*", # ignore localhost references in auto-links
r"https://linux.die.net/.*", # linux.die.net seems to block requests from CI with 403 sometimes
# don't check links to unpublished advisories

View File

@@ -1,3 +1,5 @@
(contributing:community)=
# Community communication channels
We use different channels of communication for different purposes. Whichever one you use will depend on what kind of communication you want to engage in.

View File

@@ -1,3 +1,5 @@
(contributing:contributors)=
# Contributors
Project Jupyter thanks the following people for their help and

View File

@@ -1,4 +1,4 @@
(contributing-docs)=
(contributing:docs)=
# Contributing Documentation
@@ -13,7 +13,7 @@ stored under the `docs/source` directory) and converts it into various
formats for people to read. To make sure the documentation you write or
change renders correctly, it is good practice to test it locally.
1. Make sure you have successfully completed {ref}`contributing/setup`.
1. Make sure you have successfully completed {ref}`contributing:setup`.
2. Install the packages required to build the docs.

View File

@@ -1,3 +1,5 @@
(contributing)=
# Contributing
We want you to contribute to JupyterHub in ways that are most exciting

View File

@@ -1,3 +1,5 @@
(contributing:roadmap)=
# The JupyterHub roadmap
This roadmap collects "next steps" for JupyterHub. It is about creating a

View File

@@ -1,7 +1,9 @@
(contributing:security)=
# Reporting security issues in Jupyter or JupyterHub
If you find a security vulnerability in Jupyter or JupyterHub,
whether it is a failure of the security model described in [Security Overview](web-security)
whether it is a failure of the security model described in [Security Overview](explanation:security)
or a failure in implementation,
please report it to <mailto:security@ipython.org>.

View File

@@ -1,4 +1,4 @@
(contributing/setup)=
(contributing:setup)=
# Setting up a development install

View File

@@ -11,7 +11,7 @@ can find them under the [jupyterhub/tests](https://github.com/jupyterhub/jupyter
## Running the tests
1. Make sure you have completed {ref}`contributing/setup`.
1. Make sure you have completed {ref}`contributing:setup`.
Once you are done, you would be able to run `jupyterhub` from the command line and access it from your web browser.
This ensures that the dev environment is properly set up for tests to run.
@@ -126,7 +126,7 @@ For more information on asyncio and event-loops, here are some resources:
### All the tests are failing
Make sure you have completed all the steps in {ref}`contributing/setup` successfully, and are able to access JupyterHub from your browser at http://localhost:8000 after starting `jupyterhub` in your command line.
Make sure you have completed all the steps in {ref}`contributing:setup` successfully, and are able to access JupyterHub from your browser at http://localhost:8000 after starting `jupyterhub` in your command line.
## Code formatting and linting

View File

@@ -1,3 +1,5 @@
(explanation:capacity-planning)=
# Capacity planning
General capacity planning advice for JupyterHub is hard to give,

View File

@@ -0,0 +1,430 @@
(explanation:concepts)=
# JupyterHub: A conceptual overview
```{warning}
This page could be missing cross-links to other parts of
the documentation. You can help by adding them!
```
JupyterHub is not what you think it is. Most things you think are
part of JupyterHub are actually handled by some other component, for
example the spawner or notebook server itself, and it's not always
obvious how the parts relate. The knowledge contained here hasn't
been assembled in one place before, and is essential to understand
when setting up a sufficiently complex Jupyter(Hub) setup.
This document was originally written to assist in debugging: very
often, the actual problem is not where one thinks it is and thus
people can't easily debug. In order to tell this story, we start at
JupyterHub and go all the way down to the fundamental components of
Jupyter.
In this document, we occasionally leave things out or bend the truth
where it helps in explanation, and give our explanations in terms of
Python even though Jupyter itself is language-neutral. The "(&)"
symbol highlights important points where this page leaves out or bends
the truth for simplification of explanation, but there is more if you
dig deeper.
This guide is long, but after reading it you will know of all major
components in the Jupyter ecosystem and everything else you read
should make sense.
## What is Jupyter?
Before we get too far, let's remember what our end goal is. A
**Jupyter Notebook** is nothing more than a Python(&) process
which is getting commands from a web browser and displaying the output
via that browser. What the process actually sees is roughly like
getting commands on standard input(&) and writing to standard
output(&). There is nothing intrinsically special about this process
- it can do anything a normal Python process can do, and nothing more.
The **Jupyter kernel** handles capturing output and converting things
such as graphics to a form usable by the browser.
Everything we explain below is building up to this, going through many
different layers which give you many ways of customizing how this
process runs.
## JupyterHub
**JupyterHub** is the central piece that provides multi-user
login capabilities. Despite this, the end user only briefly interacts with
JupyterHub and most of the actual Jupyter session does not relate to
the hub at all: the hub mainly handles authentication and creating (JupyterHub calls it "spawning") the
single-user server. In short, anything which is related to _starting_
the user's workspace/environment is about JupyterHub, anything about
_running_ usually isn't.
If you have problems with authentication, spawning, or the
proxy (explained below), the issue is usually with JupyterHub. To
debug, JupyterHub has extensive logs which get printed to its console
and can be used to discover most problems.
The main pieces of JupyterHub are:
### Authenticator
JupyterHub itself doesn't actually manage your users. It has a
database of users, but it is usually connected with some other system
that manages the usernames and passwords. When someone tries to log
in to JupyterHub, it asks the
**authenticator**([basics](authenticators),
[reference](../reference/authenticators)) if the
username/password is valid(&). The authenticator returns a username(&),
which is passed on to the spawner, which has to use it to start that
user's environment. The authenticator can also return user
groups and admin status of users, so that JupyterHub can do some
higher-level management.
The following authenticators are included with JupyterHub:
- **PAMAuthenticator** uses the standard Unix/Linux operating system
functions to check users. Roughly, if someone already has access to
the machine (they can log in by ssh), they will be able to log in to
JupyterHub without any other setup. Thus, JupyterHub fills the role
of an ssh server, but provides a web-browser-based way to access the
machine.
There are [plenty of others to choose from](https://github.com/jupyterhub/jupyterhub/wiki/Authenticators).
You can connect to almost any other existing service to manage your
users. You either use all users from this other service (e.g. your
company), or enable only the allowed users (e.g. your group's
GitHub usernames). Some other popular authenticators include:
- **OAuthenticator** uses the standard OAuth protocol to verify users.
For example, you can easily use GitHub to authenticate your users -
users get a "click to login with GitHub" button. This is often
done with an allowlist to only allow certain users.
- **NativeAuthenticator** actually stores and validates its own
usernames and passwords, unlike most other authenticators. Thus,
you can manage all your users within JupyterHub only.
- There are authenticators for LTI (learning management systems),
Shibboleth, Kerberos - and so on.
The authenticator is configured with the
`c.JupyterHub.authenticator_class` configuration option in the
`jupyterhub_config.py` file.
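As a minimal sketch, selecting an authenticator in `jupyterhub_config.py` might look like the following (`c` is the configuration object JupyterHub provides when it loads this file; `"pam"` is the entry-point shorthand for PAMAuthenticator in recent JupyterHub versions):

```python
# jupyterhub_config.py -- a minimal sketch
# "pam" is the entry-point shorthand for PAMAuthenticator (the default).
c.JupyterHub.authenticator_class = "pam"

# Optionally restrict which users may log in:
c.Authenticator.allowed_users = {"alice", "bob"}
```

Third-party authenticators register their own shorthand names when installed.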
The authenticator runs internally to the Hub process but communicates
with outside services.
If you have trouble logging in, this is usually a problem of the
authenticator. The authenticator logs are part of the JupyterHub
logs, but there may also be relevant information in whatever external
services you are using.
### Spawner
The **spawner** ([basics](spawners),
[reference](../reference/spawners)) is the real core of
JupyterHub: when someone wants a notebook server, the spawner allocates
resources and starts the server. The notebook server could run on the
same machine as JupyterHub, on another machine, on some cloud service,
or more. Administrators can limit resources (CPU, memory) or isolate users
from each other, if the spawner supports it. Conversely, a
misconfigured spawner may apply no limits at all and allow any user to
access any other user's files.
Some basic spawners included in JupyterHub are:
- **LocalProcessSpawner** is built into JupyterHub. Upon launch it tries
to switch users to the given username (`su` (&)) and start the
notebook server. It requires that the hub be run as root (because
only root has permission to start processes as other user IDs).
LocalProcessSpawner is no different from a user logging in with
something like `ssh` and running `jupyter notebook`. PAMAuthenticator and
LocalProcessSpawner together are the most basic way of using JupyterHub (and
what it does out of the box), making the hub not too dissimilar to
an advanced ssh server.
There are [many more advanced spawners](/reference/spawners); to
show the diversity of spawning strategies, some are listed below:
- **SudoSpawner** is like LocalProcessSpawner but lets you run
JupyterHub without root. `sudo` has to be configured to allow the
hub's user to run processes under other user IDs.
- **SystemdSpawner** uses Systemd to start other processes. It can
isolate users from each other and provide resource limiting.
- **DockerSpawner** runs user servers in Docker, a containerization system.
This lets you fully isolate users, limit CPU and memory, and provide
custom container images to fully customize the environment.
- **KubeSpawner** runs on Kubernetes, a cloud orchestration
system. The spawner can easily limit users and provide cloud
scaling - but the spawner doesn't actually do that, Kubernetes
does. The spawner just tells Kubernetes what to do. If you want to
get KubeSpawner to do something, first you would figure out how to
do it in Kubernetes, then figure out how to tell KubeSpawner to tell
Kubernetes that. Actually... this is true for most spawners.
- **BatchSpawner** runs on computer clusters with batch job scheduling
systems (e.g. Slurm, HTCondor, PBS, etc.). The user processes are run
as batch jobs, having access to all the data and software that the
users normally will.
In short, spawners are the interface to the rest of the operating
system, and to configure them right you need to know a bit about how
the corresponding operating system service works.
The spawner is responsible for the environment of the single-user
notebook servers (described in the next section). In the end, it just
makes a choice about how to start these processes: for example, the
Docker spawner starts a normal Docker container and runs the right
command inside of it. Thus, the spawner is responsible for setting
what kind of software and data is available to the user.
The spawner runs internally to the Hub process but communicates with
outside services. It is configured by `c.JupyterHub.spawner_class` in
`jupyterhub_config.py`.
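As a sketch, selecting a spawner looks much like selecting an authenticator (the shorthand `"simple"` refers to SimpleLocalProcessSpawner, intended for testing; names like `"docker"` become available when the corresponding spawner package is installed):

```python
# jupyterhub_config.py -- a sketch
# SimpleLocalProcessSpawner: runs servers as the hub's own user,
# with no isolation -- suitable for local testing only.
c.JupyterHub.spawner_class = "simple"
# For production, e.g. with dockerspawner installed:
# c.JupyterHub.spawner_class = "docker"
```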
If a user tries to launch a notebook server and it doesn't work, the
error is usually with the spawner or the notebook server (as described
in the next section). Each spawner outputs some logs to the main
JupyterHub logs, but may also have logs in other places depending on
what services it interacts with (for example, the Docker spawner's
logs are available through Docker's logging system, and KubeSpawner's
through Kubernetes, e.g. via `kubectl logs`).
### Proxy
The JupyterHub **proxy** relays connections between the users
and their single-user notebook servers. What this basically means is
that the hub itself can shut down and the proxy can continue to
allow users to communicate with their notebook servers. (This
further emphasizes that the hub is responsible for starting, not
running, the notebooks). By default, the hub starts the proxy
automatically
and stops the proxy when the hub stops (so that connections get
interrupted). But when you [configure the proxy to run
separately](howto:separate-proxy),
users' connections will continue to work even without the hub.
The default proxy is **ConfigurableHttpProxy** which is simple but
effective. A more advanced option is the [**Traefik Proxy**](https://blog.jupyter.org/introducing-traefikproxy-a-new-jupyterhub-proxy-based-on-traefik-4839e972faf6),
which gives you redundancy and high-availability.
When users "connect to JupyterHub", they _always_ first connect to the
proxy and the proxy relays the connection to the hub. Thus, the proxy
is responsible for SSL and accepting connections from the rest of the
internet. The user uses the hub to authenticate and start the server,
and then the hub connects back to the proxy to adjust the proxy routes
for the user's server (e.g. the web path `/user/someone` redirects to
the server of someone at a certain internal address). The proxy has
to be able to internally connect to both the hub and all the
single-user servers.
The proxy always runs as a separate process to JupyterHub (even though
JupyterHub can start it for you). JupyterHub has one set of
configuration options for the proxy addresses (`bind_url`) and one for
the hub (`hub_bind_url`). If `bind_url` is given, it is just passed to
the automatic proxy to tell it what to do.
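A sketch of these two addresses in `jupyterhub_config.py` (the values shown are illustrative defaults):

```python
# jupyterhub_config.py -- a sketch
# Public-facing address, handled by the proxy:
c.JupyterHub.bind_url = "http://0.0.0.0:8000"
# Internal address of the hub itself, which the proxy and
# single-user servers connect back to:
c.JupyterHub.hub_bind_url = "http://127.0.0.1:8081"
```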
If you have problems after users are redirected to their single-user
notebook servers, or making the first connection to the hub, it is
usually caused by the proxy. ConfigurableHttpProxy's logs are
mixed into JupyterHub's logs when it is started by the hub (the
default case); otherwise, check whatever system runs the proxy (if you
configured it separately, you'll know where).
### Services
JupyterHub has the concept of **services** ([basics](tutorial:services),
[reference](services-reference)), which are other web services
started by the hub, but otherwise are not necessarily related to the
hub itself. They are often used to do things related to Jupyter
(things that the user interacts with, usually not the hub), but could
always be run some other way. Running from the hub provides an easy
way to get Hub API tokens and authenticate users against the hub. It
can also automatically add a proxy route to forward web requests to
that service.
A common example of a service is the [cull idle
servers](https://github.com/jupyterhub/jupyterhub-idle-culler)
service. When started by the hub, it automatically gets admin API
tokens. It uses the API to list all running servers, compare against
activity timeouts, and shut down servers exceeding the limits. Even
though this feels like an intrinsic part of JupyterHub, it is only
loosely coupled, and running it as a service provides the convenience
of authentication; it could just as well be run some other way, with a
manually provided API token.
The configuration option `c.JupyterHub.services` is used to start
services from the hub.
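A sketch of registering the idle culler as a hub-managed service, assuming the `jupyterhub-idle-culler` package is installed (the scopes listed follow that project's documentation and may need adjusting for your version):

```python
# jupyterhub_config.py -- a sketch, assuming jupyterhub-idle-culler is installed
c.JupyterHub.services = [
    {
        "name": "idle-culler",
        "command": ["python3", "-m", "jupyterhub_idle_culler", "--timeout=3600"],
    }
]
# Grant the service only the permissions it needs via a role:
c.JupyterHub.load_roles = [
    {
        "name": "idle-culler",
        "services": ["idle-culler"],
        "scopes": ["list:users", "read:users:activity", "read:servers", "delete:servers"],
    }
]
```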
When a service is started from JupyterHub automatically, its logs are
included in the JupyterHub logs.
## Single-user notebook server
The **single-user notebook server** is the same thing you get by
running `jupyter notebook` or `jupyter lab` from the command line -
the actual Jupyter user interface for a single person.
The role of the spawner is to start this server - basically, running
the command `jupyter notebook`. Actually it doesn't run that, it runs
`jupyterhub-singleuser` which first communicates with the hub to say
"I'm alive" before running a completely normal Jupyter server. The
single-user server can be JupyterLab or classic notebooks. By this
point, the hub is almost completely out of the picture (the web
traffic is going through proxy unchanged). Also by this time, the
spawner has already decided the environment which this single-user
server will have and the single-user server has to deal with that.
The spawner starts the server using `jupyterhub-singleuser` with some
environment variables like `JUPYTERHUB_API_TOKEN` and
`JUPYTERHUB_BASE_URL` which tell the single-user server how to connect
back to the hub in order to say that it's ready.
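For illustration, a process inside the single-user environment could inspect these variables like so (a sketch; outside a real JupyterHub spawn they will simply be unset):

```python
import os

# Set by the spawner when it launches jupyterhub-singleuser;
# unset when running outside JupyterHub.
api_token = os.environ.get("JUPYTERHUB_API_TOKEN")
api_url = os.environ.get("JUPYTERHUB_API_URL", "http://127.0.0.1:8081/hub/api")

if api_token is None:
    print("not running under JupyterHub")
else:
    print("hub API at", api_url)
```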
The single-user server options are **JupyterLab** and **classic
Jupyter Notebook**. They both run through the same backend server
process; the web frontend is chosen when the server starts. The spawner can choose the
command line when it starts the single-user server. Extensions are a
property of the single-user server (in two parts: there can be a part
that runs in the Python server process, and parts that run in
javascript in lab or notebook).
If one wants to install software for users, it is not a matter of
"installing it for JupyterHub" - it's a matter of installing it for the
single-user server, which might be the same environment as the hub,
but not necessarily. (see below - it's a matter of the kernels!)
After the single-user notebook server is started, any errors are only
an issue of the single-user notebook server. Sometimes, it seems like
the spawner is failing, but really the spawner is working but the
single-user notebook server dies right away (in this case, you need to
find the problem with the single-user server and adjust the spawner to
start it correctly or fix the environment). This can happen, for
example, if the spawner doesn't set an environment variable or doesn't
provide storage.
The single-user server's logs are printed to stdout/stderr, and the
spawner decides where those streams are directed, so if you
notice problems at this phase you need to check your spawner for
instructions for accessing the single-user logs. For example, the
LocalProcessSpawner logs are simply written to the same JupyterHub
output logs, the SystemdSpawner logs are
written to the Systemd journal, Docker and Kubernetes logs are written
to Docker and Kubernetes respectively, and batchspawner output goes to
the normal output places of batch jobs and is an explicit
configuration option of the spawner.
**(Jupyter) Notebook** is the classic interface, where each notebook
opens in a separate tab. It is traditionally started by `jupyter
notebook`.
**JupyterLab** is the new interface, where multiple notebooks are
openable in the same tab in an IDE-like environment. It is
traditionally started with `jupyter lab`. Both Notebook and Lab use
the same `.ipynb` file format.
JupyterLab is run through the same server file, but at a path `/lab`
instead of `/tree`. Thus, they can be active at the same time in the
backend and you can switch between them at runtime by changing your
URL path.
Extensions need to be rewritten for JupyterLab (if moving from
classic notebooks), but the server-side part of an extension can be
shared by both.
## Kernel
The commands you run in the notebook session are not executed in the same process as
the notebook itself, but in a separate **Jupyter kernel**. There are [many
kernels
available](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels).
As a basic approximation, a **Jupyter kernel** is a process which
accepts commands (cells that are run) and returns the output to
Jupyter to display. One example is the **IPython Jupyter kernel**,
which runs Python. There is nothing special about it; it can be
considered a normal Python process. The kernel process can be
approximated in UNIX terms as a process that takes commands on stdin
and returns stuff on stdout(&). Of course, it does more than that: it
has to disentangle all the possible outputs, such as figures,
and present them to the user in a web browser.
Kernel communication is via the ZeroMQ protocol on the local
computer. Kernels are separate processes from the main single-user
notebook server (and thus obviously, different from the JupyterHub
process and everything else). By default (and unless you do something
special), kernels share the same environment as the notebook server
(data, resource limits, permissions, user id, etc.). But they _can_
run in a separate Python environment from the single-user server
(search `--prefix` in the [ipykernel installation
instructions](https://ipython.readthedocs.io/en/stable/install/kernel_install.html)).
There are also more fancy techniques such as the [Jupyter Kernel
Gateway](https://jupyter-kernel-gateway.readthedocs.io/) and [Enterprise
Gateway](https://jupyter-enterprise-gateway.readthedocs.io/), which
allow you to run the kernels on a different machine and possibly with
a different environment.
A kernel doesn't just execute its language: magics such as `%`,
`%%`, and `!` are a property of the kernel. In particular, these are
IPython kernel commands and don't necessarily work in any other
kernel unless it specifically supports them.
Kernels are yet _another_ layer of configurability.
Each kernel can run a different programming language, with different
software, and so on. By default, they would run in the same
environment as the single-user notebook server, and the most common
other way they are configured is by
running in different Python virtual environments or conda
environments. They can be started and killed independently (there is
normally one per notebook you have open). The kernel uses
most of your memory and CPU when running Jupyter - the rest of the web
interface has a small footprint.
You can list your installed kernels with `jupyter kernelspec list`.
If you look at one of the `kernel.json` files in those directories, you
will see exactly what command is run. These are normally
automatically made by the kernels, but can be edited as needed. [The
spec](https://jupyter-client.readthedocs.io/en/stable/kernels.html)
tells you even more.
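For reference, a `kernel.json` for the IPython kernel typically looks roughly like this (shown as a Python dict for readability; the actual file is JSON, and paths vary by installation):

```python
# A rough sketch of the contents of a kernel.json file.
# {connection_file} is substituted at launch with the path to a file
# describing the ZeroMQ ports and keys the kernel should use.
kernelspec = {
    "argv": ["python3", "-m", "ipykernel_launcher", "-f", "{connection_file}"],
    "display_name": "Python 3",
    "language": "python",
}
```

Editing `argv` is how you would, for example, point a kernelspec at a different virtual environment's interpreter.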
The kernel normally has to be reachable by the single-user notebook server
but the gateways mentioned above can get around that limitation.
If you get problems with "Kernel died" or some other error in a single
notebook but the single-user notebook server stays working, it is
usually a problem with the kernel. It could be that you are trying to
use more resources than you are allowed and the symptom is the kernel
getting killed. It could be that it crashes for some other reason.
In these cases, you need to find the kernel logs and investigate.
The debug logs for the kernel are normally mixed in with the
single-user notebook server logs.
## JupyterHub distributions
There are several "distributions" which automatically install all of
the things above and configure them for a certain purpose. They are
good ways to get started, but if you have custom needs, eventually it
may become hard to adapt them to your requirements.
- [**Zero to JupyterHub with
Kubernetes**](https://zero-to-jupyterhub.readthedocs.io/) installs
an entire scalable system using Kubernetes. Uses KubeSpawner,
....Authenticator, ....
- [**The Littlest JupyterHub**](https://tljh.jupyter.org/) installs JupyterHub on a single system
using SystemdSpawner and NativeAuthenticator (which manages users
itself).
- [**JupyterHub the hard way**](https://github.com/jupyterhub/jupyterhub-the-hard-way/blob/master/docs/installation-guide-hard.md)
takes you through everything yourself. It is a natural companion to
this guide, since you get to experience every little bit.
## What's next?
Now you know everything. Well, you know how everything relates, but
there are still plenty of details, implementations, and exceptions.
When setting up JupyterHub, the first step is to consider the above
layers, decide the right option for each of them, then begin putting
everything together.

View File

@@ -1,4 +1,4 @@
(hub-database)=
(explanation:hub-database)=
# The Hub's Database

View File

@@ -1,3 +1,5 @@
(explanation)=
# Explanation
_Explanation_ documentation provide big-picture descriptions of how JupyterHub works. This section is meant to build your understanding of particular topics.
@@ -5,6 +7,7 @@ _Explanation_ documentation provide big-picture descriptions of how JupyterHub w
```{toctree}
:maxdepth: 1
concepts
capacity-planning
database
websecurity

View File

@@ -1,4 +1,4 @@
(jupyterhub-oauth)=
(explanation:hub-oauth)=
# JupyterHub and OAuth

View File

@@ -1,4 +1,4 @@
(singleuser)=
(explanation:singleuser)=
# The JupyterHub single-user server
@@ -24,7 +24,7 @@ It's the same!
## Single-user server authentication
Implementation-wise, JupyterHub single-user servers are a special-case of {ref}`services`
Implementation-wise, JupyterHub single-user servers are a special-case of {ref}`services-reference`
and as such use the same (OAuth) authentication mechanism (more on OAuth in JupyterHub at [](oauth)).
This is primarily implemented in the {class}`~.HubOAuth` class.
@@ -104,6 +104,6 @@ But technically, all JupyterHub cares about is that it is:
1. an http server at the prescribed URL, accessible from the Hub and proxy, and
2. authenticated via [OAuth](oauth) with the Hub (it doesn't even have to do this, if you want to do your own authentication, as is done in BinderHub)
which means that you can customize JupyterHub to launch _any_ web application that meets these criteria, by following the specifications in {ref}`services`.
which means that you can customize JupyterHub to launch _any_ web application that meets these criteria, by following the specifications in {ref}`services-reference`.
Most of the time, though, it's easier to use [jupyter-server-proxy](https://jupyter-server-proxy.readthedocs.io) if you want to launch additional web applications in JupyterHub.

View File

@@ -1,4 +1,4 @@
(web-security)=
(explanation:security)=
# Security Overview

View File

@@ -1,3 +1,5 @@
(faq)=
# Frequently asked questions
## How do I share links to notebooks?

View File

@@ -1,3 +1,5 @@
(faq:institutional)=
# Institutional FAQ
This page contains common questions from users of JupyterHub,
@@ -130,7 +132,7 @@ level for several years, and makes a number of "default" security decisions that
users.
- For security considerations in the base JupyterHub application,
[see the JupyterHub security page](web-security).
[see the JupyterHub security page](explanation:security).
- For security considerations when deploying JupyterHub on Kubernetes, see the
[JupyterHub on Kubernetes security page](https://z2jh.jupyter.org/en/latest/security.html).

View File

@@ -1,4 +1,4 @@
(troubleshooting)=
(faq:troubleshooting)=
# Troubleshooting
@@ -167,7 +167,7 @@ When your whole JupyterHub sits behind an organization proxy (_not_ a reverse pr
### Launching Jupyter Notebooks to run as an externally managed JupyterHub service with the `jupyterhub-singleuser` command returns a `JUPYTERHUB_API_TOKEN` error
{ref}`services` allow processes to interact with JupyterHub's REST API. Example use-cases include:
{ref}`services-reference` allow processes to interact with JupyterHub's REST API. Example use-cases include:
- **Secure Testing**: provide a canonical Jupyter Notebook for testing production data to reduce the number of entry points into production systems.
- **Grading Assignments**: provide access to shared Jupyter Notebooks that may be used for management tasks such as grading assignments.

View File

@@ -1,4 +1,4 @@
(api-only)=
(howto:api-only)=
# Deploying JupyterHub in "API only mode"

View File

@@ -1,3 +1,5 @@
(howto:config:gh-oauth)=
# Configure GitHub OAuth
In this example, we show a configuration file for a fairly standard JupyterHub

View File

@@ -1,3 +1,5 @@
(howto:config:reverse-proxy)=
# Using a reverse proxy
In the following example, we show configuration files for a JupyterHub server

View File

@@ -1,3 +1,5 @@
(howto:config:no-sudo)=
# Run JupyterHub without root privileges using `sudo`
**Note:** Setting up `sudo` permissions involves many pieces of system

View File

@@ -1,3 +1,5 @@
(howto:config:user-env)=
# Configuring user environments
To deploy JupyterHub means you are providing Jupyter notebook environments for

View File

@@ -1,3 +1,5 @@
(howto:log-messages)=
# Interpreting common log messages
When debugging errors and outages, looking at the logs emitted by

View File

@@ -1,3 +1,5 @@
(howto:custom-proxy)=
# Writing a custom Proxy implementation
JupyterHub 0.8 introduced the ability to write a custom implementation of the

View File

@@ -1,4 +1,4 @@
(using-jupyterhub-rest-api)=
(howto:rest-api)=
# Using JupyterHub's REST API

View File

@@ -1,4 +1,4 @@
(separate-proxy)=
(howto:separate-proxy)=
# Running proxy separately from the hub

View File

@@ -1,3 +1,5 @@
(howto:templates)=
# Working with templates and UI
The pages of the JupyterHub application are generated from

View File

@@ -1,4 +1,4 @@
(upgrading-v5)=
(howto:upgrading-v5)=
# Upgrading to JupyterHub 5

View File

@@ -1,4 +1,4 @@
(upgrading-jupyterhub)=
(howto:upgrading-jupyterhub)=
# Upgrading JupyterHub

View File

@@ -186,14 +186,14 @@ An **access scope** is used to govern _access_ to a JupyterHub service or a user
This means making API requests, or visiting via a browser using OAuth.
Without the appropriate access scope, a user or token should not be permitted to make requests of the service.
When you attempt to access a service or server authenticated with JupyterHub, it will begin the [oauth flow](jupyterhub-oauth) for issuing a token that can be used to access the service.
When you attempt to access a service or server authenticated with JupyterHub, it will begin the [oauth flow](explanation:hub-oauth) for issuing a token that can be used to access the service.
If the user does not have the access scope for the relevant service or server, JupyterHub will not permit the oauth process to complete.
If oauth completes, the token will have at least the access scope for the service.
For minimal permissions, this is the _only_ scope granted to tokens issued during oauth by default,
but can be expanded via {attr}`.Spawner.oauth_client_allowed_scopes` or a service's [`oauth_client_allowed_scopes`](service-credentials) configuration.
:::{seealso}
[Further explanation of OAuth in JupyterHub](jupyterhub-oauth)
[Further explanation of OAuth in JupyterHub](explanation:hub-oauth)
:::
If a given service or single-user server can be governed by a single boolean "yes, you can use this service" or "no, you can't," or limiting via other existing scopes, access scopes are enough to manage access to the service.
@@ -229,6 +229,32 @@ access:servers!server
access:servers!server=username/
: access to only `username`'s _default_ server.
(granting-scopes)=
### Considerations when allowing users to grant permissions via the `groups` scope
In general, permissions are fixed by role assignments in configuration (or via [Authenticator-managed roles](#authenticator-roles) in JupyterHub 5) and can only be modified by administrators who can modify the Hub configuration.
There is only one scope that allows users to modify permissions of themselves or others at runtime instead of via configuration:
the `groups` scope, which allows adding and removing users from one or more groups.
With the `groups` scope, a user can add or remove any users to/from any group.
With the `groups!group=name` filtered scope, a user can add or remove any users to/from a specific group.
There are two ways in which adding a user to a group may affect their permissions:
- if the group is assigned one or more roles, adding a user to the group may increase their permissions (this is usually the point!)
- if the group is the _target_ of a filter on this or another group, such as `access:servers!group=students`, adding a user to the group can grant _other_ users elevated access to that user's resources.
With these in mind, when designing your roles, do not grant users the `groups` scope for any groups which:
- have roles the user should not have authority over, or
- would grant them access they shouldn't have for _any_ user (e.g. don't grant `teachers` both `access:servers!group=students` and `groups!group=students` which is tantamount to the unrestricted `access:servers` because they control which users the `group=students` filter applies to).
If a group does not have role assignments and the group is not present in any `!group=` filter, there should be no permissions-related consequences for adding users to groups.
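As an illustrative sketch (the role and group names here are hypothetical), a configuration granting teachers access to student servers should keep the `groups!group=students` scope out of the teacher role:

```python
c.JupyterHub.load_roles = [
    {
        "name": "teacher",
        "groups": ["teachers"],
        # access to student servers, but deliberately *not*
        # "groups!group=students", which would let teachers decide
        # who counts as a student and thus widen their own access
        "scopes": ["access:servers!group=students"],
    }
]
```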
:::{note}
The legacy `admin` property of users, which grants extreme superuser permissions and is generally discouraged in favor of more specific roles and scopes, may be modified only by other users with the `admin` property (e.g. added via `admin_users`).
:::
(custom-scopes)=
### Custom scopes

View File

@@ -11,7 +11,7 @@ No other database records are affected.
## Upgrade steps
1. All running **servers must be stopped** before proceeding with the upgrade.
2. To upgrade the Hub, follow the [Upgrading JupyterHub](upgrading-jupyterhub) instructions.
2. To upgrade the Hub, follow the [Upgrading JupyterHub](howto:upgrading-jupyterhub) instructions.
```{attention}
We advise against defining any new roles in the `jupyterhub.config.py` file right after the upgrade is completed and JupyterHub restarted for the first time. This preserves the 'current' state of the Hub. You can define and assign new roles on any other following startup.
```

View File

@@ -11,7 +11,7 @@
:Release: {{ version }}
JupyterHub also provides a REST API for administration of the Hub and users.
The documentation on [Using JupyterHub's REST API](using-jupyterhub-rest-api) provides
The documentation on [Using JupyterHub's REST API](howto:rest-api) provides
information on:
- what you can do with the API

File diff suppressed because one or more lines are too long

View File

@@ -1,28 +1,23 @@
# Event logging and telemetry
JupyterHub can be configured to record structured events from a running server using Jupyter's [Telemetry System]. The types of events that JupyterHub emits are defined by [JSON schemas] listed at the bottom of this page.
JupyterHub can be configured to record structured events from a running server using Jupyter's [Events System]. The types of events that JupyterHub emits are defined by [JSON schemas] listed at the bottom of this page.
## How to emit events
Event logging is handled by its `Eventlog` object. This leverages Python's standing [logging] library to emit, filter, and collect event data.
Event logging is handled by its `EventLogger` object. This leverages Python's standing [logging] library to emit, filter, and collect event data.
To begin recording events, you'll need to set two configurations:
To begin recording events, you'll need to set at least one configuration option:
> 1. `handlers`: tells the EventLog _where_ to route your events. This trait is a list of Python logging handlers that route events to the event log file.
> 2. `allows_schemas`: tells the EventLog _which_ events should be recorded. No events are emitted by default; all recorded events must be listed here.
> `EventLogger.handlers`: tells the EventLogger _where_ to route your events. This trait is a list of Python logging handlers that route events to e.g. an event log file.
Here's a basic example:
```
```python
import logging
c.EventLog.handlers = [
c.EventLogger.handlers = [
logging.FileHandler('event.log'),
]
c.EventLog.allowed_schemas = [
'hub.jupyter.org/server-action'
]
```
The output is a file, `"event.log"`, with events recorded as JSON data.
@@ -37,6 +32,15 @@ The output is a file, `"event.log"`, with events recorded as JSON data.
server-actions
```
:::{versionchanged} 5.0
JupyterHub 5.0 changes from the deprecated jupyter-telemetry to jupyter-events.
The main changes are:
- `EventLog` configuration is now called `EventLogger`
- The `hub.jupyter.org/server-action` schema is now called `https://schema.jupyter.org/jupyterhub/events/server-action`
:::
[json schemas]: https://json-schema.org/
[logging]: https://docs.python.org/3/library/logging.html
[telemetry system]: https://github.com/jupyter/telemetry
[events system]: https://jupyter-events.readthedocs.io

View File

@@ -32,3 +32,11 @@ export JUPYTERHUB_METRICS_PREFIX=jupyterhub_prod
```
would result in the metric `jupyterhub_prod_active_users`, etc.
## Configuring metrics
```{eval-rst}
.. currentmodule:: jupyterhub.metrics
.. autoconfigurable:: PeriodicMetricsCollector
```

View File

@@ -1,4 +1,4 @@
(services)=
(services-reference)=
# Services
@@ -213,7 +213,7 @@ c.JupyterHub.load_roles = [
]
```
When a service has a configured URL or explicit `oauth_client_id` or `oauth_redirect_uri`, it can operate as an [OAuth client](jupyterhub-oauth).
When a service has a configured URL or explicit `oauth_client_id` or `oauth_redirect_uri`, it can operate as an [OAuth client](explanation:hub-oauth).
When a user visits an oauth-authenticated service,
completion of authentication results in issuing an oauth token.
@@ -514,7 +514,7 @@ For example, using flask:
:language: python
```
We recommend looking at the [`HubOAuth`][huboauth] class implementation for reference,
We recommend looking at the {class}`.HubOAuth` class implementation for reference,
and taking note of the following process:
1. retrieve the token from the request.

View File

@@ -264,7 +264,7 @@ Share codes are much like shares, except:
To create a share code:
```{parsed-literal}
[POST /api/share-code/:username/:servername](rest-api-post-share-code)
[POST /api/share-codes/:username/:servername](rest-api-post-share-code)
```
where the body should include the scopes to be granted and expiration.
@@ -286,6 +286,7 @@ The response contains the code itself:
{
"code": "abc1234....",
"accept_url": "/hub/accept-share?code=abc1234",
"full_accept_url": "https://hub.example.org/hub/accept-share?code=abc1234",
"id": "sc_1234",
"scopes": [...],
...

View File

@@ -4,7 +4,7 @@
This document describes how JupyterHub routes requests.
This does not include the [REST API](using-jupyterhub-rest-api) URLs.
This does not include the [REST API](howto:rest-api) URLs.
In general, all URLs can be prefixed with `c.JupyterHub.base_url` to
run the whole JupyterHub application on a prefix.
@@ -240,7 +240,7 @@ and the page will show a link back to `/hub/spawn/...`.
On this page, users can manage their JupyterHub API tokens.
They can revoke access and request new tokens for writing scripts
against the [JupyterHub REST API](using-jupyterhub-rest-api).
against the [JupyterHub REST API](howto:rest-api).
## `/hub/admin`

View File

@@ -93,6 +93,25 @@ A set of initial admin users, `admin_users` can be configured as follows:
c.Authenticator.admin_users = {'mal', 'zoe'}
```
:::{warning}
`admin_users` config can only be used to _grant_ admin permissions.
Removing users from this set **does not** remove their admin permissions,
which must be done via the admin page or API.
Role assignments via `load_roles` are the only way to _revoke_ past permissions from configuration:
```python
c.JupyterHub.load_roles = [
{
"name": "admin",
"users": ["admin1", "..."],
}
]
```
or, better yet, [specify your own roles](define-role-target) with only the permissions your admins actually need.
:::
Users in the admin set are automatically added to the user `allowed_users` set,
if they are not already present.
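As a sketch of the "specify your own roles" option above, a limited role might grant only the user-management permissions your admins actually use. The scope names below are assumptions chosen to illustrate the shape; check the scopes reference for the exact scopes your deployment needs:

```python
# illustrative jupyterhub_config.py fragment
user_admin_role = {
    "name": "user-admin",
    # hypothetical subset of scopes instead of the full "admin" role
    "scopes": ["admin-ui", "list:users", "admin:servers"],
    "users": ["mal", "zoe"],
}
# in jupyterhub_config.py:
# c.JupyterHub.load_roles = [user_admin_role]
```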

View File

@@ -99,4 +99,4 @@ maintenance, re-configuration, etc.), then user connections are not
interrupted. For simplicity, by default the hub starts the proxy
automatically, so if the hub restarts, the proxy restarts, and user
connections are interrupted. It is easy to run the proxy separately,
for information see [the separate proxy page](separate-proxy).
for information see [the separate proxy page](howto:separate-proxy).

View File

@@ -43,7 +43,7 @@ is important that these files be put in a secure location on your server, where
they are not readable by regular users.
If you are using a **chain certificate**, see also chained certificate for SSL
in the JupyterHub [Troubleshooting FAQ](troubleshooting).
in the JupyterHub [Troubleshooting FAQ](faq:troubleshooting).
### Using letsencrypt
@@ -68,7 +68,7 @@ c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/example.com/fullchain.pem'
### If SSL termination happens outside of the Hub
In certain cases, for example, if the hub is running behind a reverse proxy, and
[SSL termination is being provided by NGINX](https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/),
[SSL termination is being provided by NGINX](https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-http/),
it is reasonable to run the hub without SSL.
To achieve this, remove `c.JupyterHub.ssl_key` and `c.JupyterHub.ssl_cert`

View File

@@ -1,3 +1,5 @@
(tutorial:services)=
# External services
When working with JupyterHub, a **Service** is defined as a process

View File

@@ -1,7 +1,7 @@
# Starting servers with the JupyterHub API
Sometimes, when working with applications such as [BinderHub](https://binderhub.readthedocs.io), it may be necessary to launch Jupyter-based services on behalf of your users.
Doing so can be achieved through JupyterHub's [REST API](using-jupyterhub-rest-api), which allows one to launch and manage servers on behalf of users through API calls instead of the JupyterHub UI.
Doing so can be achieved through JupyterHub's [REST API](howto:rest-api), which allows one to launch and manage servers on behalf of users through API calls instead of the JupyterHub UI.
This way, you can take advantage of other user/launch/lifecycle patterns that are not natively supported by the JupyterHub UI, all without the need to develop the server management features of JupyterHub Spawners and/or Authenticators.
This tutorial goes through working with the JupyterHub API to manage servers for users.
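The API calls in this tutorial are ordinary authenticated HTTP requests. As a sketch, starting a user's default server could be requested like this (the URL and token are placeholders; `requests` or `httpx` work equally well):

```python
import json
from urllib.request import Request, urlopen

api_url = "http://127.0.0.1:8081/hub/api"  # placeholder hub API URL
token = "YOUR_API_TOKEN"  # placeholder token with access to the user

# POST /users/:name/server starts the user's default server
req = Request(
    f"{api_url}/users/mal/server",
    method="POST",
    data=json.dumps({}).encode(),  # optional spawn options go here
    headers={"Authorization": f"Bearer {token}"},
)
# urlopen(req) would send it; a 2xx response means the spawn was accepted
```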

View File

@@ -159,11 +159,14 @@ which will have a JSON response:
'last_exchanged_at': None,
'code': 'U-eYLFT1lGstEqfMHpAIvTZ1MRjZ1Y1a-loGQ0K86to',
'accept_url': '/hub/accept-share?code=U-eYLFT1lGstEqfMHpAIvTZ1MRjZ1Y1a-loGQ0K86to',
'full_accept_url': 'https://hub.example.org/accept-share?code=U-eYLFT1lGstEqfMHpAIvTZ1MRjZ1Y1a-loGQ0K86to',
}
```
The most relevant fields here are `code`, which contains the code itself, and `accept_url`, which is the URL path of the page where another user can accept the share.
Note: it does not contain the _hostname_ of the hub, which JupyterHub often does not know.
If `public_url` configuration is defined, `full_accept_url` will be the full URL including the host.
Otherwise, it will be null.
Share codes are guaranteed to be url-safe, so no encoding is required.
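Because the code is URL-safe, it can go straight into a query string. A small sketch combining the response fields (the hostname is a placeholder, since the hub may not know it):

```python
from urllib.parse import urlencode, urljoin

hub_url = "https://hub.example.org"  # placeholder: your hub's public host
accept_path = "/hub/accept-share"    # the path from accept_url
code = "U-eYLFT1lGstEqfMHpAIvTZ1MRjZ1Y1a-loGQ0K86to"

# urlencode leaves the code unchanged, since share codes are URL-safe
query = urlencode({"code": code})
full_url = urljoin(hub_url, accept_path) + "?" + query
print(full_url)
```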

View File

@@ -7,5 +7,5 @@ import httpx
def get_client():
base_url = os.environ["JUPYTERHUB_API_URL"]
token = os.environ["JUPYTERHUB_API_TOKEN"]
headers = {"Authorization": "Bearer %s" % token}
headers = {"Authorization": f"Bearer {token}"}
return httpx.AsyncClient(base_url=base_url, headers=headers)

View File

@@ -38,7 +38,7 @@ def authenticated(f):
else:
# redirect to login url on failed auth
state = auth.generate_state(next_url=request.path)
response = make_response(redirect(auth.login_url + '&state=%s' % state))
response = make_response(redirect(auth.login_url + f'&state={state}'))
response.set_cookie(auth.state_cookie_name, state)
return response

View File

@@ -7,6 +7,7 @@ c.JupyterHub.services = [
'name': 'whoami-api',
'url': 'http://127.0.0.1:10101',
'command': [sys.executable, './whoami.py'],
'display': False,
},
{
'name': 'whoami-oauth',
@@ -36,3 +37,5 @@ c.JupyterHub.load_roles = [
c.JupyterHub.authenticator_class = 'dummy'
c.JupyterHub.spawner_class = 'simple'
c.JupyterHub.ip = '127.0.0.1' # let's just run on localhost while dummy auth is enabled
# default to home page, since we don't want to start servers for this demo
c.JupyterHub.default_url = "/hub/home"

View File

@@ -11,7 +11,7 @@ c = get_config() # noqa
class DemoFormSpawner(LocalProcessSpawner):
def _options_form_default(self):
default_env = "YOURNAME=%s\n" % self.user.name
default_env = f"YOURNAME={self.user.name}\n"
return f"""
<div class="form-group">
<label for="args">Extra notebook CLI arguments</label>

jsx/package-lock.json generated
View File

@@ -3687,11 +3687,12 @@
}
},
"node_modules/braces": {
"version": "3.0.2",
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/braces/-/braces-3.0.3.tgz",
"integrity": "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==",
"dev": true,
"license": "MIT",
"dependencies": {
"fill-range": "^7.0.1"
"fill-range": "^7.1.1"
},
"engines": {
"node": ">=8"
@@ -5268,9 +5269,10 @@
}
},
"node_modules/fill-range": {
"version": "7.0.1",
"version": "7.1.1",
"resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz",
"integrity": "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==",
"dev": true,
"license": "MIT",
"dependencies": {
"to-regex-range": "^5.0.1"
},
@@ -6230,8 +6232,9 @@
},
"node_modules/is-number": {
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz",
"integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=0.12.0"
}
@@ -9402,8 +9405,9 @@
},
"node_modules/to-regex-range": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz",
"integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"is-number": "^7.0.0"
},
@@ -10139,9 +10143,10 @@
}
},
"node_modules/ws": {
"version": "8.13.0",
"version": "8.17.1",
"resolved": "https://registry.npmjs.org/ws/-/ws-8.17.1.tgz",
"integrity": "sha512-6XQFvXTkbfUOZOKKILFG1PDK2NDQs4azKQl26T0YS5CxqWLgXajbPZ+h4gZekJyRqFU8pvnbAbbs/3TgRPy+GQ==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=10.0.0"
},

View File

@@ -14,8 +14,7 @@ const Groups = (props) => {
const dispatch = useDispatch();
const navigate = useNavigate();
const { setOffset, offset, handleLimit, limit, setPagination } =
usePaginationParams();
const { offset, handleLimit, limit, setPagination } = usePaginationParams();
const total = groups_page ? groups_page.total : undefined;
@@ -32,11 +31,22 @@ const Groups = (props) => {
});
};
// single callback to reload the page
// uses current state, or params can be specified if state
// should be updated _after_ load, e.g. offset
const loadPageData = (params) => {
params = params || {};
return updateGroups(
params.offset === undefined ? offset : params.offset,
params.limit === undefined ? limit : params.limit,
)
.then((data) => dispatchPageUpdate(data.items, data._pagination))
.catch((err) => setErrorAlert("Failed to update group list."));
};
useEffect(() => {
updateGroups(offset, limit).then((data) =>
dispatchPageUpdate(data.items, data._pagination),
);
}, [offset, limit]);
loadPageData();
}, [limit]);
if (!groups_data || !groups_page) {
return <div data-testid="no-show"></div>;
@@ -72,8 +82,10 @@ const Groups = (props) => {
limit={limit}
visible={groups_data.length}
total={total}
next={() => setOffset(offset + limit)}
prev={() => setOffset(offset - limit)}
next={() => loadPageData({ offset: offset + limit })}
prev={() =>
loadPageData({ offset: limit > offset ? 0 : offset - limit })
}
handleLimit={handleLimit}
/>
</Card.Body>

View File

@@ -112,8 +112,8 @@ test("Renders nothing if required data is not available", async () => {
expect(noShow).toBeVisible();
});
test("Interacting with PaginationFooter causes state update and refresh via useEffect call", async () => {
let upgradeGroupsSpy = mockAsync();
test("Interacting with PaginationFooter causes page refresh", async () => {
let updateGroupsSpy = mockAsync();
let setSearchParamsSpy = mockAsync();
let searchParams = new URLSearchParams({ limit: "2" });
useSearchParams.mockImplementation(() => [
@@ -125,11 +125,11 @@ test("Interacting with PaginationFooter causes state update and refresh via useE
]);
let _, setSearchParams;
await act(async () => {
render(groupsJsx(upgradeGroupsSpy));
render(groupsJsx(updateGroupsSpy));
[_, setSearchParams] = useSearchParams();
});
expect(upgradeGroupsSpy).toBeCalledWith(0, 2);
expect(updateGroupsSpy).toBeCalledWith(0, 2);
var lastState =
mockReducers.mock.results[mockReducers.mock.results.length - 1].value;
@@ -140,9 +140,7 @@ test("Interacting with PaginationFooter causes state update and refresh via useE
await act(async () => {
fireEvent.click(next);
});
expect(setSearchParamsSpy).toBeCalledWith("limit=2&offset=2");
// FIXME: mocked useSelector, state seem to prevent updateGroups from being called
// making the test environment not representative
// expect(callbackSpy).toHaveBeenCalledWith(2, 2);
expect(updateGroupsSpy).toBeCalledWith(2, 2);
// mocked updateGroups means callback after load doesn't fire
// expect(setSearchParamsSpy).toBeCalledWith("limit=2&offset=2");
});

View File

@@ -41,7 +41,7 @@ const ServerDashboard = (props) => {
let user_data = useSelector((state) => state.user_data);
const user_page = useSelector((state) => state.user_page);
const { setOffset, offset, setLimit, handleLimit, limit, setPagination } =
const { offset, setLimit, handleLimit, limit, setPagination } =
usePaginationParams();
const name_filter = searchParams.get("name_filter") || "";
@@ -123,26 +123,39 @@ const ServerDashboard = (props) => {
} else {
params.set("state", new_state_filter);
}
console.log("setting search params", params.toString());
return params;
});
};
// the callback to update the displayed user list
const updateUsersWithParams = () =>
updateUsers({
offset,
const updateUsersWithParams = (params) => {
if (params) {
if (params.offset !== undefined && params.offset < 0) {
params.offset = 0;
}
}
return updateUsers({
offset: offset,
limit,
name_filter,
sort,
state: state_filter,
...params,
});
};
useEffect(() => {
updateUsersWithParams()
// single callback to reload the page
// uses current state, or params can be specified if state
// should be updated _after_ load, e.g. offset
const loadPageData = (params) => {
return updateUsersWithParams(params)
.then((data) => dispatchPageUpdate(data.items, data._pagination))
.catch((err) => setErrorAlert("Failed to update user list."));
}, [offset, limit, name_filter, sort, state_filter]);
};
useEffect(() => {
loadPageData();
}, [limit, name_filter, sort, state_filter]);
if (!user_data || !user_page) {
return <div data-testid="no-show"></div>;
@@ -172,14 +185,7 @@ const ServerDashboard = (props) => {
action(user.name, server.name)
.then((res) => {
if (res.status < 300) {
updateUsersWithParams()
.then((data) => {
dispatchPageUpdate(data.items, data._pagination);
})
.catch(() => {
setIsDisabled(false);
setErrorAlert(`Failed to update users list.`);
});
loadPageData();
} else {
setErrorAlert(`Failed to ${name.toLowerCase()}.`);
setIsDisabled(false);
@@ -519,13 +525,7 @@ const ServerDashboard = (props) => {
return res;
})
.then((res) => {
updateUsersWithParams()
.then((data) => {
dispatchPageUpdate(data.items, data._pagination);
})
.catch(() =>
setErrorAlert(`Failed to update users list.`),
);
loadPageData();
return res;
})
.catch(() => setErrorAlert(`Failed to start servers.`));
@@ -556,13 +556,7 @@ const ServerDashboard = (props) => {
return res;
})
.then((res) => {
updateUsersWithParams()
.then((data) => {
dispatchPageUpdate(data.items, data._pagination);
})
.catch(() =>
setErrorAlert(`Failed to update users list.`),
);
loadPageData();
return res;
})
.catch(() => setErrorAlert(`Failed to stop servers.`));
@@ -590,8 +584,13 @@ const ServerDashboard = (props) => {
limit={limit}
visible={user_data.length}
total={total}
next={() => setOffset(offset + limit)}
prev={() => setOffset(offset - limit)}
// don't trigger via setOffset state change,
// which can cause infinite cycles.
// offset state will be set upon reply via setPagination
next={() => loadPageData({ offset: offset + limit })}
prev={() =>
loadPageData({ offset: limit > offset ? 0 : offset - limit })
}
handleLimit={handleLimit}
/>
<br></br>

View File

@@ -608,7 +608,7 @@ test("Search for user calls updateUsers with name filter", async () => {
// expect(mockUpdateUsers).toBeCalledWith(0, 100, "ab");
});
test("Interacting with PaginationFooter causes state update and refresh via useEffect call", async () => {
test("Interacting with PaginationFooter requests page update", async () => {
await act(async () => {
render(serverDashboardJsx());
});
@@ -625,14 +625,10 @@ test("Interacting with PaginationFooter causes state update and refresh via useE
jest.runAllTimers();
});
expect(searchParams.get("offset")).toEqual("2");
expect(searchParams.get("limit")).toEqual("2");
// FIXME: should call updateUsers, does in reality.
// tests don't reflect reality due to mocked state/useSelector
// unclear how to fix this.
// expect(callbackSpy.mock.calls).toHaveLength(2);
// expect(callbackSpy).toHaveBeenCalledWith(2, 2, "");
expect(mockUpdateUsers).toBeCalledWith({
...defaultUpdateUsersParams,
offset: 2,
});
});
test("Server delete button exists for named servers", async () => {

View File

@@ -3,7 +3,7 @@
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
# version_info updated by running `tbump`
version_info = (5, 0, 0, "b1", "")
version_info = (5, 1, 0, "", "")
# pep 440 version: no dot before beta/rc, but before .dev
# 0.1.0rc1
@@ -70,6 +70,4 @@ def _check_version(hub_version, singleuser_version, log):
singleuser_version,
)
else:
log.debug(
"jupyterhub and jupyterhub-singleuser both on version %s" % hub_version
)
log.debug(f"jupyterhub and jupyterhub-singleuser both on version {hub_version}")

View File

@@ -46,7 +46,7 @@ class TokenAPIHandler(APIHandler):
elif orm_token.service:
model = self.service_model(orm_token.service)
else:
self.log.warning("%s has no user or service. Deleting..." % orm_token)
self.log.warning(f"{orm_token} has no user or service. Deleting...")
self.db.delete(orm_token)
self.db.commit()
raise web.HTTPError(404)

View File

@@ -202,7 +202,22 @@ class APIHandler(BaseHandler):
'url': url_path_join(user.url, url_escape_path(spawner.name), '/'),
'user_options': spawner.user_options,
'progress_url': user.progress_url(spawner.name),
'full_url': None,
'full_progress_url': None,
}
# fill out full_url fields
public_url = self.settings.get("public_url")
if urlparse(model["url"]).netloc:
# if using subdomains, this is already a full URL
model["full_url"] = model["url"]
if public_url:
model["full_progress_url"] = urlunparse(
public_url._replace(path=model["progress_url"])
)
if not model["full_url"]:
# set if not defined already by subdomain
model["full_url"] = urlunparse(public_url._replace(path=model["url"]))
scope_filter = self.get_scope_filter('admin:server_state')
if scope_filter(spawner, kind='server'):
model['state'] = state
@@ -446,9 +461,9 @@ class APIHandler(BaseHandler):
name (str): name of the model, used in error messages
"""
if not isinstance(model, dict):
raise web.HTTPError(400, "Invalid JSON data: %r" % model)
raise web.HTTPError(400, f"Invalid JSON data: {model!r}")
if not set(model).issubset(set(model_types)):
raise web.HTTPError(400, "Invalid JSON keys: %r" % model)
raise web.HTTPError(400, f"Invalid JSON keys: {model!r}")
for key, value in model.items():
if not isinstance(value, model_types[key]):
raise web.HTTPError(

View File

@@ -19,7 +19,7 @@ class _GroupAPIHandler(APIHandler):
username = self.authenticator.normalize_username(username)
user = self.find_user(username)
if user is None:
raise web.HTTPError(400, "No such user: %s" % username)
raise web.HTTPError(400, f"No such user: {username}")
users.append(user.orm_user)
return users
@@ -87,7 +87,7 @@ class GroupListAPIHandler(_GroupAPIHandler):
for name in groupnames:
existing = orm.Group.find(self.db, name=name)
if existing is not None:
raise web.HTTPError(409, "Group %s already exists" % name)
raise web.HTTPError(409, f"Group {name} already exists")
usernames = model.get('users', [])
# check that users exist
@@ -124,7 +124,7 @@ class GroupAPIHandler(_GroupAPIHandler):
existing = orm.Group.find(self.db, name=group_name)
if existing is not None:
raise web.HTTPError(409, "Group %s already exists" % group_name)
raise web.HTTPError(409, f"Group {group_name} already exists")
usernames = model.get('users', [])
# check that users exist

View File

@@ -32,14 +32,14 @@ class ShutdownAPIHandler(APIHandler):
proxy = data['proxy']
if proxy not in {True, False}:
raise web.HTTPError(
400, "proxy must be true or false, got %r" % proxy
400, f"proxy must be true or false, got {proxy!r}"
)
app.cleanup_proxy = proxy
if 'servers' in data:
servers = data['servers']
if servers not in {True, False}:
raise web.HTTPError(
400, "servers must be true or false, got %r" % servers
400, f"servers must be true or false, got {servers!r}"
)
app.cleanup_servers = servers

View File

@@ -5,6 +5,7 @@
import json
import re
from typing import List, Optional
from urllib.parse import urlunparse
from pydantic import (
BaseModel,
@@ -80,7 +81,7 @@ class _ShareAPIHandler(APIHandler):
"""Truncated server model for use in shares
- Adds "user" field (just name for now)
- Limits fields to "name", "url", "ready"
- Limits fields to "name", "url", "full_url", "ready"
from standard server model
"""
user = self.users[spawner.user.id]
@@ -95,7 +96,7 @@ class _ShareAPIHandler(APIHandler):
}
}
# subset keys for sharing
for key in ["name", "url", "ready"]:
for key in ["name", "url", "full_url", "ready"]:
if key in full_model:
server_model[key] = full_model[key]
@@ -128,6 +129,12 @@ class _ShareAPIHandler(APIHandler):
model["accept_url"] = url_concat(
self.hub.base_url + "accept-share", {"code": code}
)
model["full_accept_url"] = None
public_url = self.settings.get("public_url")
if public_url:
model["full_accept_url"] = urlunparse(
public_url._replace(path=model["accept_url"])
)
return model
def _init_share_query(self, kind="share"):

View File

@@ -5,9 +5,14 @@
import asyncio
import inspect
import json
import sys
from datetime import timedelta, timezone
from async_generator import aclosing
if sys.version_info >= (3, 10):
from contextlib import aclosing
else:
from async_generator import aclosing
from dateutil.parser import parse as parse_date
from sqlalchemy import func, or_
from sqlalchemy.orm import joinedload, raiseload, selectinload # noqa
@@ -156,7 +161,7 @@ class UserListAPIHandler(APIHandler):
.having(func.count(orm.Server.id) == 0)
)
elif state_filter:
raise web.HTTPError(400, "Unrecognized state filter: %r" % state_filter)
raise web.HTTPError(400, f"Unrecognized state filter: {state_filter!r}")
# apply eager load options
query = query.options(
@@ -217,7 +222,7 @@ class UserListAPIHandler(APIHandler):
data = user_list
self.write(json.dumps(data))
# if testing with raiselaod above, need expire_all to avoid affecting other operations
# if testing with raiseload above, need expire_all to avoid affecting other operations
# self.db.expire_all()
@needs_scope('admin:users')
@@ -231,6 +236,8 @@ class UserListAPIHandler(APIHandler):
# admin is set for all users
# to create admin and non-admin users requires at least two API requests
admin = data.get('admin', False)
if admin and not self.current_user.admin:
raise web.HTTPError(403, "Only admins can grant admin permissions")
to_create = []
invalid_names = []
@@ -241,15 +248,15 @@ class UserListAPIHandler(APIHandler):
continue
user = self.find_user(name)
if user is not None:
self.log.warning("User %s already exists" % name)
self.log.warning(f"User {name} already exists")
else:
to_create.append(name)
if invalid_names:
if len(invalid_names) == 1:
msg = "Invalid username: %s" % invalid_names[0]
msg = f"Invalid username: {invalid_names[0]}"
else:
msg = "Invalid usernames: %s" % ', '.join(invalid_names)
msg = "Invalid usernames: {}".format(', '.join(invalid_names))
raise web.HTTPError(400, msg)
if not to_create:
@@ -265,7 +272,7 @@ class UserListAPIHandler(APIHandler):
try:
await maybe_future(self.authenticator.add_user(user))
except Exception as e:
self.log.error("Failed to create user: %s" % name, exc_info=True)
self.log.error(f"Failed to create user: {name}", exc_info=True)
self.users.delete(user)
raise web.HTTPError(400, f"Failed to create user {name}: {e}")
else:
@@ -302,23 +309,27 @@ class UserAPIHandler(APIHandler):
data = self.get_json_body()
user = self.find_user(user_name)
if user is not None:
raise web.HTTPError(409, "User %s already exists" % user_name)
raise web.HTTPError(409, f"User {user_name} already exists")
user = self.user_from_username(user_name)
if data:
self._check_user_model(data)
if 'admin' in data:
user.admin = data['admin']
assign_default_roles(self.db, entity=user)
if data.get('admin', False) and not self.current_user.admin:
raise web.HTTPError(403, "Only admins can grant admin permissions")
# create the user
user = self.user_from_username(user_name)
if data and data.get('admin', False):
user.admin = data['admin']
assign_default_roles(self.db, entity=user)
self.db.commit()
try:
await maybe_future(self.authenticator.add_user(user))
except Exception:
self.log.error("Failed to create user: %s" % user_name, exc_info=True)
self.log.error(f"Failed to create user: {user_name}", exc_info=True)
# remove from registry
self.users.delete(user)
raise web.HTTPError(400, "Failed to create user: %s" % user_name)
raise web.HTTPError(400, f"Failed to create user: {user_name}")
self.write(json.dumps(self.user_model(user)))
self.set_status(201)
@@ -333,15 +344,14 @@ class UserAPIHandler(APIHandler):
if user.spawner._stop_pending:
raise web.HTTPError(
400,
"%s's server is in the process of stopping, please wait." % user_name,
f"{user_name}'s server is in the process of stopping, please wait.",
)
if user.running:
await self.stop_single_user(user)
if user.spawner._stop_pending:
raise web.HTTPError(
400,
"%s's server is in the process of stopping, please wait."
% user_name,
f"{user_name}'s server is in the process of stopping, please wait.",
)
await maybe_future(self.authenticator.delete_user(user))
@@ -365,9 +375,21 @@ class UserAPIHandler(APIHandler):
if self.find_user(data['name']):
raise web.HTTPError(
400,
"User %s already exists, username must be unique" % data['name'],
"User {} already exists, username must be unique".format(
data['name']
),
)
if not self.current_user.admin:
if user.admin:
raise web.HTTPError(403, "Only admins can modify other admins")
if 'admin' in data and data['admin']:
raise web.HTTPError(403, "Only admins can grant admin permissions")
for key, value in data.items():
value_s = "..." if key == "auth_state" else repr(value)
self.log.info(
f"{self.current_user.name} setting {key}={value_s} for {user.name}"
)
if key == 'auth_state':
await user.save_auth_state(value)
else:
@@ -397,7 +419,7 @@ class UserTokenListAPIHandler(APIHandler):
"""Get tokens for a given user"""
user = self.find_user(user_name)
if not user:
raise web.HTTPError(404, "No such user: %s" % user_name)
raise web.HTTPError(404, f"No such user: {user_name}")
now = utcnow(with_tz=False)
api_tokens = []
@@ -483,10 +505,29 @@ class UserTokenListAPIHandler(APIHandler):
400, f"token {key} must be null or a list of strings, not {value!r}"
)
expires_in = body.get('expires_in', None)
if not (expires_in is None or isinstance(expires_in, int)):
raise web.HTTPError(
400,
f"token expires_in must be null or integer, not {expires_in!r}",
)
expires_in_max = self.settings["token_expires_in_max_seconds"]
if expires_in_max:
# validate expires_in against limit
if expires_in is None:
# expiration unspecified, use max value
# (default before max limit was introduced was 'never', this is closest equivalent)
expires_in = expires_in_max
elif expires_in > expires_in_max:
raise web.HTTPError(
400,
f"token expires_in: {expires_in} must not exceed {expires_in_max}",
)
try:
api_token = user.new_api_token(
note=note,
expires_in=body.get('expires_in', None),
expires_in=expires_in,
roles=token_roles,
scopes=token_scopes,
)
@@ -619,7 +660,7 @@ class UserServerAPIHandler(APIHandler):
finally:
spawner._spawn_pending = False
if state is None:
raise web.HTTPError(400, "%s is already running" % spawner._log_name)
raise web.HTTPError(400, f"{spawner._log_name} is already running")
options = self.get_json_body()
await self.spawn_single_user(user, server_name, options=options)
@@ -669,14 +710,22 @@ class UserServerAPIHandler(APIHandler):
asyncio.ensure_future(_remove_spawner(spawner._stop_future))
return
if spawner.pending:
raise web.HTTPError(
400,
f"{spawner._log_name} is pending {spawner.pending}, please wait",
)
stop_future = None
if spawner.ready:
if spawner.pending:
# we are interrupting a pending start
# hopefully nothing gets leftover
self.log.warning(
f"Interrupting spawner {spawner._log_name}, pending {spawner.pending}"
)
spawn_future = spawner._spawn_future
if spawn_future:
spawn_future.cancel()
# Give cancel a chance to resolve?
# not sure what we would wait for here,
await asyncio.sleep(1)
stop_future = await self.stop_single_user(user, server_name)
elif spawner.ready:
# include notify, so that a server that died is noticed immediately
status = await spawner.poll_and_notify()
if status is None:
@@ -812,7 +861,9 @@ class SpawnProgressAPIHandler(APIHandler):
# not pending, no progress to fetch
# check if spawner has just failed
f = spawn_future
if f and f.done() and f.exception():
if f and f.cancelled():
failed_event['message'] = "Spawn cancelled"
elif f and f.done() and f.exception():
exc = f.exception()
message = getattr(exc, "jupyterhub_message", str(exc))
failed_event['message'] = f"Spawn failed: {message}"
@@ -851,7 +902,9 @@ class SpawnProgressAPIHandler(APIHandler):
else:
# what happened? Maybe spawn failed?
f = spawn_future
if f and f.done() and f.exception():
if f and f.cancelled():
failed_event['message'] = "Spawn cancelled"
elif f and f.done() and f.exception():
exc = f.exception()
message = getattr(exc, "jupyterhub_message", str(exc))
failed_event['message'] = f"Spawn failed: {message}"

View File

@@ -21,6 +21,7 @@ from datetime import datetime, timedelta, timezone
from functools import partial
from getpass import getuser
from operator import itemgetter
from pathlib import Path
from textwrap import dedent
from typing import Optional
from urllib.parse import unquote, urlparse, urlunparse
@@ -29,7 +30,7 @@ import tornado.httpserver
import tornado.options
from dateutil.parser import parse as parse_date
from jinja2 import ChoiceLoader, Environment, FileSystemLoader, PrefixLoader
from jupyter_telemetry.eventlog import EventLog
from jupyter_events.logger import EventLogger
from sqlalchemy.exc import OperationalError, SQLAlchemyError
from sqlalchemy.orm import joinedload
from tornado import gen, web
@@ -213,7 +214,7 @@ class NewToken(Application):
ThreadPoolExecutor(1).submit(init_roles_and_users).result()
user = orm.User.find(hub.db, self.name)
if user is None:
print("No such user: %s" % self.name, file=sys.stderr)
print(f"No such user: {self.name}", file=sys.stderr)
self.exit(1)
token = user.new_api_token(note="command-line generated")
print(token)
@@ -463,6 +464,26 @@ class JupyterHub(Application):
# convert cookie max age days to seconds
return int(self.cookie_max_age_days * 24 * 3600)
token_expires_in_max_seconds = Integer(
0,
config=True,
help="""
Set the maximum expiration (in seconds) of tokens created via the API.
Set to any positive value to disallow creation of tokens with no expiration.
0 (default) = no limit.
Does not affect:
- Server API tokens ($JUPYTERHUB_API_TOKEN is tied to lifetime of the server)
- Tokens issued during oauth (use `oauth_token_expires_in`)
- Tokens created via the API before configuring this limit
.. versionadded:: 5.1
""",
)
redirect_to_server = Bool(
True, help="Redirect user to server (if running), instead of control panel."
).tag(config=True)
@@ -1475,7 +1496,7 @@ class JupyterHub(Application):
new = change['new']
if '://' not in new:
# assume sqlite, if given as a plain filename
self.db_url = 'sqlite:///%s' % new
self.db_url = f'sqlite:///{new}'
db_kwargs = Dict(
help="""Include any kwargs to pass to the database connection.
@@ -1778,10 +1799,10 @@ class JupyterHub(Application):
[
# add trailing / to ``/user|services/:name`
(
r"%s(user|services)/([^/]+)" % self.base_url,
rf"{self.base_url}(user|services)/([^/]+)",
handlers.AddSlashHandler,
),
(r"(?!%s).*" % self.hub_prefix, handlers.PrefixRedirectHandler),
(rf"(?!{self.hub_prefix}).*", handlers.PrefixRedirectHandler),
(r'(.*)', handlers.Template404),
]
)
@@ -1910,7 +1931,7 @@ class JupyterHub(Application):
default_alt_names = ["IP:127.0.0.1", "DNS:localhost"]
if self.subdomain_host:
default_alt_names.append(
"DNS:%s" % urlparse(self.subdomain_host).hostname
f"DNS:{urlparse(self.subdomain_host).hostname}"
)
# The signed certs used by hub-internal components
try:
@@ -2095,7 +2116,7 @@ class JupyterHub(Application):
ck.check_available()
except Exception as e:
self.exit(
"auth_state is enabled, but encryption is not available: %s" % e
f"auth_state is enabled, but encryption is not available: {e}"
)
# give the authenticator a chance to check its own config
@@ -2114,7 +2135,7 @@ class JupyterHub(Application):
self.authenticator.admin_users = set(admin_users) # force normalization
for username in admin_users:
if not self.authenticator.validate_username(username):
raise ValueError("username %r is not valid" % username)
raise ValueError(f"username {username!r} is not valid")
new_users = []
@@ -2138,7 +2159,7 @@ class JupyterHub(Application):
self.authenticator.allowed_users = set(allowed_users) # force normalization
for username in allowed_users:
if not self.authenticator.validate_username(username):
raise ValueError("username %r is not valid" % username)
raise ValueError(f"username {username!r} is not valid")
if self.authenticator.allowed_users and self.authenticator.admin_users:
# make sure admin users are in the allowed_users set, if defined,
@@ -2206,7 +2227,7 @@ class JupyterHub(Application):
user = orm.User.find(self.db, name=username)
if user is None:
if not self.authenticator.validate_username(username):
raise ValueError("Username %r is not valid" % username)
raise ValueError(f"Username {username!r} is not valid")
self.log.info(f"Creating user {username} found in {hint}")
user = orm.User(name=username)
self.db.add(user)
@@ -2317,9 +2338,7 @@ class JupyterHub(Application):
old_role = orm.Role.find(self.db, name=role_name)
if old_role:
if not set(role_spec.get('scopes', [])).issubset(old_role.scopes):
self.log.warning(
"Role %s has obtained extra permissions" % role_name
)
self.log.warning(f"Role {role_name} has obtained extra permissions")
roles_with_new_permissions.append(role_name)
# make sure we load any default roles not overridden
@@ -2583,14 +2602,14 @@ class JupyterHub(Application):
elif kind == 'service':
Class = orm.Service
else:
raise ValueError("kind must be user or service, not %r" % kind)
raise ValueError(f"kind must be user or service, not {kind!r}")
db = self.db
for token, name in token_dict.items():
if kind == 'user':
name = self.authenticator.normalize_username(name)
if not self.authenticator.validate_username(name):
raise ValueError("Token user name %r is not valid" % name)
raise ValueError(f"Token user name {name!r} is not valid")
if kind == 'service':
if not any(service_name == name for service_name in self._service_map):
self.log.warning(
@@ -2790,7 +2809,7 @@ class JupyterHub(Application):
for key, value in spec.items():
trait = traits.get(key)
if trait is None:
raise AttributeError("No such service field: %s" % key)
raise AttributeError(f"No such service field: {key}")
setattr(service, key, value)
# also set the value on the orm object
# unless it's marked as not in the db
@@ -2863,7 +2882,7 @@ class JupyterHub(Application):
client_id=service.oauth_client_id,
client_secret=service.api_token,
redirect_uri=service.oauth_redirect_uri,
description="JupyterHub service %s" % service.name,
description=f"JupyterHub service {service.name}",
)
service.orm.oauth_client = oauth_client
# add access-scopes, derived from OAuthClient itself
@@ -3193,6 +3212,7 @@ class JupyterHub(Application):
static_path=os.path.join(self.data_files_path, 'static'),
static_url_prefix=url_path_join(self.hub.base_url, 'static/'),
static_handler_class=CacheControlStaticFilesHandler,
token_expires_in_max_seconds=self.token_expires_in_max_seconds,
subdomain_hook=self.subdomain_hook,
template_path=self.template_paths,
template_vars=self.template_vars,
@@ -3251,13 +3271,10 @@ class JupyterHub(Application):
def init_eventlog(self):
"""Set up the event logging system."""
self.eventlog = EventLog(parent=self)
self.eventlog = EventLogger(parent=self)
for dirname, _, files in os.walk(os.path.join(here, 'event-schemas')):
for file in files:
if not file.endswith('.yaml'):
continue
self.eventlog.register_schema_file(os.path.join(dirname, file))
for schema in (Path(here) / "event-schemas").glob("**/*.yaml"):
self.eventlog.register_event_schema(schema)
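The `os.walk` loop is replaced by a single recursive `Path.glob`; both traversals discover the same `.yaml` files. A small self-contained sketch of the equivalence:

```python
import os
from pathlib import Path
from tempfile import TemporaryDirectory

def find_schemas_walk(root):
    # old approach: os.walk with an extension filter
    found = []
    for dirname, _, files in os.walk(root):
        for file in files:
            if file.endswith(".yaml"):
                found.append(os.path.join(dirname, file))
    return sorted(found)

def find_schemas_glob(root):
    # new approach: one recursive glob
    return sorted(str(p) for p in Path(root).glob("**/*.yaml"))

with TemporaryDirectory() as root:
    (Path(root) / "v1").mkdir()
    (Path(root) / "v1" / "server-action.yaml").write_text("version: 1\n")
    (Path(root) / "README.md").write_text("not a schema\n")
    assert find_schemas_walk(root) == find_schemas_glob(root)
```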
def write_pid_file(self):
pid = os.getpid()
@@ -3456,7 +3473,7 @@ class JupyterHub(Application):
answer = ''
def ask():
prompt = "Overwrite %s with default config? [y/N]" % self.config_file
prompt = f"Overwrite {self.config_file} with default config? [y/N]"
try:
return input(prompt).lower() or 'n'
except KeyboardInterrupt:
@@ -3473,7 +3490,7 @@ class JupyterHub(Application):
config_text = self.generate_config_file()
if isinstance(config_text, bytes):
config_text = config_text.decode('utf8')
print("Writing default config to: %s" % self.config_file)
print(f"Writing default config to: {self.config_file}")
with open(self.config_file, mode='w') as f:
f.write(config_text)

View File

@@ -102,18 +102,37 @@ class Authenticator(LoggingConfigurable):
admin_users = Set(
help="""
Set of users that will have admin rights on this JupyterHub.
Set of users that will be granted admin rights on this JupyterHub.
Note: As of JupyterHub 2.0,
full admin rights should not be required,
and more precise permissions can be managed via roles.
Note:
Admin users have extra privileges:
- Use the admin panel to see the list of users logged in
- Add / remove users in some authenticators
- Restart / halt the hub
- Start / stop users' single-user servers
- Can access each individual user's single-user server (if configured)
As of JupyterHub 2.0,
full admin rights should not be required,
and more precise permissions can be managed via roles.
Caution:
Adding users to `admin_users` can only *grant* admin rights;
removing a username from the `admin_users` set **DOES NOT** remove admin rights previously granted.
For an authoritative, restricted set of admins,
assign explicit membership of the `admin` *role*::
c.JupyterHub.load_roles = [
{
"name": "admin",
"users": ["admin1", "..."],
}
]
Admin users can take every possible action on behalf of all users,
for example:
- Use the admin panel to see the list of users logged in
- Add / remove users in some authenticators
- Restart / halt the hub
- Start / stop users' single-user servers
- Can access each individual user's single-user server
Admin access should be treated the same way root access is.
@@ -332,7 +351,7 @@ class Authenticator(LoggingConfigurable):
if short_names:
sorted_names = sorted(short_names)
single = ''.join(sorted_names)
string_set_typo = "set('%s')" % single
string_set_typo = f"set('{single}')"
self.log.warning(
"Allowed set contains single-character names: %s; did you mean set([%r]) instead of %s?",
sorted_names[:8],
@@ -663,7 +682,7 @@ class Authenticator(LoggingConfigurable):
return
if isinstance(authenticated, dict):
if 'name' not in authenticated:
raise ValueError("user missing a name: %r" % authenticated)
raise ValueError(f"user missing a name: {authenticated!r}")
else:
authenticated = {'name': authenticated}
authenticated.setdefault('auth_state', None)
@@ -850,7 +869,7 @@ class Authenticator(LoggingConfigurable):
user (User): The User wrapper object
"""
if not self.validate_username(user.name):
raise ValueError("Invalid username: %s" % user.name)
raise ValueError(f"Invalid username: {user.name}")
if self.allow_existing_users and not self.allow_all:
self.allowed_users.add(user.name)
@@ -1115,13 +1134,16 @@ class LocalAuthenticator(Authenticator):
"""
if not self.allowed_groups:
return False
user_group_gids = set(
self._getgrouplist(username, self._getpwnam(username).pw_gid)
)
for grnam in self.allowed_groups:
try:
group = self._getgrnam(grnam)
except KeyError:
self.log.error('No such group: [%s]' % grnam)
self.log.error(f'No such group: [{grnam}]')
continue
if username in group.gr_mem:
if group.gr_gid in user_group_gids:
return True
return False
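The change above switches from scanning `group.gr_mem` (which omits a user's *primary* group) to comparing gids from `getgrouplist`, which covers both primary and supplementary membership. A standalone sketch of the new check, with the system lookups injectable so it can be exercised without real accounts (function name and injection style are illustrative, not JupyterHub's API):

```python
import grp
import os
import pwd

def user_in_allowed_groups(
    username,
    allowed_groups,
    *,
    getpwnam=pwd.getpwnam,
    getgrnam=grp.getgrnam,
    getgrouplist=os.getgrouplist,
):
    """Return True if username is in any allowed group (primary or supplementary)."""
    user_gids = set(getgrouplist(username, getpwnam(username).pw_gid))
    for name in allowed_groups:
        try:
            group = getgrnam(name)
        except KeyError:
            continue  # unknown group name: skip it, as the Authenticator does
        if group.gr_gid in user_gids:
            return True
    return False
```

Injecting stubs makes the gid-set logic easy to verify in isolation.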
@@ -1190,7 +1212,7 @@ class LocalAuthenticator(Authenticator):
uid = self.uids[name]
cmd += ['--uid', '%d' % uid]
except KeyError:
self.log.debug("No UID for user %s" % name)
self.log.debug(f"No UID for user {name}")
cmd += [name]
self.log.info("Creating user: %s", ' '.join(map(shlex.quote, cmd)))
p = Popen(cmd, stdout=PIPE, stderr=STDOUT)

View File

@@ -33,7 +33,7 @@ class CryptographyUnavailable(EncryptionUnavailable):
class NoEncryptionKeys(EncryptionUnavailable):
def __str__(self):
return "Encryption keys must be specified in %s env" % KEY_ENV
return f"Encryption keys must be specified in {KEY_ENV} env"
def _validate_key(key):

View File

@@ -95,7 +95,7 @@ def backup_db_file(db_file, log=None):
backup_db_file = f'{db_file}.{timestamp}.{i}'
#
if os.path.exists(backup_db_file):
raise OSError("backup db file already exists: %s" % backup_db_file)
raise OSError(f"backup db file already exists: {backup_db_file}")
if log:
log.info("Backing up %s => %s", db_file, backup_db_file)
shutil.copy(db_file, backup_db_file)
@@ -167,7 +167,7 @@ def main(args=None):
# to subcommands
choices = ['shell', 'alembic']
if not args or args[0] not in choices:
print("Select a command from: %s" % ', '.join(choices))
print("Select a command from: {}".format(', '.join(choices)))
return 1
cmd, args = args[0], args[1:]

View File

@@ -1,4 +1,4 @@
"$id": hub.jupyter.org/server-action
"$id": https://schema.jupyter.org/jupyterhub/events/server-action
version: 1
title: JupyterHub server events
description: |

View File

@@ -1128,10 +1128,13 @@ class BaseHandler(RequestHandler):
SERVER_SPAWN_DURATION_SECONDS.labels(
status=ServerSpawnStatus.success
).observe(time.perf_counter() - spawn_start_time)
self.eventlog.record_event(
'hub.jupyter.org/server-action',
1,
{'action': 'start', 'username': user.name, 'servername': server_name},
self.eventlog.emit(
schema_id='https://schema.jupyter.org/jupyterhub/events/server-action',
data={
'action': 'start',
'username': user.name,
'servername': server_name,
},
)
proxy_add_start_time = time.perf_counter()
spawner._proxy_pending = True
@@ -1334,10 +1337,9 @@ class BaseHandler(RequestHandler):
SERVER_STOP_DURATION_SECONDS.labels(
status=ServerStopStatus.success
).observe(toc - tic)
self.eventlog.record_event(
'hub.jupyter.org/server-action',
1,
{
self.eventlog.emit(
schema_id='https://schema.jupyter.org/jupyterhub/events/server-action',
data={
'action': 'stop',
'username': user.name,
'servername': server_name,
@@ -1512,7 +1514,7 @@ class BaseHandler(RequestHandler):
# so we run it sync here, instead of making a sync version of render_template
try:
html = self.render_template('%s.html' % status_code, sync=True, **ns)
html = self.render_template(f'{status_code}.html', sync=True, **ns)
except TemplateNotFound:
self.log.debug("Using default error template for %d", status_code)
try:
@@ -1537,6 +1539,16 @@ class PrefixRedirectHandler(BaseHandler):
"""Redirect anything outside a prefix inside.
Redirects /foo to /prefix/foo, etc.
Redirect specifies hub domain when public_url or subdomains are enabled.
Mainly handles requests for non-running servers, e.g. to
/user/tree/ -> /hub/user/tree/
UserUrlHandler will handle the request after redirect.
Don't do anything but redirect here, because cookies etc. won't be available
to this request, since it is not on the hub's path (and possibly not on its domain).
"""
def get(self):
@@ -1554,7 +1566,19 @@ class PrefixRedirectHandler(BaseHandler):
# default / -> /hub/ redirect
# avoiding extra hop through /hub
path = '/'
self.redirect(url_path_join(self.hub.base_url, path), permanent=False)
redirect_url = redirect_path = url_path_join(self.hub.base_url, path)
# when using subdomains,
# make sure we redirect `user.domain/user/foo` -> `hub.domain/hub/user/foo/...`
# so that the Hub handles it properly with cookies and all
public_url = self.settings.get("public_url")
subdomain_host = self.settings.get("subdomain_host")
if public_url:
redirect_url = urlunparse(public_url._replace(path=redirect_path))
elif subdomain_host:
redirect_url = url_path_join(subdomain_host, redirect_path)
self.redirect(redirect_url, permanent=False)
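The redirect construction above only swaps the path on the configured public URL: `urlparse` results are named tuples, so `_replace` plus `urlunparse` rebuilds the URL with the new path. A sketch of the priority logic (`hub.example.com` is a made-up host, and `build_redirect` is a hypothetical helper, not the handler's actual structure):

```python
from urllib.parse import urlparse, urlunparse

def build_redirect(redirect_path, public_url=None, subdomain_host=None):
    # mirror the handler's priority: public_url wins, then subdomain_host
    if public_url:
        return urlunparse(urlparse(public_url)._replace(path=redirect_path))
    if subdomain_host:
        return subdomain_host.rstrip("/") + redirect_path
    return redirect_path

print(build_redirect("/hub/user/foo", public_url="https://hub.example.com"))
# → https://hub.example.com/hub/user/foo
```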
class UserUrlHandler(BaseHandler):

View File

@@ -236,12 +236,12 @@ class SpawnHandler(BaseHandler):
if for_user != user.name:
user = self.find_user(for_user)
if user is None:
raise web.HTTPError(404, "No such user: %s" % for_user)
raise web.HTTPError(404, f"No such user: {for_user}")
spawner = user.get_spawner(server_name, replace_failed=True)
if spawner.ready:
raise web.HTTPError(400, "%s is already running" % (spawner._log_name))
raise web.HTTPError(400, f"{spawner._log_name} is already running")
elif spawner.pending:
raise web.HTTPError(
400, f"{spawner._log_name} is pending {spawner.pending}"
@@ -251,7 +251,7 @@ class SpawnHandler(BaseHandler):
for key, byte_list in self.request.body_arguments.items():
form_options[key] = [bs.decode('utf8') for bs in byte_list]
for key, byte_list in self.request.files.items():
form_options["%s_file" % key] = byte_list
form_options[f"{key}_file"] = byte_list
try:
self.log.debug(
"Triggering spawn with supplied form options for %s", spawner._log_name
@@ -345,7 +345,7 @@ class SpawnPendingHandler(BaseHandler):
if for_user != current_user.name:
user = self.find_user(for_user)
if user is None:
raise web.HTTPError(404, "No such user: %s" % for_user)
raise web.HTTPError(404, f"No such user: {for_user}")
if server_name and server_name not in user.spawners:
raise web.HTTPError(404, f"{user.name} has no such server {server_name}")
@@ -542,11 +542,50 @@ class TokenPageHandler(BaseHandler):
oauth_clients = sorted(oauth_clients, key=sort_key, reverse=True)
auth_state = await self.current_user.get_auth_state()
expires_in_max = self.settings["token_expires_in_max_seconds"]
options = [
(3600, "1 Hour"),
(86400, "1 Day"),
(7 * 86400, "1 Week"),
(30 * 86400, "1 Month"),
(365 * 86400, "1 Year"),
]
if expires_in_max:
# omit items that exceed the limit
options = [
(seconds, label)
for (seconds, label) in options
if seconds <= expires_in_max
]
if expires_in_max not in (seconds for (seconds, label) in options):
# max not exactly in list, add it
# this also ensures options_list is never empty
max_hours = expires_in_max / 3600
max_days = max_hours / 24
if max_days < 3:
max_label = f"{max_hours:.0f} hours"
else:
# this could be a lot of days, but no need to get fancy
max_label = f"{max_days:.0f} days"
options.append(("", f"Max ({max_label})"))
else:
options.append(("", "Never"))
options_html_elements = [
f'<option value="{value}">{label}</option>' for value, label in options
]
# make the last item selected
options_html_elements[-1] = options_html_elements[-1].replace(
"<option ", '<option selected="selected"'
)
expires_in_options_html = "\n".join(options_html_elements)
html = await self.render_template(
'token.html',
api_tokens=api_tokens,
oauth_clients=oauth_clients,
auth_state=auth_state,
token_expires_in_options_html=expires_in_options_html,
token_expires_in_max_seconds=expires_in_max,
)
self.finish(html)
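The option-filtering logic added above can be isolated for illustration; a sketch that reproduces its behavior (same thresholds and labels, hypothetical helper name):

```python
def expiry_options(expires_in_max):
    """Build (seconds, label) choices for token expiry, capped at expires_in_max."""
    options = [
        (3600, "1 Hour"),
        (86400, "1 Day"),
        (7 * 86400, "1 Week"),
        (30 * 86400, "1 Month"),
        (365 * 86400, "1 Year"),
    ]
    if expires_in_max:
        # omit items that exceed the limit
        options = [(s, label) for (s, label) in options if s <= expires_in_max]
        if expires_in_max not in (s for (s, _) in options):
            # max not exactly in the list: append it, so options is never empty
            max_hours = expires_in_max / 3600
            max_days = max_hours / 24
            label = f"{max_hours:.0f} hours" if max_days < 3 else f"{max_days:.0f} days"
            options.append(("", f"Max ({label})"))
    else:
        options.append(("", "Never"))
    return options
```

The last entry is the one marked selected in the rendered `<select>`.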
@@ -642,7 +681,7 @@ class ProxyErrorHandler(BaseHandler):
message_html = ' '.join(
[
"Your server appears to be down.",
"Try restarting it <a href='%s'>from the hub</a>" % hub_home,
f"Try restarting it <a href='{hub_home}'>from the hub</a>",
]
)
ns = dict(
@@ -655,7 +694,7 @@ class ProxyErrorHandler(BaseHandler):
self.set_header('Content-Type', 'text/html')
# render the template
try:
html = await self.render_template('%s.html' % status_code, **ns)
html = await self.render_template(f'{status_code}.html', **ns)
except TemplateNotFound:
self.log.debug("Using default error template for %d", status_code)
html = await self.render_template('error.html', **ns)

View File

@@ -22,6 +22,7 @@ them manually here.
added ``jupyterhub_`` prefix to metric names.
"""
import asyncio
import os
import time
from datetime import timedelta
@@ -236,17 +237,17 @@ EVENT_LOOP_INTERVAL_SECONDS = Histogram(
'event_loop_interval_seconds',
'Distribution of measured event loop intervals',
namespace=metrics_prefix,
# Increase resolution to 5ms below 50ms
# don't measure below 50ms, our default
# Increase resolution to 5ms below 75ms
# because this is where we are most sensitive.
# No need to have buckets below 25, since we only measure every 20ms.
# No need to have buckets below 50, since we only measure every 50ms.
buckets=[
# 5ms from 25-50ms
25e-3,
30e-3,
35e-3,
40e-3,
45e-3,
# 5ms from 50-75ms
50e-3,
55e-3,
60e-3,
65e-3,
70e-3,
# from here, default prometheus buckets
75e-3,
0.1,
@@ -323,19 +324,20 @@ class PeriodicMetricsCollector(LoggingConfigurable):
""",
)
event_loop_interval_resolution = Float(
0.02,
0.05,
config=True,
help="""
Interval (in seconds) on which to measure the event loop interval.
This is the _sensitivity_ of the event_loop_interval metric.
This is the _sensitivity_ of the `event_loop_interval` metric.
Setting it too low (e.g. below 20ms) can end up slowing down the whole event loop
by measuring too often,
while setting it too high (e.g. above a few seconds) may limit its resolution and usefulness.
The Prometheus Histogram populated by this metric
doesn't resolve differences below 25ms,
so setting this below ~20ms won't result in increased resolution of the histogram metric,
except for the average value, computed by:
except for the average value, computed by::
event_loop_interval_seconds_sum / event_loop_interval_seconds_count
""",
)
@@ -346,7 +348,7 @@ class PeriodicMetricsCollector(LoggingConfigurable):
)
# internal state
_last_tick = Float()
_tasks = Dict()
_periodic_callbacks = Dict()
db = Any(help="SQLAlchemy db session to use for performing queries")
@@ -371,18 +373,39 @@ class PeriodicMetricsCollector(LoggingConfigurable):
self.log.info(f'Found {value} active users in the last {period}')
ACTIVE_USERS.labels(period=period.value).set(value)
def _event_loop_tick(self):
"""Measure a single tick of the event loop
async def _measure_event_loop_interval(self):
"""Measure the event loop responsiveness
This measures the time since the last tick
A single long-running coroutine is used because PeriodicCallback is too expensive
for measuring small intervals.
"""
now = time.perf_counter()
tick_duration = now - self._last_tick
self._last_tick = now
EVENT_LOOP_INTERVAL_SECONDS.observe(tick_duration)
if tick_duration >= self.event_loop_interval_log_threshold:
# warn about slow ticks
self.log.warning("Event loop was unresponsive for %.2fs!", tick_duration)
tick = time.perf_counter
last_tick = tick()
resolution = self.event_loop_interval_resolution
lower_bound = 2 * resolution
# This loop runs _very_ often, so try to keep it efficient.
# Even excess comparisons and assignments have a measurable effect on overall cpu usage.
while True:
await asyncio.sleep(resolution)
now = tick()
# measure the _difference_ between the sleep time and the measured time
# the event loop blocked for somewhere in the range [delay, delay + resolution]
tick_duration = now - last_tick
last_tick = now
if tick_duration < lower_bound:
# don't report numbers less than measurement resolution,
# we don't really have that information
delay = resolution
else:
delay = tick_duration - resolution
if delay >= self.event_loop_interval_log_threshold:
# warn about slow ticks
self.log.warning(
"Event loop was unresponsive for at least %.2fs!", delay
)
EVENT_LOOP_INTERVAL_SECONDS.observe(delay)
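The measurement idea (sleep for a fixed resolution, attribute any extra elapsed time to the loop being blocked) can be demonstrated in isolation. A minimal sketch, not the actual collector:

```python
import asyncio
import time

async def watch_lag(resolution, lags, stop):
    # sleep `resolution` seconds; anything beyond that is time the loop was blocked
    last = time.perf_counter()
    while not stop.is_set():
        await asyncio.sleep(resolution)
        now = time.perf_counter()
        lags.append((now - last) - resolution)
        last = now

async def main():
    lags, stop = [], asyncio.Event()
    task = asyncio.create_task(watch_lag(0.01, lags, stop))
    await asyncio.sleep(0.03)  # let a few clean ticks through
    time.sleep(0.2)            # block the loop, like a slow synchronous handler
    await asyncio.sleep(0.03)
    stop.set()
    await task
    return max(lags)

print(f"worst observed lag: {asyncio.run(main()):.2f}s")
```

The blocked `time.sleep` shows up as a large measured lag, which is exactly what the collector reports to the histogram.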
def start(self):
"""
@@ -400,12 +423,8 @@ class PeriodicMetricsCollector(LoggingConfigurable):
self.update_active_users()
if self.event_loop_interval_enabled:
now = time.perf_counter()
self._last_tick = self._last_tick_collect = now
self._tick_durations = []
self._periodic_callbacks["event_loop_tick"] = PeriodicCallback(
self._event_loop_tick,
self.event_loop_interval_resolution * 1000,
self._tasks["event_loop_tick"] = asyncio.create_task(
self._measure_event_loop_interval()
)
# start callbacks
@@ -418,3 +437,5 @@ class PeriodicMetricsCollector(LoggingConfigurable):
"""
for pc in self._periodic_callbacks.values():
pc.stop()
for task in self._tasks.values():
task.cancel()

View File

@@ -156,7 +156,7 @@ class JupyterHubRequestValidator(RequestValidator):
self.db.query(orm.OAuthClient).filter_by(identifier=client_id).first()
)
if orm_client is None:
raise ValueError("No such client: %s" % client_id)
raise ValueError(f"No such client: {client_id}")
scopes = set(orm_client.allowed_scopes)
if 'inherit' not in scopes:
# add identify-user scope
@@ -255,7 +255,7 @@ class JupyterHubRequestValidator(RequestValidator):
self.db.query(orm.OAuthClient).filter_by(identifier=client_id).first()
)
if orm_client is None:
raise ValueError("No such client: %s" % client_id)
raise ValueError(f"No such client: {client_id}")
orm_code = orm.OAuthCode(
code=code['code'],
@@ -345,7 +345,7 @@ class JupyterHubRequestValidator(RequestValidator):
app_log.debug("Saving bearer token %s", log_token)
if request.user is None:
raise ValueError("No user for access token: %s" % request.user)
raise ValueError(f"No user for access token: {request.user}")
client = (
self.db.query(orm.OAuthClient)
.filter_by(identifier=request.client.client_id)

View File

@@ -1113,7 +1113,7 @@ class APIToken(Hashed, Base):
elif kind == 'service':
prefix_match = prefix_match.filter(cls.service_id != None)
elif kind is not None:
raise ValueError("kind must be 'user', 'service', or None, not %r" % kind)
raise ValueError(f"kind must be 'user', 'service', or None, not {kind!r}")
for orm_token in prefix_match:
if orm_token.match(token):
if not orm_token.client_id:

View File

@@ -221,11 +221,11 @@ class Proxy(LoggingConfigurable):
host_route = not routespec.startswith('/')
if host_route and not self.host_routing:
raise ValueError(
"Cannot add host-based route %r, not using host-routing" % routespec
f"Cannot add host-based route {routespec!r}, not using host-routing"
)
if self.host_routing and not host_route:
raise ValueError(
"Cannot add route without host %r, using host-routing" % routespec
f"Cannot add route without host {routespec!r}, using host-routing"
)
# add trailing slash
if not routespec.endswith('/'):
@@ -613,8 +613,8 @@ class ConfigurableHTTPProxy(Proxy):
# check for required token if proxy is external
if not self.auth_token and not self.should_start:
raise ValueError(
"%s.auth_token or CONFIGPROXY_AUTH_TOKEN env is required"
" if Proxy.should_start is False" % self.__class__.__name__
f"{self.__class__.__name__}.auth_token or CONFIGPROXY_AUTH_TOKEN env is required"
" if Proxy.should_start is False"
)
def _check_previous_process(self):
@@ -758,11 +758,11 @@ class ConfigurableHTTPProxy(Proxy):
)
except FileNotFoundError as e:
self.log.error(
"Failed to find proxy %r\n"
f"Failed to find proxy {self.command!r}\n"
"The proxy can be installed with `npm install -g configurable-http-proxy`."
"To install `npm`, install nodejs which includes `npm`."
"If you see an `EACCES` error or permissions error, refer to the `npm` "
"documentation on How To Prevent Permissions Errors." % self.command
"documentation on How To Prevent Permissions Errors."
)
raise

View File

@@ -48,7 +48,7 @@ scope_definitions = {
'doc_description': 'Access the admin page. Permission to take actions via the admin page granted separately.',
},
'admin:users': {
'description': 'Read, write, create and delete users and their authentication state, not including their servers or tokens.',
'description': 'Read, modify, create, and delete users and their authentication state, not including their servers or tokens. This is an extremely privileged scope and should be considered tantamount to superuser.',
'subscopes': ['admin:auth_state', 'users', 'read:roles:users', 'delete:users'],
},
'admin:auth_state': {'description': "Read a user's authentication state."},
@@ -64,7 +64,7 @@ scope_definitions = {
'subscopes': ['read:users:name'],
},
'read:users': {
'description': 'Read user models (excluding including servers, tokens and authentication state).',
'description': 'Read user models (including servers, tokens and authentication state).',
'subscopes': [
'read:users:name',
'read:users:groups',
@@ -109,7 +109,7 @@ scope_definitions = {
'subscopes': ['groups', 'read:roles:groups', 'delete:groups'],
},
'groups': {
'description': 'Read and write group information, including adding/removing users to/from groups.',
'description': 'Read and write group information, including adding/removing any users to/from groups. Note: adding users to groups may affect permissions.',
'subscopes': ['read:groups', 'list:groups'],
},
'list:groups': {
@@ -1257,7 +1257,7 @@ def define_custom_scopes(scopes):
The keys are the scopes,
while the values are dictionaries with at least a `description` field,
and optional `subscopes` field.
%s
CUSTOM_SCOPE_DESCRIPTION
Examples::
define_custom_scopes(
@@ -1274,7 +1274,7 @@ def define_custom_scopes(scopes):
},
}
)
""" % indent(_custom_scope_description, " " * 8)
""".replace("CUSTOM_SCOPE_DESCRIPTION", indent(_custom_scope_description, " " * 8))
for scope, scope_definition in scopes.items():
if scope in scope_definitions and scope_definitions[scope] != scope_definition:
raise ValueError(

View File

@@ -250,7 +250,6 @@ class HubAuth(SingletonConfigurable):
fetched from JUPYTERHUB_API_URL by default.
- cookie_cache_max_age: the number of seconds responses
from the Hub should be cached.
- login_url (the *public* ``/hub/login`` URL of the Hub).
"""
hub_host = Unicode(
@@ -331,17 +330,18 @@ class HubAuth(SingletonConfigurable):
return url_path_join(os.getenv('JUPYTERHUB_BASE_URL') or '/', 'hub') + '/'
login_url = Unicode(
'/hub/login',
help="""The login URL to use
Typically /hub/login
'',
help="""The login URL to use, if any.
The base HubAuth class doesn't support login via URL,
and will raise 403 on `@web.authenticated` requests without a valid token.
An empty string here raises 403 errors instead of redirecting.
HubOAuth will redirect to /hub/api/oauth2/authorize.
""",
).tag(config=True)
@default('login_url')
def _default_login_url(self):
return self.hub_host + url_path_join(self.hub_prefix, 'login')
keyfile = Unicode(
os.getenv('JUPYTERHUB_SSL_KEYFILE', ''),
help="""The ssl key to use for requests
@@ -613,11 +613,8 @@ class HubAuth(SingletonConfigurable):
r = await AsyncHTTPClient().fetch(req, raise_error=False)
except Exception as e:
app_log.error("Error connecting to %s: %s", self.api_url, e)
msg = "Failed to connect to Hub API at %r." % self.api_url
msg += (
" Is the Hub accessible at this URL (from host: %s)?"
% socket.gethostname()
)
msg = f"Failed to connect to Hub API at {self.api_url!r}."
msg += f" Is the Hub accessible at this URL (from host: {socket.gethostname()})?"
if '127.0.0.1' in self.api_url:
msg += (
" Make sure to set c.JupyterHub.hub_ip to an IP accessible to"
@@ -1045,7 +1042,7 @@ class HubOAuth(HubAuth):
@validate('oauth_client_id', 'api_token')
def _ensure_not_empty(self, proposal):
if not proposal.value:
raise ValueError("%s cannot be empty." % proposal.trait.name)
raise ValueError(f"{proposal.trait.name} cannot be empty.")
return proposal.value
oauth_redirect_uri = Unicode(
@@ -1385,6 +1382,12 @@ class HubAuthenticated:
if self._hub_login_url is not None:
# cached value, don't call this more than once per handler
return self._hub_login_url
if not self.hub_auth.login_url:
# HubOAuth is required for login via redirect,
# base class can only raise to avoid redirect loops
raise HTTPError(403)
# temporary override at setting level,
# to allow any subclass overrides of get_login_url to preserve their effect
# for example, APIHandler raises 403 to prevent redirects
@@ -1555,7 +1558,7 @@ class HubOAuthCallbackHandler(HubOAuthenticated, RequestHandler):
error = self.get_argument("error", False)
if error:
msg = self.get_argument("error_description", error)
raise HTTPError(400, "Error in oauth: %s" % msg)
raise HTTPError(400, f"Error in oauth: {msg}")
code = self.get_argument("code", False)
if not code:

View File

@@ -167,6 +167,12 @@ class Service(LoggingConfigurable):
- url: str (None)
The URL where the service is/should be.
If specified, the service will be added to the proxy at /services/:name
- oauth_redirect_url: str ('/services/:name/oauth_redirect')
The URI for the oauth redirect.
Not usually needed, but must be set for external services that are not accessed through the proxy,
or for any service whose redirect URI differs from the default of `/services/:name/oauth_redirect`.
- oauth_no_confirm: bool(False)
Whether this service should be allowed to complete oauth
with logged-in users without prompting for confirmation.
@@ -321,7 +327,7 @@ class Service(LoggingConfigurable):
@default('oauth_client_id')
def _default_client_id(self):
return 'service-%s' % self.name
return f'service-{self.name}'
@validate("oauth_client_id")
def _validate_client_id(self, proposal):
@@ -335,7 +341,9 @@ class Service(LoggingConfigurable):
oauth_redirect_uri = Unicode(
help="""OAuth redirect URI for this service.
You shouldn't generally need to change this.
Must be set for external services that are not accessed through the proxy,
or for any service whose redirect URI differs from the default.
Default: `/services/:name/oauth_callback`
"""
).tag(input=True, in_db=False)
@@ -411,7 +419,7 @@ class Service(LoggingConfigurable):
async def start(self):
"""Start a managed service"""
if not self.managed:
raise RuntimeError("Cannot start unmanaged service %s" % self)
raise RuntimeError(f"Cannot start unmanaged service {self}")
self.log.info("Starting service %r: %r", self.name, self.command)
env = {}
env.update(self.environment)
@@ -465,7 +473,7 @@ class Service(LoggingConfigurable):
"""Stop a managed service"""
self.log.debug("Stopping service %s", self.name)
if not self.managed:
raise RuntimeError("Cannot stop unmanaged service %s" % self)
raise RuntimeError(f"Cannot stop unmanaged service {self}")
if self.spawner:
if self.orm.server:
self.db.delete(self.orm.server)

View File

@@ -412,9 +412,12 @@ class JupyterHubSingleUser(ExtensionApp):
return
last_activity_timestamp = isoformat(last_activity)
failure_count = 0
async def notify():
nonlocal failure_count
self.log.debug("Notifying Hub of activity %s", last_activity_timestamp)
req = HTTPRequest(
url=self.hub_activity_url,
method='POST',
@@ -433,8 +436,12 @@ class JupyterHubSingleUser(ExtensionApp):
)
try:
await client.fetch(req)
except Exception:
self.log.exception("Error notifying Hub of activity")
except Exception as e:
failure_count += 1
# log traceback at debug-level
self.log.debug("Error notifying Hub of activity", exc_info=True)
# only one-line error visible by default
self.log.error("Error notifying Hub of activity: %s", e)
return False
else:
return True
@@ -446,6 +453,8 @@ class JupyterHubSingleUser(ExtensionApp):
max_wait=15,
timeout=60,
)
if failure_count:
self.log.info("Sent hub activity after %s retries", failure_count)
self._last_activity_sent = last_activity
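The retry flow (a `notify` coroutine returns True/False, a backoff helper re-invokes it, and failures are counted for a one-line summary) can be sketched generically. The `exponential_backoff` below is a simplified stand-in for JupyterHub's utility, not its real signature:

```python
import asyncio

async def exponential_backoff(coro, start_wait=0.001, max_wait=0.01, timeout=1.0):
    # simplified stand-in: retry until coro() returns truthy or timeout expires
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    wait = start_wait
    while True:
        if await coro():
            return
        if loop.time() > deadline:
            raise TimeoutError("gave up retrying")
        await asyncio.sleep(wait)
        wait = min(wait * 2, max_wait)

async def main():
    failure_count = 0
    attempts = {"n": 0}

    async def notify():
        nonlocal failure_count
        attempts["n"] += 1
        if attempts["n"] < 3:  # fail twice, then succeed
            failure_count += 1
            return False
        return True

    await exponential_backoff(notify)
    return failure_count

print(asyncio.run(main()))  # → 2
```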
async def keep_activity_updated(self):

View File

@@ -341,7 +341,7 @@ class SingleUserNotebookAppMixin(Configurable):
# If we receive a non-absolute path, make it absolute.
value = os.path.abspath(value)
if not os.path.isdir(value):
raise TraitError("No such notebook dir: %r" % value)
raise TraitError(f"No such notebook dir: {value!r}")
return value
@default('log_level')
@@ -588,7 +588,7 @@ class SingleUserNotebookAppMixin(Configurable):
self.log.warning("Enabling jupyterhub test extension")
self.jpserver_extensions["jupyterhub.tests.extension"] = True
def initialize(self, argv=None):
def initialize(self, argv=None, **kwargs):
if self.disable_user_config:
_disable_user_config(self)
# disable trash by default
@@ -605,7 +605,7 @@ class SingleUserNotebookAppMixin(Configurable):
# jupyter-server calls it too late, notebook doesn't define it yet
# only called in jupyter-server >= 1.9
self.init_ioloop()
super().initialize(argv)
super().initialize(argv, **kwargs)
self.patch_templates()
def init_ioloop(self):

View File

@@ -18,7 +18,11 @@ from tempfile import mkdtemp
from textwrap import dedent
from urllib.parse import urlparse
from async_generator import aclosing
if sys.version_info >= (3, 10):
from contextlib import aclosing
else:
from async_generator import aclosing
from sqlalchemy import inspect
from tornado.ioloop import PeriodicCallback
from traitlets import (
@@ -46,6 +50,7 @@ from .utils import (
exponential_backoff,
maybe_future,
random_port,
recursive_update,
url_escape_path,
url_path_join,
)
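The conditional import above picks `contextlib.aclosing` on Python 3.10+, falling back to the `async_generator` backport otherwise. `aclosing` guarantees an async generator's `aclose()` runs even when iteration stops early; a small sketch of that behavior:

```python
import asyncio
import sys

if sys.version_info >= (3, 10):
    from contextlib import aclosing
else:  # backport for Python < 3.10
    from async_generator import aclosing

async def main():
    closed = []

    async def ticker():
        try:
            n = 0
            while True:
                yield n
                n += 1
        finally:
            closed.append(True)  # runs when aclose() is called

    async with aclosing(ticker()) as gen:
        async for n in gen:
            if n >= 2:
                break  # leaving the block triggers aclose()
    return closed

print(asyncio.run(main()))  # → [True]
```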
@@ -302,6 +307,57 @@ class Spawner(LoggingConfigurable):
f"access:servers!user={self.user.name}",
]
group_overrides = Union(
[Callable(), Dict()],
help="""
Override specific traitlets based on group membership of the user.
This can be a dict, or a callable that returns a dict. The keys of the dict
are *only* used for lexicographical sorting, to guarantee consistent
ordering of the overrides. If it is a callable, it may be async, and will
be passed one parameter - the spawner instance. It should return a dictionary.
The values of the dict are dicts with the following keys:
- `"groups"` - If the user belongs to *any* of these groups, these overrides are
applied to their server before spawning.
- `"spawner_override"` - a dictionary with overrides to apply to the Spawner
settings. Each value can be either the final value to change or a callable that
takes the `Spawner` instance as a parameter and returns the final value.
If the traitlet being overridden is a *dictionary*, the dictionary
will be *recursively updated*, rather than overridden. If you want to
remove a key, set its value to `None`.
Example:
The following example config will:
1. Add the environment variable "AM_I_GROUP_ALPHA" to everyone in the "group-alpha" group
2. Add the environment variable "AM_I_GROUP_BETA" to everyone in the "group-beta" group.
If a user is part of both "group-beta" and "group-alpha", they will get *both* these env
vars, due to the dictionary merging functionality.
3. Add a higher memory limit for everyone in the "group-beta" group.
::
c.Spawner.group_overrides = {
"01-group-alpha-env-add": {
"groups": ["group-alpha"],
"spawner_override": {"environment": {"AM_I_GROUP_ALPHA": "yes"}},
},
"02-group-beta-env-add": {
"groups": ["group-beta"],
"spawner_override": {"environment": {"AM_I_GROUP_BETA": "yes"}},
},
"03-group-beta-mem-limit": {
"groups": ["group-beta"],
"spawner_override": {"mem_limit": "2G"}
}
}
""",
config=True,
)
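The static-dict example in the docstring can equally be expressed in the callable form it mentions; a minimal sketch, with hypothetical group names and limit values:

```python
# Sketch of the callable form of group_overrides described above.
# The function receives the Spawner instance and returns the same
# dict structure the static config form would hold; it may also be
# declared async. Group names and values here are illustrative.
def group_overrides(spawner):
    return {
        "01-power-users-mem": {
            "groups": ["power-users"],
            "spawner_override": {"mem_limit": "4G"},
        },
    }
```

In `jupyterhub_config.py` this would be assigned with `c.Spawner.group_overrides = group_overrides`; the callable form lets the overrides depend on runtime state such as `spawner.user.name`.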
handler = Any()
oauth_roles = Union(
@@ -500,7 +556,7 @@ class Spawner(LoggingConfigurable):
max=1,
help="""
Jitter fraction for poll_interval.
Avoids alignment of poll calls for many Spawners,
e.g. when restarting JupyterHub, which restarts all polls for running Spawners.
@@ -990,7 +1046,7 @@ class Spawner(LoggingConfigurable):
env = {}
if self.env:
warnings.warn(
"Spawner.env is deprecated, found %s" % self.env, DeprecationWarning
f"Spawner.env is deprecated, found {self.env}", DeprecationWarning
)
env.update(self.env)
@@ -1475,6 +1531,48 @@ class Spawner(LoggingConfigurable):
except AnyTimeoutError:
return False
def _apply_overrides(self, spawner_override: dict):
"""
Apply set of overrides onto the current spawner instance
spawner_override is a dict whose keys are the names of the traitlets
to override; each value is either a callable or the value for the
traitlet. If the value is a dictionary, it is *merged* with the
existing value (rather than replaced). Callables are called with
one parameter - the current spawner instance.
"""
for k, v in spawner_override.items():
if callable(v):
v = v(self)
# If v is a dict, *merge* it with existing values, rather than completely
# resetting it. This allows *adding* things like environment variables rather
# than completely replacing them. If value is set to None, the key
# will be removed
if isinstance(v, dict) and isinstance(getattr(self, k), dict):
recursive_update(getattr(self, k), v)
else:
setattr(self, k, v)
async def apply_group_overrides(self):
"""
Apply group overrides before starting a server
"""
user_group_names = {g.name for g in self.user.groups}
if callable(self.group_overrides):
group_overrides = await maybe_future(self.group_overrides(self))
else:
group_overrides = self.group_overrides
for key in sorted(group_overrides):
go = group_overrides[key]
if user_group_names & set(go['groups']):
# If there is *any* overlap between the groups the user is in
# and the groups for this override, apply the overrides
self.log.info(
f"Applying group_override {key} for {self.user.name}, modifying config keys: {' '.join(go['spawner_override'].keys())}"
)
self._apply_overrides(go['spawner_override'])
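The recursive dictionary merge that `_apply_overrides` delegates to `recursive_update` can be sketched standalone; this mirrors the documented semantics (recursive merge, with `None` deleting a key), not the actual implementation:

```python
def merge(target, new):
    """Recursively merge `new` into `target`; a value of None deletes the key."""
    for key, value in new.items():
        if isinstance(value, dict) and isinstance(target.get(key), dict):
            merge(target[key], value)  # merge nested dicts instead of replacing
        elif value is None:
            target.pop(key, None)  # None removes the key, per the docstring
        else:
            target[key] = value
    return target


env = {"KEEP": "1", "DROP": "2"}
merge(env, {"DROP": None, "ADD": "3"})
# env is now {"KEEP": "1", "ADD": "3"}
```

This is why two overrides touching the same `environment` dict accumulate rather than clobber each other.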
def _try_setcwd(path):
"""Try to set CWD to path, walking up until a valid directory is found.
@@ -1490,7 +1588,7 @@ def _try_setcwd(path):
path, _ = os.path.split(path)
else:
return
print("Couldn't set CWD at all (%s), using temp dir" % exc, file=sys.stderr)
print(f"Couldn't set CWD at all ({exc}), using temp dir", file=sys.stderr)
td = mkdtemp()
os.chdir(td)
@@ -1520,7 +1618,7 @@ def set_user_setuid(username, chdir=True):
try:
os.setgroups(gids)
except Exception as e:
print('Failed to set groups %s' % e, file=sys.stderr)
print(f'Failed to set groups {e}', file=sys.stderr)
os.setuid(uid)
# start in the user's home dir


@@ -32,6 +32,20 @@ async def login(browser, username, password=None):
await browser.get_by_role("button", name="Sign in").click()
async def login_home(browser, app, username):
"""Visit login page, login, go home
A good way to start a session
"""
login_url = url_concat(
url_path_join(public_url(app), "hub/login"),
{"next": ujoin(app.hub.base_url, "home")},
)
await browser.goto(login_url)
async with browser.expect_navigation(url=re.compile(".*/hub/home")):
await login(browser, username)
async def test_open_login_page(app, browser):
login_url = url_path_join(public_host(app), app.hub.base_url, "login")
await browser.goto(login_url)
@@ -467,6 +481,70 @@ async def open_token_page(app, browser, user):
await expect(browser).to_have_url(re.compile(".*/hub/token"))
@pytest.mark.parametrize(
"expires_in_max, expected_options",
[
pytest.param(
None,
[
('1 Hour', '3600'),
('1 Day', '86400'),
('1 Week', '604800'),
('1 Month', '2592000'),
('1 Year', '31536000'),
('Never', ''),
],
id="default",
),
pytest.param(
86400,
[
('1 Hour', '3600'),
('1 Day', '86400'),
],
id="1day",
),
pytest.param(
3600 * 36,
[
('1 Hour', '3600'),
('1 Day', '86400'),
('Max (36 hours)', ''),
],
id="36hours",
),
pytest.param(
86400 * 10,
[
('1 Hour', '3600'),
('1 Day', '86400'),
('1 Week', '604800'),
('Max (10 days)', ''),
],
id="10days",
),
],
)
async def test_token_form_expires_in(
app, browser, user_special_chars, expires_in_max, expected_options
):
with mock.patch.dict(
app.tornado_settings, {"token_expires_in_max_seconds": expires_in_max}
):
await open_token_page(app, browser, user_special_chars.user)
# check the list of token durations
dropdown = browser.locator('#token-expiration-seconds')
options = await dropdown.locator('option').all()
actual_values = [
(await option.text_content(), await option.get_attribute('value'))
for option in options
]
assert actual_values == expected_options
# get the value of the 'selected' attribute of the currently selected option
selected_value = dropdown.locator('option[selected]')
await expect(selected_value).to_have_text(expected_options[-1][0])
async def test_token_request_form_and_panel(app, browser, user_special_chars):
"""verify elements of the request token form"""
@@ -483,24 +561,6 @@ async def test_token_request_form_and_panel(app, browser, user_special_chars):
await expect(field_note).to_be_enabled()
await expect(field_note).to_be_empty()
# check the list of token durations
dropdown = browser.locator('#token-expiration-seconds')
options = await dropdown.locator('option').all()
expected_values_in_list = {
'1 Hour': '3600',
'1 Day': '86400',
'1 Week': '604800',
'Never': '',
}
actual_values = {
await option.text_content(): await option.get_attribute('value')
for option in options
}
assert actual_values == expected_values_in_list
# get the value of the 'selected' attribute of the currently selected option
selected_value = dropdown.locator('option[selected]')
await expect(selected_value).to_have_text("Never")
# check scopes field
scopes_input = browser.get_by_label("Permissions")
await expect(scopes_input).to_be_editable()
@@ -1367,6 +1427,17 @@ async def test_login_xsrf_initial_cookies(app, browser, case, username):
await login(browser, username, username)
async def test_prefix_redirect_not_running(browser, app, user):
# tests PrefixRedirectHandler for stopped servers
await login_home(browser, app, user.name)
# visit user url (includes subdomain, if enabled)
url = public_url(app, user, "/tree/")
await browser.goto(url)
# make sure we end up on the Hub (domain included)
expected_url = url_path_join(public_url(app), f"hub/user/{user.name}/tree/")
await expect(browser).to_have_url(expected_url)
def _cookie_dict(cookie_list):
"""Convert list of cookies to dict of the form


@@ -83,7 +83,7 @@ async def app(request, io_loop, ssl_tmpdir):
try:
mocked_app.stop()
except Exception as e:
print("Error stopping Hub: %s" % e, file=sys.stderr)
print(f"Error stopping Hub: {e}", file=sys.stderr)
request.addfinalizer(fin)
await mocked_app.initialize([])


@@ -54,7 +54,7 @@ class APIHandler(web.RequestHandler):
api_token = os.environ['JUPYTERHUB_API_TOKEN']
api_url = os.environ['JUPYTERHUB_API_URL']
r = requests.get(
api_url + path, headers={'Authorization': 'token %s' % api_token}
api_url + path, headers={'Authorization': f'token {api_token}'}
)
r.raise_for_status()
self.set_header('Content-Type', 'application/json')


@@ -12,6 +12,7 @@ from unittest import mock
from urllib.parse import parse_qs, quote, urlparse
import pytest
from dateutil.parser import parse as parse_date
from pytest import fixture, mark
from tornado.httputil import url_concat
@@ -63,7 +64,7 @@ async def test_auth_api(app):
app,
'authorizations/token',
api_token,
headers={'Authorization': 'token: %s' % user.cookie_id},
headers={'Authorization': f'token: {user.cookie_id}'},
)
assert r.status_code == 403
@@ -774,16 +775,25 @@ async def test_add_multi_user(app):
@mark.user
@mark.role
async def test_add_multi_user_admin(app):
@mark.parametrize("is_admin", [True, False])
async def test_add_multi_user_admin(app, create_user_with_scopes, is_admin):
db = app.db
requester = create_user_with_scopes("admin:users")
requester.admin = is_admin
db.commit()
names = ['c', 'd']
r = await api_request(
app,
'users',
method='post',
data=json.dumps({'usernames': names, 'admin': True}),
name=requester.name,
)
assert r.status_code == 201
if is_admin:
assert r.status_code == 201
else:
assert r.status_code == 403
return
reply = r.json()
r_names = [user['name'] for user in reply]
assert names == r_names
@@ -821,13 +831,26 @@ async def test_add_user_duplicate(app):
@mark.user
@mark.role
async def test_add_admin(app):
@mark.parametrize("is_admin", [True, False])
async def test_add_admin(app, create_user_with_scopes, is_admin):
db = app.db
name = 'newadmin'
user = create_user_with_scopes("admin:users")
user.admin = is_admin
db.commit()
r = await api_request(
app, 'users', name, method='post', data=json.dumps({'admin': True})
app,
'users',
name,
method='post',
data=json.dumps({'admin': True}),
name=user.name,
)
assert r.status_code == 201
if is_admin:
assert r.status_code == 201
else:
assert r.status_code == 403
return
user = find_user(db, name)
assert user is not None
assert user.name == name
@@ -847,9 +870,14 @@ async def test_delete_user(app):
@mark.user
@mark.role
async def test_make_admin(app):
@mark.parametrize("is_admin", [True, False])
async def test_user_make_admin(app, create_user_with_scopes, is_admin):
db = app.db
name = 'admin2'
requester = create_user_with_scopes('admin:users')
requester.admin = is_admin
db.commit()
name = new_username("make_admin")
r = await api_request(app, 'users', name, method='post')
assert r.status_code == 201
user = find_user(db, name)
@@ -860,10 +888,18 @@ async def test_make_admin(app):
assert orm.Role.find(db, 'admin') not in user.roles
r = await api_request(
app, 'users', name, method='patch', data=json.dumps({'admin': True})
app,
'users',
name,
method='patch',
data=json.dumps({'admin': True}),
name=requester.name,
)
assert r.status_code == 200
if is_admin:
assert r.status_code == 200
else:
assert r.status_code == 403
return
user = find_user(db, name)
assert user is not None
assert user.name == name
@@ -872,6 +908,38 @@ async def test_make_admin(app):
assert orm.Role.find(db, 'admin') in user.roles
@mark.user
@mark.parametrize("requester_is_admin", [True, False])
@mark.parametrize("user_is_admin", [True, False])
async def test_user_set_name(
app, user, create_user_with_scopes, requester_is_admin, user_is_admin
):
db = app.db
requester = create_user_with_scopes('admin:users')
requester.admin = requester_is_admin
user.admin = user_is_admin
db.commit()
new_name = new_username()
r = await api_request(
app,
'users',
user.name,
method='patch',
data=json.dumps({'name': new_name}),
name=requester.name,
)
if requester_is_admin or not user_is_admin:
assert r.status_code == 200
else:
assert r.status_code == 403
return
renamed = find_user(db, new_name)
assert renamed is not None
assert renamed.name == new_name
assert renamed.id == user.id
@mark.user
async def test_set_auth_state(app, auth_state_enabled):
auth_state = {'secret': 'hello'}
@@ -965,7 +1033,7 @@ async def test_spawn(app):
status = await app_user.spawner.poll()
assert status is None
assert spawner.server.base_url == ujoin(app.base_url, 'user/%s' % name) + '/'
assert spawner.server.base_url == ujoin(app.base_url, f'user/{name}') + '/'
url = public_url(app, user)
kwargs = {}
if app.internal_ssl:
@@ -1412,7 +1480,7 @@ async def test_progress_bad_slow(request, app, no_patience, slow_bad_spawn):
async def progress_forever():
"""progress function that yields messages forever"""
for i in range(1, 10):
yield {'progress': i, 'message': 'Stage %s' % i}
yield {'progress': i, 'message': f'Stage {i}'}
# wait a long time before the next event
await asyncio.sleep(10)
@@ -1557,23 +1625,20 @@ async def test_start_stop_race(app, no_patience, slow_spawn):
r = await api_request(app, 'users', user.name, 'server', method='post')
assert r.status_code == 202
assert spawner.pending == 'spawn'
spawn_future = spawner._spawn_future
# additional spawns while spawning shouldn't trigger a new spawn
with mock.patch.object(spawner, 'start') as m:
r = await api_request(app, 'users', user.name, 'server', method='post')
assert r.status_code == 202
assert m.call_count == 0
# stop while spawning is not okay
r = await api_request(app, 'users', user.name, 'server', method='delete')
assert r.status_code == 400
while not spawner.ready:
await asyncio.sleep(0.1)
# stop while spawning is okay now
spawner.delay = 3
# stop the spawner
r = await api_request(app, 'users', user.name, 'server', method='delete')
assert r.status_code == 202
assert spawner.pending == 'stop'
assert spawn_future.cancelled()
assert spawner._spawn_future is None
# make sure we get past deleting from the proxy
await asyncio.sleep(1)
# additional stops while stopping shouldn't trigger a new stop
@@ -1726,6 +1791,46 @@ async def test_get_new_token(app, headers, status, note, expires_in):
assert r.status_code == 404
@pytest.mark.parametrize(
"expires_in_max, expires_in, expected",
[
(86400, None, 86400),
(86400, 86400, 86400),
(86400, 86401, 'error'),
(3600, 100, 100),
(None, None, None),
(None, 86400, 86400),
],
)
async def test_token_expires_in_max(app, user, expires_in_max, expires_in, expected):
options = {
"expires_in": expires_in,
}
# request a new token
with mock.patch.dict(
app.tornado_settings, {"token_expires_in_max_seconds": expires_in_max}
):
r = await api_request(
app,
f'users/{user.name}/tokens',
method='post',
data=json.dumps(options),
)
if expected == 'error':
assert r.status_code == 400
assert f"must not exceed {expires_in_max}" in r.json()["message"]
return
else:
assert r.status_code == 201
token_model = r.json()
if expected is None:
assert token_model["expires_at"] is None
else:
expected_expires_at = utcnow() + timedelta(seconds=expected)
expires_at = parse_date(token_model["expires_at"])
assert abs((expires_at - expected_expires_at).total_seconds()) < 30
@mark.parametrize(
"as_user, for_user, status",
[
@@ -1741,7 +1846,7 @@ async def test_token_for_user(app, as_user, for_user, status):
if for_user != 'missing':
for_user_obj = add_user(app.db, app, name=for_user)
data = {'username': for_user}
headers = {'Authorization': 'token %s' % u.new_api_token()}
headers = {'Authorization': f'token {u.new_api_token()}'}
r = await api_request(
app,
'users',
@@ -1765,7 +1870,7 @@ async def test_token_for_user(app, as_user, for_user, status):
if for_user == as_user:
note = 'Requested via api'
else:
note = 'Requested via api by user %s' % as_user
note = f'Requested via api by user {as_user}'
assert reply['note'] == note
# delete the token
@@ -1836,7 +1941,7 @@ async def test_token_list(app, as_user, for_user, status):
u = add_user(app.db, app, name=as_user)
if for_user != 'missing':
for_user_obj = add_user(app.db, app, name=for_user)
headers = {'Authorization': 'token %s' % u.new_api_token()}
headers = {'Authorization': f'token {u.new_api_token()}'}
r = await api_request(app, 'users', for_user, 'tokens', headers=headers)
assert r.status_code == status
if status != 200:
@@ -2214,7 +2319,7 @@ async def test_get_service(app, mockservice_url):
r = await api_request(
app,
f"services/{mockservice.name}",
headers={'Authorization': 'token %s' % mockservice.api_token},
headers={'Authorization': f'token {mockservice.api_token}'},
)
r.raise_for_status()


@@ -165,21 +165,35 @@ async def test_pam_auth_allowed():
async def test_pam_auth_allowed_groups():
def getgrnam(name):
return MockStructGroup('grp', ['kaylee'])
class TestAuthenticator(MockPAMAuthenticator):
@staticmethod
def _getpwnam(name):
return MockStructPasswd(name=name)
authenticator = MockPAMAuthenticator(allowed_groups={'group'}, allow_all=False)
@staticmethod
def _getgrnam(name):
if name == "group":
return MockStructGroup('grp', ['kaylee'], gid=1234)
else:
return None
with mock.patch.object(authenticator, '_getgrnam', getgrnam):
authorized = await authenticator.get_authenticated_user(
None, {'username': 'kaylee', 'password': 'kaylee'}
)
@staticmethod
def _getgrouplist(username, gid):
gids = [gid]
if username == "kaylee":
gids.append(1234)
return gids
authenticator = TestAuthenticator(allowed_groups={'group'}, allow_all=False)
authorized = await authenticator.get_authenticated_user(
None, {'username': 'kaylee', 'password': 'kaylee'}
)
assert authorized['name'] == 'kaylee'
with mock.patch.object(authenticator, '_getgrnam', getgrnam):
authorized = await authenticator.get_authenticated_user(
None, {'username': 'mal', 'password': 'mal'}
)
authorized = await authenticator.get_authenticated_user(
None, {'username': 'mal', 'password': 'mal'}
)
assert authorized is None
@@ -270,6 +284,7 @@ async def test_pam_auth_no_such_group():
authenticator = MockPAMAuthenticator(
allowed_groups={'nosuchcrazygroup'},
)
authenticator._getpwnam = MockStructPasswd
authorized = await authenticator.get_authenticated_user(
None, {'username': 'kaylee', 'password': 'kaylee'}
)


@@ -57,7 +57,7 @@ async def test_upgrade(tmpdir, hub_version):
# use persistent temp env directory
# to reuse across multiple runs
env_dir = os.path.join(tempfile.gettempdir(), 'test-hub-upgrade-%s' % hub_version)
env_dir = os.path.join(tempfile.gettempdir(), f'test-hub-upgrade-{hub_version}')
generate_old_db(env_dir, hub_version, db_url)


@@ -19,20 +19,22 @@ from traitlets.config import Config
# and `invalid_events` dictionary below.
# To test valid events, add event item with the form:
# { ( '<schema id>', <version> ) : { <event_data> } }
# ( '<schema id>', { <event_data> } )
valid_events = [
(
'hub.jupyter.org/server-action',
1,
'https://schema.jupyter.org/jupyterhub/events/server-action',
dict(action='start', username='test-username', servername='test-servername'),
)
]
# To test invalid events, add event item with the form:
# { ( '<schema id>', <version> ) : { <event_data> } }
# ( '<schema id>', { <event_data> } )
invalid_events = [
# Missing required keys
('hub.jupyter.org/server-action', 1, dict(action='start'))
(
'https://schema.jupyter.org/jupyterhub/events/server-action',
dict(action='start'),
)
]
@@ -41,11 +43,11 @@ def eventlog_sink(app):
"""Return eventlog and sink objects"""
sink = io.StringIO()
handler = logging.StreamHandler(sink)
# Update the EventLog config with handler
# Update the EventLogger config with handler
cfg = Config()
cfg.EventLog.handlers = [handler]
cfg.EventLogger.handlers = [handler]
with mock.patch.object(app.config, 'EventLog', cfg.EventLog):
with mock.patch.object(app.config, 'EventLogger', cfg.EventLogger):
# recreate the eventlog object with our config
app.init_eventlog()
# return the sink from the fixture
@@ -54,12 +56,12 @@ def eventlog_sink(app):
app.init_eventlog()
@pytest.mark.parametrize('schema, version, event', valid_events)
def test_valid_events(eventlog_sink, schema, version, event):
@pytest.mark.parametrize('schema, event', valid_events)
def test_valid_events(eventlog_sink, schema, event):
eventlog, sink = eventlog_sink
eventlog.allowed_schemas = [schema]
# Record event
eventlog.record_event(schema, version, event)
eventlog.emit(schema_id=schema, data=event)
# Inspect consumed event
output = sink.getvalue()
assert output
@@ -68,11 +70,11 @@ def test_valid_events(eventlog_sink, schema, version, event):
assert data is not None
@pytest.mark.parametrize('schema, version, event', invalid_events)
def test_invalid_events(eventlog_sink, schema, version, event):
@pytest.mark.parametrize('schema, event', invalid_events)
def test_invalid_events(eventlog_sink, schema, event):
eventlog, sink = eventlog_sink
eventlog.allowed_schemas = [schema]
# Make sure an error is thrown when bad events are recorded
with pytest.raises(jsonschema.ValidationError):
recorded_event = eventlog.record_event(schema, version, event)
recorded_event = eventlog.emit(schema_id=schema, data=event)


@@ -69,6 +69,12 @@ async def test_default_server(app, named_servers):
r.raise_for_status()
user_model = normalize_user(r.json())
full_progress_url = None
if app.public_url:
full_progress_url = url_path_join(
app.public_url,
f'hub/api/users/{username}/server/progress',
)
assert user_model == fill_user(
{
'name': username,
@@ -88,6 +94,8 @@ async def test_default_server(app, named_servers):
'progress_url': f'PREFIX/hub/api/users/{username}/server/progress',
'state': {'pid': 0},
'user_options': {},
'full_url': user.public_url() or None,
'full_progress_url': full_progress_url,
}
},
}
@@ -157,6 +165,14 @@ async def test_create_named_server(
assert db_server_names == {"", servername}
user_model = normalize_user(r.json())
full_progress_url = None
if app.public_url:
full_progress_url = url_path_join(
app.public_url,
f'hub/api/users/{username}/servers/{escapedname}/progress',
)
assert user_model == fill_user(
{
'name': username,
@@ -175,6 +191,8 @@ async def test_create_named_server(
'progress_url': f'PREFIX/hub/api/users/{username}/servers/{escapedname}/progress',
'state': {'pid': 0},
'user_options': {},
'full_url': user.public_url(name) or None,
'full_progress_url': full_progress_url,
}
for name in [servername]
},


@@ -120,7 +120,7 @@ async def test_admin_version(app):
@pytest.mark.parametrize('sort', ['running', 'last_activity', 'admin', 'name'])
async def test_admin_sort(app, sort):
cookies = await app.login_user('admin')
r = await get_page('admin?sort=%s' % sort, app, cookies=cookies)
r = await get_page(f'admin?sort={sort}', app, cookies=cookies)
r.raise_for_status()
assert r.status_code == 200
@@ -170,7 +170,7 @@ async def test_spawn_redirect(app, last_failed):
r.raise_for_status()
print(urlparse(r.url))
path = urlparse(r.url).path
assert path == ujoin(app.base_url, '/user/%s/' % name)
assert path == ujoin(app.base_url, f'/user/{name}/')
# stop server to ensure /user/name is handled by the Hub
r = await api_request(
@@ -181,7 +181,7 @@ async def test_spawn_redirect(app, last_failed):
# test handling of trailing slash on `/user/name`
r = await get_page('user/' + name, app, hub=False, cookies=cookies)
path = urlparse(r.url).path
assert path == ujoin(app.base_url, 'hub/user/%s/' % name)
assert path == ujoin(app.base_url, f'hub/user/{name}/')
assert r.status_code == 424
@@ -586,7 +586,7 @@ async def test_user_redirect(app, username):
await asyncio.sleep(0.1)
r = await async_requests.get(r.url, cookies=cookies)
path = urlparse(r.url).path
assert path == ujoin(app.base_url, '/user/%s/notebooks/test.ipynb' % name)
assert path == ujoin(app.base_url, f'/user/{name}/notebooks/test.ipynb')
async def test_user_redirect_hook(app, username):
@@ -1240,7 +1240,7 @@ async def test_token_page(app):
r.raise_for_status()
body = extract_body(r)
assert "API Tokens" in body, body
assert "Server at %s" % user.base_url in body, body
assert f"Server at {user.base_url}" in body, body
assert "Authorized Applications" in body, body
@@ -1299,7 +1299,7 @@ async def test_pre_spawn_start_exc_options_form(app):
r.raise_for_status()
assert FormSpawner.options_form in r.text
# spawning the user server should throw the pre_spawn_start error
with pytest.raises(Exception, match="%s" % exc):
with pytest.raises(Exception, match=str(exc)):
await user.spawn()


@@ -171,7 +171,7 @@ async def test_external_proxy(request):
async def test_check_routes(app, username, disable_check_routes):
proxy = app.proxy
test_user = add_user(app.db, app, name=username)
r = await api_request(app, 'users/%s/server' % username, method='post')
r = await api_request(app, f'users/{username}/server', method='post')
r.raise_for_status()
# check a valid route exists for user


@@ -956,7 +956,7 @@ async def test_user_group_roles(app, create_temp_role):
# jack's API token
token = user.new_api_token()
headers = {'Authorization': 'token %s' % token}
headers = {'Authorization': f'token {token}'}
r = await api_request(app, f'users/{user.name}', method='get', headers=headers)
assert r.status_code == 200
r.raise_for_status()
@@ -968,7 +968,7 @@ async def test_user_group_roles(app, create_temp_role):
assert len(reply['roles']) == 1
assert group_role.name not in reply['roles']
headers = {'Authorization': 'token %s' % token}
headers = {'Authorization': f'token {token}'}
r = await api_request(app, 'groups', method='get', headers=headers)
assert r.status_code == 200
r.raise_for_status()
@@ -978,7 +978,7 @@ async def test_user_group_roles(app, create_temp_role):
assert len(reply) == 1
assert reply[0]['name'] == 'A'
headers = {'Authorization': 'token %s' % token}
headers = {'Authorization': f'token {token}'}
r = await api_request(app, f'users/{user.name}', method='get', headers=headers)
assert r.status_code == 200
r.raise_for_status()


@@ -289,7 +289,7 @@ async def test_exceeding_user_permissions(
orm_api_token = orm.APIToken.find(app.db, token=api_token)
# store scopes user does not have
orm_api_token.scopes = list(orm_api_token.scopes) + ['list:users', 'read:users']
headers = {'Authorization': 'token %s' % api_token}
headers = {'Authorization': f'token {api_token}'}
r = await api_request(app, 'users', headers=headers)
assert r.status_code == 200
keys = {key for user in r.json() for key in user.keys()}
@@ -307,7 +307,7 @@ async def test_user_service_separation(app, mockservice_url, create_temp_role):
roles.update_roles(app.db, mockservice_url.orm, roles=['reader_role'])
user.roles.remove(orm.Role.find(app.db, name='user'))
api_token = user.new_api_token()
headers = {'Authorization': 'token %s' % api_token}
headers = {'Authorization': f'token {api_token}'}
r = await api_request(app, 'users', headers=headers)
assert r.status_code == 200
keys = {key for user in r.json() for key in user.keys()}
@@ -551,7 +551,7 @@ async def test_server_state_access(
)
service = create_service_with_scopes("read:users:name!user=bianca", *scopes)
api_token = service.new_api_token()
headers = {'Authorization': 'token %s' % api_token}
headers = {'Authorization': f'token {api_token}'}
# can I get the user model?
r = await api_request(app, 'users', user.name, headers=headers)


@@ -3,10 +3,9 @@
import os
import sys
from binascii import hexlify
from contextlib import asynccontextmanager
from subprocess import Popen
from async_generator import asynccontextmanager
from ..utils import (
exponential_backoff,
maybe_future,


@@ -88,14 +88,10 @@ async def test_hubauth_token(app, mockservice_url, create_user_with_scopes):
# token in ?token parameter is not allowed by default
r = await async_requests.get(
public_url(app, mockservice_url) + '/whoami/?token=%s' % token,
public_url(app, mockservice_url) + f'/whoami/?token={token}',
allow_redirects=False,
)
assert r.status_code == 302
assert 'Location' in r.headers
location = r.headers['Location']
path = urlparse(location).path
assert path.endswith('/hub/login')
assert r.status_code == 403
@pytest.mark.parametrize(
@@ -154,7 +150,7 @@ async def test_hubauth_service_token(request, app, mockservice_url, scopes, allo
# token in Authorization header
r = await async_requests.get(
public_url(app, mockservice_url) + 'whoami/',
headers={'Authorization': 'token %s' % token},
headers={'Authorization': f'token {token}'},
allow_redirects=False,
)
service_model = {
@@ -174,14 +170,10 @@ async def test_hubauth_service_token(request, app, mockservice_url, scopes, allo
# token in ?token parameter is not allowed by default
r = await async_requests.get(
public_url(app, mockservice_url) + 'whoami/?token=%s' % token,
public_url(app, mockservice_url) + f'whoami/?token={token}',
allow_redirects=False,
)
assert r.status_code == 302
assert 'Location' in r.headers
location = r.headers['Location']
path = urlparse(location).path
assert path.endswith('/hub/login')
assert r.status_code == 403
@pytest.mark.parametrize(
@@ -311,7 +303,7 @@ async def test_oauth_service_roles(
# we should be looking at the oauth confirmation page
assert urlparse(r.url).path == app.base_url + 'hub/api/oauth2/authorize'
# verify oauth state cookie was set at some point
assert set(r.history[0].cookies.keys()) == {'service-%s-oauth-state' % service.name}
assert set(r.history[0].cookies.keys()) == {f'service-{service.name}-oauth-state'}
page = BeautifulSoup(r.text, "html.parser")
scope_inputs = page.find_all("input", {"name": "scopes"})
@@ -326,9 +318,9 @@ async def test_oauth_service_roles(
r.raise_for_status()
assert r.url == url
# verify oauth cookie is set
assert 'service-%s' % service.name in set(s.cookies.keys())
assert f'service-{service.name}' in set(s.cookies.keys())
# verify oauth state cookie has been consumed
assert 'service-%s-oauth-state' % service.name not in set(s.cookies.keys())
assert f'service-{service.name}-oauth-state' not in set(s.cookies.keys())
# second request should be authenticated, which means no redirects
r = await s.get(url, allow_redirects=False)
@@ -410,16 +402,16 @@ async def test_oauth_access_scopes(
# we should be looking at the oauth confirmation page
assert urlparse(r.url).path == app.base_url + 'hub/api/oauth2/authorize'
# verify oauth state cookie was set at some point
assert set(r.history[0].cookies.keys()) == {'service-%s-oauth-state' % service.name}
assert set(r.history[0].cookies.keys()) == {f'service-{service.name}-oauth-state'}
# submit the oauth form to complete authorization
r = await s.post(r.url, data={"_xsrf": s.cookies["_xsrf"]})
r.raise_for_status()
assert r.url == url
# verify oauth cookie is set
assert 'service-%s' % service.name in set(s.cookies.keys())
assert f'service-{service.name}' in set(s.cookies.keys())
# verify oauth state cookie has been consumed
assert 'service-%s-oauth-state' % service.name not in set(s.cookies.keys())
assert f'service-{service.name}-oauth-state' not in set(s.cookies.keys())
# second request should be authenticated, which means no redirects
r = await s.get(url, allow_redirects=False)
@@ -507,8 +499,8 @@ async def test_oauth_cookie_collision(
name = 'mypha'
create_user_with_scopes("access:services", name=name)
s.cookies = await app.login_user(name)
state_cookie_name = 'service-%s-oauth-state' % service.name
service_cookie_name = 'service-%s' % service.name
state_cookie_name = f'service-{service.name}-oauth-state'
service_cookie_name = f'service-{service.name}'
url_1 = url + "?oauth_test=1"
oauth_1 = await s.get(url_1)
assert state_cookie_name in s.cookies
@@ -590,7 +582,7 @@ async def test_oauth_logout(app, mockservice_url, create_user_with_scopes):
     4. cache hit
     """
     service = mockservice_url
-    service_cookie_name = 'service-%s' % service.name
+    service_cookie_name = f'service-{service.name}'
     url = url_path_join(public_url(app, mockservice_url), 'owhoami/?foo=bar')
     # first request is only going to set login cookie
     s = AsyncSession()


@@ -23,8 +23,7 @@ from ..spawner import SimpleLocalProcessSpawner, Spawner
 from ..user import User
 from ..utils import AnyTimeoutError, maybe_future, new_token, url_path_join
 from .mocking import public_url
-from .test_api import add_user
-from .utils import async_requests
+from .utils import add_user, async_requests, find_user

 _echo_sleep = """
 import sys, time
@@ -221,8 +220,8 @@ def test_string_formatting(db):
     name = s.user.name
     assert s.notebook_dir == 'user/{username}/'
     assert s.default_url == '/base/{username}'
-    assert s.format_string(s.notebook_dir) == 'user/%s/' % name
-    assert s.format_string(s.default_url) == '/base/%s' % name
+    assert s.format_string(s.notebook_dir) == f'user/{name}/'
+    assert s.format_string(s.default_url) == f'/base/{name}'


 async def test_popen_kwargs(db):
@@ -496,7 +495,7 @@ async def test_hub_connect_url(db):
     assert env["JUPYTERHUB_API_URL"] == "https://example.com/api"
     assert (
         env["JUPYTERHUB_ACTIVITY_URL"]
-        == "https://example.com/api/users/%s/activity" % name
+        == f"https://example.com/api/users/{name}/activity"
     )
@@ -598,3 +597,123 @@ def test_spawner_server(db):
     spawner.server = Server.from_url("http://1.2.3.4")
     assert spawner.server is not None
     assert spawner.server.ip == "1.2.3.4"
+
+
+async def test_group_override(app):
+    app.load_groups = {
+        "admin": {"users": ["admin"]},
+        "user": {"users": ["admin", "user"]},
+    }
+    await app.init_groups()
+    group_overrides = {
+        "01-admin-mem-limit": {
+            "groups": ["admin"],
+            "spawner_override": {"start_timeout": 120},
+        }
+    }
+    admin_user = find_user(app.db, "admin")
+    s = Spawner(user=admin_user)
+    s.start_timeout = 60
+    s.group_overrides = group_overrides
+    await s.apply_group_overrides()
+    assert s.start_timeout == 120
+
+    non_admin_user = find_user(app.db, "user")
+    s = Spawner(user=non_admin_user)
+    s.start_timeout = 60
+    s.group_overrides = group_overrides
+    await s.apply_group_overrides()
+    assert s.start_timeout == 60
+
+
+async def test_group_override_lexical_ordering(app):
+    app.load_groups = {
+        "admin": {"users": ["admin"]},
+        "user": {"users": ["admin", "user"]},
+    }
+    await app.init_groups()
+    group_overrides = {
+        # this should be applied last, even though it is specified first,
+        # due to lexical ordering based on key names
+        "02-admin-mem-limit": {
+            "groups": ["admin"],
+            "spawner_override": {"start_timeout": 300},
+        },
+        "01-admin-mem-limit": {
+            "groups": ["admin"],
+            "spawner_override": {"start_timeout": 120},
+        },
+    }
+    admin_user = find_user(app.db, "admin")
+    s = Spawner(user=admin_user)
+    s.start_timeout = 60
+    s.group_overrides = group_overrides
+    await s.apply_group_overrides()
+    assert s.start_timeout == 300
+
+
+async def test_group_override_dict_merging(app):
+    app.load_groups = {
+        "admin": {"users": ["admin"]},
+        "user": {"users": ["admin", "user"]},
+    }
+    await app.init_groups()
+    group_overrides = {
+        "01-admin-env-add": {
+            "groups": ["admin"],
+            "spawner_override": {"environment": {"AM_I_ADMIN": "yes"}},
+        },
+        "02-user-env-add": {
+            "groups": ["user"],
+            "spawner_override": {"environment": {"AM_I_USER": "yes"}},
+        },
+    }
+    admin_user = find_user(app.db, "admin")
+    s = Spawner(user=admin_user)
+    s.group_overrides = group_overrides
+    await s.apply_group_overrides()
+    assert s.environment["AM_I_ADMIN"] == "yes"
+    assert s.environment["AM_I_USER"] == "yes"
+
+    non_admin_user = find_user(app.db, "user")
+    s = Spawner(user=non_admin_user)
+    s.group_overrides = group_overrides
+    await s.apply_group_overrides()
+    assert s.environment["AM_I_USER"] == "yes"
+    assert "AM_I_ADMIN" not in s.environment
+
+
+async def test_group_override_callable(app):
+    app.load_groups = {
+        "admin": {"users": ["admin"]},
+        "user": {"users": ["admin", "user"]},
+    }
+    await app.init_groups()
+
+    def group_overrides(spawner):
+        return {
+            "01-admin-mem-limit": {
+                "groups": ["admin"],
+                "spawner_override": {"start_timeout": 120},
+            }
+        }
+
+    admin_user = find_user(app.db, "admin")
+    s = Spawner(user=admin_user)
+    s.start_timeout = 60
+    s.group_overrides = group_overrides
+    await s.apply_group_overrides()
+    assert s.start_timeout == 120
+
+    non_admin_user = find_user(app.db, "user")
+    s = Spawner(user=non_admin_user)
+    s.start_timeout = 60
+    s.group_overrides = group_overrides
+    await s.apply_group_overrides()
+    assert s.start_timeout == 60

Some files were not shown because too many files have changed in this diff.