Compare commits


456 Commits
0.6.0 ... 0.7.1

Author SHA1 Message Date
Min RK
94978ea9e0 release 0.7.1 2017-01-02 13:53:43 +01:00
Min RK
bf6999e439 changelog for 0.7.1 2017-01-02 13:53:43 +01:00
Carol Willing
020ee7378f Merge pull request #916 from rachmaninovquartet/master
Added Toree troubleshooting to docs
2016-12-22 13:56:51 -08:00
Min RK
e4a0569961 Merge pull request #915 from jupyterhub/willingc-patch-1
Update README to clarify docker image contents
2016-12-22 16:43:02 +01:00
Ian Maloney
4ff525d5bd updated docs/source/troubleshooting.md per conversation with @willingc in issue 889 2016-12-21 15:21:50 -05:00
Carol Willing
37a31b01b2 Update README to clarify docker image contents
Addresses #879 and #772 re: confusion about the docker image contents
2016-12-21 10:46:30 -08:00
Carol Willing
1604cb1b0b Merge pull request #914 from minrk/update-bootprint
fix rest-api doc building
2016-12-21 10:29:08 -08:00
Min RK
45702ac18c update bootprint to 0.10
0.8 has stopped working for some reason
2016-12-21 14:51:12 +01:00
Min RK
c81e9d60e4 fix rest-api link
link to REST API, not Python API
2016-12-21 14:51:12 +01:00
Carol Willing
224865b894 Merge pull request #910 from minrk/cleanup-server-token
Avoid cleaning up API tokens for Spawners that will resume
2016-12-20 08:29:06 -08:00
Min RK
3b3bc8224b comment review 2016-12-20 16:41:26 +01:00
Carol Willing
c56dc2ea6f Merge pull request #911 from jjaraalm/master
Update Service Docs

Closes #908
2016-12-19 10:28:30 -08:00
jjaraalm
62202bbb74 Revert "Revert "Update service docs""
This reverts commit 7ba28c0207.
2016-12-19 13:00:48 -05:00
jjaraalm
7ba28c0207 Revert "Update service docs"
This reverts commit 9392a29dad.
2016-12-19 12:59:42 -05:00
jjaraalm
9392a29dad Update service docs
Fixes #908
2016-12-19 12:56:26 -05:00
Min RK
72ab8f99ec Avoid cleaning up API tokens for Spawners that will resume
in which case the previous API token should be left alone.
2016-12-19 10:50:25 +01:00
Min RK
fcf32c7e50 Merge pull request #909 from willingc/update-travis
Add 3.6 to travis
2016-12-19 09:59:47 +01:00
Carol Willing
da451d6552 Add 3.6 to travis 2016-12-18 21:26:52 -08:00
Carol Willing
662b1a4d4a Merge pull request #902 from minrk/redirect-empty-msg
Don't warn about empty next_url
2016-12-09 08:04:56 -08:00
Min RK
732adea997 Don't warn about empty next_url
empty next_url is fine
2016-12-09 15:34:32 +01:00
Carol Willing
7e1dbf3515 Merge pull request #896 from minrk/whitelist-warning
Warn about single-character names in whitelist
2016-12-05 11:16:30 -06:00
Min RK
65b92ec246 Warn about single-character names in whitelist
likely cause is `set('string')` typo instead of `set(['string'])`,
so include that in the error message:

    whitelist contains single-character names: ['i', 'k', 'm', 'n', 'r']; did you mean set(['ikmnr']) instead of set('ikmnr')?
2016-12-05 09:46:52 +01:00
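A minimal Python sketch of the pitfall this warning targets; the `c.Authenticator.whitelist` line is an assumed config example, not part of the commit:

```python
# set() over a string iterates its characters, yielding single-character "names"
wrong = set('ikmnr')
print(sorted(wrong))    # ['i', 'k', 'm', 'n', 'r']

# intended: a whitelist containing one username
right = set(['ikmnr'])  # or simply {'ikmnr'}
print(right)            # {'ikmnr'}

# in jupyterhub_config.py the safe form looks like (assumed example):
# c.Authenticator.whitelist = {'alice', 'bob'}
```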
Min RK
dc42ee4779 typo in changelog link 2016-12-02 18:12:28 +01:00
Min RK
c04441c1b2 back to dev 2016-12-02 18:08:03 +01:00
Min RK
c3faef8e2a release 0.7.0 2016-12-02 18:02:20 +01:00
Carol Willing
d2175635af Merge pull request #895 from minrk/release-0.7
Update changelog for 0.7 final
2016-12-02 10:47:21 -06:00
Min RK
1f7401cd14 Update changelog for 0.7 final 2016-12-02 17:35:19 +01:00
Min RK
c94b3e34d2 Merge pull request #894 from minrk/disable-token
disable unused token on singleuser-server
2016-12-02 17:01:50 +01:00
Carol Willing
566e1d05ea Merge pull request #893 from minrk/expanduser
call expanduser on singleuser notebook_dir
2016-12-01 21:53:24 -06:00
Min RK
0488d0bd73 call expanduser on singleuser notebook_dir
This copies validate_notebook_dir from notebook with one addition:
calling expanduser.
2016-12-01 22:04:18 +01:00
Min RK
ca31d9b426 disable token on singleuser-server
fixes confusing output about token access in notebook server startup
2016-12-01 21:59:44 +01:00
Min RK
8721f9010f Merge pull request #892 from yuvipanda/maybe-async
Document that authenticator's add_user may be a coroutine
2016-12-01 17:58:02 +01:00
YuviPanda
88de48ebac Document that authenticator's add_user may be a coroutine 2016-12-01 19:31:23 +05:30
Min RK
d5a6e2b2ac Merge pull request #886 from yuvipanda/spawner-docs 2016-11-30 13:48:05 +01:00
Min RK
2152a94156 review pass on spawner docstring changes
- small wording, spelling tweaks
- rst formatting fixes
- remove some spurious, cluttering newlines
- clearer traitlets default values on first line
2016-11-30 13:43:59 +01:00
Min RK
bc3824e9bf review pass on auth docstrings 2016-11-30 13:22:07 +01:00
YuviPanda
60bc92cf78 Spawner doc fixes per @willingc 2016-11-30 14:02:02 +05:30
YuviPanda
3b15467738 Clearer module docstring for spawner.py 2016-11-29 16:26:34 +08:00
YuviPanda
4970fe0a1c Add more docs for spawner base class 2016-11-29 16:25:15 +08:00
YuviPanda
7dbe2425b8 Fix typo 2016-11-29 16:00:25 +08:00
YuviPanda
433d44a642 Add docs for PAMAuthenticator 2016-11-29 15:58:35 +08:00
YuviPanda
7733d320d0 Add more docs to LocalAuthenticator 2016-11-29 15:56:16 +08:00
YuviPanda
20d367c2a8 Add more docs for authenticator base class 2016-11-29 15:55:32 +08:00
YuviPanda
4687fbe075 Add extended docs for LocalProcessSpawner too 2016-11-28 23:07:54 -08:00
YuviPanda
b0dc52781e Add info about shell expansion to cmd / args traitlets
We should probably standardize this too
2016-11-28 22:45:06 -08:00
YuviPanda
4f1f7d6b8f Add example use for default_url traitlet 2016-11-28 22:42:10 -08:00
YuviPanda
41f8608f4e Fix port config documentation to match reality 2016-11-28 22:41:47 -08:00
Min RK
ba3a8f2e76 Merge pull request #887 from yuvipanda/rename-spec
Rename MemorySpecification to ByteSpecification
2016-11-28 10:27:41 +01:00
YuviPanda
12e3a5496d Rename MemorySpecification to ByteSpecification 2016-11-27 17:57:34 -08:00
YuviPanda
280644bab5 Expand traitlet documentation for spawner base class 2016-11-27 17:53:41 -08:00
Carol Willing
bf28371356 Merge pull request #882 from minrk/alembic.ini
add alembic.ini to package_data
2016-11-22 08:10:43 -08:00
Carol Willing
ce237181f2 Merge pull request #881 from minrk/more-allow-async-whitelist
handle async check_whitelist in app
2016-11-22 07:46:27 -08:00
Min RK
85ca5a052e add alembic.ini to package_data 2016-11-22 16:21:03 +01:00
Min RK
db8b3dbce9 handle async check_whitelist in app
follow-up to previous PR
2016-11-22 16:06:08 +01:00
Min RK
9c2d56f015 Merge pull request #876 from jbweston/bugfix/whitelist-coroutine
allow `check_whitelist` to be a coroutine
2016-11-22 09:56:28 +01:00
Joseph Weston
d244a1e02f allow check_whitelist to be a coroutine
Some authenticators may have whitelist checking that requires
async operations.
2016-11-21 16:14:02 +01:00
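A sketch of what this enables, assuming a hypothetical authenticator whose whitelist check needs async I/O; `query_directory` is made up for illustration:

```python
from tornado import gen
from jupyterhub.auth import Authenticator

class DirectoryAuthenticator(Authenticator):
    """Hypothetical authenticator whose whitelist lives in an external service."""

    @gen.coroutine
    def check_whitelist(self, username):
        # query_directory stands in for an LDAP/HTTP lookup; not a real API
        allowed = yield self.query_directory(username)
        return allowed
```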
Min RK
9f134277a9 Merge pull request #872 from jupyterhub/willingc-patch-1
Change py.test to newer convention of pytest
2016-11-16 10:18:15 +01:00
Carol Willing
ef9aca7bcb Change py.test to newer convention of pytest 2016-11-15 14:13:03 -08:00
Min RK
32f39f23eb Merge pull request #871 from jupyterhub/willingc-patch-1
Add info on tests to README
2016-11-15 20:39:59 +01:00
Carol Willing
c9b2beb821 Add info on tests to README 2016-11-15 06:35:39 -08:00
Min RK
e9ad82e350 release 0.7b1 2016-11-12 18:36:36 -08:00
Min RK
347dd3cc0f prune docs/node_modules 2016-11-12 18:36:30 -08:00
Min RK
798346dbe8 Merge pull request #869 from willingc/doc-service
Edit Services doc content
2016-11-12 18:28:24 -08:00
Carol Willing
fd94c6de17 Fix missing link target 2016-11-12 18:16:59 -08:00
Carol Willing
3fc6fc32c5 Add review comment by @parente 2016-11-12 18:07:17 -08:00
Carol Willing
a1b6aa5537 Add troubleshooting commands 2016-11-12 18:01:53 -08:00
Carol Willing
f9965bb3c3 Add example of Service per @parente 2016-11-12 17:37:30 -08:00
Carol Willing
541997371c Fix broken or changed links 2016-11-12 17:17:00 -08:00
Carol Willing
522c3e5bee Edit Services doc content 2016-11-12 16:54:57 -08:00
Carol Willing
1baf434695 Merge pull request #868 from minrk/more-changes
Add a few more things in the changelog
2016-11-12 13:40:16 -08:00
Carol Willing
92db71f293 Merge pull request #867 from minrk/upgrade-db-backup
backup db file during upgrade-db
2016-11-12 13:39:39 -08:00
Min RK
b985f8384d Add a few more things in the changelog 2016-11-12 12:54:44 -08:00
Min RK
4c2d049e70 backup db file during upgrade-db 2016-11-12 12:44:59 -08:00
Carol Willing
605c4f121c Merge pull request #866 from parente/note-about-db-secrets
Add two short notes about db security
2016-11-12 12:03:21 -08:00
Peter Parente
4baf5035cb Reflow markdown for easier editing 2016-11-12 11:57:45 -08:00
Peter Parente
f8a57eb7d9 Add two short notes about db security 2016-11-12 11:49:17 -08:00
Min RK
93ac343493 Merge pull request #865 from willingc/doc-tidbits
Add documentation prior to 0.7 beta
2016-11-12 11:40:53 -08:00
Carol Willing
dc092186f0 Edit example for clarity 2016-11-12 11:27:36 -08:00
Carol Willing
6b7c319351 Add intro and standardize code format 2016-11-12 11:15:44 -08:00
Carol Willing
ef5885f769 Make minor edits 2016-11-12 11:15:02 -08:00
Peter Parente
0ffd53424d Merge pull request #861 from willingc/issuetempl
Add initial issue template
2016-11-11 13:36:11 -08:00
Carol Willing
5f464d01b4 Soften tone 2016-11-11 10:44:20 -08:00
Yuvi Panda
0a054cc651 Merge pull request #858 from willingc/post855-edits
Reflow text and minor edits following PR #855
2016-11-11 09:54:14 -08:00
Carol Willing
348af48d45 Merge pull request #863 from minrk/checklist-checklist
Make the release checklist a GFM checklist
2016-11-11 09:31:47 -08:00
Min RK
4d03c00dab Make the release checklist a GFM checklist
so we can paste into a new issue when preparing for a release
2016-11-11 09:14:07 -08:00
Min RK
7a71074a55 Merge pull request #860 from willingc/release-checklist
Add a high-level release checklist
2016-11-11 08:23:12 -08:00
Carol Willing
5527a3e7dd Fix spacing 2016-11-11 07:39:23 -08:00
Carol Willing
f961800fa4 Add troubleshoot command per @parente review 2016-11-11 07:37:43 -08:00
Peter Parente
adbf961433 Merge pull request #859 from willingc/contrib-thanks
Update contributor thank you list
2016-11-11 07:30:02 -08:00
Carol Willing
73e130cb2c Add initial issue template 2016-11-11 07:03:05 -08:00
Carol Willing
a44f178b64 Fix typo 2016-11-11 03:56:42 -08:00
Carol Willing
057fe32e3b Add release checklist 2016-11-11 03:54:33 -08:00
Carol Willing
cad9ffa453 Update contributor thank you list 2016-11-11 03:29:42 -08:00
Carol Willing
a11193a240 Reflow text and minor edits 2016-11-11 03:13:51 -08:00
Carol Willing
ea61a580b3 Merge pull request #855 from yuvipanda/limits-env
Add docs for the LIMIT_ and GUARANTEE_ conventions
2016-11-11 02:43:05 -08:00
Min RK
0bf6db92dd typo in example 2016-11-10 17:07:48 -08:00
YuviPanda
b0f38e7626 Fix docs to match reality 2016-11-10 14:38:09 -08:00
YuviPanda
0f237f28e7 Rename the env variables
Match the traitlet names
2016-11-10 14:37:50 -08:00
YuviPanda
d63bd944ac Add clarifying comment about limit / guarantee env variables 2016-11-10 10:39:44 -08:00
YuviPanda
54e28d759d Some inline doc fixups 2016-11-10 10:31:04 -08:00
YuviPanda
a00c13ba67 Set allow_none=True for limit/guarantee floats 2016-11-09 09:41:54 -08:00
YuviPanda
b4bc5437dd Set allow_none = True as default for MemorySpecification 2016-11-08 22:43:47 -08:00
Min RK
13bc0397f6 Merge pull request #767 from willingc/upgrade723
Add document on upgrading JupyterHub and its db
2016-11-08 18:13:27 -08:00
YuviPanda
9eb30f6ff6 Add resource limits / guarantees consistently to jupyterhub
- Allows us to standardize this on the spawner base class,
  so there's a consistent interface for different spawners
  to implement this.
- Specify the supported suffixes and various units we accept
  for memory and cpu units.
- Standardize the way we expose resource limit / guarantees
  to single-user servers
2016-11-08 17:17:10 -08:00
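A sketch of the resulting configuration surface; values are illustrative, and enforcement is left to the individual spawner (LocalProcessSpawner does not enforce them):

```python
# jupyterhub_config.py (illustrative values)
c.Spawner.mem_limit = '1G'        # byte specifications accept K/M/G/T suffixes
c.Spawner.mem_guarantee = '512M'
c.Spawner.cpu_limit = 2.0         # CPU limits/guarantees are floats
c.Spawner.cpu_guarantee = 0.5
# Per the LIMIT_/GUARANTEE_ convention referenced above, these are exposed to
# single-user servers as MEM_LIMIT / MEM_GUARANTEE / CPU_LIMIT / CPU_GUARANTEE
# environment variables.
```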
YuviPanda
17f20d8593 Add docs for the LIMIT_ and GUARANTEE_ conventions
https://github.com/jupyterhub/jupyterhub/issues/854 has
rationale for why, and links to PRs.
2016-11-08 16:19:25 -08:00
Carol Willing
cd23e086a8 Add an upgrade checklist 2016-11-08 12:04:57 -08:00
Carol Willing
03087f20fe Add additional database content from @minrk review 2016-11-08 11:51:42 -08:00
Carol Willing
f536eb4629 Change title 2016-11-08 10:50:49 -08:00
Carol Willing
f3e814aa8a Minor edits 2016-11-08 10:50:49 -08:00
Carol Willing
5fb0a6dffe Add note on databases 2016-11-08 10:50:49 -08:00
Carol Willing
c7ba86d1d8 Add upgrade instructions 2016-11-08 10:50:49 -08:00
Carol Willing
38dcc694b7 Add shutdown and upgrade steps 2016-11-08 10:50:49 -08:00
Carol Willing
fdfffefefa Update process steps 2016-11-08 10:50:49 -08:00
Carol Willing
4e7704afd9 Edit heading levels 2016-11-08 10:50:49 -08:00
Carol Willing
b52fcf4936 Add structure to upgrading doc 2016-11-08 10:50:49 -08:00
Carol Willing
539be2f08e Add basics for alembic 2016-11-08 10:50:49 -08:00
Carol Willing
29b2836c50 Add wip upgrade doc 2016-11-08 10:50:49 -08:00
Min RK
3a757d003a Merge pull request #852 from parente/use-conda-forge
[WIP] Update Dockerfile
2016-11-08 09:58:36 -08:00
Peter Parente
236802be1f Update Dockerfile
* Use nodejs, CHP from conda-forge
* Bump the version of conda used
2016-11-07 18:46:04 -08:00
Carol Willing
4a2c9e97c6 Merge pull request #844 from willingc/secure-doc
Reflow text in websecurity doc
2016-11-01 15:15:16 -07:00
Carol Willing
0444d8465c Reflow text in doc 2016-11-01 14:27:49 -07:00
Carol Willing
faef34e4ff Merge pull request #838 from minrk/ensure-strings
quotes around single-user CLI args
2016-11-01 14:05:56 -07:00
Carol Willing
c174ec42f0 Merge pull request #842 from minrk/generate-path-error
finish error message when generate-config path does not exist
2016-11-01 09:26:52 -07:00
Min RK
d484728de9 check directory existence when writing config file
rather than file

and put output on stderr with exit message
2016-11-01 14:47:44 +01:00
Min RK
7da7f7e074 quotes around single-user CLI args
avoids mishandling things such as integer-literals
2016-11-01 12:07:25 +01:00
Min RK
53bdcd7d74 Merge pull request #840 from parente/clear-services-cookie
Fix jupyter-services cookie reset on logout
2016-10-31 13:25:43 +01:00
Peter Parente
1849964699 Fix jupyter-services cookie reset on logout
It currently remains set after logout from the hub allowing the user to
continue to access any services.
2016-10-30 22:36:31 -04:00
Carol Willing
5163c7a97f Merge pull request #824 from minrk/allow-empty-state
Don't assume empty state means not running
2016-10-27 08:33:54 -07:00
Min RK
b9daef9947 docstring review 2016-10-27 11:41:23 +02:00
Carol Willing
f16e0488ab Merge pull request #837 from Scrypy/issue-821
Updated authenticators and spawner docs
2016-10-26 15:44:09 -07:00
Daniel Martinez
adc16be4dc Updated spawners docs 2016-10-26 16:50:25 -05:00
Daniel Martinez
3e4b4149de Updated authenticators docs 2016-10-26 16:48:15 -05:00
Min RK
c392bae7e4 Merge pull request #835 from willingc/check-return
Edit model check to be consistent for user and group
2016-10-26 23:25:24 +02:00
Carol Willing
2e5373aa37 Edit model check to be consistent for user and group 2016-10-26 12:03:53 -07:00
Min RK
5412cd414f Merge pull request #832 from willingc/replace-warn
Use warning instead of warn for logs
2016-10-26 13:26:41 +02:00
Carol Willing
d957c5158f Use warning instead of warn for logs 2016-10-26 04:06:29 -07:00
Carol Willing
4a622cb964 Merge pull request #831 from jupyterhub/willingc-patch-1
Remove duplicate word in docstring
2016-10-26 02:24:39 -07:00
Carol Willing
69e721de46 Remove duplicate word in docstring 2016-10-26 02:19:49 -07:00
Carol Willing
f3f130f452 Merge pull request #830 from minrk/services-todo
Flesh out custom services examples
2016-10-26 02:16:39 -07:00
Min RK
fd4a04e3f3 docs review 2016-10-26 10:22:54 +02:00
Min RK
85c040ab8e flesh out custom services doc 2016-10-25 13:28:13 +02:00
Min RK
2bb4cd4739 allow HubAuthenticated to check groups 2016-10-25 13:27:57 +02:00
Min RK
4c3b134f10 add flask whoami service
for a non-tornado example
2016-10-25 13:24:46 +02:00
Carol Willing
bb8536b553 Merge pull request #826 from Scrypy/issue-822
Updated spawner docs
2016-10-24 23:21:00 -07:00
Carol Willing
8998fd480c Merge pull request #829 from Todd-Z-Li/issue-823
Added funky ascii art to previous TODO messages.
2016-10-24 18:40:25 -07:00
Carol Willing
d948fed0b5 Merge pull request #828 from temogen/deldoc
Deleted IPython from howitworks doc.
2016-10-24 18:38:17 -07:00
Daniel Anthony Noventa
fcfe6314ac Deleted IPython from howitworks docs. 2016-10-24 19:13:57 -05:00
Todd
dcfe2aa792 Added funky ascii art to previous TODO messages. 2016-10-24 19:03:21 -05:00
Danowsky
85790ab9d8 Updated spawner docs 2016-10-24 18:57:17 -05:00
Min RK
adda2fcd90 Don't assume empty state means not running
Some Spawners may not need state,
and they should be allowed to resume on Hub restart as well.

Adds some detail about when .poll may be called and how it should behave in less obvious circumstances
2016-10-21 16:28:40 +02:00
Min RK
5604e983db Merge pull request #818 from minrk/unmanaged-no-start
don’t try to start unmanaged services
2016-10-19 10:44:14 +02:00
Min RK
386563a10a don’t try to start unmanaged services 2016-10-18 16:18:03 +02:00
Min RK
0e3c5cf625 statsd typo 2016-10-18 16:17:49 +02:00
Min RK
a3eb2d2b9a Merge pull request #815 from kinuax/fix-setting-in-configuration-example
Fix setting in configuration example
2016-10-14 13:03:45 +02:00
Asier
b6a8860a44 Fix setting in configuration example 2016-10-13 13:45:23 -05:00
Carol Willing
b8a649ae86 Add error message when generate config path does not exist 2016-10-13 07:20:38 -07:00
Min RK
7774bfc612 Merge pull request #811 from willingc/quick-install
Sync quick install steps with PyData tutorial
2016-10-13 10:56:55 +02:00
Carol Willing
9f76613aed Sync quick install steps with PyData tutorial 2016-10-12 18:06:11 -07:00
Min RK
f1ccbe4bed Merge pull request #807 from willingc/normalize-whitelist
Add tests for username normalization
2016-10-12 16:21:19 +02:00
Carol Willing
668d78f729 Add tests for username normalization 2016-10-11 16:44:24 -07:00
Min RK
0009b9a3d6 Merge pull request #805 from danielballan/template-vars
MNT: Add hub host and prefix to template vars in prep for JLab extension
2016-10-11 18:18:21 +02:00
danielballan
b2be07ea6a MNT: Add hub host and prefix to template vars in prep for JLab extension. 2016-10-11 11:27:50 -04:00
Min RK
74649eaad0 Merge pull request #804 from willingc/ssl-termination
Clarify deprecation of --no-ssl
2016-10-11 12:08:55 +02:00
Carol Willing
f33086aa13 Clarify deprecation of --no-ssl 2016-10-10 12:05:39 -07:00
Min RK
9c1cd960fc Merge pull request #801 from minrk/warn-about-direct-connect
try to detect and warn about connecting directly to the Hub
2016-10-10 10:36:50 +02:00
Min RK
3a5226ffa0 Merge pull request #802 from minrk/spawn-pending-finish
add User.waiting_for_response
2016-10-07 11:53:04 +02:00
Min RK
96a53f9921 Merge pull request #797 from ianabc/redirection_loop
spawn_pending set too soon causing redirect loop
2016-10-07 11:13:09 +02:00
Min RK
ff92ac9dad more mocking in tests
avoids no_patience state leaking into other tests
2016-10-07 10:59:32 +02:00
Min RK
933478bfff add waiting_for_response indicator on User
.spawn_pending used for the *whole* window, from request to responsive (added to proxy)
.waiting_for_response is just used for the window between Spawner.start returning (process started, http endpoint known) and http endpoint becoming responsive

.waiting_for_response will never be True while .spawn_pending is False
2016-10-07 10:59:05 +02:00
Min RK
7d996f91b0 try to detect and warn about connecting directly to the Hub
This is guaranteed to result in a redirect loop.
2016-10-07 10:16:21 +02:00
Min RK
c818cbb644 Merge pull request #799 from willingc/doc-install
Move README installation instructions to docs
2016-10-06 19:46:51 +02:00
Carol Willing
e638e5b684 Move README installation instructions to docs 2016-10-06 04:37:57 -07:00
Ian Allison
625e76ea40 spawn_pending set too soon causing redirect loop
Signed-off-by: Ian Allison <iana@pims.math.ca>
2016-10-05 13:28:52 -07:00
Min RK
f8229c9fb6 Merge pull request #793 from willingc/slimconfpy
Slim conf.py comments and options cruft
2016-10-04 15:04:32 +02:00
Min RK
47da422a93 Merge pull request #758 from willingc/update-changes
Add changes for 0.7 release
2016-10-04 14:47:00 +02:00
Carol Willing
3dd98bc0fc Slim conf.py comments and options cruft 2016-10-04 05:28:03 -07:00
Carol Willing
fa6e4aa449 Add pr 789 deprecate --no-ssl 2016-09-30 09:02:58 -07:00
Carol Willing
182472f921 Changes per @minrk review 2016-09-30 08:57:35 -07:00
Carol Willing
d99afe531d Add changes for 0.7 release 2016-09-30 08:57:35 -07:00
Carol Willing
b6b238073f Merge pull request #789 from minrk/deprecate-no-ssl
Deprecate `--no-ssl`
2016-09-30 08:42:07 -07:00
Min RK
a4c696d3bd Merge pull request #788 from willingc/warehouse
Update link to docs
2016-09-30 17:03:34 +02:00
Min RK
bce767120c Merge pull request #785 from willingc/devclarity
Clarify docstring
2016-09-30 16:58:23 +02:00
Min RK
6a9f346b21 Deprecate --no-ssl
it's unnecessarily pedantic. Just warn instead.
2016-09-30 16:16:33 +02:00
Carol Willing
d4646e1caa Update link 2016-09-28 20:54:57 -07:00
Carol Willing
77f0e00695 Clarify docstring 2016-09-28 07:36:29 -07:00
Carol Willing
26a6c89b3a Merge pull request #778 from minrk/shutdown-services
cleanup managed services in shutdown
2016-09-27 09:53:50 -07:00
Carol Willing
34297b82b3 Merge pull request #777 from minrk/service-cookie
Work on service authentication
2016-09-27 09:53:12 -07:00
Carol Willing
70727c4940 Merge pull request #776 from minrk/cleanup-on-start
remove stopped users from proxy on startup
2016-09-27 09:51:01 -07:00
Min RK
56080e5436 Merge pull request #782 from spoorthyv/master
Updated Logos To Match New Brand Guidelines
2016-09-27 15:41:44 +02:00
spoorthyv
309b1bda75 Updated Logos 2016-09-26 15:56:11 -07:00
Min RK
f3ebb694b4 Merge pull request #780 from minrk/travis-no-verbose-pip
remove -v from pip install on travis
2016-09-26 17:06:44 +02:00
Min RK
f35c14318a Merge pull request #779 from minrk/docker-cdn
Dockerfile: set debian CDN
2016-09-26 17:06:01 +02:00
Min RK
b60f2e8233 remove -v from pip install on travis
it makes way too much noise
2016-09-26 17:03:51 +02:00
Min RK
f1a55e31ce Dockerfile: set debian CDN
because the default httpredir fails with some regularity

based on info from http://deb.debian.org
2016-09-26 16:58:31 +02:00
Min RK
2432611264 cleanup managed services in shutdown
don’t leave them running
2016-09-26 15:20:34 +02:00
Min RK
729b608eff Fix setting cookie for services
and exercise it in tests
2016-09-26 14:30:00 +02:00
Min RK
eb3252da28 simplify whoami service example
rely on defaults in HubAuthenticated to show how simple it can be
2016-09-26 14:18:54 +02:00
Min RK
a9e9338ee4 get HubAuth defaults from service env variables
allows use of HubAuthenticated without any arguments
2016-09-26 14:13:04 +02:00
Min RK
aad063e3cd remove stopped users from proxy on startup
We already added running users, but we didn't handle removing users from the proxy
if the user's server was stopped (e.g. while the Hub was restarting).
2016-09-26 13:20:42 +02:00
Min RK
be00265d1a Merge pull request #762 from willingc/swagger
Edit descriptions in API spec for user clarity
2016-09-25 14:39:42 +02:00
Min RK
335ba4f453 Merge pull request #771 from willingc/faq-adds
Added navigation links and workshop best practices
2016-09-25 14:38:45 +02:00
Carol Willing
5a4f3a4910 Added navigation links and workshop best practices 2016-09-22 10:02:29 -07:00
Carol Willing
7ee4be0f13 Remove api review notes doc 2016-09-22 09:13:39 -07:00
Carol Willing
10c3fbe5cf Add changes per @minrk 2016-09-22 09:12:26 -07:00
Carol Willing
13826a41a1 Merge pull request #769 from minrk/service-start-yield
service.start is not a coroutine
2016-09-22 03:21:47 -07:00
Min RK
cb35026637 service.start is not a coroutine
don’t yield it
2016-09-22 12:04:31 +02:00
Min RK
24c080cf4a Merge pull request #768 from minrk/service-url
only set service URL env if there's a URL to set
2016-09-22 11:57:44 +02:00
Min RK
e9fc629285 only set service URL env if there's a URL to set
These fields are only relevant for services with a web endpoint
2016-09-21 12:39:07 +02:00
Min RK
150b67c1c9 Merge pull request #761 from willingc/apidocs
Update API docs
2016-09-21 10:34:57 +02:00
Carol Willing
acdee0ac29 Change notes from txt to md 2016-09-19 12:05:26 -07:00
Carol Willing
193b236ef1 Add additional review questions re: API 2016-09-19 11:52:43 -07:00
Carol Willing
1851e6a29d Edit descriptions in API spec for user clarity 2016-09-19 10:49:56 -07:00
Carol Willing
74f086629c Update API docs 2016-09-19 08:42:28 -07:00
Min RK
33a59c8352 Merge pull request #757 from willingc/doc-contrib
Add contributor list to the docs and update the contents
2016-09-19 09:01:54 +02:00
Carol Willing
08644fea74 Add services to index/table of contents 2016-09-18 15:33:01 -07:00
Carol Willing
f878bf6ad3 Add contribution list to documentation 2016-09-18 15:29:54 -07:00
Carol Willing
651c457266 Add contributor list 2016-09-18 15:28:51 -07:00
Carol Willing
2dd3463ea8 Merge pull request #748 from minrk/string-formatting
Deprecate `%U` username substitution
2016-09-18 06:34:01 -04:00
Carol Willing
ad93af8cc8 Merge pull request #749 from minrk/single-user-help-all
exercise single-user help output
2016-09-18 06:33:38 -04:00
Min RK
080cf7a29b exercise single-user help output
and tweak some of its output
2016-09-15 13:04:09 +02:00
Min RK
b8f4803ef4 Deprecate %U username substitution
use Python format-strings instead.
2016-09-15 12:05:46 +02:00
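A before/after sketch of the substitution change; the paths are illustrative:

```python
# jupyterhub_config.py (illustrative paths)
# deprecated %U/%u substitution:
# c.Spawner.notebook_dir = '/home/%U/notebooks'
# recommended Python format-string style:
c.Spawner.notebook_dir = '/home/{username}/notebooks'
```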
Min RK
4a8f51ed6d Merge pull request #741 from willingc/add-lab
Add info on trying out JupyterLab
2016-09-13 10:31:49 +02:00
Carol Willing
7923074ed5 Add info on trying out JupyterLab 2016-09-12 22:22:22 -07:00
Min RK
834b2ba77d Merge pull request #739 from jupyterhub/fix-readme-1
Add --no-ssl to docker run command in README
2016-09-10 10:55:41 +02:00
Yuvi Panda
7897a13ca5 Add --no-ssl to docker run command
Otherwise this doesn't run by default, and someone in gitter ran into this earlier.
2016-09-09 10:57:19 -07:00
Min RK
7987011372 Merge pull request #738 from willingc/inspired-zulip
Add navigation to README
2016-09-09 13:38:22 +02:00
Min RK
d7a76077bd Merge pull request #734 from minrk/deprecated-local-spawner-subclasses
backward-compat for ip, port in LocalProcessSpawner subclasses
2016-09-09 13:37:58 +02:00
Min RK
62731cf489 Merge pull request #727 from willingc/servicespec
Edit Services document
2016-09-09 10:29:10 +02:00
Carol Willing
5d501bc465 Add navigation to README 2016-09-08 22:41:44 -07:00
Kyle Kelley
63a6841848 Merge pull request #737 from willingc/issue438
Add info on configuring pySpark executors on YARN
2016-09-08 17:59:24 -05:00
Carol Willing
403241bd98 Add reference to official pySpark docs 2016-09-08 15:28:54 -07:00
Carol Willing
de3fe88df6 Fix code indentation for markdown 2016-09-08 15:11:25 -07:00
Carol Willing
6a370286e1 Add info on setting pySpark executors on YARN 2016-09-08 15:09:20 -07:00
Carol Willing
491b7e7d11 Use Hub-Managed and Externally-Managed 2016-09-08 08:23:47 -07:00
Min RK
0b0db97117 Merge pull request #728 from willingc/cull-idle
Add readme to cull-idle example to demonstrate managed services
2016-09-08 16:57:16 +02:00
Min RK
42a993fd08 backward-compat for ip, port in LocalProcessSpawner subclasses
Subclasses prior to 0.6 may assume return value
of LocalProcessSpawner.start can be ignored
instead of passing it through.

For these cases, keep setting ip/port in the deprecated way
so that it still works with a warning,
rather than failing with the wrong port.
2016-09-08 16:54:52 +02:00
Carol Willing
fd1544bf41 Edits per @minrk's review 2016-09-08 07:16:25 -07:00
Carol Willing
ed36207328 Merge pull request #731 from willingc/issue654
Add config for default URL to FAQ
2016-09-08 07:05:56 -07:00
Carol Willing
a0b8ccf805 Add config for whole filesystem access and user home directory as default 2016-09-08 06:54:23 -07:00
Min RK
9d2278d29b Merge pull request #733 from willingc/issue594
Add troubleshooting info about sudospawner
2016-09-08 14:51:57 +02:00
Min RK
df42385d7e Merge pull request #732 from willingc/issue632
Add info on updates and Qualsys SSL analyzer to docs
2016-09-08 14:51:42 +02:00
Min RK
02796d4daa Merge pull request #730 from willingc/issue661
Add install instructions with no network to FAQ
2016-09-08 14:50:44 +02:00
Carol Willing
80c5f67335 Add troubleshooting info about sudospawner 2016-09-08 00:57:17 -07:00
Carol Willing
0b14e89404 Add info on updates and Qualsys SSL analyzer to docs 2016-09-07 22:00:33 -07:00
Carol Willing
f595b1ad59 Add clarification re: run on hub not each single user server 2016-09-07 21:21:10 -07:00
Carol Willing
80ca1eacc5 Add install instructions with no network to FAQ 2016-09-07 21:06:56 -07:00
Carol Willing
5b3ac6c840 Add readme to cull-idle example 2016-09-07 14:01:46 -07:00
Carol Willing
0000b7447a Make command consistent with examples/cull-idle 2016-09-07 13:52:33 -07:00
Carol Willing
a22060ca7f Edit Services document 2016-09-07 11:39:15 -07:00
Min RK
8ca321ecc3 Merge pull request #705 from minrk/actual-services
WIP: implement services API
2016-09-07 13:43:54 +02:00
Min RK
862cb3640b Merge pull request #722 from minrk/setuptools-no-egg
always install with setuptools
2016-09-07 13:38:06 +02:00
Min RK
51908c9673 clarifications from review 2016-09-07 13:19:09 +02:00
Min RK
9aa4046093 always install with setuptools
but not eggs (effectively require pip install .)
2016-09-05 15:46:20 +02:00
Min RK
acb49adfea Merge pull request #719 from Mistobaan/patch-1
fix docker repository
2016-09-05 10:38:12 +02:00
Fabrizio Milo
f345ad5422 fix docker repository 2016-09-02 14:45:16 -07:00
Min RK
5ad618bfc1 add API endpoint for services 2016-09-02 15:19:45 +02:00
Min RK
26b00578a1 remove redundant user_url utility
public_url works for users now
2016-09-02 13:22:49 +02:00
Min RK
c3111b04bb support services subdomain
- all services are on the 'services' domain, share the same cookie
2016-09-02 13:21:46 +02:00
Min RK
a61ba74360 Merge pull request #717 from minrk/hubauth-defaults
HubAuth login_url changes:
2016-09-02 12:05:17 +02:00
Min RK
4de93fd1d5 Merge pull request #718 from willingc/sdist-one
Remove zip from sdist build per PEP 527
2016-09-02 11:45:44 +02:00
Min RK
46bb7b05f4 strict host matching by including / 2016-09-02 11:44:51 +02:00
Carol Willing
1aa2cb1921 Remove zip from sdist build per PEP 527 2016-09-01 07:33:10 -07:00
Min RK
c4bfa63fd6 allow full URLs for login redirects iff they are for our host 2016-09-01 15:10:02 +02:00
Min RK
4c5d6167bd use just path for default hub auth login_url 2016-09-01 15:07:00 +02:00
Min RK
9a002c2445 update services doc with some feedback 2016-09-01 15:01:02 +02:00
Min RK
f97d32c5bd add services to the proxy
and start test coverage
2016-09-01 14:46:34 +02:00
Min RK
bac311677f Merge pull request #711 from willingc/update-change
Update changelog format
2016-08-29 12:01:08 +02:00
Carol Willing
94cb5b3a05 Update changelog format 2016-08-29 02:39:39 -07:00
Carol Willing
ed4f0ba014 Merge pull request #707 from willingc/mytheme
Update conda env and conf.py for clean build
2016-08-28 10:42:49 -07:00
Carol Willing
fd219b5fff Update conda env and conf.py for clean build 2016-08-28 10:08:00 -07:00
Min RK
140c4f2909 use services API in cull-idle example 2016-08-27 13:23:45 +02:00
Min RK
a1c787ba5f basic implementation of managed services
- managed services are automatically restarted
- proxied services not there yet
2016-08-27 12:59:26 +02:00
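A sketch of registering a Hub-Managed service in the 0.7-era config, modeled on the cull-idle example referenced elsewhere in this log; keys and values are illustrative:

```python
# jupyterhub_config.py — register a managed service (illustrative)
c.JupyterHub.services = [
    {
        'name': 'cull-idle',
        'admin': True,
        'command': ['python', 'cull_idle_servers.py', '--timeout=3600'],
    }
]
# The Hub starts the command, restarts it if it exits, and provides it
# with an API token in its environment.
```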
Min RK
54c808fe98 Service specification document 2016-08-26 17:25:53 +02:00
Min RK
eaeec9f19b Merge pull request #693 from willingc/doc-revise
Documentation refresh
2016-08-24 23:05:54 +02:00
Min RK
21d25ac130 Merge pull request #689 from minrk/log-add-user-error
log errors adding users already in db
2016-08-21 22:19:39 +02:00
Min RK
eda21642bd log errors adding users already in db
avoids removal of system users preventing Hub startup
2016-08-21 22:07:46 +02:00
Carol Willing
aace54d5b2 Merge pull request #699 from jhamrick/swarm-docs
Remove link to SwarmSpawner
2016-08-19 20:41:18 -07:00
Jessica B. Hamrick
e460c00759 Remove link to SwarmSpawner 2016-08-20 02:02:08 +01:00
Carol Willing
678fd1cd08 Shorten name 2016-08-18 10:36:40 -07:00
Carol Willing
42c78f3c43 Drop back to old environment 2016-08-18 10:20:57 -07:00
Carol Willing
548e0f6153 Edits to technical overview 2016-08-18 09:58:37 -07:00
Carol Willing
31f63c737f Add image of JupyterHub parts to index 2016-08-18 09:15:48 -07:00
Carol Willing
71b35602d7 Edit grammar in index 2016-08-18 08:36:46 -07:00
Carol Willing
7c41a024ba Fix typo 2016-08-18 05:05:46 -07:00
Carol Willing
51097de43d Update contents format 2016-08-18 04:52:51 -07:00
Carol Willing
44e16d538d Edit and corrections 2016-08-18 04:52:22 -07:00
Carol Willing
f6517d01db Move 'Using API' to user guide 2016-08-18 04:51:48 -07:00
Carol Willing
039b925cf6 Edit config-examples 2016-08-18 04:50:11 -07:00
Carol Willing
bba5460236 Simplify formatting of troubleshooting doc 2016-08-18 04:49:30 -07:00
Carol Willing
e5d3705a1a Edit headings for authenticators and spawners docs 2016-08-18 04:48:43 -07:00
Carol Willing
7b80b95a49 Add checks for spelling 2016-08-18 04:47:21 -07:00
Carol Willing
75cb487ab3 Update conf 2016-08-17 15:11:57 -07:00
Carol Willing
eba4b3e8c7 More doc edits 2016-08-17 15:11:17 -07:00
Carol Willing
712b895d8e WIP refresh 2016-08-15 19:18:38 -07:00
Carol Willing
635fd9b2c3 Fix typo 2016-08-15 18:50:26 -07:00
Min RK
afcbdd9bc4 Merge pull request #678 from vilhelmen/swagger_fix
Swagger spec conformance
2016-08-04 13:06:00 +02:00
Will Starms
80fa5418b7 Fix missing description for response 2016-08-03 16:59:14 -05:00
Will Starms
b0a09c027d Fix invalid type bool->boolean 2016-08-03 16:57:17 -05:00
Kyle Kelley
4edf59efeb Merge pull request #675 from minrk/api-info
Add /api/ and /api/info endpoints
2016-08-02 22:26:10 -05:00
Min RK
9f0dec1247 ignore generated rest-api html 2016-08-01 15:16:26 +02:00
Min RK
2c47fd4a02 Add /api/ and /api/info endpoints
/api/ is not authenticated, and just reports JupyterHub's version for now.
/api/info is admin-only, and reports more detailed info about Python, authenticators/spawners in use, etc.
2016-08-01 15:15:59 +02:00
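A quick sketch of querying the two endpoints with `requests`; the Hub API address and the response shapes are assumed from the commit message:

```python
import requests

hub_api = 'http://127.0.0.1:8081/hub/api'

# /api/ is unauthenticated and reports the JupyterHub version
print(requests.get(hub_api + '/').json())   # e.g. {'version': '0.7.1'}

# /api/info is admin-only and reports details about Python,
# the authenticator and spawner in use, etc.
r = requests.get(hub_api + '/info',
                 headers={'Authorization': 'token <admin-api-token>'})
print(r.json())
```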
Min RK
9878f1e32d Document parameters to shutdown API 2016-08-01 15:12:05 +02:00
Min RK
5c396668ff Merge pull request #671 from vilhelmen/swagger_fix
Fix timestamp type in API spec
2016-08-01 13:39:10 +02:00
Min RK
5f12f9f2c3 Merge pull request #667 from vilhelmen/master
Proxy will no longer receive Hub's SIGINT
2016-08-01 11:04:39 +02:00
Will Starms
4974775cd9 Fix timestamp type 2016-07-31 19:23:59 -05:00
Will Starms
0cb777cd0f Switch to start_new_session 2016-07-29 13:43:09 -05:00
Min RK
a4bb25a75f Merge pull request #604 from minrk/service-token
add Services to db
2016-07-29 10:32:15 +02:00
Min RK
b3f117bc59 Merge pull request #669 from vilhelmen/swagger_fix
Fix invalid license object and bad JSON pointers in API spec
2016-07-29 10:25:39 +02:00
Will Starms
499ba89f07 Correct invalid JSON pointers 2016-07-28 22:25:38 -05:00
Will Starms
05d743f725 Correct invalid license object in API spec 2016-07-28 22:19:05 -05:00
Carol Willing
a347d56623 Merge pull request #668 from willingc/toc-tweak
Minor additions to work done by @iamed18 in PR#602
2016-07-28 14:33:36 -07:00
Carol Willing
172976208e Minor additions to work done by @iamed18 in PR#602 2016-07-28 14:27:19 -07:00
Carol Willing
b6db3f59a2 Merge pull request #602 from iamed18/master
Added nginx reverse proxy example to GettingStarted.md
2016-07-28 14:09:31 -07:00
Carol Willing
4b31279fc8 Merge branch 'iamed18-master'
Closes #509
2016-07-28 14:01:29 -07:00
Edward Leonard
bfef83cefc separated configuration examples into their own document
Merge conflict resolved by @willingc
2016-07-28 13:58:20 -07:00
Edward Leonard
07d599fed2 added code-block ends
forgot them in last commit
2016-07-28 13:56:15 -07:00
Edward Leonard
0412407558 added example config with nginx reverse proxy 2016-07-28 13:56:15 -07:00
Edward Leonard
4c568b46d6 separated configuration examples into their own document 2016-07-28 14:17:51 -05:00
Michael Milligan
d92fcf5827 batchspawner URL change 2016-07-28 13:54:34 -05:00
Will Starms
36f3abbfc7 Proxy will no longer receive Hub's SIGINT #665 2016-07-28 13:04:55 -05:00
Min RK
49a45b13e6 debug installation on travis 2016-07-28 17:23:44 +02:00
Min RK
dfa13cb2c5 avoid creating duplicate users in test_api
now that we check!
2016-07-28 17:23:44 +02:00
Min RK
fd3b959771 add api_tokens.service_id column with alembic 2016-07-28 17:23:44 +02:00
Min RK
39a80edb74 async fixes in test_init_tokens 2016-07-28 17:23:44 +02:00
Min RK
2a35d1c8a6 add service API tokens
service_tokens supersedes api_tokens,
since they now map to a new services collection,
rather than regular Hub usernames.

Services in the ORM have:

- API tokens
- servers (multiple, can be 0)
- pid (0 if not managed)
2016-07-28 17:23:44 +02:00
Min RK
81350322d7 Merge pull request #660 from willingc/remove-badge
Remove requires.io badge
2016-07-26 12:02:33 +02:00
Min RK
50c2528359 Merge pull request #659 from willingc/fix-restlink
Remove link and reflow text
2016-07-26 12:02:24 +02:00
Min RK
77bac30654 Merge pull request #650 from minrk/return-ip-port
return (ip, port) from Spawner.start
2016-07-26 12:02:13 +02:00
Carol Willing
41fafc74cf Merge pull request #662 from mwmarkland/master
Fix typo regarding user's interactions with PATH
2016-07-25 17:58:48 -07:00
Matthew Markland
c6281160fa Fix typo regarding user 2016-07-25 14:55:17 -05:00
Min RK
3159b61ae7 return (ip, port) from Spawner.start
removes the need for Spawners to set db state themselves in most cases

Should be backward-compatible with warnings.
2016-07-25 16:54:15 +02:00
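A sketch of the new start contract for a custom spawner; `launch_somewhere` is a hypothetical helper:

```python
from tornado import gen
from jupyterhub.spawner import Spawner

class MySpawner(Spawner):
    @gen.coroutine
    def start(self):
        # launch_somewhere is hypothetical: start the process/container
        # and learn where it is listening
        ip, port = yield self.launch_somewhere()
        # 0.7: return (ip, port) instead of writing server state directly;
        # older spawners that ignore the return value get a warning
        return (ip, port)
```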
Carol Willing
11278ddb26 Remove requires.io badge 2016-07-25 07:45:34 -07:00
Carol Willing
e299a6c279 Remove link and reflow text 2016-07-25 07:41:28 -07:00
Min RK
22ff5f3d91 Merge pull request #635 from minrk/traitlets-4.2-singleuser
use traitlets 4.2 API in singleuser script
2016-07-25 16:29:07 +02:00
Carol Willing
a3e8bd346f Merge pull request #656 from minrk/rest-api-docs
Add REST API to docs
2016-07-25 07:09:16 -07:00
Min RK
592a084a28 set API token in single-user-spawner test 2016-07-25 15:57:43 +02:00
Min RK
c27e59b0f9 better exit message if JPY_API_TOKEN is undefined. 2016-07-25 15:27:32 +02:00
Min RK
1c9bc1b133 traitlets 4.2 API in singleuser script 2016-07-25 15:27:32 +02:00
Min RK
be4f4853cf Merge pull request #655 from willingc/doc-rest
Add links to REST API docs
2016-07-25 10:51:08 +02:00
Carol Willing
7d8895c2fb Add links to swagger docs for REST API 2016-07-23 18:47:23 -07:00
Min RK
5b8913be5b install nodejs with conda on RTD 2016-07-23 12:23:30 +02:00
Min RK
d03a1ee490 build rest-api on RTD 2016-07-23 12:05:50 +02:00
Min RK
19ae38c108 add REST API to docs
include local build, even though it's not as nice as petstore.
Due to that, link to petstore as well.
2016-07-23 12:05:25 +02:00
Carol Willing
9b71f11213 Merge pull request #651 from minrk/check-hub-ip
more informative error if single-user server can't connect to Hub for auth
2016-07-22 07:30:27 -07:00
Min RK
8fbaedf4d7 more informative error if single-user server can't connect to Hub for auth
error message points to hub_ip setting if Hub doesn't appear to be accessible at 127.0.0.1
2016-07-22 15:35:24 +02:00
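The setting that the improved error message points to, as a config sketch with an illustrative address:

```python
# jupyterhub_config.py — make the Hub API reachable from other hosts
# (illustrative address; an empty string listens on all interfaces)
c.JupyterHub.hub_ip = '10.0.1.4'
```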
Min RK
87ab07b322 Merge pull request #646 from datapolitan/fix_juptyter
fixing start_proxy() that misspelled the name of the project
2016-07-18 22:05:22 -07:00
Richard Dunks
f36a1e10e6 fixing start_proxy() that misspelled the name of the project 2016-07-17 23:23:32 -04:00
Carol Willing
5944671663 Merge pull request #644 from JamiesHQ/doctypo
Fix link
2016-07-16 11:21:55 -05:00
Jamie W
27dfd0edca fix link 2016-07-16 11:18:14 -05:00
Min RK
9dfc043352 Merge pull request #639 from ryanlovett/patch-1
Correct Spawner.start typo
2016-07-13 17:10:17 -05:00
Min RK
e8bd1520b2 Merge pull request #640 from minrk/travis-pre-for-nathaniel
install dependencies with pre
2016-07-13 17:10:03 -05:00
Min RK
a30b9976f5 install dependencies with pre
to catch bugs introduced by dependencies during prerelease
2016-07-13 16:19:15 -05:00
Ryan Lovett
954e5b3d5e Correct Spawner.start typo
As documented at https://github.com/jupyterhub/jupyterhub/blob/master/jupyterhub/spawner.py#L103
2016-07-13 10:23:16 -07:00
Min RK
7cd8aa266b Merge pull request #634 from minrk/cleanup-after-yourself
cleanup servers, api tokens after spawner shutdown
2016-07-11 14:29:20 -05:00
Min RK
d0449d136c cascade on API token delete 2016-07-11 10:44:55 -05:00
Min RK
ff9aeb70b4 try ondelete=SET NULL in foreign keys 2016-07-09 12:13:04 +02:00
Min RK
2eaecd22ba cleanup servers, api tokens after spawner shutdown
prevents growing table of unused servers and tokens
2016-07-08 16:50:43 +02:00
Carol Willing
4801d647c1 Merge pull request #627 from minrk/alembic-util
allow running alembic with `python -m jupyterhub.dbutil`
2016-07-01 17:17:54 -07:00
Carol Willing
b7e6fa3abe Merge pull request #626 from minrk/check-permissions
Add a permissions-check hint when spawn fails with PermissionError
2016-07-01 17:02:37 -07:00
Min RK
d590024c47 allow running alembic with python -m jupyterhub.dbutil
since we only have a generated alembic.ini, present a command that generates one and uses it.

enables generating new revisions with:

    python -m jupyterhub.dbutil revision -m msg
2016-07-01 14:38:31 +02:00
Min RK
f3f71c38c3 Merge pull request #620 from Fokko/fd-add-badge
Added requirements badge
2016-07-01 14:37:26 +02:00
Min RK
27125a169c Merge pull request #621 from minrk/user-redirect-handler
Add /user-redirect/ endpoint
2016-07-01 14:37:00 +02:00
Min RK
3f9205d405 Add a permissions-check hint when spawn fails with PermissionError 2016-07-01 14:36:34 +02:00
Carol Willing
96861dc2b0 Merge pull request #622 from minrk/getting-started-log
update log instructions in getting started
2016-06-24 08:52:31 -07:00
Min RK
cedaa184f1 update log instructions in getting started
use recommended output-piping instead of nonexistent log_file config
2016-06-24 17:15:50 +02:00
Fokko Driesprong
f491791081 Added requirements badge 2016-06-24 16:17:51 +02:00
Min RK
6bba1c474f Add /user-redirect/ endpoint
should avoid needing to cram user-detection / intent into other endpoints.
That functionality isn't removed,
but warnings are added indicating that /user-redirect/ should be used instead.
2016-06-24 16:08:30 +02:00
Carol Willing
357f6799b0 Merge pull request #613 from minrk/user-redirect-560
Finish up cross-user redirects
2016-06-16 08:32:03 -07:00
Carol Willing
ce3ea270f5 Merge pull request #612 from minrk/doc-links
fix links to authenticator/spawner pages from howitworks
2016-06-16 08:28:56 -07:00
Min RK
992717adc0 support cross-user redirects when JupyterHub is on a prefix 2016-06-16 15:42:00 +02:00
Min RK
993101710f fix links to authenticator/spawner pages from howitworks 2016-06-16 15:00:19 +02:00
Matthias Bussonnier
ac6fe61804 Merge pull request #609 from minrk/only-relevant-warning
hide http warning until it's relevant
2016-06-14 09:42:24 -07:00
Min RK
37aa1a291a Merge pull request #606 from minrk/rm-hub-prefix-option
disable hub_prefix config
2016-06-13 12:08:58 +02:00
Min RK
c6294f2763 Merge pull request #607 from Carreau/fix-bad-security-anchor-link
Fix bad anchor
2016-06-13 12:08:34 +02:00
Min RK
6e9a77f55f hide http warning until it's relevant
avoids flash of invalid warning when everything is correct
2016-06-13 12:07:52 +02:00
Min RK
799b407d89 Merge pull request #608 from Carreau/add-ssl-frontend-warning
Add a warning on login if page not over ssl.
2016-06-13 12:00:02 +02:00
Matthias Bussonnier
3ddfa5f939 Add a warning on login if page not over ssl.
The --no-ssl option in the backend makes sense, but too many
deployments are still not served over SSL because they underestimate / do not
understand the risks.
2016-06-12 13:24:46 -07:00
Matthias Bussonnier
5968661742 Fix bad anchor 2016-06-11 12:09:00 -07:00
Dara Adib
34592e3da5 Process single-user server redirects
Follow-up to #448.

If single-user notebook is running, it will redirect other
users to hub root with next argument, which was previously
ignored.
2016-06-10 17:15:48 +02:00
Min RK
5aea7eda96 disable hub_prefix config
it shouldn't be configurable
2016-06-10 17:14:47 +02:00
Edward Leonard
08024be1c0 added code-block ends
forgot them in last commit
2016-06-06 22:07:10 -05:00
Edward Leonard
39daff3099 added example config with nginx reverse proxy 2016-06-06 22:05:27 -05:00
Min RK
d4c0fe8679 Merge pull request #597 from minrk/single-user-service-auth
use HubAuth in single-user server
2016-06-06 13:31:40 +02:00
Min RK
c9ae45bef3 Merge pull request #599 from minrk/groups
Add groups
2016-06-04 21:22:29 +02:00
Min RK
503f21fd37 allow initializing groups from config
c.JupyterHub.load_groups creates groups and adds users to them.

It *does not* remove users from groups added previously.
2016-06-01 14:35:34 +02:00
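A config sketch of the option described above, with made-up group and user names:

```python
# jupyterhub_config.py
c.JupyterHub.load_groups = {
    'researchers': ['alice', 'bob'],
    'instructors': ['carol'],
}
# As noted in the commit message: groups and memberships listed here are
# created/added, but users are not removed from groups added previously.
```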
Min RK
6d106b24f4 add groups API 2016-06-01 14:04:32 +02:00
Min RK
71f47b7a70 add user groups 2016-06-01 13:47:53 +02:00
Min RK
844381e7c9 use HubAuthenticated in jupyterhub-singleuser 2016-05-31 13:20:21 +02:00
Min RK
267994b191 move singleuser script into the package 2016-05-31 12:59:38 +02:00
Min RK
cc2202c188 Merge pull request #554 from minrk/service-auth
Add HubAuth for authenticating services with JupyterHub
2016-05-31 12:58:51 +02:00
Min RK
4996a84ca0 Merge pull request #596 from minrk/hub-prefix-traitlet
fix _hub_prefix_changed signature
2016-05-31 12:58:37 +02:00
Min RK
3cefc2951c fix _hub_prefix_changed signature
wasn't updated for the new traitlets API

c/o @Milly
2016-05-31 11:33:12 +02:00
Min RK
835b4afc06 Merge pull request #593 from minrk/redirect-typo
only strip base_url if it's actually there
2016-05-31 11:32:42 +02:00
Min RK
146bef1d88 test hub-authenticated tornado handler 2016-05-30 13:32:10 +02:00
Min RK
ef9656eb8b add example service 2016-05-30 13:32:10 +02:00
Min RK
84868a6475 add login_url to HubAuth
needed for tornado redirects. Still not sure the best way to pass it to tornado app settings.
2016-05-30 13:32:10 +02:00
Min RK
9e9c6f2761 document services.auth 2016-05-30 13:32:10 +02:00
Min RK
19e8bdacfe Add HubAuth for authenticating tornado-based services with JupyterHub
- HubAuth implements request to identify users with the Hub
- HubAuthenticated is a mixin for tornado handlers
2016-05-30 13:32:10 +02:00
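A minimal tornado handler sketch using the mixin introduced here, in the spirit of the whoami example; the API token and URLs are assumed to come from the service environment, and the port is illustrative:

```python
from tornado import ioloop, web
from jupyterhub.services.auth import HubAuthenticated

class WhoAmIHandler(HubAuthenticated, web.RequestHandler):
    """Sketch of a Hub-authenticated service endpoint."""

    @web.authenticated
    def get(self):
        # the Hub's user model for the requester, as resolved via HubAuth
        self.write(self.get_current_user())

if __name__ == '__main__':
    app = web.Application([(r'/', WhoAmIHandler)])
    app.listen(10101)  # illustrative port
    ioloop.IOLoop.current().start()
```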
Min RK
c6640aa51d only strip base_url if it's actually there 2016-05-30 10:34:33 +02:00
Min RK
1514a2f2e2 ignore .cache 2016-05-27 16:12:54 +02:00
Min RK
9edb282067 Merge pull request #591 from minrk/clean-css-fix
require clean-css 3.4.13
2016-05-27 13:12:09 +02:00
Min RK
9ffe5e6187 require clean-css 3.4.13
fixes node 6 compatibility
2016-05-26 18:31:10 +02:00
Min RK
14662111a8 Merge pull request #508 from minrk/alembic
Use alembic for database migrations
2016-05-26 15:40:44 +02:00
Min RK
a7ea5774d9 test database upgrades with alembic 2016-05-26 15:32:57 +02:00
Min RK
c998458362 include old-jupyterhub.sqlite generated by
I don't like including this, but I don't know a better way to get a starting db
other than doing a complete installation of old JupyterHub in an env.

At least it's small.
2016-05-26 15:32:50 +02:00
Min RK
07ddede40c typo: write db url in alembic 2016-05-26 15:31:49 +02:00
Min RK
b8a6ac62e8 include alembic in package_data 2016-05-26 14:23:10 +02:00
Min RK
86e9a3217c add jupyterhub upgrade-db entry point
don't do automatic upgrades (yet)

I'm not sure if auto upgrades are a good idea or not.
2016-05-26 14:17:41 +02:00
Min RK
f591e6e3fb require alembic 2016-05-26 14:17:41 +02:00
Min RK
64dd1db327 add User.auth_state column
a place for storing authenticator state,
and a simple test case for alembic.
2016-05-26 14:17:41 +02:00
Min RK
b68569f61c init alembic
prepare for database migration
2016-05-26 14:17:40 +02:00
Min RK
3a52e3f4df Merge pull request #589 from minrk/test-with-base-url
Run tests with an encoded base_url
2016-05-26 14:14:18 +02:00
Min RK
05c268e190 Run tests with an encoded base_url
to ensure we get our escaping right

Mostly revealed fixes needed in tests so far, not code,
but should catch regressions.
2016-05-26 13:56:20 +02:00
Carol Willing
98937de278 Merge pull request #586 from jupyterhub/willingc-patch-1
Fix typo in README link
2016-05-25 09:36:41 -07:00
Carol Willing
ff35e3b93e Fix typo in README link
Extra 's' on jupyter-notebook removed.
2016-05-25 09:12:10 -07:00
Carol Willing
4eebc95109 Merge pull request #585 from willingc/link-me
Fix RTD links to PDF
2016-05-25 09:02:39 -07:00
Carol Willing
c708c2a3a0 Fix RTD links to PDF back to org 2016-05-25 08:46:54 -07:00
Kyle Kelley
35f8190128 Merge pull request #584 from willingc/wtd-readme
Edit README based on Write The Docs tips
2016-05-24 13:31:46 -07:00
Carol Willing
78b268ddef Edit README based on Write The Docs tips 2016-05-24 12:40:52 -07:00
Min RK
eb99060a25 Merge pull request #578 from willingc/doc-570
Add docs on chain certificates and configuration re: 570
2016-05-19 08:24:54 +02:00
Carol Willing
8e99f659f5 Fix link 2016-05-18 10:59:04 -07:00
Carol Willing
5c9e9d65b5 Add links to SSL section in docs 2016-05-18 10:52:49 -07:00
Carol Willing
3e768b7297 Add chained cert doc info from @ryanlovett and @LMtx 2016-05-18 10:34:38 -07:00
Kyle Kelley
aa2999210d Merge pull request #572 from PeterDaveHelloKitchen/image-optimize
Losslessly optimize images using Google zopflipng
2016-05-16 09:33:59 -05:00
Peter Dave Hello
be95a27597 optimize images 2016-05-16 21:46:53 +08:00
Kyle Kelley
5edcdd4fb2 Merge pull request #567 from minrk/less-271
require less-2.7.1
2016-05-12 07:12:32 -05:00
Min RK
b81586de0a require less-2.7.1
2.7.0 has compatibility problems with source maps
2016-05-12 10:32:52 +02:00
Carol Willing
e0f3e3b954 Merge pull request #559 from ozancaglayan/doc-fix-apitoken
doc: Add section about API tokens
2016-05-09 05:57:56 -07:00
Ozan Çağlayan
3037d264c3 docs: Fix last two typos 2016-05-05 15:41:38 +03:00
Ozan Çağlayan
17f1346c08 doc: address reviewers comments 2016-05-04 18:15:15 +03:00
Ozan Çağlayan
276aba9f85 doc: Add section about API tokens 2016-05-04 17:35:54 +03:00
Min RK
0ba63c42fd back to dev 2016-05-04 14:09:47 +02:00
Min RK
2985562c2f release 0.6.1 2016-05-04 14:08:39 +02:00
Min RK
754f850e95 changelog for 0.6.1 2016-05-04 14:08:39 +02:00
Min RK
dccb85d225 plural add-users ids 2016-05-04 13:57:16 +02:00
Min RK
a0e401bc87 Merge pull request #551 from minrk/proxy-error
Serve proxy error pages from the Hub
2016-05-04 12:34:10 +02:00
Min RK
c6885a2124 Merge pull request #552 from minrk/poll-and-notify
notice dead servers more often
2016-05-04 12:33:09 +02:00
Min RK
7528fb7d9b notice dead servers more often
call poll_and_notify to ensure triggering of dead-server events in a few places:

- `/hub/home` page view
- user start and stop API endpoints

This should avoid the failure to stop a server that's died uncleanly because the server hasn't noticed yet
2016-05-04 11:07:28 +02:00
Carol Willing
e7df5a299c Merge pull request #556 from minrk/shutdown-all
Add Stop All button to admin page
2016-05-03 05:58:41 -07:00
Min RK
ff997bbce5 Add Stop All button to admin page
for stopping all single-user servers at once
2016-05-03 13:25:12 +02:00
Min RK
1e21e00e1a return status from poll_and_notify
allows calling it directly
2016-04-27 14:28:23 +02:00
Min RK
77d3ee98f9 allow logo_url in template namespace to set the logo link 2016-04-27 14:06:51 +02:00
Min RK
1f861b2c90 server proxy error pages from the Hub 2016-04-27 14:06:29 +02:00
Carol Willing
14a00e67b4 Merge pull request #550 from daradib/typo
Fix docs typo for Spawner.disable_user_config
2016-04-26 15:28:02 -07:00
Dara Adib
14f63c168d Fix docs typo for Spawner.disable_user_config 2016-04-26 11:36:48 -07:00
Kyle Kelley
e70dbb3d32 Merge pull request #549 from minrk/optional-statsd
Make statsd an optional dependency
2016-04-26 07:46:28 -05:00
Min RK
b679275a68 remove unneeded codecov.yml
codecov team config suffices
2016-04-26 13:44:29 +02:00
Min RK
0c1478a67e Make statsd an optional dependency
only import it if it's used
2016-04-26 13:37:39 +02:00
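A sketch of the lazy-import pattern the commit describes, not the exact JupyterHub code:

```python
class NullStatsd:
    """No-op stand-in used when no statsd host is configured."""
    def timing(self, *args, **kwargs): pass
    def incr(self, *args, **kwargs): pass
    def decr(self, *args, **kwargs): pass

def make_statsd_client(host='', port=8125, prefix='jupyterhub'):
    if not host:
        return NullStatsd()
    import statsd  # only imported when statsd is actually configured
    return statsd.StatsClient(host, port, prefix=prefix)
```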
Min RK
d26e2346a2 Merge pull request #548 from minrk/jupyterhub-urls
fix a few more jupyter->jupyterhub URLs
2016-04-26 12:41:19 +02:00
Min RK
9a09c841b9 Merge pull request #547 from minrk/disable-codecov-comments
disable codecov PR comments
2016-04-26 12:41:02 +02:00
Min RK
f1d4f5a733 fix a few more jupyter->jupyterhub URLs
in README
2016-04-26 11:58:27 +02:00
Min RK
d970dd4c89 disable CodeCov PR comments
They've removed web app config, in favor of codecov.yml,
discarding our existing config,
which means coverage reports are showing up in most Jupyter PRs now.
2016-04-26 11:55:52 +02:00
Min RK
f3279bf849 Merge pull request #544 from rafael-ladislau/master
Fix multiple windows logout error
2016-04-26 11:41:53 +02:00
Rafael Ladislau
db0878a495 Fix multiple windows logout error
When you have two JupyterHub windows and log out successfully in one of them, clicking the logout button in the other window returns a 500 error.

It happened because operations were being performed on a None user object.
2016-04-25 13:31:39 -04:00
Min RK
c9b1042791 back to dev 2016-04-25 14:34:15 +02:00
Min RK
cd81320d8f push tags on circleci 2016-04-25 14:25:34 +02:00
109 changed files with 7023 additions and 1598 deletions

@@ -1,4 +1,4 @@
 [run]
 omit =
     jupyterhub/tests/*
-    jupyterhub/singleuser.py
+    jupyterhub/alembic/*

.github/issue_template.md

@@ -0,0 +1,29 @@
Hi! Thanks for using JupyterHub.

If you are reporting an issue with JupyterHub:

- Please use the [GitHub issue](https://github.com/jupyterhub/jupyterhub/issues)
  search feature to check if your issue has been asked already. If it has,
  please add your comments to the existing issue.

- Where applicable, please fill out the details below to help us troubleshoot
  the issue that you are facing. Please be as thorough as you are able to
  provide details on the issue.

**How to reproduce the issue**

**What you expected to happen**

**What actually happens**

**Share what version of JupyterHub you are using**

Running `jupyter troubleshoot` from the command line, if possible, and posting
its output would also be helpful.

```
Insert jupyter troubleshoot output here
```

.gitignore

@@ -1,10 +1,12 @@
 node_modules
 *.py[co]
 *~
+.cache
 .DS_Store
 build
 dist
 docs/_build
+docs/source/_static/rest-api
 .ipynb_checkpoints
 # ignore config file at the top-level of the repo
 # but not sub-dirs

@@ -2,6 +2,7 @@
 language: python
 sudo: false
 python:
+  - 3.6-dev
   - 3.5
   - 3.4
   - 3.3
@@ -10,7 +11,7 @@ before_install:
   - npm install -g configurable-http-proxy
   - git clone --quiet --depth 1 https://github.com/minrk/travis-wheels travis-wheels
 install:
-  - pip install -f travis-wheels/wheelhouse -r dev-requirements.txt .
+  - pip install --pre -f travis-wheels/wheelhouse -r dev-requirements.txt .
 script:
   - travis_retry py.test --cov jupyterhub jupyterhub/tests -v
 after_success:

CHECKLIST-Release.md

@@ -0,0 +1,26 @@
# Release checklist

- [ ] Upgrade Docs prior to Release
- [ ] Change log
- [ ] New features documented
- [ ] Update the contributor list - thank you page
- [ ] Upgrade and test Reference Deployments
- [ ] Release software
- [ ] Make sure 0 issues in milestone
- [ ] Follow release process steps
- [ ] Send builds to PyPI (Warehouse) and Conda Forge
- [ ] Blog post and/or release note
- [ ] Notify users of release
- [ ] Email Jupyter and Jupyter In Education mailing lists
- [ ] Tweet (optional)
- [ ] Increment the version number for the next release
- [ ] Update roadmap

View File

@@ -24,11 +24,13 @@
 FROM debian:jessie
 MAINTAINER Jupyter Project <jupyter@googlegroups.com>
-# install nodejs, utf8 locale
+# install nodejs, utf8 locale, set CDN because default httpredir is unreliable
 ENV DEBIAN_FRONTEND noninteractive
-RUN apt-get -y update && \
+RUN REPO=http://cdn-fastly.deb.debian.org && \
+    echo "deb $REPO/debian jessie main\ndeb $REPO/debian-security jessie/updates main" > /etc/apt/sources.list && \
+    apt-get -y update && \
     apt-get -y upgrade && \
-    apt-get -y install npm nodejs nodejs-legacy wget locales git &&\
+    apt-get -y install wget locales git bzip2 &&\
     /usr/sbin/update-locale LANG=C.UTF-8 && \
     locale-gen C.UTF-8 && \
     apt-get remove -y locales && \
@@ -36,18 +38,15 @@ RUN apt-get -y update && \
     rm -rf /var/lib/apt/lists/*
 ENV LANG C.UTF-8
-# install Python with conda
-RUN wget -q https://repo.continuum.io/miniconda/Miniconda3-4.0.5-Linux-x86_64.sh -O /tmp/miniconda.sh && \
-    echo 'a7bcd0425d8b6688753946b59681572f63c2241aed77bf0ec6de4c5edc5ceeac */tmp/miniconda.sh' | shasum -a 256 -c - && \
+# install Python + NodeJS with conda
+RUN wget -q https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh -O /tmp/miniconda.sh && \
+    echo 'd0c7c71cc5659e54ab51f2005a8d96f3 */tmp/miniconda.sh' | md5sum -c - && \
     bash /tmp/miniconda.sh -f -b -p /opt/conda && \
-    /opt/conda/bin/conda install --yes python=3.5 sqlalchemy tornado jinja2 traitlets requests pip && \
+    /opt/conda/bin/conda install --yes -c conda-forge python=3.5 sqlalchemy tornado jinja2 traitlets requests pip nodejs configurable-http-proxy && \
     /opt/conda/bin/pip install --upgrade pip && \
     rm /tmp/miniconda.sh
 ENV PATH=/opt/conda/bin:$PATH
-# install js dependencies
-RUN npm install -g configurable-http-proxy && rm -rf ~/.npm
 ADD . /src/jupyterhub
 WORKDIR /src/jupyterhub

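For orientation, a hedged sketch of building and running this image locally; the image tag, published port, and the config path inside the container are illustrative assumptions, not taken from the Dockerfile above:

```bash
# Build the image from the repository root (tag name is illustrative)
docker build -t jupyterhub-dev .

# Run the Hub on the default port 8000; mounting a local config file is optional
# (/srv/jupyterhub as the runtime config location is an assumption about the image layout)
docker run -d --name jupyterhub -p 8000:8000 \
    -v "$PWD/jupyterhub_config.py:/srv/jupyterhub/jupyterhub_config.py" \
    jupyterhub-dev jupyterhub
```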
MANIFEST.in

@@ -13,6 +13,7 @@ graft share
 # Documentation
 graft docs
+prune docs/node_modules
 # prune some large unused files from components
 prune share/jupyter/hub/static/components/bootstrap/css

README.md

@@ -1,75 +1,174 @@
-# JupyterHub: A multi-user server for Jupyter notebooks
+**[Technical overview](#technical-overview)** |
+**[Prerequisites](#prerequisites)** |
+**[Installation](#installation)** |
+**[Running the Hub Server](#running-the-hub-server)** |
+**[Configuration](#configuration)** |
+**[Docker](#docker)** |
+**[Contributing](#contributing)** |
+**[License](#license)** |
+**[Getting help](#getting-help)**
-Questions, comments? Visit our Google Group:
-[![Google Group](https://img.shields.io/badge/-Google%20Group-lightgrey.svg)](https://groups.google.com/forum/#!forum/jupyter)
+# [JupyterHub](https://github.com/jupyterhub/jupyterhub)
 [![Build Status](https://travis-ci.org/jupyterhub/jupyterhub.svg?branch=master)](https://travis-ci.org/jupyterhub/jupyterhub)
 [![Circle CI](https://circleci.com/gh/jupyterhub/jupyterhub.svg?style=shield&circle-token=b5b65862eb2617b9a8d39e79340b0a6b816da8cc)](https://circleci.com/gh/jupyterhub/jupyterhub)
+[![codecov.io](https://codecov.io/github/jupyterhub/jupyterhub/coverage.svg?branch=master)](https://codecov.io/github/jupyterhub/jupyterhub?branch=master)
 [![Documentation Status](https://readthedocs.org/projects/jupyterhub/badge/?version=latest)](http://jupyterhub.readthedocs.org/en/latest/?badge=latest)
-[![codecov.io](https://codecov.io/github/jupyter/jupyterhub/coverage.svg?branch=master)](https://codecov.io/github/jupyter/jupyterhub?branch=master)
+[![Google Group](https://img.shields.io/badge/-Google%20Group-lightgrey.svg)](https://groups.google.com/forum/#!forum/jupyter)
+With [JupyterHub](https://jupyterhub.readthedocs.io) you can create a
+**multi-user Hub** which spawns, manages, and proxies multiple instances of the
+single-user [Jupyter notebook *(IPython notebook)*](https://jupyter-notebook.readthedocs.io) server.
-JupyterHub, a multi-user server, manages and proxies multiple instances of the single-user <del>IPython</del> Jupyter notebook server.
+JupyterHub provides **single-user notebook servers to many users**. For example,
+JupyterHub could serve notebooks to a class of students, a corporate
+workgroup, or a science research group.
+by [Project Jupyter](https://jupyter.org)
+----
-Three actors:
-- multi-user Hub (tornado process)
-- configurable http proxy (node-http-proxy)
-- multiple single-user IPython notebook servers (Python/IPython/tornado)
-Basic principles:
-- Hub spawns proxy
-- Proxy forwards ~all requests to hub by default
+## Technical overview
+Three main actors make up JupyterHub:
+- multi-user **Hub** (tornado process)
+- configurable http **proxy** (node-http-proxy)
+- multiple **single-user Jupyter notebook servers** (Python/IPython/tornado)
+JupyterHub's basic principles for operation are:
+- Hub spawns a proxy
+- Proxy forwards all requests to Hub by default
 - Hub handles login, and spawns single-user servers on demand
-- Hub configures proxy to forward url prefixes to single-user servers
+- Hub configures proxy to forward url prefixes to the single-user servers
+JupyterHub also provides a
+[REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default)
+for administration of the Hub and users.
+----
-## Dependencies
-JupyterHub itself requires [Python](https://www.python.org/downloads/) ≥ 3.3. To run the single-user servers (which may be on the same system as the Hub or not), [Jupyter Notebook](https://jupyter.readthedocs.org/en/latest/install.html) ≥ 4 is required.
+## Prerequisites
+Before installing JupyterHub, you need:
+- [Python](https://www.python.org/downloads/) 3.3 or greater
-Install [nodejs/npm](https://www.npmjs.com/), which is available from your
-package manager. For example, install on Linux (Debian/Ubuntu) using:
-    sudo apt-get install npm nodejs-legacy
+An understanding of using [`pip`](https://pip.pypa.io/en/stable/) for installing
+Python packages is recommended.
+- [nodejs/npm](https://www.npmjs.com/)
-(The `nodejs-legacy` package installs the `node` executable and is currently
-required for npm to work on Debian/Ubuntu.)
-Next, install JavaScript dependencies:
-    sudo npm install -g configurable-http-proxy
+[Install nodejs/npm](https://docs.npmjs.com/getting-started/installing-node), which is available from your
+package manager. For example, install on Linux (Debian/Ubuntu) using:
+    sudo apt-get install npm nodejs-legacy
+(The `nodejs-legacy` package installs the `node` executable and is currently
+required for npm to work on Debian/Ubuntu.)
-### (Optional) Installation Prerequisite (pip)
-Notes on the `pip` command used in the installation directions below:
-- `sudo` may be needed for `pip install`, depending on the user's filesystem permissions.
-- JupyterHub requires Python >= 3.3, so `pip3` may be required on some machines for package installation instead of `pip` (especially when both Python 2 and Python 3 are installed on a machine). If `pip3` is not found, install it using (on Linux Debian/Ubuntu):
-    sudo apt-get install python3-pip
+- TLS certificate and key for HTTPS communication
+- Domain name
+Before running the single-user notebook servers (which may be on the same system as the Hub or not):
+- [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html) version 4 or greater
 ## Installation
-JupyterHub can be installed with pip, and the proxy with npm:
-    npm install -g configurable-http-proxy
-    pip3 install jupyterhub
+JupyterHub can be installed with `pip`, and the proxy with `npm`:
+```bash
+npm install -g configurable-http-proxy
+pip3 install jupyterhub
+```
-If you plan to run notebook servers locally, you may also need to install the
-Jupyter ~~IPython~~ notebook:
+If you plan to run notebook servers locally, you will need to install the
+Jupyter notebook:
     pip3 install --upgrade notebook
-### Development install
-For a development install, clone the repository and then install from source:
-    git clone https://github.com/jupyter/jupyterhub
-    cd jupyterhub
-    pip3 install -r dev-requirements.txt -e .
+## Running the Hub server
+To start the Hub server, run the command:
+    jupyterhub
+Visit `https://localhost:8000` in your browser, and sign in with your unix credentials.
+To allow multiple users to sign into the server, you will need to
+run the `jupyterhub` command as a *privileged user*, such as root.
+The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges)
+describes how to run the server as a *less privileged user*, which requires more
+configuration of the system.
+----
+## Configuration
+The [getting started document](docs/source/getting-started.md) contains the
+basics of configuring a JupyterHub deployment.
+The JupyterHub **tutorial** provides a video and documentation that explains and illustrates the fundamental steps for installation and configuration. [Repo](https://github.com/jupyterhub/jupyterhub-tutorial)
+| [Tutorial documentation](http://jupyterhub-tutorial.readthedocs.io/en/latest/)
+#### Generate a default configuration file
+Generate a default config file:
+    jupyterhub --generate-config
+#### Customize the configuration, authentication, and process spawning
+Spawn the server on ``10.0.1.2:443`` with **https**:
+    jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
+The authentication and process spawning mechanisms can be replaced,
+which should allow plugging into a variety of authentication or process control environments.
+Some examples, meant as illustration and testing of this concept:
+- Using GitHub OAuth instead of PAM with [OAuthenticator](https://github.com/jupyterhub/oauthenticator)
+- Spawning single-user servers with Docker, using the [DockerSpawner](https://github.com/jupyterhub/dockerspawner)
+----
+## Docker
+A starter [docker image for JupyterHub](https://hub.docker.com/r/jupyterhub/jupyterhub/) gives a baseline deployment of JupyterHub.
+**Important:** This `jupyterhub/jupyterhub` image contains only the Hub itself, with no configuration. In general, one needs
+to make a derivative image, with at least a `jupyterhub_config.py` setting up an Authenticator and/or a Spawner. To run the
+single-user servers, which may be on the same system as the Hub or not, Jupyter Notebook version 4 or greater must be installed.
+#### Starting JupyterHub with docker
+The JupyterHub docker image can be started with the following command:
+    docker run -d --name jupyterhub jupyterhub/jupyterhub jupyterhub
+This command will create a container named `jupyterhub` that you can **stop and resume** with `docker stop/start`.
+The Hub service will be listening on all interfaces at port 8000, which makes this a good choice for **testing JupyterHub on your desktop or laptop**.
+If you want to run docker on a computer that has a public IP then you should (as in MUST) **secure it with ssl** by
+adding ssl options to your docker configuration or using a ssl enabled proxy.
+[Mounting volumes](https://docs.docker.com/engine/userguide/containers/dockervolumes/) will
+allow you to **store data outside the docker image (host system) so it will be persistent**, even when you start
+a new image.
+The command `docker exec -it jupyterhub bash` will spawn a root shell in your docker
+container. You can **use the root shell to create system users in the container**. These accounts will be used for authentication
+in JupyterHub's default configuration.
+----
+## Contributing
+If you would like to contribute to the project, please read our [contributor documentation](http://jupyter.readthedocs.io/en/latest/contributor/content-contributor.html) and the [`CONTRIBUTING.md`](CONTRIBUTING.md).
+For a **development install**, clone the [repository](https://github.com/jupyterhub/jupyterhub) and then install from source:
+```bash
+git clone https://github.com/jupyterhub/jupyterhub
+cd jupyterhub
+pip3 install -r dev-requirements.txt -e .
+```
 If the `pip3 install` command fails and complains about `lessc` being unavailable, you may need to explicitly install some additional JavaScript dependencies:
@@ -79,80 +178,33 @@ This will fetch client-side JavaScript dependencies necessary to compile CSS.
 You may also need to manually update JavaScript and CSS after some development updates, with:
-    python3 setup.py js    # fetch updated client-side js (changes rarely)
-    python3 setup.py css   # recompile CSS from LESS sources
+```bash
+python3 setup.py js    # fetch updated client-side js
+python3 setup.py css   # recompile CSS from LESS sources
+```
+We use [pytest](http://doc.pytest.org/en/latest/) for testing. To run tests:
+```bash
+pytest jupyterhub/tests
+```
-## Running the server
-To start the server, run the command:
-    jupyterhub
-and then visit `http://localhost:8000`, and sign in with your unix credentials.
-To allow multiple users to sign into the server, you will need to
-run the `jupyterhub` command as a *privileged user*, such as root.
-The [wiki](https://github.com/jupyter/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges)
-describes how to run the server as a *less privileged user*, which requires more
-configuration of the system.
+----
+## License
+We use a shared copyright model that enables all contributors to maintain the
+copyright on their contributions.
+All code is licensed under the terms of the revised BSD license.
+## Getting help
+We encourage you to ask questions on the [mailing list](https://groups.google.com/forum/#!forum/jupyter),
+and you may participate in development discussions or get live help on [Gitter](https://gitter.im/jupyterhub/jupyterhub).
-## Getting started
-See the [getting started document](docs/source/getting-started.md) for the
-basics of configuring your JupyterHub deployment.
-### Some examples
-Generate a default config file:
-    jupyterhub --generate-config
-Spawn the server on ``10.0.1.2:443`` with **https**:
-    jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
-The authentication and process spawning mechanisms can be replaced,
-which should allow plugging into a variety of authentication or process control environments.
-Some examples, meant as illustration and testing of this concept:
-- Using GitHub OAuth instead of PAM with [OAuthenticator](https://github.com/jupyter/oauthenticator)
-- Spawning single-user servers with Docker, using the [DockerSpawner](https://github.com/jupyter/dockerspawner)
-### Docker
-There is a ready to go [docker image for JupyterHub](https://hub.docker.com/r/jupyter/jupyterhub/).
-[Note: This `jupyter/jupyterhub` docker image is only an image for running the Hub service itself.
-It does not require the other Jupyter components, which are needed by the single-user servers.
-To run the single-user servers, which may be on the same system as the Hub or not, installation of Jupyter Notebook ≥ 4 is required.]
-The JupyterHub docker image can be started with the following command:
-    docker run -d --name jupyterhub jupyter/jupyterhub jupyterhub
-This command will create a container named `jupyterhub` that you can stop and resume with `docker stop/start`.
-It will be listening on all interfaces at port 8000, so this is perfect to test JupyterHub on your desktop or laptop.
-If you want to run docker on a computer that has a public IP then you should (as in MUST) secure it with ssl by
-adding ssl options to your docker configuration or using a ssl enabled proxy.
-[Mounting volumes](https://docs.docker.com/engine/userguide/containers/dockervolumes/) will
-allow you to store data outside the docker image (host system) so it will be persistent, even when you start
-a new image. The command `docker exec -it jupyterhub bash` will spawn a root shell in your docker
-container. You can use it to create system users in the container. These accounts will be used for authentication
-in jupyterhub's default configuration.
+In order to run without SSL (for testing purposes only), you'll need to set `--no-ssl` explicitly.
-# Getting help
-We encourage you to ask questions on the mailing list:
-[![Google Group](https://img.shields.io/badge/-Google%20Group-lightgrey.svg)](https://groups.google.com/forum/#!forum/jupyter)
-and you may participate in development discussions or get live help on Gitter:
-[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/jupyter/jupyterhub?utm_source=badge&utm_medium=badge)
 ## Resources
+- [Reporting Issues](https://github.com/jupyterhub/jupyterhub/issues)
+- JupyterHub tutorial | [Repo](https://github.com/jupyterhub/jupyterhub-tutorial)
+  | [Tutorial documentation](http://jupyterhub-tutorial.readthedocs.io/en/latest/)
+- [Documentation for JupyterHub](http://jupyterhub.readthedocs.io/en/latest/) | [PDF (latest)](https://media.readthedocs.org/pdf/jupyterhub/latest/jupyterhub.pdf) | [PDF (stable)](https://media.readthedocs.org/pdf/jupyterhub/stable/jupyterhub.pdf)
+- [Documentation for JupyterHub's REST API](http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyter/jupyterhub/master/docs/rest-api.yml#/default)
+- [Documentation for Project Jupyter](http://jupyter.readthedocs.io/en/latest/index.html) | [PDF](https://media.readthedocs.org/pdf/jupyter/latest/jupyter.pdf)
 - [Project Jupyter website](https://jupyter.org)
-- [Documentation for JupyterHub](http://jupyterhub.readthedocs.org/en/latest/) [[PDF](https://media.readthedocs.org/pdf/jupyterhub/latest/jupyterhub.pdf)]
-- [Documentation for Project Jupyter](http://jupyter.readthedocs.org/en/latest/index.html) [[PDF](https://media.readthedocs.org/pdf/jupyter/latest/jupyter.pdf)]
-- [Issues](https://github.com/jupyter/jupyterhub/issues)
-- [Technical support - Jupyter Google Group](https://groups.google.com/forum/#!forum/jupyter)

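To make the Docker notes in the README above concrete, here is a hedged example of persisting Hub state outside the container and creating a login account inside it; the host path, username, and the in-container data directory are illustrative assumptions:

```bash
# Persist JupyterHub's database and cookie secret on the host
# (/srv/jupyterhub as the data directory is an assumption about the image layout)
docker run -d --name jupyterhub -p 8000:8000 \
    -v /opt/jupyterhub-data:/srv/jupyterhub \
    jupyterhub/jupyterhub jupyterhub

# Create a system user inside the container; PAM (the default Authenticator) will use it
docker exec -it jupyterhub bash -c 'useradd -m testuser && passwd testuser'
```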
circle.yml

@@ -16,4 +16,9 @@ deployment:
     branch: master
     commands:
       - docker login -u $DOCKER_USER -p $DOCKER_PASS -e unused@example.com
-      - docker push jupyterhub/jupyterhub-onbuild:${CIRCLE_TAG:-latest}
+      - docker push jupyterhub/jupyterhub-onbuild
+  release:
+    tag: /.*/
+    commands:
+      - docker login -u $DOCKER_USER -p $DOCKER_PASS -e unused@example.com
+      - docker push jupyterhub/jupyterhub-onbuild:$CIRCLE_TAG

dev-requirements.txt

@@ -1,5 +1,7 @@
 -r requirements.txt
+mock
 codecov
 pytest-cov
 pytest>=2.8
 notebook
+requests-mock

docs/Makefile

@@ -47,11 +47,20 @@ help:
 	@echo "  linkcheck   to check all external links for integrity"
 	@echo "  doctest     to run all doctests embedded in the documentation (if enabled)"
 	@echo "  coverage    to run coverage check of the documentation (if enabled)"
+	@echo "  spelling    to run spell check on documentation"
 clean:
 	rm -rf $(BUILDDIR)/*
-html:
+node_modules: package.json
+	npm install && touch node_modules
+rest-api: source/_static/rest-api/index.html
+source/_static/rest-api/index.html: rest-api.yml node_modules
+	npm run rest-api
+html: rest-api
 	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
 	@echo
 	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
@@ -171,6 +180,11 @@ linkcheck:
 	@echo "Link check complete; look for any errors in the above output " \
 	      "or in $(BUILDDIR)/linkcheck/output.txt."
+spelling:
+	$(SPHINXBUILD) -b spelling $(ALLSPHINXOPTS) $(BUILDDIR)/spelling
+	@echo
+	@echo "Spell check complete; look for any errors in the above output " \
+	      "or in $(BUILDDIR)/spelling/output.txt."
 doctest:
 	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
 	@echo "Testing of doctests in the sources finished, look at the " \

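The new Makefile targets above chain together roughly as follows (a usage sketch; it assumes nodejs/npm and the Sphinx dependencies are already installed, for example via docs/environment.yml below):

```bash
cd docs
make rest-api    # runs `npm run rest-api` to render rest-api.yml into source/_static/rest-api
make html        # html now depends on rest-api, then builds the Sphinx HTML docs
make spelling    # optional; requires sphinxcontrib-spelling to be installed
```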
docs/environment.yml (new file)

@@ -0,0 +1,16 @@
name: jhub_docs
channels:
- conda-forge
dependencies:
- nodejs
- python=3
- jinja2
- pamela
- requests
- sqlalchemy>=1
- tornado>=4.1
- traitlets>=4.1
- sphinx>=1.3.6
- sphinx_rtd_theme
- pip:
  - recommonmark==0.4.0

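A hedged usage note for the environment file above (the environment name `jhub_docs` comes from the file; the activation command assumes conda's older `source activate` syntax from this era):

```bash
conda env create -f docs/environment.yml   # creates the "jhub_docs" environment
source activate jhub_docs                  # newer conda versions use `conda activate jhub_docs`
```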
docs/package.json (new file)

@@ -0,0 +1,14 @@
{
"name": "jupyterhub-docs-build",
"version": "0.0.0",
"description": "build JupyterHub swagger docs",
"scripts": {
"rest-api": "bootprint openapi ./rest-api.yml source/_static/rest-api"
},
"author": "",
"license": "BSD-3-Clause",
"devDependencies": {
"bootprint": "^0.10.0",
"bootprint-openapi": "^0.17.0"
}
}

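For orientation, the `rest-api` script above is what the docs Makefile invokes; run from the docs/ directory it looks roughly like this (a sketch):

```bash
cd docs
npm install        # installs bootprint and bootprint-openapi from package.json
npm run rest-api   # renders rest-api.yml into source/_static/rest-api
```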
docs/rest-api.yml

@@ -3,9 +3,11 @@ swagger: '2.0'
 info:
   title: JupyterHub
   description: The REST API for JupyterHub
-  version: 0.4.0
+  version: 0.7.0
+  license:
+    name: BSD-3-Clause
 schemes:
-  - http
+  - [http, https]
 securityDefinitions:
   token:
     type: apiKey
@@ -13,18 +15,73 @@ securityDefinitions:
     in: header
 security:
   - token: []
-basePath: /hub/api/
+basePath: /hub/api
 produces:
   - application/json
 consumes:
   - application/json
 paths:
/:
get:
summary: Get JupyterHub version
description: |
This endpoint is not authenticated for the purpose of clients and user
to identify the JupyterHub version before setting up authentication.
responses:
'200':
description: The JupyterHub version
schema:
type: object
properties:
version:
type: string
description: The version of JupyterHub itself
/info:
get:
summary: Get detailed info about JupyterHub
description: |
Detailed JupyterHub information, including Python version,
JupyterHub's version and executable path,
and which Authenticator and Spawner are active.
responses:
'200':
description: Detailed JupyterHub info
schema:
type: object
properties:
version:
type: string
description: The version of JupyterHub itself
python:
type: string
description: The Python version, as returned by sys.version
sys_executable:
type: string
description: The path to sys.executable running JupyterHub
authenticator:
type: object
properties:
class:
type: string
description: The Python class currently active for JupyterHub Authentication
version:
type: string
description: The version of the currently active Authenticator
spawner:
type: object
properties:
class:
type: string
description: The Python class currently active for spawning single-user notebook servers
version:
type: string
description: The version of the currently active Spawner
   /users:
     get:
       summary: List users
       responses:
         '200':
-          description: The user list
+          description: The Hub's user list
           schema:
             type: array
             items:
@@ -40,7 +97,7 @@ paths:
         properties:
           usernames:
             type: array
-            description: list of usernames to create
+            description: list of usernames to create on the Hub
             items:
               type: string
           admin:
@@ -81,17 +138,6 @@ paths:
           description: The user has been created
           schema:
             $ref: '#/definitions/User'
-    delete:
-      summary: Delete a user
-      parameters:
-        - name: name
-          description: username
-          in: path
-          required: true
-          type: string
-      responses:
-        '204':
-          description: The user has been deleted
     patch:
       summary: Modify a user
       description: Change a user's name or admin status
@@ -104,24 +150,35 @@ paths:
         - name: data
           in: body
           required: true
-          description: Updated user info. At least one of name and admin is required.
+          description: Updated user info. At least one key to be updated (name or admin) is required.
           schema:
             type: object
             properties:
               name:
                 type: string
-                description: the new name (optional)
+                description: the new name (optional, if another key is updated i.e. admin)
               admin:
                 type: boolean
-                description: update admin (optional)
+                description: update admin (optional, if another key is updated i.e. name)
       responses:
         '200':
           description: The updated user info
           schema:
             $ref: '#/definitions/User'
delete:
summary: Delete a user
parameters:
- name: name
description: username
in: path
required: true
type: string
responses:
'204':
description: The user has been deleted
   /users/{name}/server:
     post:
-      summary: Start a user's server
+      summary: Start a user's single-user notebook server
       parameters:
         - name: name
           description: username
@@ -130,9 +187,9 @@ paths:
           type: string
       responses:
         '201':
-          description: The server has started
+          description: The user's notebook server has started
         '202':
-          description: The server has been requested, but has not yet started
+          description: The user's notebook server has not yet started, but has been requested
     delete:
       summary: Stop a user's server
       parameters:
@@ -143,12 +200,12 @@ paths:
           type: string
       responses:
         '204':
-          description: The server has stopped
+          description: The user's notebook server has stopped
         '202':
-          description: The server has been asked to stop, but is taking a while
+          description: The user's notebook server has not yet stopped as it is taking a while to stop
   /users/{name}/admin-access:
     post:
-      summary: Grant an admin access to this user's server
+      summary: Grant admin access to this user's notebook server
       parameters:
         - name: name
           description: username
@@ -157,25 +214,146 @@ paths:
           type: string
       responses:
         '200':
-          description: Sets a cookie granting the requesting admin access to the user's server
+          description: Sets a cookie granting the requesting administrator access to the user's notebook server
/groups:
get:
summary: List groups
responses:
'200':
description: The list of groups
schema:
type: array
items:
$ref: '#/definitions/Group'
/groups/{name}:
get:
summary: Get a group by name
parameters:
- name: name
description: group name
in: path
required: true
type: string
responses:
'200':
description: The group model
schema:
$ref: '#/definitions/Group'
post:
summary: Create a group
parameters:
- name: name
description: group name
in: path
required: true
type: string
responses:
'201':
description: The group has been created
schema:
$ref: '#/definitions/Group'
delete:
summary: Delete a group
parameters:
- name: name
description: group name
in: path
required: true
type: string
responses:
'204':
description: The group has been deleted
/groups/{name}/users:
post:
summary: Add users to a group
parameters:
- name: name
description: group name
in: path
required: true
type: string
- name: data
in: body
required: true
description: The users to add to the group
schema:
type: object
properties:
users:
type: array
description: List of usernames to add to the group
items:
type: string
responses:
'200':
description: The users have been added to the group
schema:
$ref: '#/definitions/Group'
delete:
summary: Remove users from a group
parameters:
- name: name
description: group name
in: path
required: true
type: string
- name: data
in: body
required: true
description: The users to remove from the group
schema:
type: object
properties:
users:
type: array
description: List of usernames to remove from the group
items:
type: string
responses:
'200':
description: The users have been removed from the group
/services:
get:
summary: List services
responses:
'200':
description: The service list
schema:
type: array
items:
$ref: '#/definitions/Service'
/services/{name}:
get:
summary: Get a service by name
parameters:
- name: name
description: service name
in: path
required: true
type: string
responses:
'200':
description: The Service model
schema:
$ref: '#/definitions/Service'
   /proxy:
     get:
       summary: Get the proxy's routing table
-      description: A convenience alias for getting the info directly from the proxy
+      description: A convenience alias for getting the routing table directly from the proxy
       responses:
         '200':
           description: Routing table
           schema:
             type: object
-            description: configurable-http-proxy routing table (see CHP docs for details)
+            description: configurable-http-proxy routing table (see configurable-http-proxy docs for details)
     post:
       summary: Force the Hub to sync with the proxy
       responses:
         '200':
           description: Success
     patch:
-      summary: Tell the Hub about a new proxy
-      description: If you have started a new proxy and would like the Hub to switch over to it, this allows you to notify the Hub of the new proxy.
+      summary: Notify the Hub about a new proxy
+      description: Notifies the Hub of a new proxy to use.
       parameters:
         - name: data
           in: body
@@ -211,11 +389,11 @@ paths:
         '200':
           description: The user identified by the API token
           schema:
-            $ref: '#!/definitions/User'
+            $ref: '#/definitions/User'
   /authorizations/cookie/{cookie_name}/{cookie_value}:
     get:
       summary: Identify a user from a cookie
-      description: Used by single-user servers to hand off cookie authentication to the Hub
+      description: Used by single-user notebook servers to hand off cookie authentication to the Hub
       parameters:
         - name: cookie_name
           in: path
@@ -229,10 +407,19 @@ paths:
         '200':
           description: The user identified by the cookie
           schema:
-            $ref: '#!/definitions/User'
+            $ref: '#/definitions/User'
   /shutdown:
     post:
       summary: Shutdown the Hub
parameters:
- name: proxy
in: body
type: boolean
description: Whether the proxy should be shutdown as well (default from Hub config)
- name: servers
in: body
type: boolean
description: Whether users's notebook servers should be shutdown as well (default from Hub config)
       responses:
         '200':
           description: Hub has shutdown
@@ -246,14 +433,53 @@ definitions:
     admin:
       type: boolean
       description: Whether the user is an admin
+    groups:
+      type: array
+      description: The names of groups where this user is a member
+      items:
+        type: string
     server:
       type: string
-      description: The user's server's base URL, if running; null if not.
+      description: The user's notebook server's base URL, if running; null if not.
     pending:
       type: string
       enum: ["spawn", "stop"]
       description: The currently pending action, if any
     last_activity:
       type: string
-      format: ISO8601 Timestamp
+      format: date-time
       description: Timestamp of last-seen activity from the user
Group:
type: object
properties:
name:
type: string
description: The group's name
users:
type: array
description: The names of users who are members of this group
items:
type: string
Service:
type: object
properties:
name:
type: string
description: The service's name
admin:
type: boolean
description: Whether the service is an admin
url:
type: string
description: The internal url where the service is running
prefix:
type: string
description: The proxied URL prefix to the service's url
pid:
type: number
description: The PID of the service process (if managed)
command:
type: array
description: The command used to start the service (if managed)
items:
type: string

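As a hedged illustration of the token authentication the spec above declares, a request against a local deployment might look like the following; the port and the way the token is provisioned (`c.JupyterHub.api_tokens`) are assumptions about a default setup, not part of the spec itself:

```bash
# List users via the Hub API; assumes the Hub API is reachable on localhost:8081
# and $API_TOKEN is a token configured e.g. via c.JupyterHub.api_tokens = {'<token>': 'adminuser'}
curl -s -H "Authorization: token $API_TOKEN" http://localhost:8081/hub/api/users
```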
docs/source/api/index.rst

@@ -7,8 +7,27 @@
 :Release: |release|
 :Date: |today|
+JupyterHub also provides a REST API for administration of the Hub and users.
+The documentation on `Using JupyterHub's REST API <../rest.html>`_ provides
+information on:
+
+- Creating an API token
+- Adding tokens to the configuration file (optional)
+- Making an API request
+
+The same JupyterHub API spec, as found here, is available in an interactive form
+`here (on swagger's petstore) <http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default>`__.
+The `OpenAPI Initiative`_ (fka Swagger™) is a project used to describe
+and document RESTful APIs.
+
+JupyterHub API Reference:
+
 .. toctree::
     auth
     spawner
     user
+    services.auth
+
+.. _OpenAPI Initiative: https://www.openapis.org/

docs/source/api/services.auth.rst (new file)

@@ -0,0 +1,18 @@
=======================
Authenticating Services
=======================
Module: :mod:`jupyterhub.services.auth`
=======================================
.. automodule:: jupyterhub.services.auth
.. currentmodule:: jupyterhub.services.auth
.. autoclass:: HubAuth
:members:
.. autoclass:: HubAuthenticated
:members:

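A minimal sketch of how `HubAuthenticated` is meant to be used from a tornado service, assuming the Hub-provided API token and URL are available in the environment (as JupyterHub sets for managed services); the handler name, whitelist, and port are illustrative:

```python
import json
from getpass import getuser

from tornado.ioloop import IOLoop
from tornado.web import Application, RequestHandler, authenticated

from jupyterhub.services.auth import HubAuthenticated


class WhoAmIHandler(HubAuthenticated, RequestHandler):
    # Restrict access to a whitelist of Hub users (illustrative value)
    hub_users = {getuser()}

    @authenticated
    def get(self):
        # current_user is the user model dict resolved by HubAuthenticated via the Hub API
        self.set_header('Content-Type', 'application/json')
        self.write(json.dumps(self.current_user))


if __name__ == '__main__':
    app = Application([(r'/whoami', WhoAmIHandler)])
    app.listen(10101)  # illustrative port
    IOLoop.current().start()
```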
docs/source/api/spawner.rst

@@ -13,6 +13,6 @@ Module: :mod:`jupyterhub.spawner`
 ----------------
 .. autoclass:: Spawner
-    :members: options_from_form, poll, start, stop, get_args, get_env, get_state
+    :members: options_from_form, poll, start, stop, get_args, get_env, get_state, template_namespace, format_string
 .. autoclass:: LocalProcessSpawner

docs/source/authenticators.md

@@ -1,4 +1,4 @@
-# Writing a custom Authenticator
+# Authenticators
 The [Authenticator][] is the mechanism for authorizing users.
 Basic authenticators use simple username and password authentication.
@@ -11,14 +11,13 @@ One such example is using [GitHub OAuth][].
 Because the username is passed from the Authenticator to the Spawner,
 a custom Authenticator and Spawner are often used together.
-See a list of custom Authenticators [on the wiki](https://github.com/jupyter/jupyterhub/wiki/Authenticators).
+See a list of custom Authenticators [on the wiki](https://github.com/jupyterhub/jupyterhub/wiki/Authenticators).
 ## Basics of Authenticators
 A basic Authenticator has one central method:
 ### Authenticator.authenticate
     Authenticator.authenticate(handler, data)
@@ -48,14 +47,13 @@ class DictionaryAuthenticator(Authenticator):
     passwords = Dict(config=True,
         help="""dict of username:password for authentication"""
     )
     @gen.coroutine
     def authenticate(self, handler, data):
         if self.passwords.get(data['username']) == data['password']:
             return data['username']
 ```
 ### Authenticator.whitelist
 Authenticators can specify a whitelist of usernames to allow authentication.
@@ -91,8 +89,9 @@ which is a regular expression string for validation.
 To only allow usernames that start with 'w':
-    c.Authenticator.username_pattern = r'w.*'
+```python
+c.Authenticator.username_pattern = r'w.*'
+```
 ## OAuth and other non-password logins
@@ -103,9 +102,12 @@ You can see an example implementation of an Authenticator that uses [GitHub OAut
 at [OAuthenticator][].
+## Writing a custom authenticator
+If you are interested in writing a custom authenticator, you can read [this tutorial](http://jupyterhub-tutorial.readthedocs.io/en/latest/authenticators.html).
-[Authenticator]: https://github.com/jupyter/jupyterhub/blob/master/jupyterhub/auth.py
-[PAM]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
-[OAuth]: https://en.wikipedia.org/wiki/OAuth
-[GitHub OAuth]: https://developer.github.com/v3/oauth/
-[OAuthenticator]: https://github.com/jupyter/oauthenticator
+[Authenticator]: https://github.com/jupyterhub/jupyterhub/blob/master/jupyterhub/auth.py
+[PAM]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
+[OAuth]: https://en.wikipedia.org/wiki/OAuth
+[GitHub OAuth]: https://developer.github.com/v3/oauth/
+[OAuthenticator]: https://github.com/jupyterhub/oauthenticator

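Tying the snippets from the page above together, a complete version of the dictionary-based example might look like the following; it is a sketch for illustration only (never store real passwords this way), and the module path used to enable it is an assumption:

```python
from tornado import gen
from traitlets import Dict

from jupyterhub.auth import Authenticator


class DictionaryAuthenticator(Authenticator):
    """Authenticate against a static dict of username: password (testing only)."""

    passwords = Dict(
        config=True,
        help="dict of username:password for authentication",
    )

    @gen.coroutine
    def authenticate(self, handler, data):
        # `data` is the login form data: {'username': ..., 'password': ...}
        if self.passwords.get(data['username']) == data['password']:
            return data['username']
        # falling through (returning None) rejects the login
```

It would then be enabled in `jupyterhub_config.py` with `c.JupyterHub.authenticator_class = 'mymodule.DictionaryAuthenticator'` (module path illustrative).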
docs/source/changelog.md

@@ -1,9 +1,77 @@
-# Summary of changes in JupyterHub
+# Change log summary
-See `git log` for a more detailed summary.
+For detailed changes from the prior release, click on the version number, and
+its link will bring up a GitHub listing of changes. Use `git log` on the
+command line for details.
+## [Unreleased] 0.8
+## 0.7
+### [0.7.1] - 2017-01-02
+#### Added
+- `Spawner.will_resume` for signalling that a single-user server is paused instead of stopped.
+  This is needed for cases like `DockerSpawner.remove_containers = False`,
+  where the first API token is re-used for subsequent spawns.
+- Warning on startup about single-character usernames,
+  caused by common `set('string')` typo in config.
+#### Fixed
+- Removed spurious warning about empty `next_url`, which is AOK.
+### [0.7.0] - 2016-12-02
+#### Added
+- Implement Services API [\#705](https://github.com/jupyterhub/jupyterhub/pull/705)
+- Add `/api/` and `/api/info` endpoints [\#675](https://github.com/jupyterhub/jupyterhub/pull/675)
+- Add documentation for JupyterLab, pySpark configuration, troubleshooting,
+  and more.
+- Add logging of error if adding users already in database. [\#689](https://github.com/jupyterhub/jupyterhub/pull/689)
+- Add HubAuth class for authenticating with JupyterHub. This class can
+  be used by any application, even outside tornado.
+- Add user groups.
+- Add `/hub/user-redirect/...` URL for redirecting users to a file on their own server.
+#### Changed
+- Always install with setuptools but not eggs (effectively require
+  `pip install .`) [\#722](https://github.com/jupyterhub/jupyterhub/pull/722)
+- Updated formatting of changelog. [\#711](https://github.com/jupyterhub/jupyterhub/pull/711)
+- Single-user server is provided by JupyterHub package, so single-user servers depend on JupyterHub now.
+#### Fixed
+- Fix docker repository location [\#719](https://github.com/jupyterhub/jupyterhub/pull/719)
+- Fix swagger spec conformance and timestamp type in API spec
+- Various redirect-loop-causing bugs have been fixed.
+#### Removed
+- Deprecate `--no-ssl` command line option. It has no meaning and warns if
+  used. [\#789](https://github.com/jupyterhub/jupyterhub/pull/789)
+- Deprecate `%U` username substitution in favor of `{username}`. [\#748](https://github.com/jupyterhub/jupyterhub/pull/748)
+- Removed deprecated SwarmSpawner link. [\#699](https://github.com/jupyterhub/jupyterhub/pull/699)
 ## 0.6
+### [0.6.1] - 2016-05-04
+Bugfixes on 0.6:
+- statsd is an optional dependency, only needed if in use
+- Notice more quickly when servers have crashed
+- Better error pages for proxy errors
+- Add Stop All button to admin panel for stopping all servers at once
+### [0.6.0] - 2016-04-25
 - JupyterHub has moved to a new `jupyterhub` namespace on GitHub and Docker. What was `juptyer/jupyterhub` is now `jupyterhub/jupyterhub`, etc.
 - `jupyterhub/jupyterhub` image on DockerHub no longer loads the jupyterhub_config.py in an ONBUILD step. A new `jupyterhub/jupyterhub-onbuild` image does this
 - Add statsd support, via `c.JupyterHub.statsd_{host,port,prefix}`
@@ -16,7 +84,7 @@ See `git log` for a more detailed summary.
 - Various fixes for user URLs and redirects
-## 0.5
+## [0.5] - 2016-03-07
 - Single-user server must be run with Jupyter Notebook ≥ 4.0
@@ -30,11 +98,11 @@ See `git log` for a more detailed summary.
 ## 0.4
-### 0.4.1
+### [0.4.1] - 2016-02-03
 Fix removal of `/login` page in 0.4.0, breaking some OAuth providers.
-### 0.4.0
+### [0.4.0] - 2016-02-01
 - Add `Spawner.user_options_form` for specifying an HTML form to present to users,
   allowing users to influence the spawning of their own servers.
@@ -45,7 +113,7 @@ Fix removal of `/login` page in 0.4.0, breaking some OAuth providers.
 - 0.4 will be the last JupyterHub release where single-user servers running IPython 3 is supported instead of Notebook ≥ 4.0.
-## 0.3
+## [0.3] - 2015-11-04
 - No longer make the user starting the Hub an admin
 - start PAM sessions on login
@@ -53,13 +121,24 @@ Fix removal of `/login` page in 0.4.0, breaking some OAuth providers.
   allowing deeper interaction between Spawner/Authenticator pairs.
 - login redirect fixes
-## 0.2
+## [0.2] - 2015-07-12
 - Based on standalone traitlets instead of IPython.utils.traitlets
 - multiple users in admin panel
 - Fixes for usernames that require escaping
-## 0.1
+## 0.1 - 2015-03-07
 First preview release
+[Unreleased]: https://github.com/jupyterhub/jupyterhub/compare/0.7.1...HEAD
+[0.7.1]: https://github.com/jupyterhub/jupyterhub/compare/0.7.0...0.7.1
+[0.7.0]: https://github.com/jupyterhub/jupyterhub/compare/0.6.1...0.7.0
+[0.6.1]: https://github.com/jupyterhub/jupyterhub/compare/0.6.0...0.6.1
+[0.6.0]: https://github.com/jupyterhub/jupyterhub/compare/0.5.0...0.6.0
+[0.5]: https://github.com/jupyterhub/jupyterhub/compare/0.4.1...0.5.0
+[0.4.1]: https://github.com/jupyterhub/jupyterhub/compare/0.4.0...0.4.1
+[0.4.0]: https://github.com/jupyterhub/jupyterhub/compare/0.3.0...0.4.0
+[0.3]: https://github.com/jupyterhub/jupyterhub/compare/0.2.0...0.3.0
+[0.2]: https://github.com/jupyterhub/jupyterhub/compare/0.1.0...0.2.0

docs/source/conf.py

@@ -1,59 +1,29 @@
# -*- coding: utf-8 -*-
#
# JupyterHub documentation build configuration file, created by
# sphinx-quickstart on Mon Jan 4 16:31:09 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
 import sys
 import os
 import shlex
-# Needed for conversion from markdown to html
+# For conversion from markdown to html
 import recommonmark.parser
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
+# Set paths
 #sys.path.insert(0, os.path.abspath('.'))
 # -- General configuration ------------------------------------------------
-# If your documentation needs a minimal Sphinx version, state it here.
-needs_sphinx = '1.3'
+# Minimal Sphinx version
+needs_sphinx = '1.4'
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
+# Sphinx extension modules
 extensions = [
     'sphinx.ext.autodoc',
     'sphinx.ext.intersphinx',
     'sphinx.ext.napoleon',
 ]
-# Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']
-# Jupyter uses recommonmark's parser to convert markdown
-source_parsers = {
-    '.md': 'recommonmark.parser.CommonMarkParser',
-}
-# The suffix(es) of source filenames.
-# You can specify multiple suffix as a list of string:
-# source_suffix = ['.rst', '.md']
-source_suffix = ['.rst', '.md']
-# The encoding of source files.
-#source_encoding = 'utf-8-sig'
 # The master toctree document.
 master_doc = 'index'
@@ -62,12 +32,10 @@ project = u'JupyterHub'
 copyright = u'2016, Project Jupyter team'
 author = u'Project Jupyter team'
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-# Project Jupyter uses the following to autopopulate version
+# Autopopulate version
 from os.path import dirname
-root = dirname(dirname(dirname(__file__)))
+docs = dirname(dirname(__file__))
+root = dirname(docs)
 sys.path.insert(0, root)
 import jupyterhub
@@ -76,162 +44,59 @@ version = '%i.%i' % jupyterhub.version_info[:2]
 # The full version, including alpha/beta/rc tags.
 release = jupyterhub.__version__
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-#
-# This is also used if you do content translation via gettext catalogs.
-# Usually you set "language" from the command line for these cases.
 language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Source -------------------------------------------------------------
source_parsers = {
'.md': 'recommonmark.parser.CommonMarkParser',
}
source_suffix = ['.rst', '.md']
#source_encoding = 'utf-8-sig'
 # -- Options for HTML output ----------------------------------------------
-# The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
+# The theme to use for HTML and HTML Help pages.
 html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
 #html_theme_options = {}
-# Add any paths that contain custom themes here, relative to this directory.
 #html_theme_path = []
-# The name for this set of Sphinx documents. If None, it defaults to
-# "<project> v<release> documentation".
 #html_title = None
-# A shorter title for the navigation bar. Default is the same as html_title.
 #html_short_title = None
-# The name of an image file (relative to this directory) to place at the top
-# of the sidebar.
 #html_logo = None
-# The name of an image file (within the static path) to use as favicon of the
-# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
-# pixels large.
 #html_favicon = None
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
+# Paths that contain custom static files (such as style sheets)
 html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = [] #html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y' #html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True #html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {} #html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {} #html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True #html_domain_indices = True
# If false, no index is generated.
#html_use_index = True #html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False #html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True #html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True #html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True #html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = '' #html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None #html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en' #html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'} #html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js' #html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'JupyterHubdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper', #'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt', #'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '', #'preamble': '',
# Latex figure (float) alignment
#'figure_align': 'htbp', #'figure_align': 'htbp',
} }
@@ -243,28 +108,15 @@ latex_documents = [
u'Project Jupyter team', 'manual'), u'Project Jupyter team', 'manual'),
] ]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None #latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False #latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False #latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False #latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = [] #latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True #latex_domain_indices = True
-# -- Options for manual page output ---------------------------------------
+# -- manual page output -------------------------------------------------
# One entry per manual page. List of tuples # One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section). # (source start file, name, description, authors, manual section).
@@ -273,11 +125,10 @@ man_pages = [
[author], 1) [author], 1)
] ]
# If true, show URL addresses after external links.
#man_show_urls = False #man_show_urls = False
-# -- Options for Texinfo output -------------------------------------------
+# -- Texinfo output -----------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples # Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author, # (source start file, target name, title, author,
@@ -288,20 +139,13 @@ texinfo_documents = [
'Miscellaneous'), 'Miscellaneous'),
] ]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = [] #texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True #texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote' #texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False #texinfo_no_detailmenu = False
-# -- Options for Epub output ----------------------------------------------
+# -- Epub output --------------------------------------------------------
# Bibliographic Dublin Core info. # Bibliographic Dublin Core info.
epub_title = project epub_title = project
@@ -309,78 +153,35 @@ epub_author = author
epub_publisher = author epub_publisher = author
epub_copyright = copyright epub_copyright = copyright
# The basename for the epub file. It defaults to the project name.
#epub_basename = project
# The HTML theme for the epub output. Since the default themes are not optimized
# for small screen space, using the same theme for HTML and epub output is
# usually not wise. This defaults to 'epub', a theme designed to save visual
# space.
#epub_theme = 'epub'
# The language of the text. It defaults to the language option
# or 'en' if the language is not set.
#epub_language = ''
# The scheme of the identifier. Typical schemes are ISBN or URL.
#epub_scheme = ''
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#epub_identifier = ''
# A unique identification for the text.
#epub_uid = ''
# A tuple containing the cover image and cover page html template filenames.
#epub_cover = ()
# A sequence of (type, uri, title) tuples for the guide element of content.opf.
#epub_guide = ()
# HTML files that should be inserted before the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_pre_files = []
# HTML files shat should be inserted after the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_post_files = []
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# -- Intersphinx ----------------------------------------------------------
# The depth of the table of contents in toc.ncx.
#epub_tocdepth = 3
# Allow duplicate toc entries.
#epub_tocdup = True
# Choose between 'default' and 'includehidden'.
#epub_tocscope = 'default'
# Fix unsupported image types using the Pillow.
#epub_fix_images = False
# Scale large images.
#epub_max_image_width = 0
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#epub_show_urls = 'inline'
# If false, no index is generated.
#epub_use_index = True
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'https://docs.python.org/': None}
# -- Read The Docs --------------------------------------------------------
# on_rtd is whether we are on readthedocs.org, this line of code grabbed from docs.readthedocs.org
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
if not on_rtd:
    # only import and set the theme if we're building docs locally
    import sphinx_rtd_theme
    html_theme = 'sphinx_rtd_theme'
    html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
else:
    # readthedocs.org uses their theme by default, so no need to specify it
    # build rest-api, since RTD doesn't run make
    from subprocess import check_call as sh
    sh(['make', 'rest-api'], cwd=docs)
# -- Spell checking -------------------------------------------------------
try:
    import sphinxcontrib.spelling
except ImportError:
    pass
else:
    extensions.append("sphinxcontrib.spelling")
    spelling_word_list_filename = 'spelling_wordlist.txt'

View File

@@ -0,0 +1,194 @@
# Configuration examples
This section provides configuration files and tips for the following
configurations:
- Example with GitHub OAuth
- Example with nginx reverse proxy
## Example with GitHub OAuth
In the following example, we show a configuration file for a fairly standard JupyterHub deployment with the following assumptions:
* JupyterHub is running on a single cloud server
* Using SSL on the standard HTTPS port 443
* You want to use GitHub OAuth (using oauthenticator) for login
* You need the users to exist locally on the server
* You want users' notebooks to be served from `~/assignments` to allow users to browse for notebooks within
other users' home directories
* You want the landing page for each user to be a Welcome.ipynb notebook in their assignments directory.
* All runtime files are put into `/srv/jupyterhub` and log files in `/var/log`.
Let's start out with `jupyterhub_config.py`:
```python
# jupyterhub_config.py
c = get_config()
import os
pjoin = os.path.join
runtime_dir = os.path.join('/srv/jupyterhub')
ssl_dir = pjoin(runtime_dir, 'ssl')
if not os.path.exists(ssl_dir):
os.makedirs(ssl_dir)
# https on :443
c.JupyterHub.port = 443
c.JupyterHub.ssl_key = pjoin(ssl_dir, 'ssl.key')
c.JupyterHub.ssl_cert = pjoin(ssl_dir, 'ssl.cert')
# put the JupyterHub cookie secret and state db
# in /var/run/jupyterhub
c.JupyterHub.cookie_secret_file = pjoin(runtime_dir, 'cookie_secret')
c.JupyterHub.db_url = pjoin(runtime_dir, 'jupyterhub.sqlite')
# or `--db=/path/to/jupyterhub.sqlite` on the command-line
# put the log file in /var/log
c.JupyterHub.extra_log_file = '/var/log/jupyterhub.log'
# use GitHub OAuthenticator for local users
c.JupyterHub.authenticator_class = 'oauthenticator.LocalGitHubOAuthenticator'
c.GitHubOAuthenticator.oauth_callback_url = os.environ['OAUTH_CALLBACK_URL']
# create system users that don't exist yet
c.LocalAuthenticator.create_system_users = True
# specify users and admin
c.Authenticator.whitelist = {'rgbkrk', 'minrk', 'jhamrick'}
c.Authenticator.admin_users = {'jhamrick', 'rgbkrk'}
# start single-user notebook servers in ~/assignments,
# with ~/assignments/Welcome.ipynb as the default landing page
# this config could also be put in
# /etc/ipython/ipython_notebook_config.py
c.Spawner.notebook_dir = '~/assignments'
c.Spawner.args = ['--NotebookApp.default_url=/notebooks/Welcome.ipynb']
```
Using the GitHub Authenticator [requires a few additional env variables][oauth-setup],
which we will need to set when we launch the server:
```bash
export GITHUB_CLIENT_ID=github_id
export GITHUB_CLIENT_SECRET=github_secret
export OAUTH_CALLBACK_URL=https://example.com/hub/oauth_callback
export CONFIGPROXY_AUTH_TOKEN=super-secret
jupyterhub -f /path/to/aboveconfig.py
```
## Example with nginx reverse proxy
In the following example, we show configuration files for a JupyterHub server running locally on port `8000` but accessible from the outside on the standard SSL port `443`. This could be useful if the JupyterHub server machine is also hosting other domains or content on `443`. The goal here is to have the following be true:
* JupyterHub is running on a server, accessed *only* via `HUB.DOMAIN.TLD:443`
* On the same machine, `NO_HUB.DOMAIN.TLD` strictly serves different content, also on port `443`
* `nginx` is used to manage the web servers / reverse proxy (which means that only nginx will be able to bind two servers to `443`)
* After testing, the server in question should be able to score an A+ on the Qualys SSL Labs [SSL Server Test](https://www.ssllabs.com/ssltest/)
Let's start out with `jupyterhub_config.py`:
```python
# Force the proxy to only listen to connections to 127.0.0.1
c.JupyterHub.ip = '127.0.0.1'
```
The `nginx` server config files are fairly standard fare except for the two `location` blocks within the `HUB.DOMAIN.TLD` config file:
```bash
# HTTP server to redirect all 80 traffic to SSL/HTTPS
server {
listen 80;
server_name HUB.DOMAIN.TLD;
# Tell all requests to port 80 to be 302 redirected to HTTPS
return 302 https://$host$request_uri;
}
# HTTPS server to handle JupyterHub
server {
listen 443;
ssl on;
server_name HUB.DOMAIN.TLD;
ssl_certificate /etc/letsencrypt/live/HUB.DOMAIN.TLD/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/HUB.DOMAIN.TLD/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
# Managing literal requests to the JupyterHub front end
location / {
proxy_pass https://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# Managing WebHook/Socket requests between hub user servers and external proxy
location ~* /(api/kernels/[^/]+/(channels|iopub|shell|stdin)|terminals/websocket)/? {
proxy_pass https://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
# Managing requests to verify letsencrypt host
location ~ /.well-known {
allow all;
}
}
```
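The `Connection $connection_upgrade` header in the WebSocket `location` block above relies on an `nginx` `map` that is not shown in the config file itself. A minimal sketch of that map, which would normally live in the `http` block of `nginx.conf` (exact placement is an assumption here, not part of the original example):

```bash
# Defines $connection_upgrade, used by the WebSocket location block above.
# Place in the http { } context, outside the server { } blocks.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```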
`nginx` will now be the front-facing element of JupyterHub on `443`, which means it is also free to bind other servers, like `NO_HUB.DOMAIN.TLD`, to the same port on the same machine and network interface. In fact, one can use the same server blocks as above for `NO_HUB` and simply add a line for the root directory of the site as well as the applicable location call:
```bash
server {
listen 80;
server_name NO_HUB.DOMAIN.TLD;
# Tell all requests to port 80 to be 302 redirected to HTTPS
return 302 https://$host$request_uri;
}
server {
listen 443;
ssl on;
# INSERT OTHER SSL PARAMETERS HERE AS ABOVE
# Set the appropriate root directory
root /var/www/html;
# Set URI handling
location / {
try_files $uri $uri/ =404;
}
# Managing requests to verify letsencrypt host
location ~ /.well-known {
allow all;
}
}
```
Now just restart `nginx`, restart JupyterHub, and enjoy accessing `https://HUB.DOMAIN.TLD` while serving other content securely on `https://NO_HUB.DOMAIN.TLD`.

View File

@@ -0,0 +1,58 @@
# Contributors
Project Jupyter thanks the following people for their help and
contribution on JupyterHub:
- anderbubble
- betatim
- Carreau
- ckald
- cwaldbieser
- danielballen
- daradib
- datapolitan
- dblockow-d2dcrc
- dietmarw
- DominicFollettSmith
- dsblank
- ellisonbg
- evanlinde
- Fokko
- iamed18
- JamiesHQ
- jdavidheiser
- jhamrick
- josephtate
- kinuax
- KrishnaPG
- ksolan
- mbmilligan
- minrk
- mistercrunch
- Mistobaan
- mwmarkland
- nthiery
- ObiWahn
- ozancaglayan
- parente
- PeterDaveHello
- peterruppel
- rafael-ladislau
- rgbkrk
- robnagler
- ryanlovett
- Scrypy
- shreddd
- spoorthyv
- ssanderson
- takluyver
- temogen
- TimShawver
- Todd-Z-Li
- toobaz
- tsaeger
- vilhelmen
- willingc
- YannBrrd
- yuvipanda
- zoltan-fedor

View File

@@ -1,36 +1,89 @@
# Getting started with JupyterHub
This document describes some of the basics of configuring JupyterHub to do what you want.
JupyterHub is highly customizable, so there's a lot to cover.
This section contains getting started information on the following topics:
- [Technical Overview](getting-started.html#technical-overview)
- [Installation](getting-started.html#installation)
- [Configuration](getting-started.html#configuration)
- [Networking](getting-started.html#networking)
- [Security](getting-started.html#security)
- [Authentication and users](getting-started.html#authentication-and-users)
- [Spawners and single-user notebook servers](getting-started.html#spawners-and-single-user-notebook-servers)
- [External Services](getting-started.html#external-services)
## Technical Overview
JupyterHub is a set of processes that together provide a single user Jupyter
Notebook server for each person in a group.
### Three subsystems
Three major subsystems run by the `jupyterhub` command line program:
- **Single-User Notebook Server**: a dedicated, single-user, Jupyter Notebook server is
started for each user on the system when the user logs in. The object that
starts these servers is called a **Spawner**.
- **Proxy**: the public facing part of JupyterHub that uses a dynamic proxy
to route HTTP requests to the Hub and Single User Notebook Servers.
- **Hub**: manages user accounts, authentication, and coordinates Single User
Notebook Servers using a Spawner.
![JupyterHub subsystems](images/jhub-parts.png)
### Deployment server
To use JupyterHub, you need a Unix server (typically Linux) running somewhere
that is accessible to your team on the network. The JupyterHub server can be
on an internal network at your organization, or it can run on the public
internet (in which case, take care with the Hub's
[security](getting-started.html#security)).
### Basic operation
Users access JupyterHub through a web browser, by going to the IP address or
the domain name of the server.
Basic principles of operation:
* Hub spawns proxy
* Proxy forwards all requests to hub by default
* Hub handles login, and spawns single-user servers on demand
* Hub configures proxy to forward url prefixes to single-user servers
Different **[authenticators](authenticators.html)** control access
to JupyterHub. The default one (PAM) uses the user accounts on the server where
JupyterHub is running. If you use this, you will need to create a user account
on the system for each user on your team. Using other authenticators, you can
allow users to sign in with e.g. a GitHub account, or with any single-sign-on
system your organization has.
Next, **[spawners](spawners.html)** control how JupyterHub starts
the individual notebook server for each user. The default spawner will
start a notebook server on the same machine running under their system username.
The other main option is to start each server in a separate container, often
using Docker.
### Default behavior
**IMPORTANT: You should not run JupyterHub without SSL encryption on a public network.**
See [Security documentation](#security) for how to configure JupyterHub to use SSL,
or put it behind SSL termination in another proxy server, such as nginx.
---
**Deprecation note:** Removed `--no-ssl` in version 0.7.
JupyterHub versions 0.5 and 0.6 require extra confirmation via `--no-ssl` to
allow running without SSL using the command `jupyterhub --no-ssl`. The
`--no-ssl` command line option is not needed anymore in version 0.7.
---
To start JupyterHub in its default configuration, type the following at the command line:
```bash
sudo jupyterhub
```
The default Authenticator that ships with JupyterHub authenticates users
with their system name and password (via [PAM][]).
@@ -42,9 +95,8 @@ These servers listen on localhost, and start in the given user's home directory.
By default, the **Proxy** listens on all public interfaces on port 8000.
Thus you can reach JupyterHub through either:
- `http://localhost:8000`
- or any other public IP or domain pointing to your system.
In their default configuration, the other services, the **Hub** and **Single-User Servers**,
all communicate with each other on localhost only.
@@ -53,16 +105,41 @@ By default, starting JupyterHub will write two files to disk in the current work
- `jupyterhub.sqlite` is the sqlite database containing all of the state of the **Hub**.
This file allows the **Hub** to remember what users are running and where,
as well as other information enabling you to restart parts of JupyterHub separately. It is
important to note that this database contains *no* sensitive information other than **Hub**
usernames.
- `jupyterhub_cookie_secret` is the encryption key used for securing cookies.
This file needs to persist in order for restarting the Hub server to avoid invalidating cookies.
Conversely, deleting this file and restarting the server effectively invalidates all login cookies.
The cookie secret file is discussed in the [Cookie Secret documentation](#cookie-secret).
The location of these files can be specified via configuration, discussed below.
## Installation
See the project's [README](https://github.com/jupyterhub/jupyterhub/blob/master/README.md)
for help installing JupyterHub.
### Planning your installation
Prior to beginning installation, it's helpful to consider some of the following:
- deployment system (bare metal, Docker)
- Authentication (PAM, OAuth, etc.)
- Spawner of singleuser notebook servers (Docker, Batch, etc.)
- Services (nbgrader, etc.)
- JupyterHub database (default SQLite; traditional RDBMS such as PostgreSQL,
  MySQL, or other databases supported by [SQLAlchemy](http://www.sqlalchemy.org))
### Folders and File Locations
It is recommended to put all of the files used by JupyterHub into standard
UNIX filesystem locations.
* `/srv/jupyterhub` for all security and runtime files
* `/etc/jupyterhub` for all configuration files
* `/var/log` for log files
## Configuration
JupyterHub is configured in two ways:
@@ -74,12 +151,16 @@ By default, JupyterHub will look for a configuration file (which may not be crea
named `jupyterhub_config.py` in the current working directory.
You can create an empty configuration file with:
```bash
jupyterhub --generate-config
```
This empty configuration file has descriptions of all configuration variables and their default
values. You can load a specific config file with:
```bash
jupyterhub -f /path/to/jupyterhub_config.py
```
See also: [general docs](http://ipython.org/ipython-doc/dev/development/config.html)
on the config system Jupyter uses.
@@ -87,20 +168,26 @@ on the config system Jupyter uses.
### Command-line arguments
Type the following for brief information about the command-line arguments:
```bash
jupyterhub -h
```
or:
```bash
jupyterhub --help-all
```
for the full command line help.
All configurable options are technically configurable on the command-line,
even if some are really inconvenient to type. Just replace the desired option,
`c.Class.trait`, with `--Class.trait`. For example, to configure the
`c.Spawner.notebook_dir` trait from the command-line:
```bash
jupyterhub --Spawner.notebook_dir='~/assignments'
```
## Networking
@@ -113,7 +200,9 @@ instead, use of `'0.0.0.0'` is preferred.
Changing the IP address and port can be done with the following command line
arguments:
```bash
jupyterhub --ip=192.168.1.2 --port=443
```
Or by placing the following lines in a configuration file:
@@ -128,6 +217,7 @@ Configuring only the main IP and port of JupyterHub should be sufficient for mos
However, more customized scenarios may need additional networking details to
be configured.
### Configuring the Proxy's REST API communication IP address and port (optional)
The Hub service talks to the proxy via a REST API on a secondary port,
whose network interface and port can be configured separately.
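A sketch of that separate configuration (the option names below are the `JupyterHub.proxy_api_*` settings of the 0.7 series; the address and port are placeholders, so check `jupyterhub --help-all` for your version):

```python
# jupyterhub_config.py -- bind the proxy's REST API to a specific
# interface and port instead of the defaults (illustrative values)
c.JupyterHub.proxy_api_ip = '10.0.1.4'
c.JupyterHub.proxy_api_port = 5432
```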
@@ -156,17 +246,31 @@ c.JupyterHub.hub_port = 54321
## Security
**IMPORTANT: You should not run JupyterHub without SSL encryption on a public network.**
---
**Deprecation note:** Removed `--no-ssl` in version 0.7.
JupyterHub versions 0.5 and 0.6 require extra confirmation via `--no-ssl` to
allow running without SSL using the command `jupyterhub --no-ssl`. The
`--no-ssl` command line option is not needed anymore in version 0.7.
---
Security is the most important aspect of configuring Jupyter. There are four main aspects of the
security configuration:
1. SSL encryption (to enable HTTPS)
2. Cookie secret (a key for encrypting browser cookies)
3. Proxy authentication token (used for the Hub and other services to authenticate to the Proxy)
4. Periodic security audits
*Note* that the **Hub** hashes all secrets (e.g., auth tokens) before storing them in its
database. A loss of control over read-access to the database should have no security impact
on your deployment.
### SSL encryption
Since JupyterHub includes authentication and allows arbitrary code execution, you should not run
it without SSL (HTTPS). This will require you to obtain an official, trusted SSL certificate or
@@ -178,23 +282,35 @@ c.JupyterHub.ssl_key = '/path/to/my.key'
c.JupyterHub.ssl_cert = '/path/to/my.cert'
```
It is also possible to use letsencrypt (https://letsencrypt.org/) to obtain
a free, trusted SSL certificate. If you run letsencrypt using the default
options, the needed configuration is (replace `mydomain.tld` by your fully
qualified domain name):
```python
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/{mydomain.tld}/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/{mydomain.tld}/fullchain.pem'
```
If the fully qualified domain name (FQDN) is `example.com`, the following
would be the needed configuration:
```python
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/example.com/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/example.com/fullchain.pem'
``` ```
Some cert files also contain the key, in which case only the cert is needed. It is important that
these files be put in a secure location on your server, where they are not readable by regular
users.
Note on **chain certificates**: If you are using a chain certificate, see also
[chained certificate for SSL](troubleshooting.md#chained-certificates-for-ssl) in the JupyterHub troubleshooting FAQ.
Note: In certain cases, e.g. **behind SSL termination in nginx**, allowing no SSL
running on the hub may be desired.
### Cookie secret
The cookie secret is an encryption key, used to encrypt the browser cookies used for
authentication. If this value changes for the Hub, all single-user servers must also be restarted.
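One common approach (a sketch, not the only option) is to generate the secret once, keep it in the runtime directory, and point the Hub at it with `c.JupyterHub.cookie_secret_file`, as in the deployment example earlier in this guide:

```bash
# generate a persistent cookie secret (the path is an example)
openssl rand -hex 32 > /srv/jupyterhub/cookie_secret
chmod 600 /srv/jupyterhub/cookie_secret
```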
@@ -230,15 +346,20 @@ For security reasons, this environment variable should only be visible to the Hu
If you set it dynamically as above, all users will be logged out each time the
Hub starts.
You can also set the cookie secret in the configuration file itself, `jupyterhub_config.py`,
as a binary string:
```python
c.JupyterHub.cookie_secret = bytes.fromhex('VERY LONG SECRET HEX STRING')
```
### Proxy authentication token
The Hub authenticates its requests to the Proxy using a secret token that
the Hub and Proxy agree upon. The value of this string should be a random
string (for example, generated by `openssl rand -hex 32`). You can pass
this value to the Hub and Proxy using either the `CONFIGPROXY_AUTH_TOKEN`
environment variable:
```bash
export CONFIGPROXY_AUTH_TOKEN=`openssl rand -hex 32`
@@ -246,7 +367,7 @@ export CONFIGPROXY_AUTH_TOKEN=`openssl rand -hex 32`
This environment variable needs to be visible to the Hub and Proxy.
Or you can set the value in the configuration file, `jupyterhub_config.py`:
```python
c.JupyterHub.proxy_auth_token = '0bc02bede919e99a26de1e2a7a5aadfaf6228de836ec39a05a6c6942831d8fe5'
@@ -256,12 +377,27 @@ If you don't set the Proxy authentication token, the Hub will generate a random
means that any time you restart the Hub you **must also restart the Proxy**. If the proxy is a
subprocess of the Hub, this should happen automatically (this is the default configuration).
Another time you must set the Proxy authentication token yourself is if
you want other services, such as [nbgrader](https://github.com/jupyter/nbgrader)
to also be able to connect to the Proxy.
### Security audits
We recommend that you do periodic reviews of your deployment's security. It's
good practice to keep JupyterHub, configurable-http-proxy, and nodejs
versions up to date.
A handy website for testing your deployment is
[Qualsys' SSL analyzer tool](https://www.ssllabs.com/ssltest/analyze.html).
## Authentication and users
The default Authenticator uses [PAM][] to authenticate system users with
their username and password. The default behavior of this Authenticator
is to allow any user with an account and password on the system to login.
### Creating a whitelist of users
The default Authenticator uses [PAM][] to authenticate system users with their username and password.
The default behavior of this Authenticator is to allow any user with an account and password on the system to login.
You can restrict which users are allowed to login with `Authenticator.whitelist`:
@@ -269,6 +405,8 @@ You can restrict which users are allowed to login with `Authenticator.whitelist`
c.Authenticator.whitelist = {'mal', 'zoe', 'inara', 'kaylee'}
```
### Managing Hub administrators
Admin users of JupyterHub have the ability to take actions on users' behalf,
such as stopping and restarting their servers,
and adding and removing new users from the whitelist.
@@ -284,7 +422,10 @@ If `JupyterHub.admin_access` is True (not default),
then admin users have permission to log in *as other users* on their respective machines, for debugging.
**You should make sure your users know if admin_access is enabled.**
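For example, admins and admin access can be set in `jupyterhub_config.py` (the usernames below are placeholders; both options appear elsewhere in this guide):

```python
# designate Hub administrators
c.Authenticator.admin_users = {'jhamrick', 'rgbkrk'}
# optional, off by default: allow admins to log in as other users for debugging
c.JupyterHub.admin_access = True
```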
Note: additional configuration examples are provided in this guide's
[Configuration Examples section](./config-examples.html).
### Add or remove users from the Hub
Users can be added and removed to the Hub via the admin panel or REST API. These users will be
added to the whitelist and database. Restarting the Hub will not require manually updating the
@@ -308,7 +449,7 @@ hosted deployments of JupyterHub, to avoid the need to manually create all your
launching the service. It is not recommended when running JupyterHub in situations where
JupyterHub users map directly onto UNIX users.
## Spawners and single-user notebook servers
Since the single-user server is an instance of `jupyter notebook`, an entire separate
multi-process application, there are many aspects of that server you can configure, and a lot of ways
@@ -344,96 +485,42 @@ which is the place to put configuration that you want to affect all of your user
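As a hedged illustration of that kind of per-user configuration, the two `Spawner` options below are the same ones used in the deployment example earlier in this guide (the values are placeholders):

```python
# start each single-user server in the user's ~/assignments directory
c.Spawner.notebook_dir = '~/assignments'
# extra arguments passed to the single-user notebook server
c.Spawner.args = ['--NotebookApp.default_url=/notebooks/Welcome.ipynb']
```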
## External services
JupyterHub has a REST API that can be used by external services like the
[cull_idle_servers](https://github.com/jupyterhub/jupyterhub/blob/master/examples/cull-idle/cull_idle_servers.py)
script which monitors and kills idle single-user servers periodically. In order to run such an
external service, you need to provide it an API token. In the case of `cull_idle_servers`, it is passed
as the environment variable called `JPY_API_TOKEN`.
Currently there are two ways of registering that token with JupyterHub. The first one is to use
the `jupyterhub` command to generate a token for a specific hub user:
## File locations
It is recommended to put all of the files used by JupyterHub into standard UNIX filesystem locations.
* `/srv/jupyterhub` for all security and runtime files
* `/etc/jupyterhub` for all configuration files
* `/var/log` for log files
## Example
In the following example, we show a configuration files for a fairly standard JupyterHub deployment with the following assumptions:
* JupyterHub is running on a single cloud server
* Using SSL on the standard HTTPS port 443
* You want to use [GitHub OAuth][oauthenticator] for login
* You need the users to exist locally on the server
* You want users' notebooks to be served from `~/assignments` to allow users to browse for notebooks within
other users home directories
* You want the landing page for each user to be a Welcome.ipynb notebook in their assignments directory.
* All runtime files are put into `/srv/jupyterhub` and log files in `/var/log`.
Let's start out with `jupyterhub_config.py`:
```python
# jupyterhub_config.py
c = get_config()
import os
pjoin = os.path.join
runtime_dir = os.path.join('/srv/jupyterhub')
ssl_dir = pjoin(runtime_dir, 'ssl')
if not os.path.exists(ssl_dir):
os.makedirs(ssl_dir)
# https on :443
c.JupyterHub.port = 443
c.JupyterHub.ssl_key = pjoin(ssl_dir, 'ssl.key')
c.JupyterHub.ssl_cert = pjoin(ssl_dir, 'ssl.cert')
# put the JupyterHub cookie secret and state db
# in /var/run/jupyterhub
c.JupyterHub.cookie_secret_file = pjoin(runtime_dir, 'cookie_secret')
c.JupyterHub.db_url = pjoin(runtime_dir, 'jupyterhub.sqlite')
# or `--db=/path/to/jupyterhub.sqlite` on the command-line
# put the log file in /var/log
c.JupyterHub.log_file = '/var/log/jupyterhub.log'
# use GitHub OAuthenticator for local users
c.JupyterHub.authenticator_class = 'oauthenticator.LocalGitHubOAuthenticator'
c.GitHubOAuthenticator.oauth_callback_url = os.environ['OAUTH_CALLBACK_URL']
# create system users that don't exist yet
c.LocalAuthenticator.create_system_users = True
# specify users and admin
c.Authenticator.whitelist = {'rgbkrk', 'minrk', 'jhamrick'}
c.Authenticator.admin_users = {'jhamrick', 'rgbkrk'}
# start single-user notebook servers in ~/assignments,
# with ~/assignments/Welcome.ipynb as the default landing page
# this config could also be put in
# /etc/ipython/ipython_notebook_config.py
c.Spawner.notebook_dir = '~/assignments'
c.Spawner.args = ['--NotebookApp.default_url=/notebooks/Welcome.ipynb']
```
Using the GitHub Authenticator [requires a few additional env variables][oauth-setup],
which we will need to set when we launch the server:
```bash
export GITHUB_CLIENT_ID=github_id
export GITHUB_CLIENT_SECRET=github_secret
export OAUTH_CALLBACK_URL=https://example.com/hub/oauth_callback
export CONFIGPROXY_AUTH_TOKEN=super-secret
jupyterhub -f /path/to/aboveconfig.py
```
# Further reading
- [Custom Authenticators](./authenticators.html)
- [Custom Spawners](./spawners.html)
- [Troubleshooting](./troubleshooting.html)
[oauth-setup]: https://github.com/jupyter/oauthenticator#setup
```bash
jupyterhub token <username>
```
As of [version 0.6.0](./changelog.html), the preferred way of doing this is to first generate an API token:
```bash
openssl rand -hex 32
```
and then write it to your JupyterHub configuration file (note that the **key** is the token while the **value** is the username):
[oauthenticator]: https://github.com/jupyter/oauthenticator
```python
c.JupyterHub.api_tokens = {'token' : 'username'}
```
Upon restarting JupyterHub, you should see a message like below in the logs:
```
Adding API token for <username>
```
Now you can run your script, i.e. `cull_idle_servers`, by providing it the API token and it will authenticate through
the REST API to interact with it.
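For instance, a rough invocation of the culling script might look like the following (the token value and timeout are placeholders; `JPY_API_TOKEN` and the `--timeout` flag are the ones mentioned above):

```bash
export JPY_API_TOKEN='the-token-you-added-to-api_tokens'
python cull_idle_servers.py --timeout=3600
```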
[oauth-setup]: https://github.com/jupyterhub/oauthenticator#setup
[oauthenticator]: https://github.com/jupyterhub/oauthenticator
[PAM]: https://en.wikipedia.org/wiki/Pluggable_authentication_module

View File

@@ -1,11 +1,11 @@
# How JupyterHub works
JupyterHub is a multi-user server that manages and proxies multiple instances of the single-user Jupyter notebook server.
There are three basic processes involved:
- multi-user Hub (Python/Tornado)
- [configurable http proxy](https://github.com/jupyterhub/configurable-http-proxy) (node-http-proxy)
- multiple single-user IPython notebook servers (Python/IPython/Tornado)
The proxy is the only process that listens on a public interface.
@@ -57,9 +57,9 @@ or at least with access to the PAM service,
which regular users typically do not have
(on Ubuntu, this requires being added to the `shadow` group).
[More info on custom Authenticators](authenticators.html).
See a list of custom Authenticators [on the wiki](https://github.com/jupyterhub/jupyterhub/wiki/Authenticators).
### Spawning
@@ -72,6 +72,6 @@ and needs to be able to take three actions:
2. poll whether the process is still running
3. stop the process
[More info on custom Spawners](spawners.html).
See a list of custom Spawners [on the wiki](https://github.com/jupyterhub/jupyterhub/wiki/Spawners).
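A custom Spawner is essentially a class implementing those three actions. A very rough sketch of its shape (the method names are real, but the bodies and exact return conventions vary between JupyterHub versions, so treat this as an outline rather than a working spawner):

```python
from tornado import gen
from jupyterhub.spawner import Spawner

class MySpawner(Spawner):
    @gen.coroutine
    def start(self):
        # 1. start the single-user server process for self.user
        ...

    @gen.coroutine
    def poll(self):
        # 2. return None if the process is running, or its exit status if not
        ...

    @gen.coroutine
    def stop(self, now=False):
        # 3. stop the process
        ...
```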

(Binary image files changed: two images added, one image replaced; contents not shown.)

View File

@@ -1,94 +1,116 @@
JupyterHub
==========
With JupyterHub you can create a **multi-user Hub** which spawns, manages,
and proxies multiple instances of the single-user
`Jupyter notebook <https://jupyter-notebook.readthedocs.io/en/latest/>`_ server.
Due to its flexibility and customization options, JupyterHub can be used to
serve notebooks to a class of students, a corporate data science group, or a
scientific research group.
JupyterHub is a server that gives multiple users access to Jupyter notebooks,
running an independent Jupyter notebook server for each user.
To use JupyterHub, you need a Unix server (typically Linux) running
somewhere that is accessible to your team on the network. The JupyterHub server
can be on an internal network at your organisation, or it can run on the public
internet (in which case, take care with `security <getting-started.html#security>`__).
Users access JupyterHub in a web browser, by going to the IP address or
domain name of the server.
Different :doc:`authenticators <authenticators>` control access
to JupyterHub. The default one (pam) uses the user accounts on the server where
JupyterHub is running. If you use this, you will need to create a user account
on the system for each user on your team. Using other authenticators, you can
allow users to sign in with e.g. a Github account, or with any single-sign-on
system your organisation has.
Next, :doc:`spawners <spawners>` control how JupyterHub starts
the individual notebook server for each user. The default spawner will
start a notebook server on the same machine running under their system username.
The other main option is to start each server in a separate container, often
using Docker.
JupyterHub runs as three separate parts:
* The multi-user Hub (Python & Tornado)
* A `configurable http proxy <https://github.com/jupyter/configurable-http-proxy>`_ (NodeJS)
* Multiple single-user Jupyter notebook servers (Python & Tornado)
Basic principles:
* Hub spawns proxy
* Proxy forwards ~all requests to hub by default
* Hub handles login, and spawns single-user servers on demand
* Hub configures proxy to forward url prefixes to single-user servers
.. image:: images/jhub-parts.png
   :alt: JupyterHub subsystems
   :width: 40%
   :align: right
Three subsystems make up JupyterHub:
* a multi-user **Hub** (tornado process)
* a **configurable http proxy** (node-http-proxy)
* multiple **single-user Jupyter notebook servers** (Python/IPython/tornado)
JupyterHub's basic flow of operations includes:
- The Hub spawns a proxy
- The proxy forwards all requests to the Hub by default
- The Hub handles user login and spawns single-user servers on demand
- The Hub configures the proxy to forward URL prefixes to the single-user notebook servers
For convenient administration of the Hub, its users, and :doc:`services`
(added in version 0.7), JupyterHub also provides a
`REST API <http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default>`__.
Contents
--------
**User Guide**
* :doc:`quickstart`
* :doc:`getting-started`
* :doc:`howitworks`
* :doc:`websecurity`
* :doc:`rest`
.. toctree::
   :maxdepth: 2
   :hidden:
   :caption: User Guide

   quickstart
   getting-started
   howitworks
   websecurity
   rest
**Configuration Guide**
* :doc:`authenticators`
* :doc:`spawners`
* :doc:`services`
* :doc:`config-examples`
* :doc:`upgrading`
* :doc:`troubleshooting`
.. toctree::
   :maxdepth: 2
   :hidden:
   :caption: Configuration Guide

   authenticators
   spawners
   services
   config-examples
   upgrading
   troubleshooting
.. toctree::
:maxdepth: 1
:caption: Developer Documentation
api/index
.. toctree::
:maxdepth: 1
:caption: Community documentation
**API Reference**
* :doc:`api/index`
.. toctree::
   :maxdepth: 2
   :hidden:
   :caption: API Reference

   api/index
**About JupyterHub**
* :doc:`changelog`
* :doc:`contributor-list`
.. toctree::
   :maxdepth: 2
   :hidden:
   :caption: About JupyterHub

   changelog
   contributor-list
.. toctree::
   :maxdepth: 1
   :caption: Questions? Suggestions?

   Jupyter mailing list <https://groups.google.com/forum/#!forum/jupyter>
   Jupyter website <https://jupyter.org>
   Stack Overflow - Jupyter <https://stackoverflow.com/questions/tagged/jupyter>
   Stack Overflow - Jupyter-notebook <https://stackoverflow.com/questions/tagged/jupyter-notebook>
Indices and tables
------------------

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
Questions? Suggestions?
-----------------------
- `Jupyter mailing list <https://groups.google.com/forum/#!forum/jupyter>`_
- `Jupyter website <https://jupyter.org>`_

160
docs/source/quickstart.md Normal file
View File

@@ -0,0 +1,160 @@
# Quickstart - Installation
## Prerequisites
**Before installing JupyterHub**, you will need:
- [Python](https://www.python.org/downloads/) 3.3 or greater
An understanding of using [`pip`](https://pip.pypa.io/en/stable/) or
[`conda`](http://conda.pydata.org/docs/get-started.html) for
installing Python packages is helpful.
- [nodejs/npm](https://www.npmjs.com/)
[Install nodejs/npm](https://docs.npmjs.com/getting-started/installing-node),
using your operating system's package manager. For example, install on Linux
(Debian/Ubuntu) using:
```bash
sudo apt-get install npm nodejs-legacy
```
(The `nodejs-legacy` package installs the `node` executable and is currently
required for npm to work on Debian/Ubuntu.)
- TLS certificate and key for HTTPS communication
- Domain name
**Before running the single-user notebook servers** (which may be on the same
system as the Hub or not):
- [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html)
version 4 or greater
## Installation
JupyterHub can be installed with `pip` or `conda` and the proxy with `npm`:
**pip, npm:**
```bash
python3 -m pip install jupyterhub
npm install -g configurable-http-proxy
```
**conda** (one command installs jupyterhub and proxy):
```bash
conda install -c conda-forge jupyterhub
```
To test your installation:
```bash
jupyterhub -h
configurable-http-proxy -h
```
If you plan to run notebook servers locally, you will need also to install
Jupyter notebook:
**pip:**
```bash
python3 -m pip install notebook
```
**conda:**
```bash
conda install notebook
```
## Start the Hub server
To start the Hub server, run the command:
```bash
jupyterhub
```
Visit `https://localhost:8000` in your browser, and sign in with your unix
credentials.
To allow multiple users to sign into the Hub server, you must start `jupyterhub` as a *privileged user*, such as root:
```bash
sudo jupyterhub
```
The [wiki](https://github.com/jupyterhub/jupyterhub/wiki/Using-sudo-to-run-JupyterHub-without-root-privileges)
describes how to run the server as a *less privileged user*, which requires
additional configuration of the system.
----
## Basic Configuration
The [getting started document](docs/source/getting-started.md) contains
detailed information about configuring a JupyterHub deployment.
The JupyterHub **tutorial** provides a video and documentation that explains
and illustrates the fundamental steps for installation and configuration.
[Repo](https://github.com/jupyterhub/jupyterhub-tutorial)
| [Tutorial documentation](http://jupyterhub-tutorial.readthedocs.io/en/latest/)
#### Generate a default configuration file
Generate a default config file:
jupyterhub --generate-config
#### Customize the configuration, authentication, and process spawning
Spawn the server on ``10.0.1.2:443`` with **https**:
jupyterhub --ip 10.0.1.2 --port 443 --ssl-key my_ssl.key --ssl-cert my_ssl.cert
The authentication and process spawning mechanisms can be replaced,
which should allow plugging into a variety of authentication or process
control environments. Some examples, meant as illustration and testing of this
concept, are:
- Using GitHub OAuth instead of PAM with [OAuthenticator](https://github.com/jupyterhub/oauthenticator)
- Spawning single-user servers with Docker, using the [DockerSpawner](https://github.com/jupyterhub/dockerspawner)
----
## Alternate Installation using Docker
A ready to go [docker image for JupyterHub](https://hub.docker.com/r/jupyterhub/jupyterhub/)
gives a straightforward deployment of JupyterHub.
*Note: This `jupyterhub/jupyterhub` docker image is only an image for running
the Hub service itself. It does not provide the other Jupyter components, such
as Notebook installation, which are needed by the single-user servers.
To run the single-user servers, which may be on the same system as the Hub or
not, Jupyter Notebook version 4 or greater must be installed.*
#### Starting JupyterHub with docker
The JupyterHub docker image can be started with the following command:
docker run -d --name jupyterhub jupyterhub/jupyterhub jupyterhub
This command will create a container named `jupyterhub` that you can
**stop and resume** with `docker stop/start`.
The Hub service will be listening on all interfaces at port 8000, which makes
this a good choice for **testing JupyterHub on your desktop or laptop**.
If you want to run docker on a computer that has a public IP then you should
(as in MUST) **secure it with ssl** by adding ssl options to your docker
configuration or using an ssl enabled proxy.
[Mounting volumes](https://docs.docker.com/engine/userguide/containers/dockervolumes/)
will allow you to **store data outside the docker image (host system) so it will be persistent**,
even when you start a new image.
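A sketch of such a run command, mounting a host directory over the Hub's runtime directory (the paths here are examples only):

```bash
docker run -d --name jupyterhub \
  -v /opt/jupyterhub-data:/srv/jupyterhub \
  jupyterhub/jupyterhub jupyterhub
```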
The command `docker exec -it jupyterhub bash` will spawn a root shell in your
docker container. You can **use the root shell to create system users in the container**.
These accounts will be used for authentication in JupyterHub's default
configuration.

70
docs/source/rest.md Normal file
View File

@@ -0,0 +1,70 @@
# Using JupyterHub's REST API
Using the [JupyterHub REST API][], you can perform actions on the Hub,
such as:
- checking which users are active
- adding or removing users
- stopping or starting single user notebook servers
- authenticating services
A [REST](https://en.wikipedia.org/wiki/Representational_state_transfer)
API provides a standard way for users to get and send information to the
Hub.
## Creating an API token
To send requests using JupyterHub API, you must pass an API token with the
request. You can create a token for an individual user using the following
command:
jupyterhub token USERNAME
## Adding tokens to the config file
You may also add a dictionary of API tokens and usernames to the hub's
configuration file, `jupyterhub_config.py`:
```python
c.JupyterHub.api_tokens = {
'secret-token': 'username',
}
```
## Making an API request
To authenticate your requests, pass the API token in the request's
Authorization header.
**Example: List the hub's users**
Using the popular Python requests library, the following code sends an API
request and an API token for authorization:
```python
import requests
api_url = 'http://127.0.0.1:8081/hub/api'
r = requests.get(api_url + '/users',
headers={
'Authorization': 'token %s' % token,
}
)
r.raise_for_status()
users = r.json()
```
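The same request can also be made from the command line, for example with `curl` (a sketch; substitute a real API token for `$TOKEN`):

```bash
curl -H "Authorization: token $TOKEN" http://127.0.0.1:8081/hub/api/users
```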
## Learning more about the API
You can see the full [JupyterHub REST API][] for details.
The same REST API Spec can be viewed in a more interactive style [on swagger's petstore][].
Both resources contain the same information and differ only in their display.
Note: The Swagger specification is being renamed the [OpenAPI Initiative][].
[on swagger's petstore]: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/jupyterhub/jupyterhub/master/docs/rest-api.yml#!/default
[OpenAPI Initiative]: https://www.openapis.org/
[JupyterHub REST API]: ./_static/rest-api/index.html

357
docs/source/services.md Normal file
View File

@@ -0,0 +1,357 @@
# Services
With version 0.7, JupyterHub adds support for **Services**.
This section provides the following information about Services:
- [Definition of a Service](services.html#definition-of-a-service)
- [Properties of a Service](services.html#properties-of-a-service)
- [Hub-Managed Services](services.html#hub-managed-services)
- [Launching a Hub-Managed Service](services.html#launching-a-hub-managed-service)
- [Externally-Managed Services](services.html#externally-managed-services)
- [Writing your own Services](services.html#writing-your-own-services)
- [Hub Authentication and Services](services.html#hub-authentication-and-services)
## Definition of a Service
When working with JupyterHub, a **Service** is defined as a process that interacts
with the Hub's REST API. A Service may perform a specific
action or task. For example, the following tasks can each be a unique Service:
- shutting down individuals' single user notebook servers that have been idle
for some time
- registering additional web servers which should use the Hub's authentication
and be served behind the Hub's proxy.
Two key features help define a Service:
- Is the Service **managed** by JupyterHub?
- Does the Service have a web server that should be added to the proxy's
table?
Currently, these characteristics distinguish two types of Services:
- A **Hub-Managed Service** which is managed by JupyterHub
- An **Externally-Managed Service** which runs its own web server and
communicates operation instructions via the Hub's API.
## Properties of a Service
A Service may have the following properties:
- `name: str` - the name of the service
- `admin: bool (default - false)` - whether the service should have
administrative privileges
- `url: str (default - None)` - The URL where the service is/should be. If a
url is specified for where the Service runs its own web server,
the service will be added to the proxy at `/services/:name`.
If a service is also to be managed by the Hub, it has a few extra options:
- `command: (str/Popen list)` - Command for JupyterHub to spawn the service.
- Only use this if the service should be a subprocess.
- If command is not specified, the Service is assumed to be managed
externally.
- If a command is specified for launching the Service, the Service will
be started and managed by the Hub.
- `environment: dict` - additional environment variables for the Service.
- `user: str` - the name of a system user to manage the Service. If
unspecified, run as the same user as the Hub.
## Hub-Managed Services
A **Hub-Managed Service** is started by the Hub, and the Hub is responsible
for the Service's actions. A Hub-Managed Service can only be a local
subprocess of the Hub. The Hub will take care of starting the process and
restarts it if it stops.
While Hub-Managed Services share some similarities with notebook Spawners,
there are no plans for Hub-Managed Services to support the same spawning
abstractions as a notebook Spawner.
If you wish to run a Service in a Docker container or other deployment
environments, the Service can be registered as an
**Externally-Managed Service**, as described below.
## Launching a Hub-Managed Service
A Hub-Managed Service is characterized by its specified `command` for launching
the Service. For example, a 'cull idle' notebook server task configured as a
Hub-Managed Service would include:
- the Service name,
- admin permissions, and
- the `command` to launch the Service which will cull idle servers after a
timeout interval
This example would be configured as follows in `jupyterhub_config.py`:
```python
c.JupyterHub.services = [
{
'name': 'cull-idle',
'admin': True,
'command': ['python', '/path/to/cull-idle.py', '--timeout']
}
]
```
A Hub-Managed Service may also be configured with additional optional
parameters, which describe the environment needed to start the Service process:
- `environment: dict` - additional environment variables for the Service.
- `user: str` - name of the user to run the server if different from the Hub.
Requires Hub to be root.
- `cwd: path` directory in which to run the Service, if different from the
Hub directory.
The Hub will pass the following environment variables to launch the Service:
```bash
JUPYTERHUB_SERVICE_NAME: The name of the service
JUPYTERHUB_API_TOKEN: API token assigned to the service
JUPYTERHUB_API_URL: URL for the JupyterHub API (default, http://127.0.0.1:8080/hub/api)
JUPYTERHUB_BASE_URL: Base URL of the Hub (https://mydomain[:port]/)
JUPYTERHUB_SERVICE_PREFIX: URL path prefix of this service (/services/:service-name/)
JUPYTERHUB_SERVICE_URL: Local URL where the service is expected to be listening.
Only for proxied web services.
```
For the previous 'cull idle' Service example, these environment variables
would be passed to the Service when the Hub starts the 'cull idle' Service:
```bash
JUPYTERHUB_SERVICE_NAME: 'cull-idle'
JUPYTERHUB_API_TOKEN: API token assigned to the service
JUPYTERHUB_API_URL: http://127.0.0.1:8081/hub/api
JUPYTERHUB_BASE_URL: https://mydomain[:port]
JUPYTERHUB_SERVICE_PREFIX: /services/cull-idle/
```
See the JupyterHub GitHub repo for additional information about the
[`cull-idle` example](https://github.com/jupyterhub/jupyterhub/tree/master/examples/cull-idle).
## Externally-Managed Services
You may prefer to use your own service management tools, such as Docker or
systemd, to manage a JupyterHub Service. These **Externally-Managed
Services**, unlike Hub-Managed Services, are not subprocesses of the Hub. You
must tell JupyterHub which API token the Externally-Managed Service is using
to perform its API requests. Each Externally-Managed Service will need a
unique API token, because the Hub authenticates each API request and the API
token is used to identify the originating Service or user.
A configuration example of an Externally-Managed Service with admin access and
running its own web server is:
```python
c.JupyterHub.services = [
    {
        'name': 'my-web-service',
        'url': 'https://10.0.1.1:1984',
        'api_token': 'super-secret',
    }
]
```
In this case, the `url` field will be passed along to the Service as
`JUPYTERHUB_SERVICE_URL`.
## Writing your own Services
When writing your own services, you have a few decisions to make (in addition
to what your service does!):
1. Does my service need a public URL?
2. Do I want JupyterHub to start/stop the service?
3. Does my service need to authenticate users?
When a Service is managed by JupyterHub, the Hub will pass the necessary
information to the Service via the environment variables described above. A
flexible Service, whether managed by the Hub or not, can make use of these
same environment variables.
When you run a service that has a url, it will be accessible under a
`/services/` prefix, such as `https://myhub.horse/services/my-service/`. For
your service to route proxied requests properly, it must take
`JUPYTERHUB_SERVICE_PREFIX` into account when routing requests. For example, a
web service would normally serve its root handler at `'/'`, but the proxied
service would need to serve `JUPYTERHUB_SERVICE_PREFIX + '/'`.
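As a minimal sketch of prefix-aware routing, a proxied web service might read
the Hub-provided environment variables described above and mount all of its
handlers under the prefix (the fallback values below are purely illustrative):
```python
import os
from urllib.parse import urlparse

# Provided by the Hub for a managed, proxied Service; fallbacks are illustrative.
prefix = os.environ.get('JUPYTERHUB_SERVICE_PREFIX', '/services/my-service/')
service_url = urlparse(os.environ.get('JUPYTERHUB_SERVICE_URL', 'http://127.0.0.1:10101'))

# Routes must live under the prefix, e.g. the service's root and health handlers:
routes = {prefix: 'root handler', prefix + 'health': 'health handler'}

# The server should listen where the proxy expects to find it:
print("listening on %s:%s, serving %s"
      % (service_url.hostname, service_url.port, sorted(routes)))
```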
## Hub Authentication and Services
JupyterHub 0.7 introduces some utilities for using the Hub's authentication
mechanism to govern access to your service. When a user logs into JupyterHub,
the Hub sets a **cookie (`jupyterhub-services`)**. The service can use this
cookie to authenticate requests.
JupyterHub ships with a reference implementation of Hub authentication that
can be used by services. You may go beyond this reference implementation and
create custom hub-authenticating clients and services. We describe the process
below.
The reference, or base, implementation is the [`HubAuth`][HubAuth] class,
which implements the requests to the Hub.
To use HubAuth, you must set the `.api_token`, either programmatically when constructing the class,
or via the `JUPYTERHUB_API_TOKEN` environment variable.
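For example (a sketch; when the Hub launches the Service, `JUPYTERHUB_API_TOKEN`
is already set in its environment):
```python
import os
from jupyterhub.services.auth import HubAuth

# pass the token explicitly...
auth = HubAuth(api_token=os.environ['JUPYTERHUB_API_TOKEN'])
# ...or omit api_token and let HubAuth pick it up from JUPYTERHUB_API_TOKEN
```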
Most of the logic for authentication implementation is found in the
[`HubAuth.user_for_cookie`](services.auth.html#jupyterhub.services.auth.HubAuth.user_for_cookie)
method, which makes a request of the Hub, and returns:
- None, if no user could be identified, or
- a dict of the following form:
```python
{
    "name": "username",
    "groups": ["list", "of", "groups"],
    "admin": False,  # or True
}
```
You are then free to use the returned user information to take appropriate
action.
HubAuth also caches the Hub's response for a number of seconds,
configurable by the `cookie_cache_max_age` setting (default: five minutes).
### Flask Example
Suppose, for example, you have a Flask service that returns information about a user.
JupyterHub's HubAuth class can be used to authenticate requests to the Flask
service. See the `service-whoami-flask` example in the
[JupyterHub GitHub repo](https://github.com/jupyterhub/jupyterhub/tree/master/examples/service-whoami-flask)
for more details.
```python
from functools import wraps
import json
import os
from urllib.parse import quote

from flask import Flask, redirect, request, Response

from jupyterhub.services.auth import HubAuth

prefix = os.environ.get('JUPYTERHUB_SERVICE_PREFIX', '/')

auth = HubAuth(
    api_token=os.environ['JUPYTERHUB_API_TOKEN'],
    cookie_cache_max_age=60,
)

app = Flask(__name__)


def authenticated(f):
    """Decorator for authenticating with the Hub"""
    @wraps(f)
    def decorated(*args, **kwargs):
        cookie = request.cookies.get(auth.cookie_name)
        if cookie:
            user = auth.user_for_cookie(cookie)
        else:
            user = None
        if user:
            return f(user, *args, **kwargs)
        else:
            # redirect to login url on failed auth
            return redirect(auth.login_url + '?next=%s' % quote(request.path))
    return decorated


@app.route(prefix + '/')
@authenticated
def whoami(user):
    return Response(
        json.dumps(user, indent=1, sort_keys=True),
        mimetype='application/json',
    )
```
### Authenticating tornado services with JupyterHub
Since most Jupyter services are written with tornado,
we include a mixin class, [`HubAuthenticated`][HubAuthenticated],
for quickly authenticating your own tornado services with JupyterHub.
Tornado's `@web.authenticated` decorator calls a Handler's `.get_current_user`
method to identify the user. Mixing in `HubAuthenticated` defines
`get_current_user` to use HubAuth. If you want to configure the HubAuth
instance beyond the default, you'll want to define an `initialize` method,
such as:
```python
class MyHandler(HubAuthenticated, web.RequestHandler):
    hub_users = {'inara', 'mal'}

    def initialize(self, hub_auth):
        self.hub_auth = hub_auth

    @web.authenticated
    def get(self):
        ...
```
The HubAuth will automatically load the desired configuration from the Service
environment variables.
If you want to limit user access, you can whitelist users through either the
`.hub_users` attribute or `.hub_groups`. These are sets that check against the
username and user group list, respectively. If a user matches neither the user
list nor the group list, they will not be allowed access. If both are left
undefined, then any user will be allowed.
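For instance, the following sketch allows members of a hypothetical
`engineering` group, plus two named users:
```python
from tornado import web
from jupyterhub.services.auth import HubAuthenticated


class EngineeringHandler(HubAuthenticated, web.RequestHandler):
    hub_groups = {'engineering'}   # any member of this group is allowed
    hub_users = {'inara', 'mal'}   # these users are allowed regardless of group

    @web.authenticated
    def get(self):
        user_model = self.get_current_user()
        self.write("hello, %s" % user_model['name'])
```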
### Implementing your own Authentication with JupyterHub
If you don't want to use the reference implementation
(e.g. you find the implementation a poor fit for your Flask app),
you can implement authentication via the Hub yourself.
We recommend looking at the [`HubAuth`][HubAuth] class implementation for reference,
and taking note of the following process:
1. Retrieve the cookie `jupyterhub-services` from the request.
2. Make an API request `GET /hub/api/authorizations/cookie/jupyterhub-services/cookie-value`,
where cookie-value is the url-encoded value of the `jupyterhub-services` cookie.
This request must be authenticated with a Hub API token in the `Authorization` header.
For example, with [requests][]:
```python
r = requests.get(
    '/'.join(["http://127.0.0.1:8081/hub/api",
              "authorizations/cookie/jupyterhub-services",
              quote(encrypted_cookie, safe=''),
    ]),
    headers={
        'Authorization': 'token %s' % api_token,
    },
)
r.raise_for_status()
user = r.json()
```
3. On success, the reply will be a JSON model describing the user:
```json
{
  "name": "inara",
  "groups": ["serenity", "guild"]
}
```
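Putting the steps together, a framework-agnostic helper might look like the
following sketch (treating a 404 response as "no user found" is an assumption;
adjust the error handling to your needs):
```python
import os
from urllib.parse import quote

import requests


def user_for_cookie(encrypted_cookie):
    """Return the Hub's user model for a jupyterhub-services cookie, or None."""
    hub_api = os.environ.get('JUPYTERHUB_API_URL', 'http://127.0.0.1:8081/hub/api')
    r = requests.get(
        '/'.join([hub_api,
                  'authorizations/cookie/jupyterhub-services',
                  quote(encrypted_cookie, safe=''),
        ]),
        headers={'Authorization': 'token %s' % os.environ['JUPYTERHUB_API_TOKEN']},
    )
    if r.status_code == 404:
        return None  # assumed: the Hub found no user for this cookie
    r.raise_for_status()
    return r.json()
```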
An example of using an Externally-Managed Service and authentication is
[nbviewer](https://github.com/jupyter/nbviewer#securing-the-notebook-viewer),
and an example of its configuration is found [here](https://github.com/jupyter/nbviewer/blob/master/nbviewer/providers/base.py#L94).
nbviewer can also be run as a Hub-Managed Service as described [here](https://github.com/jupyter/nbviewer#securing-the-notebook-viewer).
[requests]: http://docs.python-requests.org/en/master/
[services_auth]: api/services.auth.html
[HubAuth]: api/services.auth.html#jupyterhub.services.auth.HubAuth
[HubAuthenticated]: api/services.auth.html#jupyterhub.services.auth.HubAuthenticated


@@ -1,4 +1,4 @@
# Spawners
A [Spawner][] starts each single-user notebook server.
The Spawner represents an abstract interface to a process,
@@ -8,21 +8,25 @@ and a custom Spawner needs to be able to take three actions:
- poll whether the process is still running
- stop the process
## Examples
Custom Spawners for JupyterHub can be found on the [JupyterHub wiki](https://github.com/jupyterhub/jupyterhub/wiki/Spawners).
Some examples include:
- [DockerSpawner](https://github.com/jupyterhub/dockerspawner) for spawning user servers in Docker containers
  * `dockerspawner.DockerSpawner` for spawning identical Docker containers for
    each user
  * `dockerspawner.SystemUserSpawner` for spawning Docker containers with an
    environment and home directory for each user
  * both `DockerSpawner` and `SystemUserSpawner` also work with Docker Swarm for
    launching containers on remote machines
- [SudoSpawner](https://github.com/jupyterhub/sudospawner) enables JupyterHub to
  run without being root, by spawning an intermediate process via `sudo`
- [BatchSpawner](https://github.com/jupyterhub/batchspawner) for spawning remote
  servers using batch systems
- [RemoteSpawner](https://github.com/zonca/remotespawner) to spawn notebooks
  and a remote server and tunnel the port via SSH
## Spawner control methods ## Spawner control methods
@@ -61,11 +65,11 @@ and an integer exit status, otherwise.
For the local process case, `Spawner.poll` uses `os.kill(PID, 0)`
to check if the local process is still running.
### Spawner.stop
`Spawner.stop` should stop the process. It must be a tornado coroutine, which should return when the process has finished exiting.
## Spawner state
JupyterHub should be able to stop and restart without tearing down
@@ -97,6 +101,7 @@ def clear_state(self):
    self.pid = 0
```
## Spawner options form
(new in 0.4)
@@ -113,8 +118,7 @@ If the `Spawner.options_form` is defined, when a user tries to start their serve
If `Spawner.options_form` is undefined, the user's server is spawned directly, and no spawn page is rendered.
See [this example](https://github.com/jupyterhub/jupyterhub/blob/master/examples/spawn-form/jupyterhub_config.py) for a form that allows custom CLI args for the local spawner.
### `Spawner.options_from_form`
@@ -153,8 +157,58 @@ which would return:
}
```
When `Spawner.start` is called, this dictionary is accessible as `self.user_options`.
[Spawner]: https://github.com/jupyterhub/jupyterhub/blob/master/jupyterhub/spawner.py
## Writing a custom spawner
If you are interested in building a custom spawner, you can read [this tutorial](http://jupyterhub-tutorial.readthedocs.io/en/latest/spawners.html).
## Spawners, resource limits, and guarantees (Optional)
Some spawners of the single-user notebook servers allow setting limits or
guarantees on resources, such as CPU and memory. To provide a consistent
experience for sysadmins and users, we provide a standard way to set and
discover these resource limits and guarantees, such as for memory and CPU. For
the limits and guarantees to be useful, the spawner must implement support for
them.
### Memory Limits & Guarantees
`c.Spawner.mem_limit`: A **limit** specifies the *maximum amount of memory*
that may be allocated, though there is no promise that the maximum amount will
be available. In supported spawners, you can set `c.Spawner.mem_limit` to
limit the total amount of memory that a single-user notebook server can
allocate. Attempting to use more memory than this limit will cause errors. The
single-user notebook server can discover its own memory limit by looking at
the environment variable `MEM_LIMIT`, which is specified in absolute bytes.
`c.Spawner.mem_guarantee`: Sometimes, a **guarantee** of a *minimum amount of
memory* is desirable. In this case, you can set `c.Spawner.mem_guarantee`
to provide a guarantee that at minimum this much memory will always be
available for the single-user notebook server to use. The environment variable
`MEM_GUARANTEE` will also be set in the single-user notebook server.
The spawner's underlying system or cluster is responsible for enforcing these
limits and providing these guarantees. If these values are set to `None`, no
limits or guarantees are provided, and no environment values are set.
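Inside the single-user notebook server, the advertised limit can be read back
from the environment, e.g. with this small sketch:
```python
import os

mem_limit = os.environ.get('MEM_LIMIT')
if mem_limit is not None:
    print("memory limit: %d bytes" % int(mem_limit))
else:
    print("no memory limit advertised")
```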
### CPU Limits & Guarantees
`c.Spawner.cpu_limit`: In supported spawners, you can set
`c.Spawner.cpu_limit` to limit the total number of cpu-cores that a
single-user notebook server can use. These can be fractional - `0.5` means 50%
of one CPU core, `4.0` is 4 cpu-cores, etc. This value is also set in the
single-user notebook server's environment variable `CPU_LIMIT`. The limit does
not claim that you will be able to use all the CPU up to your limit as other
higher priority applications might be taking up CPU.
`c.Spawner.cpu_guarantee`: You can set `c.Spawner.cpu_guarantee` to provide a
guarantee for CPU usage. The environment variable `CPU_GUARANTEE` will be set
in the single-user notebook server when a guarantee is being provided.
The spawner's underlying system or cluster is responsible for enforcing these
limits and providing these guarantees. If these values are set to `None`, no
limits or guarantees are provided, and no environment values are set.
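As a quick `jupyterhub_config.py` sketch for a spawner that implements these
traits (the specific values, and the use of byte suffixes such as `'2G'`, are
illustrative assumptions; check your spawner's documentation):
```python
c.Spawner.mem_limit = '2G'         # exposed to the server as MEM_LIMIT (bytes)
c.Spawner.mem_guarantee = '512M'   # exposed as MEM_GUARANTEE
c.Spawner.cpu_limit = 2.0          # exposed as CPU_LIMIT
c.Spawner.cpu_guarantee = 0.5      # exposed as CPU_GUARANTEE
```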


@@ -0,0 +1,213 @@
admin
Afterwards
alchemyst
alope
api
API
apps
args
asctime
auth
authenticator
Authenticator
authenticators
Authenticators
Autograde
autograde
autogradeapp
autograded
Autograded
autograder
Autograder
autograding
backends
Bitdiddle
bugfix
Bugfixes
bugtracker
Carreau
Changelog
changelog
checksum
checksums
cmd
cogsci
conda
config
coroutine
coroutines
crt
customizable
datefmt
decrypted
dev
DockerSpawner
dockerspawner
dropdown
duedate
Duedate
ellachao
ellisonbg
entrypoint
env
Filenames
filesystem
formatters
formdata
formgrade
formgrader
gif
GitHub
Gradebook
gradebook
Granger
hardcoded
hOlle
Homebrew
html
http
https
hubapi
Indices
IFramed
inline
iopub
ip
ipynb
IPython
ischurov
ivanslapnicar
jdfreder
jhamrick
jklymak
jonathanmorgan
joschu
JUPYTER
Jupyter
jupyter
jupyterhub
Kerberos
kerberos
letsencrypt
lgpage
linkcheck
linux
localhost
logfile
login
logins
logout
lookup
lphk
mandli
Marr
mathjax
matplotlib
metadata
mikebolt
minrk
Mitigations
mixin
Mixin
multi
multiuser
namespace
nbconvert
nbgrader
neuroscience
nginx
np
npm
oauth
OAuth
oauthenticator
ok
olgabot
osx
PAM
phantomjs
Phantomjs
plugin
plugins
Popen
positionally
postgres
pregenerated
prepend
prepopulate
preprocessor
Preprocessor
prev
Programmatically
programmatically
ps
py
Qualys
quickstart
readonly
redSlug
reinstall
resize
rst
runtime
rw
sandboxed
sansary
singleuser
smeylan
spawner
Spawner
spawners
Spawners
spellcheck
SQL
sqlite
startup
statsd
stdin
stdout
stoppped
subclasses
subcommand
subdomain
subdomains
Subdomains
suchow
suprocesses
svurens
sys
SystemUserSpawner
systemwide
tasilb
teardown
threadsafe
timestamp
timestamps
TLD
todo
toolbar
traitlets
travis
tuples
undeletable
unicode
uninstall
UNIX
unix
untracked
untrusted
url
username
usernames
utcnow
utils
vinaykola
virtualenv
whitelist
whitespace
wildcard
Wildcards
willingc
wordlist
Workflow
workflow


@@ -1,20 +1,29 @@
# Troubleshooting
This document is under active development.
When troubleshooting, you may see unexpected behaviors or receive an error
message. This section provides links for identifying the cause of the
problem and how to resolve it.
[*Behavior*](#behavior)
- JupyterHub proxy fails to start
- sudospawner fails to run
[*Errors*](#errors)
- 500 error after spawning my single-user server
[*How do I...?*](#how-do-i)
- Use a chained SSL certificate
- Install JupyterHub without a network connection
- I want access to the whole filesystem, but still default users to their home directory
- How do I increase the number of pySpark executors on YARN?
- How do I use JupyterLab's prerelease version with JupyterHub?
- How do I set up JupyterHub for a workshop (when users are not known ahead of time)?
[*Troubleshooting commands*](#troubleshooting-commands)
## Behavior
### JupyterHub proxy fails to start
If you have tried to start the JupyterHub proxy and it fails to start:
@@ -22,13 +31,27 @@ If you have tried to start the JupyterHub proxy and it fails to start:
``c.JupyterHub.ip = '*'``; if it is, try ``c.JupyterHub.ip = ''``
- Try starting with ``jupyterhub --ip=0.0.0.0``
### sudospawner fails to run
If the sudospawner script is not found in the path, sudospawner will not run.
To avoid this, specify sudospawner's absolute path. For example, start
jupyterhub with:
jupyterhub --SudoSpawner.sudospawner_path='/absolute/path/to/sudospawner'
or add:
c.SudoSpawner.sudospawner_path = '/absolute/path/to/sudospawner'
to the config file, `jupyterhub_config.py`.
## Errors
### 500 error after spawning my single-user server
You receive a 500 error when accessing the URL `/user/<your_name>/...`.
This is often seen when your single-user server cannot verify your user cookie
with the Hub.
There are two likely reasons for this:
@@ -36,23 +59,23 @@ There are two likely reasons for this:
configuration problems)
2. The single-user server cannot *authenticate* its requests (invalid token)
#### Symptoms
The main symptom is a failure to load *any* page served by the single-user
server, met with a 500 error. This is typically the first page at `/user/<your_name>`
after logging in or clicking "Start my server". When a single-user notebook server
receives a request, the notebook server makes an API request to the Hub to
check if the cookie corresponds to the right user. This request is logged.
If everything is working, the response logged will be similar to this:
```
200 GET /hub/api/authorizations/cookie/jupyter-hub-token-name/[secret] (@10.0.1.4) 6.10ms
```
You should see a similar 200 message, as above, in the Hub log when you first
visit your single-user notebook server. If you don't see this message in the log, it
may mean that your single-user notebook server isn't connecting to your Hub.
If you see 403 (forbidden) like this, it's a token problem:
@@ -60,12 +83,12 @@ If you see 403 (forbidden) like this, it's a token problem:
403 GET /hub/api/authorizations/cookie/jupyter-hub-token-name/[secret] (@10.0.1.4) 4.14ms
```
Check the logs of the single-user notebook server, which may have more detailed
information on the cause.
#### Causes and resolutions
##### No authorization request
If you make an API request and it is not received by the server, you likely
have a network configuration issue. Often, this happens when the Hub is only
@@ -78,7 +101,7 @@ that all single-user servers can connect to, e.g.:
c.JupyterHub.hub_ip = '10.0.0.1'
```
##### 403 GET /hub/api/authorizations/cookie
If you receive a 403 error, the API token for the single-user server is likely
invalid. Commonly, the 403 error is caused by resetting the JupyterHub
@@ -90,10 +113,162 @@ the container every time. This means that the same API token is used by the
server for its whole life, until the container is rebuilt.
The fix for this Docker case is to remove any Docker containers seeing this
issue (typically all containers created before a certain point in time):
docker rm -f jupyter-name
After this, when you start your server via JupyterHub, it will build a
new container. If this was the underlying cause of the issue, you should see
your server again.
## How do I...?
### Use a chained SSL certificate
Some certificate providers, i.e. Entrust, may provide you with a chained
certificate that contains multiple files. If you are using a chained
certificate you will need to concatenate the individual files by appending the
chain cert and root cert to your host cert:
cat your_host.crt chain.crt root.crt > your_host-chained.crt
You would then set in your `jupyterhub_config.py` file the `ssl_key` and
`ssl_cert` as follows:
c.JupyterHub.ssl_cert = your_host-chained.crt
c.JupyterHub.ssl_key = your_host.key
#### Example
Your certificate provider gives you the following files: `example_host.crt`,
`Entrust_L1Kroot.txt` and `Entrust_Root.txt`.
Concatenate the files appending the chain cert and root cert to your host cert:
cat example_host.crt Entrust_L1Kroot.txt Entrust_Root.txt > example_host-chained.crt
You would then use the `example_host-chained.crt` as the value for
JupyterHub's `ssl_cert`. You may pass this value as a command line option
when starting JupyterHub or more conveniently set the `ssl_cert` variable in
JupyterHub's configuration file, `jupyterhub_config.py`. In `jupyterhub_config.py`,
set:
c.JupyterHub.ssl_cert = /path/to/example_host-chained.crt
c.JupyterHub.ssl_key = /path/to/example_host.key
where `ssl_cert` is the chained certificate (`example_host-chained.crt`) and `ssl_key` is your private key.
Then restart JupyterHub.
See also [JupyterHub SSL encryption](getting-started.md#ssl-encryption).
### Install JupyterHub without a network connection
Both conda and pip can be used without a network connection. You can make your
own repository (directory) of conda packages and/or wheels, and then install
from there instead of the internet.
For instance, you can install JupyterHub with pip and configurable-http-proxy
with npmbox:
pip wheel jupyterhub
npmbox configurable-http-proxy
### I want access to the whole filesystem, but still default users to their home directory
Setting the following in `jupyterhub_config.py` will configure access to
the entire filesystem and set the default to the user's home directory.
c.Spawner.notebook_dir = '/'
c.Spawner.default_url = '/home/%U' # %U will be replaced with the username
### How do I increase the number of pySpark executors on YARN?
From the command line, pySpark executors can be configured using a command
similar to this one:
pyspark --total-executor-cores 2 --executor-memory 1G
[Cloudera documentation for configuring spark on YARN applications](https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_running_spark_on_yarn.html#spark_on_yarn_config_apps)
provides additional information. The [pySpark configuration documentation](https://spark.apache.org/docs/0.9.0/configuration.html)
is also helpful for programmatic configuration examples.
### How do I use JupyterLab's prerelease version with JupyterHub?
While JupyterLab is still under active development, we have had users
ask about how to try out JupyterLab with JupyterHub.
You need to install and enable the JupyterLab extension system-wide,
then you can change the default URL to `/lab`.
For instance:
pip install jupyterlab
jupyter serverextension enable --py jupyterlab --sys-prefix
The important thing is that jupyterlab is installed and enabled in the
single-user notebook server environment. For system users, this means
system-wide, as indicated above. For Docker containers, it means inside
the single-user docker image, etc.
In `jupyterhub_config.py`, configure the Spawner to tell the single-user
notebook servers to default to JupyterLab:
c.Spawner.default_url = '/lab'
### How do I set up JupyterHub for a workshop (when users are not known ahead of time)?
1. Set up JupyterHub using OAuthenticator for GitHub authentication
2. Configure whitelist to be an empty list in `jupyterhub_config.py`
3. Configure admin list to have workshop leaders be listed with administrator privileges.
Users will need a GitHub account to login and be authenticated by the Hub.
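A configuration sketch of the above (the authenticator class path, and the
admin user name, are assumptions to adapt to your deployment; OAuth client
settings are omitted):
```python
# jupyterhub_config.py (sketch)
c.JupyterHub.authenticator_class = 'oauthenticator.GitHubOAuthenticator'
c.Authenticator.whitelist = set()                  # empty: any authenticated GitHub user may log in
c.Authenticator.admin_users = {'workshop-leader'}  # hypothetical workshop leader account
```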
## Troubleshooting commands
The following commands provide additional detail about installed packages,
versions, and system information that may be helpful when troubleshooting
a JupyterHub deployment. The commands are:
- System and deployment information
```bash
jupyter troubleshooting
```
- Kernel information
```bash
jupyter kernelspec list
```
- Debug logs when running JupyterHub
```bash
jupyterhub --debug
```
## Toree integration with HDFS rack awareness script
The Apache Toree kernel will hit an issue when running with JupyterHub if the standard HDFS
rack awareness script is used. This will materialize in the logs as a repeated WARN:
```bash
16/11/29 16:24:20 WARN ScriptBasedMapping: Exception running /etc/hadoop/conf/topology_script.py some.ip.address
ExitCodeException exitCode=1: File "/etc/hadoop/conf/topology_script.py", line 63
print rack
^
SyntaxError: Missing parentheses in call to 'print'
at `org.apache.hadoop.util.Shell.runCommand(Shell.java:576)`
```
In order to resolve this issue, there are two potential options.
1. Update HDFS core-site.xml, so the parameter "net.topology.script.file.name" points to a custom
script (e.g. /etc/hadoop/conf/custom_topology_script.py). Copy the original script and change the first line to point
to a Python 2 installation (e.g. /usr/bin/python).
2. In spark-env.sh add a Python 2 installation to your path (e.g. export PATH=/opt/anaconda2/bin:$PATH).

docs/source/upgrading.md

@@ -0,0 +1,106 @@
# Upgrading JupyterHub and its database
From time to time, you may wish to upgrade JupyterHub to take advantage
of new releases. Much of this process is automated using scripts,
such as those generated by alembic for database upgrades. Before upgrading a
JupyterHub deployment, it's critical to backup your data and configurations
before shutting down the JupyterHub process and server.
## Databases: SQLite (default) or RDBMS (PostgreSQL, MySQL)
The default database for JupyterHub is a [SQLite](https://sqlite.org) database.
We have chosen SQLite as JupyterHub's default for its lightweight simplicity
in certain uses such as testing, small deployments and workshops.
When running a long term deployment or a production system, we recommend using
a traditional RDBMS database, such as [PostgreSQL](https://www.postgresql.org)
or [MySQL](https://www.mysql.com), that supports the SQL `ALTER TABLE`
statement.
For production systems, SQLite has some disadvantages when used with JupyterHub:
- `upgrade-db` may not work, and you may need to start with a fresh database
- `downgrade-db` **will not** work if you want to rollback to an earlier
version, so backup the `jupyterhub.sqlite` file before upgrading
The sqlite documentation provides a helpful page about [when to use sqlite and
where traditional RDBMS may be a better choice](https://sqlite.org/whentouse.html).
## The upgrade process
Five fundamental steps are needed when upgrading JupyterHub and its
database:
1. Backup JupyterHub database
2. Backup JupyterHub configuration file
3. Shutdown the Hub
4. Upgrade JupyterHub
5. Upgrade the database by running `jupyterhub upgrade-db`
Let's take a closer look at each step in the upgrade process as well as some
additional information about JupyterHub databases.
### Backup JupyterHub database
To prevent unintended loss of data or configuration information, you should
back up the JupyterHub database (the default SQLite database or a RDBMS
database using PostgreSQL, MySQL, or others supported by SQLAlchemy):
- If using the default SQLite database, back up the `jupyterhub.sqlite`
database.
- If using an RDBMS database such as PostgreSQL, MySQL, or others supported by
SQLAlchemy, back up the JupyterHub database.
Losing the Hub database is often not a big deal. Information that resides only
in the Hub database includes:
- active login tokens (user cookies, service tokens)
- users added via GitHub UI, instead of config files
- info about running servers
If the following conditions are true, you should be fine clearing the Hub
database and starting over:
- users specified in config file
- user servers are stopped during upgrade
- don't mind causing users to login again after upgrade
### Backup JupyterHub configuration file
Additionally, back up your configuration file, `jupyterhub_config.py`, to
a secure location.
### Shutdown JupyterHub
Prior to shutting down JupyterHub, you should notify the Hub users of the
scheduled downtime. This gives users the opportunity to finish any outstanding
work in progress.
Next, shutdown the JupyterHub service.
### Upgrade JupyterHub
Follow directions that correspond to your package manager, `pip` or `conda`,
for the new JupyterHub release. These directions will guide you to the
specific command. In general, `pip install -U jupyterhub` or
`conda upgrade jupyterhub`.
### Upgrade JupyterHub databases
To run the upgrade process for JupyterHub databases, enter:
```
jupyterhub upgrade-db
```
## Upgrade checklist
1. Backup JupyterHub database:
- `jupyterhub.sqlite` when using the default sqlite database
- Your JupyterHub database when using an RDBMS
2. Backup JupyterHub configuration file: `jupyterhub_config.py`
3. Shutdown the Hub
4. Upgrade JupyterHub
- `pip install -U jupyterhub` when using `pip`
- `conda upgrade jupyterhub` when using `conda`
5. Upgrade the database by running `jupyterhub upgrade-db`


@@ -1,63 +1,80 @@
# Web Security in JupyterHub
JupyterHub is designed to be a simple multi-user server for modestly sized
groups of semi-trusted users. While the design reflects serving semi-trusted
users, JupyterHub is not necessarily unsuitable for serving untrusted users.
Using JupyterHub with untrusted users does mean more work and much care is
required to secure a Hub against untrusted users, with extra caution on
protecting users from each other as the Hub is serving untrusted users.
One aspect of JupyterHub's design simplicity for semi-trusted users is that
the Hub and single-user servers are placed in a single domain, behind a
[proxy][configurable-http-proxy]. As a result, if the Hub is serving untrusted
users, many of the web's cross-site protections are not applied between
single-user servers and the Hub, or between single-user servers and each
other, since browsers see the whole thing (proxy, Hub, and single user
servers) as a single website.
To protect users from each other, a user must never be able to write arbitrary
HTML and serve it to another user on the Hub's domain. JupyterHub's
authentication setup prevents this because only the owner of a given
single-user server is allowed to view user-authored pages served by their
server. To protect all users from each other, JupyterHub administrators must
ensure that:
* A user does not have permission to modify their single-user server:
  - A user may not install new packages in the Python environment that runs
    their server.
  - If the PATH is used to resolve the single-user executable (instead of an
    absolute path), a user may not create new files in any PATH directory
    that precedes the directory containing jupyterhub-singleuser.
  - A user may not modify environment variables (e.g. PATH, PYTHONPATH) for
    their single-user server.
* A user may not modify the configuration of the notebook server
  (the ~/.jupyter or JUPYTER_CONFIG_DIR directory).
If any additional services are run on the same domain as the Hub, the services
must never display user-authored HTML that is neither sanitized nor sandboxed
(e.g. IFramed) to any user that lacks authentication as the author of a file.
## Mitigations
There are two main configuration options provided by JupyterHub to mitigate
these issues:
### Subdomains
JupyterHub 0.5 adds the ability to run single-user servers on their own
subdomains, which means the cross-origin protections between servers have the
desired effect, and user servers and the Hub are protected from each other. A
user's server will be at `username.jupyter.mydomain.com`, etc. This requires
all user subdomains to point to the same address, which is most easily
accomplished with wildcard DNS. Since this spreads the service across multiple
domains, you will need wildcard SSL, as well. Unfortunately, for many
institutional domains, wildcard DNS and SSL are not available, but if you do
plan to serve untrusted users, enabling subdomains is highly encouraged, as it
resolves all of the cross-site issues.
### Disabling user config
If subdomains are not available or not desirable, 0.5 also adds an option
`Spawner.disable_user_config`, which you can set to prevent the user-owned
configuration files from being loaded. This leaves only package installation
and PATHs as things the admin must enforce.
For most Spawners, PATH is not something users can influence, but care should
be taken to ensure that the Spawner does *not* evaluate shell configuration
files prior to launching the server.
Package isolation is most easily handled by running the single-user server in
a virtualenv with disabled system-site-packages.
## Extra notes
It is important to note that the control over the environment only affects the
single-user server, and not the environment(s) in which the user's kernel(s)
may run. Installing additional packages in the kernel environment does not
pose additional risk to the web application's security.
[configurable-http-proxy]: https://github.com/jupyterhub/configurable-http-proxy


@@ -0,0 +1,41 @@
# `cull-idle` Example
The `cull_idle_servers.py` file provides a script to cull and shut down idle
single-user notebook servers. This script is used when `cull-idle` is run as
a Service or when it is run manually as a standalone script.
## Configure `cull-idle` to run as a Hub-Managed Service
In `jupyterhub_config.py`, add the following dictionary for the `cull-idle`
Service to the `c.JupyterHub.services` list:
```python
c.JupyterHub.services = [
    {
        'name': 'cull-idle',
        'admin': True,
        'command': 'python cull_idle_servers.py --timeout=3600'.split(),
    }
]
```
where:
- `'admin': True` indicates that the Service has 'admin' permissions, and
- `'command'` indicates that the Service will be managed by the Hub.
## Run `cull-idle` manually as a standalone script
This will run `cull-idle` manually. `cull-idle` can be run as a standalone
script anywhere with access to the Hub, and will periodically check for idle
servers and shut them down via the Hub's REST API. In order to shut down the
servers, the token given to cull-idle must have admin privileges.
Generate an API token and store it in the `JUPYTERHUB_API_TOKEN` environment
variable. Run `cull_idle_servers.py` manually.
```bash
export JUPYTERHUB_API_TOKEN=`jupyterhub token`
python cull_idle_servers.py [--timeout=900] [--url=http://127.0.0.1:8081/hub/api]
```


@@ -9,10 +9,21 @@ so cull timeout should be greater than the sum of:
- single-user websocket ping interval (default: 30s) - single-user websocket ping interval (default: 30s)
- JupyterHub.last_activity_interval (default: 5 minutes) - JupyterHub.last_activity_interval (default: 5 minutes)
Generate an API token and store it in `JPY_API_TOKEN`: You can run this as a service managed by JupyterHub with this in your config::
export JPY_API_TOKEN=`jupyterhub token`
python cull_idle_servers.py [--timeout=900] [--url=http://127.0.0.1:8081/hub] c.JupyterHub.services = [
{
'name': 'cull-idle',
'admin': True,
'command': 'python cull_idle_servers.py --timeout=3600'.split(),
}
]
Or run it manually by generating an API token and storing it in `JUPYTERHUB_API_TOKEN`:
export JUPYTERHUB_API_TOKEN=`jupyterhub token`
python cull_idle_servers.py [--timeout=900] [--url=http://127.0.0.1:8081/hub/api]
""" """
import datetime import datetime
@@ -34,7 +45,7 @@ def cull_idle(url, api_token, timeout):
auth_header = { auth_header = {
'Authorization': 'token %s' % api_token 'Authorization': 'token %s' % api_token
} }
req = HTTPRequest(url=url + '/api/users', req = HTTPRequest(url=url + '/users',
headers=auth_header, headers=auth_header,
) )
now = datetime.datetime.utcnow() now = datetime.datetime.utcnow()
@@ -47,7 +58,7 @@ def cull_idle(url, api_token, timeout):
last_activity = parse_date(user['last_activity']) last_activity = parse_date(user['last_activity'])
if user['server'] and last_activity < cull_limit: if user['server'] and last_activity < cull_limit:
app_log.info("Culling %s (inactive since %s)", user['name'], last_activity) app_log.info("Culling %s (inactive since %s)", user['name'], last_activity)
req = HTTPRequest(url=url + '/api/users/%s/server' % user['name'], req = HTTPRequest(url=url + '/users/%s/server' % user['name'],
method='DELETE', method='DELETE',
headers=auth_header, headers=auth_header,
) )
@@ -60,7 +71,7 @@ def cull_idle(url, api_token, timeout):
app_log.debug("Finished culling %s", name) app_log.debug("Finished culling %s", name)
if __name__ == '__main__': if __name__ == '__main__':
define('url', default='http://127.0.0.1:8081/hub', help="The JupyterHub API URL") define('url', default=os.environ.get('JUPYTERHUB_API_URL'), help="The JupyterHub API URL")
define('timeout', default=600, help="The idle timeout (in seconds)") define('timeout', default=600, help="The idle timeout (in seconds)")
define('cull_every', default=0, help="The interval (in seconds) for checking for idle servers to cull") define('cull_every', default=0, help="The interval (in seconds) for checking for idle servers to cull")
@@ -68,7 +79,7 @@ if __name__ == '__main__':
if not options.cull_every: if not options.cull_every:
options.cull_every = options.timeout // 2 options.cull_every = options.timeout // 2
api_token = os.environ['JPY_API_TOKEN'] api_token = os.environ['JUPYTERHUB_API_TOKEN']
loop = IOLoop.current() loop = IOLoop.current()
cull = lambda : cull_idle(options.url, api_token, options.timeout) cull = lambda : cull_idle(options.url, api_token, options.timeout)


@@ -0,0 +1,8 @@
# run cull-idle as a service
c.JupyterHub.services = [
{
'name': 'cull-idle',
'admin': True,
'command': 'python cull_idle_servers.py --timeout=3600'.split(),
}
]


@@ -0,0 +1,33 @@
# Authenticating a flask service with JupyterHub
Uses `jupyterhub.services.auth.HubAuth` to authenticate requests with the Hub in a [flask][] application.
## Run
1. Launch JupyterHub and the `whoami` service with
jupyterhub --ip=127.0.0.1
2. Visit http://127.0.0.1:8000/services/whoami
After logging in with your local-system credentials, you should see a JSON dump of your user info:
```json
{
  "admin": false,
  "last_activity": "2016-05-27T14:05:18.016372",
  "name": "queequeg",
  "pending": null,
  "server": "/user/queequeg"
}
```
This relies on the Hub starting the whoami service, via config (see [jupyterhub_config.py](./jupyterhub_config.py)).
A similar service could be run externally, by setting the JupyterHub service environment variables:
JUPYTERHUB_API_TOKEN
JUPYTERHUB_SERVICE_PREFIX
[flask]: http://flask.pocoo.org


@@ -0,0 +1,13 @@
import os
import sys
c.JupyterHub.services = [
{
'name': 'whoami',
'url': 'http://127.0.0.1:10101',
'command': ['flask', 'run', '--port=10101'],
'environment': {
'FLASK_APP': 'whoami-flask.py',
}
}
]


@@ -0,0 +1,4 @@
export CONFIGPROXY_AUTH_TOKEN=`openssl rand -hex 32`
# start JupyterHub
jupyterhub --ip=127.0.0.1


@@ -0,0 +1,50 @@
#!/usr/bin/env python3
"""
whoami service authentication with the Hub
"""
from functools import wraps
import json
import os
from urllib.parse import quote

from flask import Flask, redirect, request, Response

from jupyterhub.services.auth import HubAuth

prefix = os.environ.get('JUPYTERHUB_SERVICE_PREFIX', '/')

auth = HubAuth(
    api_token=os.environ['JUPYTERHUB_API_TOKEN'],
    cookie_cache_max_age=60,
)

app = Flask(__name__)


def authenticated(f):
    """Decorator for authenticating with the Hub"""
    @wraps(f)
    def decorated(*args, **kwargs):
        cookie = request.cookies.get(auth.cookie_name)
        if cookie:
            user = auth.user_for_cookie(cookie)
        else:
            user = None
        if user:
            return f(user, *args, **kwargs)
        else:
            # redirect to login url on failed auth
            return redirect(auth.login_url + '?next=%s' % quote(request.path))
    return decorated


@app.route(prefix + '/')
@authenticated
def whoami(user):
    return Response(
        json.dumps(user, indent=1, sort_keys=True),
        mimetype='application/json',
    )



@@ -0,0 +1,42 @@
"""An example service authenticating with the Hub.
This example service serves `/services/whoami/`,
authenticated with the Hub,
showing the user their own info.
"""
from getpass import getuser
import json
import os
from urllib.parse import urlparse
from tornado.ioloop import IOLoop
from tornado.httpserver import HTTPServer
from tornado.web import RequestHandler, Application, authenticated
from jupyterhub.services.auth import HubAuthenticated
class WhoAmIHandler(HubAuthenticated, RequestHandler):
hub_users = {getuser()} # the users allowed to access this service
@authenticated
def get(self):
user_model = self.get_current_user()
self.set_header('content-type', 'application/json')
self.write(json.dumps(user_model, indent=1, sort_keys=True))
def main():
app = Application([
(os.environ['JUPYTERHUB_SERVICE_PREFIX'] + '/?', WhoAmIHandler),
(r'.*', WhoAmIHandler),
], login_url='/hub/login')
http_server = HTTPServer(app)
url = urlparse(os.environ['JUPYTERHUB_SERVICE_URL'])
http_server.listen(url.port, url.hostname)
IOLoop.current().start()
if __name__ == '__main__':
main()


@@ -0,0 +1,32 @@
# Authenticating a service with JupyterHub
Uses `jupyterhub.services.auth.HubAuthenticated` to authenticate requests with the Hub.
## Run
1. Launch JupyterHub and the `whoami` service with
jupyterhub --ip=127.0.0.1
2. Visit http://127.0.0.1:8000/services/whoami
After logging in with your local-system credentials, you should see a JSON dump of your user info:
```json
{
  "admin": false,
  "last_activity": "2016-05-27T14:05:18.016372",
  "name": "queequeg",
  "pending": null,
  "server": "/user/queequeg"
}
```
This relies on the Hub starting the whoami service, via config (see [jupyterhub_config.py](./jupyterhub_config.py)).
A similar service could be run externally, by setting the JupyterHub service environment variables:
JUPYTERHUB_API_TOKEN
JUPYTERHUB_SERVICE_PREFIX
or instantiating and configuring a HubAuth object yourself, and attaching it as `self.hub_auth` in your HubAuthenticated handlers.


@@ -0,0 +1,10 @@
import os
import sys
c.JupyterHub.services = [
{
'name': 'whoami',
'url': 'http://127.0.0.1:10101',
'command': [sys.executable, './whoami.py'],
}
]

Binary file not shown.

After

Width:  |  Height:  |  Size: 35 KiB

View File

@@ -0,0 +1,40 @@
"""An example service authenticating with the Hub.
This serves `/services/whoami/`, authenticated with the Hub, showing the user their own info.
"""
from getpass import getuser
import json
import os
from urllib.parse import urlparse
from tornado.ioloop import IOLoop
from tornado.httpserver import HTTPServer
from tornado.web import RequestHandler, Application, authenticated
from jupyterhub.services.auth import HubAuthenticated
class WhoAmIHandler(HubAuthenticated, RequestHandler):
hub_users = {getuser()} # the users allowed to access me
@authenticated
def get(self):
user_model = self.get_current_user()
self.set_header('content-type', 'application/json')
self.write(json.dumps(user_model, indent=1, sort_keys=True))
def main():
app = Application([
(os.environ['JUPYTERHUB_SERVICE_PREFIX'] + '/?', WhoAmIHandler),
(r'.*', WhoAmIHandler),
], login_url='/hub/login')
http_server = HTTPServer(app)
url = urlparse(os.environ['JUPYTERHUB_SERVICE_URL'])
http_server.listen(url.port, url.hostname)
IOLoop.current().start()
if __name__ == '__main__':
main()

jupyterhub/alembic.ini

@@ -0,0 +1,66 @@
# A generic, single database configuration.
[alembic]
script_location = {alembic_dir}
sqlalchemy.url = {db_url}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; this defaults
# to jupyterhub/alembic/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
# version_locations = %(here)s/bar %(here)s/bat jupyterhub/alembic/versions
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S


@@ -0,0 +1 @@
This is the alembic configuration for JupyterHub database migrations.

jupyterhub/alembic/env.py

@@ -0,0 +1,70 @@
from __future__ import with_statement
from alembic import context
from sqlalchemy import engine_from_config, pool
from logging.config import fileConfig
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = None
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url, target_metadata=target_metadata, literal_binds=True)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online():
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
connectable = engine_from_config(
config.get_section(config.config_ini_section),
prefix='sqlalchemy.',
poolclass=pool.NullPool)
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata
)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()


@@ -0,0 +1,24 @@
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
def upgrade():
    ${upgrades if upgrades else "pass"}


def downgrade():
    ${downgrades if downgrades else "pass"}


@@ -0,0 +1,24 @@
"""base revision for 0.5
Revision ID: 19c0846f6344
Revises:
Create Date: 2016-04-11 16:05:34.873288
"""
# revision identifiers, used by Alembic.
revision = '19c0846f6344'
down_revision = None
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade():
    pass


def downgrade():
    pass


@@ -0,0 +1,25 @@
"""services
Revision ID: af4cbdb2d13c
Revises: eeb276e51423
Create Date: 2016-07-28 16:16:38.245348
"""
# revision identifiers, used by Alembic.
revision = 'af4cbdb2d13c'
down_revision = 'eeb276e51423'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade():
    op.add_column('api_tokens', sa.Column('service_id', sa.Integer))


def downgrade():
    # sqlite cannot downgrade because of limited ALTER TABLE support (no DROP COLUMN)
    op.drop_column('api_tokens', 'service_id')


@@ -0,0 +1,26 @@
"""auth_state
Adds auth_state column to Users table.
Revision ID: eeb276e51423
Revises: 19c0846f6344
Create Date: 2016-04-11 16:06:49.239831
"""
# revision identifiers, used by Alembic.
revision = 'eeb276e51423'
down_revision = '19c0846f6344'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
from jupyterhub.orm import JSONDict
def upgrade():
op.add_column('users', sa.Column('auth_state', JSONDict))
def downgrade():
# sqlite cannot downgrade because of limited ALTER TABLE support (no DROP COLUMN)
op.drop_column('users', 'auth_state')

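This migration stores a JSON blob per user through jupyterhub.orm.JSONDict. A minimal sketch of what such a JSON-serializing column type generally looks like, assuming a SQLAlchemy TypeDecorator over TEXT (illustrative only, not the actual orm.JSONDict source):

import json
from sqlalchemy.types import TypeDecorator, TEXT

class JSONDict(TypeDecorator):
    """Store a dict as a JSON-encoded TEXT column (illustrative sketch)."""
    impl = TEXT

    def process_bind_param(self, value, dialect):
        # serialize on the way into the database
        return json.dumps(value) if value is not None else None

    def process_result_value(self, value, dialect):
        # deserialize on the way out
        return json.loads(value) if value is not None else None

With a type like this, op.add_column('users', sa.Column('auth_state', JSONDict)) lets the Hub persist arbitrary per-user auth state as text.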
View File

@@ -1,11 +1,6 @@
 from .base import *
-from .auth import *
-from .hub import *
-from .proxy import *
-from .users import *
-
-from . import auth, hub, proxy, users
+from . import auth, hub, proxy, users, groups, services
 
 default_handlers = []
-for mod in (auth, hub, proxy, users):
+for mod in (auth, hub, proxy, users, groups, services):
     default_handlers.extend(mod.default_handlers)

View File

@@ -87,9 +87,11 @@ class APIHandler(BaseHandler):
         }))
 
     def user_model(self, user):
+        """Get the JSON model for a User object"""
         model = {
             'name': user.name,
             'admin': user.admin,
+            'groups': [ g.name for g in user.groups ],
             'server': user.url if user.running else None,
             'pending': None,
             'last_activity': user.last_activity.isoformat(),
@@ -99,24 +101,57 @@ class APIHandler(BaseHandler):
         elif user.stop_pending:
             model['pending'] = 'stop'
         return model
 
-    _model_types = {
+    def group_model(self, group):
+        """Get the JSON model for a Group object"""
+        return {
+            'name': group.name,
+            'users': [ u.name for u in group.users ]
+        }
+
+    _user_model_types = {
         'name': str,
         'admin': bool,
+        'groups': list,
     }
 
-    def _check_user_model(self, model):
+    _group_model_types = {
+        'name': str,
+        'users': list,
+    }
+
+    def _check_model(self, model, model_types, name):
+        """Check a model provided by a REST API request
+
+        Args:
+            model (dict): user-provided model
+            model_types (dict): dict of key:type used to validate types and keys
+            name (str): name of the model, used in error messages
+        """
         if not isinstance(model, dict):
             raise web.HTTPError(400, "Invalid JSON data: %r" % model)
-        if not set(model).issubset(set(self._model_types)):
+        if not set(model).issubset(set(model_types)):
             raise web.HTTPError(400, "Invalid JSON keys: %r" % model)
         for key, value in model.items():
-            if not isinstance(value, self._model_types[key]):
-                raise web.HTTPError(400, "user.%s must be %s, not: %r" % (
-                    key, self._model_types[key], type(value)
+            if not isinstance(value, model_types[key]):
+                raise web.HTTPError(400, "%s.%s must be %s, not: %r" % (
+                    name, key, model_types[key], type(value)
                 ))
 
+    def _check_user_model(self, model):
+        """Check a request-provided user model from a REST API"""
+        self._check_model(model, self._user_model_types, 'user')
+        for username in model.get('users', []):
+            if not isinstance(username, str):
+                raise web.HTTPError(400, ("usernames must be str, not %r", type(username)))
+
+    def _check_group_model(self, model):
+        """Check a request-provided group model from a REST API"""
+        self._check_model(model, self._group_model_types, 'group')
+        for groupname in model.get('groups', []):
+            if not isinstance(groupname, str):
+                raise web.HTTPError(400, ("group names must be str, not %r", type(groupname)))
+
     def options(self, *args, **kwargs):
         self.set_header('Access-Control-Allow-Headers', 'accept, content-type')
         self.finish()

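The refactor above replaces the user-specific check with a generic _check_model(model, model_types, name). A standalone sketch of the same validation logic and what it accepts or rejects; the type dict mirrors _user_model_types from the handler:

from tornado import web

def check_model(model, model_types, name):
    # reject non-dicts, unknown keys, and values of the wrong type
    if not isinstance(model, dict):
        raise web.HTTPError(400, "Invalid JSON data: %r" % model)
    if not set(model).issubset(set(model_types)):
        raise web.HTTPError(400, "Invalid JSON keys: %r" % model)
    for key, value in model.items():
        if not isinstance(value, model_types[key]):
            raise web.HTTPError(400, "%s.%s must be %s, not: %r" % (
                name, key, model_types[key], type(value)))

user_types = {'name': str, 'admin': bool, 'groups': list}
check_model({'name': 'alice', 'groups': ['team-a']}, user_types, 'user')   # passes
# check_model({'name': 'alice', 'admin': 'yes'}, user_types, 'user')
# -> HTTPError 400: user.admin must be <class 'bool'>, not: <class 'str'>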
View File

@@ -0,0 +1,136 @@
"""Group handlers"""
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
import json
from tornado import gen, web
from .. import orm
from ..utils import admin_only
from .base import APIHandler
class _GroupAPIHandler(APIHandler):
def _usernames_to_users(self, usernames):
"""Turn a list of usernames into user objects"""
users = []
for username in usernames:
username = self.authenticator.normalize_username(username)
user = self.find_user(username)
if user is None:
raise web.HTTPError(400, "No such user: %s" % username)
users.append(user.orm_user)
return users
def find_group(self, name):
"""Find and return a group by name.
Raise 404 if not found.
"""
group = orm.Group.find(self.db, name=name)
if group is None:
raise web.HTTPError(404, "No such group: %s", name)
return group
class GroupListAPIHandler(_GroupAPIHandler):
@admin_only
def get(self):
"""List groups"""
data = [ self.group_model(g) for g in self.db.query(orm.Group) ]
self.write(json.dumps(data))
class GroupAPIHandler(_GroupAPIHandler):
"""View and modify groups by name"""
@admin_only
def get(self, name):
group = self.find_group(name)
self.write(json.dumps(self.group_model(group)))
@admin_only
@gen.coroutine
def post(self, name):
"""POST creates a group by name"""
model = self.get_json_body()
if model is None:
model = {}
else:
self._check_group_model(model)
existing = orm.Group.find(self.db, name=name)
if existing is not None:
raise web.HTTPError(400, "Group %s already exists" % name)
usernames = model.get('users', [])
# check that users exist
users = self._usernames_to_users(usernames)
# create the group
self.log.info("Creating new group %s with %i users",
name, len(users),
)
self.log.debug("Users: %s", usernames)
group = orm.Group(name=name, users=users)
self.db.add(group)
self.db.commit()
self.write(json.dumps(self.group_model(group)))
self.set_status(201)
@admin_only
def delete(self, name):
"""Delete a group by name"""
group = self.find_group(name)
self.log.info("Deleting group %s", name)
self.db.delete(group)
self.db.commit()
self.set_status(204)
class GroupUsersAPIHandler(_GroupAPIHandler):
"""Modify a group's user list"""
@admin_only
def post(self, name):
"""POST adds users to a group"""
group = self.find_group(name)
data = self.get_json_body()
self._check_group_model(data)
if 'users' not in data:
raise web.HTTPError(400, "Must specify users to add")
self.log.info("Adding %i users to group %s", len(data['users']), name)
self.log.debug("Adding: %s", data['users'])
for user in self._usernames_to_users(data['users']):
if user not in group.users:
group.users.append(user)
else:
self.log.warning("User %s already in group %s", user.name, name)
self.db.commit()
self.write(json.dumps(self.group_model(group)))
@gen.coroutine
@admin_only
def delete(self, name):
"""DELETE removes users from a group"""
group = self.find_group(name)
data = self.get_json_body()
self._check_group_model(data)
if 'users' not in data:
raise web.HTTPError(400, "Must specify users to delete")
self.log.info("Removing %i users from group %s", len(data['users']), name)
self.log.debug("Removing: %s", data['users'])
for user in self._usernames_to_users(data['users']):
if user in group.users:
group.users.remove(user)
else:
self.log.warning("User %s already not in group %s", user.name, name)
self.db.commit()
self.write(json.dumps(self.group_model(group)))
default_handlers = [
(r"/api/groups", GroupListAPIHandler),
(r"/api/groups/([^/]+)", GroupAPIHandler),
(r"/api/groups/([^/]+)/users", GroupUsersAPIHandler),
]

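The new group endpoints can be exercised over the REST API. A hedged usage sketch with the requests library; the Hub API URL and the admin token below are placeholders, not values from this diff:

import json
import requests

api_url = 'http://127.0.0.1:8081/hub/api'                  # assumed default Hub API location
headers = {'Authorization': 'token %s' % 'ADMIN_TOKEN'}    # placeholder admin API token

# create a group with initial members (POST /api/groups/<name>)
r = requests.post(api_url + '/groups/researchers',
                  headers=headers,
                  data=json.dumps({'users': ['alice', 'bob']}))
r.raise_for_status()    # 201 on success
print(r.json())         # e.g. {'name': 'researchers', 'users': ['alice', 'bob']}

# add another user to the group (POST /api/groups/<name>/users)
requests.post(api_url + '/groups/researchers/users',
              headers=headers,
              data=json.dumps({'users': ['carol']}))

# list all groups (GET /api/groups)
print(requests.get(api_url + '/groups', headers=headers).json())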
View File

@@ -4,12 +4,15 @@
 # Distributed under the terms of the Modified BSD License.
 
 import json
+import sys
 
 from tornado import web
 from tornado.ioloop import IOLoop
 
 from ..utils import admin_only
 from .base import APIHandler
+from ..version import __version__
 
 
 class ShutdownAPIHandler(APIHandler):
@@ -49,6 +52,56 @@ class ShutdownAPIHandler(APIHandler):
         loop.add_callback(loop.stop)
 
 
+class RootAPIHandler(APIHandler):
+    def get(self):
+        """GET /api/ returns info about the Hub and its API.
+
+        It is not an authenticated endpoint.
+
+        For now, it just returns the version of JupyterHub itself.
+        """
+        data = {
+            'version': __version__,
+        }
+        self.finish(json.dumps(data))
+
+
+class InfoAPIHandler(APIHandler):
+    @admin_only
+    def get(self):
+        """GET /api/info returns detailed info about the Hub and its API.
+
+        It is not an authenticated endpoint.
+
+        For now, it just returns the version of JupyterHub itself.
+        """
+        def _class_info(typ):
+            """info about a class (Spawner or Authenticator)"""
+            info = {
+                'class': '{mod}.{name}'.format(mod=typ.__module__, name=typ.__name__),
+            }
+            pkg = typ.__module__.split('.')[0]
+            try:
+                version = sys.modules[pkg].__version__
+            except (KeyError, AttributeError):
+                version = 'unknown'
+            info['version'] = version
+            return info
+
+        data = {
+            'version': __version__,
+            'python': sys.version,
+            'sys_executable': sys.executable,
+            'spawner': _class_info(self.settings['spawner_class']),
+            'authenticator': _class_info(self.authenticator.__class__),
+        }
+        self.finish(json.dumps(data))
+
+
 default_handlers = [
     (r"/api/shutdown", ShutdownAPIHandler),
+    (r"/api/?", RootAPIHandler),
+    (r"/api/info", InfoAPIHandler),
 ]

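A quick sketch of the two new endpoints in use; the URL and token are assumptions for illustration, not values from this diff:

import requests

base = 'http://127.0.0.1:8081/hub/api'              # assumed default Hub API URL

# /api/ is unauthenticated and reports only the JupyterHub version
print(requests.get(base + '/').json())              # e.g. {'version': '0.7.1'}

# /api/info requires an admin token and adds Python, Spawner, and Authenticator details
headers = {'Authorization': 'token %s' % 'ADMIN_TOKEN'}   # placeholder admin token
print(requests.get(base + '/info', headers=headers).json())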
View File

@@ -28,7 +28,7 @@ class ProxyAPIHandler(APIHandler):
     @gen.coroutine
     def post(self):
         """POST checks the proxy to ensure"""
-        yield self.proxy.check_routes(self.users)
+        yield self.proxy.check_routes(self.users, self.services)
 
     @admin_only
@@ -59,7 +59,7 @@ class ProxyAPIHandler(APIHandler):
         self.proxy.auth_token = model['auth_token']
         self.db.commit()
         self.log.info("Updated proxy at %s", server.bind_url)
-        yield self.proxy.check_routes(self.users)
+        yield self.proxy.check_routes(self.users, self.services)

View File

@@ -0,0 +1,64 @@
"""Service handlers
Currently GET-only, no actions can be taken to modify services.
"""
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
import json
from tornado import web
from .. import orm
from ..utils import admin_only
from .base import APIHandler
def service_model(service):
"""Produce the model for a service"""
return {
'name': service.name,
'admin': service.admin,
'url': service.url,
'prefix': service.server.base_url if service.server else '',
'command': service.command,
'pid': service.proc.pid if service.proc else 0,
}
class ServiceListAPIHandler(APIHandler):
@admin_only
def get(self):
data = {name: service_model(service) for name, service in self.services.items()}
self.write(json.dumps(data))
def admin_or_self(method):
"""Decorator for restricting access to either the target service or admin"""
def decorated_method(self, name):
current = self.get_current_user()
if current is None:
raise web.HTTPError(403)
if not current.admin:
# not admin, maybe self
if not isinstance(current, orm.Service):
raise web.HTTPError(403)
if current.name != name:
raise web.HTTPError(403)
# raise 404 if not found
if name not in self.services:
raise web.HTTPError(404)
return method(self, name)
return decorated_method
class ServiceAPIHandler(APIHandler):
@admin_or_self
def get(self, name):
service = self.services[name]
self.write(json.dumps(service_model(service)))
default_handlers = [
(r"/api/services", ServiceListAPIHandler),
(r"/api/services/([^/]+)", ServiceAPIHandler),
]

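The service endpoints are read-only; listing requires admin, and admin_or_self additionally lets a service's own token read its own record. A hedged usage sketch (URL, token, and service name are placeholders, not values from this diff):

import requests

api_url = 'http://127.0.0.1:8081/hub/api'                  # assumed default Hub API URL
headers = {'Authorization': 'token %s' % 'ADMIN_TOKEN'}    # placeholder admin token

# list every configured service, keyed by name (GET /api/services)
print(requests.get(api_url + '/services', headers=headers).json())

# fetch a single service's model (GET /api/services/<name>)
r = requests.get(api_url + '/services/cull_idle', headers=headers)
print(r.json())   # e.g. {'name': 'cull_idle', 'admin': False, 'command': [...], 'pid': 1234, ...}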
View File

@@ -161,8 +161,9 @@ class UserServerAPIHandler(APIHandler):
     @admin_or_self
     def post(self, name):
         user = self.find_user(name)
-        if user.spawner:
-            state = yield user.spawner.poll()
+        if user.running:
+            # include notify, so that a server that died is noticed immediately
+            state = yield user.spawner.poll_and_notify()
             if state is None:
                 raise web.HTTPError(400, "%s's server is already running" % name)
@@ -180,7 +181,8 @@ class UserServerAPIHandler(APIHandler):
             return
         if not user.running:
             raise web.HTTPError(400, "%s's server is not running" % name)
-        status = yield user.spawner.poll()
+        # include notify, so that a server that died is noticed immediately
+        status = yield user.spawner.poll_and_notify()
         if status is not None:
             raise web.HTTPError(400, "%s's server is not running" % name)
         yield self.stop_single_user(user)

View File

@@ -8,13 +8,12 @@ import atexit
import binascii import binascii
import logging import logging
import os import os
import shutil
import signal import signal
import socket import socket
import sys import sys
import threading import threading
import statsd
from datetime import datetime from datetime import datetime
from distutils.version import LooseVersion as V
from getpass import getuser from getpass import getuser
from subprocess import Popen from subprocess import Popen
from urllib.parse import urlparse from urllib.parse import urlparse
@@ -46,8 +45,9 @@ here = os.path.dirname(__file__)
import jupyterhub import jupyterhub
from . import handlers, apihandlers from . import handlers, apihandlers
from .handlers.static import CacheControlStaticFilesHandler, LogoHandler from .handlers.static import CacheControlStaticFilesHandler, LogoHandler
from .services.service import Service
from . import orm from . import dbutil, orm
from .user import User, UserDict from .user import User, UserDict
from ._data import DATA_FILES_PATH from ._data import DATA_FILES_PATH
from .log import CoroutineLogFormatter, log_request from .log import CoroutineLogFormatter, log_request
@@ -95,7 +95,7 @@ flags = {
"disable persisting state database to disk" "disable persisting state database to disk"
), ),
'no-ssl': ({'JupyterHub': {'confirm_no_ssl': True}}, 'no-ssl': ({'JupyterHub': {'confirm_no_ssl': True}},
"Allow JupyterHub to run without SSL (SSL termination should be happening elsewhere)." "[DEPRECATED in 0.7: does nothing]"
), ),
} }
@@ -104,6 +104,7 @@ SECRET_BYTES = 2048 # the number of bytes to use when generating new secrets
class NewToken(Application): class NewToken(Application):
"""Generate and print a new API token""" """Generate and print a new API token"""
name = 'jupyterhub-token' name = 'jupyterhub-token'
version = jupyterhub.__version__
description = """Generate and return new API token for a user. description = """Generate and return new API token for a user.
Usage: Usage:
@@ -143,6 +144,46 @@ class NewToken(Application):
token = user.new_api_token() token = user.new_api_token()
print(token) print(token)
class UpgradeDB(Application):
"""Upgrade the JupyterHub database schema."""
name = 'jupyterhub-upgrade-db'
version = jupyterhub.__version__
description = """Upgrade the JupyterHub database to the current schema.
Usage:
jupyterhub upgrade-db
"""
aliases = common_aliases
classes = []
def _backup_db_file(self, db_file):
"""Backup a database file"""
if not os.path.exists(db_file):
return
timestamp = datetime.now().strftime('.%Y-%m-%d-%H%M%S')
backup_db_file = db_file + timestamp
for i in range(1, 10):
if not os.path.exists(backup_db_file):
break
backup_db_file = '{}.{}.{}'.format(db_file, timestamp, i)
if os.path.exists(backup_db_file):
self.exit("backup db file already exists: %s" % backup_db_file)
self.log.info("Backing up %s => %s", db_file, backup_db_file)
shutil.copy(db_file, backup_db_file)
def start(self):
hub = JupyterHub(parent=self)
hub.load_config_file(hub.config_file)
if (hub.db_url.startswith('sqlite:///')):
db_file = hub.db_url.split(':///', 1)[1]
self._backup_db_file(db_file)
self.log.info("Upgrading %s", hub.db_url)
dbutil.upgrade(hub.db_url)
class JupyterHub(Application): class JupyterHub(Application):
"""An Application for starting a Multi-User Jupyter Notebook server.""" """An Application for starting a Multi-User Jupyter Notebook server."""
@@ -160,7 +201,7 @@ class JupyterHub(Application):
generate default config file: generate default config file:
jupyterhub --generate-config -f /etc/jupyterhub/jupyterhub.py jupyterhub --generate-config -f /etc/jupyterhub/jupyterhub_config.py
spawn the server on 10.0.1.2:443 with https: spawn the server on 10.0.1.2:443 with https:
@@ -171,7 +212,8 @@ class JupyterHub(Application):
flags = Dict(flags) flags = Dict(flags)
subcommands = { subcommands = {
'token': (NewToken, "Generate an API token for a user") 'token': (NewToken, "Generate an API token for a user"),
'upgrade-db': (UpgradeDB, "Upgrade your JupyterHub state database to the current version."),
} }
classes = List([ classes = List([
@@ -181,6 +223,17 @@ class JupyterHub(Application):
PAMAuthenticator, PAMAuthenticator,
]) ])
load_groups = Dict(List(Unicode()),
help="""Dict of 'group': ['usernames'] to load at startup.
This strictly *adds* groups and users to groups.
Loading one set of groups, then starting JupyterHub again with a different
set will not remove users or groups from previous launches.
That must be done through the API.
"""
).tag(config=True)
config_file = Unicode('jupyterhub_config.py', config_file = Unicode('jupyterhub_config.py',
help="The config file to load", help="The config file to load",
).tag(config=True) ).tag(config=True)
@@ -220,9 +273,7 @@ class JupyterHub(Application):
return [os.path.join(self.data_files_path, 'templates')] return [os.path.join(self.data_files_path, 'templates')]
confirm_no_ssl = Bool(False, confirm_no_ssl = Bool(False,
help="""Confirm that JupyterHub should be run without SSL. help="""DEPRECATED: does nothing"""
This is **NOT RECOMMENDED** unless SSL termination is being handled by another layer.
"""
).tag(config=True) ).tag(config=True)
ssl_key = Unicode('', ssl_key = Unicode('',
help="""Path to SSL key file for the public facing interface of the proxy help="""Path to SSL key file for the public facing interface of the proxy
@@ -260,6 +311,15 @@ class JupyterHub(Application):
# if not specified, assume https: You have to be really explicit about HTTP! # if not specified, assume https: You have to be really explicit about HTTP!
self.subdomain_host = 'https://' + new self.subdomain_host = 'https://' + new
domain = Unicode(
help="domain name, e.g. 'example.com' (excludes protocol, port)"
)
@default('domain')
def _domain_default(self):
if not self.subdomain_host:
return ''
return urlparse(self.subdomain_host).hostname
port = Integer(8000, port = Integer(8000,
help="The public facing port of the proxy" help="The public facing port of the proxy"
).tag(config=True) ).tag(config=True)
@@ -325,19 +385,18 @@ class JupyterHub(Application):
help="The ip for this process" help="The ip for this process"
).tag(config=True) ).tag(config=True)
hub_prefix = URLPrefix('/hub/', hub_prefix = URLPrefix('/hub/',
help="The prefix for the hub server. Must not be '/'" help="The prefix for the hub server. Always /base_url/hub/"
).tag(config=True) )
@default('hub_prefix') @default('hub_prefix')
def _hub_prefix_default(self): def _hub_prefix_default(self):
return url_path_join(self.base_url, '/hub/') return url_path_join(self.base_url, '/hub/')
@observe('hub_prefix') @observe('base_url')
def _hub_prefix_changed(self, name, old, new): def _update_hub_prefix(self, change):
if new == '/': """add base URL to hub prefix"""
raise TraitError("'/' is not a valid hub prefix") base_url = change['new']
if not new.startswith(self.base_url): self.hub_prefix = self._hub_prefix_default()
self.hub_prefix = url_path_join(self.base_url, new)
cookie_secret = Bytes( cookie_secret = Bytes(
help="""The cookie secret to use to encrypt cookies. help="""The cookie secret to use to encrypt cookies.
@@ -354,11 +413,53 @@ class JupyterHub(Application):
).tag(config=True) ).tag(config=True)
api_tokens = Dict(Unicode(), api_tokens = Dict(Unicode(),
help="""Dict of token:username to be loaded into the database. help="""PENDING DEPRECATION: consider using service_tokens
Allows ahead-of-time generation of API tokens for use by services. Dict of token:username to be loaded into the database.
Allows ahead-of-time generation of API tokens for use by externally managed services,
which authenticate as JupyterHub users.
Consider using service_tokens for general services that talk to the JupyterHub API.
""" """
).tag(config=True) ).tag(config=True)
@observe('api_tokens')
def _deprecate_api_tokens(self, change):
self.log.warning("JupyterHub.api_tokens is pending deprecation."
" Consider using JupyterHub.service_tokens."
" If you have a use case for services that identify as users,"
" let us know: https://github.com/jupyterhub/jupyterhub/issues"
)
service_tokens = Dict(Unicode(),
help="""Dict of token:servicename to be loaded into the database.
Allows ahead-of-time generation of API tokens for use by externally managed services.
"""
).tag(config=True)
services = List(Dict(),
help="""List of service specification dictionaries.
A service
For instance::
services = [
{
'name': 'cull_idle',
'command': ['/path/to/cull_idle_servers.py'],
},
{
'name': 'formgrader',
'url': 'http://127.0.0.1:1234',
'token': 'super-secret',
'environment':
}
]
"""
).tag(config=True)
_service_map = Dict()
authenticator_class = Type(PAMAuthenticator, Authenticator, authenticator_class = Type(PAMAuthenticator, Authenticator,
help="""Class for authenticating users. help="""Class for authenticating users.
@@ -462,7 +563,7 @@ class JupyterHub(Application):
).tag(config=True) ).tag(config=True)
statsd_host = Unicode( statsd_host = Unicode(
help="Host to send statds metrics to" help="Host to send statsd metrics to"
).tag(config=True) ).tag(config=True)
statsd_port = Integer( statsd_port = Integer(
@@ -508,21 +609,20 @@ class JupyterHub(Application):
help="Extra log handlers to set on JupyterHub logger", help="Extra log handlers to set on JupyterHub logger",
).tag(config=True) ).tag(config=True)
@property statsd = Any(allow_none=False, help="The statsd client, if any. A mock will be used if we aren't using statsd")
def statsd(self): @default('statsd')
if hasattr(self, '_statsd'): def _statsd(self):
return self._statsd
if self.statsd_host: if self.statsd_host:
self._statsd = statsd.StatsClient( import statsd
client = statsd.StatsClient(
self.statsd_host, self.statsd_host,
self.statsd_port, self.statsd_port,
self.statsd_prefix self.statsd_prefix
) )
return self._statsd return client
else: else:
# return an empty mock object! # return an empty mock object!
self._statsd = EmptyClass() return EmptyClass()
return self._statsd
def init_logging(self): def init_logging(self):
# This prevents double log messages because tornado use a root logger that # This prevents double log messages because tornado use a root logger that
@@ -708,6 +808,11 @@ class JupyterHub(Application):
self.log.debug("Database error was:", exc_info=True) self.log.debug("Database error was:", exc_info=True)
if self.db_url.startswith('sqlite:///'): if self.db_url.startswith('sqlite:///'):
self._check_db_path(self.db_url.split(':///', 1)[1]) self._check_db_path(self.db_url.split(':///', 1)[1])
self.log.critical('\n'.join([
"If you recently upgraded JupyterHub, try running",
" jupyterhub upgrade-db",
"to upgrade your JupyterHub database schema",
]))
self.exit(1) self.exit(1)
def init_hub(self): def init_hub(self):
@@ -802,45 +907,157 @@ class JupyterHub(Application):
# but changes to the whitelist can occur in the database, # but changes to the whitelist can occur in the database,
# and persist across sessions. # and persist across sessions.
for user in db.query(orm.User): for user in db.query(orm.User):
yield gen.maybe_future(self.authenticator.add_user(user)) try:
yield gen.maybe_future(self.authenticator.add_user(user))
except Exception:
# TODO: Review approach to synchronize whitelist with db
# known cause of the exception is a user who has already been removed from the system
# but the user still exists in the hub's user db
self.log.exception("Error adding user %r already in db", user.name)
db.commit() # can add_user touch the db? db.commit() # can add_user touch the db?
# The whitelist set and the users in the db are now the same. # The whitelist set and the users in the db are now the same.
# From this point on, any user changes should be done simultaneously # From this point on, any user changes should be done simultaneously
# to the whitelist set and user db, unless the whitelist is empty (all users allowed). # to the whitelist set and user db, unless the whitelist is empty (all users allowed).
def init_api_tokens(self): @gen.coroutine
"""Load predefined API tokens (for services) into database""" def init_groups(self):
"""Load predefined groups into the database"""
db = self.db db = self.db
for token, username in self.api_tokens.items(): for name, usernames in self.load_groups.items():
username = self.authenticator.normalize_username(username) group = orm.Group.find(db, name)
if not self.authenticator.check_whitelist(username): if group is None:
raise ValueError("Token username %r is not in whitelist" % username) group = orm.Group(name=name)
if not self.authenticator.validate_username(username): db.add(group)
raise ValueError("Token username %r is not valid" % username) for username in usernames:
orm_token = orm.APIToken.find(db, token) username = self.authenticator.normalize_username(username)
if orm_token is None: if not (yield gen.maybe_future(self.authenticator.check_whitelist(username))):
user = orm.User.find(db, username) raise ValueError("Username %r is not in whitelist" % username)
user_created = False user = orm.User.find(db, name=username)
if user is None: if user is None:
user_created = True if not self.authenticator.validate_username(username):
self.log.debug("Adding user %r to database", username) raise ValueError("Group username %r is not valid" % username)
user = orm.User(name=username) user = orm.User(name=username)
db.add(user) db.add(user)
group.users.append(user)
db.commit()
@gen.coroutine
def _add_tokens(self, token_dict, kind):
"""Add tokens for users or services to the database"""
if kind == 'user':
Class = orm.User
elif kind == 'service':
Class = orm.Service
else:
raise ValueError("kind must be user or service, not %r" % kind)
db = self.db
for token, name in token_dict.items():
if kind == 'user':
name = self.authenticator.normalize_username(name)
if not (yield gen.maybe_future(self.authenticator.check_whitelist(name))):
raise ValueError("Token name %r is not in whitelist" % name)
if not self.authenticator.validate_username(name):
raise ValueError("Token name %r is not valid" % name)
orm_token = orm.APIToken.find(db, token)
if orm_token is None:
obj = Class.find(db, name)
created = False
if obj is None:
created = True
self.log.debug("Adding %s %r to database", kind, name)
obj = Class(name=name)
db.add(obj)
db.commit() db.commit()
self.log.info("Adding API token for %s", username) self.log.info("Adding API token for %s: %s", kind, name)
try: try:
user.new_api_token(token) obj.new_api_token(token)
except Exception: except Exception:
if user_created: if created:
# don't allow bad tokens to create users # don't allow bad tokens to create users
db.delete(user) db.delete(obj)
db.commit() db.commit()
raise raise
else: else:
self.log.debug("Not duplicating token %s", orm_token) self.log.debug("Not duplicating token %s", orm_token)
db.commit() db.commit()
@gen.coroutine
def init_api_tokens(self):
"""Load predefined API tokens (for services) into database"""
yield self._add_tokens(self.service_tokens, kind='service')
yield self._add_tokens(self.api_tokens, kind='user')
def init_services(self):
self._service_map.clear()
if self.domain:
domain = 'services.' + self.domain
parsed = urlparse(self.subdomain_host)
host = '%s://services.%s' % (parsed.scheme, parsed.netloc)
else:
domain = host = ''
for spec in self.services:
if 'name' not in spec:
raise ValueError('service spec must have a name: %r' % spec)
name = spec['name']
# get/create orm
orm_service = orm.Service.find(self.db, name=name)
if orm_service is None:
# not found, create a new one
orm_service = orm.Service(name=name)
self.db.add(orm_service)
orm_service.admin = spec.get('admin', False)
self.db.commit()
service = Service(parent=self,
base_url=self.base_url,
db=self.db, orm=orm_service,
domain=domain, host=host,
hub_api_url=self.hub.api_url,
)
traits = service.traits(input=True)
for key, value in spec.items():
if key not in traits:
raise AttributeError("No such service field: %s" % key)
setattr(service, key, value)
if service.url:
parsed = urlparse(service.url)
if parsed.port is not None:
port = parsed.port
elif parsed.scheme == 'http':
port = 80
elif parsed.scheme == 'https':
port = 443
server = service.orm.server = orm.Server(
proto=parsed.scheme,
ip=parsed.hostname,
port=port,
cookie_name='jupyterhub-services',
base_url=service.prefix,
)
self.db.add(server)
else:
service.orm.server = None
self._service_map[name] = service
if service.managed:
if not service.api_token:
# generate new token
service.api_token = service.orm.new_api_token()
else:
# ensure provided token is registered
self.service_tokens[service.api_token] = service.name
else:
self.service_tokens[service.api_token] = service.name
# delete services from db not in service config:
for service in self.db.query(orm.Service):
if service.name not in self._service_map:
self.db.delete(service)
self.db.commit()
@gen.coroutine @gen.coroutine
def init_spawners(self): def init_spawners(self):
db = self.db db = self.db
@@ -865,11 +1082,6 @@ class JupyterHub(Application):
for orm_user in db.query(orm.User): for orm_user in db.query(orm.User):
self.users[orm_user.id] = user = User(orm_user, self.tornado_settings) self.users[orm_user.id] = user = User(orm_user, self.tornado_settings)
if not user.state:
# without spawner state, server isn't valid
user.server = None
user_summaries.append(_user_summary(user))
continue
self.log.debug("Loading state for %s from db", user.name) self.log.debug("Loading state for %s from db", user.name)
spawner = user.spawner spawner = user.spawner
status = yield spawner.poll() status = yield spawner.poll()
@@ -904,6 +1116,7 @@ class JupyterHub(Application):
self.proxy.log = self.log self.proxy.log = self.log
self.proxy.public_server.ip = self.ip self.proxy.public_server.ip = self.ip
self.proxy.public_server.port = self.port self.proxy.public_server.port = self.port
self.proxy.public_server.base_url = self.base_url
self.proxy.api_server.ip = self.proxy_api_ip self.proxy.api_server.ip = self.proxy_api_ip
self.proxy.api_server.port = self.proxy_api_port self.proxy.api_server.port = self.proxy_api_port
self.proxy.api_server.base_url = '/api/routes/' self.proxy.api_server.base_url = '/api/routes/'
@@ -916,7 +1129,7 @@ class JupyterHub(Application):
if self.proxy.public_server.is_up() or self.proxy.api_server.is_up(): if self.proxy.public_server.is_up() or self.proxy.api_server.is_up():
# check for *authenticated* access to the proxy (auth token can change) # check for *authenticated* access to the proxy (auth token can change)
try: try:
yield self.proxy.get_routes() routes = yield self.proxy.get_routes()
except (HTTPError, OSError, socket.error) as e: except (HTTPError, OSError, socket.error) as e:
if isinstance(e, HTTPError) and e.code == 403: if isinstance(e, HTTPError) and e.code == 403:
msg = "Did CONFIGPROXY_AUTH_TOKEN change?" msg = "Did CONFIGPROXY_AUTH_TOKEN change?"
@@ -928,6 +1141,7 @@ class JupyterHub(Application):
return return
else: else:
self.log.info("Proxy already running at: %s", self.proxy.public_server.bind_url) self.log.info("Proxy already running at: %s", self.proxy.public_server.bind_url)
yield self.proxy.check_routes(self.users, self._service_map, routes)
self.proxy_process = None self.proxy_process = None
return return
@@ -939,6 +1153,7 @@ class JupyterHub(Application):
'--api-ip', self.proxy.api_server.ip, '--api-ip', self.proxy.api_server.ip,
'--api-port', str(self.proxy.api_server.port), '--api-port', str(self.proxy.api_server.port),
'--default-target', self.hub.server.host, '--default-target', self.hub.server.host,
'--error-target', url_path_join(self.hub.server.url, 'error'),
] ]
if self.subdomain_host: if self.subdomain_host:
cmd.append('--host-routing') cmd.append('--host-routing')
@@ -954,22 +1169,14 @@ class JupyterHub(Application):
'--statsd-port', str(self.statsd_port), '--statsd-port', str(self.statsd_port),
'--statsd-prefix', self.statsd_prefix + '.chp' '--statsd-prefix', self.statsd_prefix + '.chp'
]) ])
# Require SSL to be used or `--no-ssl` to confirm no SSL on # Warn if SSL is not used
if ' --ssl' not in ' '.join(cmd): if ' --ssl' not in ' '.join(cmd):
if self.confirm_no_ssl: self.log.warning("Running JupyterHub without SSL."
self.log.warning("Running JupyterHub without SSL." " I hope there is SSL termination happening somewhere else...")
" There better be SSL termination happening somewhere else...")
else:
self.log.error(
"Refusing to run JuptyterHub without SSL."
" If you are terminating SSL in another layer,"
" pass --no-ssl to tell JupyterHub to allow the proxy to listen on HTTP."
)
self.exit(1)
self.log.info("Starting proxy @ %s", self.proxy.public_server.bind_url) self.log.info("Starting proxy @ %s", self.proxy.public_server.bind_url)
self.log.debug("Proxy cmd: %s", cmd) self.log.debug("Proxy cmd: %s", cmd)
try: try:
self.proxy_process = Popen(cmd, env=env) self.proxy_process = Popen(cmd, env=env, start_new_session=True)
except FileNotFoundError as e: except FileNotFoundError as e:
self.log.error( self.log.error(
"Failed to find proxy %r\n" "Failed to find proxy %r\n"
@@ -1007,6 +1214,7 @@ class JupyterHub(Application):
yield self.start_proxy() yield self.start_proxy()
self.log.info("Setting up routes on new proxy") self.log.info("Setting up routes on new proxy")
yield self.proxy.add_all_users(self.users) yield self.proxy.add_all_users(self.users)
yield self.proxy.add_all_services(self.services)
self.log.info("New proxy back up, and good to go") self.log.info("New proxy back up, and good to go")
def init_tornado_settings(self): def init_tornado_settings(self):
@@ -1032,8 +1240,6 @@ class JupyterHub(Application):
else: else:
version_hash=datetime.now().strftime("%Y%m%d%H%M%S"), version_hash=datetime.now().strftime("%Y%m%d%H%M%S"),
subdomain_host = self.subdomain_host
domain = urlparse(subdomain_host).hostname
settings = dict( settings = dict(
log_function=log_request, log_function=log_request,
config=self.config, config=self.config,
@@ -1056,8 +1262,8 @@ class JupyterHub(Application):
template_path=self.template_paths, template_path=self.template_paths,
jinja2_env=jinja_env, jinja2_env=jinja_env,
version_hash=version_hash, version_hash=version_hash,
subdomain_host=subdomain_host, subdomain_host=self.subdomain_host,
domain=domain, domain=self.domain,
statsd=self.statsd, statsd=self.statsd,
) )
# allow configured settings to have priority # allow configured settings to have priority
@@ -1065,6 +1271,7 @@ class JupyterHub(Application):
self.tornado_settings = settings self.tornado_settings = settings
# constructing users requires access to tornado_settings # constructing users requires access to tornado_settings
self.tornado_settings['users'] = self.users self.tornado_settings['users'] = self.users
self.tornado_settings['services'] = self._service_map
def init_tornado_application(self): def init_tornado_application(self):
"""Instantiate the tornado Application object""" """Instantiate the tornado Application object"""
@@ -1101,7 +1308,9 @@ class JupyterHub(Application):
self.init_hub() self.init_hub()
self.init_proxy() self.init_proxy()
yield self.init_users() yield self.init_users()
self.init_api_tokens() yield self.init_groups()
self.init_services()
yield self.init_api_tokens()
self.init_tornado_settings() self.init_tornado_settings()
yield self.init_spawners() yield self.init_spawners()
self.init_handlers() self.init_handlers()
@@ -1109,9 +1318,16 @@ class JupyterHub(Application):
@gen.coroutine @gen.coroutine
def cleanup(self): def cleanup(self):
"""Shutdown our various subprocesses and cleanup runtime files.""" """Shutdown managed services and various subprocesses. Cleanup runtime files."""
futures = [] futures = []
managed_services = [ s for s in self._service_map.values() if s.managed ]
if managed_services:
self.log.info("Cleaning up %i services...", len(managed_services))
for service in managed_services:
futures.append(service.stop())
if self.cleanup_servers: if self.cleanup_servers:
self.log.info("Cleaning up single-user servers...") self.log.info("Cleaning up single-user servers...")
# request (async) process termination # request (async) process termination
@@ -1121,7 +1337,7 @@ class JupyterHub(Application):
else: else:
self.log.info("Leaving single-user servers running") self.log.info("Leaving single-user servers running")
# clean up proxy while SUS are shutting down # clean up proxy while single-user servers are shutting down
if self.cleanup_proxy: if self.cleanup_proxy:
if self.proxy_process: if self.proxy_process:
self.log.info("Cleaning up proxy[%i]...", self.proxy_process.pid) self.log.info("Cleaning up proxy[%i]...", self.proxy_process.pid)
@@ -1154,6 +1370,11 @@ class JupyterHub(Application):
def write_config_file(self): def write_config_file(self):
"""Write our default config to a .py config file""" """Write our default config to a .py config file"""
config_file_dir = os.path.dirname(os.path.abspath(self.config_file))
if not os.path.isdir(config_file_dir):
self.exit("{} does not exist. The destination directory must exist before generating config file.".format(
config_file_dir,
))
if os.path.exists(self.config_file) and not self.answer_yes: if os.path.exists(self.config_file) and not self.answer_yes:
answer = '' answer = ''
def ask(): def ask():
@@ -1204,7 +1425,7 @@ class JupyterHub(Application):
self.statsd.gauge('users.active', active_users_count) self.statsd.gauge('users.active', active_users_count)
self.db.commit() self.db.commit()
yield self.proxy.check_routes(self.users, routes) yield self.proxy.check_routes(self.users, self._service_map, routes)
@gen.coroutine @gen.coroutine
def start(self): def start(self):
@@ -1237,9 +1458,18 @@ class JupyterHub(Application):
except Exception as e: except Exception as e:
self.log.critical("Failed to start proxy", exc_info=True) self.log.critical("Failed to start proxy", exc_info=True)
self.exit(1) self.exit(1)
return
for service_name, service in self._service_map.items():
if not service.managed:
continue
try:
service.start()
except Exception as e:
self.log.critical("Failed to start service %s", service_name, exc_info=True)
self.exit(1)
loop.add_callback(self.proxy.add_all_users, self.users) loop.add_callback(self.proxy.add_all_users, self.users)
loop.add_callback(self.proxy.add_all_services, self._service_map)
if self.proxy_process: if self.proxy_process:
# only check / restart the proxy if we started it in the first place. # only check / restart the proxy if we started it in the first place.
@@ -1308,6 +1538,7 @@ class JupyterHub(Application):
print("\nInterrupted") print("\nInterrupted")
NewToken.classes.append(JupyterHub) NewToken.classes.append(JupyterHub)
UpgradeDB.classes.append(JupyterHub)
main = JupyterHub.launch_instance main = JupyterHub.launch_instance

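Taken together, the new application options above (load_groups, services, service_tokens) are exercised from jupyterhub_config.py. A hedged configuration sketch; the group, user, service names, paths, and token are illustrative placeholders, not values from this diff:

# jupyterhub_config.py -- illustrative values only
c = get_config()

# groups created (or extended) at startup by init_groups()
c.JupyterHub.load_groups = {
    'researchers': ['alice', 'bob'],
}

# a Hub-managed service: the Hub starts the command and issues its API token
c.JupyterHub.services = [
    {
        'name': 'cull_idle',
        'command': ['python', '/srv/jupyterhub/cull_idle_servers.py'],
    },
]

# pre-generated token for an externally managed service talking to the Hub API
c.JupyterHub.service_tokens = {
    'super-secret-token': 'external-dashboard',
}

When upgrading an existing deployment to this schema, the new jupyterhub upgrade-db subcommand (UpgradeDB above) applies the Alembic migrations, copying a SQLite database file to a timestamped backup first.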
View File

@@ -21,88 +21,128 @@ from .handlers.login import LoginHandler
from .utils import url_path_join from .utils import url_path_join
from .traitlets import Command from .traitlets import Command
class Authenticator(LoggingConfigurable):
"""A class for authentication.
The primary API is one method, `authenticate`, a tornado coroutine
for authenticating users.
"""
db = Any()
admin_users = Set(
help="""set of usernames of admin users
If unspecified, only the user that launches the server will be admin. class Authenticator(LoggingConfigurable):
"""Base class for implementing an authentication provider for JupyterHub"""
db = Any()
admin_users = Set(
help="""
Set of users that will have admin rights on this JupyterHub.
Admin users have extra privilages:
- Use the admin panel to see list of users logged in
- Add / remove users in some authenticators
- Restart / halt the hub
- Start / stop users' single-user servers
- Can access each individual users' single-user server (if configured)
Admin access should be treated the same way root access is.
Defaults to an empty set, in which case no user has admin access.
""" """
).tag(config=True) ).tag(config=True)
whitelist = Set( whitelist = Set(
help="""Username whitelist. help="""
Whitelist of usernames that are allowed to log in.
Use this to restrict which users can login.
If empty, allow any user to attempt login. Use this with supported authenticators to restrict which users can log in. This is an
additional whitelist that further restricts users, beyond whatever restrictions the
authenticator has in place.
If empty, does not perform any additional restriction.
""" """
).tag(config=True) ).tag(config=True)
custom_html = Unicode('',
help="""HTML login form for custom handlers. @observe('whitelist')
Override in form-based custom authenticators def _check_whitelist(self, change):
that don't use username+password, short_names = [name for name in change['new'] if len(name) <= 1]
or need custom branding. if short_names:
sorted_names = sorted(short_names)
single = ''.join(sorted_names)
string_set_typo = "set('%s')" % single
self.log.warning("whitelist contains single-character names: %s; did you mean set([%r]) instead of %s?",
sorted_names[:8], single, string_set_typo,
)
custom_html = Unicode(
help="""
HTML form to be overridden by authenticators if they want a custom authentication form.
Defaults to an empty string, which shows the default username/password form.
""" """
) )
login_service = Unicode('',
help="""Name of the login service for external login_service = Unicode(
login services (e.g. 'GitHub'). help="""
Name of the login service that this authenticator is providing using to authenticate users.
Example: GitHub, MediaWiki, Google, etc.
Setting this value replaces the login form with a "Login with <login_service>" button.
Any authenticator that redirects to an external service (e.g. using OAuth) should set this.
""" """
) )
username_pattern = Unicode( username_pattern = Unicode(
help="""Regular expression pattern for validating usernames. help="""
Regular expression pattern that all valid usernames must match.
If not defined: allow any username.
If a username does not match the pattern specified here, authentication will not be attempted.
If not set, allow any username.
""" """
).tag(config=True) ).tag(config=True)
@observe('username_pattern') @observe('username_pattern')
def _username_pattern_changed(self, change): def _username_pattern_changed(self, change):
if not change['new']: if not change['new']:
self.username_regex = None self.username_regex = None
self.username_regex = re.compile(change['new']) self.username_regex = re.compile(change['new'])
username_regex = Any() username_regex = Any(
help="""
Compiled regex kept in sync with `username_pattern`
"""
)
def validate_username(self, username): def validate_username(self, username):
"""Validate a (normalized) username. """Validate a normalized username
Return True if username is valid, False otherwise. Return True if username is valid, False otherwise.
""" """
if not self.username_regex: if not self.username_regex:
return True return True
return bool(self.username_regex.match(username)) return bool(self.username_regex.match(username))
username_map = Dict( username_map = Dict(
help="""Dictionary mapping authenticator usernames to JupyterHub users. help="""Dictionary mapping authenticator usernames to JupyterHub users.
Can be used to map OAuth service names to local users, for instance. Primarily used to normalize OAuth user names to local users.
Used in normalize_username.
""" """
).tag(config=True) ).tag(config=True)
def normalize_username(self, username): def normalize_username(self, username):
"""Normalize a username. """Normalize the given username and return it
Override in subclasses if usernames should have some normalization. Override in subclasses if usernames need different normalization rules.
Default: cast to lowercase, lookup in username_map.
The default attempts to lowercase the username and apply `username_map` if it is
set.
""" """
username = username.lower() username = username.lower()
username = self.username_map.get(username, username) username = self.username_map.get(username, username)
return username return username
def check_whitelist(self, username): def check_whitelist(self, username):
"""Check a username against our whitelist. """Check if a username is allowed to authenticate based on whitelist configuration
Return True if username is allowed, False otherwise. Return True if username is allowed, False otherwise.
No whitelist means any username should be allowed. No whitelist means any username is allowed.
Names are normalized *before* being checked against the whitelist. Names are normalized *before* being checked against the whitelist.
""" """
if not self.whitelist: if not self.whitelist:
@@ -112,18 +152,21 @@ class Authenticator(LoggingConfigurable):
@gen.coroutine @gen.coroutine
def get_authenticated_user(self, handler, data): def get_authenticated_user(self, handler, data):
"""This is the outer API for authenticating a user. """Authenticate the user who is attempting to log in
Returns normalized username if successful, None otherwise.
This calls `authenticate`, which should be overridden in subclasses, This calls `authenticate`, which should be overridden in subclasses,
normalizes the username if any normalization should be done, normalizes the username if any normalization should be done,
and then validates the name in the whitelist. and then validates the name in the whitelist.
This is the outer API for authenticating a user.
Subclasses should not need to override this method. Subclasses should not need to override this method.
The various stages can be overridden separately: The various stages can be overridden separately:
- `authenticate` turns formdata into a username
- authenticate turns formdata into a username - `normalize_username` normalizes the username
- normalize_username normalizes the username - `check_whitelist` checks against the user whitelist
- check_whitelist checks against the user whitelist
""" """
username = yield self.authenticate(handler, data) username = yield self.authenticate(handler, data)
if username is None: if username is None:
@@ -132,52 +175,62 @@ class Authenticator(LoggingConfigurable):
if not self.validate_username(username): if not self.validate_username(username):
self.log.warning("Disallowing invalid username %r.", username) self.log.warning("Disallowing invalid username %r.", username)
return return
if self.check_whitelist(username):
whitelist_pass = yield gen.maybe_future(self.check_whitelist(username))
if whitelist_pass:
return username return username
else: else:
self.log.warning("User %r not in whitelist.", username) self.log.warning("User %r not in whitelist.", username)
return return
@gen.coroutine @gen.coroutine
def authenticate(self, handler, data): def authenticate(self, handler, data):
"""Authenticate a user with login form data. """Authenticate a user with login form data
This must be a tornado gen.coroutine. This must be a tornado gen.coroutine.
It must return the username on successful authentication, It must return the username on successful authentication,
and return None on failed authentication. and return None on failed authentication.
Checking the whitelist is handled separately by the caller. Checking the whitelist is handled separately by the caller.
Args: Args:
handler (tornado.web.RequestHandler): the current request handler handler (tornado.web.RequestHandler): the current request handler
data (dict): The formdata of the login form. data (dict): The formdata of the login form.
The default form has 'username' and 'password' fields. The default form has 'username' and 'password' fields.
Return: Returns:
str: the username of the authenticated user username (str or None): The username of the authenticated user,
None: Authentication failed or None if Authentication failed
""" """
def pre_spawn_start(self, user, spawner): def pre_spawn_start(self, user, spawner):
"""Hook called before spawning a user's server. """Hook called before spawning a user's server
Can be used to do auth-related startup, e.g. opening PAM sessions. Can be used to do auth-related startup, e.g. opening PAM sessions.
""" """
def post_spawn_stop(self, user, spawner): def post_spawn_stop(self, user, spawner):
"""Hook called after stopping a user container. """Hook called after stopping a user container
Can be used to do auth-related cleanup, e.g. closing PAM sessions. Can be used to do auth-related cleanup, e.g. closing PAM sessions.
""" """
def add_user(self, user): def add_user(self, user):
"""Add a new user """Hook called when a user is added to JupyterHub
This is called:
- When a user first authenticates
- When the hub restarts, for all users.
This method may be a coroutine.
By default, this just adds the user to the whitelist. By default, this just adds the user to the whitelist.
Subclasses may do more extensive things, Subclasses may do more extensive things, such as adding actual unix users,
such as adding actual unix users,
but they should call super to ensure the whitelist is updated. but they should call super to ensure the whitelist is updated.
Note that this should be idempotent, since it is called whenever the hub restarts
for all users.
Args: Args:
user (User): The User wrapper object user (User): The User wrapper object
""" """
@@ -185,97 +238,108 @@ class Authenticator(LoggingConfigurable):
raise ValueError("Invalid username: %s" % user.name) raise ValueError("Invalid username: %s" % user.name)
if self.whitelist: if self.whitelist:
self.whitelist.add(user.name) self.whitelist.add(user.name)
def delete_user(self, user): def delete_user(self, user):
"""Triggered when a user is deleted. """Hook called when a user is deleted
Removes the user from the whitelist. Removes the user from the whitelist.
Subclasses should call super to ensure the whitelist is updated. Subclasses should call super to ensure the whitelist is updated.
Args: Args:
user (User): The User wrapper object user (User): The User wrapper object
""" """
self.whitelist.discard(user.name) self.whitelist.discard(user.name)
def login_url(self, base_url): def login_url(self, base_url):
"""Override to register a custom login handler """Override this when registering a custom login handler
Generally used in combination with get_handlers. Generally used by authenticators that do not use simple form based authentication.
The subclass overriding this is responsible for making sure there is a handler
available to handle the URL returned from this method, using the `get_handlers`
method.
Args: Args:
base_url (str): the base URL of the Hub (e.g. /hub/) base_url (str): the base URL of the Hub (e.g. /hub/)
Returns: Returns:
str: The login URL, e.g. '/hub/login' str: The login URL, e.g. '/hub/login'
""" """
return url_path_join(base_url, 'login') return url_path_join(base_url, 'login')
def logout_url(self, base_url): def logout_url(self, base_url):
"""Override to register a custom logout handler. """Override when registering a custom logout handler
Generally used in combination with get_handlers. The subclass overriding this is responsible for making sure there is a handler
available to handle the URL returned from this method, using the `get_handlers`
method.
Args: Args:
base_url (str): the base URL of the Hub (e.g. /hub/) base_url (str): the base URL of the Hub (e.g. /hub/)
Returns: Returns:
str: The logout URL, e.g. '/hub/logout' str: The logout URL, e.g. '/hub/logout'
""" """
return url_path_join(base_url, 'logout') return url_path_join(base_url, 'logout')
def get_handlers(self, app): def get_handlers(self, app):
"""Return any custom handlers the authenticator needs to register """Return any custom handlers the authenticator needs to register
(e.g. for OAuth). Used in conjugation with `login_url` and `logout_url`.
Args: Args:
app (JupyterHub Application): app (JupyterHub Application):
the application object, in case it needs to be accessed for info. the application object, in case it needs to be accessed for info.
Returns: Returns:
list: list of ``('/url', Handler)`` tuples passed to tornado. handlers (list):
list of ``('/url', Handler)`` tuples passed to tornado.
The Hub prefix is added to any URLs. The Hub prefix is added to any URLs.
""" """
return [ return [
('/login', LoginHandler), ('/login', LoginHandler),
] ]
class LocalAuthenticator(Authenticator): class LocalAuthenticator(Authenticator):
"""Base class for Authenticators that work with local Linux/UNIX users """Base class for Authenticators that work with local Linux/UNIX users
Checks for local users, and can attempt to create them if they exist. Checks for local users, and can attempt to create them if they exist.
""" """
create_system_users = Bool(False, create_system_users = Bool(False,
help="""If a user is added that doesn't exist on the system, help="""
should I try to create the system user? If set to True, will attempt to create local system users if they do not exist already.
Supports Linux and BSD variants only.
""" """
).tag(config=True) ).tag(config=True)
add_user_cmd = Command( add_user_cmd = Command(
help="""The command to use for creating users as a list of strings. help="""
The command to use for creating users as a list of strings
For each element in the list, the string USERNAME will be replaced with For each element in the list, the string USERNAME will be replaced with
the user's username. The username will also be appended as the final argument. the user's username. The username will also be appended as the final argument.
For Linux, the default value is: For Linux, the default value is:
['adduser', '-q', '--gecos', '""', '--disabled-password'] ['adduser', '-q', '--gecos', '""', '--disabled-password']
To specify a custom home directory, set this to: To specify a custom home directory, set this to:
['adduser', '-q', '--gecos', '""', '--home', '/customhome/USERNAME', '--disabled-password'] ['adduser', '-q', '--gecos', '""', '--home', '/customhome/USERNAME', '--disabled-password']
This will run the command: This will run the command:
adduser -q --gecos "" --home /customhome/river --disabled-password river adduser -q --gecos "" --home /customhome/river --disabled-password river
when the user 'river' is created. when the user 'river' is created.
""" """
).tag(config=True) ).tag(config=True)
@default('add_user_cmd') @default('add_user_cmd')
def _add_user_cmd_default(self): def _add_user_cmd_default(self):
"""Guess the most likely-to-work adduser command for each platform"""
if sys.platform == 'darwin': if sys.platform == 'darwin':
raise ValueError("I don't know how to create users on OS X") raise ValueError("I don't know how to create users on OS X")
elif which('pw'): elif which('pw'):
@@ -286,10 +350,18 @@ class LocalAuthenticator(Authenticator):
return ['adduser', '-q', '--gecos', '""', '--disabled-password'] return ['adduser', '-q', '--gecos', '""', '--disabled-password']
group_whitelist = Set( group_whitelist = Set(
help="Automatically whitelist anyone in this group.", help="""
Whitelist all users from this UNIX group.
This makes the username whitelist ineffective.
"""
).tag(config=True) ).tag(config=True)
@observe('group_whitelist') @observe('group_whitelist')
def _group_whitelist_changed(self, change): def _group_whitelist_changed(self, change):
"""
Log a warning if both group_whitelist and user whitelist are set.
"""
if self.whitelist: if self.whitelist:
self.log.warning( self.log.warning(
"Ignoring username whitelist because group whitelist supplied!" "Ignoring username whitelist because group whitelist supplied!"
@@ -302,6 +374,9 @@ class LocalAuthenticator(Authenticator):
return super().check_whitelist(username) return super().check_whitelist(username)
def check_group_whitelist(self, username): def check_group_whitelist(self, username):
"""
If group_whitelist is configured, check if authenticating user is part of group.
"""
if not self.group_whitelist: if not self.group_whitelist:
return False return False
for grnam in self.group_whitelist: for grnam in self.group_whitelist:
@@ -316,9 +391,9 @@ class LocalAuthenticator(Authenticator):
@gen.coroutine @gen.coroutine
def add_user(self, user): def add_user(self, user):
"""Add a new user """Hook called whenever a new user is added
If self.create_system_users, the user will attempt to be created. If self.create_system_users, the user will attempt to be created if it doesn't exist.
""" """
user_exists = yield gen.maybe_future(self.system_user_exists(user)) user_exists = yield gen.maybe_future(self.system_user_exists(user))
if not user_exists: if not user_exists:
@@ -326,9 +401,9 @@ class LocalAuthenticator(Authenticator):
yield gen.maybe_future(self.add_system_user(user)) yield gen.maybe_future(self.add_system_user(user))
else: else:
raise KeyError("User %s does not exist." % user.name) raise KeyError("User %s does not exist." % user.name)
yield gen.maybe_future(super().add_user(user)) yield gen.maybe_future(super().add_user(user))
@staticmethod @staticmethod
def system_user_exists(user): def system_user_exists(user):
"""Check if the user exists on the system""" """Check if the user exists on the system"""
@@ -340,7 +415,10 @@ class LocalAuthenticator(Authenticator):
return True return True
def add_system_user(self, user): def add_system_user(self, user):
"""Create a new Linux/UNIX user on the system. Works on FreeBSD and Linux, at least.""" """Create a new local UNIX user on the system.
Tested to work on FreeBSD and Linux, at least.
"""
name = user.name name = user.name
cmd = [ arg.replace('USERNAME', name) for arg in self.add_user_cmd ] + [name] cmd = [ arg.replace('USERNAME', name) for arg in self.add_user_cmd ] + [name]
self.log.info("Creating user: %s", ' '.join(map(pipes.quote, cmd))) self.log.info("Creating user: %s", ' '.join(map(pipes.quote, cmd)))
@@ -352,30 +430,37 @@ class LocalAuthenticator(Authenticator):
class PAMAuthenticator(LocalAuthenticator): class PAMAuthenticator(LocalAuthenticator):
"""Authenticate local Linux/UNIX users with PAM""" """Authenticate local UNIX users with PAM"""
encoding = Unicode('utf8', encoding = Unicode('utf8',
help="""The encoding to use for PAM""" help="""
The text encoding to use when communicating with PAM
"""
).tag(config=True) ).tag(config=True)
service = Unicode('login', service = Unicode('login',
help="""The PAM service to use for authentication.""" help="""
The name of the PAM service to use for authentication
"""
).tag(config=True) ).tag(config=True)
open_sessions = Bool(True, open_sessions = Bool(True,
help="""Whether to open PAM sessions when spawners are started. help="""
Whether to open a new PAM session when spawners are started.
This may trigger things like mounting shared filesystems, This may trigger things like mounting shared filesystems,
loading credentials, etc. depending on system configuration, loading credentials, etc. depending on system configuration,
but it does not always work. but it does not always work.
It can be disabled with:: If any errors are encountered when opening/closing PAM sessions,
this is automatically set to False.
c.PAMAuthenticator.open_sessions = False
""" """
).tag(config=True) ).tag(config=True)
@gen.coroutine @gen.coroutine
def authenticate(self, handler, data): def authenticate(self, handler, data):
"""Authenticate with PAM, and return the username if login is successful. """Authenticate with PAM, and return the username if login is successful.
Return None otherwise. Return None otherwise.
""" """
username = data['username'] username = data['username']
@@ -388,9 +473,9 @@ class PAMAuthenticator(LocalAuthenticator):
self.log.warning("PAM Authentication failed: %s", e) self.log.warning("PAM Authentication failed: %s", e)
else: else:
return username return username
def pre_spawn_start(self, user, spawner): def pre_spawn_start(self, user, spawner):
"""Open PAM session for user""" """Open PAM session for user if so configured"""
if not self.open_sessions: if not self.open_sessions:
return return
try: try:
@@ -399,9 +484,9 @@ class PAMAuthenticator(LocalAuthenticator):
self.log.warning("Failed to open PAM session for %s: %s", user.name, e) self.log.warning("Failed to open PAM session for %s: %s", user.name, e)
self.log.warning("Disabling PAM sessions from now on.") self.log.warning("Disabling PAM sessions from now on.")
self.open_sessions = False self.open_sessions = False
def post_spawn_stop(self, user, spawner): def post_spawn_stop(self, user, spawner):
"""Close PAM session for user""" """Close PAM session for user if we were configured to opened one"""
if not self.open_sessions: if not self.open_sessions:
return return
try: try:
@@ -410,4 +495,3 @@ class PAMAuthenticator(LocalAuthenticator):
self.log.warning("Failed to close PAM session for %s: %s", user.name, e) self.log.warning("Failed to close PAM session for %s: %s", user.name, e)
self.log.warning("Disabling PAM sessions from now on.") self.log.warning("Disabling PAM sessions from now on.")
self.open_sessions = False self.open_sessions = False
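For reference, the options touched in this hunk are normally set in jupyterhub_config.py. A minimal sketch, with an illustrative group name and home-directory layout (not part of the diff):

    # jupyterhub_config.py -- illustrative values only
    c = get_config()

    # create missing system users, with a custom home directory
    c.LocalAuthenticator.create_system_users = True
    c.LocalAuthenticator.add_user_cmd = [
        'adduser', '-q', '--gecos', '""',
        '--home', '/customhome/USERNAME', '--disabled-password',
    ]

    # whitelist an entire UNIX group instead of listing users individually
    c.LocalAuthenticator.group_whitelist = {'jupyter'}

    # PAM options
    c.PAMAuthenticator.service = 'login'
    c.PAMAuthenticator.open_sessions = False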

jupyterhub/dbutil.py Normal file

@@ -0,0 +1,93 @@
"""Database utilities for JupyterHub"""
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
# Based on pgcontents.utils.migrate, used under the Apache license.
from contextlib import contextmanager
import os
from subprocess import check_call
import sys
from tempfile import TemporaryDirectory
_here = os.path.abspath(os.path.dirname(__file__))
ALEMBIC_INI_TEMPLATE_PATH = os.path.join(_here, 'alembic.ini')
ALEMBIC_DIR = os.path.join(_here, 'alembic')
def write_alembic_ini(alembic_ini='alembic.ini', db_url='sqlite:///jupyterhub.sqlite'):
"""Write a complete alembic.ini from our template.
Parameters
----------
alembic_ini: str
path to the alembic.ini file that should be written.
db_url: str
The SQLAlchemy database url, e.g. `sqlite:///jupyterhub.sqlite`.
"""
with open(ALEMBIC_INI_TEMPLATE_PATH) as f:
alembic_ini_tpl = f.read()
with open(alembic_ini, 'w') as f:
f.write(
alembic_ini_tpl.format(
alembic_dir=ALEMBIC_DIR,
db_url=db_url,
)
)
@contextmanager
def _temp_alembic_ini(db_url):
"""Context manager for temporary JupyterHub alembic directory
Temporarily write an alembic.ini file for use with alembic migration scripts.
Context manager yields alembic.ini path.
Parameters
----------
db_url: str
The SQLAlchemy database url, e.g. `sqlite:///jupyterhub.sqlite`.
Returns
-------
alembic_ini: str
The path to the temporary alembic.ini that we have created.
This file will be cleaned up on exit from the context manager.
"""
with TemporaryDirectory() as td:
alembic_ini = os.path.join(td, 'alembic.ini')
write_alembic_ini(alembic_ini, db_url)
yield alembic_ini
def upgrade(db_url, revision='head'):
"""Upgrade the given database to revision.
db_url: str
The SQLAlchemy database url, e.g. `sqlite:///jupyterhub.sqlite`.
revision: str [default: head]
The alembic revision to upgrade to.
"""
with _temp_alembic_ini(db_url) as alembic_ini:
check_call(
['alembic', '-c', alembic_ini, 'upgrade', revision]
)
def _alembic(*args):
"""Run an alembic command with a temporary alembic.ini"""
with _temp_alembic_ini('sqlite:///jupyterhub.sqlite') as alembic_ini:
check_call(
['alembic', '-c', alembic_ini] + list(args)
)
if __name__ == '__main__':
import sys
_alembic(*sys.argv[1:])


@@ -6,6 +6,7 @@
import re import re
from datetime import timedelta from datetime import timedelta
from http.client import responses from http.client import responses
from urllib.parse import urlparse
from jinja2 import TemplateNotFound from jinja2 import TemplateNotFound
@@ -68,6 +69,9 @@ class BaseHandler(RequestHandler):
return self.settings.setdefault('users', {}) return self.settings.setdefault('users', {})
@property @property
def services(self):
return self.settings.setdefault('services', {})
@property
def hub(self): def hub(self):
return self.settings['hub'] return self.settings['hub']
@@ -144,7 +148,7 @@ class BaseHandler(RequestHandler):
if orm_token is None: if orm_token is None:
return None return None
else: else:
return orm_token.user return orm_token.user or orm_token.service
def _user_for_cookie(self, cookie_name, cookie_value=None): def _user_for_cookie(self, cookie_name, cookie_value=None):
"""Get the User for a given cookie, if there is one""" """Get the User for a given cookie, if there is one"""
@@ -218,6 +222,7 @@ class BaseHandler(RequestHandler):
if user and user.server: if user and user.server:
self.clear_cookie(user.server.cookie_name, path=user.server.base_url, **kwargs) self.clear_cookie(user.server.cookie_name, path=user.server.base_url, **kwargs)
self.clear_cookie(self.hub.server.cookie_name, path=self.hub.server.base_url, **kwargs) self.clear_cookie(self.hub.server.cookie_name, path=self.hub.server.base_url, **kwargs)
self.clear_cookie('jupyterhub-services', path=url_path_join(self.base_url, 'services'))
def _set_user_cookie(self, user, server): def _set_user_cookie(self, user, server):
# tornado <4.2 have a bug that consider secure==True as soon as # tornado <4.2 have a bug that consider secure==True as soon as
@@ -236,6 +241,13 @@ class BaseHandler(RequestHandler):
**kwargs **kwargs
) )
def set_service_cookie(self, user):
"""set the login cookie for services"""
self._set_user_cookie(user, orm.Server(
cookie_name='jupyterhub-services',
base_url=url_path_join(self.base_url, 'services')
))
def set_server_cookie(self, user): def set_server_cookie(self, user):
"""set the login cookie for the single-user server""" """set the login cookie for the single-user server"""
self._set_user_cookie(user, user.server) self._set_user_cookie(user, user.server)
@@ -254,6 +266,10 @@ class BaseHandler(RequestHandler):
if user.server: if user.server:
self.set_server_cookie(user) self.set_server_cookie(user)
# set single cookie for services
if self.db.query(orm.Service).filter(orm.Service.server != None).first():
self.set_service_cookie(user)
# create and set a new cookie token for the hub # create and set a new cookie token for the hub
if not self.get_current_user_cookie(): if not self.get_current_user_cookie():
self.set_hub_cookie(user) self.set_hub_cookie(user)
@@ -311,10 +327,13 @@ class BaseHandler(RequestHandler):
try: try:
yield gen.with_timeout(timedelta(seconds=self.slow_spawn_timeout), f) yield gen.with_timeout(timedelta(seconds=self.slow_spawn_timeout), f)
except gen.TimeoutError: except gen.TimeoutError:
if user.spawn_pending: # waiting_for_response indicates server process has started,
# but is yet to become responsive.
if not user.waiting_for_response:
# still in Spawner.start, which is taking a long time # still in Spawner.start, which is taking a long time
# we shouldn't poll while spawn_pending is True # we shouldn't poll while spawn is incomplete.
self.log.warning("User %s server is slow to start", user.name) self.log.warning("User %s's server is slow to start (timeout=%s)",
user.name, self.slow_spawn_timeout)
# schedule finish for when the user finishes spawning # schedule finish for when the user finishes spawning
IOLoop.current().add_future(f, finish_user_spawn) IOLoop.current().add_future(f, finish_user_spawn)
else: else:
@@ -324,7 +343,9 @@ class BaseHandler(RequestHandler):
if status is None: if status is None:
# hit timeout, but server's running. Hope that it'll show up soon enough, # hit timeout, but server's running. Hope that it'll show up soon enough,
# though it's possible that it started at the wrong URL # though it's possible that it started at the wrong URL
self.log.warning("User %s server is slow to become responsive", user.name) self.log.warning("User %s's server is slow to become responsive (timeout=%s)",
user.name, self.slow_spawn_timeout)
self.log.debug("Expecting server for %s at: %s", user.name, user.server.url)
# schedule finish for when the user finishes spawning # schedule finish for when the user finishes spawning
IOLoop.current().add_future(f, finish_user_spawn) IOLoop.current().add_future(f, finish_user_spawn)
else: else:
@@ -410,6 +431,7 @@ class BaseHandler(RequestHandler):
"""render custom error pages""" """render custom error pages"""
exc_info = kwargs.get('exc_info') exc_info = kwargs.get('exc_info')
message = '' message = ''
exception = None
status_message = responses.get(status_code, 'Unknown HTTP Error') status_message = responses.get(status_code, 'Unknown HTTP Error')
if exc_info: if exc_info:
exception = exc_info[1] exception = exc_info[1]
@@ -455,7 +477,11 @@ class PrefixRedirectHandler(BaseHandler):
Redirects /foo to /prefix/foo, etc. Redirects /foo to /prefix/foo, etc.
""" """
def get(self): def get(self):
path = self.request.uri[len(self.base_url):] uri = self.request.uri
if uri.startswith(self.base_url):
path = self.request.uri[len(self.base_url):]
else:
path = self.request.path
self.redirect(url_path_join( self.redirect(url_path_join(
self.hub.server.base_url, path, self.hub.server.base_url, path,
), permanent=False) ), permanent=False)
@@ -466,14 +492,30 @@ class UserSpawnHandler(BaseHandler):
If logged in, spawn a single-user server and redirect request. If logged in, spawn a single-user server and redirect request.
If a user, alice, requests /user/bob/notebooks/mynotebook.ipynb, If a user, alice, requests /user/bob/notebooks/mynotebook.ipynb,
redirect her to /user/alice/notebooks/mynotebook.ipynb, which should she will be redirected to /hub/user/bob/notebooks/mynotebook.ipynb,
in turn call this function. which will be handled by this handler,
which will in turn send her to /user/alice/notebooks/mynotebook.ipynb.
""" """
@gen.coroutine @gen.coroutine
def get(self, name, user_path): def get(self, name, user_path):
current_user = self.get_current_user() current_user = self.get_current_user()
if current_user and current_user.name == name: if current_user and current_user.name == name:
# If people visit /user/:name directly on the Hub,
# the redirects will just loop, because the proxy is bypassed.
# Try to check for that and warn,
# though the user-facing behavior is unchanged
host_info = urlparse(self.request.full_url())
port = host_info.port
if not port:
port = 443 if host_info.scheme == 'https' else 80
if port != self.proxy.public_server.port and port == self.hub.server.port:
self.log.warning("""
Detected possible direct connection to Hub's private ip: %s, bypassing proxy.
This will result in a redirect loop.
Make sure to connect to the proxied public URL %s
""", self.request.full_url(), self.proxy.public_server.url)
# logged in as correct user, spawn the server # logged in as correct user, spawn the server
if current_user.spawner: if current_user.spawner:
if current_user.spawn_pending: if current_user.spawn_pending:
@@ -514,6 +556,24 @@ class UserSpawnHandler(BaseHandler):
)) ))
class UserRedirectHandler(BaseHandler):
"""Redirect requests to user servers.
Allows public linking to "this file on your server".
/user-redirect/path/to/foo will redirect to /user/:name/path/to/foo
If the user is not logged in, send to login URL, redirecting back here.
.. versionadded:: 0.7
"""
@web.authenticated
def get(self, path):
user = self.get_current_user()
url = url_path_join(user.url, path)
self.redirect(url)
class CSPReportHandler(BaseHandler): class CSPReportHandler(BaseHandler):
'''Accepts a content security policy violation report''' '''Accepts a content security policy violation report'''
@web.authenticated @web.authenticated
@@ -526,7 +586,9 @@ class CSPReportHandler(BaseHandler):
# Report it to statsd as well # Report it to statsd as well
self.statsd.incr('csp_report') self.statsd.incr('csp_report')
default_handlers = [ default_handlers = [
(r'/user/([^/]+)(/.*)?', UserSpawnHandler), (r'/user/([^/]+)(/.*)?', UserSpawnHandler),
(r'/user-redirect/(.*)?', UserRedirectHandler),
(r'/security/csp-report', CSPReportHandler), (r'/security/csp-report', CSPReportHandler),
] ]
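The new /user-redirect/ route lets one link land each visitor on their own server. A sketch of the mapping it performs (username and notebook path are examples):

    from jupyterhub.utils import url_path_join

    def user_redirect_target(user_url, path):
        # mirrors UserRedirectHandler.get: join the logged-in user's URL with the requested path
        return url_path_join(user_url, path)

    # /hub/user-redirect/notebooks/demo.ipynb, visited by user 'river', resolves to:
    print(user_redirect_target('/user/river/', 'notebooks/demo.ipynb'))
    # -> /user/river/notebooks/demo.ipynb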


@@ -3,6 +3,8 @@
# Copyright (c) Jupyter Development Team. # Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License. # Distributed under the terms of the Modified BSD License.
from urllib.parse import urlparse
from tornado.escape import url_escape from tornado.escape import url_escape
from tornado import gen from tornado import gen
@@ -15,12 +17,12 @@ class LogoutHandler(BaseHandler):
user = self.get_current_user() user = self.get_current_user()
if user: if user:
self.log.info("User logged out: %s", user.name) self.log.info("User logged out: %s", user.name)
self.clear_login_cookie() self.clear_login_cookie()
for name in user.other_user_cookies: for name in user.other_user_cookies:
self.clear_login_cookie(name) self.clear_login_cookie(name)
user.other_user_cookies = set([]) user.other_user_cookies = set([])
self.statsd.incr('logout')
self.redirect(self.hub.server.base_url, permanent=False) self.redirect(self.hub.server.base_url, permanent=False)
self.statsd.incr('logout')
class LoginHandler(BaseHandler): class LoginHandler(BaseHandler):
@@ -38,8 +40,11 @@ class LoginHandler(BaseHandler):
def get(self): def get(self):
self.statsd.incr('login.request') self.statsd.incr('login.request')
next_url = self.get_argument('next', '') next_url = self.get_argument('next', '')
if not next_url.startswith('/'): if (next_url + '/').startswith('%s://%s/' % (self.request.protocol, self.request.host)):
# disallow non-absolute next URLs (e.g. full URLs) # treat absolute URLs for our host as absolute paths:
next_url = urlparse(next_url).path
elif not next_url.startswith('/'):
# disallow non-absolute next URLs (e.g. full URLs to other hosts)
next_url = '' next_url = ''
user = self.get_current_user() user = self.get_current_user()
if user: if user:
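The tightened next_url handling can be sketched as a standalone function (hostnames are illustrative):

    from urllib.parse import urlparse

    def sanitize_next_url(next_url, protocol, host):
        # mirrors the LoginHandler logic above
        if (next_url + '/').startswith('%s://%s/' % (protocol, host)):
            # absolute URL for our own host: keep only the path
            return urlparse(next_url).path
        elif not next_url.startswith('/'):
            # anything else that is not an absolute path is dropped
            return ''
        return next_url

    print(sanitize_next_url('https://hub.example.com/hub/home', 'https', 'hub.example.com'))  # /hub/home
    print(sanitize_next_url('https://evil.example.com/x', 'https', 'hub.example.com'))        # ''
    print(sanitize_next_url('/hub/admin', 'https', 'hub.example.com'))                        # /hub/admin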


@@ -3,17 +3,22 @@
# Copyright (c) Jupyter Development Team. # Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License. # Distributed under the terms of the Modified BSD License.
from http.client import responses
from jinja2 import TemplateNotFound
from tornado import web, gen from tornado import web, gen
from .. import orm from .. import orm
from ..utils import admin_only, url_path_join from ..utils import admin_only, url_path_join
from .base import BaseHandler from .base import BaseHandler
from urllib.parse import quote
class RootHandler(BaseHandler): class RootHandler(BaseHandler):
"""Render the Hub root page. """Render the Hub root page.
If next argument is passed by single-user server,
redirect to base_url + single-user page.
If logged in, redirects to: If logged in, redirects to:
- single-user server if running - single-user server if running
@@ -22,6 +27,21 @@ class RootHandler(BaseHandler):
Otherwise, renders login page. Otherwise, renders login page.
""" """
def get(self): def get(self):
next_url = self.get_argument('next', '')
if next_url and not next_url.startswith('/'):
self.log.warning("Disallowing redirect outside JupyterHub: %r", next_url)
next_url = ''
if next_url and next_url.startswith(url_path_join(self.base_url, 'user/')):
# add /hub/ prefix, to ensure we redirect to the right user's server.
# The next request will be handled by UserSpawnHandler,
# ultimately redirecting to the logged-in user's server.
without_prefix = next_url[len(self.base_url):]
next_url = url_path_join(self.hub.server.base_url, without_prefix)
self.log.warning("Redirecting %s to %s. For sharing public links, use /user-redirect/",
self.request.uri, next_url,
)
self.redirect(next_url)
return
user = self.get_current_user() user = self.get_current_user()
if user: if user:
if user.running: if user.running:
@@ -31,9 +51,8 @@ class RootHandler(BaseHandler):
else: else:
url = url_path_join(self.hub.server.base_url, 'home') url = url_path_join(self.hub.server.base_url, 'home')
self.log.debug("User is not running: %s", url) self.log.debug("User is not running: %s", url)
self.redirect(url) else:
return url = self.authenticator.login_url(self.base_url)
url = url_path_join(self.hub.server.base_url, 'login')
self.redirect(url) self.redirect(url)
@@ -41,9 +60,14 @@ class HomeHandler(BaseHandler):
"""Render the user's home page.""" """Render the user's home page."""
@web.authenticated @web.authenticated
@gen.coroutine
def get(self): def get(self):
user = self.get_current_user()
if user.running:
# trigger poll_and_notify event in case of a server that died
yield user.spawner.poll_and_notify()
html = self.render_template('home.html', html = self.render_template('home.html',
user=self.get_current_user(), user=user,
) )
self.finish(html) self.finish(html)
@@ -160,9 +184,43 @@ class AdminHandler(BaseHandler):
self.finish(html) self.finish(html)
class ProxyErrorHandler(BaseHandler):
"""Handler for rendering proxy error pages"""
def get(self, status_code_s):
status_code = int(status_code_s)
status_message = responses.get(status_code, 'Unknown HTTP Error')
# build template namespace
hub_home = url_path_join(self.hub.server.base_url, 'home')
message_html = ''
if status_code == 503:
message_html = ' '.join([
"Your server appears to be down.",
"Try restarting it <a href='%s'>from the hub</a>" % hub_home
])
ns = dict(
status_code=status_code,
status_message=status_message,
message_html=message_html,
logo_url=hub_home,
)
self.set_header('Content-Type', 'text/html')
# render the template
try:
html = self.render_template('%s.html' % status_code, **ns)
except TemplateNotFound:
self.log.debug("No template for %d", status_code)
html = self.render_template('error.html', **ns)
self.write(html)
default_handlers = [ default_handlers = [
(r'/', RootHandler), (r'/', RootHandler),
(r'/home', HomeHandler), (r'/home', HomeHandler),
(r'/admin', AdminHandler), (r'/admin', AdminHandler),
(r'/spawn', SpawnHandler), (r'/spawn', SpawnHandler),
(r'/error/(\d+)', ProxyErrorHandler),
] ]
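The /hub/ prefix insertion done by RootHandler for next URLs pointing at /user/... can be shown in isolation (base_url '/' and the username are examples):

    from jupyterhub.utils import url_path_join

    base_url = '/'           # JupyterHub.base_url (example)
    hub_base_url = '/hub/'   # the Hub's own base_url
    next_url = '/user/river/notebooks/demo.ipynb'

    # mirrors the prefix handling in RootHandler.get above
    if next_url.startswith(url_path_join(base_url, 'user/')):
        without_prefix = next_url[len(base_url):]
        next_url = url_path_join(hub_base_url, without_prefix)

    print(next_url)  # -> /hub/user/river/notebooks/demo.ipynb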


@@ -20,7 +20,7 @@ from sqlalchemy.ext.declarative import declarative_base, declared_attr
from sqlalchemy.orm import sessionmaker, relationship from sqlalchemy.orm import sessionmaker, relationship
from sqlalchemy.pool import StaticPool from sqlalchemy.pool import StaticPool
from sqlalchemy.sql.expression import bindparam from sqlalchemy.sql.expression import bindparam
from sqlalchemy import create_engine from sqlalchemy import create_engine, Table
from .utils import ( from .utils import (
random_port, url_path_join, wait_for_server, wait_for_http_server, random_port, url_path_join, wait_for_server, wait_for_http_server,
@@ -152,6 +152,35 @@ class Proxy(Base):
return client.fetch(req) return client.fetch(req)
@gen.coroutine
def add_service(self, service, client=None):
"""Add a service's server to the proxy table."""
if not service.server:
raise RuntimeError(
"Service %s does not have an http endpoint to add to the proxy.", service.name)
self.log.info("Adding service %s to proxy %s => %s",
service.name, service.proxy_path, service.server.host,
)
yield self.api_request(service.proxy_path,
method='POST',
body=dict(
target=service.server.host,
service=service.name,
),
client=client,
)
@gen.coroutine
def delete_service(self, service, client=None):
"""Remove a service's server from the proxy table."""
self.log.info("Removing service %s from proxy", service.name)
yield self.api_request(service.proxy_path,
method='DELETE',
client=client,
)
@gen.coroutine @gen.coroutine
def add_user(self, user, client=None): def add_user(self, user, client=None):
"""Add a user's server to the proxy table.""" """Add a user's server to the proxy table."""
@@ -174,7 +203,7 @@ class Proxy(Base):
@gen.coroutine @gen.coroutine
def delete_user(self, user, client=None): def delete_user(self, user, client=None):
"""Remove a user's server to the proxy table.""" """Remove a user's server from the proxy table."""
self.log.info("Removing user %s from proxy", user.name) self.log.info("Removing user %s from proxy", user.name)
yield self.api_request(user.proxy_path, yield self.api_request(user.proxy_path,
method='DELETE', method='DELETE',
@@ -182,10 +211,20 @@ class Proxy(Base):
) )
@gen.coroutine @gen.coroutine
def get_routes(self, client=None): def add_all_services(self, service_dict):
"""Fetch the proxy's routes""" """Update the proxy table from the database.
resp = yield self.api_request('', client=client)
return json.loads(resp.body.decode('utf8', 'replace')) Used when loading up a new proxy.
"""
db = inspect(self).session
futures = []
for orm_service in db.query(Service):
service = service_dict[orm_service.name]
if service.server:
futures.append(self.add_service(service))
# wait after submitting them all
for f in futures:
yield f
@gen.coroutine @gen.coroutine
def add_all_users(self, user_dict): def add_all_users(self, user_dict):
@@ -204,27 +243,44 @@ class Proxy(Base):
yield f yield f
@gen.coroutine @gen.coroutine
def check_routes(self, user_dict, routes=None): def get_routes(self, client=None):
"""Fetch the proxy's routes"""
resp = yield self.api_request('', client=client)
return json.loads(resp.body.decode('utf8', 'replace'))
@gen.coroutine
def check_routes(self, user_dict, service_dict, routes=None):
"""Check that all users are properly routed on the proxy""" """Check that all users are properly routed on the proxy"""
if not routes: if not routes:
routes = yield self.get_routes() routes = yield self.get_routes()
have_routes = { r['user'] for r in routes.values() if 'user' in r } user_routes = { r['user'] for r in routes.values() if 'user' in r }
futures = [] futures = []
db = inspect(self).session db = inspect(self).session
for orm_user in db.query(User).filter(User.server != None): for orm_user in db.query(User):
user = user_dict[orm_user] user = user_dict[orm_user]
if not user.running: if user.running:
# Don't add users to the proxy that haven't finished starting if user.name not in user_routes:
continue self.log.warning("Adding missing route for %s (%s)", user.name, user.server)
if user.server is None: futures.append(self.add_user(user))
else:
# User not running, make sure it's not in the table
if user.name in user_routes:
self.log.warning("Removing route for not running %s", user.name)
futures.append(self.delete_user(user))
# check service routes
service_routes = { r['service'] for r in routes.values() if 'service' in r }
for orm_service in db.query(Service).filter(Service.server != None):
service = service_dict[orm_service.name]
if service.server is None:
# This should never be True, but seems to be on rare occasion. # This should never be True, but seems to be on rare occasion.
# catch filter bug, either in sqlalchemy or my understanding of its behavior # catch filter bug, either in sqlalchemy or my understanding of its behavior
self.log.error("User %s has no server, but wasn't filtered out.", user) self.log.error("Service %s has no server, but wasn't filtered out.", service)
continue continue
if user.name not in have_routes: if service.name not in service_routes:
self.log.warning("Adding missing route for %s (%s)", user.name, user.server) self.log.warning("Adding missing route for %s (%s)", service.name, service.server)
futures.append(self.add_user(user)) futures.append(self.add_service(service))
for f in futures: for f in futures:
yield f yield f
@@ -258,6 +314,32 @@ class Hub(Base):
return "<%s [unconfigured]>" % self.__class__.__name__ return "<%s [unconfigured]>" % self.__class__.__name__
# user:group many:many mapping table
user_group_map = Table('user_group_map', Base.metadata,
Column('user_id', ForeignKey('users.id'), primary_key=True),
Column('group_id', ForeignKey('groups.id'), primary_key=True),
)
class Group(Base):
"""User Groups"""
__tablename__ = 'groups'
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(Unicode(1023), unique=True)
users = relationship('User', secondary='user_group_map', back_populates='groups')
def __repr__(self):
return "<%s %s (%i users)>" % (
self.__class__.__name__, self.name, len(self.users)
)
@classmethod
def find(cls, db, name):
"""Find a group by name.
Returns None if not found.
"""
return db.query(cls).filter(cls.name==name).first()
class User(Base): class User(Base):
"""The User table """The User table
@@ -276,16 +358,22 @@ class User(Base):
""" """
__tablename__ = 'users' __tablename__ = 'users'
id = Column(Integer, primary_key=True, autoincrement=True) id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(Unicode(1023)) name = Column(Unicode(1023), unique=True)
# should we allow multiple servers per user? # should we allow multiple servers per user?
_server_id = Column(Integer, ForeignKey('servers.id')) _server_id = Column(Integer, ForeignKey('servers.id', ondelete="SET NULL"))
server = relationship(Server, primaryjoin=_server_id == Server.id) server = relationship(Server, primaryjoin=_server_id == Server.id)
admin = Column(Boolean, default=False) admin = Column(Boolean, default=False)
last_activity = Column(DateTime, default=datetime.utcnow) last_activity = Column(DateTime, default=datetime.utcnow)
api_tokens = relationship("APIToken", backref="user") api_tokens = relationship("APIToken", backref="user")
cookie_id = Column(Unicode(1023), default=new_token) cookie_id = Column(Unicode(1023), default=new_token)
# User.state is actually Spawner state
# We will need to figure something else out if/when we have multiple spawners per user
state = Column(JSONDict) state = Column(JSONDict)
# Authenticators can store their state here:
auth_state = Column(JSONDict)
# group mapping
groups = relationship('Group', secondary='user_group_map', back_populates='users')
other_user_cookies = set([]) other_user_cookies = set([])
@@ -308,21 +396,7 @@ class User(Base):
If `token` is given, load that token. If `token` is given, load that token.
""" """
assert self.id is not None return APIToken.new(token=token, user=self)
db = inspect(self).session
if token is None:
token = new_token()
else:
if len(token) < 8:
raise ValueError("Tokens must be at least 8 characters, got %r" % token)
found = APIToken.find(db, token)
if found:
raise ValueError("Collision on token: %s..." % token[:4])
orm_token = APIToken(user_id=self.id)
orm_token.token = token
db.add(orm_token)
db.commit()
return token
@classmethod @classmethod
def find(cls, db, name): def find(cls, db, name):
@@ -332,13 +406,67 @@ class User(Base):
""" """
return db.query(cls).filter(cls.name==name).first() return db.query(cls).filter(cls.name==name).first()
class Service(Base):
"""A service run with JupyterHub
A service is similar to a User without a Spawner.
A service can have API tokens for accessing the Hub's API
It has:
- name
- admin
- api tokens
- server (if proxied http endpoint)
In addition to what it has in common with users, a Service has extra info:
- pid: the process id (if managed)
"""
__tablename__ = 'services'
id = Column(Integer, primary_key=True, autoincrement=True)
# common user interface:
name = Column(Unicode(1023), unique=True)
admin = Column(Boolean, default=False)
api_tokens = relationship("APIToken", backref="service")
# service-specific interface
_server_id = Column(Integer, ForeignKey('servers.id'))
server = relationship(Server, primaryjoin=_server_id == Server.id)
pid = Column(Integer)
def new_api_token(self, token=None):
"""Create a new API token
If `token` is given, load that token.
"""
return APIToken.new(token=token, service=self)
@classmethod
def find(cls, db, name):
"""Find a service by name.
Returns None if not found.
"""
return db.query(cls).filter(cls.name==name).first()
class APIToken(Base): class APIToken(Base):
"""An API token""" """An API token"""
__tablename__ = 'api_tokens' __tablename__ = 'api_tokens'
# _constraint = ForeignKeyConstraint(['user_id', 'server_id'], ['users.id', 'services.id'])
@declared_attr @declared_attr
def user_id(cls): def user_id(cls):
return Column(Integer, ForeignKey('users.id')) return Column(Integer, ForeignKey('users.id', ondelete="CASCADE"), nullable=True)
@declared_attr
def service_id(cls):
return Column(Integer, ForeignKey('services.id', ondelete="CASCADE"), nullable=True)
id = Column(Integer, primary_key=True) id = Column(Integer, primary_key=True)
hashed = Column(Unicode(1023)) hashed = Column(Unicode(1023))
@@ -359,22 +487,42 @@ class APIToken(Base):
self.hashed = hash_token(token, rounds=self.rounds, salt=self.salt_bytes, algorithm=self.algorithm) self.hashed = hash_token(token, rounds=self.rounds, salt=self.salt_bytes, algorithm=self.algorithm)
def __repr__(self): def __repr__(self):
return "<{cls}('{pre}...', user='{u}')>".format( if self.user is not None:
kind = 'user'
name = self.user.name
elif self.service is not None:
kind = 'service'
name = self.service.name
else:
# this shouldn't happen
kind = 'owner'
name = 'unknown'
return "<{cls}('{pre}...', {kind}='{name}')>".format(
cls=self.__class__.__name__, cls=self.__class__.__name__,
pre=self.prefix, pre=self.prefix,
u=self.user.name, kind=kind,
name=name,
) )
@classmethod @classmethod
def find(cls, db, token): def find(cls, db, token, *, kind=None):
"""Find a token object by value. """Find a token object by value.
Returns None if not found. Returns None if not found.
`kind='user'` only returns API tokens for users
`kind='service'` only returns API tokens for services
""" """
prefix = token[:cls.prefix_length] prefix = token[:cls.prefix_length]
# since we can't filter on hashed values, filter on prefix # since we can't filter on hashed values, filter on prefix
# so we aren't comparing with all tokens # so we aren't comparing with all tokens
prefix_match = db.query(cls).filter(bindparam('prefix', prefix).startswith(cls.prefix)) prefix_match = db.query(cls).filter(bindparam('prefix', prefix).startswith(cls.prefix))
if kind == 'user':
prefix_match = prefix_match.filter(cls.user_id != None)
elif kind == 'service':
prefix_match = prefix_match.filter(cls.service_id != None)
elif kind is not None:
raise ValueError("kind must be 'user', 'service', or None, not %r" % kind)
for orm_token in prefix_match: for orm_token in prefix_match:
if orm_token.match(token): if orm_token.match(token):
return orm_token return orm_token
@@ -383,6 +531,31 @@ class APIToken(Base):
"""Is this my token?""" """Is this my token?"""
return compare_token(self.hashed, token) return compare_token(self.hashed, token)
@classmethod
def new(cls, token=None, user=None, service=None):
"""Generate a new API token for a user or service"""
assert user or service
assert not (user and service)
db = inspect(user or service).session
if token is None:
token = new_token()
else:
if len(token) < 8:
raise ValueError("Tokens must be at least 8 characters, got %r" % token)
found = APIToken.find(db, token)
if found:
raise ValueError("Collision on token: %s..." % token[:4])
orm_token = APIToken(token=token)
if user:
assert user.id is not None
orm_token.user_id = user.id
else:
assert service.id is not None
orm_token.service_id = service.id
db.add(orm_token)
db.commit()
return token
def new_session_factory(url="sqlite:///:memory:", reset=False, **kwargs): def new_session_factory(url="sqlite:///:memory:", reset=False, **kwargs):
"""Create a new session at url""" """Create a new session at url"""


jupyterhub/services/auth.py Normal file

@@ -0,0 +1,301 @@
"""Authenticating services with JupyterHub
Cookies are sent to the Hub for verification, replying with a JSON model describing the authenticated user.
HubAuth can be used in any application, even outside tornado.
HubAuthenticated is a mixin class for tornado handlers that should authenticate with the Hub.
"""
import os
import socket
import time
from urllib.parse import quote
import requests
from tornado.log import app_log
from tornado.web import HTTPError
from traitlets.config import Configurable
from traitlets import Unicode, Integer, Instance, default
from ..utils import url_path_join
class _ExpiringDict(dict):
"""Dict-like cache for Hub API requests
Values will expire after max_age seconds.
A monotonic timer is used (time.monotonic).
A max_age of 0 means cache forever.
"""
max_age = 0
def __init__(self, max_age=0):
self.max_age = max_age
self.timestamps = {}
self.values = {}
def __setitem__(self, key, value):
"""Store key and record timestamp"""
self.timestamps[key] = time.monotonic()
self.values[key] = value
def _check_age(self, key):
"""Check timestamp for a key"""
if key not in self.values:
# not registered, nothing to do
return
now = time.monotonic()
timestamp = self.timestamps[key]
if self.max_age > 0 and timestamp + self.max_age < now:
self.values.pop(key)
self.timestamps.pop(key)
def __contains__(self, key):
"""dict check for `key in dict`"""
self._check_age(key)
return key in self.values
def __getitem__(self, key):
"""Check age before returning value"""
self._check_age(key)
return self.values[key]
def get(self, key, default=None):
"""dict-like get:"""
try:
return self[key]
except KeyError:
return default
class HubAuth(Configurable):
"""A class for authenticating with JupyterHub
This can be used by any application.
If using tornado, use via :class:`HubAuthenticated` mixin.
If using manually, use the ``.user_for_cookie(cookie_value)`` method
to identify the user corresponding to a given cookie value.
The following config must be set:
- api_token (token for authenticating with JupyterHub API),
fetched from the JUPYTERHUB_API_TOKEN env by default.
The following config MAY be set:
- api_url: the base URL of the Hub's internal API,
fetched from JUPYTERHUB_API_URL by default.
- cookie_cache_max_age: the number of seconds responses
from the Hub should be cached.
- login_url (the *public* ``/hub/login`` URL of the Hub).
- cookie_name: the name of the cookie I should be using,
if different from the default (unlikely).
"""
# where is the hub
api_url = Unicode(os.environ.get('JUPYTERHUB_API_URL') or 'http://127.0.0.1:8081/hub/api',
help="""The base API URL of the Hub.
Typically http://hub-ip:hub-port/hub/api
"""
).tag(config=True)
login_url = Unicode('/hub/login',
help="""The login URL of the Hub
Typically /hub/login
"""
).tag(config=True)
api_token = Unicode(os.environ.get('JUPYTERHUB_API_TOKEN', ''),
help="""API key for accessing Hub API.
Generate with `jupyterhub token [username]` or add to JupyterHub.services config.
"""
).tag(config=True)
cookie_name = Unicode('jupyterhub-services',
help="""The name of the cookie I should be looking for"""
).tag(config=True)
cookie_cache_max_age = Integer(300,
help="""The maximum time (in seconds) to cache the Hub's response for cookie authentication.
A larger value reduces load on the Hub and occasional response lag.
A smaller value reduces propagation time of changes on the Hub (rare).
Default: 300 (five minutes)
"""
).tag(config=True)
cookie_cache = Instance(_ExpiringDict, allow_none=False)
@default('cookie_cache')
def _cookie_cache(self):
return _ExpiringDict(self.cookie_cache_max_age)
def user_for_cookie(self, encrypted_cookie, use_cache=True):
"""Ask the Hub to identify the user for a given cookie.
Args:
encrypted_cookie (str): the cookie value (not decrypted, the Hub will do that)
use_cache (bool): Specify use_cache=False to skip cached cookie values (default: True)
Returns:
user_model (dict): The user model, if a user is identified, None if authentication fails.
The 'name' field contains the user's name.
"""
if use_cache:
cached = self.cookie_cache.get(encrypted_cookie)
if cached is not None:
return cached
try:
r = requests.get(
url_path_join(self.api_url,
"authorizations/cookie",
self.cookie_name,
quote(encrypted_cookie, safe=''),
),
headers = {
'Authorization' : 'token %s' % self.api_token,
},
)
except requests.ConnectionError:
msg = "Failed to connect to Hub API at %r." % self.api_url
msg += " Is the Hub accessible at this URL (from host: %s)?" % socket.gethostname()
if '127.0.0.1' in self.api_url:
msg += " Make sure to set c.JupyterHub.hub_ip to an IP accessible to" + \
" single-user servers if the servers are not on the same host as the Hub."
raise HTTPError(500, msg)
if r.status_code == 404:
data = None
elif r.status_code == 403:
app_log.error("I don't have permission to verify cookies, my auth token may have expired: [%i] %s", r.status_code, r.reason)
raise HTTPError(500, "Permission failure checking authorization, I may need a new token")
elif r.status_code >= 500:
app_log.error("Upstream failure verifying auth token: [%i] %s", r.status_code, r.reason)
raise HTTPError(502, "Failed to check authorization (upstream problem)")
elif r.status_code >= 400:
app_log.warning("Failed to check authorization: [%i] %s", r.status_code, r.reason)
raise HTTPError(500, "Failed to check authorization")
else:
data = r.json()
self.cookie_cache[encrypted_cookie] = data
return data
def get_user(self, handler):
"""Get the Hub user for a given tornado handler.
Checks cookie with the Hub to identify the current user.
Args:
handler (tornado.web.RequestHandler): the current request handler
Returns:
user_model (dict): The user model, if a user is identified, None if authentication fails.
The 'name' field contains the user's name.
"""
# only allow this to be called once per handler
# avoids issues if an error is raised,
# since this may be called again when trying to render the error page
if hasattr(handler, '_cached_hub_user'):
return handler._cached_hub_user
handler._cached_hub_user = None
encrypted_cookie = handler.get_cookie(self.cookie_name)
if encrypted_cookie:
user_model = self.user_for_cookie(encrypted_cookie)
handler._cached_hub_user = user_model
return user_model
else:
app_log.debug("No token cookie")
return None
class HubAuthenticated(object):
"""Mixin for tornado handlers that are authenticated with JupyterHub
A handler that mixes this in must have the following attributes/properties:
- .hub_auth: A HubAuth instance
- .hub_users: A set of usernames to allow.
If left unspecified or None, username will not be checked.
- .hub_groups: A set of group names to allow.
If left unspecified or None, groups will not be checked.
Examples::
class MyHandler(HubAuthenticated, web.RequestHandler):
hub_users = {'inara', 'mal'}
def initialize(self, hub_auth):
self.hub_auth = hub_auth
@web.authenticated
def get(self):
...
"""
hub_users = None # set of allowed users
hub_groups = None # set of allowed groups
# self.hub_auth must be a HubAuth instance.
# If nothing specified, use default config,
# which will be configured with defaults
# based on JupyterHub environment variables for services.
_hub_auth = None
@property
def hub_auth(self):
if self._hub_auth is None:
self._hub_auth = HubAuth()
return self._hub_auth
@hub_auth.setter
def hub_auth(self, auth):
self._hub_auth = auth
def check_hub_user(self, user_model):
"""Check whether Hub-authenticated user should be allowed.
Returns the input if the user should be allowed, None otherwise.
Override if you want to check anything other than the username's presence in hub_users list.
Args:
user_model (dict): the user model returned from :class:`HubAuth`
Returns:
user_model (dict): The user model if the user should be allowed, None otherwise.
"""
if self.hub_users is None and self.hub_groups is None:
# no whitelist specified, allow any authenticated Hub user
return user_model
name = user_model['name']
if self.hub_users and name in self.hub_users:
# user in whitelist
return user_model
elif self.hub_groups and set(user_model['groups']).intersection(self.hub_groups):
# group in whitelist
return user_model
else:
app_log.warning("Not allowing Hub user %s" % name)
return None
def get_current_user(self):
"""Tornado's authentication method
Returns:
user_model (dict): The user model, if a user is identified, None if authentication fails.
"""
user_model = self.hub_auth.get_user(self)
if not user_model:
return
return self.check_hub_user(user_model)


@@ -0,0 +1,258 @@
"""A service is a process that talks to JupyterHub
Cases:
Managed:
- managed by JupyterHub (always subprocess, no custom Spawners)
- always a long-running process
- managed services are restarted automatically if they exit unexpectedly
Unmanaged:
- managed by external service (docker, systemd, etc.)
- do not need to be long-running processes, or processes at all
URL: needs a route added to the proxy.
- Public route will always be /services/service-name
- url specified in config
- if port is 0, Hub will select a port
API access:
- admin: tokens will have admin-access to the API
- not admin: tokens will only have non-admin access
(not much they can do other than defer to Hub for auth)
An externally managed service running on a URL::
{
'name': 'my-service',
'url': 'https://host:8888',
'admin': True,
'token': 'super-secret',
}
A hub-managed service with no URL:
{
'name': 'cull-idle',
'command': ['python', '/path/to/cull-idle']
'admin': True,
}
"""
from getpass import getuser
import pipes
import shutil
from subprocess import Popen
from urllib.parse import urlparse
from tornado import gen
from traitlets import (
HasTraits,
Any, Bool, Dict, Unicode, Instance,
default, observe,
)
from traitlets.config import LoggingConfigurable
from .. import orm
from ..traitlets import Command
from ..spawner import LocalProcessSpawner
from ..utils import url_path_join
class _MockUser(HasTraits):
name = Unicode()
server = Instance(orm.Server, allow_none=True)
state = Dict()
service = Instance(__module__ + '.Service')
# We probably shouldn't use a Spawner here,
# but there are too many concepts to share.
class _ServiceSpawner(LocalProcessSpawner):
"""Subclass of LocalProcessSpawner
Removes notebook-specific-ness from LocalProcessSpawner.
"""
cwd = Unicode()
cmd = Command(minlen=0)
def make_preexec_fn(self, name):
if not name or name == getuser():
# no setuid if no name
return
return super().make_preexec_fn(name)
def start(self):
"""Start the process"""
env = self.get_env()
cmd = self.cmd
self.log.info("Spawning %s", ' '.join(pipes.quote(s) for s in cmd))
try:
self.proc = Popen(self.cmd, env=env,
preexec_fn=self.make_preexec_fn(self.user.name),
start_new_session=True, # don't forward signals
cwd=self.cwd or None,
)
except PermissionError:
# use which to get abspath
script = shutil.which(cmd[0]) or cmd[0]
self.log.error("Permission denied trying to run %r. Does %s have access to this file?",
script, self.user.name,
)
raise
self.pid = self.proc.pid
class Service(LoggingConfigurable):
"""An object wrapping a service specification for Hub API consumers.
A service has inputs:
- name: str
the name of the service
- admin: bool(false)
whether the service should have administrative privileges
- url: str (None)
The URL where the service is/should be.
If specified, the service will be added to the proxy at /services/:name
If a service is to be managed by the Hub, it has a few extra options:
- command: (str/Popen list)
Command for JupyterHub to spawn the service.
Only use this if the service should be a subprocess.
If command is not specified, it is assumed to be managed
by a
- environment: dict
Additional environment variables for the service.
- user: str
The name of a system user to become.
If unspecified, run as the same user as the Hub.
"""
# inputs:
name = Unicode(
help="""The name of the service.
If the service has an http endpoint, it
"""
).tag(input=True)
admin = Bool(False,
help="Does the service need admin-access to the Hub API?"
).tag(input=True)
url = Unicode(
help="""URL of the service.
Only specify if the service runs an HTTP(s) endpoint that.
If managed, will be passed as JUPYTERHUB_SERVICE_URL env.
"""
).tag(input=True)
api_token = Unicode(
help="""The API token to use for the service.
If unspecified, an API token will be generated for managed services.
"""
).tag(input=True)
# Managed service API:
@property
def managed(self):
"""Am I managed by the Hub?"""
return bool(self.command)
command = Command(minlen=0,
help="Command to spawn this service, if managed."
).tag(input=True)
cwd = Unicode(
help="""The working directory in which to run the service."""
).tag(input=True)
environment = Dict(
help="""Environment variables to pass to the service.
Only used if the Hub is spawning the service.
"""
).tag(input=True)
user = Unicode(getuser(),
help="""The user to become when launching the service.
If unspecified, run the service as the same user as the Hub.
"""
).tag(input=True)
domain = Unicode()
host = Unicode()
proc = Any()
# handles on globals:
proxy = Any()
base_url = Unicode()
db = Any()
orm = Any()
@property
def server(self):
return self.orm.server
@property
def prefix(self):
return url_path_join(self.base_url, 'services', self.name)
@property
def proxy_path(self):
if not self.server:
return ''
if self.domain:
return url_path_join('/' + self.domain, self.server.base_url)
else:
return self.server.base_url
def __repr__(self):
return "<{cls}(name={name}{managed})>".format(
cls=self.__class__.__name__,
name=self.name,
managed=' managed' if self.managed else '',
)
def start(self):
"""Start a managed service"""
if not self.managed:
raise RuntimeError("Cannot start unmanaged service %s" % self)
self.log.info("Starting service %r: %r", self.name, self.command)
env = {}
env.update(self.environment)
env['JUPYTERHUB_SERVICE_NAME'] = self.name
env['JUPYTERHUB_API_TOKEN'] = self.api_token
env['JUPYTERHUB_API_URL'] = self.hub_api_url
env['JUPYTERHUB_BASE_URL'] = self.base_url
if self.url:
env['JUPYTERHUB_SERVICE_URL'] = self.url
env['JUPYTERHUB_SERVICE_PREFIX'] = self.server.base_url
self.spawner = _ServiceSpawner(
cmd=self.command,
environment=env,
api_token=self.api_token,
cwd=self.cwd,
user=_MockUser(
name=self.user,
service=self,
server=self.orm.server,
),
)
self.spawner.start()
self.proc = self.spawner.proc
self.spawner.add_poll_callback(self._proc_stopped)
self.spawner.start_polling()
def _proc_stopped(self):
"""Called when the service process unexpectedly exits"""
self.log.error("Service %s exited with status %i", self.name, self.proc.returncode)
self.start()
def stop(self):
"""Stop a managed service"""
if not self.managed:
raise RuntimeError("Cannot start unmanaged service %s" % self)
self.spawner.stop_polling()
return self.spawner.stop()

jupyterhub/singleuser.py Normal file

@@ -0,0 +1,273 @@
#!/usr/bin/env python
"""Extend regular notebook server to be aware of multiuser things."""
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
import os
from jinja2 import ChoiceLoader, FunctionLoader
from tornado import ioloop
from textwrap import dedent
try:
import notebook
except ImportError:
raise ImportError("JupyterHub single-user server requires notebook >= 4.0")
from traitlets import (
Bool,
Unicode,
CUnicode,
default,
validate,
TraitError,
)
from notebook.notebookapp import (
NotebookApp,
aliases as notebook_aliases,
flags as notebook_flags,
)
from notebook.auth.login import LoginHandler
from notebook.auth.logout import LogoutHandler
from jupyterhub import __version__
from .services.auth import HubAuth, HubAuthenticated
from .utils import url_path_join
# Authenticate requests with the Hub
class HubAuthenticatedHandler(HubAuthenticated):
"""Class we are going to patch-in for authentication with the Hub"""
@property
def hub_auth(self):
return self.settings['hub_auth']
@property
def hub_users(self):
return { self.settings['user'] }
class JupyterHubLoginHandler(LoginHandler):
"""LoginHandler that hooks up Hub authentication"""
@staticmethod
def login_available(settings):
return True
@staticmethod
def get_user(handler):
"""alternative get_current_user to query the Hub"""
# patch in HubAuthenticated class for querying the Hub for cookie authentication
name = 'NowHubAuthenticated'
if handler.__class__.__name__ != name:
handler.__class__ = type(name, (HubAuthenticatedHandler, handler.__class__), {})
return handler.get_current_user()
class JupyterHubLogoutHandler(LogoutHandler):
def get(self):
self.redirect(
self.settings['hub_host'] +
url_path_join(self.settings['hub_prefix'], 'logout'))
# register new hub related command-line aliases
aliases = dict(notebook_aliases)
aliases.update({
'user' : 'SingleUserNotebookApp.user',
'cookie-name': 'HubAuth.cookie_name',
'hub-prefix': 'SingleUserNotebookApp.hub_prefix',
'hub-host': 'SingleUserNotebookApp.hub_host',
'hub-api-url': 'SingleUserNotebookApp.hub_api_url',
'base-url': 'SingleUserNotebookApp.base_url',
})
flags = dict(notebook_flags)
flags.update({
'disable-user-config': ({
'SingleUserNotebookApp': {
'disable_user_config': True
}
}, "Disable user-controlled configuration of the notebook server.")
})
page_template = """
{% extends "templates/page.html" %}
{% block header_buttons %}
{{super()}}
<a href='{{hub_control_panel_url}}'
class='btn btn-default btn-sm navbar-btn pull-right'
style='margin-right: 4px; margin-left: 2px;'
>
Control Panel</a>
{% endblock %}
{% block logo %}
<img src='{{logo_url}}' alt='Jupyter Notebook'/>
{% endblock logo %}
"""
def _exclude_home(path_list):
"""Filter out any entries in a path list that are in my home directory.
Used to disable per-user configuration.
"""
home = os.path.expanduser('~')
for p in path_list:
if not p.startswith(home):
yield p
class SingleUserNotebookApp(NotebookApp):
"""A Subclass of the regular NotebookApp that is aware of the parent multiuser context."""
description = dedent("""
Single-user server for JupyterHub. Extends the Jupyter Notebook server.
Meant to be invoked by JupyterHub Spawners, and not directly.
""")
examples = ""
subcommands = {}
version = __version__
classes = NotebookApp.classes + [HubAuth]
user = CUnicode(config=True)
def _user_changed(self, name, old, new):
self.log.name = new
hub_prefix = Unicode().tag(config=True)
hub_host = Unicode().tag(config=True)
hub_api_url = Unicode().tag(config=True)
aliases = aliases
flags = flags
# disable some single-user configurables
token = ''
open_browser = False
trust_xheaders = True
login_handler_class = JupyterHubLoginHandler
logout_handler_class = JupyterHubLogoutHandler
port_retries = 0 # disable port-retries, since the Spawner will tell us what port to use
disable_user_config = Bool(False,
help="""Disable user configuration of single-user server.
Prevents user-writable files that normally configure the single-user server
from being loaded, ensuring admins have full control of configuration.
"""
).tag(config=True)
@validate('notebook_dir')
def _notebook_dir_validate(self, proposal):
value = os.path.expanduser(proposal['value'])
# Strip any trailing slashes
# *except* if it's root
_, path = os.path.splitdrive(value)
if path == os.sep:
return value
value = value.rstrip(os.sep)
if not os.path.isabs(value):
# If we receive a non-absolute path, make it absolute.
value = os.path.abspath(value)
if not os.path.isdir(value):
raise TraitError("No such notebook dir: %r" % value)
return value
@default('log_datefmt')
def _log_datefmt_default(self):
"""Exclude date from default date format"""
return "%Y-%m-%d %H:%M:%S"
@default('log_format')
def _log_format_default(self):
"""override default log format to include time"""
return "%(color)s[%(levelname)1.1s %(asctime)s.%(msecs).03d %(name)s %(module)s:%(lineno)d]%(end_color)s %(message)s"
def _confirm_exit(self):
# disable the exit confirmation for background notebook processes
ioloop.IOLoop.instance().stop()
def migrate_config(self):
if self.disable_user_config:
# disable config-migration when user config is disabled
return
else:
super(SingleUserNotebookApp, self).migrate_config()
@property
def config_file_paths(self):
path = super(SingleUserNotebookApp, self).config_file_paths
if self.disable_user_config:
# filter out user-writable config dirs if user config is disabled
path = list(_exclude_home(path))
return path
@property
def nbextensions_path(self):
path = super(SingleUserNotebookApp, self).nbextensions_path
if self.disable_user_config:
path = list(_exclude_home(path))
return path
@validate('static_custom_path')
def _validate_static_custom_path(self, proposal):
path = proposal['value']
if self.disable_user_config:
path = list(_exclude_home(path))
return path
def start(self):
super(SingleUserNotebookApp, self).start()
def init_hub_auth(self):
if not os.environ.get('JPY_API_TOKEN'):
self.exit("JPY_API_TOKEN env is required to run jupyterhub-singleuser. Did you launch it manually?")
self.hub_auth = HubAuth(
parent=self,
api_token=os.environ.pop('JPY_API_TOKEN'),
api_url=self.hub_api_url,
)
def init_webapp(self):
# load the hub related settings into the tornado settings dict
self.init_hub_auth()
s = self.tornado_settings
s['user'] = self.user
s['hub_prefix'] = self.hub_prefix
s['hub_host'] = self.hub_host
s['hub_auth'] = self.hub_auth
s['login_url'] = self.hub_host + self.hub_prefix
s['csp_report_uri'] = self.hub_host + url_path_join(self.hub_prefix, 'security/csp-report')
super(SingleUserNotebookApp, self).init_webapp()
self.patch_templates()
def patch_templates(self):
"""Patch page templates to add Hub-related buttons"""
self.jinja_template_vars['logo_url'] = self.hub_host + url_path_join(self.hub_prefix, 'logo')
self.jinja_template_vars['hub_host'] = self.hub_host
self.jinja_template_vars['hub_prefix'] = self.hub_prefix
env = self.web_app.settings['jinja2_env']
env.globals['hub_control_panel_url'] = \
self.hub_host + url_path_join(self.hub_prefix, 'home')
# patch jinja env loading to modify page template
def get_page(name):
if name == 'page.html':
return page_template
orig_loader = env.loader
env.loader = ChoiceLoader([
FunctionLoader(get_page),
orig_loader,
])
def main(argv=None):
return SingleUserNotebookApp.launch_instance(argv)
if __name__ == "__main__":
main()


@@ -1,4 +1,6 @@
"""Class for spawning single-user notebook servers.""" """
Contains base Spawner class & default implementation
"""
# Copyright (c) Jupyter Development Team. # Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License. # Distributed under the terms of the Modified BSD License.
@@ -7,6 +9,7 @@ import errno
import os import os
import pipes import pipes
import pwd import pwd
import shutil
import signal import signal
import sys import sys
import grp import grp
@@ -15,39 +18,73 @@ from subprocess import Popen
from tempfile import mkdtemp from tempfile import mkdtemp
from tornado import gen from tornado import gen
from tornado.ioloop import IOLoop, PeriodicCallback from tornado.ioloop import PeriodicCallback
from traitlets.config import LoggingConfigurable from traitlets.config import LoggingConfigurable
from traitlets import ( from traitlets import (
Any, Bool, Dict, Instance, Integer, Float, List, Unicode, Any, Bool, Dict, Instance, Integer, Float, List, Unicode,
validate,
) )
from .traitlets import Command from .traitlets import Command, ByteSpecification
from .utils import random_port from .utils import random_port
class Spawner(LoggingConfigurable): class Spawner(LoggingConfigurable):
"""Base class for spawning single-user notebook servers. """Base class for spawning single-user notebook servers.
Subclass this, and override the following methods: Subclass this, and override the following methods:
- load_state - load_state
- get_state - get_state
- start - start
- stop - stop
- poll - poll
As JupyterHub supports multiple users, an instance of the Spawner subclass
is created for each user. If there are 20 JupyterHub users, there will be 20
instances of the subclass.
""" """
db = Any() db = Any()
user = Any() user = Any()
hub = Any() hub = Any()
authenticator = Any() authenticator = Any()
api_token = Unicode() api_token = Unicode()
will_resume = Bool(False,
help="""Whether the Spawner will resume on next start
Default is False where each launch of the Spawner will be a new instance.
If True, an existing Spawner will resume instead of starting anew
(e.g. resuming a Docker container),
and API tokens in use when the Spawner stops will not be deleted.
"""
)
ip = Unicode('127.0.0.1', ip = Unicode('127.0.0.1',
help="The IP address (or hostname) the single-user server should listen on" help="""
The IP address (or hostname) the single-user server should listen on.
The JupyterHub proxy implementation should be able to send packets to this interface.
"""
).tag(config=True) ).tag(config=True)
port = Integer(0,
help="""
The port for single-user servers to listen on.
Defaults to `0`, which uses a randomly allocated port number each time.
New in version 0.7.
"""
)
start_timeout = Integer(60, start_timeout = Integer(60,
help="""Timeout (in seconds) before giving up on the spawner. help="""
Timeout (in seconds) before giving up on starting of single-user server.
This is the timeout for start to return, not the timeout for the server to respond. This is the timeout for start to return, not the timeout for the server to respond.
Callers of spawner.start will assume that startup has failed if it takes longer than this. Callers of spawner.start will assume that startup has failed if it takes longer than this.
start should return when the server process is started and its location is known. start should return when the server process is started and its location is known.
@@ -55,7 +92,8 @@ class Spawner(LoggingConfigurable):
).tag(config=True) ).tag(config=True)
http_timeout = Integer(30, http_timeout = Integer(30,
help="""Timeout (in seconds) before giving up on a spawned HTTP server help="""
Timeout (in seconds) before giving up on a spawned HTTP server
Once a server has successfully been spawned, this is the amount of time Once a server has successfully been spawned, this is the amount of time
we wait before assuming that the server is unable to accept we wait before assuming that the server is unable to accept
@@ -64,21 +102,30 @@ class Spawner(LoggingConfigurable):
).tag(config=True) ).tag(config=True)
poll_interval = Integer(30, poll_interval = Integer(30,
help="""Interval (in seconds) on which to poll the spawner.""" help="""
Interval (in seconds) on which to poll the spawner for single-user server's status.
At every poll interval, each spawner's `.poll` method is called, which checks
if the single-user server is still running. If it isn't running, then JupyterHub modifies
its own state accordingly and removes appropriate routes from the configurable proxy.
"""
).tag(config=True) ).tag(config=True)
_callbacks = List() _callbacks = List()
_poll_callback = Any() _poll_callback = Any()
debug = Bool(False, debug = Bool(False,
help="Enable debug-logging of the single-user server" help="Enable debug-logging of the single-user server"
).tag(config=True) ).tag(config=True)
options_form = Unicode("", help=""" options_form = Unicode(
help="""
An HTML form for options a user can specify on launching their server. An HTML form for options a user can specify on launching their server.
The surrounding `<form>` element and the submit button are already provided. The surrounding `<form>` element and the submit button are already provided.
For example: For example:
Set your key: Set your key:
<input name="key" val="default_key"></input> <input name="key" val="default_key"></input>
<br> <br>
@@ -87,6 +134,8 @@ class Spawner(LoggingConfigurable):
<option value="A">The letter A</option> <option value="A">The letter A</option>
<option value="B">The letter B</option> <option value="B">The letter B</option>
</select> </select>
The data from this form submission will be passed on to your spawner in `self.user_options`
""").tag(config=True) """).tag(config=True)
def options_from_form(self, form_data): def options_from_form(self, form_data):
@@ -102,9 +151,15 @@ class Spawner(LoggingConfigurable):
prior to `Spawner.start`. prior to `Spawner.start`.
""" """
return form_data return form_data
user_options = Dict(help="This is where form-specified options ultimately end up.") user_options = Dict(
help="""
Dict of user specified options for the user's spawned instance of a single-user server.
These user options are usually provided by the `options_form` displayed to the user when they start
their server.
""")
env_keep = List([ env_keep = List([
'PATH', 'PATH',
'PYTHONPATH', 'PYTHONPATH',
@@ -114,115 +169,238 @@ class Spawner(LoggingConfigurable):
'LANG', 'LANG',
'LC_ALL', 'LC_ALL',
], ],
help="Whitelist of environment variables for the subprocess to inherit" help="""
Whitelist of environment variables for the single-user server to inherit from the JupyterHub process.
This whitelist is used to ensure that sensitive information in the JupyterHub process's environment
(such as `CONFIGPROXY_AUTH_TOKEN`) is not passed to the single-user server's process.
"""
).tag(config=True) ).tag(config=True)
env = Dict(help="""Deprecated: use Spawner.get_env or Spawner.environment env = Dict(help="""Deprecated: use Spawner.get_env or Spawner.environment
- extend Spawner.get_env for adding required env in Spawner subclasses - extend Spawner.get_env for adding required env in Spawner subclasses
- Spawner.environment for config-specified env - Spawner.environment for config-specified env
""") """)
environment = Dict( environment = Dict(
help="""Environment variables to load for the Spawner. help="""
Extra environment variables to set for the single-user server's process.
Value could be a string or a callable. If it is a callable, it will Environment variables that end up in the single-user server's process come from 3 sources:
be called with one parameter, which will be the instance of the spawner - This `environment` configurable
in use. It should quickly (without doing much blocking operations) return - The JupyterHub process' environment variables that are whitelisted in `env_keep`
a string that will be used as the value for the environment variable. - Variables to establish contact between the single-user notebook and the hub (such as JPY_API_TOKEN)
The `environment` configurable should be set by JupyterHub administrators to add
installation specific environment variables. It is a dict where the key is the name of the environment
variable, and the value can be a string or a callable. If it is a callable, it will be called
with one parameter (the spawner instance), and should return a string fairly quickly (no blocking
operations please!).
Note that the spawner class' interface is not guaranteed to be exactly the same across upgrades,
so if you are using the callable take care to verify it continues to work after upgrades!
""" """
).tag(config=True) ).tag(config=True)
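# A sketch of setting `environment` from jupyterhub_config.py, where `c` is the
# config object (variable names illustrative). Values may be plain strings or
# callables that receive the spawner instance and are evaluated at spawn time.
c.Spawner.environment = {
    'SCRATCH_DIR': '/scratch',
    'GREETING': lambda spawner: 'hello ' + spawner.user.name,
}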
cmd = Command(['jupyterhub-singleuser'], cmd = Command(['jupyterhub-singleuser'],
help="""The command used for starting notebooks.""" help="""
The command used for starting the single-user server.
Provide either a string or a list containing the path to the startup script command. Extra arguments,
other than this path, should be provided via `args`.
This is usually set if you want to start the single-user server in a different python
environment (with virtualenv/conda) than JupyterHub itself.
Some spawners allow shell-style expansion here, allowing you to use environment variables.
Most, including the default, do not. Consult the documentation for your spawner to verify!
"""
).tag(config=True) ).tag(config=True)
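# A sketch (jupyterhub_config.py; the path is illustrative): start the
# single-user server from a different Python environment than the Hub itself.
# Extra command-line flags belong in `args`, not in `cmd`.
c.Spawner.cmd = ['/opt/conda/envs/singleuser/bin/jupyterhub-singleuser']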
args = List(Unicode(), args = List(Unicode(),
help="""Extra arguments to be passed to the single-user server""" help="""
).tag(config=True) Extra arguments to be passed to the single-user server.
notebook_dir = Unicode('', Some spawners allow shell-style expansion here, allowing you to use environment variables here.
help="""The notebook directory for the single-user server Most, including the default, do not. Consult the documentation for your spawner to verify!
`~` will be expanded to the user's home directory
`%U` will be expanded to the user's username
""" """
).tag(config=True) ).tag(config=True)
default_url = Unicode('', notebook_dir = Unicode(
help="""The default URL for the single-user server. help="""
Path to the notebook directory for the single-user server.
Can be used in conjunction with --notebook-dir=/ to enable The user sees a file listing of this directory when the notebook interface is started. The
full filesystem traversal, while preserving user's homedir as current interface does not easily allow browsing beyond the subdirectories in this directory's
landing page for notebook tree.
`%U` will be expanded to the user's username `~` will be expanded to the home directory of the user, and {username} will be replaced
with the name of the user.
Note that this does *not* prevent users from accessing files outside of this path! They
can do so with many other means.
""" """
).tag(config=True) ).tag(config=True)
default_url = Unicode(
help="""
The URL the single-user server should start in.
`{username}` will be expanded to the user's username
Example uses:
- You can set `notebook_dir` to `/` and `default_url` to `/home/{username}` to allow people to
navigate the whole filesystem from their notebook, but still start in their home directory.
- You can set this to `/lab` to have JupyterLab start by default, rather than Jupyter Notebook.
"""
).tag(config=True)
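# A sketch (jupyterhub_config.py) of the first example above: let users browse
# the whole filesystem while landing in their own home directory.
c.Spawner.notebook_dir = '/'
c.Spawner.default_url = '/home/{username}'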
@validate('notebook_dir', 'default_url')
def _deprecate_percent_u(self, proposal):
print(proposal)
v = proposal['value']
if '%U' in v:
self.log.warning("%%U for username in %s is deprecated in JupyterHub 0.7, use {username}",
proposal['trait'].name,
)
v = v.replace('%U', '{username}')
self.log.warning("Converting %r to %r", proposal['value'], v)
return v
disable_user_config = Bool(False, disable_user_config = Bool(False,
help="""Disable per-user configuration of single-user servers. help="""
Disable per-user configuration of single-user servers.
This prevents any config in users' $HOME directories
from having an effect on their server. When starting the user's single-user server, any config file found in the user's $HOME directory
will be ignored.
Note: a user could circumvent this if the user modifies their Python environment, such as when
they have their own conda environments / virtualenvs / containers.
""" """
).tag(config=True) ).tag(config=True)
mem_limit = ByteSpecification(None,
help="""
Maximum number of bytes a single-user notebook server is allowed to use.
Allows the following suffixes:
- K -> Kilobytes
- M -> Megabytes
- G -> Gigabytes
- T -> Terabytes
If the single user server tries to allocate more memory than this,
it will fail. There is no guarantee that the single-user notebook server
will be able to allocate this much memory - only that it cannot
allocate more than this.
This needs to be supported by your spawner for it to work.
"""
).tag(config=True)
cpu_limit = Float(None,
allow_none=True,
help="""
Maximum number of cpu-cores a single-user notebook server is allowed to use.
If this value is set to 0.5, the server may use up to 50% of one CPU.
If this value is set to 2, the server may use up to 2 CPUs.
The single-user notebook server will never be scheduled by the kernel to
use more cpu-cores than this. There is no guarantee that it can
access this many cpu-cores.
This needs to be supported by your spawner for it to work.
"""
).tag(config=True)
mem_guarantee = ByteSpecification(None,
help="""
Minimum number of bytes a single-user notebook server is guaranteed to have available.
Allows the following suffixes:
- K -> Kilobytes
- M -> Megabytes
- G -> Gigabytes
- T -> Terabytes
This needs to be supported by your spawner for it to work.
"""
).tag(config=True)
cpu_guarantee = Float(None,
allow_none=True,
help="""
Minimum number of cpu-cores a single-user notebook server is guaranteed to have available.
If this value is set to 0.5, the server is guaranteed 50% of one CPU.
If this value is set to 2, the server is guaranteed up to 2 CPUs.
Note that this needs to be supported by your spawner for it to work.
"""
).tag(config=True)
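# A sketch (jupyterhub_config.py; values illustrative): byte sizes accept the
# K/M/G/T suffixes described above, CPU values are fractions of cores.
# All four settings only take effect if the spawner in use supports them.
c.Spawner.mem_limit = '1G'
c.Spawner.mem_guarantee = '512M'
c.Spawner.cpu_limit = 2.0
c.Spawner.cpu_guarantee = 0.5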
def __init__(self, **kwargs): def __init__(self, **kwargs):
super(Spawner, self).__init__(**kwargs) super(Spawner, self).__init__(**kwargs)
if self.user.state: if self.user.state:
self.load_state(self.user.state) self.load_state(self.user.state)
def load_state(self, state): def load_state(self, state):
"""load state from the database """Restore state of spawner from database.
This is the extensible part of state Called for each user's spawner after the hub process restarts.
Override in a subclass if there is state to load. `state` is a dict that'll contain the value returned by `get_state` of
Should call `super`. the spawner, or {} if the spawner hasn't persisted any state yet.
See Also Override in subclasses to restore any extra state that is needed to track
-------- the single-user server for that user. Subclasses should call super().
get_state, clear_state
""" """
pass pass
def get_state(self): def get_state(self):
"""store the state necessary for load_state """Save state of spawner into database.
A black box of extra state for custom spawners. A black box of extra state for custom spawners. The returned value of this is
Subclasses should call `super`. passed to `load_state`.
Subclasses should call `super().get_state()`, augment the state returned from
there, and return that state.
Returns Returns
------- -------
state: dict state: dict
a JSONable dict of state a JSONable dict of state
""" """
state = {} state = {}
return state return state
def clear_state(self): def clear_state(self):
"""clear any state that should be cleared when the process stops """Clear any state that should be cleared when the single-user server stops.
State that should be preserved across server instances should not be cleared. State that should be preserved across single-user server instances should not be cleared.
Subclasses should call super, to ensure that state is properly cleared. Subclasses should call super, to ensure that state is properly cleared.
""" """
self.api_token = '' self.api_token = ''
def get_env(self): def get_env(self):
"""Return the environment dict to use for the Spawner. """Return the environment dict to use for the Spawner.
This applies things like `env_keep`, anything defined in `Spawner.environment`, This applies things like `env_keep`, anything defined in `Spawner.environment`,
and adds the API token to the env. and adds the API token to the env.
When overriding in subclasses, subclasses must call `super().get_env()`, extend the
returned dict and return it.
Use this to access the env in Spawner.start to allow extension in subclasses. Use this to access the env in Spawner.start to allow extension in subclasses.
""" """
env = {} env = {}
if self.env: if self.env:
warnings.warn("Spawner.env is deprecated, found %s" % self.env, DeprecationWarning) warnings.warn("Spawner.env is deprecated, found %s" % self.env, DeprecationWarning)
env.update(self.env) env.update(self.env)
for key in self.env_keep: for key in self.env_keep:
if key in os.environ: if key in os.environ:
env[key] = os.environ[key] env[key] = os.environ[key]
@@ -238,27 +416,88 @@ class Spawner(LoggingConfigurable):
env[key] = value env[key] = value
env['JPY_API_TOKEN'] = self.api_token env['JPY_API_TOKEN'] = self.api_token
# Put in limit and guarantee info if they exist.
# Note that this is for use by the humans / notebook extensions in the
# single-user notebook server, and not for direct usage by the spawners
# themselves. Spawners should just use the traitlets directly.
if self.mem_limit:
env['MEM_LIMIT'] = str(self.mem_limit)
if self.mem_guarantee:
env['MEM_GUARANTEE'] = str(self.mem_guarantee)
if self.cpu_limit:
env['CPU_LIMIT'] = str(self.cpu_limit)
if self.cpu_guarantee:
env['CPU_GUARANTEE'] = str(self.cpu_guarantee)
return env return env
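# A sketch of extending get_env in a subclass as described above
# (EnvSpawner and EXTRA_SERVICE_URL are illustrative names).
from jupyterhub.spawner import LocalProcessSpawner

class EnvSpawner(LocalProcessSpawner):
    def get_env(self):
        env = super().get_env()
        env['EXTRA_SERVICE_URL'] = 'http://127.0.0.1:9999'
        return env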
def template_namespace(self):
"""Return the template namespace for format-string formatting.
Currently used on default_url and notebook_dir.
Subclasses may add items to the available namespace.
The default implementation includes::
{
'username': user.name,
'base_url': users_base_url,
}
Returns:
ns (dict): namespace for string formatting.
"""
d = {'username': self.user.name}
if self.user.server:
d['base_url'] = self.user.server.base_url
return d
def format_string(self, s):
"""Render a Python format string
Uses :meth:`Spawner.template_namespace` to populate format namespace.
Args:
s (str): Python format-string to be formatted.
Returns:
str: Formatted string, rendered
"""
return s.format(**self.template_namespace())
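# A sketch of template expansion (values illustrative). With a user named 'rhea':
#     spawner.format_string('/data/{username}')  ->  '/data/rhea'
# Subclasses can add keys of their own by extending template_namespace:
from jupyterhub.spawner import LocalProcessSpawner

class TemplatedSpawner(LocalProcessSpawner):
    def template_namespace(self):
        ns = super().template_namespace()
        ns['initial'] = self.user.name[:1]  # usable as e.g. '/data/{initial}/{username}'
        return ns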
def get_args(self): def get_args(self):
"""Return the arguments to be passed after self.cmd""" """Return the arguments to be passed after self.cmd
Doesn't expect shell expansion to happen.
"""
args = [ args = [
'--user=%s' % self.user.name, '--user="%s"' % self.user.name,
'--port=%i' % self.user.server.port, '--cookie-name="%s"' % self.user.server.cookie_name,
'--cookie-name=%s' % self.user.server.cookie_name, '--base-url="%s"' % self.user.server.base_url,
'--base-url=%s' % self.user.server.base_url, '--hub-host="%s"' % self.hub.host,
'--hub-host=%s' % self.hub.host, '--hub-prefix="%s"' % self.hub.server.base_url,
'--hub-prefix=%s' % self.hub.server.base_url, '--hub-api-url="%s"' % self.hub.api_url,
'--hub-api-url=%s' % self.hub.api_url,
] ]
if self.ip: if self.ip:
args.append('--ip=%s' % self.ip) args.append('--ip="%s"' % self.ip)
if self.port:
args.append('--port=%i' % self.port)
elif self.user.server.port:
self.log.warning("Setting port from user.server is deprecated as of JupyterHub 0.7.")
args.append('--port=%i' % self.user.server.port)
if self.notebook_dir: if self.notebook_dir:
self.notebook_dir = self.notebook_dir.replace("%U",self.user.name) notebook_dir = self.format_string(self.notebook_dir)
args.append('--notebook-dir=%s' % self.notebook_dir) args.append('--notebook-dir="%s"' % notebook_dir)
if self.default_url: if self.default_url:
self.default_url = self.default_url.replace("%U",self.user.name) default_url = self.format_string(self.default_url)
args.append('--NotebookApp.default_url=%s' % self.default_url) args.append('--NotebookApp.default_url="%s"' % default_url)
if self.debug: if self.debug:
args.append('--debug') args.append('--debug')
@@ -266,47 +505,74 @@ class Spawner(LoggingConfigurable):
args.append('--disable-user-config') args.append('--disable-user-config')
args.extend(self.args) args.extend(self.args)
return args return args
@gen.coroutine @gen.coroutine
def start(self): def start(self):
"""Start the single-user process""" """Start the single-user server
Returns:
(str, int): the (ip, port) where the Hub can connect to the server.
.. versionchanged:: 0.7
Return ip, port instead of setting on self.user.server directly.
"""
raise NotImplementedError("Override in subclass. Must be a Tornado gen.coroutine.") raise NotImplementedError("Override in subclass. Must be a Tornado gen.coroutine.")
@gen.coroutine @gen.coroutine
def stop(self, now=False): def stop(self, now=False):
"""Stop the single-user process""" """Stop the single-user server
If `now` is set to `False`, do not wait for the server to stop. Otherwise, wait for
the server to stop before returning.
Must be a Tornado coroutine.
"""
raise NotImplementedError("Override in subclass. Must be a Tornado gen.coroutine.") raise NotImplementedError("Override in subclass. Must be a Tornado gen.coroutine.")
@gen.coroutine @gen.coroutine
def poll(self): def poll(self):
"""Check if the single-user process is running """Check if the single-user process is running
return None if it is, an exit status (0 if unknown) if it is not. Returns:
None if single-user process is running.
Integer exit status (0 if unknown), if it is not running.
State transitions, behavior, and return response:
- If the Spawner has not been initialized (neither loaded state, nor called start),
it should behave as if it is not running (status=0).
- If the Spawner has not finished starting,
it should behave as if it is running (status=None).
Design assumptions about when `poll` may be called:
- On Hub launch: `poll` may be called before `start` when state is loaded on Hub launch.
`poll` should return exit status 0 (unknown) if the Spawner has not been initialized via
`load_state` or `start`.
- If `.start()` is async: `poll` may be called during any yielded portions of the `start`
process. `poll` should return None when `start` is yielded, indicating that the `start`
process has not yet completed.
""" """
raise NotImplementedError("Override in subclass. Must be a Tornado gen.coroutine.") raise NotImplementedError("Override in subclass. Must be a Tornado gen.coroutine.")
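# A sketch of a pid-based poll honoring the contract above (illustrative;
# LocalProcessSpawner below does essentially this against a real process).
import os
from tornado import gen
from traitlets import Integer
from jupyterhub.spawner import Spawner

class PidPollingSpawner(Spawner):
    pid = Integer(0)

    @gen.coroutine
    def poll(self):
        if not self.pid:
            return 0              # never started / state cleared: not running, status unknown
        try:
            os.kill(self.pid, 0)  # signal 0 only checks that the pid exists
        except ProcessLookupError:
            return 0              # process has exited
        return None               # still running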
def add_poll_callback(self, callback, *args, **kwargs): def add_poll_callback(self, callback, *args, **kwargs):
"""add a callback to fire when the subprocess stops """Add a callback to fire when the single-user server stops"""
as noticed by periodic poll_and_notify()
"""
if args or kwargs: if args or kwargs:
cb = callback cb = callback
callback = lambda : cb(*args, **kwargs) callback = lambda : cb(*args, **kwargs)
self._callbacks.append(callback) self._callbacks.append(callback)
def stop_polling(self): def stop_polling(self):
"""stop the periodic poll""" """Stop polling for single-user server's running state"""
if self._poll_callback: if self._poll_callback:
self._poll_callback.stop() self._poll_callback.stop()
self._poll_callback = None self._poll_callback = None
def start_polling(self): def start_polling(self):
"""Start polling periodically """Start polling periodically for single-user server's running state.
callbacks registered via `add_poll_callback` will fire Callbacks registered via `add_poll_callback` will fire if/when the server stops.
if/when the process stops.
Explicit termination via the stop method will not trigger the callbacks. Explicit termination via the stop method will not trigger the callbacks.
""" """
if self.poll_interval <= 0: if self.poll_interval <= 0:
@@ -314,9 +580,9 @@ class Spawner(LoggingConfigurable):
return return
else: else:
self.log.debug("Polling subprocess every %is", self.poll_interval) self.log.debug("Polling subprocess every %is", self.poll_interval)
self.stop_polling() self.stop_polling()
self._poll_callback = PeriodicCallback( self._poll_callback = PeriodicCallback(
self.poll_and_notify, self.poll_and_notify,
1e3 * self.poll_interval 1e3 * self.poll_interval
@@ -325,25 +591,25 @@ class Spawner(LoggingConfigurable):
@gen.coroutine @gen.coroutine
def poll_and_notify(self): def poll_and_notify(self):
"""Used as a callback to periodically poll the process, """Used as a callback to periodically poll the process and notify any watchers"""
and notify any watchers
"""
status = yield self.poll() status = yield self.poll()
if status is None: if status is None:
# still running, nothing to do here # still running, nothing to do here
return return
self.stop_polling() self.stop_polling()
add_callback = IOLoop.current().add_callback
for callback in self._callbacks: for callback in self._callbacks:
add_callback(callback) try:
yield gen.maybe_future(callback())
except Exception:
self.log.exception("Unhandled error in poll callback for %s", self)
return status
death_interval = Float(0.1) death_interval = Float(0.1)
@gen.coroutine @gen.coroutine
def wait_for_death(self, timeout=10): def wait_for_death(self, timeout=10):
"""wait for the process to die, up to timeout seconds""" """Wait for the single-user server to die, up to timeout seconds"""
loop = IOLoop.current()
for i in range(int(timeout / self.death_interval)): for i in range(int(timeout / self.death_interval)):
status = yield self.poll() status = yield self.poll()
if status is not None: if status is not None:
@@ -351,8 +617,12 @@ class Spawner(LoggingConfigurable):
else: else:
yield gen.sleep(self.death_interval) yield gen.sleep(self.death_interval)
def _try_setcwd(path): def _try_setcwd(path):
"""Try to set CWD, walking up and ultimately falling back to a temp dir""" """Try to set CWD to path, walking up until a valid directory is found.
If no valid directory is found, a temp directory is created and cwd is set to that.
"""
while path != '/': while path != '/':
try: try:
os.chdir(path) os.chdir(path)
@@ -368,15 +638,24 @@ def _try_setcwd(path):
def set_user_setuid(username): def set_user_setuid(username):
"""return a preexec_fn for setting the user (via setuid) of a spawned process""" """Return a preexec_fn for spawning a single-user server as a particular user.
Returned preexec_fn will set uid/gid, and attempt to chdir to the target user's
home directory.
"""
user = pwd.getpwnam(username) user = pwd.getpwnam(username)
uid = user.pw_uid uid = user.pw_uid
gid = user.pw_gid gid = user.pw_gid
home = user.pw_dir home = user.pw_dir
gids = [ g.gr_gid for g in grp.getgrall() if username in g.gr_mem ] gids = [ g.gr_gid for g in grp.getgrall() if username in g.gr_mem ]
def preexec(): def preexec():
# set the user and group """Set uid/gid of current process
Executed after fork but before exec by python.
Also try to chdir to the user's home directory.
"""
os.setgid(gid) os.setgid(gid)
try: try:
os.setgroups(gids) os.setgroups(gids)
@@ -386,53 +665,92 @@ def set_user_setuid(username):
# start in the user's home dir # start in the user's home dir
_try_setcwd(home) _try_setcwd(home)
return preexec return preexec
class LocalProcessSpawner(Spawner): class LocalProcessSpawner(Spawner):
"""A Spawner that just uses Popen to start local processes as users. """
A Spawner that uses `subprocess.Popen` to start single-user servers as local processes.
Requires users to exist on the local system.
Requires local UNIX users matching the authenticated users to exist.
Does not work on Windows.
This is the default spawner for JupyterHub. This is the default spawner for JupyterHub.
""" """
INTERRUPT_TIMEOUT = Integer(10, INTERRUPT_TIMEOUT = Integer(10,
help="Seconds to wait for process to halt after SIGINT before proceeding to SIGTERM" help="""
Seconds to wait for single-user server process to halt after SIGINT.
If the process has not exited cleanly after this many seconds, a SIGTERM is sent.
"""
).tag(config=True) ).tag(config=True)
TERM_TIMEOUT = Integer(5, TERM_TIMEOUT = Integer(5,
help="Seconds to wait for process to halt after SIGTERM before proceeding to SIGKILL" help="""
Seconds to wait for single-user server process to halt after SIGTERM.
If the process does not exit cleanly after this many seconds of SIGTERM, a SIGKILL is sent.
"""
).tag(config=True) ).tag(config=True)
KILL_TIMEOUT = Integer(5, KILL_TIMEOUT = Integer(5,
help="Seconds to wait for process to halt after SIGKILL before giving up" help="""
Seconds to wait for process to halt after SIGKILL before giving up.
If the process does not exit cleanly after this many seconds of SIGKILL, it becomes a zombie
process. The hub process will log a warning and then give up.
"""
).tag(config=True) ).tag(config=True)
proc = Instance(Popen, allow_none=True) proc = Instance(Popen,
pid = Integer(0) allow_none=True,
help="""
The process representing the single-user server process spawned for current user.
Is None if no process has been spawned yet.
""")
pid = Integer(0,
help="""
The process id (pid) of the single-user server process spawned for current user.
"""
)
def make_preexec_fn(self, name): def make_preexec_fn(self, name):
"""
Return a function that can be used to set the user id of the spawned process to user with name `name`
This function can be safely passed to `preexec_fn` of `Popen`
"""
return set_user_setuid(name) return set_user_setuid(name)
def load_state(self, state): def load_state(self, state):
"""load pid from state""" """Restore state about spawned single-user server after a hub restart.
Local processes only need the process id.
"""
super(LocalProcessSpawner, self).load_state(state) super(LocalProcessSpawner, self).load_state(state)
if 'pid' in state: if 'pid' in state:
self.pid = state['pid'] self.pid = state['pid']
def get_state(self): def get_state(self):
"""add pid to state""" """Save state that is needed to restore this spawner instance after a hub restore.
Local processes only need the process id.
"""
state = super(LocalProcessSpawner, self).get_state() state = super(LocalProcessSpawner, self).get_state()
if self.pid: if self.pid:
state['pid'] = self.pid state['pid'] = self.pid
return state return state
def clear_state(self): def clear_state(self):
"""clear pid state""" """Clear stored state about this spawner (pid)"""
super(LocalProcessSpawner, self).clear_state() super(LocalProcessSpawner, self).clear_state()
self.pid = 0 self.pid = 0
def user_env(self, env): def user_env(self, env):
"""Augment environment of spawned process with user specific env variables."""
env['USER'] = self.user.name env['USER'] = self.user.name
home = pwd.getpwnam(self.user.name).pw_dir home = pwd.getpwnam(self.user.name).pw_dir
shell = pwd.getpwnam(self.user.name).pw_shell shell = pwd.getpwnam(self.user.name).pw_shell
@@ -443,35 +761,57 @@ class LocalProcessSpawner(Spawner):
if shell: if shell:
env['SHELL'] = shell env['SHELL'] = shell
return env return env
def get_env(self): def get_env(self):
"""Add user environment variables""" """Get the complete set of environment variables to be set in the spawned process."""
env = super().get_env() env = super().get_env()
env = self.user_env(env) env = self.user_env(env)
return env return env
@gen.coroutine @gen.coroutine
def start(self): def start(self):
"""Start the process""" """Start the single-user server."""
if self.ip: self.port = random_port()
self.user.server.ip = self.ip
self.user.server.port = random_port()
cmd = [] cmd = []
env = self.get_env() env = self.get_env()
cmd.extend(self.cmd) cmd.extend(self.cmd)
cmd.extend(self.get_args()) cmd.extend(self.get_args())
self.log.info("Spawning %s", ' '.join(pipes.quote(s) for s in cmd)) self.log.info("Spawning %s", ' '.join(pipes.quote(s) for s in cmd))
self.proc = Popen(cmd, env=env, try:
preexec_fn=self.make_preexec_fn(self.user.name), self.proc = Popen(cmd, env=env,
start_new_session=True, # don't forward signals preexec_fn=self.make_preexec_fn(self.user.name),
) start_new_session=True, # don't forward signals
)
except PermissionError:
# use which to get abspath
script = shutil.which(cmd[0]) or cmd[0]
self.log.error("Permission denied trying to run %r. Does %s have access to this file?",
script, self.user.name,
)
raise
self.pid = self.proc.pid self.pid = self.proc.pid
if self.__class__ is not LocalProcessSpawner:
# subclasses may not pass through return value of super().start,
# relying on deprecated 0.6 way of setting ip, port,
# so keep a redundant copy here for now.
# A deprecation warning will be shown if the subclass
# does not return ip, port.
if self.ip:
self.user.server.ip = self.ip
self.user.server.port = self.port
return (self.ip or '127.0.0.1', self.port)
@gen.coroutine @gen.coroutine
def poll(self): def poll(self):
"""Poll the process""" """Poll the spawned process to see if it is still running.
If the process is still running, we return None. If it is not running,
we return the exit code of the process if we have access to it, or 0 otherwise.
"""
# if we started the process, poll with Popen # if we started the process, poll with Popen
if self.proc is not None: if self.proc is not None:
status = self.proc.poll() status = self.proc.poll()
@@ -479,15 +819,14 @@ class LocalProcessSpawner(Spawner):
# clear state if the process is done # clear state if the process is done
self.clear_state() self.clear_state()
return status return status
# if we resumed from stored state, # if we resumed from stored state,
# we don't have the Popen handle anymore, so rely on self.pid # we don't have the Popen handle anymore, so rely on self.pid
if not self.pid: if not self.pid:
# no pid, not running # no pid, not running
self.clear_state() self.clear_state()
return 0 return 0
# send signal 0 to check if PID exists # send signal 0 to check if PID exists
# this doesn't work on Windows, but that's okay because we don't support Windows. # this doesn't work on Windows, but that's okay because we don't support Windows.
alive = yield self._signal(0) alive = yield self._signal(0)
@@ -496,10 +835,15 @@ class LocalProcessSpawner(Spawner):
return 0 return 0
else: else:
return None return None
@gen.coroutine @gen.coroutine
def _signal(self, sig): def _signal(self, sig):
"""simple implementation of signal, which we can use when we are using setuid (we are root)""" """Send given signal to a single-user server's process.
Returns True if the process still exists, False otherwise.
The hub process is assumed to have enough privileges to do this (e.g. root).
"""
try: try:
os.kill(self.pid, sig) os.kill(self.pid, sig)
except OSError as e: except OSError as e:
@@ -508,12 +852,13 @@ class LocalProcessSpawner(Spawner):
else: else:
raise raise
return True # process exists return True # process exists
@gen.coroutine @gen.coroutine
def stop(self, now=False): def stop(self, now=False):
"""stop the subprocess """Stop the single-user server process for the current user.
if `now`, skip waiting for clean shutdown If `now` is set to True, do not wait for the process to die.
Otherwise, it'll wait.
""" """
if not now: if not now:
status = yield self.poll() status = yield self.poll()
@@ -522,7 +867,7 @@ class LocalProcessSpawner(Spawner):
self.log.debug("Interrupting %i", self.pid) self.log.debug("Interrupting %i", self.pid)
yield self._signal(signal.SIGINT) yield self._signal(signal.SIGINT)
yield self.wait_for_death(self.INTERRUPT_TIMEOUT) yield self.wait_for_death(self.INTERRUPT_TIMEOUT)
# clean shutdown failed, use TERM # clean shutdown failed, use TERM
status = yield self.poll() status = yield self.poll()
if status is not None: if status is not None:
@@ -530,7 +875,7 @@ class LocalProcessSpawner(Spawner):
self.log.debug("Terminating %i", self.pid) self.log.debug("Terminating %i", self.pid)
yield self._signal(signal.SIGTERM) yield self._signal(signal.SIGTERM)
yield self.wait_for_death(self.TERM_TIMEOUT) yield self.wait_for_death(self.TERM_TIMEOUT)
# TERM failed, use KILL # TERM failed, use KILL
status = yield self.poll() status = yield self.poll()
if status is not None: if status is not None:
@@ -543,4 +888,3 @@ class LocalProcessSpawner(Spawner):
if status is None: if status is None:
# it all failed, zombie process # it all failed, zombie process
self.log.warning("Process %i never died", self.pid) self.log.warning("Process %i never died", self.pid)


@@ -5,14 +5,19 @@
import logging import logging
from getpass import getuser from getpass import getuser
from subprocess import TimeoutExpired
from pytest import fixture import time
from unittest import mock
from pytest import fixture, yield_fixture, raises
from tornado import ioloop from tornado import ioloop
from .. import orm from .. import orm
from ..utils import random_port
from .mocking import MockHub from .mocking import MockHub
from .test_services import mockservice_cmd
import jupyterhub.services.service
# global db session object # global db session object
_db = None _db = None
@@ -53,3 +58,58 @@ def app(request):
app.stop() app.stop()
request.addfinalizer(fin) request.addfinalizer(fin)
return app return app
# mock services for testing.
# Shorter intervals, etc.
class MockServiceSpawner(jupyterhub.services.service._ServiceSpawner):
poll_interval = 1
def _mockservice(request, app, url=False):
name = 'mock-service'
spec = {
'name': name,
'command': mockservice_cmd,
'admin': True,
}
if url:
spec['url'] = 'http://127.0.0.1:%i' % random_port(),
with mock.patch.object(jupyterhub.services.service, '_ServiceSpawner', MockServiceSpawner):
app.services = [{
'name': name,
'command': mockservice_cmd,
'url': 'http://127.0.0.1:%i' % random_port(),
'admin': True,
}]
app.init_services()
app.io_loop.add_callback(app.proxy.add_all_services, app._service_map)
assert name in app._service_map
service = app._service_map[name]
app.io_loop.add_callback(service.start)
request.addfinalizer(service.stop)
for i in range(20):
if not getattr(service, 'proc', False):
time.sleep(0.2)
# ensure process finishes starting
with raises(TimeoutExpired):
service.proc.wait(1)
return service
@yield_fixture
def mockservice(request, app):
yield _mockservice(request, app, url=False)
@yield_fixture
def mockservice_url(request, app):
yield _mockservice(request, app, url=True)
@yield_fixture
def no_patience(app):
"""Set slow-spawning timeouts to zero"""
with mock.patch.dict(app.tornado_application.settings,
{'slow_spawn_timeout': 0,
'slow_stop_timeout': 0}):
yield


@@ -19,7 +19,8 @@ from ..app import JupyterHub
from ..auth import PAMAuthenticator from ..auth import PAMAuthenticator
from .. import orm from .. import orm
from ..spawner import LocalProcessSpawner from ..spawner import LocalProcessSpawner
from ..utils import url_path_join from ..singleuser import SingleUserNotebookApp
from ..utils import random_port
from pamela import PAMError from pamela import PAMError
@@ -36,7 +37,11 @@ def mock_open_session(username, service):
class MockSpawner(LocalProcessSpawner): class MockSpawner(LocalProcessSpawner):
"""Base mock spawner
- disables user-switching that we need root permissions to do
- spawns jupyterhub.tests.mocksu instead of a full single-user server
"""
def make_preexec_fn(self, *a, **kw): def make_preexec_fn(self, *a, **kw):
# skip the setuid stuff # skip the setuid stuff
return return
@@ -46,6 +51,7 @@ class MockSpawner(LocalProcessSpawner):
def user_env(self, env): def user_env(self, env):
return env return env
@default('cmd') @default('cmd')
def _cmd_default(self): def _cmd_default(self):
return [sys.executable, '-m', 'jupyterhub.tests.mocksu'] return [sys.executable, '-m', 'jupyterhub.tests.mocksu']
@@ -56,8 +62,9 @@ class SlowSpawner(MockSpawner):
@gen.coroutine @gen.coroutine
def start(self): def start(self):
yield super().start() (ip, port) = yield super().start()
yield gen.sleep(2) yield gen.sleep(2)
return ip, port
@gen.coroutine @gen.coroutine
def stop(self): def stop(self):
@@ -78,6 +85,7 @@ class NeverSpawner(MockSpawner):
class FormSpawner(MockSpawner): class FormSpawner(MockSpawner):
"""A spawner that has an options form defined"""
options_form = "IMAFORM" options_form = "IMAFORM"
def options_from_form(self, form_data): def options_from_form(self, form_data):
@@ -109,14 +117,16 @@ class MockPAMAuthenticator(PAMAuthenticator):
): ):
return super(MockPAMAuthenticator, self).authenticate(*args, **kwargs) return super(MockPAMAuthenticator, self).authenticate(*args, **kwargs)
class MockHub(JupyterHub): class MockHub(JupyterHub):
"""Hub with various mock bits""" """Hub with various mock bits"""
db_file = None db_file = None
confirm_no_ssl = True
last_activity_interval = 2 last_activity_interval = 2
base_url = '/@/space%20word/'
@default('subdomain_host') @default('subdomain_host')
def _subdomain_host_default(self): def _subdomain_host_default(self):
return os.environ.get('JUPYTERHUB_TEST_SUBDOMAIN_HOST', '') return os.environ.get('JUPYTERHUB_TEST_SUBDOMAIN_HOST', '')
@@ -176,6 +186,7 @@ class MockHub(JupyterHub):
self.db_file.close() self.db_file.close()
def login_user(self, name): def login_user(self, name):
"""Login a user by name, returning her cookies."""
base_url = public_url(self) base_url = public_url(self)
r = requests.post(base_url + 'hub/login', r = requests.post(base_url + 'hub/login',
data={ data={
@@ -190,19 +201,75 @@ class MockHub(JupyterHub):
def public_host(app): def public_host(app):
"""Return the public *host* (no URL prefix) of the given JupyterHub instance."""
if app.subdomain_host: if app.subdomain_host:
return app.subdomain_host return app.subdomain_host
else: else:
return app.proxy.public_server.host return app.proxy.public_server.host
def public_url(app): def public_url(app, user_or_service=None):
return public_host(app) + app.proxy.public_server.base_url """Return the full, public base URL (including prefix) of the given JupyterHub instance."""
if user_or_service:
if app.subdomain_host:
def user_url(user, app): host = user_or_service.host
if app.subdomain_host: else:
host = user.host host = public_host(app)
return host + user_or_service.server.base_url
else: else:
host = public_host(app) return public_host(app) + app.proxy.public_server.base_url
return host + user.server.base_url
# single-user-server mocking:
class MockSingleUserServer(SingleUserNotebookApp):
"""Mock-out problematic parts of single-user server when run in a thread
Currently:
- disable signal handler
"""
def init_signal(self):
pass
class TestSingleUserSpawner(MockSpawner):
"""Spawner that starts a MockSingleUserServer in a thread."""
_thread = None
@gen.coroutine
def start(self):
self.user.server.port = random_port()
env = self.get_env()
args = self.get_args()
evt = threading.Event()
print(args, env)
def _run():
io_loop = IOLoop()
io_loop.make_current()
io_loop.add_callback(lambda : evt.set())
with mock.patch.dict(os.environ, env):
app = self._app = MockSingleUserServer()
app.initialize(args)
app.start()
self._thread = threading.Thread(target=_run)
self._thread.start()
ready = evt.wait(timeout=3)
assert ready
@gen.coroutine
def stop(self):
self._app.stop()
self._thread.join()
@gen.coroutine
def poll(self):
if self._thread is None:
return 0
if self._thread.is_alive():
return None
else:
return 0


@@ -0,0 +1,67 @@
"""Mock service for testing
basic HTTP Server that echos URLs back,
and allow retrieval of sys.argv.
"""
import argparse
import json
import os
import sys
from urllib.parse import urlparse
import requests
from tornado import web, httpserver, ioloop
from jupyterhub.services.auth import HubAuthenticated
class EchoHandler(web.RequestHandler):
def get(self):
self.write(self.request.path)
class EnvHandler(web.RequestHandler):
def get(self):
self.set_header('Content-Type', 'application/json')
self.write(json.dumps(dict(os.environ)))
class APIHandler(web.RequestHandler):
def get(self, path):
api_token = os.environ['JUPYTERHUB_API_TOKEN']
api_url = os.environ['JUPYTERHUB_API_URL']
r = requests.get(api_url + path, headers={
'Authorization': 'token %s' % api_token
})
r.raise_for_status()
self.set_header('Content-Type', 'application/json')
self.write(r.text)
class WhoAmIHandler(HubAuthenticated, web.RequestHandler):
@web.authenticated
def get(self):
self.write(self.get_current_user())
def main():
if os.environ['JUPYTERHUB_SERVICE_URL']:
url = urlparse(os.environ['JUPYTERHUB_SERVICE_URL'])
app = web.Application([
(r'.*/env', EnvHandler),
(r'.*/api/(.*)', APIHandler),
(r'.*/whoami/?', WhoAmIHandler),
(r'.*', EchoHandler),
])
server = httpserver.HTTPServer(app)
server.listen(url.port, url.hostname)
try:
ioloop.IOLoop.instance().start()
except KeyboardInterrupt:
print('\nInterrupted')
if __name__ == '__main__':
from tornado.options import parse_command_line
parse_command_line()
main()

Binary file not shown.


@@ -3,17 +3,21 @@
import json import json
import time import time
from queue import Queue from queue import Queue
import sys
from unittest import mock
from urllib.parse import urlparse, quote from urllib.parse import urlparse, quote
from pytest import mark, yield_fixture
import requests import requests
from tornado import gen from tornado import gen
import jupyterhub
from .. import orm from .. import orm
from ..user import User from ..user import User
from ..utils import url_path_join as ujoin from ..utils import url_path_join as ujoin
from . import mocking from . import mocking
from .mocking import public_url, user_url from .mocking import public_host, public_url
def check_db_locks(func): def check_db_locks(func):
@@ -21,14 +25,13 @@ def check_db_locks(func):
Decorator for test functions that verifies no locks are held on the Decorator for test functions that verifies no locks are held on the
application's database upon exit by creating and dropping a dummy table. application's database upon exit by creating and dropping a dummy table.
Relies on an instance of JupyterhubApp being the first argument to the Relies on an instance of JupyterHubApp being the first argument to the
decorated function. decorated function.
""" """
def new_func(*args, **kwargs): def new_func(app, *args, **kwargs):
retval = func(*args, **kwargs) retval = func(app, *args, **kwargs)
app = args[0]
temp_session = app.session_factory() temp_session = app.session_factory()
temp_session.execute('CREATE TABLE dummy (foo INT)') temp_session.execute('CREATE TABLE dummy (foo INT)')
temp_session.execute('DROP TABLE dummy') temp_session.execute('DROP TABLE dummy')
@@ -43,8 +46,13 @@ def find_user(db, name):
return db.query(orm.User).filter(orm.User.name==name).first() return db.query(orm.User).filter(orm.User.name==name).first()
def add_user(db, app=None, **kwargs): def add_user(db, app=None, **kwargs):
orm_user = orm.User(**kwargs) orm_user = find_user(db, name=kwargs.get('name'))
db.add(orm_user) if orm_user is None:
orm_user = orm.User(**kwargs)
db.add(orm_user)
else:
for attr, value in kwargs.items():
setattr(orm_user, attr, value)
db.commit() db.commit()
if app: if app:
user = app.users[orm_user.id] = User(orm_user, app.tornado_settings) user = app.users[orm_user.id] = User(orm_user, app.tornado_settings)
@@ -105,7 +113,7 @@ def test_auth_api(app):
def test_referer_check(app, io_loop): def test_referer_check(app, io_loop):
url = ujoin(public_url(app), app.hub.server.base_url) url = ujoin(public_host(app), app.hub.server.base_url)
host = urlparse(url).netloc host = urlparse(url).netloc
user = find_user(app.db, 'admin') user = find_user(app.db, 'admin')
if user is None: if user is None:
@@ -147,7 +155,9 @@ def test_referer_check(app, io_loop):
) )
assert r.status_code == 200 assert r.status_code == 200
# user API tests
@mark.user
def test_get_users(app): def test_get_users(app):
db = app.db db = app.db
r = api_request(app, 'users') r = api_request(app, 'users')
@@ -159,12 +169,14 @@ def test_get_users(app):
assert users == [ assert users == [
{ {
'name': 'admin', 'name': 'admin',
'groups': [],
'admin': True, 'admin': True,
'server': None, 'server': None,
'pending': None, 'pending': None,
}, },
{ {
'name': 'user', 'name': 'user',
'groups': [],
'admin': False, 'admin': False,
'server': None, 'server': None,
'pending': None, 'pending': None,
@@ -176,6 +188,8 @@ def test_get_users(app):
) )
assert r.status_code == 403 assert r.status_code == 403
@mark.user
def test_add_user(app): def test_add_user(app):
db = app.db db = app.db
name = 'newuser' name = 'newuser'
@@ -187,6 +201,7 @@ def test_add_user(app):
assert not user.admin assert not user.admin
@mark.user
def test_get_user(app): def test_get_user(app):
name = 'user' name = 'user'
r = api_request(app, 'users', name) r = api_request(app, 'users', name)
@@ -195,12 +210,14 @@ def test_get_user(app):
user.pop('last_activity') user.pop('last_activity')
assert user == { assert user == {
'name': name, 'name': name,
'groups': [],
'admin': False, 'admin': False,
'server': None, 'server': None,
'pending': None, 'pending': None,
} }
@mark.user
def test_add_multi_user_bad(app): def test_add_multi_user_bad(app):
r = api_request(app, 'users', method='post') r = api_request(app, 'users', method='post')
assert r.status_code == 400 assert r.status_code == 400
@@ -210,6 +227,7 @@ def test_add_multi_user_bad(app):
assert r.status_code == 400 assert r.status_code == 400
@mark.user
def test_add_multi_user_invalid(app): def test_add_multi_user_invalid(app):
app.authenticator.username_pattern = r'w.*' app.authenticator.username_pattern = r'w.*'
r = api_request(app, 'users', method='post', r = api_request(app, 'users', method='post',
@@ -220,6 +238,7 @@ def test_add_multi_user_invalid(app):
assert r.json()['message'] == 'Invalid usernames: andrew, tara' assert r.json()['message'] == 'Invalid usernames: andrew, tara'
@mark.user
def test_add_multi_user(app): def test_add_multi_user(app):
db = app.db db = app.db
names = ['a', 'b'] names = ['a', 'b']
@@ -255,6 +274,7 @@ def test_add_multi_user(app):
assert r_names == ['ab'] assert r_names == ['ab']
@mark.user
def test_add_multi_user_admin(app): def test_add_multi_user_admin(app):
db = app.db db = app.db
names = ['c', 'd'] names = ['c', 'd']
@@ -273,6 +293,7 @@ def test_add_multi_user_admin(app):
assert user.admin assert user.admin
@mark.user
def test_add_user_bad(app): def test_add_user_bad(app):
db = app.db db = app.db
name = 'dne_newuser' name = 'dne_newuser'
@@ -281,6 +302,8 @@ def test_add_user_bad(app):
user = find_user(db, name) user = find_user(db, name)
assert user is None assert user is None
@mark.user
def test_add_admin(app): def test_add_admin(app):
db = app.db db = app.db
name = 'newadmin' name = 'newadmin'
@@ -293,6 +316,8 @@ def test_add_admin(app):
assert user.name == name assert user.name == name
assert user.admin assert user.admin
@mark.user
def test_delete_user(app): def test_delete_user(app):
db = app.db db = app.db
mal = add_user(db, name='mal') mal = add_user(db, name='mal')
@@ -300,6 +325,7 @@ def test_delete_user(app):
assert r.status_code == 204 assert r.status_code == 204
@mark.user
def test_make_admin(app): def test_make_admin(app):
db = app.db db = app.db
name = 'admin2' name = 'admin2'
@@ -319,6 +345,7 @@ def test_make_admin(app):
assert user.name == name assert user.name == name
assert user.admin assert user.admin
def get_app_user(app, name): def get_app_user(app, name):
"""Get the User object from the main thread """Get the User object from the main thread
@@ -333,6 +360,7 @@ def get_app_user(app, name):
user_id = q.get(timeout=2) user_id = q.get(timeout=2)
return app.users[user_id] return app.users[user_id]
def test_spawn(app, io_loop): def test_spawn(app, io_loop):
db = app.db db = app.db
name = 'wash' name = 'wash'
@@ -341,6 +369,7 @@ def test_spawn(app, io_loop):
's': ['value'], 's': ['value'],
'i': 5, 'i': 5,
} }
before_servers = sorted(db.query(orm.Server), key=lambda s: s.url)
r = api_request(app, 'users', name, 'server', method='post', data=json.dumps(options)) r = api_request(app, 'users', name, 'server', method='post', data=json.dumps(options))
assert r.status_code == 201 assert r.status_code == 201
assert 'pid' in user.state assert 'pid' in user.state
@@ -351,8 +380,8 @@ def test_spawn(app, io_loop):
status = io_loop.run_sync(app_user.spawner.poll) status = io_loop.run_sync(app_user.spawner.poll)
assert status is None assert status is None
assert user.server.base_url == '/user/%s' % name assert user.server.base_url == ujoin(app.base_url, 'user/%s' % name)
url = user_url(user, app) url = public_url(app, user)
print(url) print(url)
r = requests.get(url) r = requests.get(url)
assert r.status_code == 200 assert r.status_code == 200
@@ -361,10 +390,10 @@ def test_spawn(app, io_loop):
r = requests.get(ujoin(url, 'args')) r = requests.get(ujoin(url, 'args'))
assert r.status_code == 200 assert r.status_code == 200
argv = r.json() argv = r.json()
for expected in ['--user=%s' % name, '--base-url=%s' % user.server.base_url]: for expected in ['--user="%s"' % name, '--base-url="%s"' % user.server.base_url]:
assert expected in argv assert expected in argv
if app.subdomain_host: if app.subdomain_host:
assert '--hub-host=%s' % app.subdomain_host in argv assert '--hub-host="%s"' % app.subdomain_host in argv
r = api_request(app, 'users', name, 'server', method='delete') r = api_request(app, 'users', name, 'server', method='delete')
assert r.status_code == 204 assert r.status_code == 204
@@ -373,17 +402,22 @@ def test_spawn(app, io_loop):
status = io_loop.run_sync(app_user.spawner.poll) status = io_loop.run_sync(app_user.spawner.poll)
assert status == 0 assert status == 0
def test_slow_spawn(app, io_loop): # check that we cleaned up after ourselves
# app.tornado_application.settings['spawner_class'] = mocking.SlowSpawner assert user.server is None
app.tornado_settings['spawner_class'] = mocking.SlowSpawner after_servers = sorted(db.query(orm.Server), key=lambda s: s.url)
app.tornado_application.settings['slow_spawn_timeout'] = 0 assert before_servers == after_servers
app.tornado_application.settings['slow_stop_timeout'] = 0 tokens = list(db.query(orm.APIToken).filter(orm.APIToken.user_id==user.id))
assert tokens == []
def test_slow_spawn(app, io_loop, no_patience, request):
patch = mock.patch.dict(app.tornado_settings, {'spawner_class': mocking.SlowSpawner})
patch.start()
request.addfinalizer(patch.stop)
db = app.db db = app.db
name = 'zoe' name = 'zoe'
user = add_user(db, app=app, name=name) user = add_user(db, app=app, name=name)
r = api_request(app, 'users', name, 'server', method='post') r = api_request(app, 'users', name, 'server', method='post')
app.tornado_settings['spawner_class'] = mocking.MockSpawner
r.raise_for_status() r.raise_for_status()
assert r.status_code == 202 assert r.status_code == 202
app_user = get_app_user(app, name) app_user = get_app_user(app, name)
@@ -425,15 +459,15 @@ def test_slow_spawn(app, io_loop):
assert r.status_code == 400 assert r.status_code == 400
def test_never_spawn(app, io_loop): def test_never_spawn(app, io_loop, no_patience, request):
app.tornado_settings['spawner_class'] = mocking.NeverSpawner patch = mock.patch.dict(app.tornado_settings, {'spawner_class': mocking.NeverSpawner})
app.tornado_application.settings['slow_spawn_timeout'] = 0 patch.start()
request.addfinalizer(patch.stop)
db = app.db db = app.db
name = 'badger' name = 'badger'
user = add_user(db, app=app, name=name) user = add_user(db, app=app, name=name)
r = api_request(app, 'users', name, 'server', method='post') r = api_request(app, 'users', name, 'server', method='post')
app.tornado_settings['spawner_class'] = mocking.MockSpawner
app_user = get_app_user(app, name) app_user = get_app_user(app, name)
assert app_user.spawner is not None assert app_user.spawner is not None
assert app_user.spawn_pending assert app_user.spawn_pending
@@ -482,6 +516,7 @@ def test_cookie(app):
reply = r.json() reply = r.json()
assert reply['name'] == name assert reply['name'] == name
def test_token(app): def test_token(app):
name = 'book' name = 'book'
user = add_user(app.db, app=app, name=name) user = add_user(app.db, app=app, name=name)
@@ -493,6 +528,7 @@ def test_token(app):
r = api_request(app, 'authorizations/token', 'notauthorized') r = api_request(app, 'authorizations/token', 'notauthorized')
assert r.status_code == 404 assert r.status_code == 404
def test_get_token(app): def test_get_token(app):
name = 'user' name = 'user'
user = add_user(app.db, app=app, name=name) user = add_user(app.db, app=app, name=name)
@@ -505,6 +541,7 @@ def test_get_token(app):
token = json.loads(data) token = json.loads(data)
assert not token['Authentication'] is None assert not token['Authentication'] is None
def test_bad_get_token(app): def test_bad_get_token(app):
name = 'user' name = 'user'
password = 'fake' password = 'fake'
@@ -515,6 +552,221 @@ def test_bad_get_token(app):
})) }))
assert r.status_code == 403 assert r.status_code == 403
# group API tests
@mark.group
def test_groups_list(app):
r = api_request(app, 'groups')
r.raise_for_status()
reply = r.json()
assert reply == []
# create a group
group = orm.Group(name='alphaflight')
app.db.add(group)
app.db.commit()
r = api_request(app, 'groups')
r.raise_for_status()
reply = r.json()
assert reply == [{
'name': 'alphaflight',
'users': []
}]
@mark.group
def test_group_get(app):
group = orm.Group.find(app.db, name='alphaflight')
user = add_user(app.db, app=app, name='sasquatch')
group.users.append(user)
app.db.commit()
r = api_request(app, 'groups/runaways')
assert r.status_code == 404
r = api_request(app, 'groups/alphaflight')
r.raise_for_status()
reply = r.json()
assert reply == {
'name': 'alphaflight',
'users': ['sasquatch']
}
@mark.group
def test_group_create_delete(app):
db = app.db
r = api_request(app, 'groups/runaways', method='delete')
assert r.status_code == 404
r = api_request(app, 'groups/new', method='post', data=json.dumps({
'users': ['doesntexist']
}))
assert r.status_code == 400
assert orm.Group.find(db, name='new') is None
r = api_request(app, 'groups/omegaflight', method='post', data=json.dumps({
'users': ['sasquatch']
}))
r.raise_for_status()
omegaflight = orm.Group.find(db, name='omegaflight')
sasquatch = find_user(db, name='sasquatch')
assert omegaflight in sasquatch.groups
assert sasquatch in omegaflight.users
# create duplicate raises 400
r = api_request(app, 'groups/omegaflight', method='post')
assert r.status_code == 400
r = api_request(app, 'groups/omegaflight', method='delete')
assert r.status_code == 204
assert omegaflight not in sasquatch.groups
assert orm.Group.find(db, name='omegaflight') is None
# delete nonexistant gives 404
r = api_request(app, 'groups/omegaflight', method='delete')
assert r.status_code == 404
@mark.group
def test_group_add_users(app):
db = app.db
# must specify users
r = api_request(app, 'groups/alphaflight/users', method='post', data='{}')
assert r.status_code == 400
names = ['aurora', 'guardian', 'northstar', 'sasquatch', 'shaman', 'snowbird']
users = [ find_user(db, name=name) or add_user(db, app=app, name=name) for name in names ]
r = api_request(app, 'groups/alphaflight/users', method='post', data=json.dumps({
'users': names,
}))
r.raise_for_status()
for user in users:
print(user.name)
assert [ g.name for g in user.groups ] == ['alphaflight']
group = orm.Group.find(db, name='alphaflight')
assert sorted([ u.name for u in group.users ]) == sorted(names)
@mark.group
def test_group_delete_users(app):
db = app.db
# must specify users
r = api_request(app, 'groups/alphaflight/users', method='delete', data='{}')
assert r.status_code == 400
names = ['aurora', 'guardian', 'northstar', 'sasquatch', 'shaman', 'snowbird']
users = [ find_user(db, name=name) for name in names ]
r = api_request(app, 'groups/alphaflight/users', method='delete', data=json.dumps({
'users': names[:2],
}))
r.raise_for_status()
for user in users[:2]:
assert user.groups == []
for user in users[2:]:
assert [ g.name for g in user.groups ] == ['alphaflight']
group = orm.Group.find(db, name='alphaflight')
assert sorted([ u.name for u in group.users ]) == sorted(names[2:])
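
The group tests above exercise the REST endpoints added in 0.7 for managing groups. For reference, a minimal sketch of driving the same endpoints from an external client; the Hub URL and admin token are placeholders, and users added to a group must already exist:

import json
import requests

hub_api = 'http://127.0.0.1:8081/hub/api'         # placeholder Hub API URL
headers = {'Authorization': 'token ADMIN_TOKEN'}  # placeholder admin API token

# create a group, add an existing user, then inspect it
requests.post(hub_api + '/groups/alphaflight', headers=headers).raise_for_status()
requests.post(hub_api + '/groups/alphaflight/users', headers=headers,
              data=json.dumps({'users': ['sasquatch']})).raise_for_status()
print(requests.get(hub_api + '/groups/alphaflight', headers=headers).json())
# expected shape: {'name': 'alphaflight', 'users': ['sasquatch']}
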
# service API
@mark.services
def test_get_services(app, mockservice):
db = app.db
r = api_request(app, 'services')
r.raise_for_status()
assert r.status_code == 200
services = r.json()
assert services == {
'mock-service': {
'name': 'mock-service',
'admin': True,
'command': mockservice.command,
'pid': mockservice.proc.pid,
'prefix': mockservice.server.base_url,
'url': mockservice.url,
}
}
r = api_request(app, 'services',
headers=auth_header(db, 'user'),
)
assert r.status_code == 403
@mark.services
def test_get_service(app, mockservice):
db = app.db
r = api_request(app, 'services/%s' % mockservice.name)
r.raise_for_status()
assert r.status_code == 200
service = r.json()
assert service == {
'name': 'mock-service',
'admin': True,
'command': mockservice.command,
'pid': mockservice.proc.pid,
'prefix': mockservice.server.base_url,
'url': mockservice.url,
}
r = api_request(app, 'services/%s' % mockservice.name,
headers={
'Authorization': 'token %s' % mockservice.api_token
}
)
r.raise_for_status()
r = api_request(app, 'services/%s' % mockservice.name,
headers=auth_header(db, 'user'),
)
assert r.status_code == 403
def test_root_api(app):
base_url = app.hub.server.url
url = ujoin(base_url, 'api')
r = requests.get(url)
r.raise_for_status()
expected = {
'version': jupyterhub.__version__
}
assert r.json() == expected
def test_info(app):
r = api_request(app, 'info')
r.raise_for_status()
data = r.json()
assert data['version'] == jupyterhub.__version__
assert sorted(data) == [
'authenticator',
'python',
'spawner',
'sys_executable',
'version',
]
assert data['python'] == sys.version
assert data['sys_executable'] == sys.executable
assert data['authenticator'] == {
'class': 'jupyterhub.tests.mocking.MockPAMAuthenticator',
'version': jupyterhub.__version__,
}
assert data['spawner'] == {
'class': 'jupyterhub.tests.mocking.MockSpawner',
'version': jupyterhub.__version__,
}
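
test_root_api and test_info cover two endpoints new in 0.7: an unauthenticated GET /api that reports the Hub version, and an admin-only GET /api/info with version, spawner, and authenticator details. A short sketch of querying them directly; the URL and token are placeholders:

import requests

hub_api = 'http://127.0.0.1:8081/hub/api'         # placeholder Hub API URL
headers = {'Authorization': 'token ADMIN_TOKEN'}  # placeholder admin API token

print(requests.get(hub_api).json())                              # {'version': ...}
print(requests.get(hub_api + '/info', headers=headers).json())   # version, python, spawner, authenticator, sys_executable
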
# general API tests
def test_options(app):
r = api_request(app, 'users', method='options')
r.raise_for_status()
@@ -526,6 +778,7 @@ def test_bad_json_body(app):
assert r.status_code == 400
# shutdown must be last
def test_shutdown(app):
r = api_request(app, 'shutdown', method='post', data=json.dumps({
'servers': True,
@@ -539,3 +792,4 @@ def test_shutdown(app):
else:
break
assert not app.io_loop._running


@@ -55,8 +55,7 @@ def test_generate_config():
assert 'Spawner.cmd' in cfg_text
assert 'Authenticator.whitelist' in cfg_text
-def test_init_tokens():
+def test_init_tokens(io_loop):
with TemporaryDirectory() as td:
db_file = os.path.join(td, 'jupyterhub.sqlite')
tokens = {
@@ -64,8 +63,8 @@ def test_init_tokens():
'also-super-secret': 'gordon',
'boagasdfasdf': 'chell',
}
-app = MockHub(db_file=db_file, api_tokens=tokens)
+app = MockHub(db_url=db_file, api_tokens=tokens)
-app.initialize([])
+io_loop.run_sync(lambda : app.initialize([]))
db = app.db
for token, username in tokens.items():
api_token = orm.APIToken.find(db, token)
@@ -74,8 +73,8 @@ def test_init_tokens():
assert user.name == username
# simulate second startup, reloading same tokens:
-app = MockHub(db_file=db_file, api_tokens=tokens)
+app = MockHub(db_url=db_file, api_tokens=tokens)
-app.initialize([])
+io_loop.run_sync(lambda : app.initialize([]))
db = app.db
for token, username in tokens.items():
api_token = orm.APIToken.find(db, token)
@@ -85,9 +84,9 @@ def test_init_tokens():
# don't allow failed token insertion to create users:
tokens['short'] = 'gman'
-app = MockHub(db_file=db_file, api_tokens=tokens)
+app = MockHub(db_url=db_file, api_tokens=tokens)
-# with pytest.raises(ValueError):
+with pytest.raises(ValueError):
-app.initialize([])
+io_loop.run_sync(lambda : app.initialize([]))
assert orm.User.find(app.db, 'gman') is None
@@ -140,3 +139,20 @@ def test_cookie_secret_env(tmpdir):
assert hub.cookie_secret == binascii.a2b_hex('abc123')
assert not os.path.exists(hub.cookie_secret_file)
def test_load_groups(io_loop):
to_load = {
'blue': ['cyclops', 'rogue', 'wolverine'],
'gold': ['storm', 'jean-grey', 'colossus'],
}
hub = MockHub(load_groups=to_load)
hub.init_db()
io_loop.run_sync(hub.init_users)
hub.init_groups()
db = hub.db
blue = orm.Group.find(db, name='blue')
assert blue is not None
assert sorted([ u.name for u in blue.users ]) == sorted(to_load['blue'])
gold = orm.Group.find(db, name='gold')
assert gold is not None
assert sorted([ u.name for u in gold.users ]) == sorted(to_load['gold'])
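
test_load_groups drives the new JupyterHub.load_groups setting, which pre-populates groups at startup. In a deployment the equivalent would live in jupyterhub_config.py; group and user names below are placeholders:

# jupyterhub_config.py
c.JupyterHub.load_groups = {
    'blue': ['cyclops', 'rogue', 'wolverine'],
    'gold': ['storm', 'jean-grey', 'colossus'],
}
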


@@ -168,6 +168,23 @@ def test_normalize_names(io_loop):
}))
assert authorized == 'zoe'
authorized = io_loop.run_sync(lambda: a.get_authenticated_user(None, {
'username': 'Glenn',
'password': 'Glenn',
}))
assert authorized == 'glenn'
authorized = io_loop.run_sync(lambda: a.get_authenticated_user(None, {
'username': 'hExi',
'password': 'hExi',
}))
assert authorized == 'hexi'
authorized = io_loop.run_sync(lambda: a.get_authenticated_user(None, {
'username': 'Test',
'password': 'Test',
}))
assert authorized == 'test'
def test_username_map(io_loop):
a = MockPAMAuthenticator(username_map={'wash': 'alpha'})
@@ -189,6 +206,9 @@ def test_validate_names(io_loop):
a = auth.PAMAuthenticator()
assert a.validate_username('willow')
assert a.validate_username('giles')
assert a.validate_username('Test')
assert a.validate_username('hExi')
assert a.validate_username('Glenn#Smith!')
a = auth.PAMAuthenticator(username_pattern='w.*')
assert not a.validate_username('xander')
assert a.validate_username('willow')
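
The normalization and validation tests above correspond to two Authenticator options; a minimal jupyterhub_config.py sketch with placeholder values:

# jupyterhub_config.py
c.Authenticator.username_map = {'wash': 'alpha'}   # rewrite a login name after authentication
c.Authenticator.username_pattern = r'w.*'          # only allow usernames matching this regex
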


@@ -0,0 +1,48 @@
from glob import glob
import os
import shutil
from sqlalchemy.exc import OperationalError
from pytest import raises
from ..dbutil import upgrade
from ..app import NewToken, UpgradeDB, JupyterHub
here = os.path.dirname(__file__)
old_db = os.path.join(here, 'old-jupyterhub.sqlite')
def generate_old_db(path):
db_path = os.path.join(path, "jupyterhub.sqlite")
print(old_db, db_path)
shutil.copy(old_db, db_path)
return 'sqlite:///%s' % db_path
def test_upgrade(tmpdir):
print(tmpdir)
db_url = generate_old_db(str(tmpdir))
print(db_url)
upgrade(db_url)
def test_upgrade_entrypoint(tmpdir, io_loop):
generate_old_db(str(tmpdir))
tmpdir.chdir()
tokenapp = NewToken()
tokenapp.initialize(['kaylee'])
with raises(OperationalError):
tokenapp.start()
sqlite_files = glob(os.path.join(str(tmpdir), 'jupyterhub.sqlite*'))
assert len(sqlite_files) == 1
upgradeapp = UpgradeDB()
io_loop.run_sync(lambda : upgradeapp.initialize([]))
upgradeapp.start()
# check that backup was created:
sqlite_files = glob(os.path.join(str(tmpdir), 'jupyterhub.sqlite*'))
assert len(sqlite_files) == 2
# run tokenapp again, it should work
tokenapp.start()
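
These tests exercise jupyterhub.dbutil.upgrade, the routine behind the upgrade-db subcommand, against a copy of an old database. A minimal programmatic sketch, with placeholder paths and a manual backup step:

import shutil
from jupyterhub.dbutil import upgrade

shutil.copy('jupyterhub.sqlite', 'jupyterhub.sqlite.bak')  # placeholder paths; keep a backup
upgrade('sqlite:///jupyterhub.sqlite')                     # migrate the database in place
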


@@ -90,6 +90,8 @@ def test_tokens(db):
assert len(user.api_tokens) == 2
found = orm.APIToken.find(db, token=token)
assert found.match(token)
assert found.user is user
assert found.service is None
found = orm.APIToken.find(db, 'something else')
assert found is None
@@ -104,6 +106,67 @@ def test_tokens(db):
assert len(user.api_tokens) == 3
def test_service_tokens(db):
service = orm.Service(name='secret')
db.add(service)
db.commit()
token = service.new_api_token()
assert any(t.match(token) for t in service.api_tokens)
service.new_api_token()
assert len(service.api_tokens) == 2
found = orm.APIToken.find(db, token=token)
assert found.match(token)
assert found.user is None
assert found.service is service
service2 = orm.Service(name='secret')
db.add(service)
db.commit()
assert service2.id != service.id
def test_service_server(db):
service = orm.Service(name='has_servers')
db.add(service)
db.commit()
assert service.server is None
server = service.server = orm.Server()
assert service
assert server.id is None
db.commit()
assert isinstance(server.id, int)
def test_token_find(db):
service = db.query(orm.Service).first()
user = db.query(orm.User).first()
service_token = service.new_api_token()
user_token = user.new_api_token()
with pytest.raises(ValueError):
orm.APIToken.find(db, 'irrelevant', kind='richard')
# no kind, find anything
found = orm.APIToken.find(db, token=user_token)
assert found
assert found.match(user_token)
found = orm.APIToken.find(db, token=service_token)
assert found
assert found.match(service_token)
# kind=user, only find user tokens
found = orm.APIToken.find(db, token=user_token, kind='user')
assert found
assert found.match(user_token)
found = orm.APIToken.find(db, token=service_token, kind='user')
assert found is None
# kind=service, only find service tokens
found = orm.APIToken.find(db, token=service_token, kind='service')
assert found
assert found.match(service_token)
found = orm.APIToken.find(db, token=user_token, kind='service')
assert found is None
def test_spawn_fails(db, io_loop):
orm_user = orm.User(name='aeofel')
db.add(orm_user)
@@ -124,3 +187,17 @@ def test_spawn_fails(db, io_loop):
assert user.server is None
assert not user.running
def test_groups(db):
user = orm.User.find(db, name='aeofel')
db.add(user)
group = orm.Group(name='lives')
db.add(group)
db.commit()
assert group.users == []
assert user.groups == []
group.users.append(user)
db.commit()
assert group.users == [user]
assert user.groups == [group]


@@ -8,11 +8,15 @@ from ..utils import url_path_join as ujoin
from .. import orm
import mock
-from .mocking import FormSpawner, public_url, public_host, user_url
+from .mocking import FormSpawner, public_url, public_host
from .test_api import api_request
-def get_page(path, app, **kw):
+def get_page(path, app, hub=True, **kw):
-base_url = ujoin(public_url(app), app.hub.server.base_url)
+if hub:
prefix = app.hub.server.base_url
else:
prefix = app.base_url
base_url = ujoin(public_host(app), prefix)
print(base_url)
return requests.get(ujoin(base_url, path), **kw)
@@ -21,17 +25,27 @@ def test_root_no_auth(app, io_loop):
routes = io_loop.run_sync(app.proxy.get_routes)
print(routes)
print(app.hub.server)
-url = public_url(app)
+url = ujoin(public_host(app), app.hub.server.base_url)
print(url)
r = requests.get(url)
r.raise_for_status()
-assert r.url == ujoin(url, app.hub.server.base_url, 'login')
+assert r.url == ujoin(url, 'login')
def test_root_auth(app):
cookies = app.login_user('river')
r = requests.get(public_url(app), cookies=cookies)
r.raise_for_status()
-assert r.url == user_url(app.users['river'], app)
+assert r.url == public_url(app, app.users['river'])
def test_root_redirect(app):
name = 'wash'
cookies = app.login_user(name)
next_url = ujoin(app.base_url, 'user/other/test.ipynb')
url = '/?' + urlencode({'next': next_url})
r = get_page(url, app, cookies=cookies)
r.raise_for_status()
path = urlparse(r.url).path
assert path == ujoin(app.base_url, 'user/%s/test.ipynb' % name)
def test_home_no_auth(app):
r = get_page('home', app, allow_redirects=False)
@@ -80,7 +94,7 @@ def test_spawn_redirect(app, io_loop):
r.raise_for_status()
print(urlparse(r.url))
path = urlparse(r.url).path
-assert path == '/user/%s' % name
+assert path == ujoin(app.base_url, 'user/%s' % name)
# should have started server
status = io_loop.run_sync(u.spawner.poll)
@@ -91,7 +105,7 @@ def test_spawn_redirect(app, io_loop):
r.raise_for_status()
print(urlparse(r.url))
path = urlparse(r.url).path
-assert path == '/user/%s' % name
+assert path == ujoin(app.base_url, '/user/%s' % name)
def test_spawn_page(app):
with mock.patch.dict(app.users.settings, {'spawner_class': FormSpawner}):
@@ -102,7 +116,7 @@ def test_spawn_page(app):
def test_spawn_form(app, io_loop):
with mock.patch.dict(app.users.settings, {'spawner_class': FormSpawner}):
-base_url = ujoin(public_url(app), app.hub.server.base_url)
+base_url = ujoin(public_host(app), app.hub.server.base_url)
cookies = app.login_user('jones')
orm_u = orm.User.find(app.db, 'jones')
u = app.users[orm_u]
@@ -123,7 +137,7 @@ def test_spawn_form(app, io_loop):
def test_spawn_form_with_file(app, io_loop):
with mock.patch.dict(app.users.settings, {'spawner_class': FormSpawner}):
-base_url = ujoin(public_url(app), app.hub.server.base_url)
+base_url = ujoin(public_host(app), app.hub.server.base_url)
cookies = app.login_user('jones')
orm_u = orm.User.find(app.db, 'jones')
u = app.users[orm_u]
@@ -138,8 +152,6 @@ def test_spawn_form_with_file(app, io_loop):
files={'hello': ('hello.txt', b'hello world\n')}
)
r.raise_for_status()
print(u.spawner)
print(u.spawner.user_options)
assert u.spawner.user_options == {
'energy': '511keV',
'bounds': [-1, 1],
@@ -154,25 +166,49 @@ def test_user_redirect(app):
name = 'wash'
cookies = app.login_user(name)
-r = get_page('/user/baduser', app, cookies=cookies)
+r = get_page('/user-redirect/tree/top/', app)
r.raise_for_status()
print(urlparse(r.url))
path = urlparse(r.url).path
-assert path == '/user/%s' % name
+assert path == ujoin(app.base_url, '/hub/login')
r = get_page('/user/baduser/test.ipynb', app, cookies=cookies)
r.raise_for_status()
print(urlparse(r.url))
path = urlparse(r.url).path
assert path == '/user/%s/test.ipynb' % name
r = get_page('/user/baduser/test.ipynb', app)
r.raise_for_status()
print(urlparse(r.url))
path = urlparse(r.url).path
assert path == '/hub/login'
query = urlparse(r.url).query
-assert query == urlencode({'next': '/hub/user/baduser/test.ipynb'})
+assert query == urlencode({
'next': ujoin(app.hub.server.base_url, '/user-redirect/tree/top/')
})
r = get_page('/user-redirect/notebooks/test.ipynb', app, cookies=cookies)
r.raise_for_status()
print(urlparse(r.url))
path = urlparse(r.url).path
assert path == ujoin(app.base_url, '/user/%s/notebooks/test.ipynb' % name)
def test_user_redirect_deprecated(app):
"""redirecting from /user/someonelse/ URLs (deprecated)"""
name = 'wash'
cookies = app.login_user(name)
r = get_page('/user/baduser', app, cookies=cookies, hub=False)
r.raise_for_status()
print(urlparse(r.url))
path = urlparse(r.url).path
assert path == ujoin(app.base_url, '/user/%s' % name)
r = get_page('/user/baduser/test.ipynb', app, cookies=cookies, hub=False)
r.raise_for_status()
print(urlparse(r.url))
path = urlparse(r.url).path
assert path == ujoin(app.base_url, '/user/%s/test.ipynb' % name)
r = get_page('/user/baduser/test.ipynb', app, hub=False)
r.raise_for_status()
print(urlparse(r.url))
path = urlparse(r.url).path
assert path == ujoin(app.base_url, '/hub/login')
query = urlparse(r.url).query
assert query == urlencode({
'next': ujoin(app.base_url, '/hub/user/baduser/test.ipynb')
})
def test_login_fail(app):
@@ -233,8 +269,7 @@ def test_login_no_whitelist_adds_user(app):
def test_static_files(app):
-base_url = ujoin(public_url(app), app.hub.server.base_url)
+base_url = ujoin(public_host(app), app.hub.server.base_url)
print(base_url)
r = requests.get(ujoin(base_url, 'logo'))
r.raise_for_status()
assert r.headers['content-type'] == 'image/png'


@@ -4,12 +4,12 @@ import json
import os
from queue import Queue
from subprocess import Popen
-from urllib.parse import urlparse
+from urllib.parse import urlparse, unquote
from .. import orm
from .mocking import MockHub
from .test_api import api_request
-from ..utils import wait_for_http_server
+from ..utils import wait_for_http_server, url_path_join as ujoin
def test_external_proxy(request, io_loop):
"""Test a proxy started before the Hub"""
@@ -63,7 +63,7 @@ def test_external_proxy(request, io_loop):
r.raise_for_status()
routes = io_loop.run_sync(app.proxy.get_routes)
-user_path = '/user/river'
+user_path = unquote(ujoin(app.base_url, 'user/river'))
if app.subdomain_host:
domain = urlparse(app.subdomain_host).hostname
user_path = '/%s.%s' % (name, domain) + user_path
@@ -136,14 +136,14 @@ def test_check_routes(app, io_loop):
assert zoe is not None
zoe = app.users[zoe]
before = sorted(io_loop.run_sync(app.proxy.get_routes))
-assert zoe.proxy_path in before
+assert unquote(zoe.proxy_path) in before
-io_loop.run_sync(lambda : app.proxy.check_routes(app.users))
+io_loop.run_sync(lambda : app.proxy.check_routes(app.users, app._service_map))
io_loop.run_sync(lambda : proxy.delete_user(zoe))
during = sorted(io_loop.run_sync(app.proxy.get_routes))
-assert zoe.proxy_path not in during
+assert unquote(zoe.proxy_path) not in during
-io_loop.run_sync(lambda : app.proxy.check_routes(app.users))
+io_loop.run_sync(lambda : app.proxy.check_routes(app.users, app._service_map))
after = sorted(io_loop.run_sync(app.proxy.get_routes))
-assert zoe.proxy_path in after
+assert unquote(zoe.proxy_path) in after
assert before == after


@@ -0,0 +1,104 @@
"""Tests for services"""
from binascii import hexlify
from contextlib import contextmanager
import os
from subprocess import Popen
import sys
from threading import Event
import time
import requests
from tornado import gen
from tornado.ioloop import IOLoop
from .mocking import public_url
from ..utils import url_path_join, wait_for_http_server
here = os.path.dirname(os.path.abspath(__file__))
mockservice_py = os.path.join(here, 'mockservice.py')
mockservice_cmd = [sys.executable, mockservice_py]
from ..utils import random_port
@contextmanager
def external_service(app, name='mockservice'):
env = {
'JUPYTERHUB_API_TOKEN': hexlify(os.urandom(5)),
'JUPYTERHUB_SERVICE_NAME': name,
'JUPYTERHUB_API_URL': url_path_join(app.hub.server.url, 'api/'),
'JUPYTERHUB_SERVICE_URL': 'http://127.0.0.1:%i' % random_port(),
}
p = Popen(mockservice_cmd, env=env)
IOLoop().run_sync(lambda : wait_for_http_server(env['JUPYTERHUB_SERVICE_URL']))
try:
yield env
finally:
p.terminate()
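
external_service launches a standalone process with the environment a real external service would receive. Registering such a service with the Hub itself goes through the JupyterHub.services configuration; the name, URL, and token below are placeholders and must match what the external process uses:

# jupyterhub_config.py
c.JupyterHub.services = [
    {
        'name': 'mockservice',                     # placeholder service name
        'url': 'http://127.0.0.1:10101',           # where the external process listens
        'api_token': 'REPLACE_WITH_SHARED_TOKEN',  # same token exported to the service
    },
]
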
def test_managed_service(app, mockservice):
service = mockservice
proc = service.proc
first_pid = proc.pid
assert proc.poll() is None
# shut it down:
proc.terminate()
proc.wait(10)
assert proc.poll() is not None
# ensure Hub notices and brings it back up:
for i in range(20):
if service.proc is not proc:
break
else:
time.sleep(0.2)
assert service.proc.pid != first_pid
assert service.proc.poll() is None
def test_proxy_service(app, mockservice_url, io_loop):
service = mockservice_url
name = service.name
routes = io_loop.run_sync(app.proxy.get_routes)
url = public_url(app, service) + '/foo'
r = requests.get(url, allow_redirects=False)
path = '/services/{}/foo'.format(name)
r.raise_for_status()
assert r.status_code == 200
assert r.text.endswith(path)
def test_external_service(app, io_loop):
name = 'external'
with external_service(app, name=name) as env:
app.services = [{
'name': name,
'admin': True,
'url': env['JUPYTERHUB_SERVICE_URL'],
'api_token': env['JUPYTERHUB_API_TOKEN'],
}]
app.init_services()
app.init_api_tokens()
evt = Event()
@gen.coroutine
def add_services():
yield app.proxy.add_all_services(app._service_map)
evt.set()
app.io_loop.add_callback(add_services)
assert evt.wait(10)
service = app._service_map[name]
url = public_url(app, service) + '/api/users'
path = '/services/{}/api/users'.format(name)
r = requests.get(url, allow_redirects=False)
r.raise_for_status()
assert r.status_code == 200
resp = r.json()
assert isinstance(resp, list)
assert len(resp) >= 1
assert isinstance(resp[0], dict)
assert 'name' in resp[0]


@@ -0,0 +1,209 @@
import json
from queue import Queue
import sys
from threading import Thread
import time
from unittest import mock
from pytest import raises
import requests
import requests_mock
from tornado.ioloop import IOLoop
from tornado.httpserver import HTTPServer
from tornado.web import RequestHandler, Application, authenticated, HTTPError
from ..services.auth import _ExpiringDict, HubAuth, HubAuthenticated
from ..utils import url_path_join
from .mocking import public_url
# mock for sending monotonic counter way into the future
monotonic_future = mock.patch('time.monotonic', lambda : sys.maxsize)
def test_expiring_dict():
cache = _ExpiringDict(max_age=30)
cache['key'] = 'cached value'
assert 'key' in cache
assert cache['key'] == 'cached value'
with raises(KeyError):
cache['nokey']
with monotonic_future:
assert 'key' not in cache
cache['key'] = 'cached value'
assert 'key' in cache
with monotonic_future:
assert 'key' not in cache
cache['key'] = 'cached value'
assert 'key' in cache
with monotonic_future:
with raises(KeyError):
cache['key']
cache['key'] = 'cached value'
assert 'key' in cache
with monotonic_future:
assert cache.get('key', 'default') == 'default'
cache.max_age = 0
cache['key'] = 'cached value'
assert 'key' in cache
with monotonic_future:
assert cache.get('key', 'default') == 'cached value'
def test_hub_auth():
start = time.monotonic()
auth = HubAuth(cookie_name='foo')
mock_model = {
'name': 'onyxia'
}
url = url_path_join(auth.api_url, "authorizations/cookie/foo/bar")
with requests_mock.Mocker() as m:
m.get(url, text=json.dumps(mock_model))
user_model = auth.user_for_cookie('bar')
assert user_model == mock_model
# check cache
user_model = auth.user_for_cookie('bar')
assert user_model == mock_model
with requests_mock.Mocker() as m:
m.get(url, status_code=404)
user_model = auth.user_for_cookie('bar', use_cache=False)
assert user_model is None
# invalidate cache with timer
mock_model = {
'name': 'willow'
}
with monotonic_future, requests_mock.Mocker() as m:
m.get(url, text=json.dumps(mock_model))
user_model = auth.user_for_cookie('bar')
assert user_model == mock_model
with requests_mock.Mocker() as m:
m.get(url, status_code=500)
with raises(HTTPError) as exc_info:
user_model = auth.user_for_cookie('bar', use_cache=False)
assert exc_info.value.status_code == 502
with requests_mock.Mocker() as m:
m.get(url, status_code=400)
with raises(HTTPError) as exc_info:
user_model = auth.user_for_cookie('bar', use_cache=False)
assert exc_info.value.status_code == 500
def test_hub_authenticated(request):
auth = HubAuth(cookie_name='jubal')
mock_model = {
'name': 'jubalearly'
}
cookie_url = url_path_join(auth.api_url, "authorizations/cookie", auth.cookie_name)
good_url = url_path_join(cookie_url, "early")
bad_url = url_path_join(cookie_url, "late")
class TestHandler(HubAuthenticated, RequestHandler):
hub_auth = auth
@authenticated
def get(self):
self.finish(self.get_current_user())
# start hub-authenticated service in a thread:
port = 50505
q = Queue()
def run():
app = Application([
('/*', TestHandler),
], login_url=auth.login_url)
http_server = HTTPServer(app)
http_server.listen(port)
loop = IOLoop.current()
loop.add_callback(lambda : q.put(loop))
loop.start()
t = Thread(target=run)
t.start()
def finish_thread():
loop.stop()
t.join()
request.addfinalizer(finish_thread)
# wait for thread to start
loop = q.get(timeout=10)
with requests_mock.Mocker(real_http=True) as m:
# no cookie
r = requests.get('http://127.0.0.1:%i' % port,
allow_redirects=False,
)
r.raise_for_status()
assert r.status_code == 302
assert auth.login_url in r.headers['Location']
# wrong cookie
m.get(bad_url, status_code=404)
r = requests.get('http://127.0.0.1:%i' % port,
cookies={'jubal': 'late'},
allow_redirects=False,
)
r.raise_for_status()
assert r.status_code == 302
assert auth.login_url in r.headers['Location']
# upstream 403
m.get(bad_url, status_code=403)
r = requests.get('http://127.0.0.1:%i' % port,
cookies={'jubal': 'late'},
allow_redirects=False,
)
assert r.status_code == 500
m.get(good_url, text=json.dumps(mock_model))
# no whitelist
r = requests.get('http://127.0.0.1:%i' % port,
cookies={'jubal': 'early'},
allow_redirects=False,
)
r.raise_for_status()
assert r.status_code == 200
# pass whitelist
TestHandler.hub_users = {'jubalearly'}
r = requests.get('http://127.0.0.1:%i' % port,
cookies={'jubal': 'early'},
allow_redirects=False,
)
r.raise_for_status()
assert r.status_code == 200
# no pass whitelist
TestHandler.hub_users = {'kaylee'}
r = requests.get('http://127.0.0.1:%i' % port,
cookies={'jubal': 'early'},
allow_redirects=False,
)
r.raise_for_status()
assert r.status_code == 302
assert auth.login_url in r.headers['Location']
def test_service_cookie_auth(app, mockservice_url):
cookies = app.login_user('badger')
r = requests.get(public_url(app, mockservice_url) + '/whoami/', cookies=cookies)
r.raise_for_status()
print(r.text)
reply = r.json()
sub_reply = { key: reply.get(key, 'missing') for key in ['name', 'admin']}
assert sub_reply == {
'name': 'badger',
'admin': False,
}
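
These tests cover HubAuth and HubAuthenticated from jupyterhub.services.auth. A stripped-down sketch of a service request handler built on the same pieces; the token source and the hub_users whitelist are placeholders:

import os
from tornado import web
from jupyterhub.services.auth import HubAuth, HubAuthenticated

class WhoAmIHandler(HubAuthenticated, web.RequestHandler):
    # the Hub normally injects the service's token as JUPYTERHUB_API_TOKEN
    hub_auth = HubAuth(api_token=os.environ.get('JUPYTERHUB_API_TOKEN', ''))
    hub_users = {'badger'}  # placeholder whitelist; omit to allow any Hub user

    @web.authenticated
    def get(self):
        self.write(self.get_current_user())  # user model dict returned by the Hub
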


@@ -0,0 +1,74 @@
"""Tests for jupyterhub.singleuser"""
from subprocess import check_output
import sys
import requests
import jupyterhub
from .mocking import TestSingleUserSpawner, public_url
from ..utils import url_path_join
def test_singleuser_auth(app, io_loop):
# use TestSingleUserSpawner to launch a single-user app in a thread
app.spawner_class = TestSingleUserSpawner
app.tornado_settings['spawner_class'] = TestSingleUserSpawner
# login, start the server
cookies = app.login_user('nandy')
user = app.users['nandy']
if not user.running:
io_loop.run_sync(user.spawn)
url = public_url(app, user)
# no cookies, redirects to login page
r = requests.get(url)
r.raise_for_status()
assert '/hub/login' in r.url
# with cookies, login successful
r = requests.get(url, cookies=cookies)
r.raise_for_status()
assert r.url.rstrip('/').endswith('/user/nandy/tree')
assert r.status_code == 200
# logout
r = requests.get(url_path_join(url, 'logout'), cookies=cookies)
assert len(r.cookies) == 0
def test_disable_user_config(app, io_loop):
# use TestSingleUserSpawner to launch a single-user app in a thread
app.spawner_class = TestSingleUserSpawner
app.tornado_settings['spawner_class'] = TestSingleUserSpawner
# login, start the server
cookies = app.login_user('nandy')
user = app.users['nandy']
# stop spawner, if running:
if user.running:
print("stopping")
io_loop.run_sync(user.stop)
# start with new config:
user.spawner.debug = True
user.spawner.disable_user_config = True
io_loop.run_sync(user.spawn)
io_loop.run_sync(lambda : app.proxy.add_user(user))
url = public_url(app, user)
# with cookies, login successful
r = requests.get(url, cookies=cookies)
r.raise_for_status()
assert r.url.rstrip('/').endswith('/user/nandy/tree')
assert r.status_code == 200
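
The same behavior can be enabled for every spawned server from the Hub side; a one-line jupyterhub_config.py sketch:

# jupyterhub_config.py
c.Spawner.disable_user_config = True  # spawned servers ignore user-owned config files
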
def test_help_output():
out = check_output([sys.executable, '-m', 'jupyterhub.singleuser', '--help-all']).decode('utf8', 'replace')
assert 'JupyterHub' in out
def test_version():
out = check_output([sys.executable, '-m', 'jupyterhub.singleuser', '--version']).decode('utf8', 'replace')
assert jupyterhub.__version__ in out


@@ -41,6 +41,8 @@ def new_spawner(db, **kwargs):
kwargs.setdefault('cmd', [sys.executable, '-c', _echo_sleep])
kwargs.setdefault('user', db.query(orm.User).first())
kwargs.setdefault('hub', db.query(orm.Hub).first())
kwargs.setdefault('notebook_dir', os.getcwd())
kwargs.setdefault('default_url', '/user/{username}/lab')
kwargs.setdefault('INTERRUPT_TIMEOUT', 1)
kwargs.setdefault('TERM_TIMEOUT', 1)
kwargs.setdefault('KILL_TIMEOUT', 1)
@@ -50,12 +52,18 @@ def new_spawner(db, **kwargs):
def test_spawner(db, io_loop):
spawner = new_spawner(db)
-io_loop.run_sync(spawner.start)
+ip, port = io_loop.run_sync(spawner.start)
-assert spawner.user.server.ip == '127.0.0.1'
+assert ip == '127.0.0.1'
assert isinstance(port, int)
assert port > 0
spawner.user.server.ip = ip
spawner.user.server.port = port
db.commit()
# wait for the process to get to the while True: loop
time.sleep(1)
status = io_loop.run_sync(spawner.poll)
assert status is None
io_loop.run_sync(spawner.stop)
@@ -64,8 +72,14 @@ def test_spawner(db, io_loop):
def test_single_user_spawner(db, io_loop):
spawner = new_spawner(db, cmd=['jupyterhub-singleuser'])
-io_loop.run_sync(spawner.start)
+spawner.api_token = 'secret'
-assert spawner.user.server.ip == '127.0.0.1'
+ip, port = io_loop.run_sync(spawner.start)
assert ip == '127.0.0.1'
assert isinstance(port, int)
assert port > 0
spawner.user.server.ip = ip
spawner.user.server.port = port
db.commit()
# wait for http server to come up,
# checking for early termination every 1s
def wait():
@@ -116,6 +130,7 @@ def test_stop_spawner_stop_now(db, io_loop):
status = io_loop.run_sync(spawner.poll)
assert status == -signal.SIGTERM
def test_spawner_poll(db, io_loop):
first_spawner = new_spawner(db)
user = first_spawner.user
@@ -148,6 +163,7 @@ def test_spawner_poll(db, io_loop):
status = io_loop.run_sync(spawner.poll)
assert status is not None
def test_setcwd():
cwd = os.getcwd()
with tempfile.TemporaryDirectory() as td:
@@ -166,3 +182,13 @@ def test_setcwd():
spawnermod._try_setcwd(cwd)
assert os.getcwd().startswith(temp_root)
os.chdir(cwd)
def test_string_formatting(db):
s = new_spawner(db, notebook_dir='user/%U/', default_url='/base/{username}')
name = s.user.name
assert s.notebook_dir == 'user/{username}/'
assert s.default_url == '/base/{username}'
assert s.format_string(s.notebook_dir) == 'user/%s/' % name
assert s.format_string(s.default_url) == '/base/%s' % name
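
test_string_formatting covers the move from %U-style placeholders to {username} templates in Spawner strings. The deployment-side equivalent, with placeholder values:

# jupyterhub_config.py
c.Spawner.notebook_dir = '~/notebooks/{username}'  # placeholder path, formatted per user
c.Spawner.default_url = '/user/{username}/lab'     # placeholder default landing page
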


@@ -1,11 +1,12 @@
-from traitlets import HasTraits
+import pytest
from traitlets import HasTraits, TraitError
from jupyterhub.traitlets import URLPrefix, Command, ByteSpecification
from jupyterhub.traitlets import URLPrefix, Command
def test_url_prefix():
class C(HasTraits):
url = URLPrefix()
c = C()
c.url = '/a/b/c/'
assert c.url == '/a/b/c/'
@@ -14,14 +15,38 @@ def test_url_prefix():
c.url = 'a/b/c/d'
assert c.url == '/a/b/c/d/'
def test_command():
class C(HasTraits):
cmd = Command('default command')
cmd2 = Command(['default_cmd'])
c = C()
assert c.cmd == ['default command']
assert c.cmd2 == ['default_cmd']
c.cmd = 'foo bar'
assert c.cmd == ['foo bar']
def test_memoryspec():
class C(HasTraits):
mem = ByteSpecification()
c = C()
c.mem = 1024
assert c.mem == 1024
c.mem = '1024K'
assert c.mem == 1024 * 1024
c.mem = '1024M'
assert c.mem == 1024 * 1024 * 1024
c.mem = '1024G'
assert c.mem == 1024 * 1024 * 1024 * 1024
c.mem = '1024T'
assert c.mem == 1024 * 1024 * 1024 * 1024 * 1024
with pytest.raises(TraitError):
c.mem = '1024Gi'


@@ -1,8 +1,11 @@
"""extra traitlets""" """
Traitlets that are used in JupyterHub
"""
# Copyright (c) Jupyter Development Team. # Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License. # Distributed under the terms of the Modified BSD License.
from traitlets import List, Unicode from traitlets import List, Unicode, Integer, TraitError
class URLPrefix(Unicode): class URLPrefix(Unicode):
def validate(self, obj, value): def validate(self, obj, value):
@@ -22,8 +25,47 @@ class Command(List):
if isinstance(default_value, str): if isinstance(default_value, str):
default_value = [default_value] default_value = [default_value]
super().__init__(Unicode(), default_value, **kwargs) super().__init__(Unicode(), default_value, **kwargs)
def validate(self, obj, value): def validate(self, obj, value):
if isinstance(value, str): if isinstance(value, str):
value = [value] value = [value]
return super().validate(obj, value) return super().validate(obj, value)
class ByteSpecification(Integer):
"""
Allow easily specifying bytes in units of 1024 with suffixes
Suffixes allowed are:
- K -> Kilobyte
- M -> Megabyte
- G -> Gigabyte
- T -> Terabyte
"""
UNIT_SUFFIXES = {
'K': 1024,
'M': 1024 * 1024,
'G': 1024 * 1024 * 1024,
'T': 1024 * 1024 * 1024 * 1024
}
# Default to allowing None as a value
allow_none = True
def validate(self, obj, value):
"""
Validate that the passed in value is a valid memory specification
It could either be a pure int, when it is taken as a byte value.
If it has one of the suffixes, it is converted into the appropriate
pure byte value.
"""
if isinstance(value, int):
return value
num = value[:-1]
suffix = value[-1]
if not num.isdigit() or suffix not in ByteSpecification.UNIT_SUFFIXES:
raise TraitError('{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(val=value))
else:
return int(num) * ByteSpecification.UNIT_SUFFIXES[suffix]
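
ByteSpecification accepts either a plain integer (bytes) or a string with a K, M, G, or T suffix, as the test above shows. A sketch of how a Spawner subclass might declare such a trait; the mem_limit name is illustrative and not part of this diff:

from jupyterhub.spawner import LocalProcessSpawner
from jupyterhub.traitlets import ByteSpecification

class MemoryLimitSpawner(LocalProcessSpawner):
    # illustrative trait: '1G', '512M', or a plain int all validate to a byte count
    mem_limit = ByteSpecification().tag(config=True)
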


@@ -99,6 +99,7 @@ class User(HasTraits):
spawner = None
spawn_pending = False
stop_pending = False
waiting_for_response = False
@property
def authenticator(self):
@@ -161,9 +162,9 @@ class User(HasTraits):
@property
def proxy_path(self):
if self.settings.get('subdomain_host'):
-return url_path_join('/' + self.domain, self.server.base_url)
+return url_path_join('/' + self.domain, self.base_url)
else:
-return self.server.base_url
+return self.base_url
@property
def domain(self):
@@ -173,7 +174,7 @@ class User(HasTraits):
@property
def host(self):
-"""Get the *host* for my server (domain[:port])"""
+"""Get the *host* for my server (proto://domain[:port])"""
# FIXME: escaped_name probably isn't escaped enough in general for a domain fragment
parsed = urlparse(self.settings['subdomain_host'])
h = '%s://%s.%s' % (parsed.scheme, self.escaped_name, parsed.netloc)
@@ -225,7 +226,20 @@ class User(HasTraits):
f = spawner.start()
# commit any changes in spawner.start (always commit db changes before yield)
db.commit()
-yield gen.with_timeout(timedelta(seconds=spawner.start_timeout), f)
+ip_port = yield gen.with_timeout(timedelta(seconds=spawner.start_timeout), f)
if ip_port:
# get ip, port info from return value of start()
self.server.ip, self.server.port = ip_port
else:
# prior to 0.7, spawners had to store this info in user.server themselves.
# Handle < 0.7 behavior with a warning, assuming info was stored in db by the Spawner.
self.log.warning("DEPRECATION: Spawner.start should return (ip, port) in JupyterHub >= 0.7")
if spawner.api_token != api_token:
# Spawner re-used an API token, discard the unused api_token
orm_token = orm.APIToken.find(self.db, api_token)
if orm_token is not None:
self.db.delete(orm_token)
self.db.commit()
except Exception as e:
if isinstance(e, gen.TimeoutError):
self.log.warning("{user}'s server failed to start in {s} seconds, giving up".format(
@@ -251,7 +265,7 @@ class User(HasTraits):
self.state = spawner.get_state()
self.last_activity = datetime.utcnow()
db.commit()
-self.spawn_pending = False
+self.waiting_for_response = True
try:
yield self.server.wait_up(http=True, timeout=spawner.http_timeout)
except Exception as e:
@@ -278,6 +292,9 @@ class User(HasTraits):
), exc_info=True)
# raise original TimeoutError
raise e
finally:
self.waiting_for_response = False
self.spawn_pending = False
return self
@gen.coroutine
@@ -291,13 +308,24 @@ class User(HasTraits):
self.spawner.stop_polling()
self.stop_pending = True
try:
api_token = self.spawner.api_token
status = yield spawner.poll()
if status is None:
yield self.spawner.stop()
spawner.clear_state()
self.state = spawner.get_state()
self.last_activity = datetime.utcnow()
# cleanup server entry, API token from defunct server
if self.server:
# cleanup server entry from db
self.db.delete(self.server)
self.server = None
if not spawner.will_resume:
# find and remove the API token if the spawner isn't
# going to re-use it next time
orm_token = orm.APIToken.find(self.db, api_token)
if orm_token:
self.db.delete(orm_token)
self.db.commit()
finally:
self.stop_pending = False
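
The spawn() changes above consume the new Spawner.start contract in 0.7: start should return an (ip, port) tuple instead of writing to user.server itself. A minimal sketch of a custom spawner following that contract; the launch step and port selection are placeholders, and a real spawner must also implement poll() and stop():

from tornado import gen
from jupyterhub.spawner import Spawner
from jupyterhub.utils import random_port

class TrivialSpawner(Spawner):
    @gen.coroutine
    def start(self):
        port = random_port()        # placeholder port selection
        # ... launch the single-user server on 127.0.0.1:port here ...
        return ('127.0.0.1', port)  # JupyterHub >= 0.7 stores this in user.server
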


@@ -5,8 +5,8 @@
version_info = (
0,
-6,
+7,
-0,
+1,
# 'dev',
)


@@ -10,7 +10,8 @@
},
"devDependencies": {
"bower": "*",
-"less": "~2",
+"less": "^2.7.1",
-"less-plugin-clean-css": "*"
+"less-plugin-clean-css": "^1.5.1",
"clean-css": "^3.4.13"
}
}


@@ -1,5 +1,6 @@
name: jupyterhub
type: sphinx
-requirements_file: docs/requirements.txt
+conda:
file: docs/environment.yml
python:
version: 3


@@ -1,7 +1,7 @@
alembic
traitlets>=4.1
tornado>=4.1
jinja2
pamela
statsd
sqlalchemy>=1.0
requests


@@ -1,297 +1,6 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
"""Extend regular notebook server to be aware of multiuser things."""
-# Copyright (c) Jupyter Development Team.
+from jupyterhub.singleuser import main
# Distributed under the terms of the Modified BSD License.
-import os
+if __name__ == '__main__':
try:
from urllib.parse import quote
except ImportError:
# PY2 Compat
from urllib import quote
import requests
from jinja2 import ChoiceLoader, FunctionLoader
from tornado import ioloop
from tornado.web import HTTPError
try:
import notebook
except ImportError:
raise ImportError("JupyterHub single-user server requires notebook >= 4.0")
from traitlets import (
Bool,
Integer,
Unicode,
CUnicode,
)
from notebook.notebookapp import (
NotebookApp,
aliases as notebook_aliases,
flags as notebook_flags,
)
from notebook.auth.login import LoginHandler
from notebook.auth.logout import LogoutHandler
from notebook.utils import url_path_join
# Define two methods to attach to AuthenticatedHandler,
# which authenticate via the central auth server.
class JupyterHubLoginHandler(LoginHandler):
@staticmethod
def login_available(settings):
return True
@staticmethod
def verify_token(self, cookie_name, encrypted_cookie):
"""method for token verification"""
cookie_cache = self.settings['cookie_cache']
if encrypted_cookie in cookie_cache:
# we've seen this token before, don't ask upstream again
return cookie_cache[encrypted_cookie]
hub_api_url = self.settings['hub_api_url']
hub_api_key = self.settings['hub_api_key']
r = requests.get(url_path_join(
hub_api_url, "authorizations/cookie", cookie_name, quote(encrypted_cookie, safe=''),
),
headers = {'Authorization' : 'token %s' % hub_api_key},
)
if r.status_code == 404:
data = None
elif r.status_code == 403:
self.log.error("I don't have permission to verify cookies, my auth token may have expired: [%i] %s", r.status_code, r.reason)
raise HTTPError(500, "Permission failure checking authorization, I may need to be restarted")
elif r.status_code >= 500:
self.log.error("Upstream failure verifying auth token: [%i] %s", r.status_code, r.reason)
raise HTTPError(502, "Failed to check authorization (upstream problem)")
elif r.status_code >= 400:
self.log.warn("Failed to check authorization: [%i] %s", r.status_code, r.reason)
raise HTTPError(500, "Failed to check authorization")
else:
data = r.json()
cookie_cache[encrypted_cookie] = data
return data
@staticmethod
def get_user(self):
"""alternative get_current_user to query the central server"""
# only allow this to be called once per handler
# avoids issues if an error is raised,
# since this may be called again when trying to render the error page
if hasattr(self, '_cached_user'):
return self._cached_user
self._cached_user = None
my_user = self.settings['user']
encrypted_cookie = self.get_cookie(self.cookie_name)
if encrypted_cookie:
auth_data = JupyterHubLoginHandler.verify_token(self, self.cookie_name, encrypted_cookie)
if not auth_data:
# treat invalid token the same as no token
return None
user = auth_data['name']
if user == my_user:
self._cached_user = user
return user
else:
return None
else:
self.log.debug("No token cookie")
return None
class JupyterHubLogoutHandler(LogoutHandler):
def get(self):
self.redirect(
self.settings['hub_host'] +
url_path_join(self.settings['hub_prefix'], 'logout'))
# register new hub related command-line aliases
aliases = dict(notebook_aliases)
aliases.update({
'user' : 'SingleUserNotebookApp.user',
'cookie-name': 'SingleUserNotebookApp.cookie_name',
'hub-prefix': 'SingleUserNotebookApp.hub_prefix',
'hub-host': 'SingleUserNotebookApp.hub_host',
'hub-api-url': 'SingleUserNotebookApp.hub_api_url',
'base-url': 'SingleUserNotebookApp.base_url',
})
flags = dict(notebook_flags)
flags.update({
'disable-user-config': ({
'SingleUserNotebookApp': {
'disable_user_config': True
}
}, "Disable user-controlled configuration of the notebook server.")
})
page_template = """
{% extends "templates/page.html" %}
{% block header_buttons %}
{{super()}}
<a href='{{hub_control_panel_url}}'
class='btn btn-default btn-sm navbar-btn pull-right'
style='margin-right: 4px; margin-left: 2px;'
>
Control Panel</a>
{% endblock %}
{% block logo %}
<img src='{{logo_url}}' alt='Jupyter Notebook'/>
{% endblock logo %}
"""
def _exclude_home(path_list):
"""Filter out any entries in a path list that are in my home directory.
Used to disable per-user configuration.
"""
home = os.path.expanduser('~')
for p in path_list:
if not p.startswith(home):
yield p
class SingleUserNotebookApp(NotebookApp):
"""A Subclass of the regular NotebookApp that is aware of the parent multiuser context."""
user = CUnicode(config=True)
def _user_changed(self, name, old, new):
self.log.name = new
cookie_name = Unicode(config=True)
hub_prefix = Unicode(config=True)
hub_host = Unicode(config=True)
hub_api_url = Unicode(config=True)
aliases = aliases
flags = flags
open_browser = False
trust_xheaders = True
login_handler_class = JupyterHubLoginHandler
logout_handler_class = JupyterHubLogoutHandler
port_retries = 0 # disable port-retries, since the Spawner will tell us what port to use
disable_user_config = Bool(False, config=True,
help="""Disable user configuration of single-user server.
Prevents user-writable files that normally configure the single-user server
from being loaded, ensuring admins have full control of configuration.
"""
)
cookie_cache_lifetime = Integer(
config=True,
default_value=300,
allow_none=True,
help="""
Time, in seconds, that we cache a validated cookie before requiring
revalidation with the hub.
""",
)
def _log_datefmt_default(self):
"""Exclude date from default date format"""
return "%Y-%m-%d %H:%M:%S"
def _log_format_default(self):
"""override default log format to include time"""
return "%(color)s[%(levelname)1.1s %(asctime)s.%(msecs).03d %(name)s %(module)s:%(lineno)d]%(end_color)s %(message)s"
def _confirm_exit(self):
# disable the exit confirmation for background notebook processes
ioloop.IOLoop.instance().stop()
def _clear_cookie_cache(self):
self.log.debug("Clearing cookie cache")
self.tornado_settings['cookie_cache'].clear()
def migrate_config(self):
if self.disable_user_config:
# disable config-migration when user config is disabled
return
else:
super(SingleUserNotebookApp, self).migrate_config()
@property
def config_file_paths(self):
path = super(SingleUserNotebookApp, self).config_file_paths
if self.disable_user_config:
# filter out user-writable config dirs if user config is disabled
path = list(_exclude_home(path))
return path
@property
def nbextensions_path(self):
path = super(SingleUserNotebookApp, self).nbextensions_path
if self.disable_user_config:
path = list(_exclude_home(path))
return path
def _static_custom_path_default(self):
path = super(SingleUserNotebookApp, self)._static_custom_path_default()
if self.disable_user_config:
path = list(_exclude_home(path))
return path
def start(self):
# Start a PeriodicCallback to clear cached cookies. This forces us to
# revalidate our user with the Hub at least every
# `cookie_cache_lifetime` seconds.
if self.cookie_cache_lifetime:
ioloop.PeriodicCallback(
self._clear_cookie_cache,
self.cookie_cache_lifetime * 1e3,
).start()
super(SingleUserNotebookApp, self).start()
def init_webapp(self):
# load the hub related settings into the tornado settings dict
env = os.environ
s = self.tornado_settings
s['cookie_cache'] = {}
s['user'] = self.user
s['hub_api_key'] = env.pop('JPY_API_TOKEN')
s['hub_prefix'] = self.hub_prefix
s['hub_host'] = self.hub_host
s['cookie_name'] = self.cookie_name
s['login_url'] = self.hub_host + self.hub_prefix
s['hub_api_url'] = self.hub_api_url
s['csp_report_uri'] = self.hub_host + url_path_join(self.hub_prefix, 'security/csp-report')
super(SingleUserNotebookApp, self).init_webapp()
self.patch_templates()
def patch_templates(self):
"""Patch page templates to add Hub-related buttons"""
self.jinja_template_vars['logo_url'] = self.hub_host + url_path_join(self.hub_prefix, 'logo')
env = self.web_app.settings['jinja2_env']
env.globals['hub_control_panel_url'] = \
self.hub_host + url_path_join(self.hub_prefix, 'home')
# patch jinja env loading to modify page template
def get_page(name):
if name == 'page.html':
return page_template
orig_loader = env.loader
env.loader = ChoiceLoader([
FunctionLoader(get_page),
orig_loader,
])
def main():
return SingleUserNotebookApp.launch_instance()
if __name__ == "__main__":
main()


@@ -16,7 +16,7 @@ import sys
v = sys.version_info
if v[:2] < (3,3):
-error = "ERROR: Jupyter Hub requires Python version 3.3 or above."
+error = "ERROR: JupyterHub requires Python version 3.3 or above."
print(error, file=sys.stderr)
sys.exit(1)
@@ -28,12 +28,12 @@ if os.name in ('nt', 'dos'):
# At least we're on the python version we need, move on.
import os
from glob import glob
from distutils.core import setup
from subprocess import check_call
from setuptools import setup
from setuptools.command.bdist_egg import bdist_egg
pjoin = os.path.join
here = os.path.abspath(os.path.dirname(__file__))
@@ -46,16 +46,11 @@ is_repo = os.path.exists(pjoin(here, '.git'))
# Build basic package data, etc.
#---------------------------------------------------------------------------
# setuptools for wheel, develop
for cmd in ['bdist_wheel', 'develop']:
if cmd in sys.argv:
import setuptools
def get_data_files():
"""Get data files in share/jupyter"""
data_files = []
-ntrim = len(here) + 1
+ntrim = len(here + os.path.sep)
for (d, dirs, filenames) in os.walk(share_jupyter):
data_files.append((
@@ -64,6 +59,18 @@ def get_data_files():
))
return data_files
def get_package_data():
"""Get package data
(mostly alembic config)
"""
package_data = {}
package_data['jupyterhub'] = [
'alembic.ini',
'alembic/*',
'alembic/versions/*',
]
return package_data
ns = {}
with open(pjoin(here, 'jupyterhub', 'version.py')) as f:
@@ -82,9 +89,10 @@ setup_args = dict(
# dummy, so that install_data doesn't get skipped
# this will be overridden when bower is run anyway
data_files = get_data_files() or ['dummy'],
package_data = get_package_data(),
version = ns['__version__'],
description = "JupyterHub: A multi-user server for Jupyter notebooks",
-long_description = "See https://jupyterhub.readthedocs.org for more info.",
+long_description = "See https://jupyterhub.readthedocs.io for more info.",
author = "Jupyter Development Team",
author_email = "jupyter@googlegroups.com",
url = "http://jupyter.org",
@@ -257,34 +265,44 @@ def js_css_first(cls, strict=True):
return Command
class bdist_egg_disabled(bdist_egg):
"""Disabled version of bdist_egg
Prevents setup.py install performing setuptools' default easy_install,
which it should never ever do.
"""
def run(self):
sys.exit("Aborting implicit building of eggs. Use `pip install .` to install from source.")
setup_args['cmdclass'] = {
'js': Bower,
'css': CSS,
'build_py': js_css_first(build_py, strict=is_repo),
'sdist': js_css_first(sdist, strict=True),
'bdist_egg': bdist_egg if 'bdist_egg' in sys.argv else bdist_egg_disabled,
}
# setuptools requirements
-if 'setuptools' in sys.modules:
setup_args['zip_safe'] = False
from setuptools.command.develop import develop
class develop_js_css(develop):
def run(self):
if not self.uninstall:
self.distribution.run_command('js')
self.distribution.run_command('css')
develop.run(self)
setup_args['cmdclass']['develop'] = develop_js_css
setup_args['install_requires'] = install_requires = []
with open('requirements.txt') as f:
for line in f.readlines():
req = line.strip()
if not req or req.startswith('#') or '://' in req:
continue
install_requires.append(req)
#---------------------------------------------------------------------------
# setup

Binary file not shown (image changed: 34 KiB before, 4.2 KiB after).
Some files were not shown because too many files have changed in this diff.