Compare commits

...

168 Commits

Author SHA1 Message Date
Min RK
6f15113e2a link and date for 0.9.0 2018-06-15 15:36:48 +02:00
Min RK
f3f08c9caa 0.9.0 2018-06-15 15:23:25 +02:00
Min RK
c495c4731a Merge pull request #1983 from willingc/test-tilde
add test case for user with tilde
2018-06-15 14:48:49 +02:00
Min RK
e08a50ef66 Merge pull request #1988 from gesiscss/redirects
fix AddSlashHandler for hub_prefix without trailing /
2018-06-15 14:48:14 +02:00
Min RK
fbcd792062 Merge pull request #1984 from chicocvenancio/tilde_safe_in_proxy
mark tilde as safe in proxy routespec quoting FIX:#1982
2018-06-15 14:38:38 +02:00
Min RK
bb81ce0160 also test @ handling in proxy.check_routes
@ and ~ should be the same
2018-06-15 14:33:31 +02:00
Kenan Erdogan
315087d67c fix AddSlashHandler for hub_prefix without trailing / 2018-06-15 13:36:05 +02:00
Chico Venancio
31e6a15a85 mark tilde as safe in proxy routespec quoting FIX:#1982 2018-06-14 18:18:52 -03:00
Carol Willing
aed99d8d19 add test case for user with tilde 2018-06-14 13:24:05 -07:00
Min RK
bedac5f148 Merge pull request #1980 from willingc/pypi-meta
Add info to display at pypi site
2018-06-14 11:51:29 +02:00
Carol Willing
376aa13981 correct link 2018-06-13 14:37:27 -07:00
Carol Willing
4bc8b48763 add info to display at pypi site 2018-06-13 14:35:23 -07:00
Carol Willing
21496890f6 Remove stray bullet that I missed in review 2018-06-13 11:10:41 -07:00
Carol Willing
70dcd50e44 Merge pull request #1976 from minrk/changelog-more
last few things in changelog
2018-06-13 11:09:30 -07:00
Min RK
24094567e5 Merge pull request #1977 from kpfleming/patch-1
Correct 'conda' installation instructions
2018-06-13 15:44:27 +02:00
Kevin P. Fleming
6bd0febbe1 Correct 'conda' installation instructions
JupyterHub packages are in the 'conda-forge' channel of Anaconda packages; if the Anaconda installation doesn't already have 'conda-forge' enabled, `conda install jupyterhub` fails.

Rather than adding instructions to enable 'conda-forge' in Anaconda, this patch modifies the installation command to specify that channel.
2018-06-13 09:42:05 -04:00
Min RK
57075aba52 Add last few entries in changelog for 0.9 2018-06-13 15:15:18 +02:00
Min RK
f0260aae52 add missing expiry fields in rest-api doc 2018-06-13 15:15:09 +02:00
Min RK
edd8e21f71 Merge pull request #1969 from willingc/edit-userenv
Edit and reflow user environment reference
2018-06-13 09:49:23 +02:00
Min RK
681d3ce2d8 Merge pull request #1971 from willingc/contributor-list
Update contributor list for 0.9 release
2018-06-13 09:45:37 +02:00
Carol Willing
97e792ccde Update issue templates 2018-06-12 15:47:05 -07:00
Carol Willing
a5a0543b2a Delete old issue template 2018-06-12 15:42:46 -07:00
Carol Willing
5a810ccba3 Update issue templates 2018-06-12 15:41:30 -07:00
Carol Willing
0a6b2cdadc Merge pull request #1973 from jupyterhub/willingc-patch-1
Create CODE_OF_CONDUCT.md
2018-06-12 15:33:37 -07:00
Carol Willing
08903e7af8 Create PULL_REQUEST_TEMPLATE.md 2018-06-12 15:29:54 -07:00
Carol Willing
78439329c0 Merge pull request #1972 from willingc/insights
Move issue template one level down in .github directory
2018-06-12 15:28:34 -07:00
Carol Willing
4dfd6bc4b9 Create CODE_OF_CONDUCT.md 2018-06-12 15:25:27 -07:00
Carol Willing
574cc39b5f set up pull request template directory 2018-06-12 15:16:02 -07:00
Carol Willing
6fb43a8241 update issue template location to current github recommendation 2018-06-12 15:13:39 -07:00
Carol Willing
84c82fe382 update the contributor list for 0.9 2018-06-12 14:51:51 -07:00
Carol Willing
5e45e76f5b update contributors for 0.9 2018-06-12 14:36:00 -07:00
Carol Willing
92fd819cd6 Merge pull request #1970 from JasonJWilliamsNY/hub-not-found-at-localhost
Hub not found at localhost
2018-06-12 14:04:09 -07:00
Jason Williams
cb5ef0c302 Update troubleshooting.md 2018-06-12 17:01:37 -04:00
Jason Williams
34fab033fe Jupyterhub on Docker add workaround for unable to connect to localhost
Added a command that worked for me to fix the situation that localhost:8000 is unable to reach the hub even though the published command for Docker exposes the correct port.
2018-06-12 16:59:17 -04:00
Carol Willing
37f4c4429e edit and reflow user environment reference 2018-06-12 08:47:22 -07:00
Carol Willing
293410ec94 Merge pull request #1967 from minrk/config-docs
docs: configuring user environments
2018-06-12 07:55:53 -07:00
Min RK
ed6ee27dcd docs: configuring user environments
covers system-wide installation, kernelspec registration, and the differences between containers and host systems
2018-06-12 14:34:26 +02:00
Min RK
ca16ddb7ad back to dev 2018-06-12 14:21:16 +02:00
Min RK
2102c1fd1c 0.9.0rc1 2018-06-12 14:19:59 +02:00
Min RK
aa9676ec5e Merge pull request #1913 from rkdarst/announcement_text
Add customizable announcement text on home,login,logout,spawn
2018-06-12 14:14:21 +02:00
Min RK
5e93c7de4c announcement doc language
per willingc review
2018-06-12 13:48:42 +02:00
Min RK
d22626906b multiline conditionals setting announcement variable in templates
for readability per review by willingc
2018-06-12 13:48:24 +02:00
Min RK
5f91ed044e parametrize test_announcements 2018-06-12 13:47:55 +02:00
Min RK
5c3c7493c1 Merge pull request #1963 from willingc/hooks-doc
add a small section for pre/post spawn hooks
2018-06-11 15:27:39 +02:00
Carol Willing
1b7965092e remove backticks and long for rst format 2018-06-08 14:21:31 -07:00
Carol Willing
ef60be5a99 put backticks outside of link 2018-06-08 14:19:43 -07:00
Carol Willing
f78d652cd6 fix missing brackets 2018-06-08 14:18:14 -07:00
Carol Willing
3650575797 add a small section for pre/post spawn hooks 2018-06-08 14:13:45 -07:00
Tim Head
0f000f6d41 Merge pull request #1961 from willingc/doc-shib
Add link to authenticators used with Shibboleth
2018-06-08 18:17:08 +02:00
Carol Willing
643729ac0c Merge pull request #1962 from chicocvenancio/docs_mysql_dynamic
database docs utf8mb4 collation and some versions of mysql/mariadb
2018-06-08 09:14:04 -07:00
Chico Venancio
91a67bf580 database docs: fix formatting 2018-06-08 13:09:09 -03:00
Chico Venancio
c75eddb730 database docs utf8mb4 collation and some versions of mysql/mariadb 2018-06-08 12:55:02 -03:00
Carol Willing
0f5888ad6c Add link to authenticators used with Shibboleth 2018-06-08 08:22:11 -07:00
Carol Willing
8c48f3b856 Merge pull request #1960 from willingc/db-doc
add database doc section and edits to upgrading db
2018-06-08 08:08:51 -07:00
Carol Willing
6e7e18bc3c add @minrk review comments 2018-06-08 07:34:09 -07:00
Tim Head
3dfd7e5a84 Merge pull request #1958 from willingc/proxy-error
Add error message text
2018-06-08 15:19:27 +02:00
Carol Willing
19ecbf3734 add note about why no sqlite and nfs 2018-06-08 06:06:15 -07:00
Carol Willing
eac3e8ba90 add database doc section and edits to upgrading db 2018-06-08 05:51:00 -07:00
Carol Willing
a7a6829b69 add additional reference per @betatim review 2018-06-08 05:01:32 -07:00
Carol Willing
61299113c8 add error message text 2018-06-07 21:44:18 -07:00
Tim Head
21a57dfa0b Merge pull request #1949 from willingc/npm-doc
clarify that conda installs npm and proxy
2018-06-07 19:52:00 +02:00
Carol Willing
a7226a8231 changes per @minrk review 2018-06-07 09:10:04 -07:00
Min RK
6e3dd21f60 Merge pull request #1952 from willingc/docker-conda
bump miniconda to 4.5.1 in Dockerfile
2018-06-07 10:24:33 +02:00
Min RK
cf049730d4 Merge pull request #1954 from willingc/black-test
Blacken python doc build files
2018-06-07 10:24:14 +02:00
Min RK
cb9ce4d3af Merge pull request #1955 from dtaniwaki/handle-fatal-error
only relay headers from HTTPErrors
2018-06-07 10:22:38 +02:00
Daisuke Taniwaki
925ee1dfb2 Do not refer spawner on fatal errors 2018-06-07 14:53:46 +09:00
Daisuke Taniwaki
5d9122b26c Avoid setting unexpected headers 2018-06-07 14:53:34 +09:00
Carol Willing
6821ad0c59 blacken autodoc sphinx extension 2018-06-06 12:57:14 -07:00
Carol Willing
ff7851ee2e blacken conf.py 2018-06-06 12:52:30 -07:00
Carol Willing
6940ed85b1 bump miniconda to 4.5.1 2018-06-06 08:25:28 -07:00
Carol Willing
3d497a7f43 clarify that conda installs npm and proxy 2018-06-06 06:56:22 -07:00
Carol Willing
cc6968e225 Merge pull request #1942 from minrk/nginx-file
note where nginx config files are typically created.
2018-06-06 06:02:30 -07:00
Carol Willing
a6c517c344 Merge pull request #1947 from minrk/progress-stopping
Avoid showing spawn-pending page when user is stopping
2018-06-06 06:00:58 -07:00
Carol Willing
a3e08b7f52 Merge pull request #1948 from minrk/aclosing
Python 3.5.1 cannot close async iterators
2018-06-06 05:56:00 -07:00
Min RK
14c8d7dc46 Merge pull request #1946 from dtaniwaki/configure-max-inactive-duration
Configure max inactive duration
2018-06-06 12:54:55 +02:00
Daisuke Taniwaki
ac2590c679 Add active_user_window configuration 2018-06-06 19:00:34 +09:00
Min RK
ead13c6a11 further clarify that we are creating a new file, not editing nginx.confg 2018-06-06 12:00:21 +02:00
Min RK
5002ab2990 Python 3.5.1 cannot close async iterators
so provide a null aclosing async context manager that does nothing
2018-06-06 11:43:33 +02:00
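A minimal sketch of the workaround described above (names are illustrative, not the actual JupyterHub code): on Python 3.5.1, `aclosing` is replaced by a no-op async context manager that yields the iterator without ever calling `aclose()`.

```python
class _null_aclosing:
    """No-op stand-in for aclosing() on Python 3.5.1 (illustrative only)."""

    def __init__(self, aiterable):
        self._it = aiterable

    async def __aenter__(self):
        return self._it

    async def __aexit__(self, *exc_info):
        # deliberately do nothing: Python 3.5.1 cannot close async iterators
        return False
```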
Min RK
ab3e7293a4 disable my server link while stop is pending
makes it a little harder to request a spawn while stop is pending
2018-06-06 10:53:50 +02:00
Min RK
062af5e5cb Avoid showing spawn_pending page when pending action is stop
Separate stop_pending page when this occurs,
similar to the old spawn pending spinner without progress events
2018-06-06 10:53:05 +02:00
Carol Willing
92088570ea Merge pull request #1943 from minrk/getuser-delayed
delay call to getuser in token app
2018-06-05 10:18:08 -07:00
Min RK
604ccf515d delay call to getuser in token app
avoids issues with getuser preventing launch, e.g. in weird containers where the current user doesn’t exist
2018-06-05 17:52:00 +02:00
Min RK
ec9b244990 note where nginx config files are typically created. 2018-06-04 11:10:21 +02:00
Min RK
09acdc23b5 Merge pull request #1940 from dtaniwaki/fix-created-columne-error
Handle NULL created column of tokens table
2018-06-04 10:55:20 +02:00
Richard Darst
e7808b50af Add tests of page announcements
- Adds test_pages.py:test_page_contents, which currently tests just
  the page announcement variables.
2018-06-03 01:18:48 +03:00
Richard Darst
9c27095744 Add customizable announcement text on home,login,logout,spawn
- Using the new template_vars setting (#1872), allow the variable
  `announcement` to create a header message on all the pages in the
  title, or the variables `announcement_{home,login,logout,spawn}` to
  set variables on these single pages.
- This is not the most powerful method of putting an announcement into
  the templates, because it requires a server restart to change.  But
  the invasiveness is very low, and allows minimal message
  without having to touch the templates themselves.
- Closes: #1836
2018-06-03 01:18:48 +03:00
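A minimal configuration sketch of the feature described above, assuming the `template_vars` setting from #1872 (the messages are placeholders):

```python
# jupyterhub_config.py
c.JupyterHub.template_vars = {
    # shown on every page that renders the announcement block
    'announcement': 'Maintenance window Saturday 02:00-04:00 UTC',
    # or target a single page instead:
    'announcement_login': 'Sign in with your lab credentials.',
}
```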
Daisuke Taniwaki
690b07982e Handle NULL created column of api_tokens table 2018-06-02 23:55:21 +09:00
Min RK
784e5aa4ee Merge pull request #1926 from minrk/tilde-safe
tilde is a safe character in user URLs
2018-05-30 14:48:35 +02:00
Min RK
29187cab3a Merge pull request #1929 from minrk/pgbin
install psycopg2 from binary
2018-05-29 11:03:41 +02:00
Min RK
43a72807c6 install psycopg2 from binary
it has a new package name for the binary wheel
2018-05-29 10:41:53 +02:00
Min RK
1d1f6f1870 Merge pull request #1923 from nxg/doc-changes-1747
Documentation clarifications (adding explicitness).
2018-05-29 10:21:42 +02:00
Min RK
505a6eb4e3 ensure user subdomains are valid
escape with `_` instead of `%`.

This is not technically rigorous, as collisions are possible (users foo_40 and foo@ have the same domain)
and other domain restrictions are not applied (length, starting characters, etc.).
Username normalization can be used to apply stricter, more rigorous structure.
2018-05-29 10:19:21 +02:00
Min RK
cc49df8147 Merge pull request #1852 from summerswallow-whi/service-info
Attach an info field to the service
2018-05-28 14:57:10 +02:00
Min RK
98d60402b5 add service.info to rest api docs 2018-05-28 14:09:53 +02:00
Min RK
319e8a1062 update service models in tests 2018-05-28 14:09:44 +02:00
Min RK
0c5d564830 tilde is a safe character in user URLs
Chrome unconditionally reverts any not-strictly-necessary escaping in URLs (this seems wrong?)
2018-05-28 13:46:52 +02:00
Norman Gray
c0404cf9d9 Documentation clarifications (adding explicitness).
Addresses issue #1747.

These additions aren't perfect -- it's unfortunate that I've added
mention of reverse proxies on two separate pages.  I don't _think_
these can reasonably be put on the same page -- perhaps a cross
reference?
2018-05-27 18:49:40 +01:00
Min RK
f364661363 Merge pull request #1899 from adelcast/dev/adelcast/kill_proxy_tree
ConfigurableHTTPProxy.stop: kill child processes on Windows case
2018-05-25 15:25:53 +02:00
Min RK
f92d77b06d Merge pull request #1915 from rkdarst/respawn_error_msg
Clarify error message on implicit respawns.
2018-05-25 10:09:35 +02:00
Haw-minn Lu
2cf00e6aae Add info field to service model 2018-05-24 11:19:18 -07:00
Richard Darst
dfdb0cff2b Clarify error message on implicit respawns.
- This message is presented when the last spawn failed, along with a
  HTTP 500.  The current text is quite confusing, especially when the
  problem may just be solvable by trying to respawn again.
2018-05-24 16:07:26 +03:00
Alejandro del Castillo
d0dad84ffa ConfigurableHTTPProxy.stop: kill child processes on Windows case
In the Windows case, the configurable-http-proxy is spawned using a
shell. To stop the proxy, we need to terminate both the main process
(shell) and its child (proxy).

Signed-off-by: Alejandro del Castillo <alejandro.delcastillo@ni.com>
2018-05-23 10:10:50 -05:00
Min RK
1745937f1a back to dev 2018-05-23 16:47:56 +02:00
Min RK
e7eb674a89 0.9.0b3 2018-05-23 16:30:07 +02:00
Min RK
b232633100 Merge pull request #1894 from minrk/db-rollback
Rollback database sessions on SQLAlchemy errors
2018-05-23 16:09:51 +02:00
Carol Willing
6abd19c149 Merge pull request #1911 from minrk/log-classes
log Authenticator and Spawner classes at startup
2018-05-22 11:50:59 -07:00
Min RK
0aa0ff8db7 Merge pull request #1912 from minrk/double-slash
Fix login redirect checking for `//` urls
2018-05-22 15:56:29 +02:00
Min RK
a907429fd4 more test cases for login redirects 2018-05-22 15:40:27 +02:00
Min RK
598b550a67 fix query/hash login redirect handling 2018-05-22 15:40:14 +02:00
Min RK
92bb442494 more robust checking for login redirects outside jupyterhub 2018-05-22 15:40:00 +02:00
Min RK
2d41f6223e log Authenticator and Spawner classes at startup
for better diagnostics
2018-05-22 13:52:41 +02:00
Min RK
791dd5fb9f Merge pull request #1895 from minrk/oauth-commits
avoid creating one huge transaction cleaning up oauth clients
2018-05-22 13:37:56 +02:00
Carol Willing
9a0ccf4c98 Merge pull request #1910 from minrk/ip-typo
default bind url should be on all ips
2018-05-22 01:26:35 -07:00
Min RK
ad2abc5771 default bind url should be on all ips
preserves jupyterhub default behavior

typo introduced in new bind_url config
2018-05-22 09:55:01 +02:00
Min RK
2d99b3943f enable pessimistic connection handling
from the sqlalchemy docs

checks if a connection is valid via `SELECT 1` prior to using it.

Since we have long-running connections, this helps us survive database restarts, disconnects, etc.
2018-05-21 22:14:11 +02:00
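For context, a minimal sketch of the general technique (not necessarily the exact code in this commit) using SQLAlchemy's built-in pre-ping option:

```python
from sqlalchemy import create_engine

# "Pessimistic" disconnect handling: each connection is tested with a
# lightweight ping (e.g. SELECT 1) when checked out of the pool, so stale
# connections left over from a database restart are replaced transparently.
engine = create_engine(
    'postgresql://hub:secret@localhost/jupyterhub',  # illustrative URL
    pool_pre_ping=True,
)
```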
Min RK
a358132f95 remove --rm from docker-db.sh
for easier stop/start testing
2018-05-21 22:12:30 +02:00
Tim Head
09cd37feee Merge pull request #1896 from thoralf-gutierrez/fix-typos-in-config
Fix typos in auth config documentation
2018-05-16 22:37:51 +02:00
Thoralf Gutierrez
0f3610e81d Fix typos in auth config documentation 2018-05-16 10:58:02 -07:00
Min RK
3f97c438e2 avoid creating one huge transaction cleaning up oauth clients 2018-05-15 16:33:50 +02:00
Min RK
42351201d2 back to dev 2018-05-15 16:32:24 +02:00
Min RK
907bbb8e9d 0.9.0b2 2018-05-15 14:03:10 +02:00
Min RK
63f3d8b621 catch database errors in update_last_activity 2018-05-15 13:53:05 +02:00
Min RK
47d6e841fd cache get_current_user result
avoids raising an error rendering templates, etc.
2018-05-15 13:49:38 +02:00
Min RK
e3bb09fabe rollback database session on db errors
ensures reconnect will occur when database connection is lost
2018-05-15 13:49:14 +02:00
Carol Willing
d4e0c01189 Merge pull request #1893 from minrk/version
ensure jupyterhub version matches pep440
2018-05-15 07:40:24 -04:00
Min RK
50370d42b0 ensure jupyterhub version matches pep440
avoids mismatch jupyterhub version and tag in docker builds
2018-05-15 13:19:43 +02:00
Min RK
aa190a80b7 Merge pull request #1891 from minrk/base_url
fix and test bind_url / base_url interactions
2018-05-15 12:07:44 +01:00
Min RK
e48bae77aa Merge pull request #1890 from minrk/default-url
test default_url handling
2018-05-15 10:51:17 +01:00
Min RK
96cf0f99ed fix and test bind_url / base_url interactions 2018-05-15 10:51:11 +02:00
Min RK
f380968049 test default_url handling
- default_url is used even if not logged in
- flesh out docstrings
- pass via settings
2018-05-15 10:15:33 +02:00
Min RK
02468f4625 Merge pull request #1854 from summerswallow-whi/extra_handler
Add custom handlers and allow setting of defaults
2018-05-15 08:55:15 +01:00
Haw-minn Lu
24611f94cf Remove base_url from default_url
Add help to new traits
change extra_page_handler to extra_handler
2018-05-14 11:53:22 -07:00
Min RK
dc75a9a4b7 Merge pull request #1881 from paccorsi/check-post-stop-hook
Check the value of post stop hook
2018-05-14 13:31:33 +01:00
Min RK
33f459a23a Merge pull request #1878 from ausecocloud/master
fix listing of OAuth tokens on tokens page
2018-05-14 13:31:06 +01:00
Min RK
bdcc251002 Merge pull request #1882 from dhirschfeld/patch-1
Allow configuring the heading in spawn.html
2018-05-14 13:30:47 +01:00
Pierre Accorsi
86052ba7b4 Check the value of post stop hook 2018-05-11 10:12:45 -04:00
Dave Hirschfeld
62ebcf55c9 Allow configuring the heading in spawn.html 2018-05-11 13:34:17 +10:00
Haw-minn Lu
80ac2475a0 Restore whitespacing to original 2018-05-10 11:25:02 -07:00
Haw-minn Lu
5179d922f5 Clean up extra handler defaults 2018-05-10 11:22:50 -07:00
Gerhard Weis
26f085a8ed add test for oauth tokens on tokens page 2018-05-10 08:46:28 +10:00
Gerhard Weis
b7d302cc72 fix listing of OAuth tokens on tokens page 2018-05-10 08:46:28 +10:00
Carol Willing
f2941e3631 Merge pull request #1873 from minrk/apitoken-expiry
implement API token expiry
2018-05-09 11:45:41 -04:00
Carol Willing
26a6401af4 Merge pull request #1876 from willingc/sudo-section
refactor sudo example config
2018-05-08 09:23:28 -07:00
Carol Willing
5c8ce338a1 edit per @minrk review 2018-05-08 11:54:38 -04:00
Carol Willing
5addc7bbaf correct directive 2018-05-07 21:03:13 -07:00
Carol Willing
da095170bf remove toctree item 2018-05-07 20:38:15 -07:00
Carol Willing
1aab0a69bd fix typo 2018-05-07 20:31:20 -07:00
Carol Willing
fc8e04b62f reflow templates file 2018-05-07 20:29:13 -07:00
Carol Willing
c6c53b4e10 update index 2018-05-07 20:28:55 -07:00
Carol Willing
9b0219a2d8 break up configuration examples 2018-05-07 20:18:02 -07:00
Carol Willing
6e212fa476 reflow proxy doc 2018-05-07 20:17:14 -07:00
Carol Willing
58f9237b12 refactor sudo example config 2018-05-07 15:38:16 -07:00
Carol Willing
74fd925219 Merge pull request #1864 from datalayer-contrib/docs-sudo
Add Docs about sudo (and remove it from the wiki)
2018-05-07 23:29:08 +02:00
Carol Willing
2696bb97d2 Merge pull request #1875 from willingc/api-redux
add packages to environment.yml
2018-05-07 23:16:53 +02:00
Haw-minn Lu
9cefb27704 Move extra_handlers to fall below builtins in priority 2018-05-07 14:06:34 -07:00
Carol Willing
5e75357b06 add packages to environment.yml 2018-05-07 13:54:06 -07:00
Min RK
79bebb4bc9 Merge pull request #1872 from thedataincubator/template-vars
Allow extra variables to be passed into templates
2018-05-07 20:33:44 +02:00
Eric Charles
0ed88f212b add sudo.md 2018-05-07 19:49:26 +02:00
Eric Charles
a8c1cab5fe add sudo doc 2018-05-07 19:49:26 +02:00
Min RK
e1a6b1a70f Merge pull request #1856 from minrk/whoami-users
note about hub_users in whoami example
2018-05-07 19:47:45 +02:00
Robert Schroll
c95ed16786 Allow extra variables to be passed into templates 2018-05-07 10:47:27 -07:00
Min RK
ec784803b4 remove duplicate whoami-oauth.py from external-oauth example 2018-05-07 15:35:05 +02:00
Min RK
302d7a22d3 leave user-whitelist example in a comment
allow all users by default because default whitelist is confusing
2018-05-07 15:34:33 +02:00
Min RK
7c6591aefe add token expiry to token model 2018-05-07 13:02:26 +02:00
Min RK
58c91e3fd4 implement API token expiry 2018-05-07 13:00:37 +02:00
Min RK
db4cf7ae62 note about hub_users in whoami example
explain what hub_users does and the value in the example
2018-05-07 10:55:39 +02:00
Haw-minn Lu
bc86ee1c31 Add custom handlers and allow setting of defaults 2018-04-27 15:58:59 -07:00
Haw-minn Lu
a73e6f0bf8 Attach an info field to the service 2018-04-27 14:51:55 -07:00
70 changed files with 1929 additions and 626 deletions

.github/ISSUE_TEMPLATE/bug_report.md

@@ -0,0 +1,37 @@
---
name: Bug report
about: Create a report to help us improve
---
Hi! Thanks for using JupyterHub.
If you are reporting an issue with JupyterHub, please use the [GitHub issue](https://github.com/jupyterhub/jupyterhub/issues) search feature to check if your issue has been asked already. If it has, please add your comments to the existing issue.
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
- Running `jupyter troubleshoot` from the command line, if possible, and posting
its output would also be helpful.
- Running in `--debug` mode can also be helpful for troubleshooting.

View File

@@ -0,0 +1,7 @@
---
name: Installation and configuration issues
about: Installation and configuration assistance
---
If you are having issues with installation or configuration, you may ask for help on the JupyterHub gitter channel or file an issue here.

.github/PULL_REQUEST_TEMPLATE/.keep

View File

@@ -1,29 +0,0 @@
Hi! Thanks for using JupyterHub.
If you are reporting an issue with JupyterHub:
- Please use the [GitHub issue](https://github.com/jupyterhub/jupyterhub/issues)
search feature to check if your issue has been asked already. If it has,
please add your comments to the existing issue.
- Where applicable, please fill out the details below to help us troubleshoot
the issue that you are facing. Please be as thorough as you are able to
provide details on the issue.
**How to reproduce the issue**
**What you expected to happen**
**What actually happens**
**Share what version of JupyterHub you are using**
Running `jupyter troubleshoot` from the command line, if possible, and posting
its output would also be helpful.
```
Insert jupyter troubleshoot output here
```

View File

@@ -29,7 +29,7 @@ before_install:
pip install 'mysql-connector<2.2'
elif [[ $JUPYTERHUB_TEST_DB_URL == postgresql* ]]; then
DB=postgres bash ci/init-db.sh
pip install psycopg2
pip install psycopg2-binary
fi
install:
- pip install --upgrade pip

CODE_OF_CONDUCT.md

@@ -0,0 +1 @@
Please refer to [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md).

View File

@@ -35,8 +35,8 @@ RUN apt-get -y update && \
ENV LANG C.UTF-8
# install Python + NodeJS with conda
RUN wget -q https://repo.continuum.io/miniconda/Miniconda3-4.4.10-Linux-x86_64.sh -O /tmp/miniconda.sh && \
echo 'bec6203dbb2f53011e974e9bf4d46e93 */tmp/miniconda.sh' | md5sum -c - && \
RUN wget -q https://repo.continuum.io/miniconda/Miniconda3-4.5.1-Linux-x86_64.sh -O /tmp/miniconda.sh && \
echo '0c28787e3126238df24c5d4858bd0744 */tmp/miniconda.sh' | md5sum -c - && \
bash /tmp/miniconda.sh -f -b -p /opt/conda && \
/opt/conda/bin/conda install --yes -c conda-forge \
python=3.6 sqlalchemy tornado jinja2 traitlets requests pip pycurl \

PULL_REQUEST_TEMPLATE.md

@@ -0,0 +1 @@

View File

@@ -50,37 +50,62 @@ for administration of the Hub and its users.
## Installation
### Check prerequisites
A Linux/Unix based system with the following:
- A Linux/Unix based system
- [Python](https://www.python.org/downloads/) 3.5 or greater
- [nodejs/npm](https://www.npmjs.com/)
- [Python](https://www.python.org/downloads/) 3.4 or greater
- [nodejs/npm](https://www.npmjs.com/) Install a recent version of
[nodejs/npm](https://docs.npmjs.com/getting-started/installing-node)
For example, install it on Linux (Debian/Ubuntu) using:
* If you are using **`conda`**, the nodejs and npm dependencies will be installed for
you by conda.
sudo apt-get install npm nodejs-legacy
* If you are using **`pip`**, install a recent version of
[nodejs/npm](https://docs.npmjs.com/getting-started/installing-node).
For example, install it on Linux (Debian/Ubuntu) using:
The `nodejs-legacy` package installs the `node` executable and is currently
required for npm to work on Debian/Ubuntu.
```
sudo apt-get install npm nodejs-legacy
```
The `nodejs-legacy` package installs the `node` executable and is currently
required for npm to work on Debian/Ubuntu.
- TLS certificate and key for HTTPS communication
- Domain name
### Install packages
#### Using `conda`
To install JupyterHub along with its dependencies including nodejs/npm:
```bash
conda install -c conda-forge jupyterhub
```
If you plan to run notebook servers locally, install the Jupyter notebook
or JupyterLab:
```bash
conda install notebook
conda install jupyterlab
```
#### Using `pip`
JupyterHub can be installed with `pip`, and the proxy with `npm`:
```bash
npm install -g configurable-http-proxy
pip3 install jupyterhub
python3 -m pip install jupyterhub
```
If you plan to run notebook servers locally, you will need to install the
[Jupyter notebook](https://jupyter.readthedocs.io/en/latest/install.html)
package:
pip3 install --upgrade notebook
python3 -m pip install --upgrade notebook
### Run the Hub server

View File

@@ -8,7 +8,7 @@ export MYSQL_HOST=127.0.0.1
export MYSQL_TCP_PORT=${MYSQL_TCP_PORT:-13306}
export PGHOST=127.0.0.1
NAME="hub-test-$DB"
DOCKER_RUN="docker run --rm -d --name $NAME"
DOCKER_RUN="docker run -d --name $NAME"
docker rm -f "$NAME" 2>/dev/null || true
@@ -47,4 +47,4 @@ Set these environment variables:
export MYSQL_HOST=127.0.0.1
export MYSQL_TCP_PORT=$MYSQL_TCP_PORT
export PGHOST=127.0.0.1
"
"

View File

@@ -15,3 +15,5 @@ dependencies:
- pip:
- python-oauth2
- recommonmark==0.4.0
- async_generator
- prometheus_client

View File

@@ -252,6 +252,17 @@ paths:
$ref: '#/definitions/Token'
post:
summary: Create a new token for the user
parameters:
- name: expires_in
type: number
required: false
in: body
description: lifetime (in seconds) after which the requested token will expire.
- name: note
type: string
required: false
in: body
description: A note attached to the token for future bookkeeping
responses:
'201':
description: The newly created token
@@ -689,6 +700,11 @@ definitions:
description: The command used to start the service (if managed)
items:
type: string
info:
type: object
description: |
Additional information a deployment can attach to a service.
JupyterHub does not use this field.
Token:
type: object
properties:
@@ -711,6 +727,10 @@ definitions:
type: string
format: date-time
description: Timestamp when this token was created
expires_at:
type: string
format: date-time
description: Timestamp when this token expires. Null if there is no expiry.
last_activity:
type: string
format: date-time
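For illustration, requesting a token with the new `expires_in` and `note` fields might look like the following (the endpoint path, hostname, and API token are assumptions, not part of this diff):

```python
import requests

resp = requests.post(
    'https://hub.example.com/hub/api/users/myname/tokens',
    headers={'Authorization': 'token API_TOKEN'},  # placeholder token
    json={'expires_in': 3600, 'note': 'short-lived automation token'},
)
resp.raise_for_status()
print(resp.json()['expires_at'])  # None for tokens created without expiry
```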

View File

@@ -9,7 +9,7 @@ command line for details.
## 0.9
### 0.9.0
### [0.9.0] 2018-06-15
JupyterHub 0.9 is a major upgrade of JupyterHub.
There are several changes to the database schema,
@@ -51,7 +51,7 @@ and tornado < 5.0.
Sets ip, port, base_url all at once.
- Add `JupyterHub.hub_bind_url` for setting the full host+port of the Hub.
`hub_bind_url` supports unix domain sockets, e.g.
`unix+http://%2Fsrv%2Fjupytrehub.sock`
`unix+http://%2Fsrv%2Fjupyterhub.sock`
- Deprecate `JupyterHub.hub_connect_port` config in favor of `JupyterHub.hub_connect_url`. `hub_connect_ip` is not deprecated
and can still be used in the common case where only the ip address of the hub differs from the bind ip.
@@ -93,6 +93,12 @@ and tornado < 5.0.
- Add session-id cookie, enabling immediate revocation of login tokens.
- Authenticators may specify that users are admins by specifying the `admin` key when returning the user model as a dict.
- Added "Start All" button to admin page for launching all user servers at once.
- Services have an `info` field which is a dictionary.
This is accessible via the REST API.
- `JupyterHub.extra_handlers` allows defining additional tornado RequestHandlers attached to the Hub.
- API tokens may now expire.
Expiry is available in the REST model as `expires_at`,
and settable when creating API tokens by specifying `expires_in`.
#### Fixed
@@ -113,6 +119,11 @@ and tornado < 5.0.
- Various fixes in race conditions and performance improvements with the default proxy.
- Fixes for CORS headers
- Stop setting `.form-control` on spawner form inputs unconditionally.
- Better recovery from database errors and database connection issues
without having to restart the Hub.
- Fix handling of `~` character in usernames.
- Fix jupyterhub startup when `getpass.getuser()` would fail,
e.g. due to missing entry in passwd file in containers.
## 0.8
@@ -368,7 +379,8 @@ Fix removal of `/login` page in 0.4.0, breaking some OAuth providers.
First preview release
[Unreleased]: https://github.com/jupyterhub/jupyterhub/compare/0.8.1...HEAD
[Unreleased]: https://github.com/jupyterhub/jupyterhub/compare/0.9.0...HEAD
[0.9.0]: https://github.com/jupyterhub/jupyterhub/compare/0.8.1...0.9.0
[0.8.1]: https://github.com/jupyterhub/jupyterhub/compare/0.8.0...0.8.1
[0.8.0]: https://github.com/jupyterhub/jupyterhub/compare/0.7.2...0.8.0
[0.7.2]: https://github.com/jupyterhub/jupyterhub/compare/0.7.1...0.7.2

View File

@@ -35,12 +35,14 @@ author = u'Project Jupyter team'
# Autopopulate version
from os.path import dirname
docs = dirname(dirname(__file__))
root = dirname(docs)
sys.path.insert(0, root)
sys.path.insert(0, os.path.join(docs, 'sphinxext'))
import jupyterhub
# The short X.Y version.
version = '%i.%i' % jupyterhub.version_info[:2]
# The full version, including alpha/beta/rc tags.
@@ -56,12 +58,10 @@ default_role = 'literal'
# -- Source -------------------------------------------------------------
source_parsers = {
'.md': 'recommonmark.parser.CommonMarkParser',
}
source_parsers = {'.md': 'recommonmark.parser.CommonMarkParser'}
source_suffix = ['.rst', '.md']
#source_encoding = 'utf-8-sig'
# source_encoding = 'utf-8-sig'
# -- Options for HTML output ----------------------------------------------
@@ -96,7 +96,7 @@ html_sidebars = {
'navigation.html',
'relations.html',
'sourcelink.html',
],
]
}
htmlhelp_basename = 'JupyterHubdoc'
@@ -104,38 +104,40 @@ htmlhelp_basename = 'JupyterHubdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
#'papersize': 'letterpaper',
#'pointsize': '10pt',
#'preamble': '',
#'figure_align': 'htbp',
# 'papersize': 'letterpaper',
# 'pointsize': '10pt',
# 'preamble': '',
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'JupyterHub.tex', u'JupyterHub Documentation',
u'Project Jupyter team', 'manual'),
(
master_doc,
'JupyterHub.tex',
u'JupyterHub Documentation',
u'Project Jupyter team',
'manual',
)
]
#latex_logo = None
#latex_use_parts = False
#latex_show_pagerefs = False
#latex_show_urls = False
#latex_appendices = []
#latex_domain_indices = True
# latex_logo = None
# latex_use_parts = False
# latex_show_pagerefs = False
# latex_show_urls = False
# latex_appendices = []
# latex_domain_indices = True
# -- manual page output -------------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'jupyterhub', u'JupyterHub Documentation',
[author], 1)
]
man_pages = [(master_doc, 'jupyterhub', u'JupyterHub Documentation', [author], 1)]
#man_show_urls = False
# man_show_urls = False
# -- Texinfo output -----------------------------------------------------
@@ -144,15 +146,21 @@ man_pages = [
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'JupyterHub', u'JupyterHub Documentation',
author, 'JupyterHub', 'One line description of project.',
'Miscellaneous'),
(
master_doc,
'JupyterHub',
u'JupyterHub Documentation',
author,
'JupyterHub',
'One line description of project.',
'Miscellaneous',
)
]
#texinfo_appendices = []
#texinfo_domain_indices = True
#texinfo_show_urls = 'footnote'
#texinfo_no_detailmenu = False
# texinfo_appendices = []
# texinfo_domain_indices = True
# texinfo_show_urls = 'footnote'
# texinfo_no_detailmenu = False
# -- Epub output --------------------------------------------------------
@@ -179,6 +187,7 @@ else:
# readthedocs.org uses their theme by default, so no need to specify it
# build rest-api, since RTD doesn't run make
from subprocess import check_call as sh
sh(['make', 'rest-api'], cwd=docs)
# -- Spell checking -------------------------------------------------------
@@ -190,4 +199,4 @@ except ImportError:
else:
extensions.append("sphinxcontrib.spelling")
spelling_word_list_filename='spelling_wordlist.txt'
spelling_word_list_filename = 'spelling_wordlist.txt'

View File

@@ -3,38 +3,65 @@
Project Jupyter thanks the following people for their help and
contribution on JupyterHub:
- adelcast
- Analect
- anderbubble
- anikitml
- ankitksharma
- apetresc
- athornton
- barrachri
- BerserkerTroll
- betatim
- Carreau
- cfournie
- charnpreetsingh
- chicovenancio
- cikao
- ckald
- cmoscardi
- consideRatio
- cqzlxl
- CRegenschein
- cwaldbieser
- danielballen
- danoventa
- daradib
- darky2004
- datapolitan
- dblockow-d2dcrc
- DeepHorizons
- DerekHeldtWerle
- dhirschfeld
- dietmarw
- dingc3
- dmartzol
- DominicFollettSmith
- dsblank
- dtaniwaki
- echarles
- ellisonbg
- emmanuel
- evanlinde
- Fokko
- fperez
- franga2000
- GladysNalvarte
- glenak1911
- gweis
- iamed18
- jamescurtin
- JamiesHQ
- JasonJWilliamsNY
- jbweston
- jdavidheiser
- jencabral
- jhamrick
- jkinkead
- johnkpark
- josephtate
- jzf2101
- karfai
- kinuax
- KrishnaPG
- kroq-gar78
@@ -44,27 +71,44 @@ contribution on JupyterHub:
- minrk
- mistercrunch
- Mistobaan
- mpacer
- mwmarkland
- ndly
- nthiery
- nxg
- ObiWahn
- ozancaglayan
- paccorsi
- parente
- PeterDaveHello
- peterruppel
- phill84
- pjamason
- prasadkatti
- rafael-ladislau
- rcthomas
- rgbkrk
- rkdarst
- robnagler
- rschroll
- ryanlovett
- sangramga
- Scrypy
- schon
- shreddd
- Siecje
- smiller5678
- spoorthyv
- ssanderson
- summerswallow
- syutbai
- takluyver
- temogen
- ThomasMChen
- Thoralf Gutierrez
- timfreund
- TimShawver
- tklever
- Todd-Z-Li
- toobaz
- tsaeger

View File

@@ -35,6 +35,10 @@ Configuring only the main IP and port of JupyterHub should be sufficient for
most deployments of JupyterHub. However, more customized scenarios may need
additional networking details to be configured.
Note that `c.JupyterHub.ip` and `c.JupyterHub.port` are single values,
not tuples or lists: JupyterHub listens to only a single IP address and
port.
## Set the Proxy's REST API communication URL (optional)
By default, this REST API listens on port 8081 of `localhost` only.
@@ -86,3 +90,12 @@ configuration for, e.g. docker, is:
c.JupyterHub.hub_ip = '0.0.0.0' # listen on all interfaces
c.JupyterHub.hub_connect_ip = '10.0.1.4' # ip as seen on the docker network. Can also be a hostname.
```
## Adjusting the hub's URL
The hub will most commonly be running on a hostname of its own. If it
is not (for example, if the hub is being reverse-proxied and exposed at
a URL such as `https://proxy.example.org/jupyter/`), then you will need
to tell JupyterHub the base URL of the service. In such a case, it is
both necessary and sufficient to set
`c.JupyterHub.base_url = '/jupyter/'` in the configuration.

View File

@@ -72,8 +72,13 @@ would be the needed configuration:
If SSL termination happens outside of the Hub
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In certain cases, e.g. behind `SSL termination in NGINX <https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/>`_,
allowing no SSL running on the hub may be the desired configuration option.
In certain cases, for example if the hub is running behind a reverse proxy, and
`SSL termination is being provided by NGINX <https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/>`_,
it is reasonable to run the hub without SSL.
To achieve this, simply omit the configuration settings
``c.JupyterHub.ssl_key`` and ``c.JupyterHub.ssl_cert``
(setting them to ``None`` does not have the same effect, and is an error).
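A minimal sketch, assuming the reverse proxy runs on the same host and
terminates SSL itself::

    # jupyterhub_config.py: note that ssl_key and ssl_cert are not set at all
    c.JupyterHub.ip = '127.0.0.1'  # only accept connections from the local proxy
    c.JupyterHub.port = 8000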
.. _cookie-secret:

View File

@@ -59,6 +59,9 @@ Contents
* :doc:`reference/rest`
* :doc:`reference/upgrading`
* :doc:`reference/config-examples`
* :doc:`reference/config-ghoauth`
* :doc:`reference/config-proxy`
* :doc:`reference/config-sudo`
**API Reference**

View File

@@ -5,20 +5,27 @@
Before installing JupyterHub, you will need:
- a Linux/Unix based system
- [Python](https://www.python.org/downloads/) 3.4 or greater. An understanding
- [Python](https://www.python.org/downloads/) 3.5 or greater. An understanding
of using [`pip`](https://pip.pypa.io/en/stable/) or
[`conda`](https://conda.io/docs/get-started.html) for
installing Python packages is helpful.
- [nodejs/npm](https://www.npmjs.com/). [Install nodejs/npm](https://docs.npmjs.com/getting-started/installing-node),
using your operating system's package manager. For example, install on Linux
Debian/Ubuntu using:
using your operating system's package manager.
```bash
sudo apt-get install npm nodejs-legacy
```
* If you are using **`conda`**, the nodejs and npm dependencies will be installed for
you by conda.
* If you are using **`pip`**, install a recent version of
[nodejs/npm](https://docs.npmjs.com/getting-started/installing-node).
For example, install it on Linux (Debian/Ubuntu) using:
```
sudo apt-get install npm nodejs-legacy
```
The `nodejs-legacy` package installs the `node` executable and is currently
required for npm to work on Debian/Ubuntu.
The `nodejs-legacy` package installs the `node` executable and is currently
required for `npm` to work on Debian/Ubuntu.
- TLS certificate and key for HTTPS communication
- Domain name

View File

@@ -38,6 +38,8 @@ with any provider, is also available.
- ldapauthenticator for LDAP
- tmpauthenticator for temporary accounts
- For Shibboleth, [jhub_shibboleth_auth](https://github.com/gesiscss/jhub_shibboleth_auth)
and [jhub_remote_user_authenticator](https://github.com/cwaldbieser/jhub_remote_user_authenticator)
## Technical Overview of Authentication
@@ -206,7 +208,13 @@ class MyAuthenticator(Authenticator):
spawner.environment['UPSTREAM_TOKEN'] = auth_state['upstream_token']
```
## pre_spawn_start and post_spawn_stop hooks
Authenticators use two hooks, [pre_spawn_start(user, spawner)][] and
[post_spawn_stop(user, spawner)][], to pass additional state information
between the authenticator and a spawner. These hooks are typically used for
auth-related startup (e.g. opening a PAM session) and auth-related cleanup
(e.g. closing a PAM session).
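A minimal sketch of an Authenticator using both hooks (illustrative only; the class name, environment variable, and cleanup are placeholders):

```python
from jupyterhub.auth import Authenticator

class MyAuthenticator(Authenticator):
    def pre_spawn_start(self, user, spawner):
        # auth-related startup, e.g. open a PAM session and pass state
        # to the single-user server through its environment
        spawner.environment['EXAMPLE_SESSION'] = 'opened-for-' + user.name

    def post_spawn_stop(self, user, spawner):
        # auth-related cleanup, e.g. close the PAM session opened above
        pass
```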
## JupyterHub as an OAuth provider

View File

@@ -1,281 +1,8 @@
# Configuration examples
This section provides examples, including configuration files and tips, for the
following configurations:
The following sections provide examples, including configuration files and tips, for the
following:
- Using GitHub OAuth
- Using nginx reverse proxy
## Using GitHub OAuth
In this example, we show a configuration file for a fairly standard JupyterHub
deployment with the following assumptions:
* Running JupyterHub on a single cloud server
* Using SSL on the standard HTTPS port 443
* Using GitHub OAuth (using oauthenticator) for login
* Using the default spawner (to configure other spawners, uncomment and edit
`spawner_class` as well as follow the instructions for your desired spawner)
* Users exist locally on the server
* Users' notebooks to be served from `~/assignments` to allow users to browse
for notebooks within other users' home directories
* You want the landing page for each user to be a `Welcome.ipynb` notebook in
their assignments directory.
* All runtime files are put into `/srv/jupyterhub` and log files in `/var/log`.
The `jupyterhub_config.py` file would have these settings:
```python
# jupyterhub_config.py file
c = get_config()
import os
pjoin = os.path.join
runtime_dir = os.path.join('/srv/jupyterhub')
ssl_dir = pjoin(runtime_dir, 'ssl')
if not os.path.exists(ssl_dir):
os.makedirs(ssl_dir)
# Allows multiple single-server per user
c.JupyterHub.allow_named_servers = True
# https on :443
c.JupyterHub.port = 443
c.JupyterHub.ssl_key = pjoin(ssl_dir, 'ssl.key')
c.JupyterHub.ssl_cert = pjoin(ssl_dir, 'ssl.cert')
# put the JupyterHub cookie secret and state db
# in /var/run/jupyterhub
c.JupyterHub.cookie_secret_file = pjoin(runtime_dir, 'cookie_secret')
c.JupyterHub.db_url = pjoin(runtime_dir, 'jupyterhub.sqlite')
# or `--db=/path/to/jupyterhub.sqlite` on the command-line
# use GitHub OAuthenticator for local users
c.JupyterHub.authenticator_class = 'oauthenticator.LocalGitHubOAuthenticator'
c.GitHubOAuthenticator.oauth_callback_url = os.environ['OAUTH_CALLBACK_URL']
# create system users that don't exist yet
c.LocalAuthenticator.create_system_users = True
# specify users and admin
c.Authenticator.whitelist = {'rgbkrk', 'minrk', 'jhamrick'}
c.Authenticator.admin_users = {'jhamrick', 'rgbkrk'}
# uses the default spawner
# To use a different spawner, uncomment `spawner_class` and set to desired
# spawner (e.g. SudoSpawner). Follow instructions for desired spawner
# configuration.
# c.JupyterHub.spawner_class = 'sudospawner.SudoSpawner'
# start single-user notebook servers in ~/assignments,
# with ~/assignments/Welcome.ipynb as the default landing page
# this config could also be put in
# /etc/jupyter/jupyter_notebook_config.py
c.Spawner.notebook_dir = '~/assignments'
c.Spawner.args = ['--NotebookApp.default_url=/notebooks/Welcome.ipynb']
```
Using the GitHub Authenticator requires a few additional
environment variables to be set prior to launching JupyterHub:
```bash
export GITHUB_CLIENT_ID=github_id
export GITHUB_CLIENT_SECRET=github_secret
export OAUTH_CALLBACK_URL=https://example.com/hub/oauth_callback
export CONFIGPROXY_AUTH_TOKEN=super-secret
# append log output to log file /var/log/jupyterhub.log
jupyterhub -f /etc/jupyterhub/jupyterhub_config.py &>> /var/log/jupyterhub.log
```
## Using a reverse proxy
In the following example, we show configuration files for a JupyterHub server
running locally on port `8000` but accessible from the outside on the standard
SSL port `443`. This could be useful if the JupyterHub server machine is also
hosting other domains or content on `443`. The goal in this example is to
satisfy the following:
* JupyterHub is running on a server, accessed *only* via `HUB.DOMAIN.TLD:443`
* On the same machine, `NO_HUB.DOMAIN.TLD` strictly serves different content,
also on port `443`
* `nginx` or `apache` is used as the public access point (which means that
only nginx/apache will bind to `443`)
* After testing, the server in question should be able to score at least an A on the
Qualys SSL Labs [SSL Server Test](https://www.ssllabs.com/ssltest/)
Let's start out with needed JupyterHub configuration in `jupyterhub_config.py`:
```python
# Force the proxy to only listen to connections to 127.0.0.1
c.JupyterHub.ip = '127.0.0.1'
```
For high-quality SSL configuration, we also generate Diffie-Helman parameters.
This can take a few minutes:
```bash
openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096
```
### nginx
The **`nginx` server config file** is fairly standard fare except for the two
`location` blocks within the `HUB.DOMAIN.TLD` config file:
```bash
# top-level http config for websocket headers
# If Upgrade is defined, Connection = upgrade
# If Upgrade is empty, Connection = close
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# HTTP server to redirect all 80 traffic to SSL/HTTPS
server {
listen 80;
server_name HUB.DOMAIN.TLD;
# Tell all requests to port 80 to be 302 redirected to HTTPS
return 302 https://$host$request_uri;
}
# HTTPS server to handle JupyterHub
server {
listen 443;
ssl on;
server_name HUB.DOMAIN.TLD;
ssl_certificate /etc/letsencrypt/live/HUB.DOMAIN.TLD/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/HUB.DOMAIN.TLD/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
# Managing literal requests to the JupyterHub front end
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# websocket headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
# Managing requests to verify letsencrypt host
location ~ /.well-known {
allow all;
}
}
```
If `nginx` is not running on port 443, substitute `$http_host` for `$host` on
the lines setting the `Host` header.
`nginx` will now be the front facing element of JupyterHub on `443` which means
it is also free to bind other servers, like `NO_HUB.DOMAIN.TLD` to the same port
on the same machine and network interface. In fact, one can simply use the same
server blocks as above for `NO_HUB` and simply add line for the root directory
of the site as well as the applicable location call:
```bash
server {
listen 80;
server_name NO_HUB.DOMAIN.TLD;
# Tell all requests to port 80 to be 302 redirected to HTTPS
return 302 https://$host$request_uri;
}
server {
listen 443;
ssl on;
# INSERT OTHER SSL PARAMETERS HERE AS ABOVE
# SSL cert may differ
# Set the appropriate root directory
root /var/www/html
# Set URI handling
location / {
try_files $uri $uri/ =404;
}
# Managing requests to verify letsencrypt host
location ~ /.well-known {
allow all;
}
}
```
Now restart `nginx`, restart the JupyterHub, and enjoy accessing
`https://HUB.DOMAIN.TLD` while serving other content securely on
`https://NO_HUB.DOMAIN.TLD`.
### Apache
As with nginx above, you can use [Apache](https://httpd.apache.org) as the reverse proxy.
First, we will need to enable the apache modules that we are going to need:
```bash
a2enmod ssl rewrite proxy proxy_http proxy_wstunnel
```
Our Apache configuration is equivalent to the nginx configuration above:
- Redirect HTTP to HTTPS
- Good SSL Configuration
- Support for websockets on any proxied URL
- JupyterHub is running locally at http://127.0.0.1:8000
```bash
# redirect HTTP to HTTPS
Listen 80
<VirtualHost HUB.DOMAIN.TLD:80>
ServerName HUB.DOMAIN.TLD
Redirect / https://HUB.DOMAIN.TLD/
</VirtualHost>
Listen 443
<VirtualHost HUB.DOMAIN.TLD:443>
ServerName HUB.DOMAIN.TLD
# configure SSL
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/HUB.DOMAIN.TLD/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/HUB.DOMAIN.TLD/privkey.pem
SSLProtocol All -SSLv2 -SSLv3
SSLOpenSSLConfCmd DHParameters /etc/ssl/certs/dhparam.pem
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
# Use RewriteEngine to handle websocket connection upgrades
RewriteEngine On
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule /(.*) ws://127.0.0.1:8000/$1 [P,L]
<Location "/">
# preserve Host header to avoid cross-origin problems
ProxyPreserveHost on
# proxy to JupyterHub
ProxyPass http://127.0.0.1:8000/
ProxyPassReverse http://127.0.0.1:8000/
</Location>
</VirtualHost>
```
- Configuring GitHub OAuth
- Using reverse proxy (nginx and Apache)
- Run JupyterHub without root privileges using `sudo`

View File

@@ -0,0 +1,82 @@
# Configure GitHub OAuth
In this example, we show a configuration file for a fairly standard JupyterHub
deployment with the following assumptions:
* Running JupyterHub on a single cloud server
* Using SSL on the standard HTTPS port 443
* Using GitHub OAuth (using oauthenticator) for login
* Using the default spawner (to configure other spawners, uncomment and edit
`spawner_class` as well as follow the instructions for your desired spawner)
* Users exist locally on the server
* Users' notebooks to be served from `~/assignments` to allow users to browse
for notebooks within other users' home directories
* You want the landing page for each user to be a `Welcome.ipynb` notebook in
their assignments directory.
* All runtime files are put into `/srv/jupyterhub` and log files in `/var/log`.
The `jupyterhub_config.py` file would have these settings:
```python
# jupyterhub_config.py file
c = get_config()
import os
pjoin = os.path.join
runtime_dir = os.path.join('/srv/jupyterhub')
ssl_dir = pjoin(runtime_dir, 'ssl')
if not os.path.exists(ssl_dir):
os.makedirs(ssl_dir)
# Allows multiple single-server per user
c.JupyterHub.allow_named_servers = True
# https on :443
c.JupyterHub.port = 443
c.JupyterHub.ssl_key = pjoin(ssl_dir, 'ssl.key')
c.JupyterHub.ssl_cert = pjoin(ssl_dir, 'ssl.cert')
# put the JupyterHub cookie secret and state db
# in /var/run/jupyterhub
c.JupyterHub.cookie_secret_file = pjoin(runtime_dir, 'cookie_secret')
c.JupyterHub.db_url = pjoin(runtime_dir, 'jupyterhub.sqlite')
# or `--db=/path/to/jupyterhub.sqlite` on the command-line
# use GitHub OAuthenticator for local users
c.JupyterHub.authenticator_class = 'oauthenticator.LocalGitHubOAuthenticator'
c.GitHubOAuthenticator.oauth_callback_url = os.environ['OAUTH_CALLBACK_URL']
# create system users that don't exist yet
c.LocalAuthenticator.create_system_users = True
# specify users and admin
c.Authenticator.whitelist = {'rgbkrk', 'minrk', 'jhamrick'}
c.Authenticator.admin_users = {'jhamrick', 'rgbkrk'}
# uses the default spawner
# To use a different spawner, uncomment `spawner_class` and set to desired
# spawner (e.g. SudoSpawner). Follow instructions for desired spawner
# configuration.
# c.JupyterHub.spawner_class = 'sudospawner.SudoSpawner'
# start single-user notebook servers in ~/assignments,
# with ~/assignments/Welcome.ipynb as the default landing page
# this config could also be put in
# /etc/jupyter/jupyter_notebook_config.py
c.Spawner.notebook_dir = '~/assignments'
c.Spawner.args = ['--NotebookApp.default_url=/notebooks/Welcome.ipynb']
```
Using the GitHub Authenticator requires a few additional
environment variables to be set prior to launching JupyterHub:
```bash
export GITHUB_CLIENT_ID=github_id
export GITHUB_CLIENT_SECRET=github_secret
export OAUTH_CALLBACK_URL=https://example.com/hub/oauth_callback
export CONFIGPROXY_AUTH_TOKEN=super-secret
# append log output to log file /var/log/jupyterhub.log
jupyterhub -f /etc/jupyterhub/jupyterhub_config.py &>> /var/log/jupyterhub.log
```

View File

@@ -0,0 +1,192 @@
# Using a reverse proxy
In the following example, we show configuration files for a JupyterHub server
running locally on port `8000` but accessible from the outside on the standard
SSL port `443`. This could be useful if the JupyterHub server machine is also
hosting other domains or content on `443`. The goal in this example is to
satisfy the following:
* JupyterHub is running on a server, accessed *only* via `HUB.DOMAIN.TLD:443`
* On the same machine, `NO_HUB.DOMAIN.TLD` strictly serves different content,
also on port `443`
* `nginx` or `apache` is used as the public access point (which means that
only nginx/apache will bind to `443`)
* After testing, the server in question should be able to score at least an A on the
Qualys SSL Labs [SSL Server Test](https://www.ssllabs.com/ssltest/)
Let's start out with needed JupyterHub configuration in `jupyterhub_config.py`:
```python
# Force the proxy to only listen to connections to 127.0.0.1
c.JupyterHub.ip = '127.0.0.1'
```
For high-quality SSL configuration, we also generate Diffie-Helman parameters.
This can take a few minutes:
```bash
openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096
```
## nginx
This **`nginx` config file** is fairly standard fare except for the two
`location` blocks within the main section for HUB.DOMAIN.TLD.
To create a new site for jupyterhub in your nginx config, make a new file
in `sites.enabled`, e.g. `/etc/nginx/sites.enabled/jupyterhub.conf`:
```bash
# top-level http config for websocket headers
# If Upgrade is defined, Connection = upgrade
# If Upgrade is empty, Connection = close
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# HTTP server to redirect all 80 traffic to SSL/HTTPS
server {
listen 80;
server_name HUB.DOMAIN.TLD;
# Tell all requests to port 80 to be 302 redirected to HTTPS
return 302 https://$host$request_uri;
}
# HTTPS server to handle JupyterHub
server {
listen 443;
ssl on;
server_name HUB.DOMAIN.TLD;
ssl_certificate /etc/letsencrypt/live/HUB.DOMAIN.TLD/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/HUB.DOMAIN.TLD/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
# Managing literal requests to the JupyterHub front end
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# websocket headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
# Managing requests to verify letsencrypt host
location ~ /.well-known {
allow all;
}
}
```
If `nginx` is not running on port 443, substitute `$http_host` for `$host` on
the lines setting the `Host` header.
`nginx` will now be the front facing element of JupyterHub on `443` which means
it is also free to bind other servers, like `NO_HUB.DOMAIN.TLD` to the same port
on the same machine and network interface. In fact, one can simply use the same
server blocks as above for `NO_HUB` and simply add line for the root directory
of the site as well as the applicable location call:
```bash
server {
listen 80;
server_name NO_HUB.DOMAIN.TLD;
# Tell all requests to port 80 to be 302 redirected to HTTPS
return 302 https://$host$request_uri;
}
server {
listen 443;
ssl on;
# INSERT OTHER SSL PARAMETERS HERE AS ABOVE
# SSL cert may differ
# Set the appropriate root directory
root /var/www/html
# Set URI handling
location / {
try_files $uri $uri/ =404;
}
# Managing requests to verify letsencrypt host
location ~ /.well-known {
allow all;
}
}
```
Now restart `nginx`, restart the JupyterHub, and enjoy accessing
`https://HUB.DOMAIN.TLD` while serving other content securely on
`https://NO_HUB.DOMAIN.TLD`.
## Apache
As with nginx above, you can use [Apache](https://httpd.apache.org) as the reverse proxy.
First, enable the Apache modules that we will need:
```bash
a2enmod ssl rewrite proxy proxy_http proxy_wstunnel
```
Our Apache configuration is equivalent to the nginx configuration above:
- Redirect HTTP to HTTPS
- Good SSL Configuration
- Support for websockets on any proxied URL
- JupyterHub is running locally at http://127.0.0.1:8000
```bash
# redirect HTTP to HTTPS
Listen 80
<VirtualHost HUB.DOMAIN.TLD:80>
    ServerName HUB.DOMAIN.TLD
    Redirect / https://HUB.DOMAIN.TLD/
</VirtualHost>

Listen 443
<VirtualHost HUB.DOMAIN.TLD:443>
    ServerName HUB.DOMAIN.TLD

    # configure SSL
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/HUB.DOMAIN.TLD/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/HUB.DOMAIN.TLD/privkey.pem
    SSLProtocol All -SSLv2 -SSLv3
    SSLOpenSSLConfCmd DHParameters /etc/ssl/certs/dhparam.pem
    SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH

    # Use RewriteEngine to handle websocket connection upgrades
    RewriteEngine On
    RewriteCond %{HTTP:Connection} Upgrade [NC]
    RewriteCond %{HTTP:Upgrade} websocket [NC]
    RewriteRule /(.*) ws://127.0.0.1:8000/$1 [P,L]

    <Location "/">
        # preserve Host header to avoid cross-origin problems
        ProxyPreserveHost on
        # proxy to JupyterHub
        ProxyPass http://127.0.0.1:8000/
        ProxyPassReverse http://127.0.0.1:8000/
    </Location>
</VirtualHost>
```

View File

@@ -0,0 +1,254 @@
# Run JupyterHub without root privileges using `sudo`
**Note:** Setting up `sudo` permissions involves many pieces of system
configuration. It is quite easy to get wrong and very difficult to debug.
Only do this if you are very sure you must.
## Overview
There are many Authenticators and Spawners available for JupyterHub. Some, such
as DockerSpawner or OAuthenticator, do not need any elevated permissions. This
document describes how to get the full default behavior of JupyterHub while
running notebook servers as real system users on a shared system without
running the Hub itself as root.
Since JupyterHub needs to spawn processes as other users, the simplest way
is to run it as root, spawning user servers with [setuid](http://linux.die.net/man/2/setuid).
But this isn't especially safe, because you have a process running on the
public web as root.
A **more prudent way** to run the server while preserving functionality is to
create a dedicated user with `sudo` access restricted to launching and
monitoring single-user servers.
## Create a user
To do this, first create a user that will run the Hub:
```bash
sudo useradd rhea
```
This user shouldn't have a login shell or password (creating it as a system account with `useradd -r` is one way to achieve this).
## Set up sudospawner
Next, you will need [sudospawner](https://github.com/jupyter/sudospawner)
to enable monitoring the single-user servers with sudo:
```bash
sudo pip install sudospawner
```
Now we have to configure sudo to allow the Hub user (`rhea`) to launch
the sudospawner script on behalf of our hub users (here `zoe` and `wash`).
We want to confine these permissions to only what we really need.
## Edit `/etc/sudoers`
To do this we add to `/etc/sudoers` (use `visudo` for safe editing of sudoers):
- specify the list of users `JUPYTER_USERS` for whom `rhea` can spawn servers
- set the command `JUPYTER_CMD` that `rhea` can execute on behalf of users
- give `rhea` permission to run `JUPYTER_CMD` on behalf of `JUPYTER_USERS`
without entering a password
For example:
```bash
# comma-separated whitelist of users that can spawn single-user servers
# this should include all of your Hub users
Runas_Alias JUPYTER_USERS = rhea, zoe, wash
# the command(s) the Hub can run on behalf of the above users without needing a password
# the exact path may differ, depending on how sudospawner was installed
Cmnd_Alias JUPYTER_CMD = /usr/local/bin/sudospawner
# actually give the Hub user permission to run the above command on behalf
# of the above users without prompting for a password
rhea ALL=(JUPYTER_USERS) NOPASSWD:JUPYTER_CMD
```
It might be useful to modify `secure_path` in `/etc/sudoers` to add commands to the path.
As an alternative to adding every user to the `/etc/sudoers` file, you can
use a group in the last line above, instead of `JUPYTER_USERS`:
```bash
rhea ALL=(%jupyterhub) NOPASSWD:JUPYTER_CMD
```
If the `jupyterhub` group exists, there will be no need to edit `/etc/sudoers`
again. A new user will gain access to the application when added to the group:
```bash
$ adduser -G jupyterhub newuser
```
## Test `sudo` setup
Test that the new user doesn't need to enter a password to run the sudospawner
command.
This should prompt for your password to switch to rhea, but *not* prompt for
any password for the second switch. It should show some help output about
logging options:
```bash
$ sudo -u rhea sudo -n -u $USER /usr/local/bin/sudospawner --help
Usage: /usr/local/bin/sudospawner [OPTIONS]
Options:
--help show this help information
...
```
And this should fail:
```bash
$ sudo -u rhea sudo -n -u $USER echo 'fail'
sudo: a password is required
```
## Enable PAM for non-root
By default, [PAM authentication](http://en.wikipedia.org/wiki/Pluggable_authentication_module)
is used by JupyterHub. To use PAM, the process may need to be able to read
the shadow password database.
### Shadow group (Linux)
```bash
$ ls -l /etc/shadow
-rw-r----- 1 root shadow 2197 Jul 21 13:41 shadow
```
If there's already a shadow group, you are set. If its permissions are more like:
```bash
$ ls -l /etc/shadow
-rw------- 1 root wheel 2197 Jul 21 13:41 shadow
```
Then you may want to add a shadow group, and make the shadow file group-readable:
```bash
$ sudo groupadd shadow
$ sudo chgrp shadow /etc/shadow
$ sudo chmod g+r /etc/shadow
```
We want our new user to be able to read the shadow passwords, so add it to the shadow group:
```bash
$ sudo usermod -a -G shadow rhea
```
If you want JupyterHub to serve pages on a restricted port (such as port 80 for HTTP),
then you will need to give `node` permission to do so:
```bash
sudo setcap 'cap_net_bind_service=+ep' /usr/bin/node
```
Be aware of the consequences: this grants *every* `node` process on the system the ability to bind to privileged ports, not just JupyterHub's proxy.
You may also be interested in limiting the amount of CPU any process can use
on your server. `cpulimit` is a useful tool that is available in many Linux
distributions' package repositories. It can be used to keep any user's process
from using too many CPU cycles. You can configure it according to [these
instructions](http://ubuntuforums.org/showthread.php?t=992706).
### Shadow group (FreeBSD)
**NOTE:** This has not been tested and may not work as expected.
```bash
$ ls -l /etc/spwd.db /etc/master.passwd
-rw------- 1 root wheel 2516 Aug 22 13:35 /etc/master.passwd
-rw------- 1 root wheel 40960 Aug 22 13:35 /etc/spwd.db
```
Add a shadow group if there isn't one, and make the shadow file group-readable:
```bash
$ sudo pw group add shadow
$ sudo chgrp shadow /etc/spwd.db
$ sudo chmod g+r /etc/spwd.db
$ sudo chgrp shadow /etc/master.passwd
$ sudo chmod g+r /etc/master.passwd
```
We want our new user to be able to read the shadow passwords, so add it to the
shadow group:
```bash
$ sudo pw user mod rhea -G shadow
```
## Test that PAM works
We can verify that PAM is working, with:
```bash
$ sudo -u rhea python3 -c "import pamela, getpass; print(pamela.authenticate('$USER', getpass.getpass()))"
Password: [enter your unix password]
```
## Make a directory for JupyterHub
JupyterHub stores its state in a database, so it needs write access to a directory.
The simplest way to deal with this is to make a directory owned by your Hub user,
and use that as the CWD when launching the server.
```bash
$ sudo mkdir /etc/jupyterhub
$ sudo chown rhea /etc/jupyterhub
```
## Start jupyterhub
Finally, start the server as our newly configured user, `rhea`:
```bash
$ cd /etc/jupyterhub
$ sudo -u rhea jupyterhub --JupyterHub.spawner_class=sudospawner.SudoSpawner
```
And try logging in.
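Equivalently, if you prefer a config file over a command-line flag, a minimal sketch of `jupyterhub_config.py` (loaded from the directory you start JupyterHub in, here `/etc/jupyterhub`) would be:
```python
# jupyterhub_config.py: use sudospawner instead of the default LocalProcessSpawner
c.JupyterHub.spawner_class = 'sudospawner.SudoSpawner'
```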
### Troubleshooting: SELinux
If you still get a generic `PermissionError` (`Permission denied`), it's possible SELinux is blocking you.
Here's how you can make a policy module to allow this.
First, put this in a file named `sudo_exec_selinux.te`:
```bash
module sudo_exec 1.1;
require {
    type unconfined_t;
    type sudo_exec_t;
    class file { read entrypoint };
}
#============= unconfined_t ==============
allow unconfined_t sudo_exec_t:file entrypoint;
```
Then run all of these commands as root:
```bash
$ checkmodule -M -m -o sudo_exec_selinux.mod sudo_exec_selinux.te
$ semodule_package -o sudo_exec_selinux.pp -m sudo_exec_selinux.mod
$ semodule -i sudo_exec_selinux.pp
```
### Troubleshooting: PAM session errors
If PAM authentication doesn't work and you see errors for
`login:session-auth`, or similar, consider updating to `master`
and/or incorporating this commit https://github.com/jupyter/jupyterhub/commit/40368b8f555f04ffdd662ffe99d32392a088b1d2
and setting the configuration option `c.PAMAuthenticator.open_sessions = False`.
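In `jupyterhub_config.py`, that option is set like this:
```python
# do not open PAM sessions for spawned servers
c.PAMAuthenticator.open_sessions = False
```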

View File

@@ -0,0 +1,147 @@
# Configuring user environments
Deploying JupyterHub means you are providing Jupyter notebook environments for
multiple users. Often, this includes a desire to configure the user
environment in some way.
Since the `jupyterhub-singleuser` server extends the standard Jupyter notebook
server, most configuration and documentation that applies to Jupyter Notebook
applies to the single-user environments. Configuration of user environments
typically does not occur through JupyterHub itself, but rather through
system-wide configuration of Jupyter, which is inherited by `jupyterhub-singleuser`.
**Tip:** When searching for configuration tips for JupyterHub user
environments, try removing JupyterHub from your search because there are a lot
more people out there configuring Jupyter than JupyterHub and the
configuration is the same.
This section will focus on user environments, including:
- Installing packages
- Configuring Jupyter and IPython
- Installing kernelspecs
- Using containers vs. multi-user hosts
## Installing packages
To make packages available to users, you generally will install packages
system-wide or in a shared environment.
This installation location should always be in the same environment that
`jupyterhub-singleuser` itself is installed in, and must be *readable and
executable* by your users. If you want users to be able to install additional
packages, it must also be *writable* by your users.
If you are using a standard system Python install, you would use:
```bash
sudo python3 -m pip install numpy
```
to install the numpy package in the default system Python 3 environment
(typically `/usr/local`).
You may also use conda to install packages. If you do, you should make sure
that the conda environment has appropriate permissions for users to be able to
run Python code in the env.
## Configuring Jupyter and IPython
[Jupyter](https://jupyter-notebook.readthedocs.io/en/stable/config_overview.html)
and [IPython](https://ipython.readthedocs.io/en/stable/development/config.html)
have their own configuration systems.
As a JupyterHub administrator, you will typically want to install and configure
environments for all JupyterHub users. For example, you may wish for each student in
a class to have the same user environment configuration.
Jupyter and IPython support **"system-wide"** locations for configuration, which
is the logical place to put global configuration that you want to affect all
users. It's generally more efficient to configure user environments "system-wide",
and it's a good idea to avoid creating files in users' home directories.
The typical locations for these config files are:
- **system-wide** in `/etc/{jupyter|ipython}`
- **env-wide** (environment wide) in `{sys.prefix}/etc/{jupyter|ipython}`.
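For instance, to find the env-wide Jupyter config directory for the environment that runs your single-user servers, you can ask that environment's Python for its `sys.prefix`:
```python
import os
import sys

# the env-wide Jupyter config directory for this Python environment
print(os.path.join(sys.prefix, "etc", "jupyter"))
```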
### Example: Enable an extension system-wide
For example, to enable the `cython` IPython extension for all of your users,
create the file `/etc/ipython/ipython_config.py`:
```python
c.InteractiveShellApp.extensions.append("cython")
```
### Example: Enable a Jupyter notebook configuration setting for all users
To enable Jupyter notebook's internal idle-shutdown behavior (requires
notebook ≥ 5.4), set the following in the `/etc/jupyter/jupyter_notebook_config.py`
file:
```python
# shutdown the server after no activity for an hour
c.NotebookApp.shutdown_no_activity_timeout = 60 * 60
# shutdown kernels after no activity for 20 minutes
c.MappingKernelManager.cull_idle_timeout = 20 * 60
# check for idle kernels every two minutes
c.MappingKernelManager.cull_interval = 2 * 60
```
## Installing kernelspecs
You may have multiple Jupyter kernels installed and want to make sure that
they are available to all of your users. This means installing kernelspecs
either system-wide (e.g. in /usr/local/) or in the `sys.prefix` of JupyterHub
itself.
Jupyter kernelspec installation is system wide by default, but some kernels
may default to installing kernelspecs in your home directory. These will need
to be moved system-wide to ensure that they are accessible.
You can see where your kernelspecs are with:
```bash
jupyter kernelspec list
```
### Example: Installing kernels system-wide
Assuming I have a Python 2 and Python 3 environment that I want to make
sure are available, I can install their specs system-wide (in /usr/local) with:
```bash
/path/to/python3 -m IPython kernel install --prefix=/usr/local
/path/to/python2 -m IPython kernel install --prefix=/usr/local
```
## Multi-user hosts vs. Containers
There are two broad categories of user environments that depend on what
Spawner you choose:
- Multi-user hosts (shared system)
- Container-based
How you configure user environments for each category can differ a bit
depending on what Spawner you are using.
The first category is a **shared system (multi-user host)** where
each user has a JupyterHub account and a home directory as well as being
a real system user. In this case, shared configuration and installation
must be in a 'system-wide' location, such as `/etc/` or `/usr/local`
or a custom prefix such as `/opt/conda`.
When JupyterHub uses **container-based** Spawners (e.g. KubeSpawner or
DockerSpawner), the 'system-wide' environment is really the container image
which you are using for users.
In both cases, you want to *avoid putting configuration in user home
directories* because users can change those configuration settings. Also,
home directories typically persist once they are created, so they are
difficult for admins to update later.

View File

@@ -0,0 +1,62 @@
# The Hub's Database
JupyterHub uses a database to store information about users, services, and other
data needed for operating the Hub.
## Default SQLite database
The default database for JupyterHub is a [SQLite](https://sqlite.org) database.
We have chosen SQLite as JupyterHub's default for its lightweight simplicity
in certain uses such as testing, small deployments and workshops.
For production systems, SQLite has some disadvantages when used with JupyterHub:
- `upgrade-db` may not work, and you may need to start with a fresh database
- `downgrade-db` **will not** work if you want to rollback to an earlier
version, so backup the `jupyterhub.sqlite` file before upgrading
The SQLite documentation provides a helpful page about [when to use SQLite and
where a traditional RDBMS may be a better choice](https://sqlite.org/whentouse.html).
## Using an RDBMS (PostgreSQL, MySQL)
When running a long term deployment or a production system, we recommend using
a traditional RDBMS database, such as [PostgreSQL](https://www.postgresql.org)
or [MySQL](https://www.mysql.com), that supports the SQL `ALTER TABLE`
statement.
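JupyterHub's database connection is configured with the `db_url` setting, which accepts a standard SQLAlchemy connection URL. A minimal sketch (the host, database name, and credentials are placeholders):
```python
# jupyterhub_config.py
# requires a PostgreSQL driver such as psycopg2 to be installed
c.JupyterHub.db_url = 'postgresql://jupyterhub:PASSWORD@db.example.com:5432/jupyterhub'
```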
## Notes and Tips
### SQLite
The SQLite database should not be used on NFS. SQLite uses reader/writer locks
to control access to the database. This locking mechanism might not work
correctly if the database file is kept on an NFS filesystem. This is because
`fcntl()` file locking is broken on many NFS implementations. Therefore, you
should avoid putting SQLite database files on NFS, since concurrent access to
the file by multiple processes will not be handled correctly.
### PostgreSQL
We recommend using PostgreSQL for production if you are unsure whether to use
MySQL or PostgreSQL or if you do not have a strong preference. There is
additional configuration required for MySQL that is not needed for PostgreSQL.
### MySQL / MariaDB
- You should use the `pymysql` sqlalchemy provider (the other one, MySQLdb,
isn't available for Python 3).
- You also need to set `pool_recycle` to some value (typically 60 - 300, in
seconds) that depends on your MySQL setup. This is necessary because MySQL
closes connections server-side if they've been idle for a while, and the
connection from the hub will be idle for longer than most connections. This
behavior will lead to frustrating 'the connection has gone away' errors from
sqlalchemy if `pool_recycle` is not set (see the sketch after this list).
- If you use `utf8mb4` collation with MySQL earlier than 5.7.7 or MariaDB
earlier than 10.2.1, you may get a `1709, Index column size too large` error.
To fix this, you need to set `innodb_large_prefix` to enabled and
`innodb_file_format` to `Barracuda` to allow for the index sizes JupyterHub
uses. `row_format` will be set to `DYNAMIC` as long as those options are set
correctly. Later versions of MariaDB and MySQL should set these values by
default, as well as have a default `DYNAMIC` `row_format`, and pose no trouble
to users.
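A sketch of the corresponding MySQL configuration, assuming the `pymysql` driver and that extra `create_engine()` arguments are passed through the `db_kwargs` option (the host, database name, and credentials are placeholders):
```python
# jupyterhub_config.py
c.JupyterHub.db_url = 'mysql+pymysql://jupyterhub:PASSWORD@db.example.com:3306/jupyterhub'
# recycle connections before MySQL's server-side idle timeout closes them
c.JupyterHub.db_kwargs = {'pool_recycle': 300}
```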

View File

@@ -11,6 +11,11 @@ Technical Reference
services
proxy
rest
database
upgrading
templates
config-user-env
config-examples
config-ghoauth
config-proxy
config-sudo

View File

@@ -1,22 +1,26 @@
# Writing a custom Proxy implementation
JupyterHub 0.8 introduced the ability to write a custom implementation of the proxy.
This enables deployments with different needs than the default proxy,
configurable-http-proxy (CHP).
CHP is a single-process nodejs proxy that they Hub manages by default as a subprocess
(it can be run externally, as well, and typically is in production deployments).
JupyterHub 0.8 introduced the ability to write a custom implementation of the
proxy. This enables deployments with different needs than the default proxy,
configurable-http-proxy (CHP). CHP is a single-process nodejs proxy that the
Hub manages by default as a subprocess (it can be run externally, as well, and
typically is in production deployments).
The upside to CHP, and why we use it by default, is that it's easy to install and run (if you have nodejs, you are set!).
The downsides are that it's a single process and does not support any persistence of the routing table.
So if the proxy process dies, your whole JupyterHub instance is inaccessible until the Hub notices, restarts the proxy, and restores the routing table.
For deployments that want to avoid such a single point of failure,
or leverage existing proxy infrastructure in their chosen deployment (such as Kubernetes ingress objects),
the Proxy API provides a way to do that.
The upside to CHP, and why we use it by default, is that it's easy to install
and run (if you have nodejs, you are set!). The downsides are that it's a
single process and does not support any persistence of the routing table. So
if the proxy process dies, your whole JupyterHub instance is inaccessible
until the Hub notices, restarts the proxy, and restores the routing table. For
deployments that want to avoid such a single point of failure, or leverage
existing proxy infrastructure in their chosen deployment (such as Kubernetes
ingress objects), the Proxy API provides a way to do that.
In general, for a proxy to be usable by JupyterHub, it must:
1. support websockets without prior knowledge of the URL where websockets may occur
2. support trie-based routing (i.e. allow different routes on `/foo` and `/foo/bar` and route based on specificity)
1. support websockets without prior knowledge of the URL where websockets may
occur
2. support trie-based routing (i.e. allow different routes on `/foo` and
`/foo/bar` and route based on specificity)
3. adding or removing a route should not cause existing connections to drop
Optionally, if the JupyterHub deployment is to use host-based routing,
@@ -35,10 +39,10 @@ class MyProxy(Proxy):
...
```
## Starting and stopping the proxy
If your proxy should be launched when the Hub starts, you must define how to start and stop your proxy:
If your proxy should be launched when the Hub starts, you must define how
to start and stop your proxy:
```python
from tornado import gen
@@ -55,8 +59,8 @@ class MyProxy(Proxy):
These methods **may** be coroutines.
`c.Proxy.should_start` is a configurable flag that determines whether the Hub should call these methods when the Hub itself starts and stops.
`c.Proxy.should_start` is a configurable flag that determines whether the
Hub should call these methods when the Hub itself starts and stops.
### Purely external proxies
@@ -70,31 +74,30 @@ class MyProxy(Proxy):
should_start = False
```
## Routes
## Adding and removing routes
At its most basic, a Proxy implementation defines a mechanism to add, remove, and retrieve routes.
A proxy that implements these three methods is complete.
At its most basic, a Proxy implementation defines a mechanism to add, remove,
and retrieve routes. A proxy that implements these three methods is complete.
Each of these methods **may** be a coroutine.
**Definition:** routespec
A routespec, which will appear in these methods, is a string describing a route to be proxied,
such as `/user/name/`. A routespec will:
A routespec, which will appear in these methods, is a string describing a
route to be proxied, such as `/user/name/`. A routespec will:
1. always end with `/`
2. always start with `/` if it is a path-based route `/proxy/path/`
3. precede the leading `/` with a host for host-based routing, e.g. `host.tld/proxy/path/`
3. precede the leading `/` with a host for host-based routing, e.g.
`host.tld/proxy/path/`
### Adding a route
When adding a route, JupyterHub may pass a JSON-serializable dict as a `data` argument
that should be attacked to the proxy route.
When that route is retrieved, the `data` argument should be returned as well.
If your proxy implementation doesn't support storing data attached to routes,
then your Python wrapper may have to handle storing the `data` piece itself,
e.g in a simple file or database.
When adding a route, JupyterHub may pass a JSON-serializable dict as a `data`
argument that should be attached to the proxy route. When that route is
retrieved, the `data` argument should be returned as well. If your proxy
implementation doesn't support storing data attached to routes, then your
Python wrapper may have to handle storing the `data` piece itself, e.g. in a
simple file or database.
```python
@gen.coroutine
@@ -113,12 +116,10 @@ proxy.add_route('/user/pgeorgiou/', 'http://127.0.0.1:1227',
{'user': 'pgeorgiou'})
```
### Removing routes
`delete_route()` is given a routespec to delete.
If there is no such route, `delete_route` should still succeed,
but a warning may be issued.
`delete_route()` is given a routespec to delete. If there is no such route,
`delete_route` should still succeed, but a warning may be issued.
```python
@gen.coroutine
@@ -126,18 +127,17 @@ def delete_route(self, routespec):
"""Delete the route"""
```
### Retrieving routes
For retrieval, you only *need* to implement a single method that retrieves all routes.
The return value for this function should be a dictionary, keyed by `routespect`,
of dicts whose keys are the same three arguments passed to `add_route`
(`routespec`, `target`, `data`)
For retrieval, you only *need* to implement a single method that retrieves all
routes. The return value for this function should be a dictionary, keyed by
`routespec`, of dicts whose keys are the same three arguments passed to
`add_route` (`routespec`, `target`, `data`).
```python
@gen.coroutine
def get_all_routes(self):
"""Return all routes, keyed by routespec""""
"""Return all routes, keyed by routespec"""
```
```python
@@ -150,15 +150,15 @@ def get_all_routes(self):
}
```
## Note on activity tracking
#### Note on activity tracking
JupyterHub can track activity of users, for use in services such as culling idle servers.
As of JupyterHub 0.8, this activity tracking is the responsibility of the proxy.
If your proxy implementation can track activity to endpoints,
it may add a `last_activity` key to the `data` of routes retrieved in `.get_all_routes()`.
If present, the value of `last_activity` should be an [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) UTC date string:
JupyterHub can track activity of users, for use in services such as culling
idle servers. As of JupyterHub 0.8, this activity tracking is the
responsibility of the proxy. If your proxy implementation can track activity
to endpoints, it may add a `last_activity` key to the `data` of routes
retrieved in `.get_all_routes()`. If present, the value of `last_activity`
should be an [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) UTC date
string:
```python
{
@@ -173,11 +173,9 @@ If present, the value of `last_activity` should be an [ISO8601](https://en.wikip
}
```
If the proxy does not track activity, then only activity to the Hub itself is
tracked, and services such as cull-idle will not work.
If the proxy does not track activity, then only activity to the Hub itself is tracked,
and services such as cull-idle will not work.
Now that `notebook-5.0` tracks activity internally,
we can retrieve activity information from the single-user servers instead,
removing the need to track activity in the proxy.
But this is not yet implemented in JupyterHub 0.8.0.
Now that `notebook-5.0` tracks activity internally, we can retrieve activity
information from the single-user servers instead, removing the need to track
activity in the proxy. But this is not yet implemented in JupyterHub 0.8.0.
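Putting these pieces together, a minimal, purely illustrative in-memory implementation of the three route methods might look like the sketch below. It assumes the actual proxy server is managed externally and does nothing but bookkeeping, so it is not usable as a real proxy:
```python
from tornado import gen

from jupyterhub.proxy import Proxy


class DictProxy(Proxy):
    """Illustrative Proxy that only keeps routes in a dict.

    A real implementation would also update the external proxy server
    on each call; this sketch only demonstrates the API surface.
    """

    should_start = False  # the proxy itself is started and stopped externally

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._routes = {}

    @gen.coroutine
    def add_route(self, routespec, target, data):
        self._routes[routespec] = {
            'routespec': routespec,
            'target': target,
            'data': data,
        }

    @gen.coroutine
    def delete_route(self, routespec):
        # deleting a nonexistent route should still succeed
        self._routes.pop(routespec, None)

    @gen.coroutine
    def get_all_routes(self):
        return dict(self._routes)
```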

View File

@@ -1,28 +1,55 @@
# Templates
# Working with templates and UI
The pages of the JupyterHub application are generated from [Jinja](http://jinja.pocoo.org/) templates. These allow the header, for example, to be defined once and incorporated into all pages. By providing your own templates, you can have complete control over JupyterHub's appearance.
The pages of the JupyterHub application are generated from
[Jinja](http://jinja.pocoo.org/) templates. These allow the header, for
example, to be defined once and incorporated into all pages. By providing
your own templates, you can have complete control over JupyterHub's
appearance.
## Custom Templates
JupyterHub will look for custom templates in all of the paths in the `JupyterHub.template_paths` configuration option, falling back on the [default templates](https://github.com/jupyterhub/jupyterhub/tree/master/share/jupyterhub/templates) if no custom template with that name is found. (This fallback behavior is new in version 0.9; previous versions searched only those paths explicitly included in `template_paths`.) This means you can override as many or as few templates as you desire.
JupyterHub will look for custom templates in all of the paths in the
`JupyterHub.template_paths` configuration option, falling back on the
[default templates](https://github.com/jupyterhub/jupyterhub/tree/master/share/jupyterhub/templates)
if no custom template with that name is found. This fallback
behavior is new in version 0.9; previous versions searched only those paths
explicitly included in `template_paths`. You may override as many
or as few templates as you desire.
## Extending Templates
Jinja provides a mechanism to [extend templates](http://jinja.pocoo.org/docs/2.10/templates/#template-inheritance). A base template can define a `block`, and child templates can replace or supplement the material in the block. The [JupyterHub templates](https://github.com/jupyterhub/jupyterhub/tree/master/share/jupyterhub/templates) make extensive use of this feature, which allows you to customize parts of the interface easily.
Jinja provides a mechanism to [extend templates](http://jinja.pocoo.org/docs/2.10/templates/#template-inheritance).
A base template can define a `block`, and child templates can replace or
supplement the material in the block. The
[JupyterHub templates](https://github.com/jupyterhub/jupyterhub/tree/master/share/jupyterhub/templates)
make extensive use of blocks, which allows you to customize parts of the
interface easily.
In general, a child template can extend a base template, `base.html`, by beginning with
```
In general, a child template can extend a base template, `base.html`, by beginning with:
```html
{% extends "base.html" %}
```
This works, unless you are trying to extend the default template for the same file name. Starting in version 0.9, you may refer to the base file with a `templates/` prefix. Thus, if you are writing a custom `base.html`, start it with
```
This works, unless you are trying to extend the default template for the same
file name. Starting in version 0.9, you may refer to the base file with a
`templates/` prefix. Thus, if you are writing a custom `base.html`, start the
file with this block:
```html
{% extends "templates/base.html" %}
```
By defining `block`s with same name as in the base template, child templates can replace those sections with custom content. The content from the base template can be included with the `{{ super() }}` directive.
By defining `block`s with same name as in the base template, child templates
can replace those sections with custom content. The content from the base
template can be included with the `{{ super() }}` directive.
### Example
To add an additional message to the spawn-pending page, below the existing text about the server starting up, place this content in a file named `spawn_pending.html` in a directory included in the `JupyterHub.template_paths` configuration option.
To add an additional message to the spawn-pending page, below the existing
text about the server starting up, place this content in a file named
`spawn_pending.html` in a directory included in the
`JupyterHub.template_paths` configuration option.
```html
{% extends "templates/spawn_pending.html" %}
@@ -32,3 +59,35 @@ To add an additional message to the spawn-pending page, below the existing text
<p>Patience is a virtue.</p>
{% endblock %}
```
## Page Announcements
To add announcements to be displayed on a page, you have two options:
- Extend the page templates as described above
- Use configuration variables
### Announcement Configuration Variables
If you set the configuration variable `JupyterHub.template_vars =
{'announcement': 'some_text'}`, the given `some_text` will be placed at
the top of all pages. The more specific variables
`announcement_login`, `announcement_spawn`, `announcement_home`, and
`announcement_logout` only show on their
respective pages (overriding the global `announcement` variable).
Note that changing these variables requires a restart, unlike direct
template extension.
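For example, in `jupyterhub_config.py` (the message text is just an illustration):
```python
c.JupyterHub.template_vars = {
    'announcement_login': 'Welcome! Log in with your example.org account.',
    'announcement_spawn': 'Servers may take a little longer to start today.',
}
```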
You can get the same effect by extending templates, which allows you
to update the messages without restarting. Set
`c.JupyterHub.template_paths` as mentioned above, and then create a
template (for example, `login.html`) with:
```html
{% extends "templates/login.html" %}
{% set announcement = 'some message' %}
```
Extending `page.html` puts the message on all pages, but note that
extending `page.html` takes precedence over an extension of a specific
page (unlike the variable-based approach above).

View File

@@ -2,30 +2,22 @@
From time to time, you may wish to upgrade JupyterHub to take advantage
of new releases. Much of this process is automated using scripts,
such as those generated by alembic for database upgrades. Before upgrading a
JupyterHub deployment, it's critical to backup your data and configurations
before shutting down the JupyterHub process and server.
such as those generated by alembic for database upgrades. Whether you
are using the default SQLite database or an RDBMS, such as PostgreSQL or
MySQL, the process follows similar steps.
## Databases: SQLite (default) or RDBMS (PostgreSQL, MySQL)
**Before upgrading a JupyterHub deployment**, it's critical to backup your data
and configurations before shutting down the JupyterHub process and server.
The default database for JupyterHub is a [SQLite](https://sqlite.org) database.
We have chosen SQLite as JupyterHub's default for its lightweight simplicity
in certain uses such as testing, small deployments and workshops.
## Note about upgrading the SQLite database
When running a long term deployment or a production system, we recommend using
a traditional RDBMS database, such as [PostgreSQL](https://www.postgresql.org)
or [MySQL](https://www.mysql.com), that supports the SQL `ALTER TABLE`
statement.
For production systems, SQLite has some disadvantages when used with JupyterHub:
When used in production systems, SQLite has some disadvantages when it
comes to upgrading JupyterHub. These are:
- `upgrade-db` may not work, and you may need to start with a fresh database
- `downgrade-db` **will not** work if you want to rollback to an earlier
version, so backup the `jupyterhub.sqlite` file before upgrading
The sqlite documentation provides a helpful page about [when to use sqlite and
where traditional RDBMS may be a better choice](https://sqlite.org/whentouse.html).
## The upgrade process
Five fundamental process steps are needed when upgrading JupyterHub and its

View File

@@ -9,6 +9,7 @@ problem and how to resolve it.
- sudospawner fails to run
- What is the default behavior when none of the lists (admin, whitelist,
group whitelist) are set?
- JupyterHub Docker container not accessible at localhost
[*Errors*](#errors)
- 500 error after spawning my single-user server
@@ -63,6 +64,17 @@ this to a particular set of users, and the admin_users lets you specify who
among them may use the admin interface (not necessary, unless you need to do
things like inspect other users' servers, or modify the userlist at runtime).
### JupyterHub Docker container not accessible at localhost
Even though the command to start your Docker container exposes port 8000
(`docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub`),
it is possible that the IP address itself is not accessible/visible. As a result,
when you try http://localhost:8000 in your browser, you are unable to connect
even though the container is running properly. One workaround is to explicitly
tell JupyterHub to listen on all interfaces (`0.0.0.0`) so that it is reachable
through the published port. Try this command:
`docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub --ip 0.0.0.0 --port 8000`
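If you build your own image with a config file instead of passing flags, the equivalent settings in `jupyterhub_config.py` are (a sketch):
```python
# listen on all interfaces inside the container
c.JupyterHub.ip = '0.0.0.0'
c.JupyterHub.port = 8000
# on JupyterHub 0.9+ the same can be expressed as:
# c.JupyterHub.bind_url = 'http://0.0.0.0:8000'
```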
## Errors

View File

@@ -1,4 +1,4 @@
.. upgrade-dot-eight:
.. _upgrade-dot-eight:
Upgrading to JupyterHub version 0.8
===================================

View File

@@ -7,14 +7,18 @@ from sphinx.ext.autodoc import ClassDocumenter, AttributeDocumenter
class ConfigurableDocumenter(ClassDocumenter):
"""Specialized Documenter subclass for traits with config=True"""
objtype = 'configurable'
directivetype = 'class'
def get_object_members(self, want_all):
"""Add traits with .tag(config=True) to members list"""
check, members = super().get_object_members(want_all)
get_traits = self.object.class_own_traits if self.options.inherited_members \
else self.object.class_traits
get_traits = (
self.object.class_own_traits
if self.options.inherited_members
else self.object.class_traits
)
trait_members = []
for name, trait in sorted(get_traits(config=True).items()):
# put help in __doc__ where autodoc will look for it
@@ -42,10 +46,7 @@ class TraitDocumenter(AttributeDocumenter):
default_s = ''
else:
default_s = repr(default)
sig = ' = {}({})'.format(
self.object.__class__.__name__,
default_s,
)
sig = ' = {}({})'.format(self.object.__class__.__name__, default_s)
return super().add_directive_header(sig)

View File

@@ -5,7 +5,7 @@ for external services that may not be otherwise integrated with JupyterHub.
The main feature this enables is using JupyterHub like a 'regular' OAuth 2
provider for services running anywhere.
There are two examples here. `whoami-oauth` uses `jupyterhub.services.HubOAuthenticated`
There are two examples here. `whoami-oauth` (in the service-whoami directory) uses `jupyterhub.services.HubOAuthenticated`
to authenticate requests with the Hub for a service run on its own host.
This is an implementation of OAuth 2.0 provided by the jupyterhub package,
which configures all of the necessary URLs from environment variables.

View File

@@ -18,4 +18,4 @@ export JUPYTERHUB_OAUTH_CALLBACK_URL="$JUPYTERHUB_SERVICE_URL/oauth_callback"
export JUPYTERHUB_HOST='http://127.0.0.1:8000'
# launch the service
exec python3 whoami-oauth.py
exec python3 ../service-whoami/whoami-oauth.py

View File

@@ -1,46 +0,0 @@
"""An example service authenticating with the Hub.
This example service serves `/services/whoami/`,
authenticated with the Hub,
showing the user their own info.
"""
from getpass import getuser
import json
import os
from urllib.parse import urlparse
from tornado.ioloop import IOLoop
from tornado import log
from tornado.httpserver import HTTPServer
from tornado.web import RequestHandler, Application, authenticated
from jupyterhub.services.auth import HubOAuthenticated, HubOAuthCallbackHandler
from jupyterhub.utils import url_path_join
class WhoAmIHandler(HubOAuthenticated, RequestHandler):
hub_users = {getuser()} # the users allowed to access this service
@authenticated
def get(self):
user_model = self.get_current_user()
self.set_header('content-type', 'application/json')
self.write(json.dumps(user_model, indent=1, sort_keys=True))
def main():
log.enable_pretty_logging()
app = Application([
(os.environ['JUPYTERHUB_SERVICE_PREFIX'], WhoAmIHandler),
(url_path_join(os.environ['JUPYTERHUB_SERVICE_PREFIX'], 'oauth_callback'), HubOAuthCallbackHandler),
(r'.*', WhoAmIHandler),
], cookie_secret=os.urandom(32))
http_server = HTTPServer(app)
url = urlparse(os.environ['JUPYTERHUB_SERVICE_URL'])
log.app_log.info("Running whoami service on %s", os.environ['JUPYTERHUB_SERVICE_URL'])
http_server.listen(url.port, url.hostname)
IOLoop.current().start()
if __name__ == '__main__':
main()

View File

@@ -26,6 +26,10 @@ After logging in with your local-system credentials, you should see a JSON dump
This relies on the Hub starting the whoami services, via config (see [jupyterhub_config.py](./jupyterhub_config.py)).
You may set the `hub_users` configuration in the service script
to restrict access to the service to a whitelist of allowed users.
By default, any authenticated user is allowed.
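For instance (the user names here are hypothetical), restricting a `HubAuthenticated` handler to two users would look like:
```python
from tornado.web import RequestHandler, authenticated

from jupyterhub.services.auth import HubAuthenticated


class WhoAmIHandler(HubAuthenticated, RequestHandler):
    # only these Hub users may access this service
    hub_users = {'zoe', 'wash'}

    @authenticated
    def get(self):
        # tornado serializes the user-model dict as JSON
        self.write(self.get_current_user())
```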
A similar service could be run externally, by setting the JupyterHub service environment variables:
JUPYTERHUB_API_TOKEN

View File

@@ -17,7 +17,11 @@ from jupyterhub.services.auth import HubOAuthenticated, HubOAuthCallbackHandler
from jupyterhub.utils import url_path_join
class WhoAmIHandler(HubOAuthenticated, RequestHandler):
hub_users = {getuser()} # the users allowed to access this service
# hub_users can be a set of users who are allowed to access the service
# `getuser()` here would mean only the user who started the service
# can access the service:
# hub_users = {getuser()}
@authenticated
def get(self):

View File

@@ -15,7 +15,11 @@ from jupyterhub.services.auth import HubAuthenticated
class WhoAmIHandler(HubAuthenticated, RequestHandler):
hub_users = {getuser()} # the users allowed to access me
# hub_users can be a set of users who are allowed to access the service
# `getuser()` here would mean only the user who started the service
# can access the service:
# hub_users = {getuser()}
@authenticated
def get(self):
@@ -37,4 +41,4 @@ def main():
IOLoop.current().start()
if __name__ == '__main__':
main()
main()

View File

@@ -7,10 +7,17 @@ version_info = (
0,
9,
0,
'b1',
"", # release (b1, rc1)
# "dev", # dev
)
__version__ = '.'.join(map(str, version_info))
# pep 440 version: no dot before beta/rc, but before .dev
# 0.1.0rc1
# 0.1.0a1
# 0.1.0b1.dev
# 0.1.0.dev
__version__ = ".".join(map(str, version_info[:3])) + ".".join(version_info[3:])
def _check_version(hub_version, singleuser_version, log):

View File

@@ -0,0 +1,24 @@
"""Add APIToken.expires_at
Revision ID: 896818069c98
Revises: d68c98b66cd4
Create Date: 2018-05-07 11:35:58.050542
"""
# revision identifiers, used by Alembic.
revision = '896818069c98'
down_revision = 'd68c98b66cd4'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade():
    op.add_column('api_tokens', sa.Column('expires_at', sa.DateTime(), nullable=True))


def downgrade():
    op.drop_column('api_tokens', 'expires_at')

View File

@@ -6,6 +6,7 @@ import json
from http.client import responses
from sqlalchemy.exc import SQLAlchemyError
from tornado import web
from .. import orm
@@ -87,13 +88,20 @@ class APIHandler(BaseHandler):
if reason:
status_message = reason
if exception and isinstance(exception, SQLAlchemyError):
self.log.warning("Rolling back session due to database error %s", exception)
self.db.rollback()
self.set_header('Content-Type', 'application/json')
# allow setting headers from exceptions
# since exception handler clears headers
headers = getattr(exception, 'headers', None)
if headers:
for key, value in headers.items():
self.set_header(key, value)
if isinstance(exception, web.HTTPError):
# allow setting headers from exceptions
# since exception handler clears headers
headers = getattr(exception, 'headers', None)
if headers:
for key, value in headers.items():
self.set_header(key, value)
# Content-Length must be recalculated.
self.clear_header('Content-Length')
self.write(json.dumps({
'status': status_code,
@@ -115,16 +123,20 @@ class APIHandler(BaseHandler):
def token_model(self, token):
"""Get the JSON model for an APIToken"""
expires_at = None
if isinstance(token, orm.APIToken):
kind = 'api_token'
extra = {
'note': token.note,
}
expires_at = token.expires_at
elif isinstance(token, orm.OAuthAccessToken):
kind = 'oauth'
extra = {
'oauth_client': token.client.description or token.client.client_id,
}
if token.expires_at:
expires_at = datetime.fromtimestamp(token.expires_at)
else:
raise TypeError(
"token must be an APIToken or OAuthAccessToken, not %s"

View File

@@ -23,6 +23,7 @@ def service_model(service):
'prefix': service.server.base_url if service.server else '',
'command': service.command,
'pid': service.proc.pid if service.proc else 0,
'info': service.info
}
class ServiceListAPIHandler(APIHandler):

View File

@@ -4,6 +4,7 @@
# Distributed under the terms of the Modified BSD License.
import asyncio
from datetime import datetime
import json
from async_generator import aclosing
@@ -201,13 +202,30 @@ class UserTokenListAPIHandler(APIHandler):
user = self.find_user(name)
if not user:
raise web.HTTPError(404, "No such user: %s" % name)
now = datetime.utcnow()
api_tokens = []
def sort_key(token):
return token.last_activity or token.created
for token in sorted(user.api_tokens, key=sort_key):
if token.expires_at and token.expires_at < now:
# exclude expired tokens
self.db.delete(token)
self.db.commit()
continue
api_tokens.append(self.token_model(token))
oauth_tokens = []
# OAuth tokens use integer timestamps
now_timestamp = now.timestamp()
for token in sorted(user.oauth_tokens, key=sort_key):
if token.expires_at and token.expires_at < now_timestamp:
# exclude expired tokens
self.db.delete(token)
self.db.commit()
continue
oauth_tokens.append(self.token_model(token))
self.write(json.dumps({
'api_tokens': api_tokens,
@@ -252,7 +270,7 @@ class UserTokenListAPIHandler(APIHandler):
if requester is not user:
note += " by %s %s" % (kind, requester.name)
api_token = user.new_api_token(note=note)
api_token = user.new_api_token(note=note, expires_in=body.get('expires_in', None))
if requester is not user:
self.log.info("%s %s requested API token for %s", kind.title(), requester.name, user.name)
else:

View File

@@ -9,6 +9,7 @@ import atexit
import binascii
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime, timezone
from functools import partial
from getpass import getuser
import logging
from operator import itemgetter
@@ -26,7 +27,7 @@ if sys.version_info[:2] < (3, 3):
from dateutil.parser import parse as parse_date
from jinja2 import Environment, FileSystemLoader, PrefixLoader, ChoiceLoader
from sqlalchemy.exc import OperationalError
from sqlalchemy.exc import OperationalError, SQLAlchemyError
from tornado.httpclient import AsyncHTTPClient
import tornado.httpserver
@@ -139,7 +140,10 @@ class NewToken(Application):
ab01cd23ef45
"""
name = Unicode(getuser())
name = Unicode()
@default('name')
def _default_name(self):
return getuser()
aliases = token_aliases
classes = []
@@ -273,6 +277,9 @@ class JupyterHub(Application):
service_check_interval = Integer(60,
help="Interval (in seconds) at which to check connectivity of services with web endpoints."
).tag(config=True)
active_user_window = Integer(30 * 60,
help="Duration (in seconds) to determine the number of active users."
).tag(config=True)
data_files_path = Unicode(DATA_FILES_PATH,
help="The location of jupyterhub data files (e.g. /usr/local/share/jupyterhub)"
@@ -286,6 +293,10 @@ class JupyterHub(Application):
def _template_paths_default(self):
return [os.path.join(self.data_files_path, 'templates')]
template_vars = Dict(
help="Extra variables to be passed into jinja templates",
).tag(config=True)
confirm_no_ssl = Bool(False,
help="""DEPRECATED: does nothing"""
).tag(config=True)
@@ -310,6 +321,7 @@ class JupyterHub(Application):
should be accessed by users.
.. deprecated: 0.9
Use JupyterHub.bind_url
"""
).tag(config=True)
@@ -325,26 +337,6 @@ class JupyterHub(Application):
"""
).tag(config=True)
@observe('ip', 'port')
def _ip_port_changed(self, change):
urlinfo = urlparse(self.bind_url)
urlinfo = urlinfo._replace(netloc='%s:%i' % (self.ip, self.port))
self.bind_url = urlunparse(urlinfo)
bind_url = Unicode(
"http://127.0.0.1:8000",
help="""The public facing URL of the whole JupyterHub application.
This is the address on which the proxy will bind.
Sets protocol, ip, base_url
"""
).tag(config=True)
@observe('bind_url')
def _bind_url_changed(self, change):
urlinfo = urlparse(change.new)
self.base_url = urlinfo.path
base_url = URLPrefix('/',
help="""The base URL of the entire application.
@@ -361,6 +353,25 @@ class JupyterHub(Application):
# call validate to ensure leading/trailing slashes
return JupyterHub.base_url.validate(self, urlparse(self.bind_url).path)
@observe('ip', 'port', 'base_url')
def _url_part_changed(self, change):
"""propagate deprecated ip/port/base_url config to the bind_url"""
urlinfo = urlparse(self.bind_url)
urlinfo = urlinfo._replace(netloc='%s:%i' % (self.ip, self.port))
urlinfo = urlinfo._replace(path=self.base_url)
bind_url = urlunparse(urlinfo)
if bind_url != self.bind_url:
self.bind_url = bind_url
bind_url = Unicode(
"http://:8000",
help="""The public facing URL of the whole JupyterHub application.
This is the address on which the proxy will bind.
Sets protocol, ip, base_url
"""
).tag(config=True)
subdomain_host = Unicode('',
help="""Run single-user servers on subdomains of this host.
@@ -932,6 +943,24 @@ class JupyterHub(Application):
handlers[i] = tuple(lis)
return handlers
extra_handlers = List(
help="""
Register extra tornado Handlers for jupyterhub.
Should be of the form ``("<regex>", Handler)``
The Hub prefix will be added, so `/my-page` will be served at `/hub/my-page`.
""",
).tag(config=True)
default_url = Unicode(
help="""
The default URL for users when they arrive (e.g. when user directs to "/")
By default, redirects users to their own server.
""",
).tag(config=True)
def init_handlers(self):
h = []
# load handlers from the authenticator
@@ -940,12 +969,13 @@ class JupyterHub(Application):
h.extend(handlers.default_handlers)
h.extend(apihandlers.default_handlers)
# add any user configurable handlers.
h.extend(self.extra_handlers)
h.append((r'/logo', LogoHandler, {'path': self.logo_file}))
self.handlers = self.add_url_prefix(self.hub_prefix, h)
# some extra handlers, outside hub_prefix
self.handlers.extend([
# add trailing / to `/hub`
(self.hub_prefix.rstrip('/'), handlers.AddSlashHandler),
# add trailing / to ``/user|services/:name`
(r"%s(user|services)/([^/]+)" % self.base_url, handlers.AddSlashHandler),
(r"(?!%s).*" % self.hub_prefix, handlers.PrefixRedirectHandler),
@@ -1264,10 +1294,23 @@ class JupyterHub(Application):
self.log.debug("Not duplicating token %s", orm_token)
db.commit()
# purge expired tokens hourly
purge_expired_tokens_interval = 3600
async def init_api_tokens(self):
"""Load predefined API tokens (for services) into database"""
await self._add_tokens(self.service_tokens, kind='service')
await self._add_tokens(self.api_tokens, kind='user')
purge_expired_tokens = partial(orm.APIToken.purge_expired, self.db)
purge_expired_tokens()
# purge expired tokens hourly
# we don't need to be prompt about this
# because expired tokens cannot be used anyway
pc = PeriodicCallback(
purge_expired_tokens,
1e3 * self.purge_expired_tokens_interval,
)
pc.start()
def init_services(self):
self._service_map.clear()
@@ -1454,10 +1497,16 @@ class JupyterHub(Application):
oauth_client_ids.add(spawner.oauth_client_id)
client_store = self.oauth_provider.client_authenticator.client_store
for oauth_client in self.db.query(orm.OAuthClient):
for i, oauth_client in enumerate(self.db.query(orm.OAuthClient)):
if oauth_client.identifier not in oauth_client_ids:
self.log.warning("Deleting OAuth client %s", oauth_client.identifier)
self.db.delete(oauth_client)
# Some deployments that create temporary users may have left *lots*
# of entries here.
# Don't try to delete them all in one transaction,
# commit at most 100 deletions at a time.
if i % 100 == 0:
self.db.commit()
self.db.commit()
def init_proxy(self):
@@ -1517,6 +1566,7 @@ class JupyterHub(Application):
authenticator=self.authenticator,
spawner_class=self.spawner_class,
base_url=self.base_url,
default_url=self.default_url,
cookie_secret=self.cookie_secret,
cookie_max_age_days=self.cookie_max_age_days,
redirect_to_server=self.redirect_to_server,
@@ -1526,6 +1576,7 @@ class JupyterHub(Application):
static_url_prefix=url_path_join(self.hub.base_url, 'static/'),
static_handler_class=CacheControlStaticFilesHandler,
template_path=self.template_paths,
template_vars=self.template_vars,
jinja2_env=jinja_env,
version_hash=version_hash,
subdomain_host=self.subdomain_host,
@@ -1580,6 +1631,33 @@ class JupyterHub(Application):
cfg.JupyterHub.merge(cfg.JupyterHubApp)
self.update_config(cfg)
self.write_pid_file()
def _log_cls(name, cls):
"""Log a configured class
Logs the class and version (if found) of Authenticator
and Spawner
"""
# try to guess the version from the top-level module
# this will work often enough to be useful.
# no need to be perfect.
if cls.__module__:
mod = sys.modules.get(cls.__module__.split('.')[0])
version = getattr(mod, '__version__', '')
if version:
version = '-{}'.format(version)
else:
version = ''
self.log.info(
"Using %s: %s.%s%s",
name,
cls.__module__ or '',
cls.__name__,
version,
)
_log_cls("Authenticator", self.authenticator_class)
_log_cls("Spawner", self.spawner_class)
self.init_pycurl()
self.init_secrets()
self.init_db()
@@ -1712,13 +1790,18 @@ class JupyterHub(Application):
spawner.last_activity = max(spawner.last_activity, dt)
else:
spawner.last_activity = dt
# FIXME: Make this configurable duration. 30 minutes for now!
if (now - user.last_activity).total_seconds() < 30 * 60:
if (now - user.last_activity).total_seconds() < self.active_user_window:
active_users_count += 1
self.statsd.gauge('users.running', users_count)
self.statsd.gauge('users.active', active_users_count)
self.db.commit()
try:
self.db.commit()
except SQLAlchemyError:
self.log.exception("Rolling back session due to database error")
self.db.rollback()
return
await self.proxy.check_routes(self.users, self._service_map, routes)
async def start(self):

View File

@@ -49,7 +49,7 @@ class Authenticator(LoggingConfigurable):
Encrypting auth_state requires the cryptography package.
Additionally, the JUPYTERHUB_CRYPTO_KEY envirionment variable must
Additionally, the JUPYTERHUB_CRYPT_KEY environment variable must
contain one (or more, separated by ;) 32B encryption keys.
These can be either base64 or hex-encoded.

View File

@@ -15,6 +15,7 @@ import uuid
from jinja2 import TemplateNotFound
from sqlalchemy.exc import SQLAlchemyError
from tornado.log import app_log
from tornado.httputil import url_concat, HTTPHeaders
from tornado.ioloop import IOLoop
@@ -39,7 +40,8 @@ reasons = {
'timeout': "Failed to reach your server."
" Please try again later."
" Contact admin if the issue persists.",
'error': "Failed to start your server. Please contact admin.",
'error': "Failed to start your server on the last attempt. "
" Please contact admin if the issue persists.",
}
# constant, not configurable
@@ -61,6 +63,10 @@ class BaseHandler(RequestHandler):
def base_url(self):
return self.settings.get('base_url', '/')
@property
def default_url(self):
return self.settings.get('default_url', '')
@property
def version_hash(self):
return self.settings.get('version_hash', '')
@@ -260,10 +266,17 @@ class BaseHandler(RequestHandler):
def get_current_user(self):
"""get current username"""
user = self.get_current_user_token()
if user is not None:
return user
return self.get_current_user_cookie()
if not hasattr(self, '_jupyterhub_user'):
try:
user = self.get_current_user_token()
if user is None:
user = self.get_current_user_cookie()
self._jupyterhub_user = user
except Exception:
# don't let errors here raise more than once
self._jupyterhub_user = None
raise
return self._jupyterhub_user
def find_user(self, name):
"""Get a user by name
@@ -413,10 +426,20 @@ class BaseHandler(RequestHandler):
- else: /hub/home
"""
next_url = self.get_argument('next', default='')
if (next_url + '/').startswith('%s://%s/' % (self.request.protocol, self.request.host)):
if (next_url + '/').startswith(
(
'%s://%s/' % (self.request.protocol, self.request.host),
'//%s/' % self.request.host,
)
):
# treat absolute URLs for our host as absolute paths:
next_url = urlparse(next_url).path
if next_url and not next_url.startswith('/'):
parsed = urlparse(next_url)
next_url = parsed.path
if parsed.query:
next_url = next_url + '?' + parsed.query
if parsed.hash:
next_url = next_url + '#' + parsed.hash
if next_url and (urlparse(next_url).netloc or not next_url.startswith('/')):
self.log.warning("Disallowing redirect outside JupyterHub: %r", next_url)
next_url = ''
if next_url and next_url.startswith(url_path_join(self.base_url, 'user/')):
@@ -429,9 +452,14 @@ class BaseHandler(RequestHandler):
self.request.uri, next_url,
)
if not next_url:
# custom default URL
next_url = self.default_url
if not next_url:
# default URL after login
# if self.redirect_to_server, default login URL initiates spawn
# if self.redirect_to_server, default login URL initiates spawn,
# otherwise send to Hub home page (control panel)
if user and self.redirect_to_server:
next_url = user.url
else:
@@ -752,7 +780,7 @@ class BaseHandler(RequestHandler):
@property
def template_namespace(self):
user = self.get_current_user()
return dict(
ns = dict(
base_url=self.hub.base_url,
prefix=self.base_url,
user=user,
@@ -762,6 +790,9 @@ class BaseHandler(RequestHandler):
static_url=self.static_url,
version_hash=self.version_hash,
)
if self.settings['template_vars']:
ns.update(self.settings['template_vars'])
return ns
def write_error(self, status_code, **kwargs):
"""render custom error pages"""
@@ -782,6 +813,10 @@ class BaseHandler(RequestHandler):
if reason:
message = reasons.get(reason, reason)
if exception and isinstance(exception, SQLAlchemyError):
self.log.warning("Rolling back session due to database error %s", exception)
self.db.rollback()
# build template namespace
ns = dict(
status_code=status_code,
@@ -792,19 +827,27 @@ class BaseHandler(RequestHandler):
)
self.set_header('Content-Type', 'text/html')
# allow setting headers from exceptions
# since exception handler clears headers
headers = getattr(exception, 'headers', None)
if headers:
for key, value in headers.items():
self.set_header(key, value)
if isinstance(exception, web.HTTPError):
# allow setting headers from exceptions
# since exception handler clears headers
headers = getattr(exception, 'headers', None)
if headers:
for key, value in headers.items():
self.set_header(key, value)
# Content-Length must be recalculated.
self.clear_header('Content-Length')
# render the template
try:
html = self.render_template('%s.html' % status_code, **ns)
except TemplateNotFound:
self.log.debug("No template for %d", status_code)
html = self.render_template('error.html', **ns)
try:
html = self.render_template('error.html', **ns)
except:
# In this case, any side effect must be avoided.
ns['no_spawner_check'] = True
html = self.render_template('error.html', **ns)
self.write(html)
@@ -907,7 +950,7 @@ class UserSpawnHandler(BaseHandler):
raise copy.copy(exc).with_traceback(exc.__traceback__)
# check for pending spawn
if spawner.pending and spawner._spawn_future:
if spawner.pending == 'spawn' and spawner._spawn_future:
# wait on the pending spawn
self.log.debug("Waiting for %s pending %s", spawner._log_name, spawner.pending)
try:
@@ -917,14 +960,20 @@ class UserSpawnHandler(BaseHandler):
pass
# we may have waited above, check pending again:
# page could be pending spawn *or* stop
if spawner.pending:
self.log.info("%s is pending %s", spawner._log_name, spawner.pending)
# spawn has started, but not finished
self.statsd.incr('redirects.user_spawn_pending', 1)
url_parts = []
if spawner.pending == "stop":
page = "stop_pending.html"
else:
page = "spawn_pending.html"
html = self.render_template(
"spawn_pending.html",
page,
user=user,
spawner=spawner,
progress_url=spawner._progress_url,
)
self.finish(html)
@@ -1070,6 +1119,7 @@ class AddSlashHandler(BaseHandler):
self.redirect(urlunparse(dest))
default_handlers = [
(r'', AddSlashHandler), # add trailing / to `/hub`
(r'/user/([^/]+)(/.*)?', UserSpawnHandler),
(r'/user-redirect/(.*)?', UserRedirectHandler),
(r'/security/csp-report', CSPReportHandler),

View File

@@ -3,6 +3,7 @@
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
from collections import defaultdict
from datetime import datetime
from http.client import responses
@@ -30,7 +31,9 @@ class RootHandler(BaseHandler):
"""
def get(self):
user = self.get_current_user()
if user:
if self.default_url:
url = self.default_url
elif user:
url = self.get_next_url(user)
else:
url = self.settings['login_url']
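A hedged configuration sketch of the new branch above: RootHandler only consults the 'default_url' entry in the Tornado settings, so one grounded way to exercise it is through the long-standing tornado_settings dict (the target URL here is illustrative).

# jupyterhub_config.py -- sketch; '/hub/home' is just an example target
c.JupyterHub.tornado_settings = {'default_url': '/hub/home'}

With that in place, a visit to '/' redirects to /hub/home instead of the per-user next URL, matching the test_root_default_url_* tests later in this diff.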
@@ -229,12 +232,24 @@ class TokenPageHandler(BaseHandler):
token.last_activity or never,
token.created or never,
)
api_tokens = sorted(user.api_tokens, key=sort_key, reverse=True)
now = datetime.utcnow()
api_tokens = []
for token in sorted(user.api_tokens, key=sort_key, reverse=True):
if token.expires_at and token.expires_at < now:
self.db.delete(token)
self.db.commit()
continue
api_tokens.append(token)
# group oauth client tokens by client id
from collections import defaultdict
oauth_tokens = defaultdict(list)
for token in user.oauth_tokens:
if token.expires_at and token.expires_at < now:
self.log.warning("Deleting expired token")
self.db.delete(token)
self.db.commit()
continue
if not token.client_id:
# token should have been deleted when client was deleted
self.log.warning("Deleting stale oauth token for %s", user.name)
@@ -260,7 +275,7 @@ class TokenPageHandler(BaseHandler):
token = tokens[0]
oauth_clients.append({
'client': token.client,
'description': token.client.description or token.client.client_id,
'description': token.client.description or token.client.identifier,
'created': created,
'last_activity': last_activity,
'tokens': tokens,
@@ -321,7 +336,7 @@ class ProxyErrorHandler(BaseHandler):
default_handlers = [
(r'/?', RootHandler),
(r'/', RootHandler),
(r'/home', HomeHandler),
(r'/admin', AdminHandler),
(r'/spawn', SpawnHandler),

View File

@@ -3,7 +3,7 @@
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
from datetime import datetime
from datetime import datetime, timedelta
import enum
import json
@@ -14,7 +14,7 @@ from tornado.log import app_log
from sqlalchemy.types import TypeDecorator, TEXT, LargeBinary
from sqlalchemy import (
create_engine, event, inspect,
create_engine, event, exc, inspect, or_, select,
Column, Integer, ForeignKey, Unicode, Boolean,
DateTime, Enum, Table,
)
@@ -33,6 +33,9 @@ from .utils import (
new_token, hash_token, compare_token,
)
# top-level variable for easier mocking in tests
utcnow = datetime.utcnow
class JSONDict(TypeDecorator):
"""Represents an immutable structure as a json-encoded string.
@@ -176,12 +179,12 @@ class User(Base):
running=sum(bool(s.server) for s in self._orm_spawners),
)
def new_api_token(self, token=None, generated=True, note=''):
def new_api_token(self, token=None, **kwargs):
"""Create a new API token
If `token` is given, load that token.
"""
return APIToken.new(token=token, user=self, note=note, generated=generated)
return APIToken.new(token=token, user=self, **kwargs)
@classmethod
def find(cls, db, name):
@@ -242,11 +245,11 @@ class Service(Base):
server = relationship(Server, cascade='all')
pid = Column(Integer)
def new_api_token(self, token=None, generated=True, note=''):
def new_api_token(self, token=None, **kwargs):
"""Create a new API token
If `token` is given, load that token.
"""
return APIToken.new(token=token, service=self, note=note, generated=generated)
return APIToken.new(token=token, service=self, **kwargs)
@classmethod
def find(cls, db, name):
@@ -348,6 +351,7 @@ class APIToken(Hashed, Base):
# token metadata for bookkeeping
created = Column(DateTime, default=datetime.utcnow)
expires_at = Column(DateTime, default=None, nullable=True)
last_activity = Column(DateTime)
note = Column(Unicode(1023))
@@ -369,6 +373,22 @@ class APIToken(Hashed, Base):
name=name,
)
@classmethod
def purge_expired(cls, db):
"""Purge expired API Tokens from the database"""
now = utcnow()
deleted = False
for token in (
db.query(cls)
.filter(cls.expires_at != None)
.filter(cls.expires_at < now)
):
app_log.debug("Purging expired %s", token)
deleted = True
db.delete(token)
if deleted:
db.commit()
@classmethod
def find(cls, db, token, *, kind=None):
"""Find a token object by value.
@@ -379,6 +399,9 @@ class APIToken(Hashed, Base):
`kind='service'` only returns API tokens for services
"""
prefix_match = cls.find_prefix(db, token)
prefix_match = prefix_match.filter(
or_(cls.expires_at == None, cls.expires_at >= utcnow())
)
if kind == 'user':
prefix_match = prefix_match.filter(cls.user_id != None)
elif kind == 'service':
@@ -390,7 +413,8 @@ class APIToken(Hashed, Base):
return orm_token
@classmethod
def new(cls, token=None, user=None, service=None, note='', generated=True):
def new(cls, token=None, user=None, service=None, note='', generated=True,
expires_in=None):
"""Generate a new API token for a user or service"""
assert user or service
assert not (user and service)
@@ -412,6 +436,8 @@ class APIToken(Hashed, Base):
else:
assert service.id is not None
orm_token.service = service
if expires_in is not None:
orm_token.expires_at = utcnow() + timedelta(seconds=expires_in)
db.add(orm_token)
db.commit()
return token
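A short usage sketch of the expiry support added in this hunk (`db` and `user` stand in for an existing session and orm.User, as in the ORM tests further down): a token created with expires_in stops resolving through find() once its expires_at passes, and purge_expired() removes only the expired rows.

# sketch assuming an existing SQLAlchemy session `db` and orm.User `user`
from jupyterhub import orm

token = user.new_api_token(expires_in=3600, note='ci job')  # expires in one hour
assert orm.APIToken.find(db, token) is not None             # resolvable while valid

# e.g. from a periodic cleanup callback:
orm.APIToken.purge_expired(db)   # deletes only rows whose expires_at is in the past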
@@ -549,7 +575,7 @@ def _expire_relationship(target, relationship_prop):
def _notify_deleted_relationships(session, obj):
"""Expire relationships when an object becomes deleted
Needed for
Needed to keep relationships up to date.
"""
mapper = inspect(obj).mapper
for prop in mapper.relationships:
@@ -557,6 +583,52 @@ def _notify_deleted_relationships(session, obj):
_expire_relationship(obj, prop)
def register_ping_connection(engine):
"""Check connections before using them.
Avoids database errors when using stale connections.
From SQLAlchemy docs on pessimistic disconnect handling:
https://docs.sqlalchemy.org/en/rel_1_1/core/pooling.html#disconnect-handling-pessimistic
"""
@event.listens_for(engine, "engine_connect")
def ping_connection(connection, branch):
if branch:
# "branch" refers to a sub-connection of a connection,
# we don't want to bother pinging on these.
return
# turn off "close with result". This flag is only used with
# "connectionless" execution, otherwise will be False in any case
save_should_close_with_result = connection.should_close_with_result
connection.should_close_with_result = False
try:
# run a SELECT 1. use a core select() so that
# the SELECT of a scalar value without a table is
# appropriately formatted for the backend
connection.scalar(select([1]))
except exc.DBAPIError as err:
# catch SQLAlchemy's DBAPIError, which is a wrapper
# for the DBAPI's exception. It includes a .connection_invalidated
# attribute which specifies if this connection is a "disconnect"
# condition, which is based on inspection of the original exception
# by the dialect in use.
if err.connection_invalidated:
app_log.error("Database connection error, attempting to reconnect: %s", err)
# run the same SELECT again - the connection will re-validate
# itself and establish a new connection. The disconnect detection
# here also causes the whole connection pool to be invalidated
# so that all stale connections are discarded.
connection.scalar(select([1]))
else:
raise
finally:
# restore "close with result"
connection.should_close_with_result = save_should_close_with_result
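For a standalone engine the listener is attached once, right after create_engine; a minimal sketch (the database URL is illustrative, and the Hub itself does this inside new_session_factory below):

from sqlalchemy import create_engine
from jupyterhub.orm import register_ping_connection

engine = create_engine('sqlite:///jupyterhub.sqlite')
register_ping_connection(engine)  # connections are pinged with SELECT 1 before use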
def check_db_revision(engine):
"""Check the JupyterHub database revision
@@ -635,10 +707,12 @@ def mysql_large_prefix_check(engine):
else:
return False
def add_row_format(base):
for t in base.metadata.tables.values():
t.dialect_kwargs['mysql_ROW_FORMAT'] = 'DYNAMIC'
def new_session_factory(url="sqlite:///:memory:",
reset=False,
expire_on_commit=False,
@@ -658,6 +732,9 @@ def new_session_factory(url="sqlite:///:memory:",
kwargs.setdefault('poolclass', StaticPool)
engine = create_engine(url, **kwargs)
# enable pessimistic disconnect handling
register_ping_connection(engine)
if reset:
Base.metadata.drop_all(engine)

View File

@@ -488,7 +488,10 @@ class ConfigurableHTTPProxy(Proxy):
except FileNotFoundError as e:
self.log.error(
"Failed to find proxy %r\n"
"The proxy can be installed with `npm install -g configurable-http-proxy`"
"The proxy can be installed with `npm install -g configurable-http-proxy`."
"To install `npm`, install nodejs which includes `npm`."
"If you see an `EACCES` error or permissions error, refer to the `npm` "
"documentation on How To Prevent Permissions Errors."
% self.command
)
raise
@@ -516,13 +519,26 @@ class ConfigurableHTTPProxy(Proxy):
self._check_running_callback = pc
pc.start()
def _kill_proc_tree(self, pid):
import psutil
parent = psutil.Process(pid)
children = parent.children(recursive=True)
for child in children:
child.kill()
psutil.wait_procs(children, timeout=5)
def stop(self):
self.log.info("Cleaning up proxy[%i]...", self.proxy_process.pid)
if self._check_running_callback is not None:
self._check_running_callback.stop()
if self.proxy_process.poll() is None:
try:
self.proxy_process.terminate()
if os.name == 'nt':
# On Windows we spawned a shell on Popen, so we need to
# terminate all child processes as well
self._kill_proc_tree(self.proxy_process.pid)
else:
self.proxy_process.terminate()
except Exception as e:
self.log.error("Failed to terminate proxy process: %s", e)
@@ -558,7 +574,7 @@ class ConfigurableHTTPProxy(Proxy):
"""
# chp stores routes in unescaped form.
# restore escaped-form we created it with.
routespec = quote(chp_path, safe='@/')
routespec = quote(chp_path, safe='@/~')
if self.host_routing:
# host routes don't start with /
routespec = routespec.lstrip('/')
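A tiny standalone check of the quoting change (plain urllib, no Hub objects involved): with '~' marked safe, a routespec for a user such as ~TestJH round-trips unchanged instead of being rewritten to %7E on older Pythons.

from urllib.parse import quote

chp_path = '/user/~TestJH/'
assert quote(chp_path, safe='@/~') == '/user/~TestJH/'
# without '~' in safe, Python <= 3.6 would return '/user/%7ETestJH/'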

View File

@@ -175,6 +175,13 @@ class Service(LoggingConfigurable):
If unspecified, an API token will be generated for managed services.
"""
).tag(input=True)
info = Dict(
help="""Provide a place to include miscellaneous information about the service,
provided through the configuration
"""
).tag(input=True)
# Managed service API:
spawner = Any()
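A hedged sketch of how the new info field might be filled in via configuration (the service entry below is made up): the dict is taken verbatim from the service definition and, as the API tests later in this diff expect, surfaced in the service's REST model.

# jupyterhub_config.py -- illustrative service entry
c.JupyterHub.services = [
    {
        'name': 'cull-idle',
        'command': ['python', '-m', 'cull_idle_servers'],
        'info': {'owner': 'ops-team', 'docs': 'https://example.com/cull-idle'},
    }
]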

View File

@@ -703,10 +703,11 @@ class Spawner(LoggingConfigurable):
def run_post_stop_hook(self):
"""Run the post_stop_hook if defined"""
try:
return self.post_stop_hook(self)
except Exception:
self.log.exception("post_stop_hook failed with exception: %s", self)
if self.post_stop_hook is not None:
try:
return self.post_stop_hook(self)
except Exception:
self.log.exception("post_stop_hook failed with exception: %s", self)
@property
def _progress_url(self):

View File

@@ -1525,6 +1525,7 @@ def test_get_services(app, mockservice_url):
'pid': mockservice.proc.pid,
'prefix': mockservice.server.base_url,
'url': mockservice.url,
'info': {},
}
}
@@ -1551,6 +1552,7 @@ def test_get_service(app, mockservice_url):
'pid': mockservice.proc.pid,
'prefix': mockservice.server.base_url,
'url': mockservice.url,
'info': {},
}
r = yield api_request(app, 'services/%s' % mockservice.name,

View File

@@ -8,19 +8,22 @@ from subprocess import check_output, Popen, PIPE
from tempfile import NamedTemporaryFile, TemporaryDirectory
from unittest.mock import patch
from tornado import gen
import pytest
from tornado import gen
from traitlets.config import Config
from .mocking import MockHub
from .test_api import add_user
from .. import orm
from ..app import COOKIE_SECRET_BYTES
from ..app import COOKIE_SECRET_BYTES, JupyterHub
def test_help_all():
out = check_output([sys.executable, '-m', 'jupyterhub', '--help-all']).decode('utf8', 'replace')
assert '--ip' in out
assert '--JupyterHub.ip' in out
def test_token_app():
cmd = [sys.executable, '-m', 'jupyterhub', 'token']
out = check_output(cmd + ['--help-all']).decode('utf8', 'replace')
@@ -30,6 +33,7 @@ def test_token_app():
out = check_output(cmd + ['user'], cwd=td).decode('utf8', 'replace').strip()
assert re.match(r'^[a-z0-9]+$', out)
def test_generate_config():
with NamedTemporaryFile(prefix='jupyterhub_config', suffix='.py') as tf:
cfg_file = tf.name
@@ -218,3 +222,51 @@ def test_resume_spawners(tmpdir, request):
assert not user.running
assert user.spawner.server is None
assert list(db.query(orm.Server)) == []
@pytest.mark.parametrize(
'hub_config, expected',
[
(
{'ip': '0.0.0.0'},
{'bind_url': 'http://0.0.0.0:8000/'},
),
(
{'port': 123, 'base_url': '/prefix'},
{
'bind_url': 'http://:123/prefix/',
'base_url': '/prefix/',
},
),
(
{'bind_url': 'http://0.0.0.0:12345/sub'},
{'base_url': '/sub/'},
),
(
# no config, test defaults
{},
{
'base_url': '/',
'bind_url': 'http://:8000',
'ip': '',
'port': 8000,
},
),
]
)
def test_url_config(hub_config, expected):
# construct the config object
cfg = Config()
for key, value in hub_config.items():
cfg.JupyterHub[key] = value
# instantiate the Hub and load config
app = JupyterHub(config=cfg)
# validate config
for key, value in hub_config.items():
if key not in expected:
assert getattr(app, key) == value
# validate additional properties
for key, value in expected.items():
assert getattr(app, key) == value
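The parametrized cases above reduce to one rule: bind_url is the single-setting form of ip, port, and base_url, and whichever side is configured, the other is derived. A minimal config sketch with illustrative values:

# jupyterhub_config.py -- the two blocks below are roughly equivalent
c.JupyterHub.bind_url = 'http://0.0.0.0:12345/sub'

# c.JupyterHub.ip = '0.0.0.0'
# c.JupyterHub.port = 12345
# c.JupyterHub.base_url = '/sub/'   # normalized to a trailing slash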

View File

@@ -3,8 +3,10 @@
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
from datetime import datetime, timedelta
import os
import socket
from unittest import mock
import pytest
from tornado import gen
@@ -99,6 +101,32 @@ def test_tokens(db):
assert len(user.api_tokens) == 3
def test_token_expiry(db):
user = orm.User(name='parker')
db.add(user)
db.commit()
now = datetime.utcnow()
token = user.new_api_token(expires_in=60)
orm_token = orm.APIToken.find(db, token=token)
assert orm_token
assert orm_token.expires_at is not None
# approximate range
assert orm_token.expires_at > now + timedelta(seconds=50)
assert orm_token.expires_at < now + timedelta(seconds=70)
the_future = mock.patch('jupyterhub.orm.utcnow', lambda: now + timedelta(seconds=70))
with the_future:
found = orm.APIToken.find(db, token=token)
assert found is None
# purging shouldn't delete non-expired tokens
orm.APIToken.purge_expired(db)
assert orm.APIToken.find(db, token=token)
with the_future:
orm.APIToken.purge_expired(db)
assert orm.APIToken.find(db, token=token) is None
# after purging, make sure we aren't in the user token list
assert orm_token not in user.api_tokens
def test_service_tokens(db):
service = orm.Service(name='secret')
db.add(service)

View File

@@ -3,6 +3,7 @@
from urllib.parse import urlencode, urlparse
from tornado import gen
from tornado.httputil import url_concat
from ..handlers import BaseHandler
from ..utils import url_path_join as ujoin
@@ -16,6 +17,7 @@ from .mocking import FormSpawner, public_url, public_host
from .test_api import api_request, add_user
from .utils import async_requests
def get_page(path, app, hub=True, **kw):
if hub:
prefix = app.hub.base_url
@@ -53,6 +55,30 @@ def test_root_redirect(app):
assert path == ujoin(app.base_url, 'user/%s/test.ipynb' % name)
@pytest.mark.gen_test
def test_root_default_url_noauth(app):
with mock.patch.dict(app.tornado_settings,
{'default_url': '/foo/bar'}):
r = yield get_page('/', app, allow_redirects=False)
r.raise_for_status()
url = r.headers.get('Location', '')
path = urlparse(url).path
assert path == '/foo/bar'
@pytest.mark.gen_test
def test_root_default_url_auth(app):
name = 'wash'
cookies = yield app.login_user(name)
with mock.patch.dict(app.tornado_settings,
{'default_url': '/foo/bar'}):
r = yield get_page('/', app, cookies=cookies, allow_redirects=False)
r.raise_for_status()
url = r.headers.get('Location', '')
path = urlparse(url).path
assert path == '/foo/bar'
@pytest.mark.gen_test
def test_home_no_auth(app):
r = yield get_page('home', app, allow_redirects=False)
@@ -342,29 +368,50 @@ def test_login_strip(app):
assert called_with == [form_data]
@pytest.mark.parametrize(
'running, next_url, location',
[
# default URL if next not specified, for both running and not
(True, '', ''),
(False, '', ''),
# next_url is respected
(False, '/hub/admin', '/hub/admin'),
(False, '/user/other', '/hub/user/other'),
(False, '/absolute', '/absolute'),
(False, '/has?query#andhash', '/has?query#andhash'),
# next_url outside is not allowed
(False, 'https://other.domain', ''),
(False, 'ftp://other.domain', ''),
(False, '//other.domain', ''),
]
)
@pytest.mark.gen_test
def test_login_redirect(app):
def test_login_redirect(app, running, next_url, location):
cookies = yield app.login_user('river')
user = app.users['river']
# no next_url, server running
yield user.spawn()
r = yield get_page('login', app, cookies=cookies, allow_redirects=False)
r.raise_for_status()
assert r.status_code == 302
assert '/user/river' in r.headers['Location']
if location:
location = ujoin(app.base_url, location)
else:
# use default url
location = user.url
# no next_url, server not running
yield user.stop()
r = yield get_page('login', app, cookies=cookies, allow_redirects=False)
r.raise_for_status()
assert r.status_code == 302
assert '/user/river' in r.headers['Location']
url = 'login'
if next_url:
if '//' not in next_url:
next_url = ujoin(app.base_url, next_url, '')
url = url_concat(url, dict(next=next_url))
# next URL given, use it
r = yield get_page('login?next=/hub/admin', app, cookies=cookies, allow_redirects=False)
if running and not user.active:
# ensure running
yield user.spawn()
elif user.active and not running:
# ensure not running
yield user.stop()
r = yield get_page(url, app, cookies=cookies, allow_redirects=False)
r.raise_for_status()
assert r.status_code == 302
assert r.headers['Location'].endswith('/hub/admin')
assert location == r.headers['Location']
@pytest.mark.gen_test
@@ -447,6 +494,21 @@ def test_token_auth(app):
assert r.status_code == 200
@pytest.mark.gen_test
def test_oauth_token_page(app):
name = 'token'
cookies = yield app.login_user(name)
user = app.users[orm.User.find(app.db, name)]
client = orm.OAuthClient(identifier='token')
app.db.add(client)
oauth_token = orm.OAuthAccessToken(client=client, user=user, grant_type=orm.GrantType.authorization_code)
app.db.add(oauth_token)
app.db.commit()
r = yield get_page('token', app, cookies=cookies)
r.raise_for_status()
assert r.status_code == 200
@pytest.mark.parametrize("error_status", [
503,
404,
@@ -456,3 +518,55 @@ def test_token_auth(app):
def test_proxy_error(app, error_status):
r = yield get_page('/error/%i' % error_status, app)
assert r.status_code == 200
@pytest.mark.gen_test
@pytest.mark.parametrize(
"announcements",
[
"",
"spawn",
"spawn,home,login",
"login,logout",
]
)
def test_announcements(app, announcements):
"""Test announcements on various pages"""
# Default announcement - same on all pages
ann01 = "ANNOUNCE01"
template_vars = {"announcement": ann01}
announcements = announcements.split(",")
for name in announcements:
template_vars["announcement_" + name] = "ANN_" + name
def assert_announcement(name, text):
if name in announcements:
assert template_vars["announcement_" + name] in text
assert ann01 not in text
else:
assert ann01 in text
cookies = yield app.login_user("jones")
with mock.patch.dict(
app.tornado_settings,
{"template_vars": template_vars, "spawner_class": FormSpawner},
):
r = yield get_page("login", app)
r.raise_for_status()
assert_announcement("login", r.text)
r = yield get_page("spawn", app, cookies=cookies)
r.raise_for_status()
assert_announcement("spawn", r.text)
r = yield get_page("home", app, cookies=cookies) # hub/home
r.raise_for_status()
assert_announcement("home", r.text)
# need auto_login=True to get logout page
auto_login = app.authenticator.auto_login
app.authenticator.auto_login = True
try:
r = yield get_page("logout", app, cookies=cookies)
finally:
app.authenticator.auto_login = auto_login
r.raise_for_status()
assert_announcement("logout", r.text)

View File

@@ -154,6 +154,8 @@ def test_external_proxy(request):
'zoe',
'50fia',
'秀樹',
'~TestJH',
'has@',
])
def test_check_routes(app, username, disable_check_routes):
proxy = app.proxy

View File

@@ -11,6 +11,7 @@ import sys
import tempfile
import time
from unittest import mock
from urllib.parse import urlparse
import pytest
from tornado import gen
@@ -20,7 +21,8 @@ from .. import orm
from .. import spawner as spawnermod
from ..spawner import LocalProcessSpawner, Spawner
from ..user import User
from ..utils import new_token
from ..utils import new_token, url_path_join
from .mocking import public_url
from .test_api import add_user
from .utils import async_requests
@@ -304,7 +306,7 @@ def test_spawner_reuse_api_token(db, app):
@pytest.mark.gen_test
def test_spawner_insert_api_token(app):
"""Token provided by spawner is not in the db
Insert token into db as a user-provided token.
"""
# setup: new user, double check that they don't have any tokens registered
@@ -379,3 +381,28 @@ def test_spawner_delete_server(app):
# verify that both ORM and top-level references are None
assert spawner.orm_spawner.server is None
assert spawner.server is None
@pytest.mark.parametrize(
"name",
[
"has@x",
"has~x",
"has%x",
"has%40x",
]
)
@pytest.mark.gen_test
def test_spawner_routing(app, name):
"""Test routing of names with special characters"""
db = app.db
with mock.patch.dict(app.config.LocalProcessSpawner, {'cmd': [sys.executable, '-m', 'jupyterhub.tests.mocksu']}):
user = add_user(app.db, app, name=name)
yield user.spawn()
yield wait_for_spawner(user.spawner)
yield app.proxy.add_user(user)
url = url_path_join(public_url(app, user), "test/url")
r = yield async_requests.get(url, allow_redirects=False)
r.raise_for_status()
assert r.url == url
assert r.text == urlparse(url).path

View File

@@ -282,7 +282,7 @@ class User:
@property
def escaped_name(self):
"""My name, escaped for use in URLs, cookies, etc."""
return quote(self.name, safe='@')
return quote(self.name, safe='@~')
@property
def proxy_spec(self):
@@ -295,15 +295,17 @@ class User:
@property
def domain(self):
"""Get the domain for my server."""
# FIXME: escaped_name probably isn't escaped enough in general for a domain fragment
return self.escaped_name + '.' + self.settings['domain']
# use underscore as escape char for domains
return quote(self.name).replace('%', '_').lower() + '.' + self.settings['domain']
@property
def host(self):
"""Get the *host* for my server (proto://domain[:port])"""
# FIXME: escaped_name probably isn't escaped enough in general for a domain fragment
parsed = urlparse(self.settings['subdomain_host'])
h = '%s://%s.%s' % (parsed.scheme, self.escaped_name, parsed.netloc)
h = '%s://%s' % (parsed.scheme, self.domain)
if parsed.port:
h += ':%i' % parsed.port
return h
@property

View File

@@ -19,7 +19,7 @@ import threading
import uuid
import warnings
from async_generator import aclosing, async_generator, yield_
from async_generator import aclosing, asynccontextmanager, async_generator, yield_
from tornado import gen, ioloop, web
from tornado.platform.asyncio import to_asyncio_future
from tornado.httpclient import AsyncHTTPClient, HTTPError
@@ -452,6 +452,21 @@ def maybe_future(obj):
return to_asyncio_future(gen.maybe_future(obj))
@asynccontextmanager
@async_generator
async def not_aclosing(coro):
"""An empty context manager for Python < 3.5.2
which lacks the `aclose` method on async iterators
"""
await yield_(await coro)
if sys.version_info < (3, 5, 2):
# Python 3.5.1 is missing the aclose method on async iterators,
# so we can't close them
aclosing = not_aclosing
@async_generator
async def iterate_until(deadline_future, generator):
"""An async generator that yields items from a generator

View File

@@ -83,6 +83,10 @@ for d, _, _ in os.walk('jupyterhub'):
if os.path.exists(pjoin(d, '__init__.py')):
packages.append(d.replace(os.path.sep, '.'))
with open('README.md', encoding="utf8") as f:
readme = f.read()
setup_args = dict(
name = 'jupyterhub',
scripts = glob(pjoin('scripts', '*')),
@@ -93,10 +97,11 @@ setup_args = dict(
package_data = get_package_data(),
version = ns['__version__'],
description = "JupyterHub: A multi-user server for Jupyter notebooks",
long_description = "See https://jupyterhub.readthedocs.io for more info.",
long_description = readme,
long_description_content_type = 'text/markdown',
author = "Jupyter Development Team",
author_email = "jupyter@googlegroups.com",
url = "http://jupyter.org",
url = "https://jupyter.org",
license = "BSD",
platforms = "Linux, Mac OS X",
keywords = ['Interactive', 'Interpreter', 'Shell', 'Web'],
@@ -109,6 +114,12 @@ setup_args = dict(
'Programming Language :: Python',
'Programming Language :: Python :: 3',
],
project_urls = {
'Documentation': 'https://jupyterhub.readthedocs.io',
'Funding': 'https://jupyter.org/about',
'Source': 'https://github.com/jupyterhub/jupyterhub/',
'Tracker': 'https://github.com/jupyterhub/jupyterhub/issues',
},
)
#---------------------------------------------------------------------------

View File

@@ -1,19 +1,29 @@
// Copyright (c) Jupyter Development Team.
// Distributed under the terms of the Modified BSD License.
require(["jquery", "jhapi"], function ($, JHAPI) {
"use strict";
require(["jquery", "jhapi"], function($, JHAPI) {
"use strict";
var base_url = window.jhdata.base_url;
var user = window.jhdata.user;
var api = new JHAPI(base_url);
var base_url = window.jhdata.base_url;
var user = window.jhdata.user;
var api = new JHAPI(base_url);
$("#stop").click(function () {
api.stop_server(user, {
success: function () {
$("#stop").hide();
}
});
$("#stop").click(function() {
$("#start")
.attr("disabled", true)
.attr("title", "Your server is stopping")
.click(function() {
return false;
});
api.stop_server(user, {
success: function() {
$("#start")
.text("Start My Server")
.attr("title", "Start your server")
.attr("disabled", false)
.off("click");
$("#stop").hide();
}
});
});
});

View File

@@ -12,7 +12,7 @@ require(["jquery", "jhapi", "moment"], function($, JHAPI, moment) {
// convert ISO datestamps to nice momentjs ones
el = $(el);
let m = moment(new Date(el.text().trim()));
el.text(m.isValid() ? m.fromNow() : "Never");
el.text(m.isValid() ? m.fromNow() : el.text());
});
$("#request-token-form").submit(function() {

View File

@@ -1,4 +1,7 @@
{% extends "page.html" %}
{% if announcement_home %}
{% set announcement = announcement_home %}
{% endif %}
{% block main %}
@@ -8,8 +11,8 @@
{% if user.running %}
<a id="stop" role="button" class="btn btn-lg btn-danger">Stop My Server</a>
{% endif %}
<a id="start"role="button" class="btn btn-lg btn-primary" href="{{ url }}">
{% if not user.running %}
<a id="start" role="button" class="btn btn-lg btn-primary" href="{{ url }}">
{% if not user.active %}
Start
{% endif %}
My Server

View File

@@ -1,4 +1,7 @@
{% extends "page.html" %}
{% if announcement_login %}
{% set announcement = announcement_login %}
{% endif %}
{% block login_widget %}
{% endblock %}

View File

@@ -1,8 +1,14 @@
{% extends "page.html" %}
{% if announcement_logout %}
{% set announcement = announcement_logout %}
{% endif %}
{% block main %}
<div id="logout-main" class="container">
<p>
Successfully logged out.
</p>
</div>
{% endblock %}

View File

@@ -70,7 +70,7 @@
{% else %}
admin_access: false,
{% endif %}
{% if user and user.spawner.options_form %}
{% if not no_spawner_check and user and user.spawner.options_form %}
options_form: true,
{% else %}
options_form: false,
@@ -140,6 +140,16 @@
</nav>
{% endblock %}
{% block announcement %}
{% if announcement %}
<div class="container text-center announcement">
{{ announcement | safe }}
</div>
{% endif %}
{% endblock %}
{% block main %}
{% endblock %}

View File

@@ -1,11 +1,16 @@
{% extends "page.html" %}
{% if announcement_spawn %}
{% set announcement = announcement_spawn %}
{% endif %}
{% block main %}
<div class="container">
{% block heading %}
<div class="row text-center">
<h1>Spawner options</h1>
<h1>Spawner Options</h1>
</div>
{% endblock %}
<div class="row col-sm-offset-2 col-sm-8">
{% if for_user and user.name != for_user.name -%}
<p>Spawning server for {{ for_user.name }}</p>

View File

@@ -0,0 +1,32 @@
{% extends "page.html" %}
{% block main %}
<div class="container">
<div class="row">
<div class="text-center">
{% block message %}
<p>Your server is stopping.</p>
<p>You will be able to start it again once it has finished stopping.</p>
{% endblock message %}
<p><i class="fa fa-spinner fa-pulse fa-fw fa-3x" aria-hidden="true"></i></p>
<a role="button" id="refresh" class="btn btn-lg btn-primary" href="#">refresh</a>
</div>
</div>
</div>
{% endblock %}
{% block script %}
{{ super() }}
<script type="text/javascript">
require(["jquery"], function ($) {
$("#refresh").click(function () {
window.location.reload();
})
setTimeout(function () {
window.location.reload();
}, 5000);
});
</script>
{% endblock %}

View File

@@ -73,7 +73,11 @@
{%- endif -%}
</td>
<td class="time-col col-sm-3">
{%- if token.created -%}
{{ token.created.isoformat() + 'Z' }}
{%- else -%}
N/A
{%- endif -%}
</td>
<td class="col-sm-1 text-center">
<button class="revoke-token-btn btn btn-xs btn-danger">revoke</button>
@@ -118,7 +122,11 @@
{%- endif -%}
</td>
<td class="time-col col-sm-3">
{%- if client['created'] -%}
{{ client['created'].isoformat() + 'Z' }}
{%- else -%}
N/A
{%- endif -%}
</td>
<td class="col-sm-1 text-center">
<button class="revoke-token-btn btn btn-xs btn-danger">revoke</a>