Compare commits

...

118 Commits

Author SHA1 Message Date
Min RK
8892270c24 0.8.0 2017-10-03 21:35:24 +02:00
Min RK
b928df6cba update changelog links for 0.8.0 release 2017-10-03 21:35:24 +02:00
Carol Willing
3fc74bd79e Merge pull request #1462 from minrk/proxy-docs
Document custom proxy implementations
2017-10-03 08:36:02 -07:00
Carol Willing
b34be77fec Merge pull request #1463 from minrk/auth-docs
Document auth_state
2017-10-03 08:29:45 -07:00
Min RK
d991c06098 document auth_state 2017-10-03 13:08:10 +02:00
Min RK
01a67ba156 document custom proxies 2017-10-03 12:42:52 +02:00
Min RK
8831573b6c typos in services.auth headings 2017-10-03 12:42:52 +02:00
Min RK
c5bc5411fb ignore docs/build 2017-10-03 12:42:52 +02:00
Carol Willing
a13ccd7530 Merge pull request #1461 from minrk/apache-docs
Update reverse proxy config examples
2017-10-03 02:46:27 -07:00
Min RK
e9a744e8b7 further clarify config-examples comments
per review
2017-10-03 10:19:43 +02:00
Min RK
582d43c153 add apache reverse proxy to config-examples 2017-10-02 18:18:03 +02:00
Min RK
7b5550928f mention how to generate dhparams
since we use them
2017-10-02 18:17:39 +02:00
Min RK
83920a3258 remove websocket-path-awareness from nginx config
using map, knowledge of the path is no longer necessary
2017-10-02 17:20:09 +02:00
Min RK
d1670aa443 fix mixed tabs and spaces 2017-10-02 16:19:21 +02:00
Min RK
c6f589124e Merge pull request #1458 from ryanlovett/master
Conditionally substitute $http_host for $host.
2017-09-29 16:06:56 +02:00
Carol Willing
35991e5194 Merge pull request #1455 from minrk/db-upgrade-test
Add db-upgrade test
2017-09-28 10:08:27 -07:00
Ryan Lovett
b956190393 Conditionally substitute $http_host for $host.
Necessary when using a non-standard port. Closes #1457.
2017-09-28 09:40:51 -07:00
Min RK
122c989b7a specify mysql host and port explicitly
seems to be preferring MYSQL_UNIX_PORT
2017-09-28 18:20:57 +02:00
Min RK
5602575099 move db scripts to general ci directory
- remove shell test-db-upgrade test
- run mysql with docker on Travis because the version there is too old (< 5.7)
2017-09-28 16:20:15 +02:00
Min RK
4534499aad make db scripts accept one db at a time 2017-09-28 16:20:15 +02:00
Min RK
f733a91d7c avoid key length errors with old mysql + jupyterhub 0.7 2017-09-28 16:20:15 +02:00
Min RK
bf3fa30a01 load upgrade_db_url in test 2017-09-28 16:20:15 +02:00
Min RK
2625229847 note about venv 2017-09-28 16:20:15 +02:00
Min RK
2c3eb6d0d6 only count sqlite files when using sqlite 2017-09-28 15:09:17 +02:00
Min RK
5ff98fd1a5 run upgrade-tests on travis via pytest 2017-09-28 15:09:17 +02:00
Carol Willing
f79b71727b Merge pull request #1454 from minrk/auto-login-logout
typo rendering logout page when auto_login=True
2017-09-27 10:33:42 -07:00
Min RK
d3a3b8ca19 test db-upgrade on travis 2017-09-27 19:06:54 +02:00
Min RK
df9e002b9a separate docker-db from init-db
so we don't need docker on Travis
2017-09-27 19:05:55 +02:00
Min RK
a4a2c9d068 add tests for db upgrade with mysql, postgres 2017-09-27 18:41:08 +02:00
Min RK
c453e5ad20 mysql needs an extra step to drop _server_id 2017-09-27 18:34:54 +02:00
Min RK
617b879c2a stamp version before performing upgrade-db 2017-09-27 18:34:54 +02:00
Min RK
a0042e9302 typo rendering logout page when auto_login=True
and include it in test coverage
2017-09-27 14:29:56 +02:00
Min RK
6bbfcdfe4f 0.8.0rc2 2017-09-25 11:20:01 +02:00
Min RK
25662285af Merge pull request #1442 from DeepHorizons/add_more_spawner_statsd
[WIP] Added additional statsd collection for the spawner
2017-09-25 10:43:33 +02:00
Joshua Milas
84d12e8d72 Mock out the statsd object for testing 2017-09-22 12:57:41 -04:00
Joshua Milas
c317cbce36 Added additional statsd info for the spawner
spawner.failure counter collects the number of failures for various reasons
spawner.stop timer for seeing how long it takes a user server to stop
2017-09-22 12:13:15 -04:00
Min RK
d279604fac Merge pull request #1439 from minrk/oauth-state-cookie
avoid oauth state cookie collisions
2017-09-22 17:33:27 +02:00
Min RK
70fc4ef886 test concurrent oauth login state 2017-09-21 14:38:10 +02:00
Min RK
24ff91eef5 avoid oauth state cookie collisions
in case of multiple simultaneous logins

- state arg is strictly required now
- default cookie name in case of no collision is unchanged
- in case of collision, randomize cookie name with a suffix and store cookie_name in state
- expire state cookies after 10 minutes, not 1 day
2017-09-21 14:32:47 +02:00
Min RK
afc6789c74 Merge pull request #1441 from minrk/test-trailing-slash-wtf
debug intermittent failure on Travis
2017-09-21 14:18:08 +02:00
Min RK
819e5e222a stop server before testing trailing-slash handling
ensures `/user/name` is handled by the Hub without relying on a CHP bug that was fixed in 3.0
2017-09-21 14:08:08 +02:00
Min RK
e1a4f37bbc cache pip packages on travis 2017-09-21 14:08:08 +02:00
Carol Willing
a73477feed Merge pull request #1435 from Analect/named-server-docs
Adding a short description ref starting/stopping named-servers via API
2017-09-20 21:29:11 -07:00
analect
89722ee2f3 Added note on the necessity to set c.JupyterHub.allow_named_servers = True 2017-09-20 10:27:28 +01:00
Min RK
30d4b2cef4 0.8.0rc1 2017-09-19 19:07:34 +02:00
analect
ca4fce7ffb Add Analect to contributor list 2017-09-19 16:18:09 +01:00
analect
018b2daace Fixing typo. 2017-09-19 16:17:54 +01:00
analect
fd01165cf6 Adding a short description ref starting/stopping named-servers via API 2017-09-19 14:33:20 +01:00
Carol Willing
34e4719893 Merge pull request #1434 from Analect/rest-api-named-server
Add handling for POST/DELETE of named-servers in hub API introduced in 0.8x
2017-09-19 06:17:05 -07:00
analect
c6ac9e1d15 Add handling for POST/DELETE of named-servers introduced in 0.8x 2017-09-19 13:20:15 +01:00
Min RK
70b8876239 Merge pull request #1413 from yuvipanda/memory-float
Allow non integral memory byte specifications
2017-09-18 10:50:56 +02:00
Min RK
5e34f4481a refer to self.UNIT_SUFFIXES 2017-09-18 10:10:20 +02:00
Min RK
eae5594698 byte specifications always return integers 2017-09-18 10:09:14 +02:00
Carol Willing
f02022a00c Merge pull request #1428 from minrk/default-server-name
allow default (empty) server name with named servers
2017-09-17 20:01:31 -07:00
Min RK
f964013516 exercise default server handler with named servers enabled 2017-09-17 11:55:50 +02:00
Min RK
5f7ffaf1f6 allow default (empty) server name with named servers
remove generated names behavior because it doesn't work
2017-09-17 11:47:17 +02:00
Carol Willing
0e7ccb7520 Merge pull request #1422 from minrk/lowercase-timeouts
lowercase LocalProcessSpawner timeouts
2017-09-15 08:11:15 -07:00
Min RK
c9db504a49 Merge pull request #1424 from phill84/bugfix/control-panel-button-height
wrap control panel button in a span
2017-09-15 06:56:41 -07:00
Jiening Wen
716677393e wrap control panel button in a span
make sure the same style is applied to all buttons in header-container
2017-09-15 15:29:38 +02:00
Min RK
ba8484f161 lowercase LocalProcessSpawner timeouts
traitlets doesn't like uppercase configurables
2017-09-15 12:07:03 +02:00
Yuvi Panda
ceec84dbb4 Merge pull request #1417 from minrk/test-delete
test restoring and deleting spawners while the Hub is down
2017-09-14 12:54:38 -07:00
Yuvi Panda
f2a83ec846 Merge pull request #1418 from minrk/oauth-state-boogaloo
Fixes (and tests!) for oauth state handling
2017-09-14 12:43:39 -07:00
Carol Willing
7deea6083a Merge pull request #1416 from minrk/traitlets-log
avoid error if another traitlets Application is initialized
2017-09-14 10:50:52 -07:00
Min RK
a169ff3548 test oauth redirects
include coverage of state handling
2017-09-14 16:06:57 +02:00
Min RK
f84a88da21 fix oauth state redirect
check for HubOAuth, not HubOAuthenticated
2017-09-14 16:06:36 +02:00
Min RK
eecec7183e fix clearing of oauth state cookie
missing path arg
2017-09-14 16:01:34 +02:00
Min RK
f11705ee26 delete service.server from db when they stop
same ondelete='SET NULL' as on spawner.server
2017-09-14 13:30:38 +02:00
Min RK
78ac5abf23 test restoring and deleting spawners while the Hub is down
- set ONDELETE='set null' on spawner->server relation (fixes error when deleting servers that stopped)
- set `spawner.server = None`, which is not triggered when deleting orm_spawner.server
2017-09-14 13:16:29 +02:00
Min RK
2beeaa0932 avoid error if another traitlets Application is initialized
encountered when doing db debugging in IPython
2017-09-14 11:37:34 +02:00
yuvipanda
90cb8423bc Allow non integral memory byte specifications 2017-09-12 16:19:10 -07:00
Min RK
3b07bd286b Merge pull request #1408 from DeepHorizons/update_service_doc
Updated the reference flask service example to include token auth
2017-09-12 23:49:55 +02:00
Joshua Milas
73564b97ea Updated the whoami-flask example 2017-09-11 12:16:17 -04:00
Joshua Milas
65cad5efad Updated the reference flask example to include token auth 2017-09-11 00:09:57 -04:00
Carol Willing
52eb627cd6 Merge pull request #1407 from willingc/spawn-hooks
Add pre/post spawn hooks to docs
2017-09-08 13:01:56 -07:00
Carol Willing
506e568a9a Add pre/post spawn hooks to docs 2017-09-08 13:00:14 -07:00
Min RK
6c89de082f 0.8.0b5 2017-09-08 11:19:25 +02:00
Carol Willing
6fb31cc613 Merge pull request #1393 from minrk/spawn-future
improve reporting of spawn failure
2017-09-07 10:20:38 -07:00
Carol Willing
cfb22baf05 Merge pull request #1399 from minrk/trailing-slash
add trailing slash on /user/name
2017-09-07 09:59:58 -07:00
Min RK
2d0c1ff0a8 Merge pull request #1404 from minrk/sqla-11
we require sqlalchemy 1.1
2017-09-07 16:48:13 +02:00
Min RK
7789e13879 we require sqlalchemy 1.1
for enum support

[ref](http://docs.sqlalchemy.org/en/latest/changelog/changelog_11.html#change-9d6d98d7acabc8564b8eebb11c28a624)
2017-09-07 15:10:48 +02:00
Yuvi Panda
f7b90e2c09 Merge pull request #1400 from minrk/auth-custom-html
allow Authenticator.custom_html to be HTML
2017-09-06 11:56:14 -07:00
Carol Willing
ccb29167dd Merge pull request #1392 from minrk/rm-extra-log
update docs to preferred method of writing to log file
2017-09-06 07:32:25 -07:00
Min RK
4ef1eca3c9 allow Authenticator.custom_html to be HTML 2017-09-06 15:14:26 +02:00
Min RK
c26ede30b9 Point users to /hub/home to retry spawn on spawn failure 2017-09-06 15:03:26 +02:00
Min RK
64c69a3164 update docs to preferred method of writing to log file
extra_log_files config is unreliable and doesn't capture all output.

Piping output is much more robust and reliable.
2017-09-06 14:38:33 +02:00
Min RK
ad7867ff11 add trailing slash on /user/name
proxies may not route `/user/name` correctly, only `/user/name/...`, so make sure that `/user/name` is redirected to `/user/name/`

this manifests as a redirect loop between /user/name and /hub/user/name when a route exists but /user/name is still
being routed to the Hub
2017-09-06 12:37:22 +02:00
Yuvi Panda
14fc1588f8 Merge pull request #1380 from minrk/cull-idle-users
add --cull-users to cull_idle_servers
2017-09-05 12:48:24 -07:00
Min RK
7e5a925f4f raise original spawn failure on implicit spawn
so the error message is the same, however it was arrived at.

potential downside: it could look like the current request is spawning and failing,
rather than the reality that a previous spawn failed and we are just re-presenting the earlier error.
It's possible for there to have been a long time in between spawn and error.
2017-09-04 14:27:01 +02:00
Min RK
3c61e422da prevent implicit spawn on /user/:name if previous spawn failed
require users to visit /hub/home and click 'Start My Server' to get a new server

Visits to /hub/user/:name will get an error if the previous spawn failed,
rather than triggering a new spawn.
This should guarantee that a user sees an error if their spawn failed,
regardless of when the failure occurred and how long it took.
Some cases of slow errors could result in triggering a new spawn indefinitely without
the user seeing an error message.

/hub/spawn was a simple redirect to /user/:name in the absence of a spawn form,
but now clears the `_spawn_future` prior to redirect
to signal that a new spawn has been explicitly requested in the case of a prior failure.
2017-09-04 14:17:24 +02:00
Min RK
0e2cf37981 point to single-user logs when spawner fails to start 2017-09-04 13:14:07 +02:00
Min RK
503d5e389f render pending page if triggered spawn doesn't finish
instead of redirecting, which starts redirect loop counter
2017-09-04 12:02:40 +02:00
Min RK
7b1e61ab2c allow waiting for pending spawn via spawner._spawn_future
avoids losing errors when visiting `/hub/user/:name` during a pending spawn
2017-09-04 11:53:42 +02:00
Min RK
4692d6638d 0.8.0b4 2017-08-31 16:47:12 +02:00
Carol Willing
7829070e1c Merge pull request #1383 from minrk/singleuser-token-cookie
set cookie on singleuser when authenticated with ?token=...
2017-08-31 09:31:35 -05:00
Min RK
5e4b935322 only HubOAuth can set token cookie 2017-08-31 16:04:54 +02:00
Carol Willing
4c445c7a88 Add jencabral to contributors 2017-08-31 07:52:08 -05:00
Carol Willing
8e2965df6a Merge pull request #1384 from minrk/spawner-db
restore db access on Spawner
2017-08-31 07:50:18 -05:00
Min RK
7a41d24606 set cookie on singleuser when authenticated with ?token=...
Allows `/user/name?token=...` URL to login users for more than one request.

matches token behavior of regular notebook server.
2017-08-31 13:53:48 +02:00
Min RK
5f84a006dc restore db access on Spawner
Shouldn’t be strictly necessary, but doesn’t hurt
2017-08-31 10:03:44 +02:00
Carol Willing
e19296a230 Merge pull request #1382 from minrk/request-token
let admins request tokens for other users
2017-08-31 00:04:59 -04:00
Min RK
89ba97f413 exercise more token API cases
separate parametrize cases for clarity
2017-08-30 14:38:00 +02:00
Min RK
fe2157130b Merge pull request #1381 from minrk/log-fix
fix logging error when login_user is called with no form data and login fails
2017-08-30 14:09:52 +02:00
Min RK
e3b17e8176 Merge pull request #1379 from ding-c3/master
Pass timeout value to exponential_backoff in wait functions
2017-08-30 14:05:42 +02:00
Min RK
027f2f95c6 let admins request tokens for other users 2017-08-30 12:31:41 +02:00
Min RK
210975324a fix logging error when login_user is called with no form data and login fails 2017-08-30 11:31:44 +02:00
Min RK
f9a90d2494 add --cull-users to cull_idle_servers
allows deleting idle users in addition to servers for temp-user cases such as binder/tmpnb
2017-08-30 10:31:44 +02:00
Alex Ding
932689f2f8 Pass timeout value to exponential_backoff in wait functions 2017-08-29 17:45:21 -07:00
Min RK
f91e911d1a Merge pull request #1375 from lsst-sqre/master
Prevent "extra" from being used before definition.
2017-08-29 08:36:25 -04:00
Adam Thornton
b75cce857e Merge pull request #1 from lsst-sqre/ticket/DM-11663
Fix "extra" so it isn't used before definition.
2017-08-28 19:00:17 -04:00
adam
62f00690f7 Fix "extra" so it isn't used before definition. 2017-08-28 15:58:31 -07:00
Yuvi Panda
f700ba4154 Merge pull request #1368 from minrk/check-version-error
Provide more detailed error message in case of version mismatch
2017-08-28 13:27:00 -04:00
Min RK
8b91842eae Merge pull request #1369 from minrk/template-typo
typo in navbar template
2017-08-27 16:41:44 -04:00
Min RK
80a9eb93f4 Merge pull request #1370 from yuvipanda/button-roles
Add role=button attribute to all <a> & <span> buttons
2017-08-27 15:39:04 -04:00
yuvipanda
e1deecbbfb Add role=button attribute to all <a> & <span> buttons
Simple accessibility win - screen readers will now be
able to properly present these as buttons rather than links.
2017-08-27 11:17:22 -04:00
Min RK
d3142704b7 typo in navbar template
mixed up elements causing funky alignment on some pages
2017-08-26 22:42:17 -04:00
Min RK
447edd081a Provide more detailed error message in case of version mismatch
this is the most likely cause of redirect loops when using docker,
so record the spawner version and check it when a redirect is detected.

In the event of a redirect and mismatch, fail with a message explaining the version mismatch and how to fix it.
2017-08-26 22:41:24 -04:00
Min RK
e1531ec277 Merge pull request #1366 from minrk/typo
typo in proxy recovery
2017-08-26 20:21:51 -04:00
Min RK
d12ac4b1f6 typo in proxy recovery
should have been the dict of instantiated services, not the list of service configurations
2017-08-26 15:25:17 -04:00
54 changed files with 1303 additions and 237 deletions

.gitignore

@@ -6,6 +6,7 @@ node_modules
/build
dist
docs/_build
docs/build
docs/source/_static/rest-api
.ipynb_checkpoints
# ignore config file at the top-level of the repo


@@ -1,5 +1,7 @@
language: python
sudo: false
cache:
- pip
python:
- nightly
- 3.6
@@ -9,8 +11,8 @@ env:
global:
- ASYNC_TEST_TIMEOUT=15
services:
- mysql
- postgresql
- postgres
- docker
# installing dependencies
before_install:
@@ -19,10 +21,12 @@ before_install:
- npm install -g configurable-http-proxy
- |
if [[ $JUPYTERHUB_TEST_DB_URL == mysql* ]]; then
mysql -e 'CREATE DATABASE jupyterhub CHARACTER SET utf8 COLLATE utf8_general_ci;'
unset MYSQL_UNIX_PORT
DB=mysql bash ci/docker-db.sh
DB=mysql bash ci/init-db.sh
pip install 'mysql-connector<2.2'
elif [[ $JUPYTERHUB_TEST_DB_URL == postgresql* ]]; then
psql -c 'create database jupyterhub;' -U postgres
DB=postgres bash ci/init-db.sh
pip install psycopg2
fi
install:
@@ -32,6 +36,20 @@ install:
# running tests
script:
- |
if [[ ! -z "$JUPYTERHUB_TEST_DB_URL" ]]; then
# if testing upgrade-db, run `jupyterhub token` with 0.7
# to initialize an old db. Used in upgrade-tests
export JUPYTERHUB_TEST_UPGRADE_DB_URL=${JUPYTERHUB_TEST_DB_URL}_upgrade
# use virtualenv instead of venv because venv doesn't work here
python -m pip install virtualenv
python -m virtualenv old-hub-env
./old-hub-env/bin/python -m pip install jupyterhub==0.7.2 psycopg2 'mysql-connector<2.2'
./old-hub-env/bin/jupyterhub token kaylee \
--JupyterHub.db_url=$JUPYTERHUB_TEST_UPGRADE_DB_URL \
--Authenticator.whitelist="{'kaylee'}" \
--JupyterHub.authenticator_class=jupyterhub.auth.Authenticator
fi
- pytest -v --maxfail=2 --cov=jupyterhub jupyterhub/tests
after_success:
- codecov
@@ -42,8 +60,12 @@ matrix:
- python: 3.6
env: JUPYTERHUB_TEST_SUBDOMAIN_HOST=http://localhost.jovyan.org:8000
- python: 3.6
env: JUPYTERHUB_TEST_DB_URL=mysql+mysqlconnector://root@127.0.0.1/jupyterhub
env:
- MYSQL_HOST=127.0.0.1
- MYSQL_TCP_PORT=13306
- JUPYTERHUB_TEST_DB_URL=mysql+mysqlconnector://root@127.0.0.1:$MYSQL_TCP_PORT/jupyterhub
- python: 3.6
env: JUPYTERHUB_TEST_DB_URL=postgresql://postgres@127.0.0.1/jupyterhub
env:
- JUPYTERHUB_TEST_DB_URL=postgresql://postgres@127.0.0.1/jupyterhub
allow_failures:
- python: nightly


@@ -11,6 +11,7 @@ graft jupyterhub
graft scripts
graft share
graft singleuser
graft ci
# Documentation
graft docs

ci/docker-db.sh

@@ -0,0 +1,50 @@
#!/usr/bin/env bash
# source this file to setup postgres and mysql
# for local testing (as similar as possible to docker)
set -e
export MYSQL_HOST=127.0.0.1
export MYSQL_TCP_PORT=${MYSQL_TCP_PORT:-13306}
export PGHOST=127.0.0.1
NAME="hub-test-$DB"
DOCKER_RUN="docker run --rm -d --name $NAME"
docker rm -f "$NAME" 2>/dev/null || true
case "$DB" in
"mysql")
RUN_ARGS="-e MYSQL_ALLOW_EMPTY_PASSWORD=1 -p $MYSQL_TCP_PORT:3306 mysql:5.7"
CHECK="mysql --host $MYSQL_HOST --port $MYSQL_TCP_PORT --user root -e \q"
;;
"postgres")
RUN_ARGS="-p 5432:5432 postgres:9.5"
CHECK="psql --user postgres -c \q"
;;
*)
echo '$DB must be mysql or postgres'
exit 1
esac
$DOCKER_RUN $RUN_ARGS
echo -n "waiting for $DB "
for i in {1..60}; do
if $CHECK; then
echo 'done'
break
else
echo -n '.'
sleep 1
fi
done
$CHECK
echo -e "
Set these environment variables:
export MYSQL_HOST=127.0.0.1
export MYSQL_TCP_PORT=$MYSQL_TCP_PORT
export PGHOST=127.0.0.1
"

ci/init-db.sh

@@ -0,0 +1,27 @@
#!/usr/bin/env bash
# initialize jupyterhub databases for testing
set -e
MYSQL="mysql --user root --host $MYSQL_HOST --port $MYSQL_TCP_PORT -e "
PSQL="psql --user postgres -c "
case "$DB" in
"mysql")
EXTRA_CREATE='CHARACTER SET utf8 COLLATE utf8_general_ci'
SQL="$MYSQL"
;;
"postgres")
SQL="$PSQL"
;;
*)
echo '$DB must be mysql or postgres'
exit 1
esac
set -x
$SQL 'DROP DATABASE jupyterhub;' 2>/dev/null || true
$SQL "CREATE DATABASE jupyterhub ${EXTRA_CREATE};"
$SQL 'DROP DATABASE jupyterhub_upgrade;' 2>/dev/null || true
$SQL "CREATE DATABASE jupyterhub_upgrade ${EXTRA_CREATE};"


@@ -203,6 +203,43 @@ paths:
description: The user's notebook server has stopped
'202':
description: The user's notebook server has not yet stopped as it is taking a while to stop
/users/{name}/servers/{server_name}:
post:
summary: Start a user's single-user named-server notebook server
parameters:
- name: name
description: username
in: path
required: true
type: string
- name: server_name
description: name given to a named-server
in: path
required: true
type: string
responses:
'201':
description: The user's notebook named-server has started
'202':
description: The user's notebook named-server has not yet started, but has been requested
delete:
summary: Stop a user's named-server
parameters:
- name: name
description: username
in: path
required: true
type: string
- name: server_name
description: name given to a named-server
in: path
required: true
type: string
responses:
'204':
description: The user's notebook named-server has stopped
'202':
description: The user's notebook named-server has not yet stopped as it is taking a while to stop
/users/{name}/admin-access:
post:
summary: Grant admin access to this user's notebook server


@@ -17,7 +17,7 @@ Module: :mod:`jupyterhub.services.auth`
:members:
:class:`HubOAuth`
----------------
-----------------
.. autoconfigurable:: HubOAuth
:members:
@@ -30,7 +30,7 @@ Module: :mod:`jupyterhub.services.auth`
:members:
:class:`HubOAuthenticated`
-------------------------
--------------------------
.. autoclass:: HubOAuthenticated


@@ -5,7 +5,9 @@ its link will bring up a GitHub listing of changes. Use `git log` on the
command line for details.
## [Unreleased] 0.8
## [Unreleased]
## [0.8.0] 2017-10-03
JupyterHub 0.8 is a big release!
@@ -23,7 +25,7 @@ in your Dockerfile is sufficient.
#### Added
- JupyterHub now defined a `.Proxy` API for custom
- JupyterHub now defined a `Proxy` API for custom
proxy implementations other than the default.
The defaults are unchanged,
but configuration of the proxy is now done on the `ConfigurableHTTPProxy` class instead of the top-level JupyterHub.
@@ -32,7 +34,7 @@ in your Dockerfile is sufficient.
(anything that uses HubAuth)
can now accept token-authenticated requests via the Authentication header.
- Authenticators can now store state in the Hub's database.
To do so, the `.authenticate` method should return a dict of the form
To do so, the `authenticate` method should return a dict of the form
```python
{
@@ -233,7 +235,8 @@ Fix removal of `/login` page in 0.4.0, breaking some OAuth providers.
First preview release
[Unreleased]: https://github.com/jupyterhub/jupyterhub/compare/0.7.2...HEAD
[Unreleased]: https://github.com/jupyterhub/jupyterhub/compare/0.8.0...HEAD
[0.8.0]: https://github.com/jupyterhub/jupyterhub/compare/0.7.2...0.8.0
[0.7.2]: https://github.com/jupyterhub/jupyterhub/compare/0.7.1...0.7.2
[0.7.1]: https://github.com/jupyterhub/jupyterhub/compare/0.7.0...0.7.1
[0.7.0]: https://github.com/jupyterhub/jupyterhub/compare/0.6.1...0.7.0


@@ -3,6 +3,7 @@
Project Jupyter thanks the following people for their help and
contribution on JupyterHub:
- Analect
- anderbubble
- apetresc
- barrachri
@@ -31,6 +32,7 @@ contribution on JupyterHub:
- JamiesHQ
- jbweston
- jdavidheiser
- jencabral
- jhamrick
- josephtate
- kinuax


@@ -84,6 +84,7 @@ class DictionaryAuthenticator(Authenticator):
return data['username']
```
#### Normalize usernames
Since the Authenticator and Spawner both use the same username,
@@ -116,6 +117,7 @@ To only allow usernames that start with 'w':
c.Authenticator.username_pattern = r'w.*'
```
### How to write a custom authenticator
You can use custom Authenticator subclasses to enable authentication
@@ -123,6 +125,11 @@ via other mechanisms. One such example is using [GitHub OAuth][].
Because the username is passed from the Authenticator to the Spawner,
a custom Authenticator and Spawner are often used together.
For example, the Authenticator methods, [pre_spawn_start(user, spawner)][]
and [post_spawn_stop(user, spawner)][], are hooks that can be used to do
auth-related startup (e.g. opening PAM sessions) and cleanup
(e.g. closing PAM sessions).
See a list of custom Authenticators [on the wiki](https://github.com/jupyterhub/jupyterhub/wiki/Authenticators).
@@ -130,6 +137,77 @@ If you are interested in writing a custom authenticator, you can read
[this tutorial](http://jupyterhub-tutorial.readthedocs.io/en/latest/authenticators.html).
### Authentication state
JupyterHub 0.8 adds the ability to persist state related to authentication,
such as auth-related tokens.
If such state should be persisted, `.authenticate()` should return a dictionary of the form:
```python
{
'username': 'name',
'auth_state': {
'key': 'value',
}
}
```
where `username` is the username that has been authenticated,
and `auth_state` is any JSON-serializable dictionary.
Because `auth_state` may contain sensitive information,
it is encrypted before being stored in the database.
To store auth_state, two conditions must be met:
1. persisting auth state must be enabled explicitly via configuration
```python
c.Authenticator.enable_auth_state = True
```
2. encryption must be enabled by the presence of `JUPYTERHUB_CRYPT_KEY` environment variable,
which should be a hex-encoded 32-byte key.
For example:
```bash
export JUPYTERHUB_CRYPT_KEY=$(openssl rand -hex 32)
```
JupyterHub uses [Fernet](https://cryptography.io/en/latest/fernet/) to encrypt auth_state.
To facilitate key-rotation, `JUPYTERHUB_CRYPT_KEY` may be a semicolon-separated list of encryption keys.
If there are multiple keys present, the **first** key is always used to persist any new auth_state.
#### Using auth_state
Typically, if `auth_state` is persisted it is desirable to affect the Spawner environment in some way.
This may mean defining environment variables, placing certificates in the user's home directory, etc.
The `Authenticator.pre_spawn_start` method can be used to pass information from authenticator state
to Spawner environment:
```python
class MyAuthenticator(Authenticator):
@gen.coroutine
def authenticate(self, handler, data=None):
username = yield identify_user(handler, data)
upstream_token = yield token_for_user(username)
return {
'username': username,
'auth_state': {
'upstream_token': upstream_token,
},
}
@gen.coroutine
def pre_spawn_start(self, user, spawner):
"""Pass upstream_token to spawner via environment variable"""
auth_state = yield user.get_auth_state()
if not auth_state:
# auth_state not enabled
return
spawner.environment['UPSTREAM_TOKEN'] = auth_state['upstream_token']
```
## JupyterHub as an OAuth provider
Beginning with version 0.8, JupyterHub is an OAuth provider.
@@ -140,3 +218,5 @@ Beginning with version 0.8, JupyterHub is an OAuth provider.
[OAuth]: https://en.wikipedia.org/wiki/OAuth
[GitHub OAuth]: https://developer.github.com/v3/oauth/
[OAuthenticator]: https://github.com/jupyterhub/oauthenticator
[pre_spawn_start(user, spawner)]: http://jupyterhub.readthedocs.io/en/latest/api/auth.html#jupyterhub.auth.Authenticator.pre_spawn_start
[post_spawn_stop(user, spawner)]: http://jupyterhub.readthedocs.io/en/latest/api/auth.html#jupyterhub.auth.Authenticator.post_spawn_stop


@@ -49,9 +49,6 @@ c.JupyterHub.cookie_secret_file = pjoin(runtime_dir, 'cookie_secret')
c.JupyterHub.db_url = pjoin(runtime_dir, 'jupyterhub.sqlite')
# or `--db=/path/to/jupyterhub.sqlite` on the command-line
# put the log file in /var/log
c.JupyterHub.extra_log_file = '/var/log/jupyterhub.log'
# use GitHub OAuthenticator for local users
c.JupyterHub.authenticator_class = 'oauthenticator.LocalGitHubOAuthenticator'
c.GitHubOAuthenticator.oauth_callback_url = os.environ['OAUTH_CALLBACK_URL']
@@ -79,10 +76,11 @@ export GITHUB_CLIENT_ID=github_id
export GITHUB_CLIENT_SECRET=github_secret
export OAUTH_CALLBACK_URL=https://example.com/hub/oauth_callback
export CONFIGPROXY_AUTH_TOKEN=super-secret
jupyterhub -f /etc/jupyterhub/jupyterhub_config.py
# append log output to log file /var/log/jupyterhub.log
jupyterhub -f /etc/jupyterhub/jupyterhub_config.py &>> /var/log/jupyterhub.log
```
## Using nginx reverse proxy
## Using a reverse proxy
In the following example, we show configuration files for a JupyterHub server
running locally on port `8000` but accessible from the outside on the standard
@@ -93,9 +91,9 @@ satisfy the following:
* JupyterHub is running on a server, accessed *only* via `HUB.DOMAIN.TLD:443`
* On the same machine, `NO_HUB.DOMAIN.TLD` strictly serves different content,
also on port `443`
* `nginx` is used to manage the web servers / reverse proxy (which means that
only nginx will be able to bind two servers to `443`)
* After testing, the server in question should be able to score an A+ on the
* `nginx` or `apache` is used as the public access point (which means that
only nginx/apache will bind to `443`)
* After testing, the server in question should be able to score at least an A on the
Qualys SSL Labs [SSL Server Test](https://www.ssllabs.com/ssltest/)
Let's start out with needed JupyterHub configuration in `jupyterhub_config.py`:
@@ -105,30 +103,47 @@ Let's start out with needed JupyterHub configuration in `jupyterhub_config.py`:
c.JupyterHub.ip = '127.0.0.1'
```
For high-quality SSL configuration, we also generate Diffie-Hellman parameters.
This can take a few minutes:
```bash
openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096
```
### nginx
The **`nginx` server config file** is fairly standard fare except for the two
`location` blocks within the `HUB.DOMAIN.TLD` config file:
```bash
# top-level http config for websocket headers
# If Upgrade is defined, Connection = upgrade
# If Upgrade is empty, Connection = close
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# HTTP server to redirect all 80 traffic to SSL/HTTPS
server {
listen 80;
server_name HUB.DOMAIN.TLD;
listen 80;
server_name HUB.DOMAIN.TLD;
# Tell all requests to port 80 to be 302 redirected to HTTPS
return 302 https://$host$request_uri;
# Tell all requests to port 80 to be 302 redirected to HTTPS
return 302 https://$host$request_uri;
}
# HTTPS server to handle JupyterHub
server {
listen 443;
ssl on;
listen 443;
ssl on;
server_name HUB.DOMAIN.TLD;
server_name HUB.DOMAIN.TLD;
ssl_certificate /etc/letsencrypt/live/HUB.DOMAIN.TLD/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/HUB.DOMAIN.TLD/privkey.pem;
ssl_certificate /etc/letsencrypt/live/HUB.DOMAIN.TLD/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/HUB.DOMAIN.TLD/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
@@ -138,37 +153,28 @@ server {
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
# Managing literal requests to the JupyterHub front end
location / {
proxy_pass https://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Managing literal requests to the JupyterHub front end
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# websocket headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
# Managing WebHook/Socket requests between hub user servers and external proxy
location ~* /(api/kernels/[^/]+/(channels|iopub|shell|stdin)|terminals/websocket)/? {
proxy_pass https://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
# Managing requests to verify letsencrypt host
# Managing requests to verify letsencrypt host
location ~ /.well-known {
allow all;
allow all;
}
}
```
If `nginx` is not running on port 443, substitute `$http_host` for `$host` on
the lines setting the `Host` header.
`nginx` will now be the front facing element of JupyterHub on `443` which means
it is also free to bind other servers, like `NO_HUB.DOMAIN.TLD` to the same port
on the same machine and network interface. In fact, one can simply use the same
@@ -177,35 +183,90 @@ of the site as well as the applicable location call:
```bash
server {
listen 80;
server_name NO_HUB.DOMAIN.TLD;
listen 80;
server_name NO_HUB.DOMAIN.TLD;
# Tell all requests to port 80 to be 302 redirected to HTTPS
return 302 https://$host$request_uri;
# Tell all requests to port 80 to be 302 redirected to HTTPS
return 302 https://$host$request_uri;
}
server {
listen 443;
ssl on;
listen 443;
ssl on;
# INSERT OTHER SSL PARAMETERS HERE AS ABOVE
# INSERT OTHER SSL PARAMETERS HERE AS ABOVE
# SSL cert may differ
# Set the appropriate root directory
root /var/www/html
# Set the appropriate root directory
root /var/www/html
# Set URI handling
location / {
try_files $uri $uri/ =404;
}
# Set URI handling
location / {
try_files $uri $uri/ =404;
}
# Managing requests to verify letsencrypt host
# Managing requests to verify letsencrypt host
location ~ /.well-known {
allow all;
allow all;
}
}
```
Now just restart `nginx`, restart the JupyterHub, and enjoy accessing
Now restart `nginx`, restart the JupyterHub, and enjoy accessing
`https://HUB.DOMAIN.TLD` while serving other content securely on
`https://NO_HUB.DOMAIN.TLD`.
### Apache
As with nginx above, you can use [Apache](https://httpd.apache.org) as the reverse proxy.
First, we will need to enable the Apache modules that we are going to use:
```bash
a2enmod ssl rewrite proxy proxy_http proxy_wstunnel
```
Our Apache configuration is equivalent to the nginx configuration above:
- Redirect HTTP to HTTPS
- Good SSL Configuration
- Support for websockets on any proxied URL
- JupyterHub is running locally at http://127.0.0.1:8000
```bash
# redirect HTTP to HTTPS
Listen 80
<VirtualHost HUB.DOMAIN.TLD:80>
ServerName HUB.DOMAIN.TLD
Redirect / https://HUB.DOMAIN.TLD/
</VirtualHost>
Listen 443
<VirtualHost HUB.DOMAIN.TLD:443>
ServerName HUB.DOMAIN.TLD
# configure SSL
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/HUB.DOMAIN.TLD/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/HUB.DOMAIN.TLD/privkey.pem
SSLProtocol All -SSLv2 -SSLv3
SSLOpenSSLConfCmd DHParameters /etc/ssl/certs/dhparam.pem
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
# Use RewriteEngine to handle websocket connection upgrades
RewriteEngine On
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule /(.*) ws://127.0.0.1:8000/$1 [P,L]
<Location "/">
# preserve Host header to avoid cross-origin problems
ProxyPreserveHost on
# proxy to JupyterHub
ProxyPass http://127.0.0.1:8000/
ProxyPassReverse http://127.0.0.1:8000/
</Location>
</VirtualHost>
```


@@ -9,6 +9,7 @@ Technical Reference
authenticators
spawners
services
proxy
rest
upgrading
config-examples


@@ -0,0 +1,183 @@
# Writing a custom Proxy implementation
JupyterHub 0.8 introduced the ability to write a custom implementation of the proxy.
This enables deployments with different needs than the default proxy,
configurable-http-proxy (CHP).
CHP is a single-process nodejs proxy that the Hub manages by default as a subprocess
(it can be run externally, as well, and typically is in production deployments).
The upside to CHP, and why we use it by default, is that it's easy to install and run (if you have nodejs, you are set!).
The downsides are that it's a single process and does not support any persistence of the routing table.
So if the proxy process dies, your whole JupyterHub instance is inaccessible until the Hub notices, restarts the proxy, and restores the routing table.
For deployments that want to avoid such a single point of failure,
or leverage existing proxy infrastructure in their chosen deployment (such as Kubernetes ingress objects),
the Proxy API provides a way to do that.
In general, for a proxy to be usable by JupyterHub, it must:
1. support websockets without prior knowledge of the URL where websockets may occur
2. support trie-based routing (i.e. allow different routes on `/foo` and `/foo/bar` and route based on specificity)
3. adding or removing a route should not cause existing connections to drop
Optionally, if the JupyterHub deployment is to use host-based routing,
the Proxy must additionally support routing based on the Host of the request.
## Subclassing Proxy
To start, any Proxy implementation should subclass the base Proxy class,
as is done with custom Spawners and Authenticators.
```python
from jupyterhub.proxy import Proxy
class MyProxy(Proxy):
"""My Proxy implementation"""
...
```
## Starting and stopping the proxy
If your proxy should be launched when the Hub starts, you must define how to start and stop your proxy:
```python
from tornado import gen
class MyProxy(Proxy):
...
@gen.coroutine
def start(self):
"""Start the proxy"""
@gen.coroutine
def stop(self):
"""Stop the proxy"""
```
These methods **may** be coroutines.
`c.Proxy.should_start` is a configurable flag that determines whether the Hub should call these methods when the Hub itself starts and stops.
### Purely external proxies
Probably most custom proxies will be externally managed,
such as Kubernetes ingress-based implementations.
In this case, you do not need to define `start` and `stop`.
To disable the methods, you can define `should_start = False` at the class level:
```python
class MyProxy(Proxy):
should_start = False
```
## Adding and removing routes
At its most basic, a Proxy implementation defines a mechanism to add, remove, and retrieve routes.
A proxy that implements these three methods is complete.
Each of these methods **may** be a coroutine.
**Definition:** routespec
A routespec, which will appear in these methods, is a string describing a route to be proxied,
such as `/user/name/`. A routespec will:
1. always end with `/`
2. always start with `/` if it is a path-based route `/proxy/path/`
3. precede the leading `/` with a host for host-based routing, e.g. `host.tld/proxy/path/`
### Adding a route
When adding a route, JupyterHub may pass a JSON-serializable dict as a `data` argument
that should be attached to the proxy route.
When that route is retrieved, the `data` argument should be returned as well.
If your proxy implementation doesn't support storing data attached to routes,
then your Python wrapper may have to handle storing the `data` piece itself,
e.g. in a simple file or database.
```python
@gen.coroutine
def add_route(self, routespec, target, data):
"""Proxy `routespec` to `target`.
Store `data` associated with the routespec
for retrieval later.
"""
```
Adding a route for a user looks like this:
```python
proxy.add_route('/user/pgeorgiou/', 'http://127.0.0.1:1227',
{'user': 'pgeorgiou'})
```
### Removing routes
`delete_route()` is given a routespec to delete.
If there is no such route, `delete_route` should still succeed,
but a warning may be issued.
```python
@gen.coroutine
def delete_route(self, routespec):
"""Delete the route"""
```
### Retrieving routes
For retrieval, you only *need* to implement a single method that retrieves all routes.
The return value for this function should be a dictionary, keyed by `routespec`,
of dicts whose keys are the same three arguments passed to `add_route`
(`routespec`, `target`, `data`).
```python
@gen.coroutine
def get_all_routes(self):
"""Return all routes, keyed by routespec""""
```
```python
{
'/proxy/path/': {
'routespec': '/proxy/path/',
'target': 'http://...',
'data': {},
},
}
```
#### Note on activity tracking
JupyterHub can track activity of users, for use in services such as culling idle servers.
As of JupyterHub 0.8, this activity tracking is the responsibility of the proxy.
If your proxy implementation can track activity to endpoints,
it may add a `last_activity` key to the `data` of routes retrieved in `.get_all_routes()`.
If present, the value of `last_activity` should be an [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) UTC date string:
```python
{
'/user/pgeorgiou/': {
'routespec': '/user/pgeorgiou/',
'target': 'http://127.0.0.1:1227',
'data': {
'user': 'pgeorgiou',
'last_activity': '2017-10-03T10:33:49.570Z',
},
},
}
```
If the proxy does not track activity, then only activity to the Hub itself is tracked,
and services such as cull-idle will not work.
Now that `notebook-5.0` tracks activity internally,
we can retrieve activity information from the single-user servers instead,
removing the need to track activity in the proxy.
But this is not yet implemented in JupyterHub 0.8.0.
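Putting the pieces above together, a purely external proxy that keeps its routing table in memory could look roughly like the sketch below. This is only an illustration: `MyProxy` and its dict-based route store are hypothetical, and a real implementation would configure an actual proxy backend inside each method rather than merely recording the route.
```python
from tornado import gen

from jupyterhub.proxy import Proxy


class MyProxy(Proxy):
    """Toy externally-managed proxy that keeps its routing table in a dict."""

    should_start = False  # the proxy itself is managed outside the Hub

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # routespec -> {'routespec': ..., 'target': ..., 'data': ...}
        self._routes = {}

    @gen.coroutine
    def add_route(self, routespec, target, data):
        """Proxy `routespec` to `target`, storing `data` for later retrieval."""
        # a real implementation would configure the proxy backend here
        self._routes[routespec] = {
            'routespec': routespec,
            'target': target,
            'data': data,
        }

    @gen.coroutine
    def delete_route(self, routespec):
        """Delete the route; succeed even if it does not exist."""
        # a real implementation might also warn when the route is unknown
        self._routes.pop(routespec, None)

    @gen.coroutine
    def get_all_routes(self):
        """Return all routes, keyed by routespec."""
        return dict(self._routes)
```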


@@ -119,6 +119,55 @@ token does **not** authorize access to the [Jupyter Notebook REST API][]
provided by notebook servers managed by JupyterHub. A different token is used
to access the **Jupyter Notebook** API.
## Enabling users to spawn multiple named-servers via the API
With JupyterHub version 0.8, support for multiple servers per user has landed.
Prior to that, each user could only launch a single default server via the API
like this:
```bash
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/server"
```
With the named-server functionality, it's now possible to launch more than one
specifically named server against a given user. This could be used, for instance,
to launch each server based on a different image.
First you must enable named-servers by including the following setting in the `jupyterhub_config.py` file.
`c.JupyterHub.allow_named_servers = True`
If using the [zero-to-jupyterhub-k8s](https://github.com/jupyterhub/zero-to-jupyterhub-k8s) set-up to run JupyterHub,
then instead of editing the `jupyterhub_config.py` file directly, you could pass
the following as part of the `config.yaml` file, as per the [tutorial](https://zero-to-jupyterhub.readthedocs.io/en/latest/):
```bash
hub:
extraConfig: |
c.JupyterHub.allow_named_servers = True
```
With that setting in place, a new named-server is activated like this:
```bash
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverA>"
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverB>"
```
The same servers can be stopped by substituting `DELETE` for `POST` above.
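For reference, the same start/stop calls can also be made from Python rather than curl. A minimal sketch using the `requests` library (the token, username, and server names are placeholders, just as in the curl examples above):
```python
import requests

api_url = 'http://127.0.0.1:8081/hub/api'
token = '<token>'  # an API token, as used in the curl examples
user = '<user>'    # the target username
headers = {'Authorization': 'token %s' % token}

# start two named servers for the user
for server_name in ('serverA', 'serverB'):
    r = requests.post('%s/users/%s/servers/%s' % (api_url, user, server_name),
                      headers=headers)
    r.raise_for_status()  # expect 201 (started) or 202 (start requested)

# stop one of them by issuing DELETE against the same URL
r = requests.delete('%s/users/%s/servers/serverA' % (api_url, user),
                    headers=headers)
r.raise_for_status()      # expect 204 (stopped) or 202 (stop requested)
```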
### Some caveats for using named-servers
The named-server capabilities are not fully implemented for JupyterHub as yet.
While it's possible to start/stop a server via the API, the UI on the
JupyterHub control-panel has not been implemented, and so it may not be obvious
to those viewing the panel that a named-server may be running for a given user.
For named-servers via the API to work, the spawner used to spawn these servers
will need to be able to handle the case of multiple servers per user and ensure
uniqueness of names, particularly if servers are spawned via docker containers
or kubernetes pods.
## Learn more about the API
You can see the full [JupyterHub REST API][] for details. This REST API Spec can


@@ -200,7 +200,9 @@ or via the `JUPYTERHUB_API_TOKEN` environment variable.
Most of the logic for authentication implementation is found in the
[`HubAuth.user_for_cookie`](services.auth.html#jupyterhub.services.auth.HubAuth.user_for_cookie)
method, which makes a request of the Hub, and returns:
and in the
[`HubAuth.user_for_token`](services.auth.html#jupyterhub.services.auth.HubAuth.user_for_token)
methods, which make a request of the Hub, and return:
- None, if no user could be identified, or
- a dict of the following form:
@@ -252,8 +254,11 @@ def authenticated(f):
@wraps(f)
def decorated(*args, **kwargs):
cookie = request.cookies.get(auth.cookie_name)
token = request.headers.get(auth.auth_header_name)
if cookie:
user = auth.user_for_cookie(cookie)
elif token:
user = auth.user_for_token(token)
else:
user = None
if user:


@@ -40,8 +40,11 @@ from tornado.options import define, options, parse_command_line
@coroutine
def cull_idle(url, api_token, timeout):
"""cull idle single-user servers"""
def cull_idle(url, api_token, timeout, cull_users=False):
"""Shutdown idle single-user servers
If cull_users, inactive *users* will be deleted as well.
"""
auth_header = {
'Authorization': 'token %s' % api_token
}
@@ -54,26 +57,50 @@ def cull_idle(url, api_token, timeout):
resp = yield client.fetch(req)
users = json.loads(resp.body.decode('utf8', 'replace'))
futures = []
for user in users:
last_activity = parse_date(user['last_activity'])
if user['server'] and last_activity < cull_limit:
app_log.info("Culling %s (inactive since %s)", user['name'], last_activity)
@coroutine
def cull_one(user, last_activity):
"""cull one user"""
# shutdown server first. Hub doesn't allow deleting users with running servers.
if user['server']:
app_log.info("Culling server for %s (inactive since %s)", user['name'], last_activity)
req = HTTPRequest(url=url + '/users/%s/server' % user['name'],
method='DELETE',
headers=auth_header,
)
futures.append((user['name'], client.fetch(req)))
elif user['server'] and last_activity > cull_limit:
yield client.fetch(req)
if cull_users:
app_log.info("Culling user %s (inactive since %s)", user['name'], last_activity)
req = HTTPRequest(url=url + '/users/%s' % user['name'],
method='DELETE',
headers=auth_header,
)
yield client.fetch(req)
for user in users:
if not user['server'] and not cull_users:
# server not running and not culling users, nothing to do
continue
last_activity = parse_date(user['last_activity'])
if last_activity < cull_limit:
futures.append((user['name'], cull_one(user, last_activity)))
else:
app_log.debug("Not culling %s (active since %s)", user['name'], last_activity)
for (name, f) in futures:
yield f
app_log.debug("Finished culling %s", name)
if __name__ == '__main__':
define('url', default=os.environ.get('JUPYTERHUB_API_URL'), help="The JupyterHub API URL")
define('timeout', default=600, help="The idle timeout (in seconds)")
define('cull_every', default=0, help="The interval (in seconds) for checking for idle servers to cull")
define('cull_users', default=False,
help="""Cull users in addition to servers.
This is for use in temporary-user cases such as tmpnb.""",
)
parse_command_line()
if not options.cull_every:
@@ -82,7 +109,7 @@ if __name__ == '__main__':
api_token = os.environ['JUPYTERHUB_API_TOKEN']
loop = IOLoop.current()
cull = lambda : cull_idle(options.url, api_token, options.timeout)
cull = lambda : cull_idle(options.url, api_token, options.timeout, options.cull_users)
# run once before scheduling periodic call
loop.run_sync(cull)
# schedule periodic cull


@@ -28,8 +28,11 @@ def authenticated(f):
@wraps(f)
def decorated(*args, **kwargs):
cookie = request.cookies.get(auth.cookie_name)
token = request.headers.get(auth.auth_header_name)
if cookie:
user = auth.user_for_cookie(cookie)
elif token:
user = auth.user_for_token(token)
else:
user = None
if user:


@@ -59,7 +59,7 @@ def oauth_callback():
# validate state field
arg_state = request.args.get('state', None)
cookie_state = request.cookies.get(auth.state_cookie_name)
if arg_state != cookie_state:
if arg_state is None or arg_state != cookie_state:
# state doesn't match
return 403


@@ -7,7 +7,6 @@ version_info = (
0,
8,
0,
'b3',
)
__version__ = '.'.join(map(str, version_info))
@@ -28,6 +27,7 @@ def _check_version(hub_version, singleuser_version, log):
from distutils.version import LooseVersion as V
hub_major_minor = V(hub_version).version[:2]
singleuser_major_minor = V(singleuser_version).version[:2]
extra = ""
if singleuser_major_minor == hub_major_minor:
# patch-level mismatch or lower, log difference at debug-level
# because this should be fine
@@ -35,8 +35,11 @@ def _check_version(hub_version, singleuser_version, log):
else:
# log warning-level for more significant mismatch, such as 0.8 vs 0.9, etc.
log_method = log.warning
log_method("jupyterhub version %s != jupyterhub-singleuser version %s",
hub_version, singleuser_version,
extra = " This could cause failure to authenticate and result in redirect loops!"
log_method(
"jupyterhub version %s != jupyterhub-singleuser version %s." + extra,
hub_version,
singleuser_version,
)
else:
log.debug("jupyterhub and jupyterhub-singleuser both on version %s" % hub_version)


@@ -12,9 +12,16 @@ config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
if 'jupyterhub' in sys.modules:
from traitlets.config import MultipleInstanceError
from jupyterhub.app import JupyterHub
app = None
if JupyterHub.initialized():
app = JupyterHub.instance()
try:
app = JupyterHub.instance()
except MultipleInstanceError:
# could have been another Application
pass
if app is not None:
alembic_logger = logging.getLogger('alembic')
alembic_logger.propagate = True
alembic_logger.parent = app.log


@@ -36,6 +36,10 @@ def upgrade():
# drop some columns no longer in use
try:
op.drop_column('users', 'auth_state')
# mysql cannot drop _server_id without also dropping
# implicitly created foreign key
if op.get_context().dialect.name == 'mysql':
op.drop_constraint('users_ibfk_1', 'users', type_='foreignkey')
op.drop_column('users', '_server_id')
except sa.exc.OperationalError:
# this won't be a problem moving forward, but downgrade will fail


@@ -41,15 +41,27 @@ class TokenAPIHandler(APIHandler):
# for authenticators where that's possible
data = self.get_json_body()
try:
authenticated = yield self.authenticate(self, data)
user = yield self.login_user(data)
except Exception as e:
self.log.error("Failure trying to authenticate with form data: %s" % e)
authenticated = None
if authenticated is None:
user = None
if user is None:
raise web.HTTPError(403)
user = self.find_user(authenticated['name'])
else:
data = self.get_json_body()
# admin users can request
if data and data.get('username') != user.name:
if user.admin:
user = self.find_user(data['username'])
if user is None:
raise web.HTTPError(400, "No such user '%s'" % data['username'])
else:
raise web.HTTPError(403, "Only admins can request tokens for other users.")
api_token = user.new_api_token()
self.write(json.dumps({'token': api_token}))
self.write(json.dumps({
'token': api_token,
'user': self.user_model(user),
}))
class CookieAPIHandler(APIHandler):


@@ -114,7 +114,7 @@ class APIHandler(BaseHandler):
if spawner.pending:
s['pending'] = spawner.pending
if spawner.server:
s['url'] = user.url + name + '/'
s['url'] = url_path_join(user.url, name, '/')
return model
def group_model(self, group):


@@ -185,8 +185,6 @@ class UserServerAPIHandler(APIHandler):
user = self.find_user(name)
if server_name and not self.allow_named_servers:
raise web.HTTPError(400, "Named servers are not enabled.")
if self.allow_named_servers and not server_name:
server_name = user.default_server_name()
spawner = user.spawners[server_name]
pending = spawner.pending
if pending == 'spawn':


@@ -23,6 +23,7 @@ if sys.version_info[:2] < (3, 3):
from jinja2 import Environment, FileSystemLoader
from sqlalchemy import create_engine
from sqlalchemy.exc import OperationalError
from tornado.httpclient import AsyncHTTPClient
@@ -189,6 +190,13 @@ class UpgradeDB(Application):
db_file = hub.db_url.split(':///', 1)[1]
self._backup_db_file(db_file)
self.log.info("Upgrading %s", hub.db_url)
# run check-db-revision first
engine = create_engine(hub.db_url)
try:
orm.check_db_revision(engine)
except orm.DatabaseSchemaMismatch:
# ignore mismatch error because that's what we are here for!
pass
dbutil.upgrade(hub.db_url)
@@ -801,12 +809,10 @@ class JupyterHub(Application):
self.handlers = self.add_url_prefix(self.hub_prefix, h)
# some extra handlers, outside hub_prefix
self.handlers.extend([
(r"%s" % self.hub_prefix.rstrip('/'), web.RedirectHandler,
{
"url": self.hub_prefix,
"permanent": False,
}
),
# add trailing / to `/hub`
(self.hub_prefix.rstrip('/'), handlers.AddSlashHandler),
# add trailing / to ``/user|services/:name`
(r"%s(user|services)/([^/]+)" % self.base_url, handlers.AddSlashHandler),
(r"(?!%s).*" % self.hub_prefix, handlers.PrefixRedirectHandler),
(r'(.*)', handlers.Template404),
])
@@ -1221,7 +1227,7 @@ class JupyterHub(Application):
status = yield spawner.poll()
except Exception:
self.log.exception("Failed to poll spawner for %s, assuming the spawner is not running.",
user.name if name else '%s|%s' % (user.name, name))
spawner._log_name)
status = -1
if status is None:
@@ -1232,11 +1238,13 @@ class JupyterHub(Application):
# user not running. This is expected if server is None,
# but indicates the user's server died while the Hub wasn't running
# if spawner.server is defined.
log = self.log.warning if spawner.server else self.log.debug
log("%s not running.", user.name)
# remove all server or servers entry from db related to the user
if spawner.server:
self.log.warning("%s appears to have stopped while the Hub was down", spawner._log_name)
# remove server entry from db
db.delete(spawner.orm_spawner.server)
spawner.server = None
else:
self.log.debug("%s not running", spawner._log_name)
db.commit()
user_summaries.append(_user_summary(user))


@@ -20,7 +20,7 @@ from .. import __version__
from .. import orm
from ..objects import Server
from ..spawner import LocalProcessSpawner
from ..utils import default_server_name, url_path_join, exponential_backoff
from ..utils import url_path_join
# pattern for the authentication token header
auth_header_pat = re.compile(r'^(?:token|bearer)\s+([^\s]+)$', flags=re.IGNORECASE)
@@ -347,7 +347,7 @@ class BaseHandler(RequestHandler):
else:
self.statsd.incr('login.failure')
self.statsd.timing('login.authenticate.failure', auth_timer.ms)
self.log.warning("Failed login for %s", data.get('username', 'unknown user'))
self.log.warning("Failed login for %s", (data or {}).get('username', 'unknown user'))
#---------------------------------------------------------------
@@ -376,9 +376,10 @@ class BaseHandler(RequestHandler):
@gen.coroutine
def spawn_single_user(self, user, server_name='', options=None):
# in case of error, include 'try again from /hub/home' message
self.extra_error_html = self.spawn_home_error
user_server_name = user.name
if self.allow_named_servers and not server_name:
server_name = default_server_name(user)
if server_name:
user_server_name = '%s:%s' % (user.name, server_name)
@@ -440,11 +441,7 @@ class BaseHandler(RequestHandler):
otherwise it is called immediately.
"""
# wait for spawn Future
try:
yield spawn_future
except Exception:
spawner._spawn_pending = False
raise
yield spawn_future
toc = IOLoop.current().time()
self.log.info("User %s took %.3f seconds to start", user_server_name, toc-tic)
self.statsd.timing('spawner.success', (toc - tic) * 1000)
@@ -459,10 +456,22 @@ class BaseHandler(RequestHandler):
spawner.add_poll_callback(self.user_stopped, user, server_name)
finally:
spawner._proxy_pending = False
spawner._spawn_pending = False
# hook up spawner._spawn_future so that other requests can await
# this result
finish_spawn_future = spawner._spawn_future = finish_user_spawn()
def _clear_spawn_future(f):
# clear spawner._spawn_future when it's done
# keep an exception around, though, to prevent repeated implicit spawns
# if spawn is failing
if f.exception() is None:
spawner._spawn_future = None
# Now we're all done. clear _spawn_pending flag
spawner._spawn_pending = False
finish_spawn_future.add_done_callback(_clear_spawn_future)
try:
yield gen.with_timeout(timedelta(seconds=self.slow_spawn_timeout), finish_user_spawn())
yield gen.with_timeout(timedelta(seconds=self.slow_spawn_timeout), finish_spawn_future)
except gen.TimeoutError:
# waiting_for_response indicates server process has started,
# but is yet to become responsive.
@@ -479,7 +488,8 @@ class BaseHandler(RequestHandler):
if status is not None:
toc = IOLoop.current().time()
self.statsd.timing('spawner.failure', (toc - tic) * 1000)
raise web.HTTPError(500, "Spawner failed to start [status=%s]" % status)
raise web.HTTPError(500, "Spawner failed to start [status=%s]. The logs for %s may contain details." % (
status, spawner._log_name))
if spawner._waiting_for_response:
# hit timeout waiting for response, but server's running.
@@ -535,6 +545,7 @@ class BaseHandler(RequestHandler):
spawner._stop_pending = False
toc = IOLoop.current().time()
self.log.info("User %s server took %.3f seconds to stop", user.name, toc - tic)
self.statsd.timing('spawner.stop', (toc - tic) * 1000)
try:
yield gen.with_timeout(timedelta(seconds=self.slow_stop_timeout), stop())
@@ -549,6 +560,19 @@ class BaseHandler(RequestHandler):
# template rendering
#---------------------------------------------------------------
@property
def spawn_home_error(self):
"""Extra message pointing users to try spawning again from /hub/home.
Should be added to `self.extra_error_html` for any handler
that could serve a failed spawn message.
"""
home = url_path_join(self.hub.base_url, 'home')
return (
"You can try restarting your server from the "
"<a href='{home}'>home page</a>.".format(home=home)
)
def get_template(self, name):
"""Return the jinja template object for a given name"""
return self.settings['jinja2_env'].get_template(name)
@@ -596,6 +620,7 @@ class BaseHandler(RequestHandler):
status_code=status_code,
status_message=status_message,
message=message,
extra_error_html=getattr(self, 'extra_error_html', ''),
exception=exception,
)
@@ -649,10 +674,13 @@ class UserSpawnHandler(BaseHandler):
current_user = self.get_current_user()
if current_user and current_user.name == name:
# if spawning fails for any reason, point users to /hub/home to retry
self.extra_error_html = self.spawn_home_error
# If people visit /user/:name directly on the Hub,
# the redirects will just loop, because the proxy is bypassed.
# Try to check for that and warn,
# though the user-facing behavior is unchainged
# though the user-facing behavior is unchanged
host_info = urlparse(self.request.full_url())
port = host_info.port
if not port:
@@ -664,8 +692,34 @@ class UserSpawnHandler(BaseHandler):
Make sure to connect to the proxied public URL %s
""", self.request.full_url(), self.proxy.public_url)
# logged in as correct user, spawn the server
# logged in as correct user, check for pending spawn
spawner = current_user.spawner
# First, check for previous failure.
if (
not spawner.active
and spawner._spawn_future
and spawner._spawn_future.done()
and spawner._spawn_future.exception()
):
# Condition: spawner not active and _spawn_future exists and contains an Exception
# Implicit spawn on /user/:name is not allowed if the user's last spawn failed.
# We should point the user to Home if the most recent spawn failed.
self.log.error("Preventing implicit spawn for %s because last spawn failed: %s",
spawner._log_name, spawner._spawn_future.exception())
raise spawner._spawn_future.exception()
# check for pending spawn
if spawner.pending and spawner._spawn_future:
# wait on the pending spawn
self.log.debug("Waiting for %s pending %s", spawner._log_name, spawner.pending)
try:
yield gen.with_timeout(timedelta(seconds=self.slow_spawn_timeout), spawner._spawn_future)
except gen.TimeoutError:
self.log.info("Pending spawn for %s didn't finish in %.1f seconds", spawner._log_name, self.slow_spawn_timeout)
pass
# we may have waited above, check pending again:
if spawner.pending:
self.log.info("%s is pending %s", spawner._log_name, spawner.pending)
# spawn has started, but not finished
@@ -675,7 +729,12 @@ class UserSpawnHandler(BaseHandler):
return
# spawn has supposedly finished, check on the status
status = yield spawner.poll()
if spawner.ready:
status = yield spawner.poll()
else:
status = 0
# server is not running, trigger spawn
if status is not None:
if spawner.options_form:
self.redirect(url_concat(url_path_join(self.hub.base_url, 'spawn'),
@@ -684,6 +743,15 @@ class UserSpawnHandler(BaseHandler):
else:
yield self.spawn_single_user(current_user)
# spawn didn't finish, show pending page
if spawner.pending:
self.log.info("%s is pending %s", spawner._log_name, spawner.pending)
# spawn has started, but not finished
self.statsd.incr('redirects.user_spawn_pending', 1)
html = self.render_template("spawn_pending.html", user=current_user)
self.finish(html)
return
# We do exponential backoff here - since otherwise we can get stuck in a redirect loop!
# This is important in many distributed proxy implementations - those are often eventually
# consistent and can take up to a couple of seconds to actually apply throughout the cluster.
@@ -693,9 +761,23 @@ class UserSpawnHandler(BaseHandler):
self.log.warning("Invalid redirects argument %r", self.get_argument('redirects'))
redirects = 0
if redirects >= self.settings.get('user_redirect_limit', 5):
# check redirect limit to prevent browser-enforced limits.
# In case of version mismatch, raise on only two redirects.
if redirects >= self.settings.get(
'user_redirect_limit', 4
) or (redirects >= 2 and spawner._jupyterhub_version != __version__):
# We stop if we've been redirected too many times.
raise web.HTTPError(500, "Redirect loop detected.")
msg = "Redirect loop detected."
if spawner._jupyterhub_version != __version__:
msg += (
" Notebook has jupyterhub version {singleuser}, but the Hub expects {hub}."
" Try installing jupyterhub=={hub} in the user environment"
" if you continue to have problems."
).format(
singleuser=spawner._jupyterhub_version or 'unknown (likely < 0.8)',
hub=__version__,
)
raise web.HTTPError(500, msg)
# set login cookie anew
self.set_login_cookie(current_user)
@@ -769,6 +851,13 @@ class CSPReportHandler(BaseHandler):
self.statsd.incr('csp_report')
class AddSlashHandler(BaseHandler):
"""Handler for adding trailing slash to URLs that need them"""
def get(self, *args):
src = urlparse(self.request.uri)
dest = src._replace(path=src.path + '/')
self.redirect(urlunparse(dest))
default_handlers = [
(r'/user/([^/]+)(/.*)?', UserSpawnHandler),
(r'/user-redirect/(.*)?', UserRedirectHandler),
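
The `_spawn_future` bookkeeping above is the heart of the new behaviour: the future of the last spawn is kept around when it ended in an exception, so a visit to `/user/:name` re-raises that error instead of silently retrying, and only an explicit request (see the `/hub/spawn` change further down) clears it. Below is a condensed, self-contained sketch of the pattern with tornado coroutines; `FakeSpawner`, `implicit_spawn` and `explicit_spawn` are illustrative names, not JupyterHub's API.

from tornado import gen
from tornado.ioloop import IOLoop

class FakeSpawner:
    """Stand-in for a Spawner whose start() always fails."""
    _spawn_future = None

    @gen.coroutine
    def start(self):
        raise RuntimeError("spawn failed")

@gen.coroutine
def implicit_spawn(spawner):
    prior = spawner._spawn_future
    if prior and prior.done() and prior.exception():
        # the last spawn failed: refuse to spawn implicitly, surface the old error
        raise prior.exception()
    future = spawner._spawn_future = spawner.start()

    def _clear(f):
        # forget the future only on success; failures stay cached
        if f.exception() is None:
            spawner._spawn_future = None
    future.add_done_callback(_clear)
    yield future

@gen.coroutine
def explicit_spawn(spawner):
    # an explicit request (e.g. from /hub/home) clears a finished future first
    if spawner._spawn_future and spawner._spawn_future.done():
        spawner._spawn_future = None
    yield implicit_spawn(spawner)

# IOLoop.current().run_sync(lambda: implicit_spawn(FakeSpawner()))  # raises RuntimeError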

View File

@@ -20,7 +20,7 @@ class LogoutHandler(BaseHandler):
self.clear_login_cookie()
self.statsd.incr('logout')
if self.authenticator.auto_login:
self.render('logout.html')
self.render_template('logout.html')
else:
self.redirect(self.settings['login_url'], permanent=False)

View File

@@ -67,9 +67,13 @@ class HomeHandler(BaseHandler):
if user.running:
# trigger poll_and_notify event in case of a server that died
yield user.spawner.poll_and_notify()
# send the user to /spawn if they aren't running,
# to establish that this is an explicit spawn request rather
# than an implicit one, which can be caused by any link to `/user/:name`
url = user.url if user.running else url_path_join(self.hub.base_url, 'spawn')
html = self.render_template('home.html',
user=user,
url=user.url,
url=url,
)
self.finish(html)
@@ -92,7 +96,10 @@ class SpawnHandler(BaseHandler):
@web.authenticated
def get(self):
"""GET renders form for spawning with user-specified options"""
"""GET renders form for spawning with user-specified options
or triggers spawn via redirect if there is no form.
"""
user = self.get_current_user()
if not self.allow_named_servers and user.running:
url = user.url
@@ -102,7 +109,12 @@ class SpawnHandler(BaseHandler):
if user.spawner.options_form:
self.finish(self._render_form())
else:
# not running, no form. Trigger spawn.
# Explicit spawn request: clear _spawn_future
# which may have been saved to prevent implicit spawns
# after a failure.
if user.spawner._spawn_future and user.spawner._spawn_future.done():
user.spawner._spawn_future = None
# not running, no form. Trigger spawn by redirecting to /user/:name
self.redirect(user.url)
@web.authenticated

View File

@@ -177,7 +177,7 @@ class Spawner(Base):
id = Column(Integer, primary_key=True, autoincrement=True)
user_id = Column(Integer, ForeignKey('users.id', ondelete='CASCADE'))
server_id = Column(Integer, ForeignKey('servers.id'))
server_id = Column(Integer, ForeignKey('servers.id', ondelete='SET NULL'))
server = relationship(Server)
state = Column(JSONDict)
@@ -213,7 +213,7 @@ class Service(Base):
api_tokens = relationship("APIToken", backref="service")
# service-specific interface
_server_id = Column(Integer, ForeignKey('servers.id'))
_server_id = Column(Integer, ForeignKey('servers.id', ondelete='SET NULL'))
server = relationship(Server, primaryjoin=_server_id == Server.id)
pid = Column(Integer)
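
The two `ondelete='SET NULL'` changes above alter the emitted DDL: on backends that enforce foreign keys (PostgreSQL, MySQL/InnoDB, or SQLite with `PRAGMA foreign_keys=ON`), deleting a `servers` row nulls the referencing column instead of failing or leaving a dangling id. A quick standalone way to inspect the generated constraint — the table layout mirrors the ORM, the rest is illustrative:

from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table
from sqlalchemy.schema import CreateTable

metadata = MetaData()
servers = Table('servers', metadata, Column('id', Integer, primary_key=True))
spawners = Table(
    'spawners', metadata,
    Column('id', Integer, primary_key=True),
    # deleting the referenced server nulls this column instead of leaving
    # a dangling id (on backends that enforce foreign keys)
    Column('server_id', Integer, ForeignKey('servers.id', ondelete='SET NULL')),
)
print(CreateTable(spawners))
# roughly:
#   CREATE TABLE spawners (
#       id INTEGER NOT NULL,
#       server_id INTEGER,
#       PRIMARY KEY (id),
#       FOREIGN KEY(server_id) REFERENCES servers (id) ON DELETE SET NULL
#   )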

View File

@@ -375,7 +375,7 @@ class Proxy(LoggingConfigurable):
self.log.info("Setting up routes on new proxy")
yield self.add_hub_route(self.app.hub)
yield self.add_all_users(self.app.users)
yield self.add_all_services(self.app.services)
yield self.add_all_services(self.app._service_map)
self.log.info("New proxy back up and good to go")

View File

@@ -13,8 +13,10 @@ authenticate with the Hub.
import base64
import json
import os
import random
import re
import socket
import string
import time
from urllib.parse import quote, urlencode
import uuid
@@ -531,22 +533,37 @@ class HubOAuth(HubAuth):
-------
state (str): The OAuth state that has been stored in the cookie (url safe, base64-encoded)
"""
b64_state = self.generate_state(next_url)
extra_state = {}
if handler.get_cookie(self.state_cookie_name):
# oauth state cookie is already set
# use a randomized cookie suffix to avoid collisions
# in case of concurrent logins
app_log.warning("Detected unused OAuth state cookies")
cookie_suffix = ''.join(random.choice(string.ascii_letters) for i in range(8))
cookie_name = '{}-{}'.format(self.state_cookie_name, cookie_suffix)
extra_state['cookie_name'] = cookie_name
else:
cookie_name = self.state_cookie_name
b64_state = self.generate_state(next_url, **extra_state)
kwargs = {
'path': self.base_url,
'httponly': True,
'expires_days': 1,
# Expire oauth state cookie in ten minutes.
# Usually this will be cleared by completed login
# in less than a few seconds.
# OAuth that doesn't complete shouldn't linger too long.
'max_age': 600,
}
if handler.request.protocol == 'https':
kwargs['secure'] = True
handler.set_secure_cookie(
self.state_cookie_name,
cookie_name,
b64_state,
**kwargs
)
return b64_state
def generate_state(self, next_url=None):
def generate_state(self, next_url=None, **extra_state):
"""Generate a state string, given a next_url redirect target
Parameters
@@ -557,16 +574,27 @@ class HubOAuth(HubAuth):
-------
state (str): The base64-encoded state string.
"""
return self._encode_state({
state = {
'uuid': uuid.uuid4().hex,
'next_url': next_url
})
'next_url': next_url,
}
state.update(extra_state)
return self._encode_state(state)
def get_next_url(self, b64_state=''):
"""Get the next_url for redirection, given an encoded OAuth state"""
state = self._decode_state(b64_state)
return state.get('next_url') or self.base_url
def get_state_cookie_name(self, b64_state=''):
"""Get the cookie name for oauth state, given an encoded OAuth state
Cookie name is stored in the state itself because the cookie name
is randomized to deal with races between concurrent oauth sequences.
"""
state = self._decode_state(b64_state)
return state.get('cookie_name') or self.state_cookie_name
def set_cookie(self, handler, access_token):
"""Set a cookie recording OAuth result"""
kwargs = {
@@ -657,13 +685,12 @@ class HubAuthenticated(object):
def get_login_url(self):
"""Return the Hub's login URL"""
login_url = self.hub_auth.login_url
app_log.debug("Redirecting to login url: %s", login_url)
if isinstance(self.hub_auth, HubOAuthenticated):
if isinstance(self.hub_auth, HubOAuth):
# add state argument to OAuth url
state = self.hub_auth.set_state_cookie(self, next_url=self.request.uri)
return url_concat(login_url, {'state': state})
else:
return login_url
login_url = url_concat(login_url, {'state': state})
app_log.debug("Redirecting to login url: %s", login_url)
return login_url
def check_hub_user(self, model):
"""Check whether Hub-authenticated user or service should be allowed.
@@ -731,6 +758,19 @@ class HubAuthenticated(object):
except Exception:
self._hub_auth_user_cache = None
raise
# store ?token=... tokens passed via url in a cookie for future requests
url_token = self.get_argument('token', '')
if (
user_model
and url_token
and getattr(self, '_token_authenticated', False)
and hasattr(self.hub_auth, 'set_cookie')
):
# authenticated via `?token=`
# set a cookie for future requests
# hub_auth.set_cookie is only available on HubOAuth
self.hub_auth.set_cookie(self, url_token)
return self._hub_auth_user_cache
@@ -757,18 +797,19 @@ class HubOAuthCallbackHandler(HubOAuthenticated, RequestHandler):
# validate OAuth state
arg_state = self.get_argument("state", None)
cookie_state = self.get_secure_cookie(self.hub_auth.state_cookie_name)
next_url = None
if arg_state or cookie_state:
# clear cookie state now that we've consumed it
self.clear_cookie(self.hub_auth.state_cookie_name)
if isinstance(cookie_state, bytes):
cookie_state = cookie_state.decode('ascii', 'replace')
# check that state matches
if arg_state != cookie_state:
app_log.debug("oauth state %r != %r", arg_state, cookie_state)
raise HTTPError(403, "oauth state does not match")
next_url = self.hub_auth.get_next_url(cookie_state)
if arg_state is None:
raise HTTPError("oauth state is missing. Try logging in again.")
cookie_name = self.hub_auth.get_state_cookie_name(arg_state)
cookie_state = self.get_secure_cookie(cookie_name)
# clear cookie state now that we've consumed it
self.clear_cookie(cookie_name, path=self.hub_auth.base_url)
if isinstance(cookie_state, bytes):
cookie_state = cookie_state.decode('ascii', 'replace')
# check that state matches
if arg_state != cookie_state:
app_log.warning("oauth state %r != %r", arg_state, cookie_state)
raise HTTPError(403, "oauth state does not match. Try logging in again.")
next_url = self.hub_auth.get_next_url(cookie_state)
# TODO: make async (in a Thread?)
token = self.hub_auth.token_for_code(code)
user_model = self.hub_auth.user_for_token(token)
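
The HubOAuth changes above address a concrete race: two concurrent OAuth logins in the same browser used to overwrite each other's state cookie. The cookie name is now randomized when a state cookie already exists and is stored inside the base64-encoded state, so the callback handler can recover the right cookie from the `state` the provider echoes back. A self-contained sketch of that round trip (helper names are illustrative, not the HubOAuth API):

import base64
import json
import random
import string
import uuid

STATE_COOKIE_NAME = 'service-oauth-state'

def _encode_state(state):
    return base64.urlsafe_b64encode(json.dumps(state).encode('utf8')).decode('ascii')

def _decode_state(b64_state):
    return json.loads(base64.urlsafe_b64decode(b64_state).decode('utf8'))

def new_state(next_url, cookie_already_set=False):
    """Return (cookie_name, b64_state) for a new OAuth login attempt."""
    state = {'uuid': uuid.uuid4().hex, 'next_url': next_url}
    cookie_name = STATE_COOKIE_NAME
    if cookie_already_set:
        # another login is already in flight: randomize the cookie name and
        # record it in the state so the callback can find the right cookie
        suffix = ''.join(random.choice(string.ascii_letters) for i in range(8))
        cookie_name = '%s-%s' % (STATE_COOKIE_NAME, suffix)
        state['cookie_name'] = cookie_name
    return cookie_name, _encode_state(state)

def cookie_name_for(b64_state):
    """Recover the state cookie name from the state echoed back by the provider."""
    return _decode_state(b64_state).get('cookie_name', STATE_COOKIE_NAME)

# two concurrent logins get distinct cookies, and each state round-trips
name_1, state_1 = new_state('/a')
name_2, state_2 = new_state('/b', cookie_already_set=True)
assert name_1 != name_2
assert cookie_name_for(state_1) == name_1
assert cookie_name_for(state_2) == name_2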

View File

@@ -301,5 +301,8 @@ class Service(LoggingConfigurable):
if not self.managed:
raise RuntimeError("Cannot stop unmanaged service %s" % self)
if self.spawner:
if self.orm.server:
self.db.delete(self.orm.server)
self.db.commit()
self.spawner.stop_polling()
return self.spawner.stop()

View File

@@ -144,11 +144,13 @@ page_template = """
{% block header_buttons %}
{{super()}}
<a href='{{hub_control_panel_url}}'
class='btn btn-default btn-sm navbar-btn pull-right'
style='margin-right: 4px; margin-left: 2px;'
>
Control Panel</a>
<span>
<a href='{{hub_control_panel_url}}'
class='btn btn-default btn-sm navbar-btn pull-right'
style='margin-right: 4px; margin-left: 2px;'>
Control Panel
</a>
</span>
{% endblock %}
{% block logo %}
<img src='{{logo_url}}' alt='Jupyter Notebook'/>

View File

@@ -18,7 +18,7 @@ from tempfile import mkdtemp
from sqlalchemy import inspect
from tornado import gen
from tornado.ioloop import PeriodicCallback, IOLoop
from tornado.ioloop import PeriodicCallback
from traitlets.config import LoggingConfigurable
from traitlets import (
@@ -53,6 +53,8 @@ class Spawner(LoggingConfigurable):
_stop_pending = False
_proxy_pending = False
_waiting_for_response = False
_jupyterhub_version = None
_spawn_future = None
@property
def _log_name(self):
@@ -101,6 +103,7 @@ class Spawner(LoggingConfigurable):
authenticator = Any()
hub = Any()
orm_spawner = Any()
db = Any()
@observe('orm_spawner')
def _orm_spawner_changed(self, change):
@@ -836,7 +839,7 @@ class LocalProcessSpawner(Spawner):
This is the default spawner for JupyterHub.
"""
INTERRUPT_TIMEOUT = Integer(10,
interrupt_timeout = Integer(10,
help="""
Seconds to wait for single-user server process to halt after SIGINT.
@@ -844,7 +847,7 @@ class LocalProcessSpawner(Spawner):
"""
).tag(config=True)
TERM_TIMEOUT = Integer(5,
term_timeout = Integer(5,
help="""
Seconds to wait for single-user server process to halt after SIGTERM.
@@ -852,7 +855,7 @@ class LocalProcessSpawner(Spawner):
"""
).tag(config=True)
KILL_TIMEOUT = Integer(5,
kill_timeout = Integer(5,
help="""
Seconds to wait for process to halt after SIGKILL before giving up.
@@ -1068,7 +1071,7 @@ class LocalProcessSpawner(Spawner):
return
self.log.debug("Interrupting %i", self.pid)
yield self._signal(signal.SIGINT)
yield self.wait_for_death(self.INTERRUPT_TIMEOUT)
yield self.wait_for_death(self.interrupt_timeout)
# clean shutdown failed, use TERM
status = yield self.poll()
@@ -1076,7 +1079,7 @@ class LocalProcessSpawner(Spawner):
return
self.log.debug("Terminating %i", self.pid)
yield self._signal(signal.SIGTERM)
yield self.wait_for_death(self.TERM_TIMEOUT)
yield self.wait_for_death(self.term_timeout)
# TERM failed, use KILL
status = yield self.poll()
@@ -1084,7 +1087,7 @@ class LocalProcessSpawner(Spawner):
return
self.log.debug("Killing %i", self.pid)
yield self._signal(signal.SIGKILL)
yield self.wait_for_death(self.KILL_TIMEOUT)
yield self.wait_for_death(self.kill_timeout)
status = yield self.poll()
if status is None:
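
Beyond renaming the all-caps timeouts to configurable lowercase traits (`interrupt_timeout`, `term_timeout`, `kill_timeout`), the hunks above show the escalation LocalProcessSpawner performs on stop. A rough stdlib-only sketch of the same SIGINT → SIGTERM → SIGKILL sequence, for illustration (POSIX only, not JupyterHub's actual implementation):

import signal
import subprocess

def stop_process(proc, interrupt_timeout=10, term_timeout=5, kill_timeout=5):
    """Stop a subprocess.Popen with escalating signals, returning its exit code."""
    for sig, timeout in [
        (signal.SIGINT, interrupt_timeout),   # ask for a clean shutdown
        (signal.SIGTERM, term_timeout),       # then terminate
        (signal.SIGKILL, kill_timeout),       # finally kill
    ]:
        if proc.poll() is not None:
            return proc.returncode
        proc.send_signal(sig)
        try:
            return proc.wait(timeout=timeout)
        except subprocess.TimeoutExpired:
            continue
    return proc.poll()

# proc = subprocess.Popen(['sleep', '60'])
# stop_process(proc, interrupt_timeout=1, term_timeout=1, kill_timeout=1)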

View File

@@ -89,7 +89,7 @@ def api_request(app, *api_path, **kwargs):
base_url = app.hub.url
headers = kwargs.setdefault('headers', {})
if 'Authorization' not in headers:
if 'Authorization' not in headers and not kwargs.pop('noauth', False):
headers.update(auth_header(app.db, 'admin'))
url = ujoin(base_url, 'api', *api_path)
@@ -755,16 +755,16 @@ def test_token(app):
@mark.gen_test
@mark.parametrize("headers, data, status", [
({}, None, 200),
({'Authorization': ''}, None, 403),
({}, {'username': 'fake', 'password': 'fake'}, 200),
@mark.parametrize("headers, status", [
({}, 200),
({'Authorization': 'token bad'}, 403),
])
def test_get_new_token(app, headers, data, status):
if data:
data = json.dumps(data)
def test_get_new_token(app, headers, status):
# request a new token
r = yield api_request(app, 'authorizations', 'token', method='post', data=data, headers=headers)
r = yield api_request(app, 'authorizations', 'token',
method='post',
headers=headers,
)
assert r.status_code == status
if status != 200:
return
@@ -772,7 +772,61 @@ def test_get_new_token(app, headers, data, status):
assert 'token' in reply
r = yield api_request(app, 'authorizations', 'token', reply['token'])
r.raise_for_status()
assert 'name' in r.json()
reply = r.json()
assert reply['name'] == 'admin'
@mark.gen_test
def test_token_formdata(app):
"""Create a token for a user with formdata and no auth header"""
data = {
'username': 'fake',
'password': 'fake',
}
r = yield api_request(app, 'authorizations', 'token',
method='post',
data=json.dumps(data) if data else None,
noauth=True,
)
assert r.status_code == 200
reply = r.json()
assert 'token' in reply
r = yield api_request(app, 'authorizations', 'token', reply['token'])
r.raise_for_status()
reply = r.json()
assert reply['name'] == data['username']
@mark.gen_test
@mark.parametrize("as_user, for_user, status", [
('admin', 'other', 200),
('admin', 'missing', 400),
('user', 'other', 403),
('user', 'user', 200),
])
def test_token_as_user(app, as_user, for_user, status):
# ensure both users exist
u = add_user(app.db, app, name=as_user)
if for_user != 'missing':
add_user(app.db, app, name=for_user)
data = {'username': for_user}
headers = {
'Authorization': 'token %s' % u.new_api_token(),
}
r = yield api_request(app, 'authorizations', 'token',
method='post',
data=json.dumps(data),
headers=headers,
)
assert r.status_code == status
reply = r.json()
if status != 200:
return
assert 'token' in reply
r = yield api_request(app, 'authorizations', 'token', reply['token'])
r.raise_for_status()
reply = r.json()
assert reply['name'] == data['username']
# ---------------
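
The token tests above exercise three flows of the `/authorizations/token` endpoint: requesting a token with form credentials and no auth header, an admin requesting a token on behalf of another user, and resolving a token back to its owner. A hedged sketch of the same calls with `requests`, assuming a running Hub at 127.0.0.1:8081 and placeholder credentials/tokens:

import json

import requests

hub_api = 'http://127.0.0.1:8081/hub/api'

# 1. request a token with form credentials and no Authorization header
r = requests.post(hub_api + '/authorizations/token',
                  data=json.dumps({'username': 'fake', 'password': 'fake'}))
r.raise_for_status()
token = r.json()['token']

# 2. an admin token may request a token on behalf of another user
admin_headers = {'Authorization': 'token %s' % 'ADMIN_API_TOKEN'}
r = requests.post(hub_api + '/authorizations/token',
                  data=json.dumps({'username': 'other'}),
                  headers=admin_headers)

# 3. resolve a token back to the user it belongs to
r = requests.get(hub_api + '/authorizations/token/%s' % token, headers=admin_headers)
print(r.json().get('name'))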

View File

@@ -8,9 +8,11 @@ from subprocess import check_output, Popen, PIPE
from tempfile import NamedTemporaryFile, TemporaryDirectory
from unittest.mock import patch
from tornado import gen
import pytest
from .mocking import MockHub
from .test_api import add_user
from .. import orm
from ..app import COOKIE_SECRET_BYTES
@@ -161,3 +163,57 @@ def test_load_groups():
assert gold is not None
assert sorted([ u.name for u in gold.users ]) == sorted(to_load['gold'])
@pytest.mark.gen_test
def test_resume_spawners(tmpdir, request):
if not os.getenv('JUPYTERHUB_TEST_DB_URL'):
p = patch.dict(os.environ, {
'JUPYTERHUB_TEST_DB_URL': 'sqlite:///%s' % tmpdir.join('jupyterhub.sqlite'),
})
p.start()
request.addfinalizer(p.stop)
@gen.coroutine
def new_hub():
app = MockHub()
app.config.ConfigurableHTTPProxy.should_start = False
yield app.initialize([])
return app
app = yield new_hub()
db = app.db
# spawn a user's server
name = 'kurt'
user = add_user(db, app, name=name)
yield user.spawn()
proc = user.spawner.proc
assert proc is not None
# stop the Hub without cleaning up servers
app.cleanup_servers = False
yield app.stop()
# proc is still running
assert proc.poll() is None
# resume Hub, should still be running
app = yield new_hub()
db = app.db
user = app.users[name]
assert user.running
assert user.spawner.server is not None
# stop the Hub without cleaning up servers
app.cleanup_servers = False
yield app.stop()
# stop the server while the Hub is down. BAMF!
proc.terminate()
proc.wait(timeout=10)
assert proc.poll() is not None
# resume Hub, should be stopped
app = yield new_hub()
db = app.db
user = app.users[name]
assert not user.running
assert user.spawner.server is None
assert list(db.query(orm.Server)) == []
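
test_resume_spawners relies on the Hub being able to leave single-user servers running across a restart and reattach to them from the database. Outside the test suite the same behaviour is driven by configuration; a minimal jupyterhub_config.py sketch (the db URL is only an example):

# jupyterhub_config.py
c = get_config()  # noqa

# leave single-user servers running when the Hub shuts down,
# so a restarted Hub can reattach to them (what the test exercises)
c.JupyterHub.cleanup_servers = False

# a persistent database is needed to find the servers again; example URL
c.JupyterHub.db_url = 'sqlite:///jupyterhub.sqlite'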

View File

@@ -4,6 +4,7 @@ import shutil
import pytest
from pytest import raises
from traitlets.config import Config
from ..dbutil import upgrade
from ..app import NewToken, UpgradeDB, JupyterHub
@@ -21,29 +22,35 @@ def generate_old_db(path):
def test_upgrade(tmpdir):
print(tmpdir)
db_url = generate_old_db(str(tmpdir))
print(db_url)
upgrade(db_url)
@pytest.mark.gen_test
def test_upgrade_entrypoint(tmpdir):
generate_old_db(str(tmpdir))
db_url = os.getenv('JUPYTERHUB_TEST_UPGRADE_DB_URL')
if not db_url:
# default: sqlite
db_url = generate_old_db(str(tmpdir))
cfg = Config()
cfg.JupyterHub.db_url = db_url
tmpdir.chdir()
tokenapp = NewToken()
tokenapp = NewToken(config=cfg)
tokenapp.initialize(['kaylee'])
with raises(SystemExit):
tokenapp.start()
sqlite_files = glob(os.path.join(str(tmpdir), 'jupyterhub.sqlite*'))
assert len(sqlite_files) == 1
if 'sqlite' in db_url:
sqlite_files = glob(os.path.join(str(tmpdir), 'jupyterhub.sqlite*'))
assert len(sqlite_files) == 1
upgradeapp = UpgradeDB()
upgradeapp = UpgradeDB(config=cfg)
yield upgradeapp.initialize([])
upgradeapp.start()
# check that backup was created:
sqlite_files = glob(os.path.join(str(tmpdir), 'jupyterhub.sqlite*'))
assert len(sqlite_files) == 2
if 'sqlite' in db_url:
sqlite_files = glob(os.path.join(str(tmpdir), 'jupyterhub.sqlite*'))
assert len(sqlite_files) == 2
# run tokenapp again, it should work
tokenapp.start()
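
The upgrade test above drives the same code path as the `jupyterhub upgrade-db` entrypoint; the programmatic equivalent is the `dbutil.upgrade` helper it imports. A minimal sketch (the database URL is an example):

from jupyterhub.dbutil import upgrade

# run the alembic migrations against the given database URL
# (the `jupyterhub upgrade-db` entrypoint does this after backing up sqlite files,
#  which is what the test above checks)
upgrade('sqlite:///jupyterhub.sqlite')
# postgres/mysql URLs work as well, e.g. 'postgresql://user@host/jupyterhub'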

View File

@@ -17,6 +17,57 @@ def named_servers(app):
app.tornado_application.settings[key] = app.tornado_settings[key] = False
@pytest.mark.gen_test
def test_default_server(app, named_servers):
"""Test the default /users/:user/server handler when named servers are enabled"""
username = 'rosie'
user = add_user(app.db, app, name=username)
r = yield api_request(app, 'users', username, 'server', method='post')
assert r.status_code == 201
assert r.text == ''
r = yield api_request(app, 'users', username)
r.raise_for_status()
user_model = r.json()
user_model.pop('last_activity')
assert user_model == {
'name': username,
'groups': [],
'kind': 'user',
'admin': False,
'pending': None,
'server': user.url,
'servers': {
'': {
'name': '',
'url': user.url,
},
},
}
# now stop the server
r = yield api_request(app, 'users', username, 'server', method='delete')
assert r.status_code == 204
assert r.text == ''
r = yield api_request(app, 'users', username)
r.raise_for_status()
user_model = r.json()
user_model.pop('last_activity')
assert user_model == {
'name': username,
'groups': [],
'kind': 'user',
'admin': False,
'pending': None,
'server': None,
'servers': {},
}
@pytest.mark.gen_test
def test_create_named_server(app, named_servers):
username = 'walnut'
@@ -49,13 +100,13 @@ def test_create_named_server(app, named_servers):
'kind': 'user',
'admin': False,
'pending': None,
'server': None,
'server': user.url,
'servers': {
name: {
'name': name,
'url': url_path_join(user.url, name, '/'),
}
for name in ['1', servername]
for name in ['', servername]
},
}
@@ -86,13 +137,13 @@ def test_delete_named_server(app, named_servers):
'kind': 'user',
'admin': False,
'pending': None,
'server': None,
'server': user.url,
'servers': {
name: {
'name': name,
'url': url_path_join(user.url, name, '/'),
}
for name in ['1']
for name in ['']
},
}
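
The tests above cover the REST surface for default and named servers. A hedged sketch of the same calls with `requests`, assuming a running Hub with `c.JupyterHub.allow_named_servers = True`, an admin API token, and placeholder user/server names:

import requests

hub_api = 'http://127.0.0.1:8081/hub/api'
headers = {'Authorization': 'token %s' % 'ADMIN_API_TOKEN'}

# start the default ('') server for a user
requests.post(hub_api + '/users/rosie/server', headers=headers)

# start, then stop, a named server
requests.post(hub_api + '/users/walnut/servers/trevor', headers=headers)
requests.delete(hub_api + '/users/walnut/servers/trevor', headers=headers)

# the user model lists every running server under 'servers'
print(requests.get(hub_api + '/users/walnut', headers=headers).json().get('servers'))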

View File

@@ -14,6 +14,7 @@ from .. import objects
from .. import crypto
from ..user import User
from .mocking import MockSpawner
from ..emptyclass import EmptyClass
def test_server(db):
@@ -167,6 +168,7 @@ def test_spawn_fails(db):
user = User(orm_user, {
'spawner_class': BadSpawner,
'config': None,
'statsd': EmptyClass(),
})
with pytest.raises(RuntimeError) as exc:

View File

@@ -134,6 +134,16 @@ def test_spawn_redirect(app):
path = urlparse(r.url).path
assert path == ujoin(app.base_url, '/user/%s/' % name)
# stop server to ensure /user/name is handled by the Hub
r = yield api_request(app, 'users', name, 'server', method='delete', cookies=cookies)
r.raise_for_status()
# test handling of trailing slash on `/user/name`
r = yield get_page('user/' + name, app, hub=False, cookies=cookies)
r.raise_for_status()
path = urlparse(r.url).path
assert path == ujoin(app.base_url, '/user/%s/' % name)
@pytest.mark.gen_test
def test_spawn_page(app):
@@ -334,6 +344,19 @@ def test_auto_login(app, request):
r = yield async_requests.get(base_url)
assert r.url == public_url(app, path='hub/dummy')
@pytest.mark.gen_test
def test_auto_login_logout(app):
name = 'burnham'
cookies = yield app.login_user(name)
with mock.patch.dict(app.tornado_application.settings, {
'authenticator': Authenticator(auto_login=True),
}):
r = yield async_requests.get(public_host(app) + app.tornado_settings['logout_url'], cookies=cookies)
r.raise_for_status()
logout_url = public_host(app) + app.tornado_settings['logout_url']
assert r.url == logout_url
assert r.cookies == {}
@pytest.mark.gen_test
def test_logout(app):

View File

@@ -16,6 +16,7 @@ import requests_mock
from tornado.ioloop import IOLoop
from tornado.httpserver import HTTPServer
from tornado.web import RequestHandler, Application, authenticated, HTTPError
from tornado.httputil import url_concat
from ..services.auth import _ExpiringDict, HubAuth, HubAuthenticated
from ..utils import url_path_join
@@ -292,6 +293,7 @@ def test_hubauth_service_token(app, mockservice_url):
'name': name,
'admin': False,
}
assert not r.cookies
# token in ?token parameter
r = yield async_requests.get(public_url(app, mockservice_url) + '/whoami/?token=%s' % token)
@@ -315,15 +317,25 @@ def test_hubauth_service_token(app, mockservice_url):
@pytest.mark.gen_test
def test_oauth_service(app, mockservice_url):
url = url_path_join(public_url(app, mockservice_url) + 'owhoami/')
service = mockservice_url
url = url_path_join(public_url(app, mockservice_url) + 'owhoami/?arg=x')
# first request is only going to set login cookie
# FIXME: redirect to originating URL (OAuth loses this info)
s = requests.Session()
s.cookies = yield app.login_user('link')
name = 'link'
s.cookies = yield app.login_user(name)
# run session.get in async_requests thread
s_get = lambda *args, **kwargs: async_requests.executor.submit(s.get, *args, **kwargs)
r = yield s_get(url)
r.raise_for_status()
assert r.url == url
# verify oauth cookie is set
assert 'service-%s' % service.name in set(s.cookies.keys())
# verify oauth state cookie has been consumed
assert 'service-%s-oauth-state' % service.name not in set(s.cookies.keys())
# verify oauth state cookie was set at some point
assert set(r.history[0].cookies.keys()) == {'service-%s-oauth-state' % service.name}
# second request should be authenticated
r = yield s_get(url, allow_redirects=False)
r.raise_for_status()
@@ -335,3 +347,82 @@ def test_oauth_service(app, mockservice_url):
'kind': 'user',
}
# token-authenticated request to HubOAuth
token = app.users[name].new_api_token()
# token in ?token parameter
r = yield async_requests.get(url_concat(url, {'token': token}))
r.raise_for_status()
reply = r.json()
assert reply['name'] == name
# verify that ?token= requests set a cookie
assert len(r.cookies) != 0
# ensure cookie works in future requests
r = yield async_requests.get(
url,
cookies=r.cookies,
allow_redirects=False,
)
r.raise_for_status()
assert r.url == url
reply = r.json()
assert reply['name'] == name
@pytest.mark.gen_test
def test_oauth_cookie_collision(app, mockservice_url):
service = mockservice_url
url = url_path_join(public_url(app, mockservice_url) + 'owhoami/')
print(url)
s = requests.Session()
name = 'mypha'
s.cookies = yield app.login_user(name)
# run session.get in async_requests thread
s_get = lambda *args, **kwargs: async_requests.executor.submit(s.get, *args, **kwargs)
state_cookie_name = 'service-%s-oauth-state' % service.name
service_cookie_name = 'service-%s' % service.name
oauth_1 = yield s_get(url, allow_redirects=False)
print(oauth_1.headers)
print(oauth_1.cookies, oauth_1.url, url)
assert state_cookie_name in s.cookies
state_cookies = [ s for s in s.cookies.keys() if s.startswith(state_cookie_name) ]
# only one state cookie
assert state_cookies == [state_cookie_name]
state_1 = s.cookies[state_cookie_name]
# start second oauth login before finishing the first
oauth_2 = yield s_get(url, allow_redirects=False)
state_cookies = [ s for s in s.cookies.keys() if s.startswith(state_cookie_name) ]
assert len(state_cookies) == 2
# get the random-suffix cookie name
state_cookie_2 = sorted(state_cookies)[-1]
# we didn't clobber the default cookie
assert s.cookies[state_cookie_name] == state_1
# finish oauth 2
url = oauth_2.headers['Location']
if not urlparse(url).netloc:
url = public_host(app) + url
r = yield s_get(url)
r.raise_for_status()
# after finishing, state cookie is cleared
assert state_cookie_2 not in s.cookies
# service login cookie is set
assert service_cookie_name in s.cookies
service_cookie_2 = s.cookies[service_cookie_name]
# finish oauth 1
url = oauth_1.headers['Location']
if not urlparse(url).netloc:
url = public_host(app) + url
r = yield s_get(url)
r.raise_for_status()
# after finishing, state cookie is cleared (again)
assert state_cookie_name not in s.cookies
# service login cookie is set (again, to a different value)
assert service_cookie_name in s.cookies
assert s.cookies[service_cookie_name] != service_cookie_2
# after completing both OAuth logins, no OAuth state cookies remain
state_cookies = [ s for s in s.cookies.keys() if s.startswith(state_cookie_name) ]
assert state_cookies == []

View File

@@ -51,9 +51,9 @@ def new_spawner(db, **kwargs):
kwargs.setdefault('notebook_dir', os.getcwd())
kwargs.setdefault('default_url', '/user/{username}/lab')
kwargs.setdefault('oauth_client_id', 'mock-client-id')
kwargs.setdefault('INTERRUPT_TIMEOUT', 1)
kwargs.setdefault('TERM_TIMEOUT', 1)
kwargs.setdefault('KILL_TIMEOUT', 1)
kwargs.setdefault('interrupt_timeout', 1)
kwargs.setdefault('term_timeout', 1)
kwargs.setdefault('kill_timeout', 1)
kwargs.setdefault('poll_interval', 1)
return user._new_spawner('', spawner_class=LocalProcessSpawner, **kwargs)
@@ -299,7 +299,7 @@ def test_spawner_reuse_api_token(db, app):
@pytest.mark.gen_test
def test_spawner_insert_api_token(db, app):
def test_spawner_insert_api_token(app):
"""Token provided by spawner is not in the db
Insert token into db as a user-provided token.
@@ -326,7 +326,7 @@ def test_spawner_insert_api_token(db, app):
@pytest.mark.gen_test
def test_spawner_bad_api_token(db, app):
def test_spawner_bad_api_token(app):
"""Tokens are revoked when a Spawner gets another user's token"""
# we need two users for this one
user = add_user(app.db, app, name='antimone')
@@ -346,3 +346,37 @@ def test_spawner_bad_api_token(db, app):
yield user.spawn()
assert orm.APIToken.find(app.db, other_token) is None
assert other_user.api_tokens == []
@pytest.mark.gen_test
def test_spawner_delete_server(app):
"""Test deleting spawner.server
This can occur during app startup if their server has been deleted.
"""
db = app.db
user = add_user(app.db, app, name='gaston')
spawner = user.spawner
orm_server = orm.Server()
db.add(orm_server)
db.commit()
server_id = orm_server.id
spawner.server = Server.from_orm(orm_server)
db.commit()
assert spawner.server is not None
assert spawner.orm_spawner.server is not None
# trigger delete via db
db.delete(spawner.orm_spawner.server)
db.commit()
assert spawner.orm_spawner.server is None
# setting server = None also triggers delete
spawner.server = None
db.commit()
# verify that the server was actually deleted from the db
assert db.query(orm.Server).filter(orm.Server.id == server_id).first() is None
# verify that both ORM and top-level references are None
assert spawner.orm_spawner.server is None
assert spawner.server is None

View File

@@ -34,18 +34,27 @@ def test_memoryspec():
c = C()
c.mem = 1024
assert isinstance(c.mem, int)
assert c.mem == 1024
c.mem = '1024K'
assert isinstance(c.mem, int)
assert c.mem == 1024 * 1024
c.mem = '1024M'
assert isinstance(c.mem, int)
assert c.mem == 1024 * 1024 * 1024
c.mem = '1.5M'
assert isinstance(c.mem, int)
assert c.mem == 1.5 * 1024 * 1024
c.mem = '1024G'
assert isinstance(c.mem, int)
assert c.mem == 1024 * 1024 * 1024 * 1024
c.mem = '1024T'
assert isinstance(c.mem, int)
assert c.mem == 1024 * 1024 * 1024 * 1024 * 1024
with pytest.raises(TraitError):

View File

@@ -48,7 +48,7 @@ class ByteSpecification(Integer):
'K': 1024,
'M': 1024 * 1024,
'G': 1024 * 1024 * 1024,
'T': 1024 * 1024 * 1024 * 1024
'T': 1024 * 1024 * 1024 * 1024,
}
# Default to allowing None as a value
@@ -62,11 +62,15 @@ class ByteSpecification(Integer):
If it has one of the suffixes, it is converted into the appropriate
pure byte value.
"""
if isinstance(value, int):
return value
num = value[:-1]
if isinstance(value, (int, float)):
return int(value)
try:
num = float(value[:-1])
except ValueError:
raise TraitError('{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(val=value))
suffix = value[-1]
if not num.isdigit() and suffix not in ByteSpecification.UNIT_SUFFIXES:
if suffix not in self.UNIT_SUFFIXES:
raise TraitError('{val} is not a valid memory specification. Must be an int or a string with suffix K, M, G, T'.format(val=value))
else:
return int(num) * ByteSpecification.UNIT_SUFFIXES[suffix]
return int(float(num) * self.UNIT_SUFFIXES[suffix])
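
With the changes above, `ByteSpecification` accepts plain numbers and fractional suffixed strings and always normalizes to an int of bytes. A short usage sketch, assuming the trait is imported from `jupyterhub.traitlets`:

from traitlets import HasTraits

from jupyterhub.traitlets import ByteSpecification

class Limits(HasTraits):
    mem = ByteSpecification(None, allow_none=True)

limits = Limits()
limits.mem = '1.5G'     # fractional values with a suffix are now accepted
assert limits.mem == int(1.5 * 1024 * 1024 * 1024)
limits.mem = 1024       # plain numbers pass through as ints
assert limits.mem == 1024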

View File

@@ -12,7 +12,7 @@ from tornado import gen
from tornado.log import app_log
from traitlets import HasTraits, Any, Dict, default
from .utils import url_path_join, default_server_name
from .utils import url_path_join
from . import orm
from ._version import _check_version, __version__
@@ -201,6 +201,7 @@ class User(HasTraits):
authenticator=self.authenticator,
config=self.settings.get('config'),
proxy_spec=url_path_join(self.proxy_spec, name, '/'),
db=self.db,
)
# update with kwargs. Mainly for testing.
spawn_kwargs.update(kwargs)
@@ -420,10 +421,12 @@ class User(HasTraits):
user=self.name, s=spawner.start_timeout,
))
e.reason = 'timeout'
self.settings['statsd'].incr('spawner.failure.timeout')
else:
self.log.error("Unhandled error starting {user}'s server: {error}".format(
user=self.name, error=e,
))
self.settings['statsd'].incr('spawner.failure.error')
e.reason = 'error'
try:
yield self.stop()
@@ -456,11 +459,13 @@ class User(HasTraits):
)
)
e.reason = 'timeout'
self.settings['statsd'].incr('spawner.failure.http_timeout')
else:
e.reason = 'error'
self.log.error("Unhandled error waiting for {user}'s server to show up at {url}: {error}".format(
user=self.name, url=server.url, error=e,
))
self.settings['statsd'].incr('spawner.failure.http_error')
try:
yield self.stop()
except Exception:
@@ -472,6 +477,9 @@ class User(HasTraits):
else:
server_version = resp.headers.get('X-JupyterHub-Version')
_check_version(__version__, server_version, self.log)
# record the Spawner version for better error messages
# if it doesn't work
spawner._jupyterhub_version = server_version
finally:
spawner._waiting_for_response = False
spawner._start_pending = False
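
The hunks above add statsd counters for each way a spawn can fail (timeout, error, http_timeout, http_error). JupyterHub's statsd integration is built on the `statsd` package; a sketch of emitting the same counters directly, with example host, port, and prefix:

from statsd import StatsClient

stats = StatsClient('localhost', 8125, prefix='jupyterhub')
stats.incr('spawner.failure.timeout')       # start_timeout exceeded
stats.incr('spawner.failure.error')         # unhandled error while starting
stats.incr('spawner.failure.http_timeout')  # server never became reachable
stats.incr('spawner.failure.http_error')    # error while waiting for the server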

View File

@@ -142,7 +142,8 @@ def wait_for_server(ip, port, timeout=10):
ip = '127.0.0.1'
yield exponential_backoff(
lambda: can_connect(ip, port),
"Server at {ip}:{port} didn't respond in {timeout} seconds".format(ip=ip, port=port, timeout=timeout)
"Server at {ip}:{port} didn't respond in {timeout} seconds".format(ip=ip, port=port, timeout=timeout),
timeout=timeout
)
@@ -175,7 +176,8 @@ def wait_for_http_server(url, timeout=10):
return False
re = yield exponential_backoff(
is_reachable,
"Server at {url} didn't respond in {timeout} seconds".format(url=url, timeout=timeout)
"Server at {url} didn't respond in {timeout} seconds".format(url=url, timeout=timeout),
timeout=timeout
)
return re
@@ -296,17 +298,3 @@ def url_path_join(*pieces):
return result
def default_server_name(user):
"""Return the default name for a new server for a given user.
Will be the first available integer string, e.g. '1' or '2'.
"""
existing_names = set(user.spawners)
# if there are 5 servers, count from 1 to 6
for n in range(1, len(existing_names) + 2):
name = str(n)
if name not in existing_names:
return name
raise RuntimeError("It should be impossible to get here")
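
The utils change above threads the caller's `timeout` through to `exponential_backoff`, so the wait actually gives up after the advertised number of seconds. A small usage sketch of the helper outside the Hub (the file-based check is illustrative):

import os

from tornado import gen
from tornado.ioloop import IOLoop
from jupyterhub.utils import exponential_backoff

@gen.coroutine
def wait_for_file(path, timeout=10):
    # poll with exponentially growing delays until the check passes,
    # raising a timeout error with the given message after `timeout` seconds
    yield exponential_backoff(
        lambda: os.path.exists(path),
        "File {path} didn't show up in {timeout} seconds".format(path=path, timeout=timeout),
        timeout=timeout,
    )

# IOLoop.current().run_sync(lambda: wait_for_file('/tmp/ready', timeout=1))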

View File

@@ -4,5 +4,5 @@ tornado>=4.1
jinja2
pamela
python-oauth2>=1.0
SQLAlchemy>=1.0
SQLAlchemy>=1.1
requests

View File

@@ -32,9 +32,9 @@
<tbody>
<tr class="user-row add-user-row">
<td colspan="12">
<a id="add-users" class="col-xs-2 btn btn-default">Add Users</a>
<a id="stop-all-servers" class="col-xs-2 col-xs-offset-5 btn btn-danger">Stop All</a>
<a id="shutdown-hub" class="col-xs-2 col-xs-offset-1 btn btn-danger">Shutdown Hub</a>
<a id="add-users" role="button" class="col-xs-2 btn btn-default">Add Users</a>
<a id="stop-all-servers" role="button" class="col-xs-2 col-xs-offset-5 btn btn-danger">Stop All</a>
<a id="shutdown-hub" role="button" class="col-xs-2 col-xs-offset-1 btn btn-danger">Shutdown Hub</a>
</td>
</tr>
{% for u in users %}
@@ -44,20 +44,20 @@
<td class="admin-col col-sm-2">{% if u.admin %}admin{% endif %}</td>
<td class="time-col col-sm-3">{{u.last_activity.isoformat() + 'Z'}}</td>
<td class="server-col col-sm-2 text-center">
<span class="stop-server btn btn-xs btn-danger {% if not u.running %}hidden{% endif %}">stop server</span>
<span class="start-server btn btn-xs btn-success {% if u.running %}hidden{% endif %}">start server</span>
<span role="button" class="stop-server btn btn-xs btn-danger {% if not u.running %}hidden{% endif %}">stop server</span>
<span role="button" class="start-server btn btn-xs btn-success {% if u.running %}hidden{% endif %}">start server</span>
</td>
<td class="server-col col-sm-1 text-center">
{% if admin_access %}
<span class="access-server btn btn-xs btn-success {% if not u.running %}hidden{% endif %}">access server</span>
<span role="button" class="access-server btn btn-xs btn-success {% if not u.running %}hidden{% endif %}">access server</span>
{% endif %}
</td>
<td class="edit-col col-sm-1 text-center">
<span class="edit-user btn btn-xs btn-primary">edit</span>
<span role="button" class="edit-user btn btn-xs btn-primary">edit</span>
</td>
<td class="edit-col col-sm-1 text-center">
{% if u.name != user.name %}
<span class="delete-user btn btn-xs btn-danger">delete</span>
<span role="button" class="delete-user btn btn-xs btn-danger">delete</span>
{% endif %}
</td>
{% endblock user_row %}

View File

@@ -22,6 +22,11 @@
{{message_html | safe}}
</p>
{% endif %}
{% if extra_error_html %}
<p>
{{extra_error_html | safe}}
</p>
{% endif %}
{% endblock error_detail %}
</div>

View File

@@ -6,9 +6,9 @@
<div class="row">
<div class="text-center">
{% if user.running %}
<a id="stop" class="btn btn-lg btn-danger">Stop My Server</a>
<a id="stop" role="button" class="btn btn-lg btn-danger">Stop My Server</a>
{% endif %}
<a id="start" class="btn btn-lg btn-success" href="{{ url }}">
<a id="start"role="button" class="btn btn-lg btn-success" href="{{ url }}">
{% if not user.running %}
Start
{% endif %}

View File

@@ -8,10 +8,10 @@
{% block login %}
<div id="login-main" class="container">
{% if custom_html %}
{{ custom_html }}
{{ custom_html | safe }}
{% elif login_service %}
<div class="service-login">
<a class='btn btn-jupyter btn-lg' href='{{authenticator_login_url}}'>
<a role="button" class='btn btn-jupyter btn-lg' href='{{authenticator_login_url}}'>
Sign in with {{login_service}}
</a>
</div>

View File

@@ -99,9 +99,9 @@
<ul class="nav navbar-nav">
<li><a href="{{base_url}}home">Home</a></li>
<li><a href="{{base_url}}token">Token</a></li>
{% endif %}
{% if user.admin %}
<li><a href="{{base_url}}admin">Admin</a></li>
{% endif %}
</ul>
{% endif %}
<ul class="nav navbar-nav navbar-right">
@@ -109,9 +109,9 @@
{% block login_widget %}
<span id="login_widget">
{% if user %}
<a id="logout" class="navbar-btn btn-sm btn btn-default" href="{{logout_url}}"> <i aria-hidden="true" class="fa fa-sign-out"></i> Logout</a>
<a id="logout" role="button" class="navbar-btn btn-sm btn btn-default" href="{{logout_url}}"> <i aria-hidden="true" class="fa fa-sign-out"></i> Logout</a>
{% else %}
<a id="login" class="btn-sm btn navbar-btn btn-default" href="{{login_url}}">Login</a>
<a id="login" role="button" class="btn-sm btn navbar-btn btn-default" href="{{login_url}}">Login</a>
{% endif %}
</span>
{% endblock %}

View File

@@ -8,7 +8,7 @@
<p>Your server is starting up.</p>
<p>You will be redirected automatically when it's ready for you.</p>
<p><i class="fa fa-spinner fa-pulse fa-fw fa-3x" aria-hidden="true"></i></p>
<a id="refresh" class="btn btn-lg btn-primary" href="#">refresh</a>
<a role="button" id="refresh" class="btn btn-lg btn-primary" href="#">refresh</a>
</div>
</div>
</div>

View File

@@ -5,7 +5,7 @@
<div class="container">
<div class="row">
<div class="text-center">
<a id="request-token" class="btn btn-lg btn-jupyter" href="#">
<a id="request-token" role="button" class="btn btn-lg btn-jupyter" href="#">
Request new API token
</a>
</div>