**Environment:**
* image: k8s-hub (`jupyterhub/k8s-hub:0.11.1`);
* `authenticator_class: dummy`;
* db: cockroachdb (`sqlalchemy-cockroachdb`).
**Description:**
The `save_bearer_token` method (in `provider.py`) passes a float value to the `expires_at` field, which is an integer column.
A user can create a notebook and it gets scheduled successfully, but once the pod is up and ready, the user is unable to enter the notebook because JupyterHub cannot save a token. The logs show the following:
```
[I 2021-05-29 14:45:04.302 JupyterHub log:181] 302 GET /hub/api/oauth2/authorize?client_id=jupyterhub-user-user2&redirect_uri=%2Fuser%2Fuser2%2Foauth_callback&response_type=code&state=[secret] -> /user/user2/oauth_callback?code=[secret]&state=[secret] (user2 40.113.125.116) 73.98ms
[E 2021-05-29 14:45:04.424 JupyterHub web:1789] Uncaught exception POST /hub/api/oauth2/token (10.42.80.10)
HTTPServerRequest(protocol='http', host='hub:8081', method='POST', uri='/hub/api/oauth2/token', version='HTTP/1.1', remote_ip='10.42.80.10')
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/tornado/web.py", line 1702, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "/usr/local/lib/python3.8/dist-packages/jupyterhub/apihandlers/auth.py", line 324, in post
headers, body, status = self.oauth_provider.create_token_response(
File "/usr/local/lib/python3.8/dist-packages/oauthlib/oauth2/rfc6749/endpoints/base.py", line 116, in wrapper
return f(endpoint, uri, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/oauthlib/oauth2/rfc6749/endpoints/token.py", line 118, in create_token_response
return grant_type_handler.create_token_response(
File "/usr/local/lib/python3.8/dist-packages/oauthlib/oauth2/rfc6749/grant_types/authorization_code.py", line 313, in create_token_response
self.request_validator.save_token(token, request)
File "/usr/local/lib/python3.8/dist-packages/jupyterhub/oauth/provider.py", line 281, in save_token
return self.save_bearer_token(token, request, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/jupyterhub/oauth/provider.py", line 354, in save_bearer_token
self.db.commit()
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/session.py", line 1042, in commit
self.transaction.commit()
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/session.py", line 504, in commit
self._prepare_impl()
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/session.py", line 483, in _prepare_impl
self.session.flush()
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/session.py", line 2536, in flush
self._flush(objects)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/session.py", line 2678, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
compat.raise_(
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/session.py", line 2638, in _flush
flush_context.execute()
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/unitofwork.py", line 422, in execute
rec.execute(self)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/unitofwork.py", line 586, in execute
persistence.save_obj(
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/persistence.py", line 239, in save_obj
_emit_insert_statements(
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/orm/persistence.py", line 1135, in _emit_insert_statements
result = cached_connections[connection].execute(
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 1124, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception
util.raise_(
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 593, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DatatypeMismatch) value type decimal doesn't match type int of column "expires_at"
HINT: you will need to rewrite or cast the expression
[SQL: INSERT INTO oauth_access_tokens (client_id, grant_type, expires_at, refresh_token, refresh_expires_at, user_id, session_id, hashed, prefix, created, last_activity) VALUES (%(client_id)s, %(grant_type)s, %(expires_at)s, %(refresh_token)s, %(refresh_expires_at)s, %(user_id)s, %(session_id)s, %(hashed)s, %(prefix)s, %(created)s, %(last_activity)s) RETURNING oauth_access_tokens.id]
[parameters: {'client_id': 'jupyterhub-user-user2', 'grant_type': 'authorization_code', 'expires_at': 1622303104.418992, 'refresh_token': 'FVJ8S4is0367LlEMnxIiEIoTOeoxhf', 'refresh_expires_at': None, 'user_id': 662636890939424770, 'session_id': '4e041a2bfcb34a34a00033a281bc1236', 'hashed': 'sha512:1:3b18deae37fbf50a:03df035736960af14e19196e1d13fd74f55c21f17405119f80e75817ff37c7567fab089a3d40b97a57f94b54065ee56f7260895352516b9facb989d656f05be8', 'prefix': 't11z', 'created': datetime.datetime(2021, 5, 29, 14, 45, 4, 421305), 'last_activity': None}]
(Background on this error at: http://sqlalche.me/e/13/f405)
[W 2021-05-29 14:45:04.430 JupyterHub base:110] Rolling back session due to database error (psycopg2.errors.DatatypeMismatch) value type decimal doesn't match type int of column "expires_at"
HINT: you will need to rewrite or cast the expression
[SQL: INSERT INTO oauth_access_tokens (client_id, grant_type, expires_at, refresh_token, refresh_expires_at, user_id, session_id, hashed, prefix, created, last_activity) VALUES (%(client_id)s, %(grant_type)s, %(expires_at)s, %(refresh_token)s, %(refresh_expires_at)s, %(user_id)s, %(session_id)s, %(hashed)s, %(prefix)s, %(created)s, %(last_activity)s) RETURNING oauth_access_tokens.id]
[parameters: {'client_id': 'jupyterhub-user-user2', 'grant_type': 'authorization_code', 'expires_at': 1622303104.418992, 'refresh_token': 'FVJ8S4is0367LlEMnxIiEIoTOeoxhf', 'refresh_expires_at': None, 'user_id': 662636890939424770, 'session_id': '4e041a2bfcb34a34a00033a281bc1236', 'hashed': 'sha512:1:3b18deae37fbf50a:03df035736960af14e19196e1d13fd74f55c21f17405119f80e75817ff37c7567fab089a3d40b97a57f94b54065ee56f7260895352516b9facb989d656f05be8', 'prefix': 't11z', 'created': datetime.datetime(2021, 5, 29, 14, 45, 4, 421305), 'last_activity': None}]
(Background on this error at: http://sqlalche.me/e/13/f405)
[E 2021-05-29 14:45:04.443 JupyterHub log:173] {
"Host": "hub:8081",
"User-Agent": "python-requests/2.25.1",
"Accept-Encoding": "gzip, deflate",
"Accept": "*/*",
"Connection": "keep-alive",
"Content-Type": "application/x-www-form-urlencoded",
"Authorization": "token [secret]",
"Content-Length": "190"
}
[E 2021-05-29 14:45:04.444 JupyterHub log:181] 500 POST /hub/api/oauth2/token (user2 10.42.80.10) 63.28ms
```
Everything worked once I changed:
`expires_at=orm.OAuthAccessToken.now() + token['expires_in'],`
to:
`expires_at=int(orm.OAuthAccessToken.now() + token['expires_in']),`
That's what this PR is about.
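For context, the `expires_at` parameter in the failing INSERT above is a float Unix timestamp (`1622303104.418992`): `orm.OAuthAccessToken.now()` evidently returns a float, so adding the integer `expires_in` still yields a float, which CockroachDB rejects for an INT column while other backends are more lenient. A minimal illustration, using `time.time()` as a stand-in for `now()`:
```python
import time

expires_in = 3600  # seconds, as delivered in token['expires_in']

# time.time() returns a float Unix timestamp, so the sum is still a float
# and CockroachDB rejects it for the INT expires_at column:
expires_at = time.time() + expires_in        # e.g. 1622303104.418992

# the cast applied in this PR keeps the value an int:
expires_at = int(time.time() + expires_in)   # e.g. 1622303104
```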
As a side note, the `black` formatter adjusted the `orm_client = orm.OAuthClient(identifier=client_id,)` line; I think that should be fine, but please feel free to revert it if needed.
(Update) Added the missing `int` conversion.
Signed-off-by: Erik Sundell <erik.i.sundell@gmail.com>
- update references to default branch name in docs, workflows
- use HEAD in github urls, which always works regardless of default branch name
- fix petstore URLs since the old petstore links seem to have stopped working
to merge, in order:
- [x] approve this PR
- [x] rename the default branch to main in settings
- [x] merge this PR
Related tangent: I've been using [this git default-branch](https://github.com/minrk/git-stuff/blob/main/bin/git-default-branch) to help with my aliases and friends working with repos with different branch names.
Signed-off-by: Min RK <benjaminrk@gmail.com>
...where I thought it already was, instead of on the test class.
and fix the logic for when it is called a bit:
- call on *all* Spawners, not just the default
- call on named server deletion when remove=True
closes #3451, finishes #3337
Signed-off-by: Min RK <benjaminrk@gmail.com>
This should never occur in real applications, where only one loop is run, but it may occur in tests if the Proxy object lives longer than the loop that is running when it's created (imported?).
I *suspect* this is the source of our intermittent test failures with:
> got Future <Future pending> attached to a different loop
But since they are intermittent, it's hard to be sure, even if this PR passes.
The issue: we were allocating an `asyncio.Lock()`, which in turn grabs a handle on the current event loop, at *method definition time* in the decorator, instead of at *call time*.
The solution: allocate the lock at call time, *and* double-check that we never use a lock across event loops by storing the locks per loop.
This should change nothing for 'real' hub instances, where only one loop is ever running; it only affects tests, where we start and stop loops a bunch.
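A minimal sketch of the per-loop, call-time pattern described above (the decorator name and storage layout here are illustrative, not JupyterHub's exact implementation):
```python
import asyncio
from functools import wraps

def locked_method(method):
    """Illustrative decorator: create the asyncio.Lock lazily at call time and
    keep one lock per running event loop, so a lock created under one loop is
    never awaited under another (as can happen when tests restart loops)."""
    locks = {}  # running event loop -> asyncio.Lock

    @wraps(method)
    async def wrapped(self, *args, **kwargs):
        loop = asyncio.get_running_loop()
        lock = locks.get(loop)
        if lock is None:
            lock = locks[loop] = asyncio.Lock()
        async with lock:
            return await method(self, *args, **kwargs)

    return wrapped
```
The key point is that `asyncio.get_running_loop()` is only consulted inside `wrapped`, i.e. at call time, never at definition time.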
Signed-off-by: Min RK <benjaminrk@gmail.com>
and clarify the warning shown when a base handler isn't patched, noting that auth is still being applied
- reorganize patch steps into functions for easier re-use
- patch notebook and jupyter_server handlers if they are already imported
- run patch after initialize to ensure extensions have done their importing before we check what's present
- apply class-level patch even when instance-level patch is happening to avoid triggering patch on every request
This change isn't as big as it looks, because it's mostly moving some re-used code to a couple of functions.
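A rough sketch of the "patch only if already imported" idea from the list above (the function and variable names here are illustrative, not the actual singleuser extension code):
```python
import sys

# base handler classes we may need to patch, per package
BASE_HANDLERS = [
    ("notebook.base.handlers", "IPythonHandler"),
    ("jupyter_server.base.handlers", "JupyterHandler"),
]

def patch_imported_base_handlers(apply_class_patch):
    """Apply the class-level patch to every base handler whose package has
    already been imported; packages that were never imported are skipped."""
    for module_name, class_name in BASE_HANDLERS:
        module = sys.modules.get(module_name)
        if module is None:
            # not imported in this process: nothing to patch here; auth is
            # still applied per-request by the instance-level fallback
            continue
        apply_class_patch(getattr(module, class_name))
```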
closes https://github.com/jupyter-server/jupyter_server/issues/488
Signed-off-by: Min RK <benjaminrk@gmail.com>
Pin the references to GitHub Actions that we rely on in workflows whose jobs use GitHub secrets that could get exposed.
Signed-off-by: Min RK <benjaminrk@gmail.com>
When the hub is running in API-only mode, it's
very useful to have the proxy know where to send
URLs that would normally be serviced by the hub.
For example, `/` might go to a service that renders
a home page, while `/user` might go to a service that
tells the user their server is dead.
Right now, this happens 'out of band', with a process
that has to talk to the proxy directly. This is a
bit messy - the routes need to be re-added when the
proxy restarts, the hub might try to remove them, etc.
By adding support for this in the hub itself, all
this complexity is removed and the hub continues
to own all the routes in the proxy.
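As a rough illustration of the intent (the option names below are assumptions for the sketch, not necessarily the exact traitlets this change adds), the configuration could look something like:
```python
# jupyterhub_config.py -- illustrative sketch only
c.JupyterHub.hub_routespec = "/hub/"  # in API-only mode, the hub only claims its own prefix

# extra routes the hub itself registers (and re-registers) with the proxy,
# so no out-of-band process has to re-add them after a proxy restart
c.Proxy.extra_routes = {
    "/": "http://home-page-service:8080",
    "/user/": "http://server-stopped-service:8080",
}
```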
This allows for more flexible customization of the login page,
since the login form can now be reused in an extending template
by reusing the new block.
This was not cleanly possible before, since the main container
was part of the very same block as the form code.
fixes #3414