Compare commits


32 Commits
0.9.2 ... 0.9.3

Author SHA1 Message Date
Min RK
e94f5e043a release 0.9.3 2018-09-12 09:46:02 +02:00
Min RK
5456fb6356 remove spurious print from keepalive code
and send keepalive every 8 seconds

to protect against possibly aggressive proxies dropping connections after 10 seconds of inactivity
2018-09-12 09:46:02 +02:00
Min RK
fb75b9a392 write needs no await 2018-09-11 16:42:29 +02:00
Min RK
90d341e6f7 changelog for 0.9.3
Mainly small fixes, but the token page could be completely broken

This release will include the spawner.handler addition,
but not the oauthlib change currently in master
2018-09-11 16:42:21 +02:00
Min RK
a0354de3c1 Merge pull request #2139 from minrk/token-page
token expiry fixes
2018-09-11 11:01:37 +02:00
Min RK
2e4e1ce82f test token page with html parsing 2018-09-11 10:16:36 +02:00
Min RK
06f646099f token expiry fixes
typos in token expiry:

- omitted from token model (it's in the spec in docs, but wasn't in the model)
- wrong type when sorting oauth tokens on the token page could cause the token page to not render
2018-09-11 08:54:12 +02:00
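
A minimal sketch of the second fix's root cause (values below are illustrative, not code from this changeset): oauth access tokens store `expires_at` as an integer Unix timestamp, and comparing that integer against a `datetime` raises a TypeError in Python 3, which is enough to break the token page render. Comparing in one unit avoids it:

from datetime import datetime, timezone

expires_at = 1536650000                  # hypothetical seconds-since-epoch expiry
now = datetime.now(timezone.utc)

# expires_at < now   -> TypeError: '<' not supported between 'int' and 'datetime.datetime'
expired = expires_at < now.timestamp()   # compare in the same unit instead
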
Min RK
3360817cb6 Merge pull request #2138 from SivaMaplelabs/undefined-variable
Fix undefined variable 'datetime' error
2018-09-11 08:52:59 +02:00
SivaMaplelabs
e042ad0b4a Fix undefined variable 'datetime' error 2018-09-10 20:04:54 +05:30
Min RK
246f9f9044 Merge pull request #2135 from adelcast/dev/adelcast/fix_chp
add Windows case when stopping the CHP
2018-09-10 15:19:14 +02:00
Alejandro del Castillo
bc08f4de34 proxy: add Windows case when zombie proxy is still running
Windows doesn't support signal.SIGKILL, which is used by
_check_previous_process to kill the CHP if still running. Use existing
implementation to kill the CHP and children processes on Windows
instead.

Signed-off-by: Alejandro del Castillo <alejandro.delcastillo@ni.com>
2018-09-06 18:06:16 -05:00
Alejandro del Castillo
12904ecc32 _check_previous_process: use signal list as input to os.kill
Previously, signal.SIGTERM was sent 3 times, instead of sending it twice
and then signal.SIGKILL.

Signed-off-by: Alejandro del Castillo <alejandro.delcastillo@ni.com>
2018-09-06 16:15:56 -05:00
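
Taken together, these two proxy fixes implement an escalation pattern; a rough sketch, assuming `psutil` is installed on Windows (the function name and structure below are illustrative, the actual change is in the proxy diff further down):

import os
import signal
import time

def kill_stale_proxy(pid):
    """Send SIGTERM twice, then SIGKILL; on Windows kill the whole process tree."""
    if os.name == 'nt':
        import psutil                      # Windows has no SIGKILL
        parent = psutil.Process(pid)
        children = parent.children(recursive=True)
        for child in children:
            child.kill()
        psutil.wait_procs(children, timeout=5)
        parent.kill()
        return
    for sig in (signal.SIGTERM, signal.SIGTERM, signal.SIGKILL):
        try:
            os.kill(pid, sig)
        except ProcessLookupError:
            return                         # process already exited
        time.sleep(1)
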
Matthias Bussonnier
601d371796 Merge pull request #2132 from willingc/iss204
fix link
2018-09-06 10:16:17 +02:00
Carol Willing
30d9e09390 fix link 2018-09-05 11:27:19 -07:00
Min RK
7850a5d478 Merge pull request #2036 from minrk/pass-handler
pass requesting handler to spawner
2018-09-04 18:27:02 +02:00
Min RK
f5a3b1bc5a Merge pull request #2122 from SivaMaplelabs/pylint-fix
Address the pylint warnings
2018-09-04 09:58:13 +02:00
SivaMaplelabs
b2fe8e5691 Address the pylint warnings 2018-09-03 21:35:46 +05:30
Min RK
9d4c410996 include params in redirect from /spawn -> /user/:name 2018-09-03 09:57:00 +02:00
Min RK
dcae92ce4a test passing url params to spawner 2018-09-03 09:56:42 +02:00
Carol Willing
29957b8cd8 Merge pull request #2112 from minrk/disable-quit
disable quit button
2018-08-30 22:47:13 -07:00
Carol Willing
6299e0368c Merge pull request #2119 from Carreau/typoes
Fix some typos using `codespell`.
2018-08-30 15:44:55 -07:00
Carol Willing
c862b6062d Merge pull request #2121 from minrk/progress-keepalive
add keepalive to progress eventstream
2018-08-30 15:43:18 -07:00
Min RK
146587ffff add keepalive to progress eventstream
avoids issues with proxies dropping connections when no data passes through

Progress behavior should already be resilient to dropped connections,
as the progress ought to just resume anew.
2018-08-30 19:03:14 +02:00
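
The pattern in isolation, as a hedged sketch (attribute names follow the SpawnProgressAPIHandler diff further down; the interval is the 8 seconds mentioned above): write an empty server-sent-events message on a timer so intermediate proxies keep seeing traffic between real progress events.

import asyncio
from tornado.iostream import StreamClosedError

KEEPALIVE_INTERVAL = 8  # seconds, below the ~10s idle timeout of strict proxies

async def keepalive(handler):
    """Write blank SSE messages until the handler finishes; clients ignore them."""
    while not handler._finished:           # flag set in on_finish()
        try:
            handler.write("\n\n")
            await handler.flush()
        except (StreamClosedError, RuntimeError):
            return
        await asyncio.sleep(KEEPALIVE_INTERVAL)

# started alongside the real event stream, e.g. asyncio.ensure_future(keepalive(self))
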
Matthias Bussonnier
077d8dec9a Fix some typos using codespell.
And checking each manually. It's funny because one of the words in the
sphinx custom dictionary was wrong :-)
2018-08-29 21:24:28 -07:00
Min RK
af8d6086fc disable quit button
quit button (new in recent notebook 5.x) shuts down the server, which we want to happen via the JupyterHub control panel
2018-08-27 16:18:53 +02:00
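
For comparison, the same behavior is available through notebook configuration (assuming notebook >= 5.6, where the quit button and its trait were introduced); the diff below instead sets the equivalent class attribute on SingleUserNotebookApp:

# jupyter_notebook_config.py
c.NotebookApp.quit_button = False   # hide the Quit button in the notebook UI
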
Min RK
18f8661d73 publish singleuser x.y.z.dev from master 2018-08-20 10:42:46 +02:00
Min RK
bd70f66c70 Merge pull request #2094 from minrk/image-dev-tag
add .dev suffix to development x.y image tags
2018-08-20 10:38:02 +02:00
Min RK
ac213fc4b5 add .dev suffix to development x.y image tags
instead of publishing "1.0" for a development version.
2018-08-20 10:37:43 +02:00
Min RK
db33549173 Merge pull request #2092 from minrk/stable-0.9
fix jupyterhub/singleuser tagging
2018-08-17 16:44:37 +02:00
Min RK
e985e2b84c singleuser stable version is 0.9 2018-08-17 16:33:42 +02:00
Min RK
1d9abf7528 back to dev 2018-08-17 16:30:24 +02:00
Min RK
897f5f62d5 pass requesting handler to spawner
allows Spawners to implement logic such as processing GET params to select inputs

USE WITH CARE because this gives authors of links the ability to pass parameters to spawn without user knowledge or input.

This should only be used for things like selecting from a list of all known-good choices, e.g. a profile list.
2018-07-13 17:23:19 -05:00
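
A hedged sketch of the intended usage (the spawner class, profile names, and config below are illustrative, not part of this changeset): the spawner consults `self.handler` only to select from a fixed allow-list, never to accept arbitrary input from the URL.

from jupyterhub.spawner import LocalProcessSpawner

# known-good choices only; anything else falls back to the default
PROFILES = {
    'minimal': ['jupyterhub-singleuser'],
    'debug': ['jupyterhub-singleuser', '--debug'],
}

class ProfileSpawner(LocalProcessSpawner):
    """Illustrative spawner reading ?profile=... from the requesting handler."""

    async def start(self):
        profile = 'minimal'
        if self.handler:
            # self.handler is set by JupyterHub for the duration of start()
            requested = self.handler.get_argument('profile', '')
            if requested in PROFILES:
                profile = requested
        self.cmd = PROFILES[profile]
        return await super().start()

# in jupyterhub_config.py:  c.JupyterHub.spawner_class = ProfileSpawner
# a link like /hub/spawn?profile=debug then selects the profile
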
26 changed files with 245 additions and 45 deletions

View File

@@ -1,5 +1,6 @@
-r requirements.txt
mock
beautifulsoup4
codecov
cryptography
pytest-cov

View File

@@ -9,6 +9,21 @@ command line for details.
## 0.9
### [0.9.3] 2018-09-12
JupyterHub 0.9.3 contains small bugfixes and improvements
- Fix token page and model handling of `expires_at`.
This field was missing from the REST API model for tokens
and could cause the token page to not render
- Add keep-alive to progress event stream to avoid proxies dropping
the connection due to inactivity
- Documentation and example improvements
- Disable quit button when using notebook 5.6
- Prototype new feature (may change prior to 1.0):
pass requesting Handler to Spawners during start,
accessible as `self.handler`
### [0.9.2] 2018-08-10
JupyterHub 0.9.2 contains small bugfixes and improvements.
@@ -118,7 +133,7 @@ and tornado < 5.0.
- Added "Start All" button to admin page for launching all user servers at once.
- Services have an `info` field which is a dictionary.
This is accessible via the REST API.
- `JupyterHub.extra_handlers` allows defining additonal tornado RequestHandlers attached to the Hub.
- `JupyterHub.extra_handlers` allows defining additional tornado RequestHandlers attached to the Hub.
- API tokens may now expire.
Expiry is available in the REST model as `expires_at`,
and settable when creating API tokens by specifying `expires_in`.
@@ -402,7 +417,8 @@ Fix removal of `/login` page in 0.4.0, breaking some OAuth providers.
First preview release
[Unreleased]: https://github.com/jupyterhub/jupyterhub/compare/0.9.2...HEAD
[Unreleased]: https://github.com/jupyterhub/jupyterhub/compare/0.9.3...HEAD
[0.9.3]: https://github.com/jupyterhub/jupyterhub/compare/0.9.2...0.9.3
[0.9.2]: https://github.com/jupyterhub/jupyterhub/compare/0.9.1...0.9.2
[0.9.1]: https://github.com/jupyterhub/jupyterhub/compare/0.9.0...0.9.1
[0.9.0]: https://github.com/jupyterhub/jupyterhub/compare/0.8.1...0.9.0

View File

@@ -45,7 +45,7 @@ is important that these files be put in a secure location on your server, where
they are not readable by regular users.
If you are using a **chain certificate**, see also chained certificate for SSL
in the JupyterHub `troubleshooting FAQ <troubleshooting>`_.
in the JupyterHub `Troubleshooting FAQ <../troubleshooting.html>`_.
Using letsencrypt
~~~~~~~~~~~~~~~~~

View File

@@ -70,7 +70,7 @@ Cmnd_Alias JUPYTER_CMD = /usr/local/bin/sudospawner
rhea ALL=(JUPYTER_USERS) NOPASSWD:JUPYTER_CMD
```
It might be useful to modifiy `secure_path` to add commands in path.
It might be useful to modify `secure_path` to add commands in path.
As an alternative to adding every user to the `/etc/sudoers` file, you can
use a group in the last line above, instead of `JUPYTER_USERS`:

View File

@@ -125,7 +125,7 @@ sure are available, I can install their specs system-wide (in /usr/local) with:
There are two broad categories of user environments that depend on what
Spawner you choose:
- Multi-user hosts (shared sytem)
- Multi-user hosts (shared system)
- Container-based
How you configure user environments for each category can differ a bit

View File

@@ -196,7 +196,7 @@ allocate. Attempting to use more memory than this limit will cause errors. The
single-user notebook server can discover its own memory limit by looking at
the environment variable `MEM_LIMIT`, which is specified in absolute bytes.
`c.Spawner.mem_guarantee`: Sometimes, a **guarantee** of a *minumum amount of
`c.Spawner.mem_guarantee`: Sometimes, a **guarantee** of a *minimum amount of
memory* is desirable. In this case, you can set `c.Spawner.mem_guarantee`
to provide a guarantee that at minimum this much memory will always be
available for the single-user notebook server to use. The environment variable

View File

@@ -75,7 +75,7 @@ the top of all pages. The more specific variables
`announcement_login`, `announcement_spawn`, `announcement_home`, and
`announcement_logout` are more specific and only show on their
respective pages (overriding the global `announcement` variable).
Note that changing these varables require a restart, unlike direct
Note that changing these variables require a restart, unlike direct
template extension.
You can get the same effect by extending templates, which allows you

View File

@@ -166,7 +166,7 @@ startup
statsd
stdin
stdout
stoppped
stopped
subclasses
subcommand
subdomain

View File

@@ -1,9 +1,14 @@
# Example for a Spawner.pre_spawn_hook
# create a directory for the user before the spawner starts
"""
Example for a Spawner.pre_spawn_hook
create a directory for the user before the spawner starts
"""
# pylint: disable=import-error
import os
import shutil
from jupyter_client.localinterfaces import public_ips
def create_dir_hook(spawner):
""" Create directory """
username = spawner.user.name # get the username
volume_path = os.path.join('/volumes/jupyterhub', username)
if not os.path.exists(volume_path):
@@ -12,23 +17,24 @@ def create_dir_hook(spawner):
# ...
def clean_dir_hook(spawner):
""" Delete directory """
username = spawner.user.name # get the username
temp_path = os.path.join('/volumes/jupyterhub', username, 'temp')
if os.path.exists(temp_path) and os.path.isdir(temp_path):
shutil.rmtree(temp_path)
# attach the hook functions to the spawner
# pylint: disable=undefined-variable
c.Spawner.pre_spawn_hook = create_dir_hook
c.Spawner.post_stop_hook = clean_dir_hook
# Use the DockerSpawner to serve your users' notebooks
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
from jupyter_client.localinterfaces import public_ips
c.JupyterHub.hub_ip = public_ips()[0]
c.DockerSpawner.hub_ip_connect = public_ips()[0]
c.DockerSpawner.container_ip = "0.0.0.0"
# You can now mount the volume to the docker container as we've
# made sure the directory exists
# pylint: disable=bad-whitespace
c.DockerSpawner.volumes = { '/volumes/jupyterhub/{username}/': '/home/jovyan/work' }

View File

@@ -11,12 +11,16 @@ function get_hub_version() {
hub_xyz=$(cat hub_version)
split=( ${hub_xyz//./ } )
hub_xy="${split[0]}.${split[1]}"
# add .dev on hub_xy so it's 1.0.dev
if [[ ! -z "${split[3]}" ]]; then
hub_xy="${hub_xy}.${split[3]}"
fi
}
get_hub_version
# when building master, push 0.9.0 as well
# when building master, push 0.9.0.dev as well
docker tag $DOCKER_REPO:$DOCKER_TAG $DOCKER_REPO:$hub_xyz
docker push $DOCKER_REPO:$hub_xyz
docker tag $ONBUILD:$DOCKER_TAG $ONBUILD:$hub_xyz

View File

@@ -6,7 +6,7 @@
version_info = (
0,
9,
2,
3,
"", # release (b1, rc1, or "" for final or dev)
# "dev", # dev or nothing
)

View File

@@ -2,6 +2,7 @@
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
from datetime import datetime
import json
from http.client import responses
@@ -13,7 +14,17 @@ from .. import orm
from ..handlers import BaseHandler
from ..utils import isoformat, url_path_join
class APIHandler(BaseHandler):
"""Base class for API endpoints
Differences from page handlers:
- JSON responses and errors
- strict referer checking for Cookie-authenticated requests
- strict content-security-policy
- methods for REST API models
"""
@property
def content_security_policy(self):
@@ -156,6 +167,7 @@ class APIHandler(BaseHandler):
'kind': kind,
'created': isoformat(token.created),
'last_activity': isoformat(token.last_activity),
'expires_at': isoformat(expires_at),
}
model.update(extra)
return model

View File

@@ -428,6 +428,9 @@ class UserAdminAccessAPIHandler(APIHandler):
class SpawnProgressAPIHandler(APIHandler):
"""EventStream handler for pending spawns"""
keepalive_interval = 8
def get_content_type(self):
return 'text/event-stream'
@@ -440,6 +443,23 @@ class SpawnProgressAPIHandler(APIHandler):
# raise Finish to halt the handler
raise web.Finish()
_finished = False
def on_finish(self):
self._finished = True
async def keepalive(self):
"""Write empty lines periodically
to avoid being closed by intermediate proxies
when there's a large gap between events.
"""
while not self._finished:
try:
self.write("\n\n")
except (StreamClosedError, RuntimeError):
return
await asyncio.sleep(self.keepalive_interval)
@admin_or_self
async def get(self, username, server_name=''):
self.set_header('Cache-Control', 'no-cache')
@@ -453,6 +473,9 @@ class SpawnProgressAPIHandler(APIHandler):
# user has no such server
raise web.HTTPError(404)
spawner = user.spawners[server_name]
# start sending keepalive to avoid proxies closing the connection
asyncio.ensure_future(self.keepalive())
# cases:
# - spawner already started and ready
# - spawner not running at all

View File

@@ -602,7 +602,7 @@ class BaseHandler(RequestHandler):
self.log.debug("Initiating spawn for %s", user_server_name)
spawn_future = user.spawn(server_name, options)
spawn_future = user.spawn(server_name, options, handler=self)
self.log.debug("%i%s concurrent spawns",
spawn_pending_count,

View File

@@ -111,7 +111,11 @@ class SpawnHandler(BaseHandler):
if user.spawner._spawn_future and user.spawner._spawn_future.done():
user.spawner._spawn_future = None
# not running, no form. Trigger spawn by redirecting to /user/:name
self.redirect(user.url)
url = user.url
if self.request.query:
# add query params
url += '?' + self.request.query
self.redirect(url)
@web.authenticated
async def post(self, for_user=None):
@@ -243,9 +247,11 @@ class TokenPageHandler(BaseHandler):
api_tokens.append(token)
# group oauth client tokens by client id
# AccessTokens have expires_at as an integer timestamp
now_timestamp = now.timestamp()
oauth_tokens = defaultdict(list)
for token in user.oauth_tokens:
if token.expires_at and token.expires_at < now:
if token.expires_at and token.expires_at < now_timestamp:
self.log.warning("Deleting expired token")
self.db.delete(token)
self.db.commit()

View File

@@ -746,7 +746,7 @@ def new_session_factory(url="sqlite:///:memory:",
Base.metadata.create_all(engine)
# We set expire_on_commit=False, since we don't actually need
# SQLAlchemy to expire objects after commiting - we don't expect
# SQLAlchemy to expire objects after committing - we don't expect
# concurrent runs of the hub talking to the same db. Turning
# this off gives us a major performance boost
session_factory = sessionmaker(bind=engine,

View File

@@ -487,9 +487,14 @@ class ConfigurableHTTPProxy(Proxy):
# if we got here, CHP is still running
self.log.warning("Proxy still running at pid=%s", pid)
for i, sig in enumerate([signal.SIGTERM] * 2 + [signal.SIGKILL]):
if os.name != 'nt':
sig_list = [signal.SIGTERM] * 2 + [signal.SIGKILL]
for i in range(3):
try:
os.kill(pid, signal.SIGTERM)
if os.name == 'nt':
self._terminate_win(pid)
else:
os.kill(pid,sig_list[i])
except ProcessLookupError:
break
time.sleep(1)
@@ -600,18 +605,21 @@ class ConfigurableHTTPProxy(Proxy):
self._check_running_callback = pc
pc.start()
def _terminate_win(self, pid):
# On Windows we spawned a shell on Popen, so we need to
# terminate all child processes as well
import psutil
parent = psutil.Process(pid)
children = parent.children(recursive=True)
for child in children:
child.kill()
psutil.wait_procs(children, timeout=5)
def _terminate(self):
"""Terminate our process"""
if os.name == 'nt':
# On Windows we spawned a shell on Popen, so we need to
# terminate all child processes as well
import psutil
parent = psutil.Process(self.proxy_process.pid)
children = parent.children(recursive=True)
for child in children:
child.kill()
psutil.wait_procs(children, timeout=5)
self._terminate_win(self.proxy_process.pid)
else:
self.proxy_process.terminate()

View File

@@ -298,6 +298,7 @@ class SingleUserNotebookApp(NotebookApp):
# disable some single-user configurables
token = ''
open_browser = False
quit_button = False
trust_xheaders = True
login_handler_class = JupyterHubLoginHandler
logout_handler_class = JupyterHubLogoutHandler

View File

@@ -161,6 +161,7 @@ class Spawner(LoggingConfigurable):
admin_access = Bool(False)
api_token = Unicode()
oauth_client_id = Unicode()
handler = Any()
will_resume = Bool(False,
help="""Whether the Spawner will resume on next start

View File

@@ -86,6 +86,8 @@ class MockSpawner(LocalProcessSpawner):
pass
def user_env(self, env):
if self.handler:
env['HANDLER_ARGS'] = self.handler.request.query
return env
@default('cmd')

View File

@@ -604,6 +604,32 @@ def test_spawn(app):
assert app.users.count_active_users()['pending'] == 0
@mark.gen_test
def test_spawn_handler(app):
"""Test that the requesting Handler is passed to Spawner.handler"""
db = app.db
name = 'salmon'
user = add_user(db, app=app, name=name)
app_user = app.users[name]
# spawn via API with ?foo=bar
r = yield api_request(app, 'users', name, 'server', method='post', params={'foo': 'bar'})
r.raise_for_status()
# verify that request params got passed down
# implemented in MockSpawner
url = public_url(app, user)
r = yield async_requests.get(ujoin(url, 'env'))
env = r.json()
assert 'HANDLER_ARGS' in env
assert env['HANDLER_ARGS'] == 'foo=bar'
# make sure spawner.handler doesn't persist after spawn finishes
assert app_user.spawner.handler is None
r = yield api_request(app, 'users', name, 'server', method='delete')
r.raise_for_status()
@mark.slow
@mark.gen_test
def test_slow_spawn(app, no_patience, slow_spawn):
@@ -1188,14 +1214,19 @@ def test_token_as_user_deprecated(app, as_user, for_user, status):
@mark.gen_test
@mark.parametrize("headers, status, note", [
({}, 200, 'test note'),
({}, 200, ''),
({'Authorization': 'token bad'}, 403, ''),
@mark.parametrize("headers, status, note, expires_in", [
({}, 200, 'test note', None),
({}, 200, '', 100),
({'Authorization': 'token bad'}, 403, '', None),
])
def test_get_new_token(app, headers, status, note):
def test_get_new_token(app, headers, status, note, expires_in):
options = {}
if note:
body = json.dumps({'note': note})
options['note'] = note
if expires_in:
options['expires_in'] = expires_in
if options:
body = json.dumps(options)
else:
body = ''
# request a new token
@@ -1213,6 +1244,10 @@ def test_get_new_token(app, headers, status, note):
assert reply['user'] == 'admin'
assert reply['created']
assert 'last_activity' in reply
if expires_in:
assert isinstance(reply['expires_at'], str)
else:
assert reply['expires_at'] is None
if note:
assert reply['note'] == note
else:

View File

@@ -1,7 +1,9 @@
"""Tests for HTML pages"""
import sys
from urllib.parse import urlencode, urlparse
from bs4 import BeautifulSoup
from tornado import gen
from tornado.httputil import url_concat
@@ -168,6 +170,31 @@ def test_spawn_redirect(app):
assert path == ujoin(app.base_url, '/user/%s/' % name)
@pytest.mark.gen_test
def test_spawn_handler_access(app):
name = 'winston'
cookies = yield app.login_user(name)
u = app.users[orm.User.find(app.db, name)]
status = yield u.spawner.poll()
assert status is not None
# spawn server via browser link with ?arg=value
r = yield get_page('spawn', app, cookies=cookies, params={'arg': 'value'})
r.raise_for_status()
# verify that request params got passed down
# implemented in MockSpawner
r = yield async_requests.get(ujoin(public_url(app, u), 'env'))
env = r.json()
assert 'HANDLER_ARGS' in env
assert env['HANDLER_ARGS'] == 'arg=value'
# stop server
r = yield api_request(app, 'users', name, 'server', method='delete')
r.raise_for_status()
@pytest.mark.gen_test
def test_spawn_admin_access(app, admin_access):
"""GET /user/:name as admin with admin-access spawns user's server"""
@@ -573,6 +600,51 @@ def test_announcements(app, announcements):
assert_announcement("logout", r.text)
@pytest.mark.gen_test
def test_token_page(app):
name = "cake"
cookies = yield app.login_user(name)
r = yield get_page("token", app, cookies=cookies)
r.raise_for_status()
assert urlparse(r.url).path.endswith('/hub/token')
def extract_body(r):
soup = BeautifulSoup(r.text, "html5lib")
import re
# trim empty lines
return re.sub(r"(\n\s*)+", "\n", soup.body.find(class_="container").text)
body = extract_body(r)
assert "Request new API token" in body, body
# no tokens yet, no lists
assert "API Tokens" not in body, body
assert "Authorized Applications" not in body, body
# request an API token
user = app.users[name]
token = user.new_api_token(expires_in=60, note="my-test-token")
app.db.commit()
r = yield get_page("token", app, cookies=cookies)
r.raise_for_status()
body = extract_body(r)
assert "API Tokens" in body, body
assert "my-test-token" in body, body
# no oauth tokens yet, shouldn't have that section
assert "Authorized Applications" not in body, body
# spawn the user to trigger oauth, etc.
# request an oauth token
user.spawner.cmd = [sys.executable, '-m', 'jupyterhub.singleuser']
r = yield get_page("spawn", app, cookies=cookies)
r.raise_for_status()
r = yield get_page("token", app, cookies=cookies)
r.raise_for_status()
body = extract_body(r)
assert "API Tokens" in body, body
assert "Server at %s" % user.base_url in body, body
assert "Authorized Applications" in body, body
@pytest.mark.gen_test
def test_server_not_running_api_request(app):
cookies = yield app.login_user("bees")

View File

@@ -331,7 +331,7 @@ class User:
url_parts.extend(['server/progress'])
return url_path_join(*url_parts)
async def spawn(self, server_name='', options=None):
async def spawn(self, server_name='', options=None, handler=None):
"""Start the user's spawner
depending on the value of JupyterHub.allow_named_servers
@@ -361,6 +361,9 @@ class User:
spawner.server = server = Server(orm_server=orm_server)
assert spawner.orm_spawner.server is orm_server
# pass requesting handler to the spawner
# e.g. for processing GET params
spawner.handler = handler
# Passing user_options to the spawner
spawner.user_options = options or {}
# we are starting a new server, make sure it doesn't restore state
@@ -484,6 +487,9 @@ class User:
# raise original exception
spawner._start_pending = False
raise e
finally:
# clear reference to handler after start finishes
spawner.handler = None
spawner.start_polling()
# store state

View File

@@ -1,7 +1,7 @@
#!/bin/bash
set -ex
stable=0.8
stable=0.9
for V in master $stable; do
docker build --build-arg JUPYTERHUB_VERSION=$V -t $DOCKER_REPO:$V .

View File

@@ -1,6 +1,7 @@
#!/bin/bash
set -ex
stable=0.8
stable=0.9
for V in master $stable; do
docker push $DOCKER_REPO:$V
done
@@ -12,6 +13,10 @@ function get_hub_version() {
hub_xyz=$(cat hub_version)
split=( ${hub_xyz//./ } )
hub_xy="${split[0]}.${split[1]}"
# add .dev on hub_xy so it's 1.0.dev
if [[ ! -z "${split[3]}" ]]; then
hub_xy="${hub_xy}.${split[3]}"
fi
}
# tag e.g. 0.8.1 with 0.8
get_hub_version $stable
@@ -22,3 +27,5 @@ docker push $DOCKER_REPO:$hub_xyz
get_hub_version master
docker tag $DOCKER_REPO:master $DOCKER_REPO:$hub_xy
docker push $DOCKER_REPO:$hub_xy
docker tag $DOCKER_REPO:master $DOCKER_REPO:$hub_xyz
docker push $DOCKER_REPO:$hub_xyz