From 28a0f4c8b484d93ffae1fad47573fe3279857046 Mon Sep 17 00:00:00 2001
From: Ayaz Salikhov
Date: Wed, 2 Feb 2022 01:19:38 +0300
Subject: [PATCH 01/12] Cleanup docs

---
 base-notebook/Dockerfile           |  1 +
 docs/contributing/features.md      |  5 +-
 docs/contributing/lint.md          |  8 +--
 docs/contributing/packages.md      | 35 +++-------
 docs/contributing/stacks.md        |  5 +-
 docs/contributing/tests.md         | 10 +++-
 docs/maintaining/tasks.md          | 11 ++--
 docs/using/common.md               | 10 ++--
 docs/using/recipes.md              |  4 +-
 docs/using/running.md              | 94 +++++++++++++++---------
 docs/using/specifics.md            |  4 +-
 examples/docker-compose/README.md  |  8 +--
 examples/openshift/README.md       | 18 +++---
 examples/source-to-image/README.md | 56 +++++++++---------
 14 files changed, 129 insertions(+), 140 deletions(-)

diff --git a/base-notebook/Dockerfile b/base-notebook/Dockerfile
index ab876927..e27a58e7 100644
--- a/base-notebook/Dockerfile
+++ b/base-notebook/Dockerfile
@@ -112,6 +112,7 @@ RUN set -x && \
     conda config --system --set auto_update_conda false && \
     conda config --system --set show_channel_urls true && \
     if [[ "${PYTHON_VERSION}" != "default" ]]; then mamba install --quiet --yes python="${PYTHON_VERSION}"; fi && \
+    # Pin major.minor version of python
     mamba list python | grep '^python ' | tr -s ' ' | cut -d ' ' -f 1,2 >> "${CONDA_DIR}/conda-meta/pinned" && \
     # Using conda to update all packages: https://github.com/mamba-org/mamba/issues/1092
     conda update --all --quiet --yes && \

diff --git a/docs/contributing/features.md b/docs/contributing/features.md
index a51f4328..5b048905 100644
--- a/docs/contributing/features.md
+++ b/docs/contributing/features.md
@@ -37,8 +37,9 @@ Roughly speaking, we evaluate new features based on the following criteria:
 If there's agreement that the feature belongs in one or more of the core stacks:
 
 1. Implement the feature in a local clone of the `jupyter/docker-stacks` project.
-2. Please build the image locally before submitting a pull request
-   Building the image locally shortens the debugging cycle by taking some load off GitHub Actions, which graciously provide free build services for open source projects like this one.
+2. Please build the image locally before submitting a pull request.
+   It shortens the debugging cycle by taking some load off GitHub Actions,
+   which graciously provide free build services for open source projects like this one.
    If you use `make`, call:
 
    ```bash
    make build/somestack-notebook
    ```

diff --git a/docs/contributing/lint.md b/docs/contributing/lint.md
index 97672bc1..6be1f9a9 100644
--- a/docs/contributing/lint.md
+++ b/docs/contributing/lint.md
@@ -13,9 +13,9 @@ This can be achieved by using the generic task used to install all Python develo
 
 ```sh
 # Install all development dependencies for the project
-$ make dev-env
+make dev-env
 # It can also be installed directly
-$ pip install pre-commit
+pip install pre-commit
 ```
 
 Then the git hooks scripts configured for the project in `.pre-commit-config.yaml` need to be installed in the local git repository.
@@ -29,7 +29,7 @@ make pre-commit-install
 Now pre-commit (and so configured hooks) will run automatically on `git commit` on each changed file.
 However it is also possible to trigger it against all files.
 
-- Note: Hadolint pre-commit uses docker to run, so docker should be running while running this command.
+_Note: Hadolint pre-commit uses docker to run, so docker should be running while running this command._
 
 ```sh
 make pre-commit-all
@@ -37,7 +37,7 @@ make pre-commit-all
 
 ## Image Lint
 
-To comply with [Docker best practices][dbp], we are using the [Hadolint][hadolint] tool to analyse each `Dockerfile` .
+To comply with [Docker best practices][dbp], we are using the [Hadolint][hadolint] tool to analyse each `Dockerfile`.
 
 ### Ignoring Rules
 
diff --git a/docs/contributing/packages.md b/docs/contributing/packages.md
index d9fc9c7e..0932c94e 100644
--- a/docs/contributing/packages.md
+++ b/docs/contributing/packages.md
@@ -1,41 +1,20 @@
 # Package Updates
 
-We actively seek pull requests which update packages already included in the project Dockerfiles.
-This is a great way for first-time contributors to participate in developing the Jupyter Docker
-Stacks.
+As a general rule, we do not pin package versions in our `Dockerfile`s.
+The dependencies resolution is a difficult thing to do.
+This means that packages might have old versions.
+Images are rebuilt weekly, so usually, packages receive updates quite frequently.
 
-Please follow the process below to update a package version:
+_Note: We pin major.minor version of python, so it will stay the same even after `mamba update` command._
 
-1. Locate the Dockerfile containing the library you wish to update (e.g.,
-   [base-notebook/Dockerfile](https://github.com/jupyter/docker-stacks/blob/master/base-notebook/Dockerfile),
-   [scipy-notebook/Dockerfile](https://github.com/jupyter/docker-stacks/blob/master/scipy-notebook/Dockerfile))
-2. Adjust the version number for the package.
-   We prefer to pin the major and minor version number of packages so as to minimize rebuild side-effects when users submit pull requests (PRs).
-   For example, you'll find the Jupyter Notebook package, `notebook`, installed using conda with
-   `notebook=5.4.*`.
-3. Please build the image locally before submitting a pull request.
-   Building the image locally shortens the debugging cycle by taking some load off GitHub Actions, which graciously provide free build services for open source projects like this one.
-   If you use `make`, call:
-
-   ```bash
-   make build/somestack-notebook
-   ```
-
-4. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request)
-   (PR) with your changes.
-5. Watch for GitHub to report a build success or failure for your PR on GitHub.
-6. Discuss changes with the maintainers and address any build issues.
-   Version conflicts are the most common problem.
-   You may need to upgrade additional packages to fix build failures.
-
-## Notes
+## Outdated packages
 
 In order to help identifying packages that can be updated you can use the following helper tool.
 It will list all the packages installed in the `Dockerfile` that can be updated -- dependencies are filtered to focus only on requested packages.
 
 ```bash
-$ make check-outdated/base-notebook
+make check-outdated/base-notebook
 
 # INFO test_outdated:test_outdated.py:80 3/8 (38%) packages could be updated
 # INFO test_outdated:test_outdated.py:82

diff --git a/docs/contributing/stacks.md b/docs/contributing/stacks.md
index be7666f5..9cb91d58 100644
--- a/docs/contributing/stacks.md
+++ b/docs/contributing/stacks.md
@@ -3,9 +3,10 @@
 We love to see the community create and share new Jupyter Docker images.
 We've put together a [cookiecutter project](https://github.com/jupyter/cookiecutter-docker-stacks)
 and the documentation below to help you get started defining, building, and sharing your Jupyter environments in Docker.
+
 Following these steps will:
 
-1. Setup a project on GitHub containing a Dockerfile based on either the `jupyter/base-notebook` or `jupyter/minimal-notebook` image.
+1. Setup a project on GitHub containing a Dockerfile based on of the images we provide.
 2. Configure GitHub Actions to build and test your image when users submit pull requests to your repository.
 3. Configure Docker Hub to build and host your images for others to use.
 4. Update the [list of community stacks](../using/selecting.html#community-stacks) in this documentation to include your image.
@@ -62,7 +63,7 @@ git init
 git add .
 git commit -m 'Seed repo'
 git remote add origin
-git push -u origin master
+git push -u origin main
 ```
 
 ## Configuring GitHub actions

diff --git a/docs/contributing/tests.md b/docs/contributing/tests.md
index 58bed909..0e7078f5 100644
--- a/docs/contributing/tests.md
+++ b/docs/contributing/tests.md
@@ -5,8 +5,7 @@ of the Docker images.
 
 ## How the Tests Work
 
-GitHub executes `make build-test-all` against pull requests submitted to the `jupyter/docker-stacks`
-repository.
+GitHub Action executes `make build-test-all` against pull requests submitted to the `jupyter/docker-stacks` repository.
 This `make` command builds every docker image.
 After building each image, the `make` command executes `pytest` to run both image-specific tests like those in
 [base-notebook/test/](https://github.com/jupyter/docker-stacks/tree/master/base-notebook/test) and
@@ -14,6 +13,13 @@ common tests defined in [test/](https://github.com/jupyter/docker-stacks/tree/ma
 Both kinds of tests make use of global [pytest fixtures](https://docs.pytest.org/en/latest/reference/fixtures.html)
 defined in the [conftest.py](https://github.com/jupyter/docker-stacks/blob/master/conftest.py) file at the root of the projects.
 
+## Unit tests
+
+If you want to simply run a python script in one of our images, you could add a unit test.
+Simply create `-notebook/test/units/` directory, if it doesn't already exist, and put your file there.
+These files will run automatically when tests are run.
+You could see an example for tensorflow package [here](https://github.com/jupyter/docker-stacks/blob/master/tensorflow-notebook/test/units/unit_tensorflow.py).
+
 
 ## Contributing New Tests
 
 Please follow the process below to add new tests:

diff --git a/docs/maintaining/tasks.md b/docs/maintaining/tasks.md
index 50a1a8c7..64952966 100644
--- a/docs/maintaining/tasks.md
+++ b/docs/maintaining/tasks.md
@@ -14,13 +14,16 @@ To build new images and publish them to the Docker Hub registry, do the followin
 
 ## Updating the Ubuntu Base Image
 
-When there's a security fix in the Ubuntu base image or after some time passes, it's a good idea to update the pinned SHA in the
-[jupyter/base-notebook Dockerfile](https://github.com/jupyter/docker-stacks/blob/master/base-notebook/Dockerfile).
-Submit it as a regular PR and go through the build process.
-Expect the build to take a while to complete: every image layer will rebuild.
+Latest LTS Ubuntu base image version is used as the base image for `minimal-notebook`.
+We rebuild our images automatically each week, which means they receive the updates quite frequently.
+
+When there's a security fix in the Ubuntu base image, it's a good idea to manually trigger a rebuild of the images [here](https://github.com/jupyter/docker-stacks/actions/workflows/docker.yml).
+Pushing the `Run Workflow` button will trigger this process.
 
 ## Adding a New Core Image to Docker Hub
 
+_Note: in general, we do not add new core images and ask contributors to either create a [recipe](../using/recipes.md) or [community stack](../using/stacks.md)._
+
 When there's a new stack definition, do the following before merging the PR with the new stack:
 
 1. Ensure the PR includes an update to the stack overview diagram

diff --git a/docs/using/common.md b/docs/using/common.md
index 747cc594..d95834d7 100644
--- a/docs/using/common.md
+++ b/docs/using/common.md
@@ -135,7 +135,7 @@ For example, to mount a host folder containing a `notebook.key` and `notebook.cr
 docker run -d -p 8888:8888 \
     -v /some/host/folder:/etc/ssl/notebook \
     jupyter/base-notebook start-notebook.sh \
-    --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key
+    --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key \
     --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt
 ```
 
@@ -187,13 +187,13 @@ Example:
 # Run Jupyter Notebook on Jupyter Server
 docker run -it --rm -p 8888:8888 \
     -e DOCKER_STACKS_JUPYTER_CMD=notebook \
-    jupyter/base-notebook 
+    jupyter/base-notebook
 # Executing the command: jupyter notebook
 ...
 
 # Run Jupyter Notebook classic
 docker run -it --rm -p 8888:8888 \
     -e DOCKER_STACKS_JUPYTER_CMD=nbclassic \
-    jupyter/base-notebook 
+    jupyter/base-notebook
 # Executing the command: jupyter nbclassic
 ...
@@ -207,10 +207,10 @@ For example, to run the text-based `ipython` console in a container, do the foll
 docker run -it --rm jupyter/base-notebook start.sh ipython
 ```
 
-Or, to run JupyterLab instead of the classic notebook, run the following:
+Or, to run Jupyter Notebook classic instead of JupyterLab, run the following:
 
 ```bash
-docker run -it --rm -p 8888:8888 jupyter/base-notebook start.sh jupyter lab
+docker run -it --rm -p 8888:8888 jupyter/base-notebook start.sh jupyter notebook
 ```
 
 This script is handy when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc.

diff --git a/docs/using/recipes.md b/docs/using/recipes.md
index 4e17fc24..345afe07 100644
--- a/docs/using/recipes.md
+++ b/docs/using/recipes.md
@@ -268,9 +268,7 @@ Enabling manpages in the base Ubuntu layer prevents this container bloat.
 To achieve this, use the previous `Dockerfile` with the original ubuntu image (`ubuntu:focal`) as your base container:
 
 ```dockerfile
-# Ubuntu 20.04 (focal) from 2020-04-23
-# https://github.com/docker-library/official-images/commit/4475094895093bcc29055409494cce1e11b52f94
-ARG BASE_CONTAINER=ubuntu:focal-20200423@sha256:238e696992ba9913d24cfc3727034985abd136e08ee3067982401acdc30cbf3f
+ARG BASE_CONTAINER=ubuntu:focal
 ```
 
 For Ubuntu 18.04 (bionic) and earlier, you may also require to a workaround for a mandb bug, which was fixed in mandb >= 2.8.6.1:

diff --git a/docs/using/running.md b/docs/using/running.md
index 8bd3a9cc..2dbea04f 100644
--- a/docs/using/running.md
+++ b/docs/using/running.md
@@ -18,42 +18,42 @@ It then starts a container running a Jupyter Notebook server and exposes the ser
 The server logs appear in the terminal and include a URL to the notebook server.
 
 ```bash
-$ docker run -p 8888:8888 jupyter/scipy-notebook:b418b67c225b
+docker run -p 8888:8888 jupyter/scipy-notebook:b418b67c225b
 
-Executing the command: jupyter notebook
-[I 15:33:00.567 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret
-[W 15:33:01.084 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
-[I 15:33:01.150 NotebookApp] JupyterLab alpha preview extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
-[I 15:33:01.150 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
-[I 15:33:01.155 NotebookApp] Serving notebooks from local directory: /home/jovyan
-[I 15:33:01.156 NotebookApp] 0 active kernels
-[I 15:33:01.156 NotebookApp] The Jupyter Notebook is running at:
-[I 15:33:01.157 NotebookApp] http://[all ip addresses on your system]:8888/?token=112bb073331f1460b73768c76dffb2f87ac1d4ca7870d46a
-[I 15:33:01.157 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
-[C 15:33:01.160 NotebookApp]
+# Executing the command: jupyter notebook
+# [I 15:33:00.567 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret
+# [W 15:33:01.084 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
+# [I 15:33:01.150 NotebookApp] JupyterLab alpha preview extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
+# [I 15:33:01.150 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
+# [I 15:33:01.155 NotebookApp] Serving notebooks from local directory: /home/jovyan
+# [I 15:33:01.156 NotebookApp] 0 active kernels
+# [I 15:33:01.156 NotebookApp] The Jupyter Notebook is running at:
+# [I 15:33:01.157 NotebookApp] http://[all ip addresses on your system]:8888/?token=112bb073331f1460b73768c76dffb2f87ac1d4ca7870d46a
+# [I 15:33:01.157 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
+# [C 15:33:01.160 NotebookApp]
 
-   Copy/paste this URL into your browser when you connect for the first time,
-   to login with a token:
-       http://localhost:8888/?token=112bb073331f1460b73768c76dffb2f87ac1d4ca7870d46a
+# Copy/paste this URL into your browser when you connect for the first time,
+# to login with a token:
+#     http://localhost:8888/?token=112bb073331f1460b73768c76dffb2f87ac1d4ca7870d46a
 ```
 
 Pressing `Ctrl-C` shuts down the notebook server but leaves the container intact on disk for later restart or permanent deletion using commands like the following:
 
 ```bash
 # list containers
-$ docker ps -a
-CONTAINER ID   IMAGE                   COMMAND                  CREATED          STATUS                      PORTS   NAMES
-d67fe77f1a84   jupyter/base-notebook   "tini -- start-noteb…"   44 seconds ago   Exited (0) 39 seconds ago           cocky_mirzakhani
+docker ps -a
+# CONTAINER ID   IMAGE                   COMMAND                  CREATED          STATUS                      PORTS   NAMES
+# d67fe77f1a84   jupyter/base-notebook   "tini -- start-noteb…"   44 seconds ago   Exited (0) 39 seconds ago           cocky_mirzakhani
 
 # start the stopped container
-$ docker start -a d67fe77f1a84
-Executing the command: jupyter notebook
-[W 16:45:02.020 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
-...
+docker start -a d67fe77f1a84
+# Executing the command: jupyter notebook
+# [W 16:45:02.020 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
+# ...
 
 # remove the stopped container
-$ docker rm d67fe77f1a84
-d67fe77f1a84
+docker rm d67fe77f1a84
+# d67fe77f1a84
 ```
 
 **Example 2** This command pulls the `jupyter/r-notebook` image tagged `b418b67c225b` from Docker Hub if it is not already present on the local host.
@@ -61,23 +61,23 @@ It then starts a container running a Jupyter Notebook server and exposes the ser
 The server logs appear in the terminal and include a URL to the notebook server, but with the internal container port (8888) instead of the the correct host port (10000).
 
 ```bash
-$ docker run --rm -p 10000:8888 -v "${PWD}":/home/jovyan/work jupyter/r-notebook:b418b67c225b
+docker run --rm -p 10000:8888 -v "${PWD}":/home/jovyan/work jupyter/r-notebook:b418b67c225b
 
-Executing the command: jupyter notebook
-[I 19:31:09.573 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret
-[W 19:31:11.930 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
-[I 19:31:12.085 NotebookApp] JupyterLab alpha preview extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
-[I 19:31:12.086 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
-[I 19:31:12.117 NotebookApp] Serving notebooks from local directory: /home/jovyan
-[I 19:31:12.117 NotebookApp] 0 active kernels
-[I 19:31:12.118 NotebookApp] The Jupyter Notebook is running at:
-[I 19:31:12.119 NotebookApp] http://[all ip addresses on your system]:8888/?token=3b8dce890cb65570fb0d9c4a41ae067f7604873bd604f5ac
-[I 19:31:12.120 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
-[C 19:31:12.122 NotebookApp]
+# Executing the command: jupyter notebook
+# [I 19:31:09.573 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret
+# [W 19:31:11.930 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
+# [I 19:31:12.085 NotebookApp] JupyterLab alpha preview extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
+# [I 19:31:12.086 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
+# [I 19:31:12.117 NotebookApp] Serving notebooks from local directory: /home/jovyan
+# [I 19:31:12.117 NotebookApp] 0 active kernels
+# [I 19:31:12.118 NotebookApp] The Jupyter Notebook is running at:
+# [I 19:31:12.119 NotebookApp] http://[all ip addresses on your system]:8888/?token=3b8dce890cb65570fb0d9c4a41ae067f7604873bd604f5ac
+# [I 19:31:12.120 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
+# [C 19:31:12.122 NotebookApp]
 
-   Copy/paste this URL into your browser when you connect for the first time,
-   to login with a token:
-       http://localhost:8888/?token=3b8dce890cb65570fb0d9c4a41ae067f7604873bd604f5ac
+# Copy/paste this URL into your browser when you connect for the first time,
+# to login with a token:
+#     http://localhost:8888/?token=3b8dce890cb65570fb0d9c4a41ae067f7604873bd604f5ac
 ```
 
 Pressing `Ctrl-C` shuts down the notebook server and immediately destroys the Docker container.
@@ -95,14 +95,14 @@ The assigned port and notebook server token are visible using other Docker comma
 
 ```bash
 # get the random host port assigned to the container port 8888
-$ docker port notebook 8888
-0.0.0.0:32769
+docker port notebook 8888
+# 0.0.0.0:32769
 
 # get the notebook token from the logs
-$ docker logs --tail 3 notebook
-    Copy/paste this URL into your browser when you connect for the first time,
-    to login with a token:
-        http://localhost:8888/?token=15914ca95f495075c0aa7d0e060f1a78b6d94f70ea373b00
+docker logs --tail 3 notebook
+# Copy/paste this URL into your browser when you connect for the first time,
+# to login with a token:
+# http://localhost:8888/?token=15914ca95f495075c0aa7d0e060f1a78b6d94f70ea373b00
 ```
 
 Together, the URL to visit on the host machine to access the server in this case is .
 
@@ -112,11 +112,11 @@ The container runs in the background until stopped and/or removed by additional 
 ```bash
 # stop the container
 docker stop notebook
-notebook
+# notebook
 
 # remove the container permanently
 docker rm notebook
-notebook
+# notebook
 ```
 
 ## Using Binder

diff --git a/docs/using/specifics.md b/docs/using/specifics.md
index ecaa8654..426ceff0 100644
--- a/docs/using/specifics.md
+++ b/docs/using/specifics.md
@@ -162,8 +162,8 @@ Connection to Spark Cluster on **[Standalone Mode](https://spark.apache.org/docs
 2. Run the Docker container with `--net=host` in a location that is network addressable by all of your
    Spark workers.
    (This is a [Spark networking requirement](https://spark.apache.org/docs/latest/cluster-overview.html#components).)
-
-   - NOTE: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`.
-     See for details.
+
+_Note: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See for details._
 
 **Note**: In the following examples we are using the Spark master URL `spark://master:7077` that shall be replaced by the URL of the Spark master.
diff --git a/examples/docker-compose/README.md b/examples/docker-compose/README.md
index a5fb87f6..483fac07 100644
--- a/examples/docker-compose/README.md
+++ b/examples/docker-compose/README.md
@@ -108,8 +108,8 @@ The following command will create a certificate chain and store it in a Docker v
 
 ```bash
 FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \
-  SECRETS_VOLUME=mydomain-secrets \
-  bin/letsencrypt.sh
+    SECRETS_VOLUME=mydomain-secrets \
+    bin/letsencrypt.sh
 ```
 
 Now run `up.sh` with the `--letsencrypt` option.
@@ -128,8 +128,8 @@ To hit their staging servers, set the environment variable `CERT_SERVER=--stagin
 
 ```bash
 FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \
-  CERT_SERVER=--staging \
-  bin/letsencrypt.sh
+    CERT_SERVER=--staging \
+    bin/letsencrypt.sh
 ```
 
 Also, be aware that Let's Encrypt certificates are short lived (90 days).

diff --git a/examples/openshift/README.md b/examples/openshift/README.md
index c1645814..7e9518bb 100644
--- a/examples/openshift/README.md
+++ b/examples/openshift/README.md
@@ -101,9 +101,9 @@ To override the name for the notebook, the image used, and the password, you can
 
 ```bash
 oc new-app --template jupyter-notebook \
-  --param APPLICATION_NAME=mynotebook \
-  --param NOTEBOOK_IMAGE=jupyter/scipy-notebook:latest \
-  --param NOTEBOOK_PASSWORD=mypassword
+    --param APPLICATION_NAME=mynotebook \
+    --param NOTEBOOK_IMAGE=jupyter/scipy-notebook:latest \
+    --param NOTEBOOK_PASSWORD=mypassword
 ```
 
 You can deploy any of the Jupyter Project docker-stacks images.
@@ -136,9 +136,9 @@ To add persistent storage run:
 
 ```bash
 oc set volume dc/mynotebook --add \
-  --type=pvc --claim-size=1Gi --claim-mode=ReadWriteOnce \
-  --claim-name mynotebook-data --name data \
-  --mount-path /home/jovyan
+    --type=pvc --claim-size=1Gi --claim-mode=ReadWriteOnce \
+    --claim-name mynotebook-data --name data \
+    --mount-path /home/jovyan
 ```
 
 When you have deleted the notebook instance, if using a persistent volume, you will need to delete it in a separate step.
@@ -229,9 +229,9 @@ Then deploy it using the name of the image stream created.
 
 ```bash
 oc new-app --template jupyter-notebook \
-  --param APPLICATION_NAME=mynotebook \
-  --param NOTEBOOK_IMAGE=datascience-notebook \
-  --param NOTEBOOK_PASSWORD=mypassword
+    --param APPLICATION_NAME=mynotebook \
+    --param NOTEBOOK_IMAGE=datascience-notebook \
+    --param NOTEBOOK_PASSWORD=mypassword
 ```
 
 Importing an image into OpenShift before deploying it means that when a notebook is started, the image need only be pulled from the internal OpenShift image registry rather than Docker Hub for each deployment.
diff --git a/examples/source-to-image/README.md b/examples/source-to-image/README.md
index 80bf932e..9202dd0c 100644
--- a/examples/source-to-image/README.md
+++ b/examples/source-to-image/README.md
@@ -31,11 +31,11 @@ As an example of how S2I can be used to create a custom image with a bundled set
 
 ```bash
 s2i build \
-  --scripts-url https://raw.githubusercontent.com/jupyter/docker-stacks/master/examples/source-to-image \
-  --context-dir docs/source/examples/Notebook \
-  https://github.com/jupyter/notebook \
-  jupyter/minimal-notebook:latest \
-  notebook-examples
+    --scripts-url https://raw.githubusercontent.com/jupyter/docker-stacks/master/examples/source-to-image \
+    --context-dir docs/source/examples/Notebook \
+    https://github.com/jupyter/notebook \
+    jupyter/minimal-notebook:latest \
+    notebook-examples
 ```
 
 This example command will pull down the Git repository
@@ -45,30 +45,30 @@ The base image which the files will be combined with is `jupyter/minimal-noteboo
 The resulting image from running the command can be seen by running `docker images` command:
 
 ```bash
-$ docker images
-REPOSITORY          TAG      IMAGE ID       CREATED         SIZE
-notebook-examples   latest   f5899ed1241d   2 minutes ago   2.59GB
+docker images
+# REPOSITORY          TAG      IMAGE ID       CREATED         SIZE
+# notebook-examples   latest   f5899ed1241d   2 minutes ago   2.59GB
 ```
 
 You can now run the image.
 
 ```bash
-$ docker run --rm -p 8888:8888 notebook-examples
-Executing the command: jupyter notebook
-[I 01:14:50.532 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret
-[W 01:14:50.724 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
-[I 01:14:50.747 NotebookApp] JupyterLab beta preview extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
-[I 01:14:50.747 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
-[I 01:14:50.754 NotebookApp] Serving notebooks from local directory: /home/jovyan
-[I 01:14:50.754 NotebookApp] 0 active kernels
-[I 01:14:50.754 NotebookApp] The Jupyter Notebook is running at:
-[I 01:14:50.754 NotebookApp] http://[all ip addresses on your system]:8888/?token=04646d5c5e928da75842cd318d4a3c5aa1f942fc5964323a
-[I 01:14:50.754 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
-[C 01:14:50.755 NotebookApp]
+docker run --rm -p 8888:8888 notebook-examples
+# Executing the command: jupyter notebook
+# [I 01:14:50.532 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret
+# [W 01:14:50.724 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
+# [I 01:14:50.747 NotebookApp] JupyterLab beta preview extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
+# [I 01:14:50.747 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
+# [I 01:14:50.754 NotebookApp] Serving notebooks from local directory: /home/jovyan
+# [I 01:14:50.754 NotebookApp] 0 active kernels
+# [I 01:14:50.754 NotebookApp] The Jupyter Notebook is running at:
+# [I 01:14:50.754 NotebookApp] http://[all ip addresses on your system]:8888/?token=04646d5c5e928da75842cd318d4a3c5aa1f942fc5964323a
+# [I 01:14:50.754 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
+# [C 01:14:50.755 NotebookApp]
 
-    Copy/paste this URL into your browser when you connect for the first time,
-    to login with a token:
-        http://localhost:8888/?token=04646d5c5e928da75842cd318d4a3c5aa1f942fc5964323a
+# Copy/paste this URL into your browser when you connect for the first time,
+# to login with a token:
+#     http://localhost:8888/?token=04646d5c5e928da75842cd318d4a3c5aa1f942fc5964323a
 ```
 
 Open your browser on the URL displayed, and you will find the notebooks from the Git repository and can work with them.
@@ -159,11 +159,11 @@ To use the OpenShift command line to build into an image, and deploy, the set of
 
 ```bash
 oc new-app --template jupyter-notebook-quickstart \
-  --param APPLICATION_NAME=notebook-examples \
-  --param GIT_REPOSITORY_URL=https://github.com/jupyter/notebook \
-  --param CONTEXT_DIR=docs/source/examples/Notebook \
-  --param BUILDER_IMAGE=jupyter/minimal-notebook:latest \
-  --param NOTEBOOK_PASSWORD=mypassword
+    --param APPLICATION_NAME=notebook-examples \
+    --param GIT_REPOSITORY_URL=https://github.com/jupyter/notebook \
+    --param CONTEXT_DIR=docs/source/examples/Notebook \
+    --param BUILDER_IMAGE=jupyter/minimal-notebook:latest \
+    --param NOTEBOOK_PASSWORD=mypassword
 ```
 
 You can provide a password using the `NOTEBOOK_PASSWORD` parameter.

From 1196b0d1acd0f6f989b20fdbbf2ed5cbed26322b Mon Sep 17 00:00:00 2001
From: Ayaz Salikhov
Date: Wed, 2 Feb 2022 01:21:33 +0300
Subject: [PATCH 02/12] Update docs/contributing/stacks.md

---
 docs/contributing/stacks.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/contributing/stacks.md b/docs/contributing/stacks.md
index 9cb91d58..3d70a4b1 100644
--- a/docs/contributing/stacks.md
+++ b/docs/contributing/stacks.md
@@ -6,7 +6,7 @@ and the documentation below to help you get started defining, building, and shar
 
 Following these steps will:
 
-1. Setup a project on GitHub containing a Dockerfile based on of the images we provide.
+1. Setup a project on GitHub containing a Dockerfile based on any of the images we provide.
 2. Configure GitHub Actions to build and test your image when users submit pull requests to your repository.
 3. Configure Docker Hub to build and host your images for others to use.
 4. Update the [list of community stacks](../using/selecting.html#community-stacks) in this documentation to include your image.

From a4c750115321ac93376f3e40f8fd312be7161f32 Mon Sep 17 00:00:00 2001
From: Ayaz Salikhov
Date: Wed, 2 Feb 2022 01:22:12 +0300
Subject: [PATCH 03/12] Update docs/maintaining/tasks.md

---
 docs/maintaining/tasks.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/maintaining/tasks.md b/docs/maintaining/tasks.md
index 64952966..a41fff0a 100644
--- a/docs/maintaining/tasks.md
+++ b/docs/maintaining/tasks.md
@@ -14,7 +14,7 @@ To build new images and publish them to the Docker Hub registry, do the followin
 
 ## Updating the Ubuntu Base Image
 
-Latest LTS Ubuntu base image version is used as the base image for `minimal-notebook`.
+Latest LTS Ubuntu docker image is used as the base image for `minimal-notebook`.
 We rebuild our images automatically each week, which means they receive the updates quite frequently.
 
 When there's a security fix in the Ubuntu base image, it's a good idea to manually trigger a rebuild of the images [here](https://github.com/jupyter/docker-stacks/actions/workflows/docker.yml).
 Pushing the `Run Workflow` button will trigger this process.

From c5eee163109ca1ccfb55f7a7a00dc9a7067dd4be Mon Sep 17 00:00:00 2001
From: Ayaz Salikhov
Date: Wed, 2 Feb 2022 01:24:34 +0300
Subject: [PATCH 04/12] Fix

---
 docs/using/specifics.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/docs/using/specifics.md b/docs/using/specifics.md
index 426ceff0..dc8d956f 100644
--- a/docs/using/specifics.md
+++ b/docs/using/specifics.md
@@ -162,8 +162,7 @@ Connection to Spark Cluster on **[Standalone Mode](https://spark.apache.org/docs
 2. Run the Docker container with `--net=host` in a location that is network addressable by all of your
    Spark workers.
    (This is a [Spark networking requirement](https://spark.apache.org/docs/latest/cluster-overview.html#components).)
-
-_Note: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See for details._
+   _Note: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See for details._
 
 **Note**: In the following examples we are using the Spark master URL `spark://master:7077` that shall be replaced by the URL of the Spark master.

From 6ff12c18a99dd7108458abeca3138661f984ff54 Mon Sep 17 00:00:00 2001
From: Ayaz Salikhov
Date: Wed, 2 Feb 2022 12:40:01 +0300
Subject: [PATCH 05/12] Update docs/contributing/lint.md

Co-authored-by: Erik Sundell

---
 docs/contributing/lint.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/contributing/lint.md b/docs/contributing/lint.md
index 6be1f9a9..1c22417e 100644
--- a/docs/contributing/lint.md
+++ b/docs/contributing/lint.md
@@ -29,7 +29,9 @@ make pre-commit-install
 Now pre-commit (and so configured hooks) will run automatically on `git commit` on each changed file.
 However it is also possible to trigger it against all files.
 
-_Note: Hadolint pre-commit uses docker to run, so docker should be running while running this command._
+```{note}
+Hadolint pre-commit uses docker to run, so docker should be running while running this command.
+``` ```sh make pre-commit-all From 12ccbf51499a279891fb8f3bf8900c8d731fbe6c Mon Sep 17 00:00:00 2001 From: Ayaz Salikhov Date: Wed, 2 Feb 2022 12:41:16 +0300 Subject: [PATCH 06/12] Apply suggestions from code review Co-authored-by: Erik Sundell --- docs/contributing/packages.md | 4 +++- docs/contributing/tests.md | 2 +- docs/maintaining/tasks.md | 4 +++- docs/using/common.md | 2 +- docs/using/specifics.md | 4 +++- 5 files changed, 11 insertions(+), 5 deletions(-) diff --git a/docs/contributing/packages.md b/docs/contributing/packages.md index 0932c94e..eeec2ec9 100644 --- a/docs/contributing/packages.md +++ b/docs/contributing/packages.md @@ -5,7 +5,9 @@ The dependencies resolution is a difficult thing to do This means that packages might have old versions. Images are rebuilt weekly, so usually, packages receive updates quite frequently. -_Note: We pin major.minor version of python, so it will stay the same even after `mamba update` command._ +```{note} +We pin major.minor version of python, so it will stay the same even after `mamba update` command. +``` ## Outdated packages diff --git a/docs/contributing/tests.md b/docs/contributing/tests.md index 0e7078f5..24d1b628 100644 --- a/docs/contributing/tests.md +++ b/docs/contributing/tests.md @@ -18,7 +18,7 @@ defined in the [conftest.py](https://github.com/jupyter/docker-stacks/blob/maste If you want to simply run a python script in one of our images, you could add a unit test. Simply create `-notebook/test/units/` directory, if it doesn't already exist and put your file there. These file will run automatically when tests are run. -You could see an example for tensorflow package [here](https://github.com/jupyter/docker-stacks/blob/master/tensorflow-notebook/test/units/unit_tensorflow.py). +You could see an example for tensorflow package [here](https://github.com/jupyter/docker-stacks/blob/HEAD/tensorflow-notebook/test/units/unit_tensorflow.py). 
## Contributing New Tests diff --git a/docs/maintaining/tasks.md b/docs/maintaining/tasks.md index a41fff0a..2312f234 100644 --- a/docs/maintaining/tasks.md +++ b/docs/maintaining/tasks.md @@ -22,7 +22,9 @@ Pushing `Run Workflow` button will trigger this process. ## Adding a New Core Image to Docker Hub -_Note: in general, we do not add new core images and ask contributors to either create a [recipe](../using/recipes.md) or [community stack](../using/stacks.md)_ +```{note} +In general, we do not add new core images and ask contributors to either create a [recipe](../using/recipes.md) or [community stack](../using/stacks.md). +``` When there's a new stack definition, do the following before merging the PR with the new stack: diff --git a/docs/using/common.md b/docs/using/common.md index d95834d7..b105dd10 100644 --- a/docs/using/common.md +++ b/docs/using/common.md @@ -207,7 +207,7 @@ For example, to run the text-based `ipython` console in a container, do the foll docker run -it --rm jupyter/base-notebook start.sh ipython ``` -Or, to run Jupyter Notebook classic instead of the JupyterLab, run the following: +Or, to run Jupyter Notebook classic instead of JupyterLab, run the following: ```bash docker run -it --rm -p 8888:8888 jupyter/base-notebook start.sh jupyter notebook diff --git a/docs/using/specifics.md b/docs/using/specifics.md index dc8d956f..90770038 100644 --- a/docs/using/specifics.md +++ b/docs/using/specifics.md @@ -162,7 +162,9 @@ Connection to Spark Cluster on **[Standalone Mode](https://spark.apache.org/docs 2. Run the Docker container with `--net=host` in a location that is network addressable by all of your Spark workers. (This is a [Spark networking requirement](https://spark.apache.org/docs/latest/cluster-overview.html#components).) - _Note: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. 
See for details._
+   ```{note}
+   When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See for details.
+   ```

 **Note**: In the following examples we are using the Spark master URL `spark://master:7077` that shall be replaced by the URL of the Spark master.

From 889b5a46b1329bf27b3463f75e43f5b353ab31ba Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Wed, 2 Feb 2022 09:41:35 +0000
Subject: [PATCH 07/12] [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci
---
 docs/using/specifics.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/using/specifics.md b/docs/using/specifics.md
index 90770038..17046898 100644
--- a/docs/using/specifics.md
+++ b/docs/using/specifics.md
@@ -162,6 +162,7 @@ Connection to Spark Cluster on **[Standalone Mode](https://spark.apache.org/docs

 2. Run the Docker container with `--net=host` in a location that is network addressable by all of your Spark workers.
    (This is a [Spark networking requirement](https://spark.apache.org/docs/latest/cluster-overview.html#components).)
+
   ```{note}
   When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See for details.
   ```

From dc6f492b02481ea8fa4018a0b8774e10d5b8c7e4 Mon Sep 17 00:00:00 2001
From: Ayaz Salikhov
Date: Wed, 2 Feb 2022 12:45:09 +0300
Subject: [PATCH 08/12] Update docs/maintaining/tasks.md

---
 docs/maintaining/tasks.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/maintaining/tasks.md b/docs/maintaining/tasks.md
index 2312f234..fcb40b89 100644
--- a/docs/maintaining/tasks.md
+++ b/docs/maintaining/tasks.md
@@ -14,7 +14,8 @@ To build new images and publish them to the Docker Hub registry, do the followin

 ## Updating the Ubuntu Base Image

-Latest LTS Ubuntu docker image is used as the base image for `minimal-notebook`.
+`minimal-notebook` is based on the latest LTS Ubuntu docker image.
+Other images are directly or indirectly inherited from `minimal-notebook`.
 We rebuild our images automatically each week, which means they receive the updates quite frequently.
 When there's a security fix in the Ubuntu base image, it's a good idea to manually trigger images rebuild [here](https://github.com/jupyter/docker-stacks/actions/workflows/docker.yml).

From ff63c762e5484178ef37597ee7dd93a972aa081b Mon Sep 17 00:00:00 2001
From: Ayaz Salikhov
Date: Wed, 2 Feb 2022 12:54:27 +0300
Subject: [PATCH 09/12] More notes

---
 docs/contributing/stacks.md | 6 +++++-
 docs/using/common.md | 5 ++++-
 docs/using/specifics.md | 12 +++++++++---
 examples/docker-compose/README.md | 5 ++++-
 4 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/docs/contributing/stacks.md b/docs/contributing/stacks.md
index 3d70a4b1..e31eda79 100644
--- a/docs/contributing/stacks.md
+++ b/docs/contributing/stacks.md
@@ -119,7 +119,11 @@ you merge a GitHub pull request to the master branch of your project.

 11. Enter a meaningful name for your token and click on **Create**
     ![DockerHub - New Access Token page with the name field set to "my-jupyter-docker-token"](../_static/docker-org-create-token.png)
 12. Copy the personal access token displayed on the next screen.
-    **Note that you will not be able to see it again after you close the pop-up window**.
+
+    ```{note}
+    You will not be able to see it again after you close the pop-up window.
+    ```
+
 13. Head back to your GitHub repository and click on the **Settings tab**.
     ![GitHub page with the "Setting" tab active and a rectangle highlighting the "New repository secret" button in the UI](../_static/github-create-secrets.png)
 14. Click on the **Secrets** section and then on the **New repository secret** button on the top right corner (see image above).
diff --git a/docs/using/common.md b/docs/using/common.md index b105dd10..6d395cf8 100644 --- a/docs/using/common.md +++ b/docs/using/common.md @@ -78,10 +78,13 @@ You do so by passing arguments to the `docker run` command. For example, if setting `umask` to `002`, new files will be readable and writable by group members instead of the owner only. [Check this Wikipedia article](https://en.wikipedia.org/wiki/Umask) for an in-depth description of `umask` and suitable values for multiple needs. While the default `umask` value should be sufficient for most use cases, you can set the `NB_UMASK` value to fit your requirements. - _Note that `NB_UMASK` when set only applies to the Jupyter process itself - + + ```{note} + `NB_UMASK` when set only applies to the Jupyter process itself - you cannot use it to set a `umask` for additional files created during run-hooks. For example, via `pip` or `conda`. If you need to set a `umask` for these, you must set the `umask` value for each command._ + ``` - `-e CHOWN_HOME=yes` - Instructs the startup script to change the `${NB_USER}` home directory owner and group to the current value of `${NB_UID}` and `${NB_GID}`. This change will take effect even if the user home directory is mounted from the host using `-v` as described below. diff --git a/docs/using/specifics.md b/docs/using/specifics.md index 17046898..7b425027 100644 --- a/docs/using/specifics.md +++ b/docs/using/specifics.md @@ -9,7 +9,11 @@ This page provides details about features specific to one or more images. - `-p 4040:4040` - The `jupyter/pyspark-notebook` and `jupyter/all-spark-notebook` images open [SparkUI (Spark Monitoring and Instrumentation UI)](https://spark.apache.org/docs/latest/monitoring.html) at default port `4040`, this option map `4040` port inside docker container to `4040` port on host machine. - Note every new spark context that is created is put onto an incrementing port (ie. 4040, 4041, 4042, etc.), and it might be necessary to open multiple ports. 
+ + ```{note} + Every new spark context that is created is put onto an incrementing port (ie. 4040, 4041, 4042, etc.), and it might be necessary to open multiple ports. + ``` + For example: `docker run -d -p 8888:8888 -p 4040:4040 -p 4041:4041 jupyter/pyspark-notebook`. #### IPython low-level output capture and forward @@ -245,6 +249,10 @@ rdd.sum() ### Define Spark Dependencies +```{note} +This example is given for [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/hadoop/current/install.html). +``` + Spark dependencies can be declared thanks to the `spark.jars.packages` property (see [Spark Configuration](https://spark.apache.org/docs/latest/configuration.html#runtime-environment) for more information). @@ -274,8 +282,6 @@ USER ${NB_UID} Jars will be downloaded dynamically at the creation of the Spark session and stored by default in `${HOME}/.ivy2/jars` (can be changed by setting `spark.jars.ivy`). -_Note: This example is given for [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/hadoop/current/install.html)._ - ## Tensorflow The `jupyter/tensorflow-notebook` image supports the use of diff --git a/examples/docker-compose/README.md b/examples/docker-compose/README.md index 483fac07..b9b72208 100644 --- a/examples/docker-compose/README.md +++ b/examples/docker-compose/README.md @@ -102,7 +102,10 @@ notebook/up.sh --secure --password a_secret Sure. If you want to secure access to publicly addressable notebook containers, you can generate a free certificate using the [Let's Encrypt](https://letsencrypt.org) service. This example includes the `bin/letsencrypt.sh` script, which runs the `letsencrypt` client to create a full-chain certificate and private key, and stores them in a Docker volume. -_Note:_ The script hard codes several `letsencrypt` options, one of which automatically agrees to the Let's Encrypt Terms of Service. 
+ +```{note} +The script hard codes several `letsencrypt` options, one of which automatically agrees to the Let's Encrypt Terms of Service. +``` The following command will create a certificate chain and store it in a Docker volume named `mydomain-secrets`. From ce93de8e2a50aeaef72b39eb4b5063937006deda Mon Sep 17 00:00:00 2001 From: Ayaz Salikhov Date: Wed, 2 Feb 2022 12:59:39 +0300 Subject: [PATCH 10/12] Fix typo --- docs/using/common.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/using/common.md b/docs/using/common.md index 6d395cf8..9b27f0bd 100644 --- a/docs/using/common.md +++ b/docs/using/common.md @@ -72,7 +72,7 @@ You do so by passing arguments to the `docker run` command. - `--user 5000 --group-add users` - Launches the container with a specific user ID and adds that user to the `users` group so that it can modify files in the default home directory and `/opt/conda`. You can use these arguments as alternatives to setting `${NB_UID}` and `${NB_GID}`. -## Permision-specific configurations +## Permission-specific configurations - `-e NB_UMASK=` - Configures Jupyter to use a different `umask` value from default, i.e. `022`. For example, if setting `umask` to `002`, new files will be readable and writable by group members instead of the owner only. From 359a66057094836e116f2f0ba8be8ff7326778fb Mon Sep 17 00:00:00 2001 From: Ayaz Salikhov Date: Wed, 2 Feb 2022 15:31:19 +0300 Subject: [PATCH 11/12] Update docs/contributing/packages.md --- docs/contributing/packages.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/contributing/packages.md b/docs/contributing/packages.md index eeec2ec9..338e9563 100644 --- a/docs/contributing/packages.md +++ b/docs/contributing/packages.md @@ -1,7 +1,7 @@ # Package Updates As a general rule, we do not pin package versions in our `Dockerfile`s. -The dependencies resolution is a difficult thing to do +The dependencies resolution is a difficult thing to do. 
This means that packages might have old versions. Images are rebuilt weekly, so usually, packages receive updates quite frequently. From 9fa1a586fc33428741f14df67a62f1c29a8800e6 Mon Sep 17 00:00:00 2001 From: Ayaz Salikhov Date: Wed, 2 Feb 2022 16:18:28 +0300 Subject: [PATCH 12/12] Apply suggestions from code review Co-authored-by: Tania Allard --- docs/contributing/features.md | 2 +- docs/contributing/packages.md | 2 +- docs/contributing/tests.md | 4 ++-- docs/maintaining/tasks.md | 2 +- 4 files changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/contributing/features.md b/docs/contributing/features.md index 5b048905..a313493d 100644 --- a/docs/contributing/features.md +++ b/docs/contributing/features.md @@ -39,7 +39,7 @@ If there's agreement that the feature belongs in one or more of the core stacks: 1. Implement the feature in a local clone of the `jupyter/docker-stacks` project. 2. Please, build the image locally before submitting a pull request. It shortens the debugging cycle by taking some load off GitHub Actions, - which graciously provide free build services for open source projects like this one. + which graciously provides free build services for open source projects like this one. If you use `make`, call: ```bash diff --git a/docs/contributing/packages.md b/docs/contributing/packages.md index 338e9563..568ad94a 100644 --- a/docs/contributing/packages.md +++ b/docs/contributing/packages.md @@ -6,7 +6,7 @@ This means that packages might have old versions. Images are rebuilt weekly, so usually, packages receive updates quite frequently. ```{note} -We pin major.minor version of python, so it will stay the same even after `mamba update` command. +We pin major.minor version of python, so this will stay the same even after invoking the `mamba update` command. 
```

 ## Outdated packages

diff --git a/docs/contributing/tests.md b/docs/contributing/tests.md
index 24d1b628..a0157e4c 100644
--- a/docs/contributing/tests.md
+++ b/docs/contributing/tests.md
@@ -15,8 +15,8 @@ defined in the [conftest.py](https://github.com/jupyter/docker-stacks/blob/maste

 ## Unit tests

-If you want to simply run a python script in one of our images, you could add a unit test.
-Simply create `-notebook/test/units/` directory, if it doesn't already exist and put your file there.
+If you want to run a python script in one of our images, you could add a unit test.
+You can do this by creating a `-notebook/test/units/` directory, if it doesn't already exist, and putting your file there.
 These files will run automatically when tests are run.
 You could see an example for tensorflow package [here](https://github.com/jupyter/docker-stacks/blob/HEAD/tensorflow-notebook/test/units/unit_tensorflow.py).

diff --git a/docs/maintaining/tasks.md b/docs/maintaining/tasks.md
index fcb40b89..f490bc2a 100644
--- a/docs/maintaining/tasks.md
+++ b/docs/maintaining/tasks.md
@@ -18,7 +18,7 @@ To build new images and publish them to the Docker Hub registry, do the followin
 Other images are directly or indirectly inherited from `minimal-notebook`.

 We rebuild our images automatically each week, which means they receive the updates quite frequently.
-When there's a security fix in the Ubuntu base image, it's a good idea to manually trigger images rebuild [here](https://github.com/jupyter/docker-stacks/actions/workflows/docker.yml).
+When there's a security fix in the Ubuntu base image, it's a good idea to manually trigger images rebuild [from the GitHub actions workflow UI](https://github.com/jupyter/docker-stacks/actions/workflows/docker.yml).
 Pushing `Run Workflow` button will trigger this process.

 ## Adding a New Core Image to Docker Hub