Merge pull request #1591 from mathbunnyru/asalikhov/cleanup_docs

Cleanup docs
This commit is contained in:
Ayaz Salikhov
2022-02-02 16:20:40 +03:00
committed by GitHub
14 changed files with 161 additions and 147 deletions

View File

@@ -112,6 +112,7 @@ RUN set -x && \
conda config --system --set auto_update_conda false && \
conda config --system --set show_channel_urls true && \
if [[ "${PYTHON_VERSION}" != "default" ]]; then mamba install --quiet --yes python="${PYTHON_VERSION}"; fi && \
# Pin major.minor version of python
mamba list python | grep '^python ' | tr -s ' ' | cut -d ' ' -f 1,2 >> "${CONDA_DIR}/conda-meta/pinned" && \
# Using conda to update all packages: https://github.com/mamba-org/mamba/issues/1092
conda update --all --quiet --yes && \
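The pinning pipeline above can be tried outside the image. A minimal sketch, assuming `mamba list python` produces its usual columnar output (the sample output below is hypothetical):

```shell
# Hypothetical `mamba list python` output, captured for illustration
mamba_output='# packages in environment at /opt/conda:
python                    3.9.7    hb7a2778_3_cpython    conda-forge'
# Same pipeline as the Dockerfile: keep only the name and version columns
echo "$mamba_output" | grep '^python ' | tr -s ' ' | cut -d ' ' -f 1,2
```

The resulting `python 3.9.7` line is what gets appended to `${CONDA_DIR}/conda-meta/pinned`.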

View File

@@ -37,8 +37,9 @@ Roughly speaking, we evaluate new features based on the following criteria:
If there's agreement that the feature belongs in one or more of the core stacks:

1. Implement the feature in a local clone of the `jupyter/docker-stacks` project.
2. Please build the image locally before submitting a pull request.
   It shortens the debugging cycle by taking some load off GitHub Actions,
   which graciously provides free build services for open source projects like this one.
   If you use `make`, call:

   ```bash

View File

@@ -13,9 +13,9 @@ This can be achieved by using the generic task used to install all Python develo
```sh
# Install all development dependencies for the project
make dev-env
# It can also be installed directly
pip install pre-commit
```

Then the git hooks scripts configured for the project in `.pre-commit-config.yaml` need to be installed in the local git repository.
@@ -29,7 +29,9 @@ make pre-commit-install
Now pre-commit (and so configured hooks) will run automatically on `git commit` on each changed file.
However, it is also possible to trigger it against all files.

```{note}
Hadolint pre-commit uses docker to run, so docker should be running while running this command.
```

```sh
make pre-commit-all
```

View File

@@ -1,41 +1,22 @@
# Package Updates

As a general rule, we do not pin package versions in our `Dockerfile`s.
Dependency resolution is a difficult thing to do, so packages might have old versions.
Images are rebuilt weekly, so usually, packages receive updates quite frequently.

```{note}
We pin the major.minor version of python, so it will stay the same even after invoking the `mamba update` command.
```

## Outdated packages

To help identify packages that can be updated, you can use the following helper tool.
It will list all the packages installed in the `Dockerfile` that can be updated -- dependencies are
filtered to focus only on requested packages.

```bash
make check-outdated/base-notebook
# INFO test_outdated:test_outdated.py:80 3/8 (38%) packages could be updated
# INFO test_outdated:test_outdated.py:82
```
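The "can be updated" decision boils down to comparing an installed version against the latest available one. A simplified sketch of that comparison using `sort -V` (GNU coreutils); the real helper uses its own logic:

```shell
# succeeds (exit 0) when the first version is strictly older than the second
is_outdated() {
  [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}
is_outdated "5.4.0" "6.4.6" && echo "5.4.0 -> 6.4.6: update available"
```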

View File

@@ -3,9 +3,10 @@
We love to see the community create and share new Jupyter Docker images.
We've put together a [cookiecutter project](https://github.com/jupyter/cookiecutter-docker-stacks)
and the documentation below to help you get started defining, building, and sharing your Jupyter environments in Docker.

Following these steps will:

1. Set up a project on GitHub containing a Dockerfile based on any of the images we provide.
2. Configure GitHub Actions to build and test your image when users submit pull requests to your repository.
3. Configure Docker Hub to build and host your images for others to use.
4. Update the [list of community stacks](../using/selecting.html#community-stacks) in this documentation to include your image.
@@ -62,7 +63,7 @@ git init
git add .
git commit -m 'Seed repo'
git remote add origin <url from github>
git push -u origin main
```

## Configuring GitHub actions
@@ -118,7 +119,11 @@ you merge a GitHub pull request to the master branch of your project.
11. Enter a meaningful name for your token and click on **Create**
    ![DockerHub - New Access Token page with the name field set to "my-jupyter-docker-token"](../_static/docker-org-create-token.png)
12. Copy the personal access token displayed on the next screen.

    ```{note}
    You will not be able to see it again after you close the pop-up window.
    ```
13. Head back to your GitHub repository and click on the **Settings tab**.
    ![GitHub page with the "Settings" tab active and a rectangle highlighting the "New repository secret" button in the UI](../_static/github-create-secrets.png)
14. Click on the **Secrets** section and then on the **New repository secret** button on the top right corner (see image above).

View File

@@ -5,8 +5,7 @@ of the Docker images.
## How the Tests Work

GitHub Actions executes `make build-test-all` against pull requests submitted to the `jupyter/docker-stacks` repository.
This `make` command builds every docker image.
After building each image, the `make` command executes `pytest` to run both image-specific tests like those in
[base-notebook/test/](https://github.com/jupyter/docker-stacks/tree/master/base-notebook/test) and
@@ -14,6 +13,13 @@ common tests defined in [test/](https://github.com/jupyter/docker-stacks/tree/ma
Both kinds of tests make use of global [pytest fixtures](https://docs.pytest.org/en/latest/reference/fixtures.html)
defined in the [conftest.py](https://github.com/jupyter/docker-stacks/blob/master/conftest.py) file at the root of the project.

## Unit tests

If you want to run a python script in one of our images, you could add a unit test.
You can do this by creating a `<somestack>-notebook/test/units/` directory, if it doesn't already exist, and putting your file there.
These files will run automatically when tests are run.
You can see an example for the tensorflow package [here](https://github.com/jupyter/docker-stacks/blob/HEAD/tensorflow-notebook/test/units/unit_tensorflow.py).
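The directory layout above can be sketched as follows; `somestack-notebook` and the `some_package` import are hypothetical placeholders, mirroring the real tensorflow example:

```shell
# Create the units directory and drop in a test that simply imports the package
mkdir -p somestack-notebook/test/units
echo "import some_package" > somestack-notebook/test/units/unit_some_package.py
cat somestack-notebook/test/units/unit_some_package.py
```

When the test suite runs against the image, each script in `test/units/` is executed, so a failing import fails the build.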
## Contributing New Tests

Please follow the process below to add new tests:

View File

@@ -14,13 +14,19 @@ To build new images and publish them to the Docker Hub registry, do the followin
## Updating the Ubuntu Base Image

`minimal-notebook` is based on the latest LTS Ubuntu docker image.
Other images are directly or indirectly inherited from `minimal-notebook`.
We rebuild our images automatically each week, which means they receive updates quite frequently.

When there's a security fix in the Ubuntu base image, it's a good idea to manually trigger an image rebuild [from the GitHub Actions workflow UI](https://github.com/jupyter/docker-stacks/actions/workflows/docker.yml).
Pushing the `Run Workflow` button will trigger this process.
## Adding a New Core Image to Docker Hub

```{note}
In general, we do not add new core images and ask contributors to either create a [recipe](../using/recipes.md) or [community stack](../using/stacks.md).
```

When there's a new stack definition, do the following before merging the PR with the new stack:

1. Ensure the PR includes an update to the stack overview diagram

View File

@@ -72,16 +72,19 @@ You do so by passing arguments to the `docker run` command.
- `--user 5000 --group-add users` - Launches the container with a specific user ID and adds that user to the `users` group so that it can modify files in the default home directory and `/opt/conda`.
  You can use these arguments as alternatives to setting `${NB_UID}` and `${NB_GID}`.

## Permission-specific configurations
- `-e NB_UMASK=<umask>` - Configures Jupyter to use a different `umask` value from default, i.e. `022`.
  For example, if setting `umask` to `002`, new files will be readable and writable by group members instead of the owner only.
  [Check this Wikipedia article](https://en.wikipedia.org/wiki/Umask) for an in-depth description of `umask` and suitable values for multiple needs.
  While the default `umask` value should be sufficient for most use cases, you can set the `NB_UMASK` value to fit your requirements.

  ```{note}
  `NB_UMASK` when set only applies to the Jupyter process itself -
  you cannot use it to set a `umask` for additional files created during run-hooks.
  For example, via `pip` or `conda`.
  If you need to set a `umask` for these, you must set the `umask` value for each command.
  ```

- `-e CHOWN_HOME=yes` - Instructs the startup script to change the `${NB_USER}` home directory owner and group to the current value of `${NB_UID}` and `${NB_GID}`.
  This change will take effect even if the user home directory is mounted from the host using `-v` as described below.
@@ -135,7 +138,7 @@ For example, to mount a host folder containing a `notebook.key` and `notebook.cr
docker run -d -p 8888:8888 \
    -v /some/host/folder:/etc/ssl/notebook \
    jupyter/base-notebook start-notebook.sh \
    --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key \
    --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt
```
@@ -207,10 +210,10 @@ For example, to run the text-based `ipython` console in a container, do the foll
docker run -it --rm jupyter/base-notebook start.sh ipython
```

Or, to run Jupyter Notebook classic instead of JupyterLab, run the following:

```bash
docker run -it --rm -p 8888:8888 jupyter/base-notebook start.sh jupyter notebook
```

This script is handy when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc.

View File

@@ -268,9 +268,7 @@ Enabling manpages in the base Ubuntu layer prevents this container bloat.
To achieve this, use the previous `Dockerfile` with the original ubuntu image (`ubuntu:focal`) as your base container:

```dockerfile
ARG BASE_CONTAINER=ubuntu:focal
```
For Ubuntu 18.04 (bionic) and earlier, you may also require a workaround for a mandb bug, which was fixed in mandb >= 2.8.6.1:

View File

@@ -18,42 +18,42 @@ It then starts a container running a Jupyter Notebook server and exposes the ser
The server logs appear in the terminal and include a URL to the notebook server.

```bash
docker run -p 8888:8888 jupyter/scipy-notebook:b418b67c225b
# Executing the command: jupyter notebook
# [I 15:33:00.567 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret
# [W 15:33:01.084 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
# [I 15:33:01.150 NotebookApp] JupyterLab alpha preview extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
# [I 15:33:01.150 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
# [I 15:33:01.155 NotebookApp] Serving notebooks from local directory: /home/jovyan
# [I 15:33:01.156 NotebookApp] 0 active kernels
# [I 15:33:01.156 NotebookApp] The Jupyter Notebook is running at:
# [I 15:33:01.157 NotebookApp] http://[all ip addresses on your system]:8888/?token=112bb073331f1460b73768c76dffb2f87ac1d4ca7870d46a
# [I 15:33:01.157 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
# [C 15:33:01.160 NotebookApp]
#     Copy/paste this URL into your browser when you connect for the first time,
#     to login with a token:
#         http://localhost:8888/?token=112bb073331f1460b73768c76dffb2f87ac1d4ca7870d46a
```
Pressing `Ctrl-C` shuts down the notebook server but leaves the container intact on disk for later restart or permanent deletion using commands like the following:

```bash
# list containers
docker ps -a
# CONTAINER ID  IMAGE                  COMMAND                  CREATED         STATUS                     PORTS  NAMES
# d67fe77f1a84  jupyter/base-notebook  "tini -- start-noteb…"   44 seconds ago  Exited (0) 39 seconds ago         cocky_mirzakhani

# start the stopped container
docker start -a d67fe77f1a84
# Executing the command: jupyter notebook
# [W 16:45:02.020 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
# ...

# remove the stopped container
docker rm d67fe77f1a84
# d67fe77f1a84
```
**Example 2** This command pulls the `jupyter/r-notebook` image tagged `b418b67c225b` from Docker Hub if it is not already present on the local host. **Example 2** This command pulls the `jupyter/r-notebook` image tagged `b418b67c225b` from Docker Hub if it is not already present on the local host.
@@ -61,23 +61,23 @@ It then starts a container running a Jupyter Notebook server and exposes the ser
The server logs appear in the terminal and include a URL to the notebook server, but with the internal container port (8888) instead of the correct host port (10000).
```bash
docker run --rm -p 10000:8888 -v "${PWD}":/home/jovyan/work jupyter/r-notebook:b418b67c225b
# Executing the command: jupyter notebook
# [I 19:31:09.573 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret
# [W 19:31:11.930 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
# [I 19:31:12.085 NotebookApp] JupyterLab alpha preview extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
# [I 19:31:12.086 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
# [I 19:31:12.117 NotebookApp] Serving notebooks from local directory: /home/jovyan
# [I 19:31:12.117 NotebookApp] 0 active kernels
# [I 19:31:12.118 NotebookApp] The Jupyter Notebook is running at:
# [I 19:31:12.119 NotebookApp] http://[all ip addresses on your system]:8888/?token=3b8dce890cb65570fb0d9c4a41ae067f7604873bd604f5ac
# [I 19:31:12.120 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
# [C 19:31:12.122 NotebookApp]
#     Copy/paste this URL into your browser when you connect for the first time,
#     to login with a token:
#         http://localhost:8888/?token=3b8dce890cb65570fb0d9c4a41ae067f7604873bd604f5ac
```
Pressing `Ctrl-C` shuts down the notebook server and immediately destroys the Docker container. Pressing `Ctrl-C` shuts down the notebook server and immediately destroys the Docker container.
@@ -95,14 +95,14 @@ The assigned port and notebook server token are visible using other Docker comma
```bash
# get the random host port assigned to the container port 8888
docker port notebook 8888
# 0.0.0.0:32769

# get the notebook token from the logs
docker logs --tail 3 notebook
#     Copy/paste this URL into your browser when you connect for the first time,
#     to login with a token:
#         http://localhost:8888/?token=15914ca95f495075c0aa7d0e060f1a78b6d94f70ea373b00
```
Together, the URL to visit on the host machine to access the server in this case is <http://localhost:32769?token=15914ca95f495075c0aa7d0e060f1a78b6d94f70ea373b00>.
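Combining those two pieces of output is mechanical; a small sketch using the port and token values from the example above:

```shell
# Values taken from `docker port` and `docker logs` in the example above
port="32769"
token="15914ca95f495075c0aa7d0e060f1a78b6d94f70ea373b00"
echo "http://localhost:${port}/?token=${token}"
```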
@@ -112,11 +112,11 @@ The container runs in the background until stopped and/or removed by additional
```bash
# stop the container
docker stop notebook
# notebook

# remove the container permanently
docker rm notebook
# notebook
```
## Using Binder

View File

@@ -9,7 +9,11 @@ This page provides details about features specific to one or more images.
- `-p 4040:4040` - The `jupyter/pyspark-notebook` and `jupyter/all-spark-notebook` images open
  [SparkUI (Spark Monitoring and Instrumentation UI)](https://spark.apache.org/docs/latest/monitoring.html) at default port `4040`.
  This option maps the `4040` port inside the docker container to the `4040` port on the host machine.

  ```{note}
  Every new spark context that is created is put onto an incrementing port (i.e. 4040, 4041, 4042, etc.), and it might be necessary to open multiple ports.
  ```

  For example: `docker run -d -p 8888:8888 -p 4040:4040 -p 4041:4041 jupyter/pyspark-notebook`.
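When you expect many Spark contexts, the repeated `-p` flags can be generated instead of typed by hand. A minimal sketch, assuming you want the range 4040-4042:

```shell
# Build one -p flag per SparkUI port in the range
flags=""
for port in $(seq 4040 4042); do
  flags="${flags} -p ${port}:${port}"
done
echo "docker run -d -p 8888:8888${flags} jupyter/pyspark-notebook"
```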
#### IPython low-level output capture and forward
@@ -162,8 +166,10 @@ Connection to Spark Cluster on **[Standalone Mode](https://spark.apache.org/docs
2. Run the Docker container with `--net=host` in a location that is network addressable by all of
   your Spark workers.
   (This is a [Spark networking requirement](https://spark.apache.org/docs/latest/cluster-overview.html#components).)

   ```{note}
   When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`.
   See <https://github.com/jupyter/docker-stacks/issues/64> for details.
   ```

**Note**: In the following examples we are using the Spark master URL `spark://master:7077`, which shall be replaced by the URL of the Spark master.
@@ -243,6 +249,10 @@ rdd.sum()
### Define Spark Dependencies

```{note}
This example is given for [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/hadoop/current/install.html).
```

Spark dependencies can be declared thanks to the `spark.jars.packages` property
(see [Spark Configuration](https://spark.apache.org/docs/latest/configuration.html#runtime-environment) for more information).
@@ -272,8 +282,6 @@ USER ${NB_UID}
Jars will be downloaded dynamically at the creation of the Spark session and stored by default in `${HOME}/.ivy2/jars` (can be changed by setting `spark.jars.ivy`).

## Tensorflow

The `jupyter/tensorflow-notebook` image supports the use of

View File

@@ -102,7 +102,10 @@ notebook/up.sh --secure --password a_secret
Sure. If you want to secure access to publicly addressable notebook containers, you can generate a free certificate using the [Let's Encrypt](https://letsencrypt.org) service. Sure. If you want to secure access to publicly addressable notebook containers, you can generate a free certificate using the [Let's Encrypt](https://letsencrypt.org) service.
This example includes the `bin/letsencrypt.sh` script, which runs the `letsencrypt` client to create a full-chain certificate and private key, and stores them in a Docker volume.
```{note}
The script hard codes several `letsencrypt` options, one of which automatically agrees to the Let's Encrypt Terms of Service.
```
The following command will create a certificate chain and store it in a Docker volume named `mydomain-secrets`.


@@ -45,30 +45,30 @@ The base image which the files will be combined with is `jupyter/minimal-noteboo
The resulting image can be seen by running the `docker images` command:
```bash
docker images
# REPOSITORY          TAG      IMAGE ID       CREATED         SIZE
# notebook-examples   latest   f5899ed1241d   2 minutes ago   2.59GB
```

You can now run the image.

```bash
docker run --rm -p 8888:8888 notebook-examples
# Executing the command: jupyter notebook
# [I 01:14:50.532 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret
# [W 01:14:50.724 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
# [I 01:14:50.747 NotebookApp] JupyterLab beta preview extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab
# [I 01:14:50.747 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
# [I 01:14:50.754 NotebookApp] Serving notebooks from local directory: /home/jovyan
# [I 01:14:50.754 NotebookApp] 0 active kernels
# [I 01:14:50.754 NotebookApp] The Jupyter Notebook is running at:
# [I 01:14:50.754 NotebookApp] http://[all ip addresses on your system]:8888/?token=04646d5c5e928da75842cd318d4a3c5aa1f942fc5964323a
# [I 01:14:50.754 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
# [C 01:14:50.755 NotebookApp]
#     Copy/paste this URL into your browser when you connect for the first time,
#     to login with a token:
#         http://localhost:8888/?token=04646d5c5e928da75842cd318d4a3c5aa1f942fc5964323a
```
Open the displayed URL in your browser; you will find the notebooks from the Git repository there and can work with them.