Merge branch 'master' into asalikhov/automatic_conda_versioning

This commit is contained in:
Ayaz Salikhov
2021-06-21 20:06:14 +03:00
28 changed files with 1038 additions and 813 deletions

View File

@@ -20,4 +20,7 @@ Example: Add a package [altair](https://altair-viz.github.io).
**How does this change will affect users?**
Example: Altair is a declarative statistical visualization library for Python, based on Vega and Vega-Lite, and the source is available on GitHub. With Altair, you can spend more time understanding your data and its meaning. Altairs API is simple, friendly and consistent and built on top of the powerful Vega-Lite visualization grammar. This elegant simplicity produces beautiful and effective visualizations with a minimal amount of code.
Example: Altair is a declarative statistical visualization library for Python, based on Vega and Vega-Lite, and the source is available on GitHub.
With Altair, you can spend more time understanding your data and its meaning.
Altair's API is simple, friendly, and consistent, and is built on top of the powerful Vega-Lite visualization grammar.
This elegant simplicity produces beautiful and effective visualizations with a minimal amount of code.
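A request like the example above typically ends up as a pinned install in one of the stack Dockerfiles; a minimal sketch (the `altair` name comes from the example, while the `4.1.*` pin and the exact `conda` invocation are assumptions, not the project's actual diff):

```dockerfile
# Hypothetical addition to a stack Dockerfile: pin major.minor as the
# contributing guide recommends, then clean the conda cache to keep the
# image layer small.
RUN conda install --quiet --yes 'altair=4.1.*' && \
    conda clean --all -f -y
```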

View File

@@ -8,6 +8,7 @@ on:
push:
branches:
- master
- main
paths:
- "docs/**"
- ".github/workflows/sphinx.yml"
@@ -35,11 +36,10 @@ jobs:
- name: Build Documentation
run: make docs
- name: Extract Source Strings
if: github.ref == 'refs/heads/master' || github.ref == 'refs/heads/main'
working-directory: docs
run: |
make gettext
sphinx-intl update -p _build/gettext -l en
sphinx-build -M gettext ./ ./_build/
sphinx-intl update -p ./_build/gettext -l en
- name: Push Strings to Master
if: github.ref == 'refs/heads/master' || github.ref == 'refs/heads/main'
run: make git-commit
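Reassembled from the hunk above, the updated extraction step reads roughly like this (a sketch; the surrounding job context and indentation are assumed):

```yaml
- name: Extract Source Strings
  if: github.ref == 'refs/heads/master' || github.ref == 'refs/heads/main'
  working-directory: docs
  run: |
    sphinx-build -M gettext ./ ./_build/
    sphinx-intl update -p ./_build/gettext -l en
```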

View File

@@ -4,4 +4,4 @@ default: true
# MD013/line-length - Line length
MD013:
# Number of characters
line_length: 1000
line_length: 200
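Where a single line legitimately needs to exceed the new 200-character limit, markdownlint's inline comment directives can exempt it; a small sketch (the disable/enable comments are real markdownlint syntax, the content line is hypothetical):

```markdown
<!-- markdownlint-disable MD013 -->
| a wide table row that cannot reasonably be wrapped without breaking the table |
<!-- markdownlint-enable MD013 -->
```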

View File

@@ -60,7 +60,7 @@ dev-env: ## install libraries required to build docs and run tests
@pip install -r requirements-dev.txt
docs: ## build HTML documentation
make -C docs html
sphinx-build docs/ docs/_build/
git-commit: LOCAL_PATH?=.
git-commit: GITHUB_SHA?=$(shell git rev-parse HEAD)

View File

@@ -12,44 +12,45 @@ containing Jupyter applications and interactive computing tools.
## Quick Start
You can try a
[relatively recent build of the jupyter/base-notebook image on mybinder.org](https://mybinder.org/v2/gh/jupyter/docker-stacks/master?filepath=README.ipynb)
by simply clicking the preceding link. The image used in binder was last updated on 22 May 2021.
Otherwise, the two examples below may help you get started if
you [have Docker installed](https://docs.docker.com/install/) know
[which Docker image](https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html) you
want to use, and want to launch a single Jupyter Notebook server in a container.
You can try a [relatively recent build of the jupyter/base-notebook image on mybinder.org](https://mybinder.org/v2/gh/jupyter/docker-stacks/master?filepath=README.ipynb)
by simply clicking the preceding link.
The image used in binder was last updated on 22 May 2021.
Otherwise, the two examples below may help you get started if you [have Docker installed](https://docs.docker.com/install/),
know [which Docker image](https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html) you want to use,
and want to launch a single Jupyter Notebook server in a container.
The [User Guide on ReadTheDocs](https://jupyter-docker-stacks.readthedocs.io/) describes additional
uses and features in detail.
The [User Guide on ReadTheDocs](https://jupyter-docker-stacks.readthedocs.io/) describes additional uses and features in detail.
**Example 1:** This command pulls the `jupyter/scipy-notebook` image tagged `33add21fab64` from
Docker Hub if it is not already present on the local host. It then starts a container running a
Jupyter Notebook server and exposes the server on host port 8888. The server logs appear in the
terminal. Visiting `http://<hostname>:8888/?token=<token>` in a browser loads the Jupyter Notebook
dashboard page, where `hostname` is the name of the computer running docker and `token` is the
secret token printed in the console. The container remains intact for restart after the notebook
server exits.
**Example 1:** This command pulls the `jupyter/scipy-notebook` image tagged `33add21fab64` from Docker Hub if it is not already present on the local host.
It then starts a container running a Jupyter Notebook server and exposes the server on host port 8888.
The server logs appear in the terminal.
Visiting `http://<hostname>:8888/?token=<token>` in a browser loads the Jupyter Notebook dashboard page,
where `hostname` is the name of the computer running docker and `token` is the secret token printed in the console.
The container remains intact for restart after the notebook server exits.
```bash
docker run -p 8888:8888 jupyter/scipy-notebook:33add21fab64
```
**Example 2:** This command performs the same operations as **Example 1**, but it exposes the server
on host port 10000 instead of port 8888. Visiting `http://<hostname>:10000/?token=<token>` in a
browser loads JupyterLab, where `hostname` is the name of the computer running docker and `token` is
the secret token printed in the console.::
**Example 2:** This command performs the same operations as **Example 1**, but it exposes the server on host port 10000 instead of port 8888.
Visiting `http://<hostname>:10000/?token=<token>` in a browser loads JupyterLab,
where `hostname` is the name of the computer running docker and `token` is the secret token printed in the console.
```bash
docker run -p 10000:8888 jupyter/scipy-notebook:33add21fab64
```
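The two numbers in `-p` are easy to transpose; this tiny shell sketch (no Docker required, variable names are illustrative) shows which side is which:

```shell
# With `-p 10000:8888`, the number before the colon is the host port you
# visit in the browser; the number after it is the port the Jupyter
# server listens on inside the container.
PORT_MAPPING="10000:8888"
HOST_PORT="${PORT_MAPPING%%:*}"       # part before the colon -> 10000
CONTAINER_PORT="${PORT_MAPPING##*:}"  # part after the colon  -> 8888
echo "visit http://localhost:${HOST_PORT}/?token=<token>"
```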
**Example 3:** This command pulls the `jupyter/datascience-notebook` image tagged `33add21fab64`
from Docker Hub if it is not already present on the local host. It then starts an _ephemeral_
container running a Jupyter Notebook server and exposes the server on host port 10000. The command
mounts the current working directory on the host as `/home/jovyan/work` in the container. The server
logs appear in the terminal. Visiting `http://<hostname>:10000/?token=<token>` in a browser loads
JupyterLab, where `hostname` is the name of the computer running docker and `token` is the secret
token printed in the console. Docker destroys the container after notebook server exit, but any
files written to `~/work` in the container remain intact on the host.
**Example 3:** This command pulls the `jupyter/datascience-notebook` image tagged `33add21fab64` from Docker Hub if it is not already present on the local host.
It then starts an _ephemeral_ container running a Jupyter Notebook server and exposes the server on host port 10000.
The command mounts the current working directory on the host as `/home/jovyan/work` in the container.
The server logs appear in the terminal.
Visiting `http://<hostname>:10000/?token=<token>` in a browser loads JupyterLab,
where `hostname` is the name of the computer running docker and `token` is the secret token printed in the console.
Docker destroys the container after the notebook server exits, but any files written to `~/work` in the container remain intact on the host.
```bash
docker run --rm -p 10000:8888 -e JUPYTER_ENABLE_LAB=yes -v "${PWD}":/home/jovyan/work jupyter/datascience-notebook:33add21fab64
```
## Contributing
@@ -59,13 +60,12 @@ maintained stacks.
## Maintainer Help Wanted
We value all positive contributions to the Docker stacks project, from
[bug reports](https://jupyter-docker-stacks.readthedocs.io/en/latest/contributing/issues.html) to
[pull requests](https://jupyter-docker-stacks.readthedocs.io/en/latest/contributing/packages.html)
to
[translations](https://jupyter-docker-stacks.readthedocs.io/en/latest/contributing/translations.html)
to help answering questions. We'd also like to invite members of the community to help with two
maintainer activities:
We value all positive contributions to the Docker stacks project,
from [bug reports](https://jupyter-docker-stacks.readthedocs.io/en/latest/contributing/issues.html)
to [pull requests](https://jupyter-docker-stacks.readthedocs.io/en/latest/contributing/packages.html)
to [translations](https://jupyter-docker-stacks.readthedocs.io/en/latest/contributing/translations.html)
to helping answer questions.
We'd also like to invite members of the community to help with two maintainer activities:
- Issue triage: Reading and providing a first response to issues, labeling issues appropriately,
redirecting cross-project questions to Jupyter Discourse
@@ -73,9 +73,8 @@ maintainer activities:
to improve the contribution, deciding if the contribution should take another form (e.g., a recipe
instead of a permanent change to the images)
Anyone in the community can jump in and help with these activities at any time. We will happily
grant additional permissions (e.g., ability to merge PRs) to anyone who shows an on-going interest
in working on the project.
Anyone in the community can jump in and help with these activities at any time.
We will happily grant additional permissions (e.g., ability to merge PRs) to anyone who shows an ongoing interest in working on the project.
## Jupyter Notebook Deprecation Notice
@@ -85,7 +84,8 @@ more information is available in the [documentation](https://jupyter-docker-stac
At some point, JupyterLab will become the default for all of the Jupyter Docker stack images; however, a new environment variable will be introduced to switch back to Jupyter Notebook if needed.
After the change of default, and according to the Jupyter Notebook project status and its compatibility with JupyterLab, these Docker images may remove the classic Jupyter Notebook interface altogether in favor of another _classic-like_ UI built atop JupyterLab.
After the change of default, and according to the Jupyter Notebook project status and its compatibility with JupyterLab,
these Docker images may remove the classic Jupyter Notebook interface altogether in favor of another _classic-like_ UI built atop JupyterLab.
This change is tracked in issue [#1217](https://github.com/jupyter/docker-stacks/issues/1217); please check its content for more information.

View File

@@ -1,20 +0,0 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SPHINXPROJ = docker-stacks
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

View File

@@ -37,15 +37,14 @@ Roughly speaking, we evaluate new features based on the following criteria:
If there's agreement that the feature belongs in one or more of the core stacks:
1. Implement the feature in a local clone of the `jupyter/docker-stacks` project.
2. Please build the image locally before submitting a pull request. Building the image locally
shortens the debugging cycle by taking some load off GitHub Actions, which graciously provide
free build services for open source projects like this one. If you use `make`, call:
2. Please build the image locally before submitting a pull request.
Building the image locally shortens the debugging cycle by taking some load off GitHub Actions, which graciously provide free build services for open source projects like this one.
If you use `make`, call:
```bash
make build/somestack-notebook
```
3. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request)
(PR) with your changes.
3. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with your changes.
4. Watch for GitHub to report a build success or failure for your PR on GitHub.
5. Discuss changes with the maintainers and address any build issues.

View File

@@ -1,25 +1,17 @@
# Project Issues
We appreciate your taking the time to report an issue you encountered using the
Jupyter Docker Stacks. Please review the following guidelines when reporting
your problem.
We appreciate your taking the time to report an issue you encountered using the Jupyter Docker Stacks.
Please review the following guidelines when reporting your problem.
- If you believe you've found a security vulnerability in any of the Jupyter
projects included in Jupyter Docker Stacks images, please report it to
[security@ipython.org](mailto:security@ipython.org), not in the issue trackers
on GitHub. If you prefer to encrypt your security reports, you can use [this
PGP public
key](https://github.com/jupyter/jupyter.github.io/blob/master/assets/ipython_security.asc).
- If you believe you've found a security vulnerability in any of the Jupyter projects included in Jupyter Docker Stacks images,
please report it to [security@ipython.org](mailto:security@ipython.org), not in the issue trackers on GitHub.
If you prefer to encrypt your security reports, you can use [this PGP public key](https://github.com/jupyter/jupyter.github.io/blob/master/assets/ipython_security.asc).
- If you think your problem is unique to the Jupyter Docker Stacks images,
please search the [jupyter/docker-stacks issue
tracker](https://github.com/jupyter/docker-stacks/issues) to see if someone
else has already reported the same problem. If not, please open a [new
issue](https://github.com/jupyter/docker-stacks/issues/new) and provide all of
the information requested in the issue template.
- If the issue you're seeing is with one of the open source libraries included
in the Docker images and is reproducible outside the images, please file a bug
with the appropriate open source project.
- If you have a general question about how to use the Jupyter Docker Stacks in
your environment, in conjunction with other tools, with customizations, and so
on, please post your question on the [Jupyter Discourse
site](https://discourse.jupyter.org).
please search the [jupyter/docker-stacks issue tracker](https://github.com/jupyter/docker-stacks/issues)
to see if someone else has already reported the same problem.
If not, please open a [new issue](https://github.com/jupyter/docker-stacks/issues/new) and provide all of the information requested in the issue template.
- If the issue you're seeing is with one of the open source libraries included in the Docker images and is reproducible outside the images,
please file a bug with the appropriate open source project.
- If you have a general question about how to use the Jupyter Docker Stacks in your environment,
in conjunction with other tools, with customizations, and so on,
please post your question on the [Jupyter Discourse site](https://discourse.jupyter.org).

View File

@@ -51,7 +51,9 @@ The following rules are ignored by default for all images in the `.hadolint.yaml
For other rules, the preferred way to do it is to flag ignored rules in the `Dockerfile`.
> It is also possible to ignore rules by using a special comment directly above the Dockerfile instruction you want to make an exception for. Ignore rule comments look like `# hadolint ignore=DL3001,SC1081`. For example:
> It is also possible to ignore rules by using a special comment directly above the Dockerfile instruction you want to make an exception for.
> Ignore rule comments look like `# hadolint ignore=DL3001,SC1081`.
> For example:
```dockerfile
FROM ubuntu

View File

@@ -9,13 +9,13 @@ Please follow the process below to update a package version:
1. Locate the Dockerfile containing the library you wish to update (e.g.,
[base-notebook/Dockerfile](https://github.com/jupyter/docker-stacks/blob/master/base-notebook/Dockerfile),
[scipy-notebook/Dockerfile](https://github.com/jupyter/docker-stacks/blob/master/scipy-notebook/Dockerfile))
2. Adjust the version number for the package. We prefer to pin the major and minor version number of
packages so as to minimize rebuild side-effects when users submit pull requests (PRs). For
example, you'll find the Jupyter Notebook package, `notebook`, installed using conda with
2. Adjust the version number for the package.
We prefer to pin the major and minor version number of packages so as to minimize rebuild side-effects when users submit pull requests (PRs).
For example, you'll find the Jupyter Notebook package, `notebook`, installed using conda with
`notebook=5.4.*`.
3. Please build the image locally before submitting a pull request. Building the image locally
shortens the debugging cycle by taking some load off GitHub Actions, which graciously provide
free build services for open source projects like this one. If you use `make`, call:
3. Please build the image locally before submitting a pull request.
Building the image locally shortens the debugging cycle by taking some load off GitHub Actions, which graciously provide free build services for open source projects like this one.
If you use `make`, call:
```bash
make build/somestack-notebook
@@ -24,13 +24,14 @@ Please follow the process below to update a package version:
4. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request)
(PR) with your changes.
5. Watch for GitHub to report a build success or failure for your PR on GitHub.
6. Discuss changes with the maintainers and address any build issues. Version conflicts are the most
common problem. You may need to upgrade additional packages to fix build failures.
6. Discuss changes with the maintainers and address any build issues.
Version conflicts are the most common problem.
You may need to upgrade additional packages to fix build failures.
## Notes
In order to help identifying packages that can be updated you can use the following helper tool. It
will list all the packages installed in the `Dockerfile` that can be updated -- dependencies are
To help identify packages that can be updated, you can use the following helper tool.
It will list all the packages installed in the `Dockerfile` that can be updated -- dependencies are
filtered to focus only on requested packages.
```bash

View File

@@ -1,8 +1,10 @@
# New Recipes
We welcome contributions of [recipes](../using/recipes.md), short examples of using, configuring, or extending the Docker Stacks, for inclusion in the documentation site. Follow the process below to add a new recipe:
We welcome contributions of [recipes](../using/recipes.md), short examples of using, configuring, or extending the Docker Stacks, for inclusion in the documentation site.
Follow the process below to add a new recipe:
1. Open the `docs/using/recipes.md` source file.
2. Add a second-level Markdown heading naming your recipe at the bottom of the file (e.g., `## Add the RISE extension`)
3. Write the body of your recipe under the heading, including whatever command line, Dockerfile, links, etc. you need.
4. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with your changes. Maintainers will respond and work with you to address any formatting or content issues.
4. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with your changes.
Maintainers will respond and work with you to address any formatting or content issues.

View File

@@ -1,19 +1,17 @@
# Community Stacks
We love to see the community create and share new Jupyter Docker images. We've put together a
[cookiecutter project](https://github.com/jupyter/cookiecutter-docker-stacks) and the documentation
below to help you get started defining, building, and sharing your Jupyter environments in Docker.
We love to see the community create and share new Jupyter Docker images.
We've put together a [cookiecutter project](https://github.com/jupyter/cookiecutter-docker-stacks)
and the documentation below to help you get started defining, building, and sharing your Jupyter environments in Docker.
Following these steps will:
1. Setup a project on GitHub containing a Dockerfile based on either the `jupyter/base-notebook` or
`jupyter/minimal-notebook` image.
2. Configure GitHub Actions to build and test your image when users submit pull requests to your
repository.
1. Set up a project on GitHub containing a Dockerfile based on either the `jupyter/base-notebook` or `jupyter/minimal-notebook` image.
2. Configure GitHub Actions to build and test your image when users submit pull requests to your repository.
3. Configure Docker Hub to build and host your images for others to use.
4. Update the [list of community stacks](../using/selecting.md#community-stacks) in this documentation to include your image.
This approach mirrors how we build and share the core stack images. Feel free to follow it or pave
your own path using alternative services and build tools.
This approach mirrors how we build and share the core stack images.
Feel free to follow it or pave your own path using alternative services and build tools.
## Creating a Project
@@ -23,31 +21,27 @@ First, install [cookiecutter](https://github.com/cookiecutter/cookiecutter) usin
pip install cookiecutter # or conda install cookiecutter
```
Run the cookiecutter command pointing to the
[jupyter/cookiecutter-docker-stacks](https://github.com/jupyter/cookiecutter-docker-stacks) project
on GitHub.
Run the cookiecutter command pointing to the [jupyter/cookiecutter-docker-stacks](https://github.com/jupyter/cookiecutter-docker-stacks) project on GitHub.
```bash
cookiecutter https://github.com/jupyter/cookiecutter-docker-stacks.git
```
Enter a name for your new stack image. This will serve as both the git repository name and the part
of the Docker image name after the slash.
Enter a name for your new stack image.
This will serve as both the git repository name and the part of the Docker image name after the slash.
```text
stack_name [my-jupyter-stack]:
```
Enter the user or organization name under which this stack will reside on Docker Hub. You
must have access to manage this Docker Hub organization to push images here and set up automated
builds.
Enter the user or organization name under which this stack will reside on Docker Hub.
You must have access to manage this Docker Hub organization to push images here and set up automated builds.
```text
stack_org [my-project]:
```
Select an image from the jupyter/docker-stacks project that will serve as the base for your new
image.
Select an image from the jupyter/docker-stacks project that will serve as the base for your new image.
```text
stack_base_image [jupyter/base-notebook]:
@@ -90,13 +84,15 @@ The cookiecutter template comes with a `.github/workflows/docker.yml` file, whic
- "*.md"
```
This will trigger the CI pipeline whenever you push to your `main` or `master` branch and when any Pull Requests are made to your repository. For more details on this configuration, visit the [GitHub actions documentation on triggers](https://docs.github.com/en/actions/reference/events-that-trigger-workflows).
This will trigger the CI pipeline whenever you push to your `main` or `master` branch and when any pull requests are made to your repository.
For more details on this configuration, visit the [GitHub Actions documentation on triggers](https://docs.github.com/en/actions/reference/events-that-trigger-workflows).
2. Commit your changes and push to GitHub.
3. Head back to your repository and click on the **Actions** tab.
![GitHub actions tab screenshot](../_static/github-actions-tab.png)
From there, you can click on the workflows on the left-hand side of the screen.
4. In the next screen, you will be able to see information about the workflow run and duration. If you click again on the button with the workflow name, you will see the logs for the workflow steps.
4. In the next screen, you will be able to see information about the workflow run and duration.
If you click again on the button with the workflow name, you will see the logs for the workflow steps.
![Github actions workflow run screenshot](../_static/github-actions-workflow.png)
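The trigger configuration described in step 1 has roughly this shape (a sketch; the cookiecutter's generated `docker.yml` may differ in its `paths` filters and other details):

```yaml
on:
  push:
    branches:
      - main
      - master
  pull_request:
```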
## Configuring Docker Hub
@@ -121,25 +117,29 @@ you merge a GitHub pull request to the master branch of your project.
![Docker account Security settings screenshot](../_static/docker-org-security.png)
11. Enter a meaningful name for your token and click on **Create**
![Docker account create new token screenshot](../_static/docker-org-create-token.png)
12. Copy the personal access token displayed on the next screen. **Note that you will not be able to see it again after you close the pop-up window**.
12. Copy the personal access token displayed on the next screen.
**Note that you will not be able to see it again after you close the pop-up window**.
13. Head back to your GitHub repository and click on the **Settings tab**.
![Github repository settings tab screenshot](../_static/github-create-secrets.png)
14. Click on the **Secrets** section and then on the **New repository secret** button on the top right corner (see image above).
15. Create a **DOCKERHUB_TOKEN** secret and paste the Personal Access Token from DockerHub in the **value** field.
![GitHub create secret token screenshot](../_static/github-secret-token.png)
16. Repeat the above step but creating a **DOCKERHUB_USERNAME** and replacing the _value_ field with your DockerHub username. Once you have completed these steps, your repository secrets section should look something like this:
16. Repeat the above step, this time creating a **DOCKERHUB_USERNAME** secret and putting your DockerHub username in the _value_ field.
Once you have completed these steps, your repository secrets section should look something like this:
![GitHub repository secrets created screenshot](../_static/github-secrets-completed.png)
## Defining Your Image
Make edits to the Dockerfile in your project to add third-party libraries and configure Jupyter
applications. Refer to the Dockerfiles for the core stacks (e.g.,
[jupyter/datascience-notebook](https://github.com/jupyter/docker-stacks/blob/master/datascience-notebook/Dockerfile))
applications.
Refer to the Dockerfiles for the core stacks (e.g., [jupyter/datascience-notebook](https://github.com/jupyter/docker-stacks/blob/master/datascience-notebook/Dockerfile))
to get a feel for what's possible and best practices.
[Submit pull requests](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request)
to your project repository on GitHub. Ensure your image builds correctly on GitHub actions before merging to
master or main. Refer to Docker Hub to build your master or main branch that you can `docker pull`.
to your project repository on GitHub.
Ensure your image builds correctly on GitHub Actions before merging to master or main.
Docker Hub will then build your master or main branch into an image that others can `docker pull`.
## Sharing Your Image
@@ -149,5 +149,5 @@ following:
1. Clone the [jupyter/docker-stacks](https://github.com/jupyter/docker-stacks) GitHub repository.
2. Open the `docs/using/selecting.md` source file and locate the **Community Stacks** section.
3. Add a bullet with a link to your project and a short description of what your Docker image contains.
4. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request)
(PR) with your changes. Maintainers will respond and work with you to address any formatting or content issues.
4. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with your changes.
Maintainers will respond and work with you to address any formatting or content issues.

View File

@@ -6,13 +6,13 @@ of the Docker images.
## How the Tests Work
GitHub executes `make build-test-all` against pull requests submitted to the `jupyter/docker-stacks`
repository. This `make` command builds every docker image. After building each image, the `make`
command executes `pytest` to run both image-specific tests like those in
repository.
This `make` command builds every docker image.
After building each image, the `make` command executes `pytest` to run both image-specific tests like those in
[base-notebook/test/](https://github.com/jupyter/docker-stacks/tree/master/base-notebook/test) and
common tests defined in [test/](https://github.com/jupyter/docker-stacks/tree/master/test). Both
kinds of tests make use of global [pytest fixtures](https://docs.pytest.org/en/latest/reference/fixtures.html)
defined in the [conftest.py](https://github.com/jupyter/docker-stacks/blob/master/conftest.py) file
at the root of the projects.
common tests defined in [test/](https://github.com/jupyter/docker-stacks/tree/master/test).
Both kinds of tests make use of global [pytest fixtures](https://docs.pytest.org/en/latest/reference/fixtures.html)
defined in the [conftest.py](https://github.com/jupyter/docker-stacks/blob/master/conftest.py) file at the root of the project.
## Contributing New Tests
@@ -22,7 +22,8 @@ Please follow the process below to add new tests:
[test/](https://github.com/jupyter/docker-stacks/tree/master/test) or create a new module.
2. If your test should run against a single image, add your test code to one of the modules in
`some-notebook/test/` or create a new module.
3. Build one or more images you intend to test and run the tests locally. If you use `make`, call:
3. Build one or more images you intend to test and run the tests locally.
If you use `make`, call:
```bash
make build/somestack-notebook

View File

@@ -1,7 +1,9 @@
# Doc Translations
We are delighted when members of the Jupyter community want to help translate these documentation pages to other languages. If you're interested, please visit links below below to join our team on [Transifex](https://transifex.com) and to start creating, reviewing, and updating translations of the Jupyter Docker Stacks documentation.
We are delighted when members of the Jupyter community want to help translate these documentation pages to other languages.
If you're interested, please visit the links below to join our team on [Transifex](https://transifex.com) and to start creating, reviewing, and updating translations of the Jupyter Docker Stacks documentation.
1. Follow the steps documented on the [Getting Started as a Translator](https://docs.transifex.com/getting-started-1/translators) page.
2. Look for _jupyter-docker-stacks_ when prompted to choose a translation team. Alternatively, visit <https://www.transifex.com/project-jupyter/jupyter-docker-stacks-1> after creating your account and request to join the project.
2. Look for _jupyter-docker-stacks_ when prompted to choose a translation team.
Alternatively, visit <https://www.transifex.com/project-jupyter/jupyter-docker-stacks-1> after creating your account and request to join the project.
3. See [Translating with the Web Editor](https://docs.transifex.com/translation/translating-with-the-web-editor) in the Transifex documentation.

View File

@@ -1,7 +1,8 @@
Jupyter Docker Stacks
=====================
Jupyter Docker Stacks are a set of ready-to-run Docker images containing Jupyter applications and interactive computing tools. You can use a stack image to do any of the following (and more):
Jupyter Docker Stacks are a set of ready-to-run Docker images containing Jupyter applications and interactive computing tools.
You can use a stack image to do any of the following (and more):
* Start a personal Jupyter Notebook server in a local Docker container
* Run JupyterLab servers for a team using JupyterHub
@@ -10,19 +11,30 @@ Jupyter Docker Stacks are a set of ready-to-run Docker images containing Jupyter
Quick Start
-----------
You can try a `recent build of the jupyter/base-notebook image on mybinder.org <https://mybinder.org/v2/gh/jupyter/docker-stacks/master?filepath=README.ipynb>`_ by simply clicking the preceding link.
Otherwise, three examples below may help you get started if you `have Docker installed <https://docs.docker.com/install/>`_, know :doc:`which Docker image <using/selecting>` you want to use, and want to launch a single Jupyter Notebook server in a container.
The other pages in this documentation describe additional uses and features in detail.
**Example 1:** This command pulls the ``jupyter/scipy-notebook`` image tagged ``33add21fab64`` from Docker Hub if it is not already present on the local host.
It then starts a container running a Jupyter Notebook server and exposes the server on host port 8888.
The server logs appear in the terminal.
Visiting ``http://<hostname>:8888/?token=<token>`` in a browser loads the Jupyter Notebook dashboard page, where ``hostname`` is the name of the computer running Docker and ``token`` is the secret token printed in the console.
The container remains intact for restart after the notebook server exits.::
docker run -p 8888:8888 jupyter/scipy-notebook:33add21fab64
**Example 2:** This command performs the same operations as **Example 1**, but it exposes the server on host port 10000 instead of port 8888.
Visiting ``http://<hostname>:10000/?token=<token>`` in a browser loads the Jupyter Notebook server, where ``hostname`` is the name of the computer running Docker and ``token`` is the secret token printed in the console.::
docker run -p 10000:8888 jupyter/scipy-notebook:33add21fab64
**Example 3:** This command pulls the ``jupyter/datascience-notebook`` image tagged ``33add21fab64`` from Docker Hub if it is not already present on the local host.
It then starts an *ephemeral* container running a Jupyter Notebook server and exposes the server on host port 10000.
The command mounts the current working directory on the host as ``/home/jovyan/work`` in the container.
The server logs appear in the terminal.
Visiting ``http://<hostname>:10000/lab?token=<token>`` in a browser loads JupyterLab, where ``hostname`` is the name of the computer running Docker and ``token`` is the secret token printed in the console.
Docker destroys the container after the notebook server exits, but any files written to ``~/work`` in the container remain intact on the host.::
docker run --rm -p 10000:8888 -e JUPYTER_ENABLE_LAB=yes -v "${PWD}":/home/jovyan/work jupyter/datascience-notebook:33add21fab64
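In all three examples, the ``<token>`` part of the URL is the secret token the server prints to the terminal. As a minimal sketch of extracting it with shell parameter expansion (the log line below is a made-up sample for illustration, not real server output):

```shell
# Hypothetical sample of the login URL line the notebook server prints.
log_line='http://127.0.0.1:8888/?token=abc123def456'

# Strip everything up to and including "token=" to recover the secret token.
token="${log_line#*token=}"
echo "${token}"
```

The same expansion works on a line captured from ``docker logs <container>`` if the server is running detached.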

View File

@@ -9,7 +9,7 @@ msgid ""
msgstr ""
"Project-Id-Version: docker-stacks latest\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2021-06-13 17:49+0000\n"
"POT-Creation-Date: 2021-06-21 16:38+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -18,12 +18,12 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../contributing/features.md:1 533431cb2ace40c9a0a1d827d3f3f7d4
#: ../../contributing/features.md:1 726d88e6e8f14d219cd8b2069e8ee558
msgid "New Features"
msgstr ""
# 64c3ecc68ada47afada78f945253c9e9
#: ../../contributing/features.md:3 cf7eb235c8344c4bb502897c8228ee31
#: ../../contributing/features.md:3 21c6114649214bc4bcb0cba8c242412d
msgid ""
"Thank you for contributing to the Jupyter Docker Stacks! We review pull "
"requests of new features (e.g., new packages, new scripts, new flags) to "
@@ -31,24 +31,24 @@ msgid ""
" maintaining the images over time."
msgstr ""
#: ../../contributing/features.md:7 b07a27b022094a5ea3a8a626edad7652
#: ../../contributing/features.md:7 b69084bd8ca442688e096613a08414a5
msgid "Suggesting a New Feature"
msgstr ""
# c995f8cabb1d4b4fb53a9c56ae8e017b
#: ../../contributing/features.md:9 fe629d1182f94b8fa06e7156ca1bc069
#: ../../contributing/features.md:9 133319edb48642d89965617ca520df42
msgid ""
"Please follow the process below to suggest a new feature for inclusion in"
" one of the core stacks:"
msgstr ""
#: ../../contributing/features.md:11 b4b23b1da8ab41c7a66ac97d9725bfac
#: ../../contributing/features.md:11 125d9289732f424ea151f21be8a9a4a4
msgid ""
"[Open a GitHub issue](https://github.com/jupyter/docker-stacks/issues) "
"describing the feature you'd like to contribute."
msgstr ""
#: ../../contributing/features.md:13 bfcd1afe25a04816871dd40810fecb9e
#: ../../contributing/features.md:13 22a5959849ce4eff989f2600284c294f
msgid ""
"Discuss with the maintainers whether the addition makes sense in [one of "
"the core stacks](../using/selecting.md#core-stacks), as a [recipe in the "
@@ -56,32 +56,32 @@ msgid ""
"something else entirely."
msgstr ""
#: ../../contributing/features.md:18 accfc3b1a23c4521a23e7b6bcaf73c9d
#: ../../contributing/features.md:18 655942aedeb9471697eade663f9268dd
msgid "Selection Criteria"
msgstr ""
# ca139cf0df684011bdf6f6f68e151796
#: ../../contributing/features.md:20 0efceae9abc246f0bff0dac7c39038a2
#: ../../contributing/features.md:20 d378c4d2c63d4778b28c28fb48a0389d
msgid ""
"Roughly speaking, we evaluate new features based on the following "
"criteria:"
msgstr ""
#: ../../contributing/features.md:22 42d6087c42c0426383e2bc261caec2a5
#: ../../contributing/features.md:22 2f6fc306cb0f42b89e1519fb46f80fd7
msgid ""
"**Usefulness to Jupyter users**: Is the feature generally applicable "
"across domains? Does it work with Jupyter Notebook, JupyterLab, "
"JupyterHub, etc.?"
msgstr ""
#: ../../contributing/features.md:24 45c1579efb0b498fad3b74363ec96530
#: ../../contributing/features.md:24 eec8fcbf86914424a108e6fe1abdd970
msgid ""
"**Fit with the image purpose**: Does the feature match the theme of the "
"stack in which it will be added? Would it fit better in a new, community "
"stack?"
msgstr ""
#: ../../contributing/features.md:26 825ec0d7440449969692272892ef5e7d
#: ../../contributing/features.md:26 de4739c3a36e4dcd9a3f4cac1eee10e1
msgid ""
"**Complexity of build / runtime configuration**: How many lines of code "
"does the feature require in one of the Dockerfiles or startup scripts? "
@@ -89,14 +89,14 @@ msgid ""
"use the images?"
msgstr ""
#: ../../contributing/features.md:29 5a8abe96fdb24ef6978e52ccf7ec4ee1
#: ../../contributing/features.md:29 0cab300e03044252bbec88ad26e51615
msgid ""
"**Impact on image metrics**: How many bytes does the feature and its "
"dependencies add to the image(s)? How many minutes do they add to the "
"build time?"
msgstr ""
#: ../../contributing/features.md:31 d9ead8db70034813b650b17e3fcd0c71
#: ../../contributing/features.md:31 b6fb3f81271d4d0098f130a4e5142c5a
msgid ""
"**Ability to support the addition**: Can existing maintainers answer user"
" questions and address future build issues? Are the contributors "
@@ -104,66 +104,63 @@ msgid ""
"ensure the feature continues to work over time?"
msgstr ""
#: ../../contributing/features.md:35 7b76c73cad614d01b233085fd07ea07e
#: ../../contributing/features.md:35 3040e51bb00b47ee8996e9260997deb7
msgid "Submitting a Pull Request"
msgstr ""
# f7ca9b40be90476eb97c8fcd67205e9d
#: ../../contributing/features.md:37 77f8c3197b254b4ea4e10ae8c7a55116
#: ../../contributing/features.md:37 f9bde4470eb04c61b055d4ff7ba653a0
msgid ""
"If there's agreement that the feature belongs in one or more of the core "
"stacks:"
msgstr ""
#: ../../contributing/features.md:39 eb0e5ce1989349c4919298fd868f69ca
#: ../../contributing/features.md:39 f16f97e89ffd48f8923c16417859e5c0
msgid ""
"Implement the feature in a local clone of the `jupyter/docker-stacks` "
"project."
msgstr ""
#: ../../contributing/features.md:40 ../../contributing/packages.md:16
#: 25f5d14b06864d2d860995a146ece8e8 f79e5f3ea9b8468997da680d9dfdd3a2
#: ../../contributing/features.md:40 b70a64d55bac4829b7ee544f59e6194b
msgid ""
"Please build the image locally before submitting a pull request. Building"
"Please build the image locally before submitting a pull request Building "
"the image locally shortens the debugging cycle by taking some load off "
"GitHub Actions, which graciously provide free build services for open "
"source projects like this one. If you use `make`, call:"
msgstr ""
#: ../../contributing/features.md:48 ../../contributing/packages.md:24
#: ../../contributing/tests.md:32 52e7a432a45e4e6aa7aa2f4e6c7eb2f1
#: 7c375a372f24454093c373f97778a9f4 e484fafe4ad2488ba767b8c4508248b6
#: ../../contributing/features.md:48 38b1b943efea448f94945aadebd4a6cc
msgid ""
"[Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A"
"-step-by-step-guide-on-preparing-and-submitting-a-pull-request)(PR) with "
"your changes."
msgstr ""
#: ../../contributing/features.md:50 ../../contributing/packages.md:26
#: ../../contributing/tests.md:34 89615a5dd93347098d00a093a1c94bc0
#: 977d453ab23a44f595d7ab9fa83d5f6e ad697539dd374b068a5cd55978c47697
#: ../../contributing/features.md:49 ../../contributing/packages.md:26
#: ../../contributing/tests.md:35 9ec3d508a04c41ccb660752bab736ff1
#: a86824b6f60e411ea8ffb5f7dce012ed b4213b1c36d949c7877b77527ab076b5
msgid ""
"Watch for GitHub to report a build success or failure for your PR on "
"GitHub."
msgstr ""
#: ../../contributing/features.md:51 8e1a748d5ede42e1b9065e4dd408426b
#: ../../contributing/features.md:50 e7ac395daac3431d8d8379eb51cb8755
msgid "Discuss changes with the maintainers and address any build issues."
msgstr ""
#: ../../contributing/issues.md:1 8ba9f40deb264d3ea4bb76225b86639c
#: ../../contributing/issues.md:1 7a1b09f04bd4408b9bcc1b5f4a9b177b
msgid "Project Issues"
msgstr ""
# 9c2a6e9f67354e86aca23758676fca43
#: ../../contributing/issues.md:3 a456514a45cd410c94f3dfed139be8ea
#: ../../contributing/issues.md:3 9c31fe8fae834dfb9630881263c35671
msgid ""
"We appreciate your taking the time to report an issue you encountered "
"using the Jupyter Docker Stacks. Please review the following guidelines "
"when reporting your problem."
msgstr ""
#: ../../contributing/issues.md:7 00b6169c66da4bb09006b9ea2ec146a5
#: ../../contributing/issues.md:6 b4ee985dac93493db3264516b2c00eb1
msgid ""
"If you believe youve found a security vulnerability in any of the "
"Jupyter projects included in Jupyter Docker Stacks images, please report "
@@ -173,7 +170,7 @@ msgid ""
"key](https://github.com/jupyter/jupyter.github.io/blob/master/assets/ipython_security.asc)."
msgstr ""
#: ../../contributing/issues.md:13 0f2776044bde4f53b88cddcbb8b076c9
#: ../../contributing/issues.md:9 d06e826c87fa4a999657835fbb495336
msgid ""
"If you think your problem is unique to the Jupyter Docker Stacks images, "
"please search the [jupyter/docker-stacks issue "
@@ -184,14 +181,14 @@ msgid ""
msgstr ""
# 69a18cc239b34b94800599bf185f58d6
#: ../../contributing/issues.md:19 d60b5741439c41bcacd0b71124dcf8d4
#: ../../contributing/issues.md:13 f68382ea23d64869abbbebab6cd9d34c
msgid ""
"If the issue you're seeing is with one of the open source libraries "
"included in the Docker images and is reproducible outside the images, "
"please file a bug with the appropriate open source project."
msgstr ""
#: ../../contributing/issues.md:22 06ee9e5629f14e18b49ef4ae44a23fe4
#: ../../contributing/issues.md:15 1a2f6c59316b4408aabb2f29f36a7636
msgid ""
"If you have a general question about how to use the Jupyter Docker Stacks"
" in your environment, in conjunction with other tools, with "
@@ -199,11 +196,11 @@ msgid ""
"Discourse site](https://discourse.jupyter.org)."
msgstr ""
#: ../../contributing/lint.md:1 db86a7064a27418f88764a006665a057
#: ../../contributing/lint.md:1 5120f45f45f94d67b58fb18757331872
msgid "Lint"
msgstr ""
#: ../../contributing/lint.md:3 a20b2766259048b8a6e3566f4439575b
#: ../../contributing/lint.md:3 9d1f555065a24a34a6eac88fb323c08b
msgid ""
"In order to enforce some rules **linters** are used in this project. "
"Linters can be run either during the **development phase** (by the "
@@ -212,89 +209,89 @@ msgid ""
"**git hooks** through [pre-commit][pre-commit]."
msgstr ""
#: ../../contributing/lint.md:7 49edecced7d340aabfce61eaf568db10
#: ../../contributing/lint.md:7 4c3e81a24bca43dfaa59f4a273f76f3a
msgid "Pre-commit hook"
msgstr ""
#: ../../contributing/lint.md:9 a9e7eba87b9a4eac9e22d135924dc614
#: ../../contributing/lint.md:9 46c6ee65381b4ce2af9249569e531071
msgid "Pre-commit hook installation"
msgstr ""
#: ../../contributing/lint.md:11 552b9e76a95b468cb581a7ede8f2e37a
#: ../../contributing/lint.md:11 60d2455a4bc64b7bb10b9eb7d06e22fa
msgid ""
"pre-commit is a Python package that needs to be installed. This can be "
"achieved by using the generic task used to install all Python development"
" dependencies."
msgstr ""
#: ../../contributing/lint.md:21 6178d1a235904897b5ae5100b386d8b3
#: ../../contributing/lint.md:21 124513da59b6412f942152dd99d63b2d
msgid ""
"Then the git hooks scripts configured for the project in `.pre-commit-"
"config.yaml` need to be installed in the local git repository."
msgstr ""
#: ../../contributing/lint.md:27 1c02c6e49269469bb3fb226df498b6c6
#: ../../contributing/lint.md:27 b2b353f6f8214a5ab5d62985f7369864
msgid "Run"
msgstr ""
#: ../../contributing/lint.md:29 1fe0a1218ae3458ca01f58756e0071b8
#: ../../contributing/lint.md:29 f61274e72911465ba58ed0271c3b752c
msgid ""
"Now pre-commit (and so configured hooks) will run automatically on `git "
"commit` on each changed file. However it is also possible to trigger it "
"against all files."
msgstr ""
#: ../../contributing/lint.md:32 77e867e4dd5e47ec908c55169bafb0eb
#: ../../contributing/lint.md:32 c74296fbcccb410f82191352679d8e60
msgid ""
"Note: Hadolint pre-commit uses docker to run, so docker should be running"
" while running this command."
msgstr ""
#: ../../contributing/lint.md:38 ef5b5200befc4f70a05bd994e782f842
#: ../../contributing/lint.md:38 c89502ec98114799a4d2d91ab958ac96
msgid "Image Lint"
msgstr ""
#: ../../contributing/lint.md:40 b0101f043870442eb95e08d3e50ebad0
#: ../../contributing/lint.md:40 bdd96336e6694f288d9238f8f2a311aa
msgid ""
"To comply with [Docker best practices][dbp], we are using the "
"[Hadolint][hadolint] tool to analyse each `Dockerfile` ."
msgstr ""
#: ../../contributing/lint.md:42 64b67989e7e141e6b65174ad1e31007b
#: ../../contributing/lint.md:42 c86246d827c645568edaee42dd06dce6
msgid "Ignoring Rules"
msgstr ""
#: ../../contributing/lint.md:44 99d58e485efd495c8279c7c4377fc07f
#: ../../contributing/lint.md:44 ea56575722d64c569196c28aaaef35a3
msgid ""
"Sometimes it is necessary to ignore [some rules][rules]. The following "
"rules are ignored by default for all images in the `.hadolint.yaml` file."
msgstr ""
#: ../../contributing/lint.md:47 2f79b0df269044c88ec9ee0eee333ff6
#: ../../contributing/lint.md:47 e7c9459bd8e04481a59a152a8097abe1
msgid "[`DL3006`][dl3006]: We use a specific policy to manage image tags."
msgstr ""
#: ../../contributing/lint.md:48 335e6acfc2ca4a2aa09683edeae5cd62
#: ../../contributing/lint.md:48 63a8ad9e8e2b408db8cbe5596e7ef88d
msgid "`base-notebook` `FROM` clause is fixed but based on an argument (`ARG`)."
msgstr ""
#: ../../contributing/lint.md:49 94b52a75568743858147c39b9c67b256
#: ../../contributing/lint.md:49 e57d407d824243dfbbd86641c3148bd3
msgid "Building downstream images from (`FROM`) the latest is done on purpose."
msgstr ""
#: ../../contributing/lint.md:50 cccaba3445984850b2bf39e25bc8b337
#: ../../contributing/lint.md:50 94df9364fc9e4d18a30c7a2274a4481f
msgid ""
"[`DL3008`][dl3008]: System packages are always updated (`apt-get`) to the"
" latest version."
msgstr ""
#: ../../contributing/lint.md:52 b1315459d4ed4276bf55fa8f7bbbe7f6
#: ../../contributing/lint.md:52 86e843d32c854ec18fc7be714f1d1a3d
msgid ""
"For other rules, the preferred way to do it is to flag ignored rules in "
"the `Dockerfile`."
msgstr ""
#: ../../contributing/lint.md:54 456f795cab254723aa1e1c6035a57283
#: ../../contributing/lint.md:54 7ed3bcc281cd4eb5a46f44e2f4dc6e0b
msgid ""
"It is also possible to ignore rules by using a special comment directly "
"above the Dockerfile instruction you want to make an exception for. "
@@ -302,12 +299,12 @@ msgid ""
"example:"
msgstr ""
#: ../../contributing/packages.md:1 e9b29adde194447eb0bece72d4e2807b
#: ../../contributing/packages.md:1 b3ec39ee1d964079a4ea34ad4f51654e
msgid "Package Updates"
msgstr ""
# 5f269a667f9a4c3ca342cfb49ecaefb2
#: ../../contributing/packages.md:3 00bc46cbd09b40ca9f1c261f3dfe5a18
#: ../../contributing/packages.md:3 7a82c6b1499e429e969d4421d903dfce
msgid ""
"We actively seek pull requests which update packages already included in "
"the project Dockerfiles. This is a great way for first-time contributors "
@@ -315,11 +312,11 @@ msgid ""
msgstr ""
# 30d4a79bce8d439d97e6e3555a088548
#: ../../contributing/packages.md:7 7a9d01fafdb64134a659657a30c933a1
#: ../../contributing/packages.md:7 5d1b3d8cc01b41bf8fa329a588ed579d
msgid "Please follow the process below to update a package version:"
msgstr ""
#: ../../contributing/packages.md:9 bfd8521b81b449028997f5cd058bc4d4
#: ../../contributing/packages.md:9 0dc465c531c44130947a76c7f5f530fd
msgid ""
"Locate the Dockerfile containing the library you wish to update (e.g., "
"[base-notebook/Dockerfile](https://github.com/jupyter/docker-"
@@ -328,7 +325,7 @@ msgid ""
"/scipy-notebook/Dockerfile))"
msgstr ""
#: ../../contributing/packages.md:12 dd60eedd6f8f44d8a89872cffc5981ea
#: ../../contributing/packages.md:12 52f02d498bec4509837ec165c7af8ddf
msgid ""
"Adjust the version number for the package. We prefer to pin the major and"
" minor version number of packages so as to minimize rebuild side-effects "
@@ -337,18 +334,34 @@ msgid ""
"`notebook=5.4.*`."
msgstr ""
#: ../../contributing/packages.md:27 f7883fd58b7e4f31ba0a7eca7b4f190f
#: ../../contributing/packages.md:16 5c9183dca62846638196ca85fc3adec6
msgid ""
"Please build the image locally before submitting a pull request. Building"
" the image locally shortens the debugging cycle by taking some load off "
"GitHub Actions, which graciously provide free build services for open "
"source projects like this one. If you use `make`, call:"
msgstr ""
#: ../../contributing/packages.md:24 ../../contributing/tests.md:33
#: 812e1075d49a47cd94bc55dd26771199 b6e54350efae4c578457d13c91e3d81a
msgid ""
"[Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A"
"-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with"
" your changes."
msgstr ""
#: ../../contributing/packages.md:27 8091a57ecf3645bfbb487fb085a3e8d0
msgid ""
"Discuss changes with the maintainers and address any build issues. "
"Version conflicts are the most common problem. You may need to upgrade "
"additional packages to fix build failures."
msgstr ""
#: ../../contributing/packages.md:30 8b2e63c2242e4ab198e8bedcec89c12d
#: ../../contributing/packages.md:31 bc841b6163584dc280829da29ade7dd7
msgid "Notes"
msgstr ""
#: ../../contributing/packages.md:32 631c654ddb594c8199920e2a7be32f6a
#: ../../contributing/packages.md:33 e04d788858424f3bacb15ce4c79995be
msgid ""
"In order to help identifying packages that can be updated you can use the"
" following helper tool. It will list all the packages installed in the "
@@ -356,11 +369,11 @@ msgid ""
"only on requested packages."
msgstr ""
#: ../../contributing/recipes.md:1 743ccc1fcd0843a085b175f303937faf
#: ../../contributing/recipes.md:1 a5a036826b2546e7a3aff0d17d75161a
msgid "New Recipes"
msgstr ""
#: ../../contributing/recipes.md:3 33dcdda8053f47e4a84231209867f9f4
#: ../../contributing/recipes.md:3 9472dd9c6c594d40bd0df9d9f95619f8
msgid ""
"We welcome contributions of [recipes](../using/recipes.md), short "
"examples of using, configuring, or extending the Docker Stacks, for "
@@ -368,25 +381,24 @@ msgid ""
"new recipe:"
msgstr ""
#: ../../contributing/recipes.md:5 8940835d7c8b4423b4eb0fd88a8f4bb8
#: ../../contributing/recipes.md:6 0a43c4218bad4b849df50503a591e38d
msgid "Open the `docs/using/recipes.md` source file."
msgstr ""
#: ../../contributing/recipes.md:6 6db10f4f7f07423c9eaee8c2f8303de7
#: ../../contributing/recipes.md:7 1cac607c48c549b89c23281ad7c42861
msgid ""
"Add a second-level Markdown heading naming your recipe at the bottom of "
"the file (e.g., `## Add the RISE extension`)"
msgstr ""
# 8838b0ff2be24c23afaca9a6f43a9b66
#: ../../contributing/recipes.md:7 75a0266aaa9a4de08d2d8b42e6c80b17
#: ../../contributing/recipes.md:8 f5ad71f62bf347f7a32fae4607fda297
msgid ""
"Write the body of your recipe under the heading, including whatever "
"command line, Dockerfile, links, etc. you need."
msgstr ""
#: ../../contributing/recipes.md:8 ../../contributing/stacks.md:158
#: 11169926ef4949ca9ef48eee1a3bf6d2 4ef597a285e948039eab8b204e1707c3
#: ../../contributing/recipes.md:9 4b11242f90c74bce94623e33a53eb479
msgid ""
"[Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A"
"-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with"
@@ -394,11 +406,11 @@ msgid ""
"formatting or content issues."
msgstr ""
#: ../../contributing/stacks.md:1 0e31728392cd4f4d9bf9e228fd5b8252
#: ../../contributing/stacks.md:1 fca1023c780041818b807f22577a7acd
msgid "Community Stacks"
msgstr ""
#: ../../contributing/stacks.md:3 f4cf4b6914fe4475af1989f156329be0
#: ../../contributing/stacks.md:3 b9ba5b1a6a5e418bb516321d84f3b871
msgid ""
"We love to see the community create and share new Jupyter Docker images. "
"We've put together a [cookiecutter project](https://github.com/jupyter"
@@ -407,48 +419,48 @@ msgid ""
"Docker. Following these steps will:"
msgstr ""
#: ../../contributing/stacks.md:8 45d6598b3ed646e2ad14e5a4a81c2564
#: ../../contributing/stacks.md:8 39df798495a442e08c9f672465a77350
msgid ""
"Setup a project on GitHub containing a Dockerfile based on either the "
"`jupyter/base-notebook` or `jupyter/minimal-notebook` image."
msgstr ""
#: ../../contributing/stacks.md:10 09897b1220a745eab8324d16d34316cc
#: ../../contributing/stacks.md:9 6e9c642210a5413c9bc7a5ac86a949e2
msgid ""
"Configure GitHub Actions to build and test your image when users submit "
"pull requests to your repository."
msgstr ""
#: ../../contributing/stacks.md:12 58838145d6674ee2a2d81bd9cdf5c64f
#: ../../contributing/stacks.md:10 0f663053ed004db3909beb03fc00d374
msgid "Configure Docker Hub to build and host your images for others to use."
msgstr ""
#: ../../contributing/stacks.md:13 08c921d6a4464f57af27b8201de0c954
#: ../../contributing/stacks.md:11 c9c09824b14348faa067098299aa2b75
msgid ""
"Update the [list of community stacks](../using/selecting.md#community-"
"stacks) in this documentation to include your image."
msgstr ""
# 8e0fd1dc73cc40ceab19307d0cd809c1
#: ../../contributing/stacks.md:15 8245deead08c4673883c429b2af17adc
#: ../../contributing/stacks.md:13 f72cb4abc74f4fb1b75f550e8a7f0e4e
msgid ""
"This approach mirrors how we build and share the core stack images. Feel "
"free to follow it or pave your own path using alternative services and "
"build tools."
msgstr ""
#: ../../contributing/stacks.md:18 dce46dd37bc345daa86dc50e207024dc
#: ../../contributing/stacks.md:16 43af64b7ee044898a626fa607f847aa0
msgid "Creating a Project"
msgstr ""
#: ../../contributing/stacks.md:20 f200706b6fb34d9e846bbfd86cbcdadb
#: ../../contributing/stacks.md:18 81f2e43ae6fe40a9b983d892a05900f7
msgid ""
"First, install "
"[cookiecutter](https://github.com/cookiecutter/cookiecutter) using pip or"
" conda:"
msgstr ""
#: ../../contributing/stacks.md:26 58de4428c0b347469285bc7da987f991
#: ../../contributing/stacks.md:24 5bd03ddc056a432284d2061e695001c7
msgid ""
"Run the cookiecutter command pointing to the [jupyter/cookiecutter-"
"docker-stacks](https://github.com/jupyter/cookiecutter-docker-stacks) "
@@ -456,13 +468,13 @@ msgid ""
msgstr ""
# 676ff068156d4ca7b1043b4a4fe2d1f1
#: ../../contributing/stacks.md:34 a7cbb3a362854405b6aa0f0973a7e75f
#: ../../contributing/stacks.md:30 2a91e25753ed42b597f6f60793613a89
msgid ""
"Enter a name for your new stack image. This will serve as both the git "
"repository name and the part of the Docker image name after the slash."
msgstr ""
#: ../../contributing/stacks.md:41 bde0e68fe34d479ab84c092bea5bf016
#: ../../contributing/stacks.md:37 3cea3268ea8246b8a431023064a98646
msgid ""
"Enter the user or organization name under which this stack will reside on"
" Docker Hub. You must have access to manage this Docker Hub organization "
@@ -470,40 +482,40 @@ msgid ""
msgstr ""
# b796c2d7c08b4a1db5cdfd3de7d84c16
#: ../../contributing/stacks.md:49 fb7b9537dd2143f39a1fd5af5133454c
#: ../../contributing/stacks.md:44 f0ce464a45b34f36bab7959607fd422d
msgid ""
"Select an image from the jupyter/docker-stacks project that will serve as"
" the base for your new image."
msgstr ""
# 7ef9d73286d04b12a1350e8d9565df65
#: ../../contributing/stacks.md:56 13e4ccdb878642b2854acd919ad5d2bd
#: ../../contributing/stacks.md:50 91842f34918247aca3279e2baa8d9894
msgid "Enter a longer description of the stack for your README."
msgstr ""
# 479d3a5c6ef9481a9dc4033224c540fa
#: ../../contributing/stacks.md:62 20e3f2d2a97a441bad3119a18f06c917
#: ../../contributing/stacks.md:56 10c14c6932ba4cb4a15e625fc1428584
msgid "Initialize your project as a Git repository and push it to GitHub."
msgstr ""
#: ../../contributing/stacks.md:74 8bf985a5459c4713a7b9bffc24e9c13c
#: ../../contributing/stacks.md:68 a87641289f464cfbac9602b4cd1b1b1e
msgid "Configuring GitHub actions"
msgstr ""
#: ../../contributing/stacks.md:76 793d5dd4857f4295b1355d74b2b45860
#: ../../contributing/stacks.md:70 e3c69bc4a8954ad1ba975d237310de50
msgid ""
"The cookiecutter template comes with a `.github/workflows/docker.yml` "
"file, which allows you to use GitHub actions to build your Docker image "
"whenever you or someone else submits a pull request."
msgstr ""
#: ../../contributing/stacks.md:78 5f360604062e4644b124a86ad38ab2bd
#: ../../contributing/stacks.md:72 0a7b2a2f1b4047f1850f8f872229f039
msgid ""
"By default the `.github/workflows/docker.yaml` file has the following "
"triggers configuration:"
msgstr ""
#: ../../contributing/stacks.md:99 77e4468916714e52b086c3f665ed9872
#: ../../contributing/stacks.md:87 567ba488168e4d7d9b683d406c6d7c3d
msgid ""
"This will trigger the CI pipeline whenever you push to your `main` or "
"`master` branch and when any Pull Requests are made to your repository. "
@@ -512,22 +524,22 @@ msgid ""
"/events-that-trigger-workflows)."
msgstr ""
#: ../../contributing/stacks.md:101 5bdcb30d566c4ac0bafc9dd59cbe898d
#: ../../contributing/stacks.md:90 b492fb27af1244e9b51f00f9c64108f0
msgid "Commit your changes and push to GitHub."
msgstr ""
#: ../../contributing/stacks.md:102 1c1e2a8db3404af4b7b4a5d93cc96862
#: ../../contributing/stacks.md:91 108b72ae09e045a5b5581a671a16f344
msgid ""
"Head back to your repository and click on the **Actions** tab. ![GitHub "
"actions tab screenshot](../_static/github-actions-tab.png) From there, "
"you can click on the workflows on the left-hand side of the screen."
msgstr ""
#: ../../contributing/stacks.md:102 e7b86f06c1c44b39aa23887cadc142a2
#: ../../contributing/stacks.md:91 de67bb65fc2149f4b0157934563f8da1
msgid "GitHub actions tab screenshot"
msgstr ""
#: ../../contributing/stacks.md:105 6ffc6fdf36ec439a92888285b1af576a
#: ../../contributing/stacks.md:94 6fdbbf9225634587908de5b957636eb1
msgid ""
"In the next screen, you will be able to see information about the "
"workflow run and duration. If you click again on the button with the "
@@ -535,141 +547,141 @@ msgid ""
"actions workflow run screenshot](../_static/github-actions-workflow.png)"
msgstr ""
#: ../../contributing/stacks.md:105 c2d76d6446d44552b91b759e7d33427f
#: ../../contributing/stacks.md:94 e86c72f6a22d467b94bbbf60fb56d987
msgid "Github actions workflow run screenshot"
msgstr ""
#: ../../contributing/stacks.md:108 076da3372e784fc4b7d2693c75abdb58
#: ../../contributing/stacks.md:98 9796dc3c26e94fb0ba1d68b24ca7cd96
msgid "Configuring Docker Hub"
msgstr ""
#: ../../contributing/stacks.md:110 fffd7e56710b41f5800f734c2b36cc12
#: ../../contributing/stacks.md:100 b2fc49097c204bf8b9a45561fe5eaaad
msgid ""
"Now, configure Docker Hub to build your stack image and push it to Docker"
" Hub repository whenever you merge a GitHub pull request to the master "
"branch of your project."
msgstr ""
#: ../../contributing/stacks.md:113 42adaa019d9d4176bd1e359a06b7f78c
#: ../../contributing/stacks.md:103 065fd0e89ec84aa586c54e704758200c
msgid "Visit [https://hub.docker.com/](https://hub.docker.com/) and log in."
msgstr ""
#: ../../contributing/stacks.md:114 eb7cbf963a9e4a8c81219845c49c7aec
#: ../../contributing/stacks.md:104 034ead098e404c06b67d48372be8ae04
msgid ""
"Select the account or organization matching the one you entered when "
"prompted with `stack_org` by the cookiecutter. ![Docker account selection"
" screenshot](../_static/docker-org-select.png)"
msgstr ""
#: ../../contributing/stacks.md:114 ../../contributing/stacks.md:124
#: 35a6abe488b9429bb03ae38c10b54f3c c8048508f0134a52a148eafbfb0e0b05
#: ../../contributing/stacks.md:104 ../../contributing/stacks.md:114
#: 969b5502a5b1477c8b131e11c5728333 f75a75f885994895bcc53cb656f77a2d
msgid "Docker account selection screenshot"
msgstr ""
#: ../../contributing/stacks.md:116 cf3c5b8229954bf3a67b61c33c240fde
#: ../../contributing/stacks.md:106 9f1039658c3b4b0c8cbae7662d506b7f
msgid "Scroll to the bottom of the page and click **Create repository**."
msgstr ""
#: ../../contributing/stacks.md:117 ca2bd720e39c476c9e91fe0a8e950407
#: ../../contributing/stacks.md:107 966b5884f6e4448f8723e1b1ebd36a41
msgid ""
"Enter the name of the image matching the one you entered when prompted "
"with `stack_name` by the cookiecutter. ![Docker image name and "
"description screenshot](../_static/docker-repo-name.png)"
msgstr ""
#: ../../contributing/stacks.md:117 95cc590684ee4d57b8ac1cb4728630c2
#: ../../contributing/stacks.md:107 e7a49b28c9504670854fb8868579033d
msgid "Docker image name and description screenshot"
msgstr ""
# 79092e5007ba4bdead594a71e30cd58a
#: ../../contributing/stacks.md:119 27d62843f112477ba1f61dfa4d03e57e
#: ../../contributing/stacks.md:109 991e7cf6cf1543058c011e52d1aba7db
msgid "Enter a description for your image."
msgstr ""
#: ../../contributing/stacks.md:120 1355e6dcf2b24dc3a4d7e43eff3b458f
#: ../../contributing/stacks.md:110 9a9ae7757dbc466cbb2f776fbe44cc1f
msgid ""
"Click **GitHub** under the **Build Settings** and follow the prompts to "
"connect your account if it is not already connected."
msgstr ""
#: ../../contributing/stacks.md:121 989370abe19145ab97ddf42e30142efd
#: ../../contributing/stacks.md:111 6dfb3cdac63445ca99ed5705215d6ebf
msgid ""
"Select the GitHub organization and repository containing your image "
"definition from the dropdowns. ![Docker from GitHub automated build "
"screenshot](../_static/docker-github-settings.png)"
msgstr ""
#: ../../contributing/stacks.md:121 19dd9f18aa6549ef90eed2b0b44e49f3
#: ../../contributing/stacks.md:111 7d0089ac72184890926b0723b4d6bebc
msgid "Docker from GitHub automated build screenshot"
msgstr ""
#: ../../contributing/stacks.md:123 f4905aac6205404484c05c02d880ceb3
#: ../../contributing/stacks.md:113 1d988e0b34ca4e82aec16e8f4ec187a8
msgid "Click the **Create and Build** button."
msgstr ""
#: ../../contributing/stacks.md:124 ef435b71972b4504b61f0a68690ff7ea
#: ../../contributing/stacks.md:114 e6b27e4f4b91416aa87ef0102e72f5c6
msgid ""
"Click on your avatar on the top-right corner and select Account settings."
" ![Docker account selection screenshot](../_static/docker-org-select.png)"
msgstr ""
#: ../../contributing/stacks.md:126 e0ab1859757949af96e42ece1c0ae85e
#: ../../contributing/stacks.md:116 0727ffeb3b954b1eb01dffa75274b5eb
msgid ""
"Click on **Security** and then click on the **New Access Token** button. "
"![Docker account Security settings screenshot](../_static/docker-org-"
"security.png)"
msgstr ""
#: ../../contributing/stacks.md:126 35118c9a050b444b8d4fe295700c9a37
#: ../../contributing/stacks.md:116 007fe9a3e9944506857bf4971b09e30d
msgid "Docker account Security settings screenshot"
msgstr ""
#: ../../contributing/stacks.md:128 65bfd28181164353bed5a502fe16318f
#: ../../contributing/stacks.md:118 25a84f7fad5843a193eb3c3d29cc186d
msgid ""
"Enter a meaningful name for your token and click on **Create** ![Docker "
"account create new token screenshot](../_static/docker-org-create-"
"token.png)"
msgstr ""
#: ../../contributing/stacks.md:128 e83e8cf9a31548379147d2def53f1ae4
#: ../../contributing/stacks.md:118 489510be69574c8aa225e55103cd8729
msgid "Docker account create new token screenshot"
msgstr ""
#: ../../contributing/stacks.md:130 b4ad68a142dd4d4db0285ef780e026c6
#: ../../contributing/stacks.md:120 5ddd367d3f1847c89338ddcc6096948b
msgid ""
"Copy the personal access token displayed on the next screen. **Note that "
"you will not be able to see it again after you close the pop-up window**."
msgstr ""
#: ../../contributing/stacks.md:131 ca2649cba03349a18ad971cdc6fb990f
#: ../../contributing/stacks.md:122 4c52f13d6a164d538126e4a3fde8d858
msgid ""
"Head back to your GitHub repository and click on the **Settings tab**. "
"![Github repository settings tab screenshot](../_static/github-create-"
"secrets.png)"
msgstr ""
#: ../../contributing/stacks.md:131 4db15770f11d4becabd1309c3f20968c
#: ../../contributing/stacks.md:122 39867a1b3c94415ab1b976b8f2ae5e3a
msgid "Github repository settings tab screenshot"
msgstr ""
#: ../../contributing/stacks.md:133 04c542fcbed74c69ba21eece124429a3
#: ../../contributing/stacks.md:124 c3076a8a160d47cc92c101238e0834c3
msgid ""
"Click on the **Secrets** section and then on the **New repository "
"secret** button on the top right corner (see image above)."
msgstr ""
#: ../../contributing/stacks.md:134 c71bb593479b4d7aa6ceed747b301cec
#: ../../contributing/stacks.md:125 3bd73698d6d5491fb1696b2e6edda80a
msgid ""
"Create a **DOCKERHUB_TOKEN** secret and paste the Personal Access Token "
"from DockerHub in the **value** field. ![GitHub create secret token "
"screenshot](../_static/github-secret-token.png)"
msgstr ""
#: ../../contributing/stacks.md:134 96931fcc297d4b79a17952825b229b72
#: ../../contributing/stacks.md:125 5f1e3b8aa2ab417ca5a8986179a0027f
msgid "GitHub create secret token screenshot"
msgstr ""
#: ../../contributing/stacks.md:136 d4f84016396948c0ac8535a7b197c5cd
#: ../../contributing/stacks.md:127 5bf805f63e7043f680e88e23a9393668
msgid ""
"Repeat the above step but creating a **DOCKERHUB_USERNAME** and replacing"
" the _value_ field with your DockerHub username. Once you have completed "
@@ -678,15 +690,15 @@ msgid ""
"secrets-completed.png)"
msgstr ""
#: ../../contributing/stacks.md:136 5e5f92f26f204af69ea2194899e76230
#: ../../contributing/stacks.md:127 900be7b1aa7247d39d67f713ab8ff101
msgid "GitHub repository secrets created screenshot"
msgstr ""
#: ../../contributing/stacks.md:139 6511126bdd0f479491c64e5ee93d5841
#: ../../contributing/stacks.md:131 db5acfbd08a349a0814323f2b847e00a
msgid "Defining Your Image"
msgstr ""
#: ../../contributing/stacks.md:141 34ddbe70533148fe9a4a2c8fa1370265
#: ../../contributing/stacks.md:133 4dc172c227f147eb9952cd2d3ab5c0ce
msgid ""
"Make edits to the Dockerfile in your project to add third-party libraries"
" and configure Jupyter applications. Refer to the Dockerfiles for the "
@@ -696,7 +708,7 @@ msgid ""
"best practices."
msgstr ""
#: ../../contributing/stacks.md:146 4fdb5b9098c74a9194ed2a81e2838269
#: ../../contributing/stacks.md:138 2f36f37a0b6640aabc1ce12f83477a61
msgid ""
"[Submit pull requests](https://github.com/PointCloudLibrary/pcl/wiki/A"
"-step-by-step-guide-on-preparing-and-submitting-a-pull-request) to your "
@@ -705,52 +717,60 @@ msgid ""
"build your master or main branch that you can `docker pull`."
msgstr ""
#: ../../contributing/stacks.md:150 eadd495dd3a74a30a79cf44cc0bb45f6
#: ../../contributing/stacks.md:144 57eebdbf5d4544b1ba3f83a73f7b851d
msgid "Sharing Your Image"
msgstr ""
# d8e9f1a37f4c4a72bb630e7a3b265b92
#: ../../contributing/stacks.md:152 2a0efb6179934db89053bebe607a8d08
#: ../../contributing/stacks.md:146 3b403df78fe447f6a1fbcd51e3e29d6c
msgid ""
"Finally, if you'd like to add a link to your project to this "
"documentation site, please do the following:"
msgstr ""
#: ../../contributing/stacks.md:155 c780267313be4e12a4ea3dadf5e7910f
#: ../../contributing/stacks.md:149 db5916f874f6433883e28940b95da941
msgid ""
"Clone the [jupyter/docker-stacks](https://github.com/jupyter/docker-"
"stacks) GitHub repository."
msgstr ""
#: ../../contributing/stacks.md:156 424fdbcdb3404db6b0109c0ad895da1d
#: ../../contributing/stacks.md:150 daac8dbc32f146c896b446dbc035fee0
msgid ""
"Open the `docs/using/selecting.md` source file and locate the **Community"
" Stacks** section."
msgstr ""
# 9d37dfec6fba48e6966c254b476e1e81
#: ../../contributing/stacks.md:157 45f4e6e53d14425e86a399c8e9a71677
#: ../../contributing/stacks.md:151 6a28583127d74505941965decb0e96e3
msgid ""
"Add a bullet with a link to your project and a short description of what "
"your Docker image contains."
msgstr ""
#: ../../contributing/tests.md:1 c52f0baba0f04ef69389c8d4269a5760
#: ../../contributing/stacks.md:152 cacf1844a20148a48f5dac5f6bfe2d89
msgid ""
"[Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A"
"-step-by-step-guide-on-preparing-and-submitting-a-pull-request)(PR) with "
"your changes. Maintainers will respond and work with you to address any "
"formatting or content issues."
msgstr ""
#: ../../contributing/tests.md:1 f67b57cc1e684b8293172781dcea0eb2
msgid "Image Tests"
msgstr ""
# 6dbd44985f3c4ba1a3823c90c5944ad0
#: ../../contributing/tests.md:3 bb3e1b974c514dc6b96a6b60857149e1
#: ../../contributing/tests.md:3 d0983107636143a993c8962cde882aa3
msgid ""
"We greatly appreciate pull requests that extend the automated tests that "
"vet the basic functionality of the Docker images."
msgstr ""
#: ../../contributing/tests.md:6 365d09e94b1d4649b610b61dba91b305
#: ../../contributing/tests.md:6 6da71c7b37ba439a9ce1fad9c0280782
msgid "How the Tests Work"
msgstr ""
#: ../../contributing/tests.md:8 e577574944104892b58c62afb001db73
#: ../../contributing/tests.md:8 2446dc37fd3c4bf4a58f2aa0a2f11e92
msgid ""
"GitHub executes `make build-test-all` against pull requests submitted to "
"the `jupyter/docker-stacks` repository. This `make` command builds every "
@@ -765,45 +785,45 @@ msgid ""
"stacks/blob/master/conftest.py) file at the root of the projects."
msgstr ""
#: ../../contributing/tests.md:17 8b011201b0eb4b0cbfe5748e8a82cf9d
#: ../../contributing/tests.md:17 9d47f7af99bb4f50a483c88e97ebcc44
msgid "Contributing New Tests"
msgstr ""
# d317e6be0fbf487e8528ff1fe0bbdb78
#: ../../contributing/tests.md:19 17d984f9048648e981058d226036347f
#: ../../contributing/tests.md:19 6425e1522d984309a3a01eb820aa6e61
msgid "Please follow the process below to add new tests:"
msgstr ""
#: ../../contributing/tests.md:21 be2a7556d4f94e42bd8136ea95ca813c
#: ../../contributing/tests.md:21 98b69b96620e498d90eb514feffdaa29
msgid ""
"If the test should run against every image built, add your test code to "
"one of the modules in [test/](https://github.com/jupyter/docker-"
"stacks/tree/master/test) or create a new module."
msgstr ""
#: ../../contributing/tests.md:23 37e9b84fc7fc4fd1a04de505a7c21a6a
#: ../../contributing/tests.md:23 7e1f9e96eec54e0f909419dccc77af05
msgid ""
"If your test should run against a single image, add your test code to one"
" of the modules in `some-notebook/test/` or create a new module."
msgstr ""
#: ../../contributing/tests.md:25 f4721fde71df403da9cd9e1fc1080309
#: ../../contributing/tests.md:25 3540a45df3004bb794a745bbad88e117
msgid ""
"Build one or more images you intend to test and run the tests locally. If"
" you use `make`, call:"
msgstr ""
#: ../../contributing/tests.md:35 33522eaa591f4507a6c754738363ebc5
#: ../../contributing/tests.md:36 620b0961e82748328275724399ae8d7e
msgid ""
"Discuss changes with the maintainers and address any issues running the "
"tests on GitHub."
msgstr ""
#: ../../contributing/translations.md:1 c4e14f02a2de43e38dfdcc18bd0c3037
#: ../../contributing/translations.md:1 761c807cb45c4887b7ea555c156cf6f2
msgid "Doc Translations"
msgstr ""
#: ../../contributing/translations.md:3 23ed68b217aa4c61bb8211b588ab5493
#: ../../contributing/translations.md:3 53d4d850e70c4a5da944ec4653e9a18c
msgid ""
"We are delighted when members of the Jupyter community want to help "
"translate these documentation pages to other languages. If you're "
@@ -812,14 +832,14 @@ msgid ""
"updating translations of the Jupyter Docker Stacks documentation."
msgstr ""
#: ../../contributing/translations.md:5 26cc6594b9ba4a94a3035978b16a0d1e
#: ../../contributing/translations.md:6 19925199b8344aa4a7f97dca9c4b0799
msgid ""
"Follow the steps documented on the [Getting Started as a "
"Translator](https://docs.transifex.com/getting-started-1/translators) "
"page."
msgstr ""
#: ../../contributing/translations.md:6 a642e488015c4ffc89ed77c18fd7ddab
#: ../../contributing/translations.md:7 c420ee850ee84bf6a1c439c3ee2d24da
msgid ""
"Look for _jupyter-docker-stacks_ when prompted to choose a translation "
"team. Alternatively, visit <https://www.transifex.com/project-jupyter"
@@ -827,7 +847,7 @@ msgid ""
" the project."
msgstr ""
#: ../../contributing/translations.md:7 8bbbfd130929478295b38e9ac58f2add
#: ../../contributing/translations.md:9 8e6f5644963144fd9bcb1f56c7f12b37
msgid ""
"See [Translating with the Web "
"Editor](https://docs.transifex.com/translation/translating-with-the-web-"

File diff suppressed because it is too large

@@ -9,16 +9,15 @@ To build new images and publish them to the Docker Hub registry, do the followin
3. Monitor the merge commit GitHub Actions status.
**Note**: we believe GitHub Actions are quite reliable, so if an error occurs, please investigate.
The process of building Docker images in PRs is exactly the same after merging to master, except there is an additional `push` step.
4. Try to avoid merging another PR to master until all pending builds complete. This way you will know which commit
might have broken the build and also have correct tags for moving tags (like `python` version).
4. Try to avoid merging another PR to master until all pending builds complete.
This way you will know which commit might have broken the build and also have correct tags for moving tags (like `python` version).
## Updating the Ubuntu Base Image
When there's a security fix in the Ubuntu base image or after some time passes, it's a good idea to
update the pinned SHA in the
When there's a security fix in the Ubuntu base image or after some time passes, it's a good idea to update the pinned SHA in the
[jupyter/base-notebook Dockerfile](https://github.com/jupyter/docker-stacks/blob/master/base-notebook/Dockerfile).
Submit it as a regular PR and go through the build process. Expect the build to take a while to
complete: every image layer will rebuild.
Submit it as a regular PR and go through the build process.
Expect the build to take a while to complete: every image layer will rebuild.
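As a sketch, one way to look up the current digest before updating the pinned SHA is shown below; the `ubuntu:focal` tag is an assumption here — use whichever tag the Dockerfile actually pins.

```shell
# Pull the latest image for the pinned tag, then print its
# content-addressable digest (the value to pin in the Dockerfile).
# The tag name "focal" is illustrative only.
docker pull ubuntu:focal
docker inspect --format='{{index .RepoDigests 0}}' ubuntu:focal
```

The printed `ubuntu@sha256:...` value is what replaces the old digest in the `FROM` line.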
## Adding a New Core Image to Docker Hub


@@ -1,36 +0,0 @@
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build
set SPHINXPROJ=docker-stacks
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.https://www.sphinx-doc.org/en/master/
exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
:end
popd


@@ -1,12 +1,15 @@
# Common Features
A container launched from any Jupyter Docker Stacks image runs a Jupyter Notebook server by default. The container does so by executing a `start-notebook.sh` script. This script configures the internal container environment and then runs `jupyter notebook`, passing it any command line arguments received.
A container launched from any Jupyter Docker Stacks image runs a Jupyter Notebook server by default.
The container does so by executing a `start-notebook.sh` script.
This script configures the internal container environment and then runs `jupyter notebook`, passing it any command line arguments received.
This page describes the options supported by the startup script as well as how to bypass it to run alternative commands.
## Notebook Options
You can pass [Jupyter command line options](https://jupyter-notebook.readthedocs.io/en/stable/config.html#options) to the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, you can run the following:
You can pass [Jupyter command line options](https://jupyter-notebook.readthedocs.io/en/stable/config.html#options) to the `start-notebook.sh` script when launching the container.
For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, you can run the following:
```bash
docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e'
@@ -21,21 +24,60 @@ docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp
## Docker Options
You may instruct the `start-notebook.sh` script to customize the container environment before launching
the notebook server. You do so by passing arguments to the `docker run` command.
the notebook server.
You do so by passing arguments to the `docker run` command.
- `-e NB_USER=jovyan` - Instructs the startup script to change the default container username from `jovyan` to the provided value. Causes the script to rename the `jovyan` user home folder. For this option to take effect, you must run the container with `--user root`, set the working directory `-w /home/${NB_USER}` and set the environment variable `-e CHOWN_HOME=yes` (see below for detail). This feature is useful when mounting host volumes with specific home folder.
- `-e NB_UID=1000` - Instructs the startup script to switch the numeric user ID of `${NB_USER}` to the given value. This feature is useful when mounting host volumes with specific owner permissions. For this option to take effect, you must run the container with `--user root`. (The startup script will `su ${NB_USER}` after adjusting the user ID.) You might consider using modern Docker options `--user` and `--group-add` instead. See the last bullet below for details.
- `-e NB_GID=100` - Instructs the startup script to change the primary group of`${NB_USER}` to `${NB_GID}` (the new group is added with a name of `${NB_GROUP}` if it is defined, otherwise the group is named `${NB_USER}`). This feature is useful when mounting host volumes with specific group permissions. For this option to take effect, you must run the container with `--user root`. (The startup script will `su ${NB_USER}` after adjusting the group ID.) You might consider using modern Docker options `--user` and `--group-add` instead. See the last bullet below for details. The user is added to supplemental group `users` (gid 100) in order to allow write access to the home directory and `/opt/conda`. If you override the user/group logic, ensure the user stays in group `users` if you want them to be able to modify files in the image.
- `-e NB_GROUP=<name>` - The name used for `${NB_GID}`, which defaults to `${NB_USER}`. This is only used if `${NB_GID}` is specified and completely optional: there is only cosmetic effect.
- `-e NB_UMASK=<umask>` - Configures Jupyter to use a different umask value from default, i.e. `022`. For example, if setting umask to `002`, new files will be readable and writable by group members instead of just writable by the owner. Wikipedia has a good article about [umask](https://en.wikipedia.org/wiki/Umask). Feel free to read it in order to choose the value that better fits your needs. Default value should fit most situations. Note that `NB_UMASK` when set only applies to the Jupyter process itself - you cannot use it to set a umask for additional files created during run-hooks e.g. via `pip` or `conda` - if you need to set a umask for these you must set `umask` for each command.
- `-e CHOWN_HOME=yes` - Instructs the startup script to change the `${NB_USER}` home directory owner and group to the current value of `${NB_UID}` and `${NB_GID}`. This change will take effect even if the user home directory is mounted from the host using `-v` as described below. The change is **not** applied recursively by default. You can change modify the `chown` behavior by setting `CHOWN_HOME_OPTS` (e.g., `-e CHOWN_HOME_OPTS='-R'`).
- `-e CHOWN_EXTRA="<some dir>,<some other dir>"` - Instructs the startup script to change the owner and group of each comma-separated container directory to the current value of `${NB_UID}` and `${NB_GID}`. The change is **not** applied recursively by default. You can change modify the `chown` behavior by setting `CHOWN_EXTRA_OPTS` (e.g., `-e CHOWN_EXTRA_OPTS='-R'`).
- `-e GRANT_SUDO=yes` - Instructs the startup script to grant the `NB_USER` user passwordless `sudo` capability. You do **not** need this option to allow the user to `conda` or `pip` install additional packages. This option is useful, however, when you wish to give `${NB_USER}` the ability to install OS packages with `apt` or modify other root-owned files in the container. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su ${NB_USER}` after adding `${NB_USER}` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.**
- `-e NB_USER=jovyan` - Instructs the startup script to change the default container username from `jovyan` to the provided value.
Causes the script to rename the `jovyan` user home folder.
For this option to take effect, you must run the container with `--user root`, set the working directory `-w /home/${NB_USER}` and set the environment variable `-e CHOWN_HOME=yes` (see below for detail).
This feature is useful when mounting host volumes with specific home folder.
- `-e NB_UID=1000` - Instructs the startup script to switch the numeric user ID of `${NB_USER}` to the given value.
This feature is useful when mounting host volumes with specific owner permissions.
For this option to take effect, you must run the container with `--user root`.
(The startup script will `su ${NB_USER}` after adjusting the user ID.)
You might consider using modern Docker options `--user` and `--group-add` instead.
See the last bullet below for details.
- `-e NB_GID=100` - Instructs the startup script to change the primary group of `${NB_USER}` to `${NB_GID}`
(the new group is added with a name of `${NB_GROUP}` if it is defined, otherwise the group is named `${NB_USER}`).
This feature is useful when mounting host volumes with specific group permissions.
For this option to take effect, you must run the container with `--user root`.
(The startup script will `su ${NB_USER}` after adjusting the group ID.)
You might consider using modern Docker options `--user` and `--group-add` instead.
See the last bullet below for details.
The user is added to supplemental group `users` (gid 100) in order to allow write access to the home directory and `/opt/conda`.
If you override the user/group logic, ensure the user stays in group `users` if you want them to be able to modify files in the image.
- `-e NB_GROUP=<name>` - The name used for `${NB_GID}`, which defaults to `${NB_USER}`.
This is completely optional and only used if `${NB_GID}` is specified: the effect is purely cosmetic.
- `-e NB_UMASK=<umask>` - Configures Jupyter to use a different umask value from the default, `022`.
For example, if setting umask to `002`, new files will be readable and writable by group members instead of just writable by the owner.
Wikipedia has a good article about [umask](https://en.wikipedia.org/wiki/Umask).
Feel free to read it in order to choose the value that better fits your needs.
Default value should fit most situations.
Note that `NB_UMASK`, when set, only applies to the Jupyter process itself.
You cannot use it to set a umask for additional files created during run-hooks, e.g. via `pip` or `conda`; if you need a umask for these, you must set `umask` for each command.
- `-e CHOWN_HOME=yes` - Instructs the startup script to change the `${NB_USER}` home directory owner and group to the current value of `${NB_UID}` and `${NB_GID}`.
This change will take effect even if the user home directory is mounted from the host using `-v` as described below.
The change is **not** applied recursively by default.
You can modify the `chown` behavior by setting `CHOWN_HOME_OPTS` (e.g., `-e CHOWN_HOME_OPTS='-R'`).
- `-e CHOWN_EXTRA="<some dir>,<some other dir>"` - Instructs the startup script to change the owner and group of each comma-separated container directory to the current value of `${NB_UID}` and `${NB_GID}`.
The change is **not** applied recursively by default.
You can modify the `chown` behavior by setting `CHOWN_EXTRA_OPTS` (e.g., `-e CHOWN_EXTRA_OPTS='-R'`).
- `-e GRANT_SUDO=yes` - Instructs the startup script to grant the `NB_USER` user passwordless `sudo` capability.
You do **not** need this option to allow the user to `conda` or `pip` install additional packages.
This option is useful, however, when you wish to give `${NB_USER}` the ability to install OS packages with `apt` or modify other root-owned files in the container.
For this option to take effect, you must run the container with `--user root`.
(The `start-notebook.sh` script will `su ${NB_USER}` after adding `${NB_USER}` to sudoers.)
**You should only enable `sudo` if you trust the user or if the container is running on an isolated host.**
- `-e GEN_CERT=yes` - Instructs the startup script to generate a self-signed SSL certificate and configure Jupyter Notebook to use it to accept encrypted HTTPS connections.
- `-e JUPYTER_ENABLE_LAB=yes` - Instructs the startup script to run `jupyter lab` instead of the default `jupyter notebook` command. Useful in container orchestration environments where setting environment variables is easier than change command line parameters.
- `-e RESTARTABLE=yes` - Runs Jupyter in a loop so that quitting Jupyter does not cause the container to exit. This may be useful when you need to install extensions that require restarting Jupyter.
- `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).**
- `--user 5000 --group-add users` - Launches the container with a specific user ID and adds that user to the `users` group so that it can modify files in the default home directory and `/opt/conda`. You can use these arguments as alternatives to setting `${NB_UID}` and `${NB_GID}`.
- `-e JUPYTER_ENABLE_LAB=yes` - Instructs the startup script to run `jupyter lab` instead of the default `jupyter notebook` command.
Useful in container orchestration environments where setting environment variables is easier than changing command line parameters.
- `-e RESTARTABLE=yes` - Runs Jupyter in a loop so that quitting Jupyter does not cause the container to exit.
This may be useful when you need to install extensions that require restarting Jupyter.
- `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as a folder in the container.
Useful when you want to preserve notebooks and other work even after the container is destroyed.
**You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).**
- `--user 5000 --group-add users` - Launches the container with a specific user ID and adds that user to the `users` group so that it can modify files in the default home directory and `/opt/conda`.
You can use these arguments as alternatives to setting `${NB_UID}` and `${NB_GID}`.
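Several of the options above can be combined in a single invocation. The sketch below is illustrative only — the UID value and host path are assumptions, not recommendations:

```shell
# Run as root so the startup script can adjust IDs and ownership,
# then drop to ${NB_USER}. UID/GID values and the host path below
# are placeholders -- substitute your own.
docker run -it --rm \
    --user root \
    -e NB_UID=1000 -e NB_GID=100 \
    -e CHOWN_HOME=yes \
    -v /some/host/folder/for/work:/home/jovyan/work \
    jupyter/base-notebook
```

Without `--user root`, the `NB_UID`, `NB_GID`, and `CHOWN_HOME` settings above would be silently ignored.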
## Startup Hooks
@@ -52,7 +94,8 @@ script for execution details.
## SSL Certificates
You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt` and use them, you might run the following:
You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections.
For example, to mount a host folder containing a `notebook.key` and `notebook.crt` and use them, you might run the following:
```bash
docker run -d -p 8888:8888 \
@@ -62,7 +105,8 @@ docker run -d -p 8888:8888 \
--NotebookApp.certfile=/etc/ssl/notebook/notebook.crt
```
Alternatively, you may mount a single PEM file containing both the key and certificate. For example:
Alternatively, you may mount a single PEM file containing both the key and certificate.
For example:
```bash
docker run -d -p 8888:8888 \
@@ -71,11 +115,13 @@ docker run -d -p 8888:8888 \
--NotebookApp.certfile=/etc/ssl/notebook.pem
```
In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root).
In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file.
The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root).
For additional information about using SSL, see the following:
- The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain.
- The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use
[Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain.
- The [jupyter_notebook_config.py](https://github.com/jupyter/docker-stacks/blob/master/base-notebook/jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate.
- The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#securing-a-notebook-server) for best practices about securing a public notebook server in general.
@@ -83,7 +129,9 @@ For additional information about using SSL, see the following:
### start.sh
The `start-notebook.sh` script actually inherits most of its option handling capability from a more generic `start.sh` script. The `start.sh` script supports all of the features described above, but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following:
The `start-notebook.sh` script actually inherits most of its option handling capability from a more generic `start.sh` script.
The `start.sh` script supports all of the features described above, but allows you to specify an arbitrary command to execute.
For example, to run the text-based `ipython` console in a container, do the following:
```bash
docker run -it --rm jupyter/base-notebook start.sh ipython
@@ -99,13 +147,17 @@ This script is particularly useful when you derive a new Dockerfile from this im
### Others
You can bypass the provided scripts and specify an arbitrary start command. If you do, keep in mind that features supported by the `start.sh` script and its kin will not function (e.g., `GRANT_SUDO`).
You can bypass the provided scripts and specify an arbitrary start command.
If you do, keep in mind that features supported by the `start.sh` script and its kin will not function (e.g., `GRANT_SUDO`).
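For example, a minimal way to bypass the scripts entirely is to request an interactive shell as the container command:

```shell
# Start a plain bash shell instead of start.sh / start-notebook.sh;
# environment-driven features such as GRANT_SUDO will NOT apply here.
docker run -it --rm jupyter/base-notebook bash
```

This is mainly useful for debugging the image contents rather than for day-to-day notebook use.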
## Conda Environments
The default Python 3.x [Conda environment](https://conda.io/projects/conda/en/latest/user-guide/concepts/environments.html) resides in `/opt/conda`. The `/opt/conda/bin` directory is part of the default `jovyan` user's `${PATH}`. That directory is also whitelisted for use in `sudo` commands by the `start.sh` script.
The default Python 3.x [Conda environment](https://conda.io/projects/conda/en/latest/user-guide/concepts/environments.html) resides in `/opt/conda`.
The `/opt/conda/bin` directory is part of the default `jovyan` user's `${PATH}`.
That directory is also whitelisted for use in `sudo` commands by the `start.sh` script.
The `jovyan` user has full read/write access to the `/opt/conda` directory. You can use either `pip`, `conda` or `mamba` to install new packages without any additional permissions.
The `jovyan` user has full read/write access to the `/opt/conda` directory.
You can use `pip`, `conda`, or `mamba` to install new packages without any additional permissions.
```bash
# install a package into the default (python 3.x) environment and cleanup after the installation


@@ -1,19 +1,16 @@
# Contributed Recipes
Users sometimes share interesting ways of using the Jupyter Docker Stacks. We encourage users to
[contribute these recipes](../contributing/recipes.md) to the documentation in case they prove
Users sometimes share interesting ways of using the Jupyter Docker Stacks.
We encourage users to [contribute these recipes](../contributing/recipes.md) to the documentation in case they prove
useful to other members of the community by submitting a pull request to `docs/using/recipes.md`.
The sections below capture this knowledge.
## Using `sudo` within a container
Password authentication is disabled for the `NB_USER` (e.g., `jovyan`). This choice was made to
avoid distributing images with a weak default password that users ~might~ will forget to change
before running a container on a publicly accessible host.
Password authentication is disabled for the `NB_USER` (e.g., `jovyan`).
This choice was made to avoid distributing images with a weak default password that users ~might~ will forget to change before running a container on a publicly accessible host.
You can grant the within-container `NB_USER` passwordless `sudo` access by adding
`-e GRANT_SUDO=yes` and `--user root` to your Docker command line or appropriate container
orchestrator config.
You can grant the within-container `NB_USER` passwordless `sudo` access by adding `-e GRANT_SUDO=yes` and `--user root` to your Docker command line or appropriate container orchestrator config.
For example:
@@ -21,8 +18,8 @@ For example:
docker run -it -e GRANT_SUDO=yes --user root jupyter/minimal-notebook
```
**You should only enable `sudo` if you trust the user and/or if the container is running on an
isolated host.** See [Docker security documentation](https://docs.docker.com/engine/security/userns-remap/) for more information about running containers as `root`.
**You should only enable `sudo` if you trust the user and/or if the container is running on an isolated host.**
See [Docker security documentation](https://docs.docker.com/engine/security/userns-remap/) for more information about running containers as `root`.
## Using `pip install` or `conda install` in a Child Docker image
@@ -44,7 +41,8 @@ docker build --rm -t jupyter/my-datascience-notebook .
```
To use a requirements.txt file, first create your `requirements.txt` file with the listing of
packages desired. Next, create a new Dockerfile like the one shown below.
packages desired.
Next, create a new Dockerfile like the one shown below.
```dockerfile
# Start from a core stack version
@@ -73,9 +71,8 @@ Ref: [docker-stacks/commit/79169618d571506304934a7b29039085e77db78c](https://git
## Add a Python 2.x environment
Python 2.x was removed from all images on August 10th, 2017, starting in tag `cc9feab481f7`. You can
add a Python 2.x environment by defining your own Dockerfile inheriting from one of the images like
so:
Python 2.x was removed from all images on August 10th, 2017, starting in tag `cc9feab481f7`.
You can add a Python 2.x environment by defining your own Dockerfile inheriting from one of the images like so:
```dockerfile
# Choose your desired base image
@@ -150,7 +147,8 @@ Run jupyterlab using a command such as
## Dask JupyterLab Extension
[Dask JupyterLab Extension](https://github.com/dask/dask-labextension) provides a JupyterLab extension to manage Dask clusters, as well as embed Dask's dashboard plots directly into JupyterLab panes. Create the Dockerfile as:
[Dask JupyterLab Extension](https://github.com/dask/dask-labextension) provides a JupyterLab extension to manage Dask clusters, as well as embed Dask's dashboard plots directly into JupyterLab panes.
Create the Dockerfile as:
```dockerfile
# Start from a core stack version
@@ -208,8 +206,8 @@ Credit: [Paolo D.](https://github.com/pdonorio) based on
## xgboost
You need to install conda's gcc for Python xgboost to work properly. Otherwise, you'll get an
exception about libgomp.so.1 missing GOMP_4.0.
You need to install conda's gcc for Python xgboost to work properly.
Otherwise, you'll get an exception about libgomp.so.1 missing GOMP_4.0.
```bash
conda install --quiet --yes gcc && \
@@ -233,25 +231,25 @@ Sometimes it is useful to run the Jupyter instance behind a nginx proxy, for ins
- you may have many different services in addition to Jupyter running on the same server, and want
  nginx to help improve server performance and manage the connections
Here is a [quick example NGINX configuration](https://gist.github.com/cboettig/8643341bd3c93b62b5c2)
to get started. You'll need a server, a `.crt` and `.key` file for your server, and `docker` &
`docker-compose` installed. Then just download the files at that gist and run `docker-compose up -d`
to test it out. Customize the `nginx.conf` file to set the desired paths and add other services.
Here is a [quick example NGINX configuration](https://gist.github.com/cboettig/8643341bd3c93b62b5c2) to get started.
You'll need a server, a `.crt` and `.key` file for your server, and `docker` & `docker-compose` installed.
Then just download the files at that gist and run `docker-compose up -d` to test it out.
Customize the `nginx.conf` file to set the desired paths and add other services.
## Host volume mounts and notebook errors
If you are mounting a host directory as `/home/jovyan/work` in your container and you receive
permission errors or connection errors when you create a notebook, be sure that the `jovyan` user
(UID=1000 by default) has read/write access to the directory on the host. Alternatively, specify the
UID of the `jovyan` user on container startup using the `-e NB_UID` option described in the
(UID=1000 by default) has read/write access to the directory on the host.
Alternatively, specify the UID of the `jovyan` user on container startup using the `-e NB_UID` option described in the
[Common Features, Docker Options section](../using/common.html#Docker-Options).
Ref: <https://github.com/jupyter/docker-stacks/issues/199>
## Manpage installation
Most containers, including our Ubuntu base image, ship without manpages installed to save space. You
can use the following dockerfile to inherit from one of our images to enable manpages:
Most containers, including our Ubuntu base image, ship without manpages installed to save space.
You can use the following Dockerfile to inherit from one of our images to enable manpages:
```dockerfile
# Choose your desired base image
@@ -467,21 +465,29 @@ RUN pip install --quiet --no-cache-dir jupyter_dashboards faker && \
USER root
# Ensure we overwrite the kernel config so that toree connects to cluster
RUN jupyter toree install --sys-prefix --spark_opts="--master yarn --deploy-mode client --driver-memory 512m --executor-memory 512m --executor-cores 1 --driver-java-options -Dhdp.version=2.5.3.0-37 --conf spark.hadoop.yarn.timeline-service.enabled=false"
RUN jupyter toree install --sys-prefix --spark_opts="\
    --master yarn \
    --deploy-mode client \
    --driver-memory 512m \
    --executor-memory 512m \
    --executor-cores 1 \
    --driver-java-options \
    -Dhdp.version=2.5.3.0-37 \
    --conf spark.hadoop.yarn.timeline-service.enabled=false \
    "
USER ${NB_UID}
```
Credit: [britishbadger](https://github.com/britishbadger) from
[docker-stacks/issues/369](https://github.com/jupyter/docker-stacks/issues/369)
Credit: [britishbadger](https://github.com/britishbadger) from [docker-stacks/issues/369](https://github.com/jupyter/docker-stacks/issues/369)
## Run Jupyter Notebook/Lab inside an already secured environment (i.e., with no token)
(Adapted from [issue 728](https://github.com/jupyter/docker-stacks/issues/728))
The default security is very good. There are use cases, encouraged by containers, where the jupyter
container and the system it runs within, lie inside the security boundary. In these use cases it is
convenient to launch the server without a password or token. In this case, you should use the
`start.sh` script to launch the server with no token:
The default security is very good.
There are use cases, encouraged by containers, where the Jupyter container and the system it runs within lie inside the security boundary.
In these use cases it is convenient to launch the server without a password or token.
In this case, you should use the `start.sh` script to launch the server with no token:
For jupyterlab:
@@ -517,7 +523,8 @@ Ref: <https://github.com/jupyter/docker-stacks/issues/675>
## Enable auto-sklearn notebooks
Using `auto-sklearn` requires `swig`, which the other notebook images lack, so it cant be experimented with. Also, there is no Conda package for `auto-sklearn`.
Using `auto-sklearn` requires `swig`, which the other notebook images lack, so it can't be experimented with.
Also, there is no Conda package for `auto-sklearn`.
```dockerfile
ARG BASE_CONTAINER=jupyter/scipy-notebook
@@ -539,7 +546,8 @@ RUN pip install --quiet --no-cache-dir auto-sklearn && \
## Enable Delta Lake in Spark notebooks
Please note that the [Delta Lake](https://delta.io/) packages are only available for Spark version > `3.0`. By adding the properties to `spark-defaults.conf`, the user no longer needs to enable Delta support in each notebook.
Please note that the [Delta Lake](https://delta.io/) packages are only available for Spark version > `3.0`.
By adding the properties to `spark-defaults.conf`, the user no longer needs to enable Delta support in each notebook.
```dockerfile
FROM jupyter/pyspark-notebook:latest

View File

@@ -9,9 +9,13 @@ This section provides details about the second.
## Using the Docker CLI
You can launch a local Docker container from the Jupyter Docker Stacks using the [Docker command line interface](https://docs.docker.com/engine/reference/commandline/cli/). There are numerous ways to configure containers using the CLI. The following are some common patterns.
You can launch a local Docker container from the Jupyter Docker Stacks using the [Docker command line interface](https://docs.docker.com/engine/reference/commandline/cli/).
There are numerous ways to configure containers using the CLI.
The following are some common patterns.
**Example 1** This command pulls the `jupyter/scipy-notebook` image tagged `33add21fab64` from Docker Hub if it is not already present on the local host. It then starts a container running a Jupyter Notebook server and exposes the server on host port 8888. The server logs appear in the terminal and include a URL to the notebook server.
**Example 1** This command pulls the `jupyter/scipy-notebook` image tagged `33add21fab64` from Docker Hub if it is not already present on the local host.
It then starts a container running a Jupyter Notebook server and exposes the server on host port 8888.
The server logs appear in the terminal and include a URL to the notebook server.
```bash
$ docker run -p 8888:8888 jupyter/scipy-notebook:33add21fab64
@@ -52,7 +56,9 @@ $ docker rm d67fe77f1a84
d67fe77f1a84
```
**Example 2** This command pulls the `jupyter/r-notebook` image tagged `33add21fab64` from Docker Hub if it is not already present on the local host. It then starts a container running a Jupyter Notebook server and exposes the server on host port 10000. The server logs appear in the terminal and include a URL to the notebook server, but with the internal container port (8888) instead of the the correct host port (10000).
**Example 2** This command pulls the `jupyter/r-notebook` image tagged `33add21fab64` from Docker Hub if it is not already present on the local host.
It then starts a container running a Jupyter Notebook server and exposes the server on host port 10000.
The server logs appear in the terminal and include a URL to the notebook server, but with the internal container port (8888) instead of the correct host port (10000).
```bash
$ docker run --rm -p 10000:8888 -v "${PWD}":/home/jovyan/work jupyter/r-notebook:33add21fab64
@@ -74,9 +80,12 @@ Executing the command: jupyter notebook
http://localhost:8888/?token=3b8dce890cb65570fb0d9c4a41ae067f7604873bd604f5ac
```
Pressing `Ctrl-C` shuts down the notebook server and immediately destroys the Docker container. Files written to `~/work` in the container remain touched. Any other changes made in the container are lost.
Pressing `Ctrl-C` shuts down the notebook server and immediately destroys the Docker container.
Files written to `~/work` in the container remain intact.
Any other changes made in the container are lost.
**Example 3** This command pulls the `jupyter/all-spark-notebook` image currently tagged `latest` from Docker Hub if an image tagged `latest` is not already present on the local host. It then starts a container named `notebook` running a JupyterLab server and exposes the server on a randomly selected port.
**Example 3** This command pulls the `jupyter/all-spark-notebook` image currently tagged `latest` from Docker Hub if an image tagged `latest` is not already present on the local host.
It then starts a container named `notebook` running a JupyterLab server and exposes the server on a randomly selected port.
```bash
docker run -d -P --name notebook jupyter/all-spark-notebook
@@ -112,12 +121,23 @@ notebook
## Using Binder
[Binder](https://mybinder.org/) is a service that allows you to create and share custom computing environments for projects in version control. You can use any of the Jupyter Docker Stacks images as a basis for a Binder-compatible Dockerfile. See the [docker-stacks example](https://mybinder.readthedocs.io/en/latest/sample_repos.html#using-a-docker-image-from-the-jupyter-docker-stacks-repository) and [Using a Dockerfile](https://mybinder.readthedocs.io/en/latest/tutorials/dockerfile.html) sections in the [Binder documentation](https://mybinder.readthedocs.io/en/latest/index.html) for instructions.
[Binder](https://mybinder.org/) is a service that allows you to create and share custom computing environments for projects in version control.
You can use any of the Jupyter Docker Stacks images as a basis for a Binder-compatible Dockerfile.
See the
[docker-stacks example](https://mybinder.readthedocs.io/en/latest/sample_repos.html#using-a-docker-image-from-the-jupyter-docker-stacks-repository) and
[Using a Dockerfile](https://mybinder.readthedocs.io/en/latest/tutorials/dockerfile.html) sections in the
[Binder documentation](https://mybinder.readthedocs.io/en/latest/index.html) for instructions.
## Using JupyterHub
You can configure JupyterHub to launcher Docker containers from the Jupyter Docker Stacks images. If you've been following the [Zero to JupyterHub with Kubernetes](https://zero-to-jupyterhub.readthedocs.io/en/latest/) guide, see the [Use an existing Docker image](https://zero-to-jupyterhub.readthedocs.io/en/latest/jupyterhub/customizing/user-environment.html#choose-and-use-an-existing-docker-image) section for details. If you have a custom JupyterHub deployment, see the [Picking or building a Docker image](https://github.com/jupyterhub/dockerspawner#picking-or-building-a-docker-image) instructions for the [dockerspawner](https://github.com/jupyterhub/dockerspawner) instead.
You can configure JupyterHub to launch Docker containers from the Jupyter Docker Stacks images.
If you've been following the [Zero to JupyterHub with Kubernetes](https://zero-to-jupyterhub.readthedocs.io/en/latest/) guide,
see the [Use an existing Docker image](https://zero-to-jupyterhub.readthedocs.io/en/latest/jupyterhub/customizing/user-environment.html#choose-and-use-an-existing-docker-image) section for details.
If you have a custom JupyterHub deployment, see the [Picking or building a Docker image](https://github.com/jupyterhub/dockerspawner#picking-or-building-a-docker-image)
instructions for the [dockerspawner](https://github.com/jupyterhub/dockerspawner) instead.
## Using Other Tools and Services
You can use the Jupyter Docker Stacks with any Docker-compatible technology (e.g., [Docker Compose](https://docs.docker.com/compose/), [docker-py](https://github.com/docker/docker-py), your favorite cloud container service). See the documentation of the tool, library, or service for details about how to reference, configure, and launch containers from these images.
You can use the Jupyter Docker Stacks with any Docker-compatible technology
(e.g., [Docker Compose](https://docs.docker.com/compose/), [docker-py](https://github.com/docker/docker-py), your favorite cloud container service).
See the documentation of the tool, library, or service for details about how to reference, configure, and launch containers from these images.

View File

@@ -13,10 +13,8 @@ This section provides details about the first.
## Core Stacks
The Jupyter team maintains a set of Docker image definitions in the
<https://github.com/jupyter/docker-stacks> GitHub
repository. The following sections describe these images including their contents, relationships,
and versioning strategy.
The Jupyter team maintains a set of Docker image definitions in the <https://github.com/jupyter/docker-stacks> GitHub repository.
The following sections describe these images including their contents, relationships, and versioning strategy.
### jupyter/base-notebook
@@ -24,8 +22,8 @@ and versioning strategy.
[Dockerfile commit history](https://github.com/jupyter/docker-stacks/commits/master/base-notebook/Dockerfile) |
[Docker Hub image tags](https://hub.docker.com/r/jupyter/base-notebook/tags/)
`jupyter/base-notebook` is a small image supporting the
[options common across all core stacks](common.md). It is the basis for all other stacks.
`jupyter/base-notebook` is a small image supporting the [options common across all core stacks](common.md).
It is the basis for all other stacks.
- Minimally-functional Jupyter Notebook server (e.g., no LaTeX support for saving notebooks as PDFs)
- [Miniforge](https://github.com/conda-forge/miniforge) Python 3.x in `/opt/conda` with two package managers
@@ -37,8 +35,7 @@ and versioning strategy.
with ownership over the `/home/jovyan` and `/opt/conda` paths
- `tini` as the container entrypoint and a `start-notebook.sh` script as the default command
- A `start-singleuser.sh` script useful for launching containers in JupyterHub
- A `start.sh` script useful for running alternative commands in the container (e.g. `ipython`,
`jupyter kernelgateway`, `jupyter lab`)
- A `start.sh` script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`)
- Options for a self-signed HTTPS certificate and passwordless sudo
### jupyter/minimal-notebook
@@ -188,29 +185,26 @@ communities.
### Image Relationships
The following diagram depicts the build dependency tree of the core images. (i.e., the `FROM`
statements in their Dockerfiles). Any given image inherits the complete content of all ancestor
images pointing to it.
The following diagram depicts the build dependency tree of the core images (i.e., the `FROM` statements in their Dockerfiles).
Any given image inherits the complete content of all ancestor images pointing to it.
[![Image inheritance
diagram](../images/inherit.svg)](http://interactive.blockdiag.com/?compression=deflate&src=eJyFzTEPgjAQhuHdX9Gws5sQjGzujsaYKxzmQrlr2msMGv-71K0srO_3XGud9NNA8DSfgzESCFlBSdi0xkvQAKTNugw4QnL6GIU10hvX-Zh7Z24OLLq2SjaxpvP10lX35vCf6pOxELFmUbQiUz4oQhYzMc3gCrRt2cWe_FKosmSjyFHC6OS1AwdQWCtyj7sfh523_BI9hKlQ25YdOFdv5fcH0kiEMA)
### Builds
Pull requests to the `jupyter/docker-stacks` repository trigger builds of all images on GitHub
Actions. These images are for testing purposes only and are not saved for further use. When pull requests
merge to master, all images rebuild on Docker Hub and become available to `docker pull` from
Docker Hub.
Pull requests to the `jupyter/docker-stacks` repository trigger builds of all images on GitHub Actions.
These images are for testing purposes only and are not saved for further use.
When pull requests merge to master, all images rebuild on Docker Hub and become available to `docker pull` from Docker Hub.
### Versioning
The `latest` tag in each Docker Hub repository tracks the master branch `HEAD` reference on GitHub.
`latest` is a moving target, by definition, and will have backward-incompatible changes regularly.
Every image on Docker Hub also receives a 12-character tag which corresponds with the git commit SHA
that triggered the image build. You can inspect the state of the `jupyter/docker-stacks` repository
for that commit to review the definition of the image (e.g., images with tag `33add21fab64` were built
from <https://github.com/jupyter/docker-stacks/tree/33add21fab64>.
Every image on Docker Hub also receives a 12-character tag which corresponds with the git commit SHA that triggered the image build.
You can inspect the state of the `jupyter/docker-stacks` repository for that commit to review the definition of the image
(e.g., images with tag `33add21fab64` were built from <https://github.com/jupyter/docker-stacks/tree/33add21fab64>).
You must refer to git-SHA image tags when stability and reproducibility are important in your work.
(e.g. `FROM jupyter/scipy-notebook:33add21fab64`, `docker run -it --rm jupyter/scipy-notebook:33add21fab64`).
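For example, a downstream Dockerfile pinned to a git-SHA tag might look like the following sketch (the installed package is illustrative):

```dockerfile
# Pin to an immutable git-SHA tag rather than the moving `latest` tag
FROM jupyter/scipy-notebook:33add21fab64

# With the base image fixed, rebuilds only vary in the layers below
RUN pip install --quiet --no-cache-dir altair && \
    fix-permissions "${CONDA_DIR}" && \
    fix-permissions "/home/${NB_USER}"
```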
@@ -220,18 +214,17 @@ You should only use `latest` when a one-off container instance is acceptable
## Community Stacks
The core stacks are just a tiny sample of what's possible when combining Jupyter with other
technologies. We encourage members of the Jupyter community to create their own stacks based on the
technologies.
We encourage members of the Jupyter community to create their own stacks based on the
core images and link them below.
- [csharp-notebook is a community Jupyter Docker Stack image. Try C# in Jupyter Notebooks](https://github.com/tlinnet/csharp-notebook).
The image includes more than 200 Jupyter Notebooks with example C# code and can readily be tried
online via mybinder.org. Click here to launch
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/tlinnet/csharp-notebook/master).
The image includes more than 200 Jupyter Notebooks with example C# code and can readily be tried online via mybinder.org.
Try it on [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/tlinnet/csharp-notebook/master).
- [education-notebook is a community Jupyter Docker Stack image](https://github.com/umsi-mads/education-notebook).
The image includes nbgrader and RISE on top of the datascience-notebook image. Click here to
launch it on
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/umsi-mads/education-notebook/master).
The image includes nbgrader and RISE on top of the datascience-notebook image.
Try it on [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/umsi-mads/education-notebook/master).
- **crosscompass/ihaskell-notebook**
@@ -242,36 +235,34 @@ core images and link them below.
`crosscompass/ihaskell-notebook` is based on [IHaskell](https://github.com/gibiansky/IHaskell).
Includes popular packages and example notebooks.
Try it on
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/jamesdbrock/learn-you-a-haskell-notebook/master?urlpath=lab/tree/ihaskell_examples/ihaskell/IHaskell.ipynb)
Try it on [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/jamesdbrock/learn-you-a-haskell-notebook/master?urlpath=lab/tree/ihaskell_examples/ihaskell/IHaskell.ipynb)
- [java-notebook is a community Jupyter Docker Stack image](https://github.com/jbindinga/java-notebook).
The image includes [IJava](https://github.com/SpencerPark/IJava) kernel on top of the
minimal-notebook image. Click here to launch it on
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/jbindinga/java-notebook/master).
The image includes [IJava](https://github.com/SpencerPark/IJava) kernel on top of the minimal-notebook image.
Try it on [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/jbindinga/java-notebook/master).
- [sage-notebook](https://github.com/sharpTrick/sage-notebook) is a community Jupyter Docker Stack
image with the [sagemath](https://www.sagemath.org) kernel on top of the minimal-notebook image. Click
here to launch it on
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/sharpTrick/sage-notebook/master).
- [sage-notebook](https://github.com/sharpTrick/sage-notebook)
is a community Jupyter Docker Stack image with the [sagemath](https://www.sagemath.org) kernel on top of the minimal-notebook image.
Try it on [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/sharpTrick/sage-notebook/master).
- [GPU-Jupyter](https://github.com/iot-salzburg/gpu-jupyter/): Leverage Jupyter Notebooks with the
power of your NVIDIA GPU and perform GPU calculations using Tensorflow and Pytorch in
collaborative notebooks. This is done by generating a Dockerfile, that consists of the
**nvidia/cuda** base image, the well-maintained **docker-stacks** that is integrated as submodule
and GPU-able libraries like **Tensorflow**, **Keras** and **PyTorch** on top of it.
power of your NVIDIA GPU and perform GPU calculations using Tensorflow and Pytorch in collaborative notebooks.
This is done by generating a Dockerfile that consists of the **nvidia/cuda** base image,
the well-maintained **docker-stacks**, which is integrated as a submodule, and
GPU-able libraries like **Tensorflow**, **Keras** and **PyTorch** on top of it.
- [PRP GPU Jupyter repo](https://gitlab.nautilus.optiputer.net/prp/jupyter-stack/-/tree/prp) and [Registry](https://gitlab.nautilus.optiputer.net/prp/jupyter-stack/container_registry): PRP (Pacific Research Platform) maintained registry for jupyter stack based on NVIDIA CUDA-enabled image. Added the PRP image with Pytorch and some other python packages, and GUI Desktop notebook based on <https://github.com/jupyterhub/jupyter-remote-desktop-proxy>.
- [PRP GPU Jupyter repo](https://gitlab.nautilus.optiputer.net/prp/jupyter-stack/-/tree/prp) and [Registry](https://gitlab.nautilus.optiputer.net/prp/jupyter-stack/container_registry):
PRP (Pacific Research Platform) maintained registry for jupyter stack based on NVIDIA CUDA-enabled image.
Added the PRP image with Pytorch and some other python packages, and GUI Desktop notebook based on <https://github.com/jupyterhub/jupyter-remote-desktop-proxy>.
- [cgspatial-notebook](https://github.com/SCiO-systems/cgspatial-notebook) is a community Jupyter
Docker Stack image. The image includes major geospatial Python & R libraries on top of the
datascience-notebook image. Try it on
binder:[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/SCiO-systems/cgspatial-notebook/master)
- [cgspatial-notebook](https://github.com/SCiO-systems/cgspatial-notebook) is a community Jupyter Docker Stack image.
The image includes major geospatial Python & R libraries on top of the datascience-notebook image.
Try it on [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/SCiO-systems/cgspatial-notebook/master)
- [kotlin-notebook](https://github.com/knonm/kotlin-notebook) is a community Jupyter
Docker Stack image. The image includes [Kotlin kernel for Jupyter/IPython](https://github.com/Kotlin/kotlin-jupyter) on top of the
`base-notebook` image. Try it on
Binder: [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/knonm/kotlin-notebook/main)
- [kotlin-notebook](https://github.com/knonm/kotlin-notebook) is a community Jupyter Docker Stack image.
The image includes [Kotlin kernel for Jupyter/IPython](https://github.com/Kotlin/kotlin-jupyter) on top of the
`base-notebook` image.
Try it on [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/knonm/kotlin-notebook/main)
See the [contributing guide](../contributing/stacks.md) for information about how to create your own
Jupyter Docker Stack.

View File

@@ -6,13 +6,18 @@ This page provides details about features specific to one or more images.
### Specific Docker Image Options
- `-p 4040:4040` - The `jupyter/pyspark-notebook` and `jupyter/all-spark-notebook` images open [SparkUI (Spark Monitoring and Instrumentation UI)](https://spark.apache.org/docs/latest/monitoring.html) at default port `4040`, this option map `4040` port inside docker container to `4040` port on host machine . Note every new spark context that is created is put onto an incrementing port (ie. 4040, 4041, 4042, etc.), and it might be necessary to open multiple ports. For example: `docker run -d -p 8888:8888 -p 4040:4040 -p 4041:4041 jupyter/pyspark-notebook`.
- `-p 4040:4040` - The `jupyter/pyspark-notebook` and `jupyter/all-spark-notebook` images open
[SparkUI (Spark Monitoring and Instrumentation UI)](https://spark.apache.org/docs/latest/monitoring.html) at default port `4040`,
this option maps port `4040` inside the Docker container to port `4040` on the host machine.
Note that every new Spark context is created on an incrementing port (i.e., 4040, 4041, 4042, etc.), and it might be necessary to open multiple ports.
For example: `docker run -d -p 8888:8888 -p 4040:4040 -p 4041:4041 jupyter/pyspark-notebook`.
### Build an Image with a Different Version of Spark
You can build a `pyspark-notebook` image (and also the downstream `all-spark-notebook` image) with a different version of Spark by overriding the default value of the following arguments at build time.
- Spark distribution is defined by the combination of the Spark and the Hadoop version and verified by the package checksum, see [Download Apache Spark](https://spark.apache.org/downloads.html) and the [archive repo](https://archive.apache.org/dist/spark/) for more information.
- The Spark distribution is defined by the combination of the Spark and Hadoop versions and is verified by the package checksum,
see [Download Apache Spark](https://spark.apache.org/downloads.html) and the [archive repo](https://archive.apache.org/dist/spark/) for more information.
- `spark_version`: The Spark version to install (`3.0.0`).
- `hadoop_version`: The Hadoop version (`3.2`).
- `spark_checksum`: The package checksum (`BFE4540...`).
@@ -46,7 +51,8 @@ docker run -it --rm jupyter/pyspark-notebook:spark-2.4.7 pyspark --version
### Usage Examples
The `jupyter/pyspark-notebook` and `jupyter/all-spark-notebook` images support the use of [Apache Spark](https://spark.apache.org/) in Python, R, and Scala notebooks. The following sections provide some examples of how to get started using them.
The `jupyter/pyspark-notebook` and `jupyter/all-spark-notebook` images support the use of [Apache Spark](https://spark.apache.org/) in Python, R, and Scala notebooks.
The following sections provide some examples of how to get started using them.
#### Using Spark Local Mode
@@ -133,16 +139,18 @@ Connection to Spark Cluster on **[Standalone Mode](https://spark.apache.org/docs
deployed, run the same version of Spark.
1. [Deploy Spark in Standalone Mode](https://spark.apache.org/docs/latest/spark-standalone.html).
2. Run the Docker container with `--net=host` in a location that is network addressable by all of
your Spark workers. (This is a [Spark networking
requirement](https://spark.apache.org/docs/latest/cluster-overview.html#components).)
- NOTE: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See <https://github.com/jupyter/docker-stacks/issues/64> for details.
your Spark workers.
(This is a [Spark networking requirement](https://spark.apache.org/docs/latest/cluster-overview.html#components).)
- NOTE: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`.
See <https://github.com/jupyter/docker-stacks/issues/64> for details.
**Note**: In the following examples we are using the Spark master URL `spark://master:7077`, which should be replaced by the URL of your Spark master.
##### Standalone Mode in Python
The **same Python version** needs to be used on the notebook (where the driver is located) and on the Spark workers.
The python version used at driver and worker side can be adjusted by setting the environment variables `PYSPARK_PYTHON` and / or `PYSPARK_DRIVER_PYTHON`, see [Spark Configuration][spark-conf] for more information.
The Python version used on the driver and worker sides can be adjusted by setting the environment variables `PYSPARK_PYTHON` and/or `PYSPARK_DRIVER_PYTHON`;
see [Spark Configuration][spark-conf] for more information.
```python
from pyspark.sql import SparkSession

View File

@@ -38,7 +38,8 @@ notebook/down.sh
### How do I specify which docker-stack notebook image to deploy?
You can customize the docker-stack notebook image to deploy by modifying the `notebook/Dockerfile`. For example, you can build and deploy a `jupyter/all-spark-notebook` by modifying the Dockerfile like so:
You can customize the docker-stack notebook image to deploy by modifying the `notebook/Dockerfile`.
For example, you can build and deploy a `jupyter/all-spark-notebook` by modifying the Dockerfile like so:
```dockerfile
FROM jupyter/all-spark-notebook:33add21fab64
@@ -85,7 +86,9 @@ NAME=your-notebook PORT=9001 WORK_VOLUME=our-work notebook/up.sh
### How do I run over HTTPS?
To run the notebook server with a self-signed certificate, pass the `--secure` option to the `up.sh` script. You must also provide a password, which will be used to secure the notebook server. You can specify the password by setting the `PASSWORD` environment variable, or by passing it to the `up.sh` script.
To run the notebook server with a self-signed certificate, pass the `--secure` option to the `up.sh` script.
You must also provide a password, which will be used to secure the notebook server.
You can specify the password by setting the `PASSWORD` environment variable, or by passing it to the `up.sh` script.
```bash
PASSWORD=a_secret notebook/up.sh --secure
@@ -98,7 +101,8 @@ notebook/up.sh --secure --password a_secret
Sure. If you want to secure access to publicly addressable notebook containers, you can generate a free certificate using the [Let's Encrypt](https://letsencrypt.org) service.
This example includes the `bin/letsencrypt.sh` script, which runs the `letsencrypt` client to create a full-chain certificate and private key, and stores them in a Docker volume. _Note:_ The script hard codes several `letsencrypt` options, one of which automatically agrees to the Let's Encrypt Terms of Service.
This example includes the `bin/letsencrypt.sh` script, which runs the `letsencrypt` client to create a full-chain certificate and private key, and stores them in a Docker volume.
_Note:_ The script hard codes several `letsencrypt` options, one of which automatically agrees to the Let's Encrypt Terms of Service.
The following command will create a certificate chain and store it in a Docker volume named `mydomain-secrets`.
@@ -108,7 +112,8 @@ FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \
bin/letsencrypt.sh
```
Now run `up.sh` with the `--letsencrypt` option. You must also provide the name of the secrets volume and a password.
Now run `up.sh` with the `--letsencrypt` option.
You must also provide the name of the secrets volume and a password.
```bash
PASSWORD=a_secret SECRETS_VOLUME=mydomain-secrets notebook/up.sh --letsencrypt
@@ -117,7 +122,9 @@ PASSWORD=a_secret SECRETS_VOLUME=mydomain-secrets notebook/up.sh --letsencrypt
notebook/up.sh --letsencrypt --password a_secret --secrets mydomain-secrets
```
Be aware that Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment. You can avoid exhausting your limit by testing against the Let's Encrypt staging servers. To hit their staging servers, set the environment variable `CERT_SERVER=--staging`.
Be aware that Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment.
You can avoid exhausting your limit by testing against the Let's Encrypt staging servers.
To hit their staging servers, set the environment variable `CERT_SERVER=--staging`.
```bash
FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \
@@ -125,11 +132,14 @@ FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \
bin/letsencrypt.sh
```
Also, be aware that Let's Encrypt certificates are short lived (90 days). If you need them for a longer period of time, you'll need to manually setup a cron job to run the renewal steps. (You can reuse the command above.)
Also, be aware that Let's Encrypt certificates are short lived (90 days).
If you need them for a longer period of time, you'll need to manually set up a cron job to run the renewal steps.
(You can reuse the command above.)
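One way to sketch such a renewal job (the schedule and the script path are illustrative, not prescribed by this example; `FQDN`/`EMAIL` are the placeholder values used above):

```shell
# Write a crontab fragment that re-runs certificate creation on the first
# day of each month at 04:00. Adjust the path to wherever this repository
# is checked out before installing it with `crontab renew-letsencrypt.cron`.
cat > renew-letsencrypt.cron <<'EOF'
0 4 1 * * FQDN=host.mydomain.com EMAIL=myemail@somewhere.com /path/to/bin/letsencrypt.sh
EOF
```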
### Can I deploy to any Docker Machine host?
Yes, you should be able to deploy to any Docker Machine-controlled host. To make it easier to get up and running, this example includes scripts to provision Docker Machines to VirtualBox and IBM SoftLayer, but more scripts are welcome!
Yes, you should be able to deploy to any Docker Machine-controlled host.
To make it easier to get up and running, this example includes scripts to provision Docker Machines to VirtualBox and IBM SoftLayer, but more scripts are welcome!
To create a Docker machine using a VirtualBox VM on local desktop:

View File

@@ -48,7 +48,8 @@ make notebook NAME=your-notebook PORT=9001 WORK_VOLUME=our-work
### How do I run over HTTPS?
Instead of `make notebook`, run `make self-signed-notebook PASSWORD=your_desired_password`. This target gives you a notebook with a self-signed certificate.
Instead of `make notebook`, run `make self-signed-notebook PASSWORD=your_desired_password`.
This target gives you a notebook with a self-signed certificate.
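Under the hood, a self-signed certificate can be produced with a one-liner like the following (a sketch, not the exact command the Makefile uses; the file names are illustrative):

```shell
# Generate a throwaway self-signed certificate and key, valid for 365 days.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -subj "/CN=localhost" \
    -keyout notebook.key -out notebook.crt
```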
### That self-signed certificate is a pain. Let's Encrypt?
@@ -59,15 +60,21 @@ make letsencrypt FQDN=host.mydomain.com EMAIL=myemail@somewhere.com
make letsencrypt-notebook
```
The first command creates a Docker volume named after the notebook container with a `-secrets` suffix. It then runs the `letsencrypt` client with a slew of options (one of which has you automatically agreeing to the Let's Encrypt Terms of Service, see the Makefile). The second command mounts the secrets volume and configures Jupyter to use the full-chain certificate and private key.
The first command creates a Docker volume named after the notebook container with a `-secrets` suffix.
It then runs the `letsencrypt` client with a slew of options (one of which has you automatically agreeing to the Let's Encrypt Terms of Service, see the Makefile).
The second command mounts the secrets volume and configures Jupyter to use the full-chain certificate and private key.
Be aware: Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment. You can avoid exhausting your limit by testing against the Let's Encrypt staging servers. To hit their staging servers, set the environment variable `CERT_SERVER=--staging`.
Be aware: Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment.
You can avoid exhausting your limit by testing against the Let's Encrypt staging servers.
To hit their staging servers, set the environment variable `CERT_SERVER=--staging`.
```bash
make letsencrypt FQDN=host.mydomain.com EMAIL=myemail@somewhere.com CERT_SERVER=--staging
```
Also, keep in mind Let's Encrypt certificates are short lived: 90 days at the moment. You'll need to manually setup a cron job to run the renewal steps at the moment. (You can reuse the first command above.)
Also, keep in mind Let's Encrypt certificates are short lived: 90 days at the moment.
You'll need to manually set up a cron job to run the renewal steps.
(You can reuse the first command above.)
### My pip/conda/apt-get installs disappear every time I restart the container. Can I make them permanent?
@@ -86,11 +93,14 @@ make image DOCKER_ARGS=--pull
make notebook
```
The first line pulls the latest version of the Docker image used in the local Dockerfile. Then it rebuilds the local Docker image containing any customizations you may have added to it. The second line kills your currently running notebook container, and starts a fresh one using the new image.
The first line pulls the latest version of the Docker image used in the local Dockerfile.
Then it rebuilds the local Docker image containing any customizations you may have added to it.
The second line kills your currently running notebook container and starts a fresh one using the new image.
### Can I run on another VM provider other than VirtualBox?
Yes. As an example, there's a `softlayer.makefile` included in this repo as an example. You would use it like so:
Yes. As an example, there's a `softlayer.makefile` included in this repo.
You would use it like so:
```bash
make softlayer-vm NAME=myhost \
@@ -112,7 +122,8 @@ If you'd like to add support for another docker-machine driver, use the `softlay
### Uh ... make?
Yes, sorry Windows users. It got the job done for a simple example. We can certainly accept other deployment mechanism examples in the parent folder or in other repos.
Yes, sorry Windows users. It got the job done for a simple example.
We can certainly accept other deployment mechanism examples in the parent folder or in other repos.
### Are there any other options?

View File

@@ -2,15 +2,23 @@
This example provides templates for deploying the Jupyter Project docker-stacks images to OpenShift.
## Prerequsites
## Prerequisites
Any OpenShift 3 environment. The templates were tested with OpenShift 3.7. It is believed they should work with at least OpenShift 3.6 or later.
Any OpenShift 3 environment.
The templates were tested with OpenShift 3.7.
They are believed to work with OpenShift 3.6 or later.
Do be aware that the Jupyter Project docker-stacks images are very large. The OpenShift environment you are using must provide sufficient quota on the per user space for images and the file system for running containers. If the quota is too small, the pulling of the images to a node in the OpenShift cluster when deploying them, will fail due to lack of space. Even if the image is able to be run, if the quota is only just larger than the space required for the image, you will not be able to install many packages into the container before running out of space.
Do be aware that the Jupyter Project docker-stacks images are very large.
The OpenShift environment you are using must provide a sufficient per-user quota for images and for the file system of running containers.
If the quota is too small, pulling the images to a node in the OpenShift cluster during deployment will fail due to lack of space.
Even if the image can be run, if the quota is only just larger than the space required for the image, you will not be able to install many packages into the container before running out of space.
OpenShift Online, the public hosted version of OpenShift from Red Hat has a quota of only 3GB for the image and container file system. As a result, only the `minimal-notebook` can be started and there is little space remaining to install additional packages. Although OpenShift Online is suitable for demonstrating these templates work, what you can do in that environment will be limited due to the size of the images.
OpenShift Online, the public hosted version of OpenShift from Red Hat, has a quota of only 3GB for the image and container file system.
As a result, only the `minimal-notebook` can be started and there is little space remaining to install additional packages.
Although OpenShift Online is suitable for demonstrating these templates work, what you can do in that environment will be limited due to the size of the images.
If you want to experiment with using Jupyter Notebooks in an OpenShift environment, you should instead use [Minishift](https://www.openshift.org/minishift/). Minishift provides you the ability to run OpenShift in a virtual machine on your own local computer.
If you want to experiment with using Jupyter Notebooks in an OpenShift environment, you should instead use [Minishift](https://www.openshift.org/minishift/).
Minishift lets you run OpenShift in a virtual machine on your own local computer.
## Loading the Templates
@@ -22,7 +30,8 @@ oc create -f https://raw.githubusercontent.com/jupyter-on-openshift/docker-stack
This should create the `jupyter-notebook` template
The template can be used from the command line using the `oc new-app` command, or from the OpenShift web console by selecting _Add to Project_. This `README` is only going to explain deploying from the command line.
The template can be used from the command line using the `oc new-app` command, or from the OpenShift web console by selecting _Add to Project_.
This `README` is only going to explain deploying from the command line.
## Deploying a Notebook
@@ -56,7 +65,8 @@ The output will be similar to:
Run 'oc status' to view your app.
```
When no template parameters are provided, the name of the deployed notebook will be `notebook`. The image used will be:
When no template parameters are provided, the name of the deployed notebook will be `notebook`.
The image used will be:
```lang-none
jupyter/minimal-notebook:latest
@@ -129,7 +139,10 @@ Setting the environment variable will trigger a new deployment and the Jupyter L
## Adding Persistent Storage
You can upload notebooks and other files using the web interface of the notebook. Any uploaded files or changes you make to them will be lost when the notebook instance is restarted. If you want to save your work, you need to add persistent storage to the notebook. To add persistent storage run:
You can upload notebooks and other files using the web interface of the notebook.
Any uploaded files or changes you make to them will be lost when the notebook instance is restarted.
If you want to save your work, you need to add persistent storage to the notebook.
To add persistent storage run:
```bash
oc set volume dc/mynotebook --add \
@@ -162,7 +175,8 @@ If you are using a persistent volume, you can also create a configuration file a
This will be merged at the end of the configuration from the config map.
Because the configuration is Python code, ensure any indenting is correct. Any errors in the configuration file will cause the notebook to fail when starting.
Because the configuration is Python code, ensure any indenting is correct.
Any errors in the configuration file will cause the notebook to fail when starting.
If the error is in the config map, edit it again to fix it and trigger a new deployment if necessary by running:
@@ -182,7 +196,9 @@ Then run:
oc debug dc/mynotebook
```
to run the notebook in debug mode. This will provide you with an interactive terminal session inside a running container, but the notebook will not have been started. Edit the configuration file in the volume to fix any errors and exit the terminal session.
to run the notebook in debug mode.
This will provide you with an interactive terminal session inside a running container, but the notebook will not have been started.
Edit the configuration file in the volume to fix any errors and exit the terminal session.
Start up the notebook again.
@@ -192,7 +208,8 @@ oc scale dc/mynotebook --replicas 1
## Changing the Notebook Password
The password for the notebook is supplied as a template parameter, or if not supplied will be automatically generated by the template. It will be passed into the container through an environment variable.
The password for the notebook is supplied as a template parameter or, if not supplied, will be automatically generated by the template.
It will be passed into the container through an environment variable.
If you want to change the password, you can do so by editing the environment variable on the deployment configuration.
@@ -206,9 +223,11 @@ If using a persistent volume, you could instead setup a password in the file `/h
## Deploying from a Custom Image
If you want to deploy a custom variant of the Jupyter Project docker-stacks images, you can replace the image name with that of your own. If the image is not stored on Docker Hub, but some other public image registry, prefix the name of the image with the image registry host details.
If you want to deploy a custom variant of the Jupyter Project docker-stacks images, you can replace the image name with that of your own.
If the image is not stored on Docker Hub but in some other public image registry, prefix the name of the image with the image registry host details.
If the image is in your OpenShift project, because you imported the image into OpenShift, or used the docker build strategy of OpenShift to build a derived custom image, you can use the name of the image stream for the image name, including any image tag if necessary.
If the image is in your OpenShift project, because you imported the image into OpenShift, or used the docker build strategy of OpenShift to build a derived custom image,
you can use the name of the image stream for the image name, including any image tag if necessary.
This can be illustrated by first importing an image into the OpenShift project.
@@ -225,4 +244,5 @@ oc new-app --template jupyter-notebook \
--param NOTEBOOK_PASSWORD=mypassword
```
Importing an image into OpenShift before deploying it means that when a notebook is started, the image need only be pulled from the internal OpenShift image registry rather than Docker Hub for each deployment. Because the images are so large, this can speed up deployments when the image hasn't previously been deployed to a node in the OpenShift cluster.
Importing an image into OpenShift before deploying it means that when a notebook is started, the image need only be pulled from the internal OpenShift image registry rather than Docker Hub for each deployment.
Because the images are so large, this can speed up deployments when the image hasn't previously been deployed to a node in the OpenShift cluster.

View File

@@ -1,8 +1,12 @@
# Custom Jupyter Notebook images
This example provides scripts for building custom Jupyter Notebook images containing notebooks, data files, and with Python packages required by the notebooks already installed. The scripts provided work with the Source-to-Image tool and you can create the images from the command line on your own computer. Templates are also provided to enable running builds in OpenShift, as well as deploying the resulting image to OpenShift to make it available.
This example provides scripts for building custom Jupyter Notebook images containing notebooks and data files, with the Python packages required by the notebooks already installed.
The scripts provided work with the Source-to-Image tool and you can create the images from the command line on your own computer.
Templates are also provided to enable running builds in OpenShift, as well as deploying the resulting image to OpenShift to make it available.
The build scripts, when used with the Source-to-Image tool, provide similar capabilities to `repo2docker`. When builds are run under OpenShift with the supplied templates, it provides similar capabilities to `mybinder.org`, but where notebook instances are deployed in your existing OpenShift project and JupyterHub is not required.
The build scripts, when used with the Source-to-Image tool, provide similar capabilities to `repo2docker`.
When builds are run under OpenShift with the supplied templates, it provides similar capabilities to `mybinder.org`,
but where notebook instances are deployed in your existing OpenShift project and JupyterHub is not required.
For separate examples of using JupyterHub with OpenShift, see the project:
@@ -10,13 +14,16 @@ For separate examples of using JupyterHub with OpenShift, see the project:
## Source-to-Image Project
Source-to-Image (S2I) is an open source project which provides a tool for creating container images. It works by taking a base image, injecting additional source code or files into a running container created from the base image, and running a builder script in the container to process the source code or files to prepare the new image.
Source-to-Image (S2I) is an open source project which provides a tool for creating container images.
It works by taking a base image, injecting additional source code or files into a running container created from the base image,
and running a builder script in the container to process the source code or files to prepare the new image.
Details on the S2I tool, and executable binaries for Linux, macOS and Windows, can be found on GitHub at:
- <https://github.com/openshift/source-to-image>
The tool is standalone, and can be used on any system which provides a docker daemon for running containers. To provide an end-to-end capability to build and deploy applications in containers, support for S2I is also integrated into container platforms such as OpenShift.
The tool is standalone, and can be used on any system which provides a docker daemon for running containers.
To provide an end-to-end capability to build and deploy applications in containers, support for S2I is also integrated into container platforms such as OpenShift.
## Getting Started with S2I
@@ -31,7 +38,9 @@ s2i build \
notebook-examples
```
This example command will pull down the Git repository <https://github.com/jupyter/notebook> and build the image `notebook-examples` using the files contained in the `docs/source/examples/Notebook` directory of that Git repository. The base image which the files will be combined with is `jupyter/minimal-notebook:latest`, but you can specify any of the Jupyter Project `docker-stacks` images as the base image.
This example command will pull down the Git repository <https://github.com/jupyter/notebook>
and build the image `notebook-examples` using the files contained in the `docs/source/examples/Notebook` directory of that Git repository.
The base image which the files will be combined with is `jupyter/minimal-notebook:latest`, but you can specify any of the Jupyter Project `docker-stacks` images as the base image.
The resulting image from running the command can be seen by running `docker images` command:
@@ -66,11 +75,15 @@ Open your browser on the URL displayed, and you will find the notebooks from the
## The S2I Builder Scripts
Normally when using S2I, the base image would be S2I enabled and contain the builder scripts needed to prepare the image and define how the application in the image should be run. As the Jupyter Project `docker-stacks` images are not S2I enabled (although they could be), in the above example the `--scripts-url` option has been used to specify that the example builder scripts contained in this directory of this Git repository should be used.
Normally when using S2I, the base image would be S2I enabled and contain the builder scripts needed to prepare the image and define how the application in the image should be run.
As the Jupyter Project `docker-stacks` images are not S2I enabled (although they could be),
in the above example the `--scripts-url` option has been used to specify that the example builder scripts contained in this directory of this Git repository should be used.
Using the `--scripts-url` option, the builder scripts can be hosted on any HTTP server, or you could also use builder scripts local to your computer file using an appropriate `file://` format URI argument to `--scripts-url`.
Using the `--scripts-url` option, the builder scripts can be hosted on any HTTP server,
or you could also use builder scripts local to your computer, using an appropriate `file://` format URI as the argument to `--scripts-url`.
The builder scripts in this directory of this repository are `assemble` and `run` and are provided as examples of what can be done. You can use the scripts as is, or create your own.
The builder scripts in this directory of this repository are `assemble` and `run` and are provided as examples of what can be done.
You can use the scripts as is, or create your own.
The supplied `assemble` script performs a few key steps.
@@ -97,7 +110,8 @@ fi
This determines whether a `environment.yml` or `requirements.txt` file exists with the files and if so, runs the appropriate package management tool to install any Python packages listed in those files.
This means that so long as a set of notebook files provides one of these files listing what Python packages they need, those packages will be automatically installed into the image so they are available when the image is run.
This means that so long as a set of notebook files provides one of these files listing what Python packages they need,
those packages will be automatically installed into the image so they are available when the image is run.
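The detection step can be sketched roughly as follows (an illustrative dry run that only prints the command it would pick; the real `assemble` script in this directory is the authoritative version):

```shell
# Decide which package manager the assemble step would invoke,
# preferring a conda environment file over pip requirements.
pick_install_command() {
    if [ -f environment.yml ]; then
        echo "conda env update --file environment.yml"
    elif [ -f requirements.txt ]; then
        echo "pip install --no-cache-dir -r requirements.txt"
    else
        echo "nothing to install"
    fi
}
pick_install_command
```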
A final step is:
@@ -106,9 +120,14 @@ fix-permissions "${CONDA_DIR}"
fix-permissions "/home/${NB_USER}"
```
This fixes up permissions on any new files created by the build. This is necessary to ensure that when the image is run, you can still install additional files. This is important for when an image is run in `sudo` mode, or it is hosted in a more secure container platform such as Kubernetes/OpenShift where it will be run as a set user ID that isn't known in advance.
This fixes up permissions on any new files created by the build.
This is necessary to ensure that when the image is run, you can still install additional files.
This is important for when an image is run in `sudo` mode, or when it is hosted in a more secure container platform such as Kubernetes/OpenShift where it will run under an assigned user ID that isn't known in advance.
As long as you preserve the first and last set of steps, you can do whatever you want in the `assemble` script to install packages, create files etc. Do be aware though that S2I builds do not run as `root` and so you cannot install additional system packages. If you need to install additional system packages, use a `Dockerfile` and normal `docker build` to first create a new custom base image from the Jupyter Project `docker-stacks` images, with the extra system packages, and then use that image with the S2I build to combine your notebooks and have Python packages installed.
As long as you preserve the first and last set of steps, you can do whatever you want in the `assemble` script to install packages, create files etc.
Do be aware though that S2I builds do not run as `root` and so you cannot install additional system packages.
If you need to install additional system packages, use a `Dockerfile` and normal `docker build` to first create a new custom base image from the Jupyter Project `docker-stacks` images,
with the extra system packages, and then use that image with the S2I build to combine your notebooks and have Python packages installed.
The `run` script in this directory is very simple and just runs the notebook application.
@@ -118,7 +137,9 @@ exec start-notebook.sh "$@"
## Integration with OpenShift
The OpenShift platform provides integrated support for S2I type builds. Templates are provided for using the S2I build mechanism with the scripts in this directory. To load the templates run:
The OpenShift platform provides integrated support for S2I type builds.
Templates are provided for using the S2I build mechanism with the scripts in this directory.
To load the templates run:
```bash
oc create -f https://raw.githubusercontent.com/jupyter/docker-stacks/master/examples/source-to-image/templates.json
@@ -131,7 +152,8 @@ jupyter-notebook-builder
jupyter-notebook-quickstart
```
The templates can be used from the OpenShift web console or command line. This `README` is only going to explain deploying from the command line.
The templates can be used from the OpenShift web console or command line.
This `README` is only going to explain deploying from the command line.
To use the OpenShift command line to build into an image, and deploy, the set of notebooks used above, run:
@@ -144,9 +166,11 @@ oc new-app --template jupyter-notebook-quickstart \
--param NOTEBOOK_PASSWORD=mypassword
```
You can provide a password using the `NOTEBOOK_PASSWORD` parameter. If you don't set that parameter, a password will be generated, with it being displayed by the `oc new-app` command.
You can provide a password using the `NOTEBOOK_PASSWORD` parameter.
If you don't set that parameter, a password will be generated, with it being displayed by the `oc new-app` command.
Once the image has been built, it will be deployed. To see the hostname for accessing the notebook, run `oc get routes`.
Once the image has been built, it will be deployed.
To see the hostname for accessing the notebook, run `oc get routes`.
```lang-none
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
@@ -155,6 +179,7 @@ notebook-examples notebook-examples-jupyter.abcd.pro-us-east-1.openshiftapps.c
As the deployment will use a secure connection, the URL for accessing the notebook in this case would be <https://notebook-examples-jupyter.abcd.pro-us-east-1.openshiftapps.com>.
If you only want to build an image but not deploy it, you can use the `jupyter-notebook-builder` template. You can then deploy it using the `jupyter-notebook` template provided with the [openshift](../openshift) examples directory.
If you only want to build an image but not deploy it, you can use the `jupyter-notebook-builder` template.
You can then deploy it using the `jupyter-notebook` template provided with the [openshift](../openshift) examples directory.
See the `openshift` examples directory for further information on customizing configuration for a Jupyter Notebook deployment and deleting a deployment.