Make markdown files look prettier using prettier :)

Author: Ayaz Salikhov
Date: 2021-05-07 00:53:06 +03:00
Parent: 09fb660076
Commit: 453556b2c4
14 changed files with 107 additions and 122 deletions

View File

@@ -1,4 +1,5 @@
<!-- markdownlint-disable MD041 -->
Hi! Thanks for using the Jupyter Docker Stacks.
Please review the following guidance about how to ask questions, contribute changes, or report bugs in the Docker images maintained here.

View File

@@ -1,7 +1,7 @@
<!-- markdownlint-disable MD041 -->
Thanks for contributing! Please see the
-__Contributor Guide__ section in [the documentation](https://jupyter-docker-stacks.readthedocs.io) for
+**Contributor Guide** section in [the documentation](https://jupyter-docker-stacks.readthedocs.io) for
information about how to contribute
[package updates](http://jupyter-docker-stacks.readthedocs.io/en/latest/contributing/packages.html),
[recipes](http://jupyter-docker-stacks.readthedocs.io/en/latest/contributing/recipes.html),

View File

@@ -25,7 +25,7 @@ software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
@@ -47,8 +47,8 @@ Jupyter uses a shared copyright model. Each contributor maintains copyright
over their contributions to Jupyter. But, it is important to note that these
contributions are typically only changes to the repositories. Thus, the Jupyter
source code, in its entirety is not the copyright of any single person or
institution. Instead, it is the collective copyright of the entire Jupyter
Development Team. If individual contributors want to maintain a record of what
changes/contributions they have specific copyright on, they should indicate
their copyright in the commit message of the change, when they commit the
change to one of the Jupyter repositories.

View File

@@ -36,7 +36,7 @@ more information is available in the [documentation](https://jupyter-docker-stac
At some point, JupyterLab will become the default for all of the Jupyter Docker stack images, however a new environment variable will be introduced to switch back to Jupyter Notebook if needed.
-After the change of default, and according to the Jupyter Notebook project status and its compatibility with JupyterLab, these Docker images may remove the classic Jupyter Notebook interface altogether in favor of another *classic-like* UI built atop JupyterLab.
+After the change of default, and according to the Jupyter Notebook project status and its compatibility with JupyterLab, these Docker images may remove the classic Jupyter Notebook interface altogether in favor of another _classic-like_ UI built atop JupyterLab.
This change is tracked in the issue [#1217](https://github.com/jupyter/docker-stacks/issues/1217), please check its content for more information.

View File

@@ -4,22 +4,22 @@ We appreciate your taking the time to report an issue you encountered using the
Jupyter Docker Stacks. Please review the following guidelines when reporting
your problem.
-* If you believe you've found a security vulnerability in any of the Jupyter
+- If you believe you've found a security vulnerability in any of the Jupyter
projects included in Jupyter Docker Stacks images, please report it to
[security@ipython.org](mailto:security@ipython.org), not in the issue trackers
on GitHub. If you prefer to encrypt your security reports, you can use [this
PGP public
key](https://jupyter-notebook.readthedocs.io/en/stable/_downloads/ipython_security.asc).
-* If you think your problem is unique to the Jupyter Docker Stacks images,
+- If you think your problem is unique to the Jupyter Docker Stacks images,
please search the [jupyter/docker-stacks issue
tracker](https://github.com/jupyter/docker-stacks/issues) to see if someone
else has already reported the same problem. If not, please open a [new
issue](https://github.com/jupyter/docker-stacks/issues/new) and provide all of
the information requested in the issue template.
-* If the issue you're seeing is with one of the open source libraries included
+- If the issue you're seeing is with one of the open source libraries included
in the Docker images and is reproducible outside the images, please file a bug
with the appropriate open source project.
-* If you have a general question about how to use the Jupyter Docker Stacks in
+- If you have a general question about how to use the Jupyter Docker Stacks in
your environment, in conjunction with other tools, with customizations, and so
on, please post your question on the [Jupyter Discourse
site](https://discourse.jupyter.org).

View File

@@ -44,10 +44,10 @@ To comply with [Docker best practices][dbp], we are using the [Hadolint][hadolin
Sometimes it is necessary to ignore [some rules][rules].
The following rules are ignored by default for all images in the `.hadolint.yaml` file.
-- [`DL3006`][DL3006]: We use a specific policy to manage image tags.
+- [`DL3006`][dl3006]: We use a specific policy to manage image tags.
- `base-notebook` `FROM` clause is fixed but based on an argument (`ARG`).
- Building downstream images from (`FROM`) the latest is done on purpose.
-- [`DL3008`][DL3008]: System packages are always updated (`apt-get`) to the latest version.
+- [`DL3008`][dl3008]: System packages are always updated (`apt-get`) to the latest version.
For other rules, the preferred way to do it is to flag ignored rules in the `Dockerfile`.
@@ -63,6 +63,6 @@ RUN cd /tmp && echo "hello!"
[hadolint]: https://github.com/hadolint/hadolint
[dbp]: https://docs.docker.com/develop/develop-images/dockerfile_best-practices
[rules]: https://github.com/hadolint/hadolint#rules
-[DL3006]: https://github.com/hadolint/hadolint/wiki/DL3006
-[DL3008]: https://github.com/hadolint/hadolint/wiki/DL3008
+[dl3006]: https://github.com/hadolint/hadolint/wiki/DL3006
+[dl3008]: https://github.com/hadolint/hadolint/wiki/DL3008
[pre-commit]: https://pre-commit.com/
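To try the linter locally before opening a pull request, hadolint's published Docker image can be run directly against a `Dockerfile`. The commands below are an editor's sketch based on hadolint's own usage notes, not part of this changeset:

```bash
# Lint a Dockerfile using the published hadolint image
docker run --rm -i hadolint/hadolint < Dockerfile

# With a locally installed hadolint binary, individual rules can also be skipped on the command line
hadolint --ignore DL3006 --ignore DL3008 Dockerfile
```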

View File

@@ -80,27 +80,28 @@ The cookiecutter template comes with a `.github/workflows/docker.yml` file, whic
```yaml
on:
  pull_request:
    paths-ignore:
      - "*.md"
      - "binder/**"
      - "docs/**"
      - "examples/**"
  push:
    branches:
      - master
      - main
    paths-ignore:
      - "*.md"
      - "binder/**"
      - "docs/**"
      - "examples/**"
```
This will trigger the CI pipeline whenever you push to your `main` or `master` branch and when any Pull Requests are made to your repository. For more details on this configuration, visit the [GitHub actions documentation on triggers](https://docs.github.com/en/actions/reference/events-that-trigger-workflows).
2. Commit your changes and push to GitHub.
3. Head back to your repository and click on the **Actions** tab.
![GitHub actions tab screenshot](../static/../_static/github-actions-tab.png)
From there, you can click on the workflows on the left-hand side of the screen.
4. In the next screen, you will be able to see information about the workflow run and duration. If you click again on the button with the workflow name, you will see the logs for the workflow steps.
![Github actions workflow run screenshot](../static/../_static/github-actions-workflow.png)
@@ -123,17 +124,17 @@ you merge a GitHub pull request to the master branch of your project.
9. Click on your avatar on the top-right corner and select Account settings.
![Docker account selection screenshot](../_static/docker-org-select.png)
10. Click on **Security** and then click on the **New Access Token** button.
![Docker account Security settings screenshot](../_static/docker-org-security.png)
11. Enter a meaningful name for your token and click on **Create**
![Docker account create new token screenshot](../_static/docker-org-create-token.png)
12. Copy the personal access token displayed on the next screen. **Note that you will not be able to see it again after you close the pop-up window**.
13. Head back to your GitHub repository and click on the **Settings tab**.
![Github repository settings tab screenshot](../static/../_static/github-create-secrets.png)
14. Click on the **Secrets** section and then on the **New repository secret** button on the top right corner (see image above).
15. Create a **DOCKERHUB_TOKEN** secret and paste the Personal Access Token from DockerHub in the **value** field.
-![GitHub create secret token screenshot](../static/../_static/github-secret-token.png)
-16. Repeat the above step but creating a **DOCKERHUB_USERNAME** and replacing the *value* field with your DockerHub username. Once you have completed these steps, your repository secrets section should look something like this:
-![GitHub repository secrets created screenshot](../static/../_static/github-secrets-completed.png)
+![GitHub create secret token screenshot](../static/../_static/github-secret-token.png)
+16. Repeat the above step but creating a **DOCKERHUB_USERNAME** and replacing the _value_ field with your DockerHub username. Once you have completed these steps, your repository secrets section should look something like this:
+![GitHub repository secrets created screenshot](../static/../_static/github-secrets-completed.png)
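If you prefer the command line to the web UI, the two secrets can also be created with the GitHub CLI. This is a sketch assuming `gh` is installed and authenticated for your repository; it is not part of the original walkthrough:

```bash
# Create the repository secrets consumed by the workflow
gh secret set DOCKERHUB_USERNAME --body "your-dockerhub-username"
gh secret set DOCKERHUB_TOKEN --body "paste-the-personal-access-token-here"
```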
## Defining Your Image
@@ -151,7 +152,7 @@ master or main. Refer to Docker Cloud to build your master or main branch that y
Finally, if you'd like to add a link to your project to this documentation site, please do the
following:
1. Clone the [jupyter/docker-stacks](https://github.com/jupyter/docker-stacks) GitHub repository.
2. Open the `docs/using/selecting.md` source file and locate the **Community Stacks** section.
3. Add a bullet with a link to your project and a short description of what your Docker image contains.
4. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request)
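For step 1, a minimal sequence might look like this (a sketch; make the edit with whatever editor you prefer, then open the pull request from your fork):

```bash
git clone https://github.com/jupyter/docker-stacks.git
cd docker-stacks
# edit docs/using/selecting.md and add a bullet for your image under "Community Stacks"
```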

View File

@@ -3,5 +3,5 @@
We are delighted when members of the Jupyter community want to help translate these documentation pages to other languages. If you're interested, please visit the links below to join our team on [Transifex](https://transifex.com) and to start creating, reviewing, and updating translations of the Jupyter Docker Stacks documentation.
1. Follow the steps documented on the [Getting Started as a Translator](https://docs.transifex.com/getting-started-1/translators) page.
-2. Look for *jupyter-docker-stacks* when prompted to choose a translation team. Alternatively, visit <https://www.transifex.com/project-jupyter/jupyter-docker-stacks-1> after creating your account and request to join the project.
+2. Look for _jupyter-docker-stacks_ when prompted to choose a translation team. Alternatively, visit <https://www.transifex.com/project-jupyter/jupyter-docker-stacks-1> after creating your account and request to join the project.
3. See [Translating with the Web Editor](https://docs.transifex.com/translation/translating-with-the-web-editor) in the Transifex documentation.

View File

@@ -23,28 +23,28 @@ docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp
You may instruct the `start-notebook.sh` script to customize the container environment before launching
the notebook server. You do so by passing arguments to the `docker run` command.
-* `-e NB_USER=jovyan` - Instructs the startup script to change the default container username from `jovyan` to the provided value. Causes the script to rename the `jovyan` user home folder. For this option to take effect, you must run the container with `--user root`, set the working directory `-w /home/$NB_USER` and set the environment variable `-e CHOWN_HOME=yes` (see below for detail). This feature is useful when mounting host volumes with specific home folder.
-* `-e NB_UID=1000` - Instructs the startup script to switch the numeric user ID of `$NB_USER` to the given value. This feature is useful when mounting host volumes with specific owner permissions. For this option to take effect, you must run the container with `--user root`. (The startup script will `su $NB_USER` after adjusting the user ID.) You might consider using modern Docker options `--user` and `--group-add` instead. See the last bullet below for details.
-* `-e NB_GID=100` - Instructs the startup script to change the primary group of`$NB_USER` to `$NB_GID` (the new group is added with a name of `$NB_GROUP` if it is defined, otherwise the group is named `$NB_USER`). This feature is useful when mounting host volumes with specific group permissions. For this option to take effect, you must run the container with `--user root`. (The startup script will `su $NB_USER` after adjusting the group ID.) You might consider using modern Docker options `--user` and `--group-add` instead. See the last bullet below for details. The user is added to supplemental group `users` (gid 100) in order to allow write access to the home directory and `/opt/conda`. If you override the user/group logic, ensure the user stays in group `users` if you want them to be able to modify files in the image.
-* `-e NB_GROUP=<name>` - The name used for `$NB_GID`, which defaults to `$NB_USER`. This is only used if `$NB_GID` is specified and completely optional: there is only cosmetic effect.
-* `-e NB_UMASK=<umask>` - Configures Jupyter to use a different umask value from default, i.e. `022`. For example, if setting umask to `002`, new files will be readable and writable by group members instead of just writable by the owner. Wikipedia has a good article about [umask](https://en.wikipedia.org/wiki/Umask). Feel free to read it in order to choose the value that better fits your needs. Default value should fit most situations. Note that `NB_UMASK` when set only applies to the Jupyter process itself - you cannot use it to set a umask for additional files created during run-hooks e.g. via `pip` or `conda` - if you need to set a umask for these you must set `umask` for each command.
-* `-e CHOWN_HOME=yes` - Instructs the startup script to change the `$NB_USER` home directory owner and group to the current value of `$NB_UID` and `$NB_GID`. This change will take effect even if the user home directory is mounted from the host using `-v` as described below. The change is **not** applied recursively by default. You can change modify the `chown` behavior by setting `CHOWN_HOME_OPTS` (e.g., `-e CHOWN_HOME_OPTS='-R'`).
-* `-e CHOWN_EXTRA="<some dir>,<some other dir>"` - Instructs the startup script to change the owner and group of each comma-separated container directory to the current value of `$NB_UID` and `$NB_GID`. The change is **not** applied recursively by default. You can change modify the `chown` behavior by setting `CHOWN_EXTRA_OPTS` (e.g., `-e CHOWN_EXTRA_OPTS='-R'`).
-* `-e GRANT_SUDO=yes` - Instructs the startup script to grant the `NB_USER` user passwordless `sudo` capability. You do **not** need this option to allow the user to `conda` or `pip` install additional packages. This option is useful, however, when you wish to give `$NB_USER` the ability to install OS packages with `apt` or modify other root-owned files in the container. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su $NB_USER` after adding `$NB_USER` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.**
-* `-e GEN_CERT=yes` - Instructs the startup script to generates a self-signed SSL certificate and configure Jupyter Notebook to use it to accept encrypted HTTPS connections.
-* `-e JUPYTER_ENABLE_LAB=yes` - Instructs the startup script to run `jupyter lab` instead of the default `jupyter notebook` command. Useful in container orchestration environments where setting environment variables is easier than change command line parameters.
-* `-e RESTARTABLE=yes` - Runs Jupyter in a loop so that quitting Jupyter does not cause the container to exit. This may be useful when you need to install extensions that require restarting Jupyter.
-* `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).**
-* `--user 5000 --group-add users` - Launches the container with a specific user ID and adds that user to the `users` group so that it can modify files in the default home directory and `/opt/conda`. You can use these arguments as alternatives to setting `$NB_UID` and `$NB_GID`.
+- `-e NB_USER=jovyan` - Instructs the startup script to change the default container username from `jovyan` to the provided value. Causes the script to rename the `jovyan` user home folder. For this option to take effect, you must run the container with `--user root`, set the working directory `-w /home/$NB_USER` and set the environment variable `-e CHOWN_HOME=yes` (see below for detail). This feature is useful when mounting host volumes with specific home folder.
+- `-e NB_UID=1000` - Instructs the startup script to switch the numeric user ID of `$NB_USER` to the given value. This feature is useful when mounting host volumes with specific owner permissions. For this option to take effect, you must run the container with `--user root`. (The startup script will `su $NB_USER` after adjusting the user ID.) You might consider using modern Docker options `--user` and `--group-add` instead. See the last bullet below for details.
+- `-e NB_GID=100` - Instructs the startup script to change the primary group of`$NB_USER` to `$NB_GID` (the new group is added with a name of `$NB_GROUP` if it is defined, otherwise the group is named `$NB_USER`). This feature is useful when mounting host volumes with specific group permissions. For this option to take effect, you must run the container with `--user root`. (The startup script will `su $NB_USER` after adjusting the group ID.) You might consider using modern Docker options `--user` and `--group-add` instead. See the last bullet below for details. The user is added to supplemental group `users` (gid 100) in order to allow write access to the home directory and `/opt/conda`. If you override the user/group logic, ensure the user stays in group `users` if you want them to be able to modify files in the image.
+- `-e NB_GROUP=<name>` - The name used for `$NB_GID`, which defaults to `$NB_USER`. This is only used if `$NB_GID` is specified and completely optional: there is only cosmetic effect.
+- `-e NB_UMASK=<umask>` - Configures Jupyter to use a different umask value from default, i.e. `022`. For example, if setting umask to `002`, new files will be readable and writable by group members instead of just writable by the owner. Wikipedia has a good article about [umask](https://en.wikipedia.org/wiki/Umask). Feel free to read it in order to choose the value that better fits your needs. Default value should fit most situations. Note that `NB_UMASK` when set only applies to the Jupyter process itself - you cannot use it to set a umask for additional files created during run-hooks e.g. via `pip` or `conda` - if you need to set a umask for these you must set `umask` for each command.
+- `-e CHOWN_HOME=yes` - Instructs the startup script to change the `$NB_USER` home directory owner and group to the current value of `$NB_UID` and `$NB_GID`. This change will take effect even if the user home directory is mounted from the host using `-v` as described below. The change is **not** applied recursively by default. You can change modify the `chown` behavior by setting `CHOWN_HOME_OPTS` (e.g., `-e CHOWN_HOME_OPTS='-R'`).
+- `-e CHOWN_EXTRA="<some dir>,<some other dir>"` - Instructs the startup script to change the owner and group of each comma-separated container directory to the current value of `$NB_UID` and `$NB_GID`. The change is **not** applied recursively by default. You can change modify the `chown` behavior by setting `CHOWN_EXTRA_OPTS` (e.g., `-e CHOWN_EXTRA_OPTS='-R'`).
+- `-e GRANT_SUDO=yes` - Instructs the startup script to grant the `NB_USER` user passwordless `sudo` capability. You do **not** need this option to allow the user to `conda` or `pip` install additional packages. This option is useful, however, when you wish to give `$NB_USER` the ability to install OS packages with `apt` or modify other root-owned files in the container. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su $NB_USER` after adding `$NB_USER` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.**
+- `-e GEN_CERT=yes` - Instructs the startup script to generates a self-signed SSL certificate and configure Jupyter Notebook to use it to accept encrypted HTTPS connections.
+- `-e JUPYTER_ENABLE_LAB=yes` - Instructs the startup script to run `jupyter lab` instead of the default `jupyter notebook` command. Useful in container orchestration environments where setting environment variables is easier than change command line parameters.
+- `-e RESTARTABLE=yes` - Runs Jupyter in a loop so that quitting Jupyter does not cause the container to exit. This may be useful when you need to install extensions that require restarting Jupyter.
+- `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).**
+- `--user 5000 --group-add users` - Launches the container with a specific user ID and adds that user to the `users` group so that it can modify files in the default home directory and `/opt/conda`. You can use these arguments as alternatives to setting `$NB_UID` and `$NB_GID`.
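As an illustration of how several of these options combine, the sketch below renames the default user and fixes ownership of a mounted work directory; `callisto` and the host path are placeholders, not values taken from the documentation above.

```bash
# --user root is required for NB_USER and CHOWN_HOME to take effect;
# "callisto" and the host folder are hypothetical placeholders
docker run -d -p 8888:8888 \
  --user root \
  -e NB_USER=callisto \
  -e CHOWN_HOME=yes \
  -w /home/callisto \
  -v /some/host/folder/for/work:/home/callisto/work \
  jupyter/base-notebook
```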
## Startup Hooks
You can further customize the container environment by adding shell scripts (`*.sh`) to be sourced
or executables (`chmod +x`) to be run to the paths below:
-* `/usr/local/bin/start-notebook.d/` - handled before any of the standard options noted above
+- `/usr/local/bin/start-notebook.d/` - handled before any of the standard options noted above
are applied
-* `/usr/local/bin/before-notebook.d/` - handled after all of the standard options noted above are
+- `/usr/local/bin/before-notebook.d/` - handled after all of the standard options noted above are
applied and just before the notebook server launches
See the `run-hooks` function in the [`jupyter/base-notebook start.sh`](https://github.com/jupyter/docker-stacks/blob/master/base-notebook/start.sh)
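One way to experiment with such hooks without rebuilding the image is to mount a local directory of scripts onto one of these paths. A sketch, where `my-hooks/` is a hypothetical directory of `*.sh` files:

```bash
# scripts in before-notebook.d are handled after the standard options,
# just before the notebook server launches
docker run -d -p 8888:8888 \
  -v "$PWD/my-hooks":/usr/local/bin/before-notebook.d \
  jupyter/base-notebook
```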
@@ -75,9 +75,9 @@ In either case, Jupyter Notebook expects the key and certificate to be a base64
For additional information about using SSL, see the following:
-* The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain.
-* The [jupyter_notebook_config.py](https://github.com/jupyter/docker-stacks/blob/master/base-notebook/jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate.
-* The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#securing-a-notebook-server) for best practices about securing a public notebook server in general.
+- The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain.
+- The [jupyter_notebook_config.py](https://github.com/jupyter/docker-stacks/blob/master/base-notebook/jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate.
+- The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#securing-a-notebook-server) for best practices about securing a public notebook server in general.
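For a quick self-signed setup, the `GEN_CERT` option documented earlier is usually enough; a minimal sketch:

```bash
# generate a self-signed certificate at startup and accept HTTPS connections
docker run -d -p 8888:8888 -e GEN_CERT=yes jupyter/base-notebook
```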
## Alternative Commands

View File

@@ -6,18 +6,18 @@ This page provides details about features specific to one or more images.
### Specific Docker Image Options
-* `-p 4040:4040` - The `jupyter/pyspark-notebook` and `jupyter/all-spark-notebook` images open [SparkUI (Spark Monitoring and Instrumentation UI)](http://spark.apache.org/docs/latest/monitoring.html) at default port `4040`, this option map `4040` port inside docker container to `4040` port on host machine . Note every new spark context that is created is put onto an incrementing port (ie. 4040, 4041, 4042, etc.), and it might be necessary to open multiple ports. For example: `docker run -d -p 8888:8888 -p 4040:4040 -p 4041:4041 jupyter/pyspark-notebook`.
+- `-p 4040:4040` - The `jupyter/pyspark-notebook` and `jupyter/all-spark-notebook` images open [SparkUI (Spark Monitoring and Instrumentation UI)](http://spark.apache.org/docs/latest/monitoring.html) at default port `4040`, this option map `4040` port inside docker container to `4040` port on host machine . Note every new spark context that is created is put onto an incrementing port (ie. 4040, 4041, 4042, etc.), and it might be necessary to open multiple ports. For example: `docker run -d -p 8888:8888 -p 4040:4040 -p 4041:4041 jupyter/pyspark-notebook`.
### Build an Image with a Different Version of Spark
You can build a `pyspark-notebook` image (and also the downstream `all-spark-notebook` image) with a different version of Spark by overriding the default value of the following arguments at build time.
-* Spark distribution is defined by the combination of the Spark and the Hadoop version and verified by the package checksum, see [Download Apache Spark](https://spark.apache.org/downloads.html) and the [archive repo](https://archive.apache.org/dist/spark/) for more information.
-* `spark_version`: The Spark version to install (`3.0.0`).
-* `hadoop_version`: The Hadoop version (`3.2`).
-* `spark_checksum`: The package checksum (`BFE4540...`).
-* Spark can run with different OpenJDK versions.
-* `openjdk_version`: The version of (JRE headless) the OpenJDK distribution (`11`), see [Ubuntu packages](https://packages.ubuntu.com/search?keywords=openjdk).
+- Spark distribution is defined by the combination of the Spark and the Hadoop version and verified by the package checksum, see [Download Apache Spark](https://spark.apache.org/downloads.html) and the [archive repo](https://archive.apache.org/dist/spark/) for more information.
+- `spark_version`: The Spark version to install (`3.0.0`).
+- `hadoop_version`: The Hadoop version (`3.2`).
+- `spark_checksum`: The package checksum (`BFE4540...`).
+- Spark can run with different OpenJDK versions.
+- `openjdk_version`: The version of (JRE headless) the OpenJDK distribution (`11`), see [Ubuntu packages](https://packages.ubuntu.com/search?keywords=openjdk).
For example here is how to build a `pyspark-notebook` image with Spark `2.4.6`, Hadoop `2.7` and OpenJDK `8`.
@@ -40,7 +40,7 @@ docker run -it --rm jupyter/pyspark-notebook:spark-2.4.7 pyspark --version
# _\ \/ _ \/ _ `/ __/ '_/
# /___/ .__/\_,_/_/ /_/\_\ version 2.4.7
# /_/
#
# Using Scala version 2.11.12, OpenJDK 64-Bit Server VM, 1.8.0_275
```
@@ -135,8 +135,7 @@ Connection to Spark Cluster on **[Standalone Mode](https://spark.apache.org/docs
2. Run the Docker container with `--net=host` in a location that is network addressable by all of
your Spark workers. (This is a [Spark networking
requirement](http://spark.apache.org/docs/latest/cluster-overview.html#components).)
-* NOTE: When using `--net=host`, you must also use the flags `--pid=host -e
-TINI_SUBREAPER=true`. See <https://github.com/jupyter/docker-stacks/issues/64> for details.
+- NOTE: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See <https://github.com/jupyter/docker-stacks/issues/64> for details.
**Note**: In the following examples we are using the Spark master URL `spark://master:7077` that shall be replaced by the URL of the Spark master.
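Putting the flags above together, a client container for standalone mode might be started like this (an editor's sketch, not one of the documented examples):

```bash
# host networking plus the flags required when using --net=host
docker run -it --rm \
  --net=host --pid=host -e TINI_SUBREAPER=true \
  jupyter/pyspark-notebook
```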

View File

@@ -4,9 +4,9 @@ This example demonstrate how to deploy docker-stack notebook containers to any D
## Prerequisites
-* [Docker Engine](https://docs.docker.com/engine/) 1.10.0+
-* [Docker Machine](https://docs.docker.com/machine/) 0.6.0+
-* [Docker Compose](https://docs.docker.com/compose/) 1.6.0+
+- [Docker Engine](https://docs.docker.com/engine/) 1.10.0+
+- [Docker Machine](https://docs.docker.com/machine/) 0.6.0+
+- [Docker Compose](https://docs.docker.com/compose/) 1.6.0+
See the [installation instructions](https://docs.docker.com/engine/installation/) for your environment.
@@ -38,7 +38,7 @@ notebook/down.sh
### How do I specify which docker-stack notebook image to deploy?
You can customize the docker-stack notebook image to deploy by modifying the `notebook/Dockerfile`. For example, you can build and deploy a `jupyter/all-spark-notebook` by modifying the Dockerfile like so:
```dockerfile
FROM jupyter/all-spark-notebook:55d5ca6be183
@@ -85,7 +85,7 @@ NAME=your-notebook PORT=9001 WORK_VOLUME=our-work notebook/up.sh
### How do I run over HTTPS?
To run the notebook server with a self-signed certificate, pass the `--secure` option to the `up.sh` script. You must also provide a password, which will be used to secure the notebook server. You can specify the password by setting the `PASSWORD` environment variable, or by passing it to the `up.sh` script.
```bash
PASSWORD=a_secret notebook/up.sh --secure
@@ -96,9 +96,9 @@ notebook/up.sh --secure --password a_secret
### Can I use Let's Encrypt certificate chains?
Sure. If you want to secure access to publicly addressable notebook containers, you can generate a free certificate using the [Let's Encrypt](https://letsencrypt.org) service.
This example includes the `bin/letsencrypt.sh` script, which runs the `letsencrypt` client to create a full-chain certificate and private key, and stores them in a Docker volume. _Note:_ The script hard codes several `letsencrypt` options, one of which automatically agrees to the Let's Encrypt Terms of Service.
The following command will create a certificate chain and store it in a Docker volume named `mydomain-secrets`.
@@ -108,7 +108,7 @@ FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \
bin/letsencrypt.sh
```
Now run `up.sh` with the `--letsencrypt` option. You must also provide the name of the secrets volume and a password.
```bash
PASSWORD=a_secret SECRETS_VOLUME=mydomain-secrets notebook/up.sh --letsencrypt
@@ -117,7 +117,7 @@ PASSWORD=a_secret SECRETS_VOLUME=mydomain-secrets notebook/up.sh --letsencrypt
notebook/up.sh --letsencrypt --password a_secret --secrets mydomain-secrets
```
Be aware that Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment. You can avoid exhausting your limit by testing against the Let's Encrypt staging servers. To hit their staging servers, set the environment variable `CERT_SERVER=--staging`.
```bash
FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \
@@ -125,11 +125,11 @@ FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \
bin/letsencrypt.sh
```
Also, be aware that Let's Encrypt certificates are short lived (90 days). If you need them for a longer period of time, you'll need to manually setup a cron job to run the renewal steps. (You can reuse the command above.)
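A renewal can be scheduled with an ordinary cron entry that reuses the same invocation. A sketch for `crontab -e`, with the schedule and the path to your checkout as placeholders:

```bash
# run at 04:00 on the 1st of every month; adjust the path and schedule to taste
0 4 1 * * cd /path/to/docker-stacks/examples/docker-compose && FQDN=host.mydomain.com EMAIL=myemail@somewhere.com bin/letsencrypt.sh
```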
### Can I deploy to any Docker Machine host?
Yes, you should be able to deploy to any Docker Machine-controlled host. To make it easier to get up and running, this example includes scripts to provision Docker Machines to VirtualBox and IBM SoftLayer, but more scripts are welcome!
To create a Docker machine using a VirtualBox VM on local desktop:

View File

@@ -4,10 +4,10 @@ This folder contains a Makefile and a set of supporting files demonstrating how
## Prerequisites
-* make 3.81+
-* Ubuntu users: Be aware of [make 3.81 defect 483086](https://bugs.launchpad.net/ubuntu/+source/make-dfsg/+bug/483086) which exists in 14.04 LTS but is fixed in 15.04+
-* docker-machine 0.5.0+
-* docker 1.9.0+
+- make 3.81+
+- Ubuntu users: Be aware of [make 3.81 defect 483086](https://bugs.launchpad.net/ubuntu/+source/make-dfsg/+bug/483086) which exists in 14.04 LTS but is fixed in 15.04+
+- docker-machine 0.5.0+
+- docker 1.9.0+
## Quickstart
@@ -61,7 +61,7 @@ make letsencrypt-notebook
The first command creates a Docker volume named after the notebook container with a `-secrets` suffix. It then runs the `letsencrypt` client with a slew of options (one of which has you automatically agreeing to the Let's Encrypt Terms of Service, see the Makefile). The second command mounts the secrets volume and configures Jupyter to use the full-chain certificate and private key.
Be aware: Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment. You can avoid exhausting your limit by testing against the Let's Encrypt staging servers. To hit their staging servers, set the environment variable `CERT_SERVER=--staging`.
```bash
make letsencrypt FQDN=host.mydomain.com EMAIL=myemail@somewhere.com CERT_SERVER=--staging

View File

@@ -1,10 +1,8 @@
-OpenShift example
-=================
+# OpenShift example
This example provides templates for deploying the Jupyter Project docker-stacks images to OpenShift.
-Prerequisites
--------------
+## Prerequisites
Any OpenShift 3 environment. The templates were tested with OpenShift 3.7. It is believed they should work with at least OpenShift 3.6 or later.
@@ -14,8 +12,7 @@ OpenShift Online, the public hosted version of OpenShift from Red Hat has a quot
If you want to experiment with using Jupyter Notebooks in an OpenShift environment, you should instead use [Minishift](https://www.openshift.org/minishift/). Minishift provides you the ability to run OpenShift in a virtual machine on your own local computer.
-Loading the Templates
----------------------
+## Loading the Templates
To load the templates, login to OpenShift from the command line and run:
@@ -27,8 +24,7 @@ This should create the `jupyter-notebook` template
The template can be used from the command line using the `oc new-app` command, or from the OpenShift web console by selecting _Add to Project_. This `README` is only going to explain deploying from the command line.
-Deploying a Notebook
---------------------
+## Deploying a Notebook
To deploy a notebook from the command line using the template, run:
@@ -89,8 +85,7 @@ https://notebook-jupyter.abcd.pro-us-east-1.openshiftapps.com/
When prompted, enter the password for the notebook.
-Passing Template Parameters
----------------------------
+## Passing Template Parameters
To override the name for the notebook, the image used, and the password, you can pass template parameters using the `--param` option.
@@ -103,19 +98,18 @@ oc new-app --template jupyter-notebook \
You can deploy any of the Jupyter Project docker-stacks images.
-* jupyter/base-notebook
-* jupyter/r-notebook
-* jupyter/minimal-notebook
-* jupyter/scipy-notebook
-* jupyter/tensorflow-notebook
-* jupyter/datascience-notebook
-* jupyter/pyspark-notebook
-* jupyter/all-spark-notebook
+- jupyter/base-notebook
+- jupyter/r-notebook
+- jupyter/minimal-notebook
+- jupyter/scipy-notebook
+- jupyter/tensorflow-notebook
+- jupyter/datascience-notebook
+- jupyter/pyspark-notebook
+- jupyter/all-spark-notebook
If you don't care what version of the image is used, add the `:latest` tag at the end of the image name, otherwise use the hash corresponding to the image version you want to use.
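For example, deploying the SciPy image at its `latest` tag might look like the sketch below; the parameter name is an assumption about the template, not something shown in this changeset:

```bash
# NOTEBOOK_IMAGE is assumed to be the template parameter that selects the image
oc new-app --template jupyter-notebook \
  --param NOTEBOOK_IMAGE=jupyter/scipy-notebook:latest
```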
-Deleting the Notebook Instance
-------------------------------
+## Deleting the Notebook Instance
To delete the notebook instance, run `oc delete` using a label selector for the application name.
@@ -123,8 +117,7 @@ To delete the notebook instance, run `oc delete` using a label selector for the
oc delete all,configmap --selector app=mynotebook
```
-Enabling Jupyter Lab Interface
-------------------------------
+## Enabling Jupyter Lab Interface
To enable the Jupyter Lab interface for a deployed notebook set the `JUPYTER_ENABLE_LAB` environment variable.
@@ -134,8 +127,7 @@ oc set env dc/mynotebook JUPYTER_ENABLE_LAB=true
Setting the environment variable will trigger a new deployment and the Jupyter Lab interface will be enabled.
-Adding Persistent Storage
--------------------------
+## Adding Persistent Storage
You can upload notebooks and other files using the web interface of the notebook. Any uploaded files or changes you make to them will be lost when the notebook instance is restarted. If you want to save your work, you need to add persistent storage to the notebook. To add persistent storage run:
@@ -152,8 +144,7 @@ When you have deleted the notebook instance, if using a persistent volume, you w
oc delete pvc/mynotebook-data
```
-Customizing the Configuration
------------------------------
+## Customizing the Configuration
If you want to set any custom configuration for the notebook, you can edit the config map created by the template.
@@ -199,8 +190,7 @@ Start up the notebook again.
oc scale dc/mynotebook --replicas 1
```
-Changing the Notebook Password
-------------------------------
+## Changing the Notebook Password
The password for the notebook is supplied as a template parameter, or if not supplied will be automatically generated by the template. It will be passed into the container through an environment variable.
@@ -214,8 +204,7 @@ This will trigger a new deployment so ensure you have downloaded any work if not
If using a persistent volume, you could instead setup a password in the file `/home/jovyan/.jupyter/jupyter_notebook_config.py` as per guidelines in <https://jupyter-notebook.readthedocs.io/en/stable/public_server.html>.
-Deploying from a Custom Image
------------------------------
+## Deploying from a Custom Image
If you want to deploy a custom variant of the Jupyter Project docker-stacks images, you can replace the image name with that of your own. If the image is not stored on Docker Hub, but some other public image registry, prefix the name of the image with the image registry host details.

View File

@@ -1,5 +1,4 @@
-Custom Jupyter Notebook images
-==============================
+# Custom Jupyter Notebook images
This example provides scripts for building custom Jupyter Notebook images containing notebooks, data files, and with Python packages required by the notebooks already installed. The scripts provided work with the Source-to-Image tool and you can create the images from the command line on your own computer. Templates are also provided to enable running builds in OpenShift, as well as deploying the resulting image to OpenShift to make it available.
@@ -7,21 +6,19 @@ The build scripts, when used with the Source-to-Image tool, provide similar capa
For separate examples of using JupyterHub with OpenShift, see the project:
-* <https://github.com/jupyter-on-openshift/jupyterhub-quickstart>
+- <https://github.com/jupyter-on-openshift/jupyterhub-quickstart>
-Source-to-Image Project
------------------------
+## Source-to-Image Project
Source-to-Image (S2I) is an open source project which provides a tool for creating container images. It works by taking a base image, injecting additional source code or files into a running container created from the base image, and running a builder script in the container to process the source code or files to prepare the new image.
Details on the S2I tool, and executable binaries for Linux, macOS and Windows, can be found on GitHub at:
-* <https://github.com/openshift/source-to-image>
+- <https://github.com/openshift/source-to-image>
The tool is standalone, and can be used on any system which provides a docker daemon for running containers. To provide an end-to-end capability to build and deploy applications in containers, support for S2I is also integrated into container platforms such as OpenShift.
-Getting Started with S2I
-------------------------
+## Getting Started with S2I
As an example of how S2I can be used to create a custom image with a bundled set of notebooks, run:
@@ -67,8 +64,7 @@ Executing the command: jupyter notebook
Open your browser on the URL displayed, and you will find the notebooks from the Git repository and can work with them.
-The S2I Builder Scripts
------------------------
+## The S2I Builder Scripts
Normally when using S2I, the base image would be S2I enabled and contain the builder scripts needed to prepare the image and define how the application in the image should be run. As the Jupyter Project `docker-stacks` images are not S2I enabled (although they could be), in the above example the `--scripts-url` option has been used to specify that the example builder scripts contained in this directory of this Git repository should be used.
@@ -120,8 +116,7 @@ The `run` script in this directory is very simple and just runs the notebook app
exec start-notebook.sh "$@"
```
-Integration with OpenShift
---------------------------
+## Integration with OpenShift
The OpenShift platform provides integrated support for S2I type builds. Templates are provided for using the S2I build mechanism with the scripts in this directory. To load the templates run: