Fix more grammar issues

Ayaz Salikhov
2023-11-19 12:16:19 +01:00
parent d03229331a
commit d8c60bc42c
38 changed files with 85 additions and 88 deletions


@@ -1,4 +1,4 @@
 [flake8]
 max-line-length = 88
-select = C,E,F,W,B,B950
+select = C, E, F, W, B, B950
 extend-ignore = E203, E501, W503


@@ -52,7 +52,7 @@ body:
 Example:
 - Altair is a declarative statistical visualization library for Python, based on Vega and Vega-Lite, and the source is available on GitHub.
 - With Altair, you can spend more time understanding your data and its meaning.
-- Altair's API is simple, friendly and consistent and built on top of the powerful Vega-Lite visualization grammar.
+- Altair's API is simple, friendly, and consistent and built on top of the powerful Vega-Lite visualization grammar.
 - This elegant simplicity produces beautiful and effective visualizations with a minimal amount of code.
 validations:
 required: true


@@ -1,4 +1,4 @@
-name: Download a Docker image and its tags from GitHub artifacts, apply them and push the image to the Registry
+name: Download a Docker image and its tags from GitHub artifacts, apply them, and push the image to the Registry
 env:
 REGISTRY: quay.io


@@ -102,7 +102,7 @@ push-all: $(foreach I, $(ALL_IMAGES), push/$(I)) ## push all tagged images
 run-shell/%: ## run a bash in interactive mode in a stack
 docker run -it --rm "$(REGISTRY)/$(OWNER)/$(notdir $@)" $(SHELL)
-run-sudo-shell/%: ## run a bash in interactive mode as root in a stack
+run-sudo-shell/%: ## run bash in interactive mode as root in a stack
 docker run -it --rm --user root "$(REGISTRY)/$(OWNER)/$(notdir $@)" $(SHELL)


@@ -20,8 +20,7 @@ You can use a stack image to do any of the following (and more):
 You can try a [relatively recent build of the quay.io/jupyter/base-notebook image on mybinder.org](https://mybinder.org/v2/gh/jupyter/docker-stacks/main?urlpath=lab/tree/README.ipynb)
 by simply clicking the preceding link.
 Otherwise, the examples below may help you get started if you [have Docker installed](https://docs.docker.com/get-docker/),
-know [which Docker image](https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html) you want to use
-and want to launch a single Jupyter Application in a container.
+know [which Docker image](https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html) you want to use, and want to launch a single Jupyter Application in a container.
 The [User Guide on ReadTheDocs](https://jupyter-docker-stacks.readthedocs.io/en/latest/) describes additional uses and features in detail.
@@ -44,8 +43,8 @@ You can modify the port on which the container's port is exposed by [changing th
 Visiting `http://<hostname>:10000/?token=<token>` in a browser loads JupyterLab,
 where:
-- `hostname` is the name of the computer running Docker
-- `token` is the secret token printed in the console.
+- The `hostname` is the name of the computer running Docker
+- The `token` is the secret token printed in the console.
 The container remains intact for restart after the Server exits.
@@ -127,6 +126,6 @@ for information about how to contribute recipes, features, tests, and community-
 - [jupyter/repo2docker](https://github.com/jupyterhub/repo2docker) -
 Turn git repositories into Jupyter-enabled Docker Images
 - [openshift/source-to-image](https://github.com/openshift/source-to-image) -
-A tool for building artifacts from source and injecting them into docker images
+A tool for building artifacts from source code and injecting them into docker images
 - [jupyter-on-openshift/jupyter-notebooks](https://github.com/jupyter-on-openshift/jupyter-notebooks) -
 OpenShift compatible S2I builder for basic notebook images


@@ -29,7 +29,7 @@ language = "en"
 html_theme = "alabaster"
 html_static_path = ["_static"]
-# File above was generated using sphinx 6.2.1 with this command:
+# The file above was generated using sphinx 6.2.1 with this command:
 # sphinx-quickstart --project "docker-stacks" --author "Project Jupyter" -v "latest" -r "latest" -l en --no-sep --no-makefile --no-batchfile
 # These are custom options for this project


@@ -100,12 +100,12 @@ you merge a GitHub pull request to the main branch of your project.
 2. Create a new repository - make sure to use the correct namespace (account or organization).
 Enter the name of the image matching the one you entered when prompted with `stack_name` by the cookiecutter.
-![Docker Hub - Create Repository page with the name field set to "My specialized jupyter stack"](../_static/contributing/stacks/docker-repo-name.png)
+![Docker Hub - 'Create repository' page with the name field set to "My specialized jupyter stack"](../_static/contributing/stacks/docker-repo-name.png)
 3. Enter a description for your image.
 4. Click on your avatar in the top-right corner and select Account Settings.
-![Docker Hub page zoomed into the user's settings and accounts menu](../_static/contributing/stacks/docker-user-dropdown.png)
+![The Docker Hub page zoomed into the user's settings and accounts menu](../_static/contributing/stacks/docker-user-dropdown.png)
 5. Click on **Security** and then click on the **New Access Token** button.


@@ -22,7 +22,7 @@ defined in the [conftest.py](https://github.com/jupyter/docker-stacks/blob/main/
 ## Unit tests
-You can add a unit test if you want to run a python script in one of our images.
+You can add a unit test if you want to run a Python script in one of our images.
 You should create a `tests/<somestack>/units/` directory, if it doesn't already exist, and put your file there.
 Files in this folder will be executed in the container when tests are run.
 You can see an [example for the TensorFlow package here](https://github.com/jupyter/docker-stacks/blob/HEAD/tests/tensorflow-notebook/units/unit_tensorflow.py).


@@ -26,10 +26,10 @@ Here is a non-exhaustive list of things we do care about:
 - Does the package open additional ports, or add new web endpoints, that could be exploited?
 With all this in mind, we have a voting group, that consists of
-[mathbunnyru](https://github.com/mathbunnyru),
-[consideRatio](https://github.com/consideRatio),
-[yuvipanda](https://github.com/yuvipanda) and
-[manics](https://github.com/manics).
+[@mathbunnyru](https://github.com/mathbunnyru),
+[@consideRatio](https://github.com/consideRatio),
+[@yuvipanda](https://github.com/yuvipanda), and
+[@manics](https://github.com/manics).
 This voting group is responsible for accepting or declining new packages and stacks.
 The change is accepted, if there are **at least 2 positive votes**.


@@ -33,7 +33,7 @@ We are waiting for the first point release of the new LTS Ubuntu before updating
 Other images are directly or indirectly inherited from `docker-stacks-foundation`.
 We rebuild our images automatically each week, which means they frequently receive updates.
-When there's a security fix in the Ubuntu base image, it's a good idea to manually trigger images rebuild
+When there's a security fix in the Ubuntu base image, it's a good idea to manually trigger the rebuild of images
 [from the GitHub actions workflow UI](https://github.com/jupyter/docker-stacks/actions/workflows/docker.yml).
 Pushing the `Run Workflow` button will trigger this process.


@@ -233,7 +233,7 @@ This script is handy when you derive a new Dockerfile from this image and instal
 ### Others
 You can bypass the provided scripts and specify an arbitrary start command.
-If you do, keep in mind that features supported by the `start.sh` script and its kin will not function (e.g., `GRANT_SUDO`).
+If you do, keep in mind that features, supported by the `start.sh` script and its kin, will not function (e.g., `GRANT_SUDO`).
 ## Conda Environments


@@ -1,6 +1,6 @@
 FROM quay.io/jupyter/base-notebook
-# Name your environment and choose the python version
+# Name your environment and choose the Python version
 ARG env_name=python310
 ARG py_ver=3.10
@@ -40,5 +40,5 @@ RUN activate_custom_env_script=/usr/local/bin/before-notebook.d/activate_custom_
 USER ${NB_UID}
 # Making this environment default in Terminal
-# You can comment this line to keep the default environment in Terminal
+# You can comment this line to keep the default environment in a Terminal
 RUN echo "conda activate ${env_name}" >> "${HOME}/.bashrc"


@@ -15,7 +15,7 @@ ARG INSTANTCLIENT_MAJOR_VERSION=21
 ARG INSTANTCLIENT_VERSION=${INSTANTCLIENT_MAJOR_VERSION}.11.0.0.0-1
 ARG INSTANTCLIENT_URL=https://download.oracle.com/otn_software/linux/instantclient/2111000
-# Then install Oracle SQL Instant client, SQL+Plus, tools and JDBC.
+# Then install Oracle SQL Instant client, SQL+Plus, tools, and JDBC.
 # Note: You may need to change the URL to a newer version.
 # See: https://www.oracle.com/es/database/technologies/instant-client/linux-x86-64-downloads.html
 RUN mkdir "/opt/oracle"
@@ -39,7 +39,7 @@ RUN echo "ORACLE_HOME=/usr/lib/oracle/${INSTANTCLIENT_MAJOR_VERSION}/client64" >
echo "export PATH" >> "${HOME}/.bashrc" && \ echo "export PATH" >> "${HOME}/.bashrc" && \
echo "export LD_LIBRARY_PATH" >> "${HOME}/.bashrc" echo "export LD_LIBRARY_PATH" >> "${HOME}/.bashrc"
# Add credentials for /redacted/ using Oracle Db. # Add credentials for /redacted/ using Oracle DB.
WORKDIR /usr/lib/oracle/${INSTANTCLIENT_MAJOR_VERSION}/client64/lib/network/admin/ WORKDIR /usr/lib/oracle/${INSTANTCLIENT_MAJOR_VERSION}/client64/lib/network/admin/
# Add a wildcard `[]` on the last letter of the filename to avoid throwing an error if the file does not exist. # Add a wildcard `[]` on the last letter of the filename to avoid throwing an error if the file does not exist.
# See: https://stackoverflow.com/questions/31528384/conditional-copy-add-in-dockerfile # See: https://stackoverflow.com/questions/31528384/conditional-copy-add-in-dockerfile


@@ -132,7 +132,7 @@ Sometimes it is helpful to run the Jupyter instance behind an nginx proxy, for e
 and want nginx to help improve server performance in managing the connections
 Here is a [quick example of NGINX configuration](https://gist.github.com/cboettig/8643341bd3c93b62b5c2) to get started.
-You'll need a server, a `.crt` and `.key` file for your server, and `docker` & `docker-compose` installed.
+You'll need a server, a `.crt`, and a `.key` file for your server, and `docker` & `docker-compose` installed.
 Then download the files at that gist and run `docker-compose up` to test it out.
 Customize the `nginx.conf` file to set the desired paths and add other services.
@@ -506,9 +506,9 @@ The following recipe demonstrates how to add functionality to read from and writ
 You can now use `pyodbc` and `sqlalchemy` to interact with the database.
-Pre-built images are hosted in the [realiserad/jupyter-docker-mssql](https://github.com/Realiserad/jupyter-docker-mssql) repository.
+Pre-built images are hosted in the [Realiserad/jupyter-docker-mssql](https://github.com/Realiserad/jupyter-docker-mssql) repository.
-## Add Oracle SQL Instant client, SQL\*Plus and other tools (Version 21.x)
+## Add Oracle SQL Instant client, SQL\*Plus, and other tools (Version 21.x)
 ```{note}
 This recipe only works for x86_64 architecture.
@@ -520,7 +520,7 @@ This recipe installs version `21.11.0.0.0`.
 Nonetheless, go to the [Oracle Instant Client Download page](https://www.oracle.com/es/database/technologies/instant-client/linux-x86-64-downloads.html) for the complete list of versions available.
 You may need to perform different steps for older versions;
-the may be explained on the "Installation instructions" section of the Downloads page.
+they may be explained in the "Installation instructions" section of the Downloads page.
 ```{literalinclude} recipe_code/oracledb.dockerfile
 :language: docker


@@ -170,11 +170,11 @@ Any other changes made in the container will be lost.
 ## Using Binder
-[Binder](https://mybinder.org/) is a service that allows you to create and share custom computing environments for projects in version control.
+A [Binder](https://mybinder.org/) is a service that allows you to create and share custom computing environments for projects in version control.
 You can use any of the Jupyter Docker Stacks images as a basis for a Binder-compatible Dockerfile.
 See the
 [docker-stacks example](https://mybinder.readthedocs.io/en/latest/examples/sample_repos.html#using-a-docker-image-from-the-jupyter-docker-stacks-repository) and
-[Using a Dockerfile](https://mybinder.readthedocs.io/en/latest/tutorials/dockerfile.html) sections in the
+[Using a Dockerfile](https://mybinder.readthedocs.io/en/latest/tutorials/dockerfile.html) section in the
 [Binder documentation](https://mybinder.readthedocs.io/en/latest/index.html) for instructions.
 ## Using JupyterHub


@@ -14,7 +14,7 @@ This page provides details about features specific to one or more images.
 Every new spark context that is created is put onto an incrementing port (i.e. 4040, 4041, 4042, etc.), and it might be necessary to open multiple ports.
 ```
-For example: `docker run --detach -p 8888:8888 -p 4040:4040 -p 4041:4041 quay.io/jupyter/pyspark-notebook`.
+For example, `docker run --detach -p 8888:8888 -p 4040:4040 -p 4041:4041 quay.io/jupyter/pyspark-notebook`.
 #### IPython low-level output capture and forward
@@ -53,7 +53,7 @@ You can build a `pyspark-notebook` image with a different `Spark` version by ove
 - This version needs to match the version supported by the Spark distribution used above.
 - See [Spark Overview](https://spark.apache.org/docs/latest/#downloading) and [Ubuntu packages](https://packages.ubuntu.com/search?keywords=openjdk).
-- Starting with _Spark >= 3.2_, the distribution file might contain Scala version.
+- Starting with _Spark >= 3.2_, the distribution file might contain the Scala version.
 For example, here is how to build a `pyspark-notebook` image with Spark `3.2.0`, Hadoop `3.2`, and OpenJDK `11`.
@@ -161,7 +161,7 @@ Connection to Spark Cluster on **[Standalone Mode](https://spark.apache.org/docs
 0. Verify that the docker image (check the Dockerfile) and the Spark Cluster, which is being
 deployed, run the same version of Spark.
 1. [Deploy Spark in Standalone Mode](https://spark.apache.org/docs/latest/spark-standalone.html).
-2. Run the Docker container with `--net=host` in a location that is network addressable by all of
+2. Run the Docker container with `--net=host` in a location that is network-addressable by all of
 your Spark workers.
 (This is a [Spark networking requirement](https://spark.apache.org/docs/latest/cluster-overview.html#components).)
@@ -174,7 +174,7 @@ Connection to Spark Cluster on **[Standalone Mode](https://spark.apache.org/docs
 ##### Standalone Mode in Python
 The **same Python version** needs to be used on the notebook (where the driver is located) and on the Spark workers.
-The python version used at the driver and worker side can be adjusted by setting the environment variables `PYSPARK_PYTHON` and/or `PYSPARK_DRIVER_PYTHON`,
+The Python version used on the driver and worker side can be adjusted by setting the environment variables `PYSPARK_PYTHON` and/or `PYSPARK_DRIVER_PYTHON`,
 see [Spark Configuration][spark-conf] for more information.
 ```python


@@ -13,17 +13,17 @@ SHELL ["/bin/bash", "-o", "pipefail", "-c"]
 USER root
-# Install all OS dependencies for Server that starts but lacks all
+# Install all OS dependencies for the Server that starts but lacks all
 # features (e.g., download as all possible file formats)
 RUN apt-get update --yes && \
 apt-get install --yes --no-install-recommends \
 fonts-liberation \
 # - pandoc is used to convert notebooks to html files
-# it's not present in aarch64 ubuntu image, so we install it here
+# it's not present in the aarch64 Ubuntu image, so we install it here
 pandoc \
 # - run-one - a wrapper script that runs no more
 # than one unique instance of some command with a unique set of arguments,
-# we use `run-one-constantly` to support `RESTARTABLE` option
+# we use `run-one-constantly` to support the `RESTARTABLE` option
 run-one && \
 apt-get clean && rm -rf /var/lib/apt/lists/*
@@ -64,7 +64,7 @@ USER root
 RUN fix-permissions /etc/jupyter/
 # HEALTHCHECK documentation: https://docs.docker.com/engine/reference/builder/#healthcheck
-# This healtcheck works well for `lab`, `notebook`, `nbclassic`, `server` and `retro` jupyter commands
+# This healtcheck works well for `lab`, `notebook`, `nbclassic`, `server`, and `retro` jupyter commands
 # https://github.com/jupyter/docker-stacks/issues/915#issuecomment-1068528799
 HEALTHCHECK --interval=5s --timeout=3s --start-period=5s --retries=3 \
 CMD /etc/jupyter/docker_healthcheck.py || exit 1


@@ -7,7 +7,7 @@ from pathlib import Path
 import requests
-# A number of operations below deliberately don't check for possible errors
+# Several operations below deliberately don't check for possible errors
 # As this is a healthcheck, it should succeed or raise an exception on error
 runtime_dir = Path("/home/") / os.environ["NB_USER"] / ".local/share/jupyter/runtime/"


@@ -34,7 +34,7 @@ if "GEN_CERT" in os.environ:
 if not cnf_file.exists():
 cnf_file.write_text(OPENSSL_CONFIG)
-# Generate a certificate if one doesn't exist on disk
+# Generate a certificate if one doesn't exist on a disk
 subprocess.check_call(
 [
 "openssl",


@@ -34,7 +34,7 @@ command.append(jupyter_command)
if "NOTEBOOK_ARGS" in os.environ: if "NOTEBOOK_ARGS" in os.environ:
command += shlex.split(os.environ["NOTEBOOK_ARGS"]) command += shlex.split(os.environ["NOTEBOOK_ARGS"])
# Pass through any other args we were passed on the commandline # Pass through any other args we were passed on the command line
command += sys.argv[1:] command += sys.argv[1:]
# Execute the command! # Execute the command!


@@ -18,12 +18,12 @@ SHELL ["/bin/bash", "-o", "pipefail", "-c"]
 USER root
-# Install all OS dependencies for Server that starts
+# Install all OS dependencies for the Server that starts
 # but lacks all features (e.g., download as all possible file formats)
 ENV DEBIAN_FRONTEND noninteractive
 RUN apt-get update --yes && \
-# - apt-get upgrade is run to patch known vulnerabilities in apt-get packages as
-# the ubuntu base image is rebuilt too seldom sometimes (less than once a month)
+# - `apt-get upgrade` is run to patch known vulnerabilities in apt-get packages as
+# the Ubuntu base image is rebuilt too seldom sometimes (less than once a month)
 apt-get upgrade --yes && \
 apt-get install --yes --no-install-recommends \
 # - bzip2 is necessary to extract the micromamba executable.
@@ -76,19 +76,19 @@ RUN echo "auth requisite pam_deny.so" >> /etc/pam.d/su && \
 USER ${NB_UID}
-# Pin python version here, or set it to "default"
+# Pin Python version here, or set it to "default"
 ARG PYTHON_VERSION=3.11
 # Setup work directory for backward-compatibility
 RUN mkdir "/home/${NB_USER}/work" && \
 fix-permissions "/home/${NB_USER}"
-# Download and install Micromamba, and initialize Conda prefix.
+# Download and install Micromamba, and initialize the Conda prefix.
 # <https://github.com/mamba-org/mamba#micromamba>
 # Similar projects using Micromamba:
 # - Micromamba-Docker: <https://github.com/mamba-org/micromamba-docker>
 # - repo2docker: <https://github.com/jupyterhub/repo2docker>
-# Install Python, Mamba and jupyter_core
+# Install Python, Mamba, and jupyter_core
 # Cleanup temporary files and remove Micromamba
 # Correct permissions
 # Do all this in a single RUN command to avoid duplicating all of the


@@ -1,16 +1,14 @@
 #!/bin/bash
-# set permissions on a directory
-# after any installation, if a directory needs to be (human) user-writable,
-# run this script on it.
-# It will make everything in the directory owned by the group ${NB_GID}
-# and writable by that group.
+# Set permissions on a directory
+# After any installation, if a directory needs to be (human) user-writable, run this script on it.
+# It will make everything in the directory owned by the group ${NB_GID} and writable by that group.
 # Deployments that want to set a specific user id can preserve permissions
 # by adding the `--group-add users` line to `docker run`.
-# uses find to avoid touching files that already have the right permissions,
-# which would cause massive image explosion
-# right permissions are:
+# Uses find to avoid touching files that already have the right permissions,
+# which would cause a massive image explosion
+# Right permissions are:
 # group=${NB_GID}
 # AND permissions include group rwX (directory-execute)
 # AND directories have setuid,setgid bits set


@@ -10,7 +10,7 @@ if [ "$#" -ne 1 ]; then
 return 1
 fi
-if [[ ! -d "${1}" ]] ; then
+if [[ ! -d "${1}" ]]; then
 echo "Directory ${1} doesn't exist or is not a directory"
 return 1
 fi
@@ -25,16 +25,16 @@ for f in "${1}/"*; do
 # shellcheck disable=SC1090
 source "${f}"
 # shellcheck disable=SC2181
-if [ $? -ne 0 ] ; then
+if [ $? -ne 0 ]; then
 echo "${f} has failed, continuing execution"
 fi
 ;;
 *)
-if [ -x "${f}" ] ; then
+if [ -x "${f}" ]; then
 echo "Running executable: ${f}"
 "${f}"
 # shellcheck disable=SC2181
-if [ $? -ne 0 ] ; then
+if [ $? -ne 0 ]; then
 echo "${f} has failed, continuing execution"
 fi
 else
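The hunk above only normalizes `if … ; then` spacing in the hook runner; its behavior is unchanged: every hook in the directory runs in turn, and a failing hook is reported but does not abort the remaining hooks. Below is a minimal standalone sketch of that continue-on-failure pattern; the temporary directory and the `10-ok.sh`/`20-fail.sh`/`30-ok.sh` hook names are made up for illustration and are not part of the repository.

```shell
#!/bin/bash
# Sketch of the run-hooks.sh continue-on-failure pattern:
# run every executable in a directory; report failures but keep going.
# The hook directory and script names are illustrative only.
hookdir=$(mktemp -d)
printf '#!/bin/sh\necho "first hook ran"\n' > "${hookdir}/10-ok.sh"
printf '#!/bin/sh\nexit 1\n' > "${hookdir}/20-fail.sh"
printf '#!/bin/sh\necho "last hook ran"\n' > "${hookdir}/30-ok.sh"
chmod +x "${hookdir}"/*.sh

results=()
for f in "${hookdir}"/*; do
    if [ -x "${f}" ]; then
        echo "Running executable: ${f}"
        if "${f}"; then
            results+=("$(basename "${f}"): ok")
        else
            # Mirror run-hooks.sh: log the failure, then continue with the next hook
            echo "${f} has failed, continuing execution"
            results+=("$(basename "${f}"): failed")
        fi
    fi
done
rm -rf "${hookdir}"
```

In the real script, sourced `*.sh` hooks get the same treatment via the `$?` check after `source`, which is why the `shellcheck disable=SC2181` directives appear in the hunk.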


@@ -43,7 +43,7 @@ source /usr/local/bin/run-hooks.sh /usr/local/bin/start-notebook.d
 # things before we run the command passed to start.sh as the desired user
 # (NB_USER).
 #
-if [ "$(id -u)" == 0 ] ; then
+if [ "$(id -u)" == 0 ]; then
 # Environment variables:
 # - NB_USER: the desired username and associated home folder
 # - NB_UID: the desired user id
@@ -51,11 +51,11 @@ if [ "$(id -u)" == 0 ] ; then
 # - NB_GROUP: a group name we want for the group
 # - GRANT_SUDO: a boolean ("1" or "yes") to grant the user sudo rights
 # - CHOWN_HOME: a boolean ("1" or "yes") to chown the user's home folder
-# - CHOWN_EXTRA: a comma separated list of paths to chown
+# - CHOWN_EXTRA: a comma-separated list of paths to chown
 # - CHOWN_HOME_OPTS / CHOWN_EXTRA_OPTS: arguments to the chown commands
-# Refit the jovyan user to the desired the user (NB_USER)
-if id jovyan &> /dev/null ; then
+# Refit the jovyan user to the desired user (NB_USER)
+if id jovyan &> /dev/null; then
 if ! usermod --home "/home/${NB_USER}" --login "${NB_USER}" jovyan 2>&1 | grep "no changes" > /dev/null; then
 _log "Updated the jovyan user:"
 _log "- username: jovyan -> ${NB_USER}"
@@ -78,7 +78,7 @@ if [ "$(id -u)" == 0 ] ; then
 useradd --no-log-init --home "/home/${NB_USER}" --shell /bin/bash --uid "${NB_UID}" --gid "${NB_GID}" --groups 100 "${NB_USER}"
 fi
-# Move or symlink the jovyan home directory to the desired users home
+# Move or symlink the jovyan home directory to the desired user's home
 # directory if it doesn't already exist, and update the current working
 # directory to the new location if needed.
 if [[ "${NB_USER}" != "jovyan" ]]; then
@@ -106,7 +106,7 @@ if [ "$(id -u)" == 0 ] ; then
fi fi
fi fi
# Optionally ensure the desired user get filesystem ownership of it's home # Optionally ensure the desired user gets filesystem ownership of its home
# folder and/or additional folders # folder and/or additional folders
if [[ "${CHOWN_HOME}" == "1" || "${CHOWN_HOME}" == "yes" ]]; then if [[ "${CHOWN_HOME}" == "1" || "${CHOWN_HOME}" == "yes" ]]; then
_log "Ensuring /home/${NB_USER} is owned by ${NB_UID}:${NB_GID} ${CHOWN_HOME_OPTS:+(chown options: ${CHOWN_HOME_OPTS})}" _log "Ensuring /home/${NB_USER} is owned by ${NB_UID}:${NB_GID} ${CHOWN_HOME_OPTS:+(chown options: ${CHOWN_HOME_OPTS})}"
@@ -121,7 +121,7 @@ if [ "$(id -u)" == 0 ] ; then
done done
fi fi
# Update potentially outdated environment variables since image build # Update potentially outdated environment variables since the image build
export XDG_CACHE_HOME="/home/${NB_USER}/.cache" export XDG_CACHE_HOME="/home/${NB_USER}/.cache"
# Prepend ${CONDA_DIR}/bin to sudo secure_path # Prepend ${CONDA_DIR}/bin to sudo secure_path
@@ -163,7 +163,7 @@ if [ "$(id -u)" == 0 ] ; then
# - We use the `--set-home` flag to set the HOME variable appropriately. # - We use the `--set-home` flag to set the HOME variable appropriately.
# #
# - To reduce the default list of variables deleted by sudo, we could have # - To reduce the default list of variables deleted by sudo, we could have
# used `env_delete` from /etc/sudoers. It has higher priority than the # used `env_delete` from /etc/sudoers. It has a higher priority than the
# `--preserve-env` flag and the `env_keep` configuration. # `--preserve-env` flag and the `env_keep` configuration.
# #
# - We preserve LD_LIBRARY_PATH, PATH and PYTHONPATH explicitly. Note however that sudo # - We preserve LD_LIBRARY_PATH, PATH and PYTHONPATH explicitly. Note however that sudo
@@ -186,7 +186,7 @@ else
# Attempt to ensure the user uid we currently run as has a named entry in # Attempt to ensure the user uid we currently run as has a named entry in
# the /etc/passwd file, as it avoids software crashing on hard assumptions # the /etc/passwd file, as it avoids software crashing on hard assumptions
# on such entry. Writing to the /etc/passwd was allowed for the root group # on such entry. Writing to the /etc/passwd was allowed for the root group
# from the Dockerfile during build. # from the Dockerfile during the build.
# #
# ref: https://github.com/jupyter/docker-stacks/issues/552 # ref: https://github.com/jupyter/docker-stacks/issues/552
if ! whoami &> /dev/null; then if ! whoami &> /dev/null; then
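The `CHOWN_EXTRA` variable documented above is a comma-separated list of paths. A minimal sketch of how such a list can be split into individual paths (the paths below are hypothetical, and the real `start.sh` may split the list differently):

```shell
# Hypothetical CHOWN_EXTRA value; in practice it comes from `docker run --env`.
CHOWN_EXTRA="/opt/data,/srv/shared"

# Replace commas with spaces and iterate over the resulting words.
# Note: this simple split assumes the paths contain no commas or spaces.
for extra_dir in $(echo "${CHOWN_EXTRA}" | tr ',' ' '); do
    echo "would chown: ${extra_dir}"
done
```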


@@ -13,7 +13,7 @@ SHELL ["/bin/bash", "-o", "pipefail", "-c"]
 USER root
-# Install all OS dependencies for fully functional Server
+# Install all OS dependencies for a fully functional Server
 RUN apt-get update --yes && \
 apt-get install --yes --no-install-recommends \
 # Common useful utilities
@@ -43,7 +43,7 @@ RUN update-alternatives --install /usr/bin/nano nano /bin/nano-tiny 10
 # Switch back to jovyan to avoid accidental container runs as root
 USER ${NB_UID}
-# Add R mimetype option to specify how the plot returns from R to the browser
+# Add an R mimetype option to specify how the plot returns from R to the browser
 COPY --chown=${NB_UID}:${NB_GID} Rprofile.site /opt/conda/lib/R/etc/
 # Add setup scripts that may be used by downstream images or inherited images


@@ -1,7 +1,7 @@
 #!/bin/bash
 set -exuo pipefail
 # Requirements:
-# - Run as non-root user
+# - Run as a non-root user
 # - The JULIA_PKGDIR environment variable is set
 # - Julia is already set up, with the setup-julia.bash command


@@ -15,7 +15,7 @@ USER root
 # Spark dependencies
 # Default values can be overridden at build time
-# (ARGS are in lower case to distinguish them from ENV)
+# (ARGS are in lowercase to distinguish them from ENV)
 ARG spark_version="3.5.0"
 ARG hadoop_version="3"
 ARG scala_version
@@ -35,7 +35,7 @@ RUN apt-get update --yes && \
 WORKDIR /tmp
 # You need to use https://archive.apache.org/dist/ website if you want to download old Spark versions
-# But it seems to be slower, that's why we use recommended site for download
+# But it seems to be slower, that's why we use the recommended site for download
 RUN if [ -z "${scala_version}" ]; then \
 curl --progress-bar --location --output "spark.tgz" \
 "https://dlcdn.apache.org/spark/spark-${APACHE_SPARK_VERSION}/spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz"; \


@@ -66,7 +66,7 @@ RUN mamba install --yes \
 fix-permissions "${CONDA_DIR}" && \
 fix-permissions "/home/${NB_USER}"
-# Install facets which does not have a pip or conda package at the moment
+# Install the facets package, which does not have a `pip` or `conda-forge` package at the moment
 WORKDIR /tmp
 RUN git clone https://github.com/PAIR-code/facets && \
 jupyter nbclassic-extension install facets/facets-dist/ --sys-prefix && \


@@ -84,7 +84,7 @@ class SHATagger(TaggerInterface):
 ### Manifest
 `ManifestHeader` is a build manifest header.
-It contains information about `Build datetime`, `Docker image size`, and `Git commit` info.
+It contains the following sections: `Build timestamp`, `Docker image size`, and `Git commit` info.
 All the other manifest classes are inherited from `ManifestInterface`:


@@ -49,7 +49,7 @@ class ManifestHeader:
 ## Build Info
-- Build datetime: {build_timestamp}
+- Build timestamp: {build_timestamp}
 - Docker image: `{registry}/{owner}/{short_image_name}:{commit_hash_tag}`
 - Docker image size: {image_size}
 - Git commit SHA: [{commit_hash}](https://github.com/jupyter/docker-stacks/commit/{commit_hash})


@@ -51,7 +51,7 @@ def update_monthly_wiki_page(
 def get_manifest_timestamp(manifest_file: Path) -> str:
     file_content = manifest_file.read_text()
-    pos = file_content.find("Build datetime: ")
+    pos = file_content.find("Build timestamp: ")
-    return file_content[pos + 16 : pos + 36]
+    return file_content[pos + 17 : pos + 37]  # "Build timestamp: " is 17 chars, not 16
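Since the label changes from `Build datetime: ` (16 characters) to `Build timestamp: ` (17 characters), the slice offsets after `find` must account for the longer label. A quick check against a synthetic manifest fragment (the values here are hypothetical; real manifests are produced by the tagging scripts):

```python
# Synthetic manifest fragment with a hypothetical 20-character timestamp.
file_content = "## Build Info\n- Build timestamp: 2023-11-19T12:16:19Z\n- Docker image size: 1GB\n"

label = "Build timestamp: "
assert len(label) == 17  # "Build datetime: " was only 16 characters

# The timestamp starts immediately after the label.
pos = file_content.find(label)
print(file_content[pos + 17 : pos + 37])  # -> 2023-11-19T12:16:19Z
```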


@@ -16,7 +16,7 @@ from tagging.manifests import ManifestHeader, ManifestInterface
 LOGGER = logging.getLogger(__name__)
-# This would actually be manifest creation timestamp
+# We use a manifest creation timestamp, which happens right after a build
 BUILD_TIMESTAMP = datetime.datetime.utcnow().isoformat()[:-7] + "Z"
 MARKDOWN_LINE_BREAK = "<br />"
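For reference, the `BUILD_TIMESTAMP` expression above yields a second-resolution UTC stamp. A reproducible sketch with a fixed datetime (the value is hypothetical, standing in for `utcnow()`):

```python
import datetime

# Fixed value instead of utcnow() so the result is reproducible.
now = datetime.datetime(2023, 11, 19, 12, 16, 19, 123456)

# isoformat() -> "2023-11-19T12:16:19.123456"; [:-7] strips the 7-character
# ".123456" microseconds suffix, and "Z" marks the time as UTC.
# (Caveat: if microseconds were exactly 0, isoformat() would omit them
# entirely and the slice would cut into the seconds.)
timestamp = now.isoformat()[:-7] + "Z"
print(timestamp)  # -> 2023-11-19T12:16:19Z
```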


@@ -42,7 +42,7 @@ def test_nb_user_change(container: TrackedContainer) -> None:
     # Use sleep, not wait, because the container sleeps forever.
     time.sleep(1)
     LOGGER.info(
-        f"Checking if home folder of {nb_user} contains the hidden '.jupyter' folder with appropriate permissions ..."
+        f"Checking if the home folder of {nb_user} contains the hidden '.jupyter' folder with appropriate permissions ..."
     )
     command = f'stat -c "%F %U %G" /home/{nb_user}/.jupyter'
     expected_output = f"directory {nb_user} users"


@@ -15,8 +15,8 @@ The goal is to detect import errors that can be caused by incompatibilities betw
 This module checks dynamically, through the `CondaPackageHelper`,
 only the requested packages i.e. packages requested by `mamba install` in the `Dockerfile`s.
 This means that it does not check dependencies.
-This choice is a tradeoff to cover the main requirements while achieving reasonable test duration.
+This choice is a tradeoff to cover the main requirements while achieving a reasonable test duration.
-However it could be easily changed (or completed) to cover also dependencies.
+However, it could be easily changed (or completed) to cover dependencies as well.
 Use `package_helper.installed_packages()` instead of `package_helper.requested_packages()`.
 Example:
@@ -145,7 +145,7 @@ def _check_import_packages(
     """Test if packages can be imported
     Note: using a list of packages instead of a fixture for the list of packages
-    since pytest prevents use of multiple yields
+    since pytest prevents the use of multiple yields
     """
     failures = {}
     LOGGER.info("Testing the import of packages ...")
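The import check this docstring describes amounts to attempting one import per requested package and recording the failures. A self-contained sketch of the idea (the package list and name mapping are hypothetical; the real test derives them via `CondaPackageHelper`):

```python
import importlib

# Distribution names often differ from import names; map them when they do
# (hypothetical examples, for illustration only).
PACKAGE_MAPPING = {"pyyaml": "yaml", "scikit-learn": "sklearn"}
requested_packages = ["json", "logging", "no-such-package"]

failures = {}
for package in requested_packages:
    module_name = PACKAGE_MAPPING.get(package, package)
    try:
        importlib.import_module(module_name)
    except ImportError as err:
        failures[package] = err

print(sorted(failures))  # -> ['no-such-package']
```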


@@ -83,7 +83,7 @@ def run_source_in_dir(
     cont_data_dir = "/home/jovyan/data"
     # https://forums.docker.com/t/all-files-appear-as-executable-in-file-paths-using-bind-mount/99921
-    # Unfortunately, Docker treats all files in mounter dir as executable files
+    # Unfortunately, Docker treats all files in the mounted dir as executable files
-    # So we make a copy of mounted dir inside a container
+    # So we make a copy of the mounted dir inside a container
     command = (
         "cp -r /home/jovyan/data/ /home/jovyan/data-copy/ &&"
         "source /usr/local/bin/run-hooks.sh /home/jovyan/data-copy/" + command_suffix


@@ -74,7 +74,7 @@ def test_nb_user_change(container: TrackedContainer) -> None:
     ), f"Bad owner for the {nb_user} home folder {output}, expected {expected_output}"
     LOGGER.info(
-        f"Checking if home folder of {nb_user} contains the 'work' folder with appropriate permissions ..."
+        f"Checking if the home folder of {nb_user} contains the 'work' folder with appropriate permissions ..."
     )
     command = f'stat -c "%F %U %G" /home/{nb_user}/work'
     expected_output = f"directory {nb_user} users"
@@ -86,7 +86,7 @@ def test_nb_user_change(container: TrackedContainer) -> None:
 def test_chown_extra(container: TrackedContainer) -> None:
-    """Container should change the UID/GID of a comma separated
+    """Container should change the UID/GID of a comma-separated
     CHOWN_EXTRA list of folders."""
     logs = container.run_and_wait(
         timeout=120,  # chown is slow so give it some time
@@ -184,7 +184,7 @@ def test_group_add(container: TrackedContainer) -> None:
 def test_set_uid(container: TrackedContainer) -> None:
     """Container should run with the specified uid and NB_USER.
     The /home/jovyan directory will not be writable since it's owned by 1000:users.
-    Additionally verify that "--group-add=users" is suggested in a warning to restore
+    Additionally, verify that "--group-add=users" is suggested in a warning to restore
     write access.
     """
     logs = container.run_and_wait(
@@ -270,7 +270,7 @@ def test_jupyter_env_vars_to_unset(
 def test_secure_path(container: TrackedContainer, tmp_path: pathlib.Path) -> None:
-    """Make sure that the sudo command has conda's python (not system's) on path.
+    """Make sure that the sudo command has conda's python (not system's) on PATH.
     See <https://github.com/jupyter/docker-stacks/issues/1053>.
     """
     d = tmp_path / "data"


@@ -19,7 +19,7 @@ def test_cython(container: TrackedContainer) -> None:
             "start.sh",
             "bash",
             "-c",
-            # We copy our data to temporary folder to be able to modify the directory
+            # We copy our data to a temporary folder to be able to modify the directory
             f"cp -r {cont_data_dir}/ /tmp/test/ && cd /tmp/test && python3 setup.py build_ext",
         ],
     )


@@ -17,7 +17,7 @@ THIS_DIR = Path(__file__).parent.resolve()
         (
             "matplotlib_1.py",
             "test.png",
-            "Test that matplotlib is able to plot a graph and write it as an image ...",
+            "Test that matplotlib can plot a graph and write it as an image ...",
         ),
         (
             "matplotlib_fonts_1.py",