mirror of
https://github.com/jupyter/docker-stacks.git
synced 2025-10-07 10:04:03 +00:00
Fix more grammar issues
.flake8
@@ -1,4 +1,4 @@
 [flake8]
 max-line-length = 88
-select = C,E,F,W,B,B950
+select = C, E, F, W, B, B950
 extend-ignore = E203, E501, W503
.github/ISSUE_TEMPLATE/feature_request.yml
@@ -52,7 +52,7 @@ body:
       Example:
       - Altair is a declarative statistical visualization library for Python, based on Vega and Vega-Lite, and the source is available on GitHub.
       - With Altair, you can spend more time understanding your data and its meaning.
-      - Altair's API is simple, friendly and consistent and built on top of the powerful Vega-Lite visualization grammar.
+      - Altair's API is simple, friendly, and consistent and built on top of the powerful Vega-Lite visualization grammar.
       - This elegant simplicity produces beautiful and effective visualizations with a minimal amount of code.
   validations:
     required: true
.github/workflows/docker-tag-push.yml
@@ -1,4 +1,4 @@
-name: Download a Docker image and its tags from GitHub artifacts, apply them and push the image to the Registry
+name: Download a Docker image and its tags from GitHub artifacts, apply them, and push the image to the Registry
 
 env:
   REGISTRY: quay.io
Makefile
@@ -102,7 +102,7 @@ push-all: $(foreach I, $(ALL_IMAGES), push/$(I)) ## push all tagged images
 
 run-shell/%: ## run a bash in interactive mode in a stack
 	docker run -it --rm "$(REGISTRY)/$(OWNER)/$(notdir $@)" $(SHELL)
 
-run-sudo-shell/%: ## run a bash in interactive mode as root in a stack
+run-sudo-shell/%: ## run bash in interactive mode as root in a stack
 	docker run -it --rm --user root "$(REGISTRY)/$(OWNER)/$(notdir $@)" $(SHELL)
@@ -20,8 +20,7 @@ You can use a stack image to do any of the following (and more):
 You can try a [relatively recent build of the quay.io/jupyter/base-notebook image on mybinder.org](https://mybinder.org/v2/gh/jupyter/docker-stacks/main?urlpath=lab/tree/README.ipynb)
 by simply clicking the preceding link.
 Otherwise, the examples below may help you get started if you [have Docker installed](https://docs.docker.com/get-docker/),
-know [which Docker image](https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html) you want to use
-and want to launch a single Jupyter Application in a container.
+know [which Docker image](https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html) you want to use, and want to launch a single Jupyter Application in a container.
 
 The [User Guide on ReadTheDocs](https://jupyter-docker-stacks.readthedocs.io/en/latest/) describes additional uses and features in detail.
@@ -44,8 +43,8 @@ You can modify the port on which the container's port is exposed by [changing th
 Visiting `http://<hostname>:10000/?token=<token>` in a browser loads JupyterLab,
 where:
 
-- `hostname` is the name of the computer running Docker
-- `token` is the secret token printed in the console.
+- The `hostname` is the name of the computer running Docker
+- The `token` is the secret token printed in the console.
 
 The container remains intact for restart after the Server exits.
@@ -127,6 +126,6 @@ for information about how to contribute recipes, features, tests, and community-
 - [jupyter/repo2docker](https://github.com/jupyterhub/repo2docker) -
   Turn git repositories into Jupyter-enabled Docker Images
 - [openshift/source-to-image](https://github.com/openshift/source-to-image) -
-  A tool for building artifacts from source and injecting them into docker images
+  A tool for building artifacts from source code and injecting them into docker images
 - [jupyter-on-openshift/jupyter-notebooks](https://github.com/jupyter-on-openshift/jupyter-notebooks) -
   OpenShift compatible S2I builder for basic notebook images
@@ -29,7 +29,7 @@ language = "en"
 html_theme = "alabaster"
 html_static_path = ["_static"]
 
-# File above was generated using sphinx 6.2.1 with this command:
+# The file above was generated using sphinx 6.2.1 with this command:
 # sphinx-quickstart --project "docker-stacks" --author "Project Jupyter" -v "latest" -r "latest" -l en --no-sep --no-makefile --no-batchfile
 # These are custom options for this project
@@ -100,12 +100,12 @@ you merge a GitHub pull request to the main branch of your project.
 2. Create a new repository - make sure to use the correct namespace (account or organization).
    Enter the name of the image matching the one you entered when prompted with `stack_name` by the cookiecutter.
 
    ![Creating Repository, step 2. Set correct namespace and name](image)
 
 3. Enter a description for your image.
 4. Click on your avatar in the top-right corner and select Account Settings.
 
    ![Account Settings](image)
 
 5. Click on **Security** and then click on the **New Access Token** button.
@@ -22,7 +22,7 @@ defined in the [conftest.py](https://github.com/jupyter/docker-stacks/blob/main/
 
 ## Unit tests
 
-You can add a unit test if you want to run a python script in one of our images.
+You can add a unit test if you want to run a Python script in one of our images.
 You should create a `tests/<somestack>/units/` directory, if it doesn't already exist, and put your file there.
 Files in this folder will be executed in the container when tests are run.
 You can see an [example for the TensorFlow package here](https://github.com/jupyter/docker-stacks/blob/HEAD/tests/tensorflow-notebook/units/unit_tensorflow.py).
@@ -26,10 +26,10 @@ Here is a non-exhaustive list of things we do care about:
 - Does the package open additional ports, or add new web endpoints, that could be exploited?
 
 With all this in mind, we have a voting group, that consists of
-[mathbunnyru](https://github.com/mathbunnyru),
-[consideRatio](https://github.com/consideRatio),
-[yuvipanda](https://github.com/yuvipanda) and
-[manics](https://github.com/manics).
+[@mathbunnyru](https://github.com/mathbunnyru),
+[@consideRatio](https://github.com/consideRatio),
+[@yuvipanda](https://github.com/yuvipanda), and
+[@manics](https://github.com/manics).
 
 This voting group is responsible for accepting or declining new packages and stacks.
 The change is accepted, if there are **at least 2 positive votes**.
@@ -33,7 +33,7 @@ We are waiting for the first point release of the new LTS Ubuntu before updating
 Other images are directly or indirectly inherited from `docker-stacks-foundation`.
 We rebuild our images automatically each week, which means they frequently receive updates.
 
-When there's a security fix in the Ubuntu base image, it's a good idea to manually trigger images rebuild
+When there's a security fix in the Ubuntu base image, it's a good idea to manually trigger the rebuild of images
 [from the GitHub actions workflow UI](https://github.com/jupyter/docker-stacks/actions/workflows/docker.yml).
 Pushing the `Run Workflow` button will trigger this process.
@@ -233,7 +233,7 @@ This script is handy when you derive a new Dockerfile from this image and instal
 ### Others
 
 You can bypass the provided scripts and specify an arbitrary start command.
-If you do, keep in mind that features supported by the `start.sh` script and its kin will not function (e.g., `GRANT_SUDO`).
+If you do, keep in mind that features, supported by the `start.sh` script and its kin, will not function (e.g., `GRANT_SUDO`).
 
 ## Conda Environments
@@ -1,6 +1,6 @@
 FROM quay.io/jupyter/base-notebook
 
-# Name your environment and choose the python version
+# Name your environment and choose the Python version
 ARG env_name=python310
 ARG py_ver=3.10

@@ -40,5 +40,5 @@ RUN activate_custom_env_script=/usr/local/bin/before-notebook.d/activate_custom_
 USER ${NB_UID}
 
 # Making this environment default in Terminal
-# You can comment this line to keep the default environment in Terminal
+# You can comment this line to keep the default environment in a Terminal
 RUN echo "conda activate ${env_name}" >> "${HOME}/.bashrc"
@@ -15,7 +15,7 @@ ARG INSTANTCLIENT_MAJOR_VERSION=21
 ARG INSTANTCLIENT_VERSION=${INSTANTCLIENT_MAJOR_VERSION}.11.0.0.0-1
 ARG INSTANTCLIENT_URL=https://download.oracle.com/otn_software/linux/instantclient/2111000
 
-# Then install Oracle SQL Instant client, SQL+Plus, tools and JDBC.
+# Then install Oracle SQL Instant client, SQL+Plus, tools, and JDBC.
 # Note: You may need to change the URL to a newer version.
 # See: https://www.oracle.com/es/database/technologies/instant-client/linux-x86-64-downloads.html
 RUN mkdir "/opt/oracle"

@@ -39,7 +39,7 @@ RUN echo "ORACLE_HOME=/usr/lib/oracle/${INSTANTCLIENT_MAJOR_VERSION}/client64" >
     echo "export PATH" >> "${HOME}/.bashrc" && \
     echo "export LD_LIBRARY_PATH" >> "${HOME}/.bashrc"
 
-# Add credentials for /redacted/ using Oracle Db.
+# Add credentials for /redacted/ using Oracle DB.
 WORKDIR /usr/lib/oracle/${INSTANTCLIENT_MAJOR_VERSION}/client64/lib/network/admin/
 # Add a wildcard `[]` on the last letter of the filename to avoid throwing an error if the file does not exist.
 # See: https://stackoverflow.com/questions/31528384/conditional-copy-add-in-dockerfile
@@ -132,7 +132,7 @@ Sometimes it is helpful to run the Jupyter instance behind an nginx proxy, for e
 and want nginx to help improve server performance in managing the connections
 
 Here is a [quick example of NGINX configuration](https://gist.github.com/cboettig/8643341bd3c93b62b5c2) to get started.
-You'll need a server, a `.crt` and `.key` file for your server, and `docker` & `docker-compose` installed.
+You'll need a server, a `.crt`, and a `.key` file for your server, and `docker` & `docker-compose` installed.
 Then download the files at that gist and run `docker-compose up` to test it out.
 Customize the `nginx.conf` file to set the desired paths and add other services.
@@ -506,9 +506,9 @@ The following recipe demonstrates how to add functionality to read from and writ
 
 You can now use `pyodbc` and `sqlalchemy` to interact with the database.
 
-Pre-built images are hosted in the [realiserad/jupyter-docker-mssql](https://github.com/Realiserad/jupyter-docker-mssql) repository.
+Pre-built images are hosted in the [Realiserad/jupyter-docker-mssql](https://github.com/Realiserad/jupyter-docker-mssql) repository.
 
-## Add Oracle SQL Instant client, SQL\*Plus and other tools (Version 21.x)
+## Add Oracle SQL Instant client, SQL\*Plus, and other tools (Version 21.x)
 
 ```{note}
 This recipe only works for x86_64 architecture.
@@ -520,7 +520,7 @@ This recipe installs version `21.11.0.0.0`.
 
 Nonetheless, go to the [Oracle Instant Client Download page](https://www.oracle.com/es/database/technologies/instant-client/linux-x86-64-downloads.html) for the complete list of versions available.
 You may need to perform different steps for older versions;
-the may be explained on the "Installation instructions" section of the Downloads page.
+they may be explained in the "Installation instructions" section of the Downloads page.
 
 ```{literalinclude} recipe_code/oracledb.dockerfile
 :language: docker
@@ -170,11 +170,11 @@ Any other changes made in the container will be lost.
 
 ## Using Binder
 
-[Binder](https://mybinder.org/) is a service that allows you to create and share custom computing environments for projects in version control.
+A [Binder](https://mybinder.org/) is a service that allows you to create and share custom computing environments for projects in version control.
 You can use any of the Jupyter Docker Stacks images as a basis for a Binder-compatible Dockerfile.
 See the
 [docker-stacks example](https://mybinder.readthedocs.io/en/latest/examples/sample_repos.html#using-a-docker-image-from-the-jupyter-docker-stacks-repository) and
-[Using a Dockerfile](https://mybinder.readthedocs.io/en/latest/tutorials/dockerfile.html) sections in the
+[Using a Dockerfile](https://mybinder.readthedocs.io/en/latest/tutorials/dockerfile.html) section in the
 [Binder documentation](https://mybinder.readthedocs.io/en/latest/index.html) for instructions.
 
 ## Using JupyterHub
@@ -14,7 +14,7 @@ This page provides details about features specific to one or more images.
 Every new spark context that is created is put onto an incrementing port (i.e. 4040, 4041, 4042, etc.), and it might be necessary to open multiple ports.
 ```
 
-For example: `docker run --detach -p 8888:8888 -p 4040:4040 -p 4041:4041 quay.io/jupyter/pyspark-notebook`.
+For example, `docker run --detach -p 8888:8888 -p 4040:4040 -p 4041:4041 quay.io/jupyter/pyspark-notebook`.
 
 #### IPython low-level output capture and forward
||||
@@ -53,7 +53,7 @@ You can build a `pyspark-notebook` image with a different `Spark` version by ove
|
||||
- This version needs to match the version supported by the Spark distribution used above.
|
||||
- See [Spark Overview](https://spark.apache.org/docs/latest/#downloading) and [Ubuntu packages](https://packages.ubuntu.com/search?keywords=openjdk).
|
||||
|
||||
- Starting with _Spark >= 3.2_, the distribution file might contain Scala version.
|
||||
- Starting with _Spark >= 3.2_, the distribution file might contain the Scala version.
|
||||
|
||||
For example, here is how to build a `pyspark-notebook` image with Spark `3.2.0`, Hadoop `3.2`, and OpenJDK `11`.
|
||||
|
||||
@@ -161,7 +161,7 @@ Connection to Spark Cluster on **[Standalone Mode](https://spark.apache.org/docs
 0. Verify that the docker image (check the Dockerfile) and the Spark Cluster, which is being
    deployed, run the same version of Spark.
 1. [Deploy Spark in Standalone Mode](https://spark.apache.org/docs/latest/spark-standalone.html).
-2. Run the Docker container with `--net=host` in a location that is network addressable by all of
+2. Run the Docker container with `--net=host` in a location that is network-addressable by all of
    your Spark workers.
    (This is a [Spark networking requirement](https://spark.apache.org/docs/latest/cluster-overview.html#components).)
@@ -174,7 +174,7 @@ Connection to Spark Cluster on **[Standalone Mode](https://spark.apache.org/docs
 ##### Standalone Mode in Python
 
 The **same Python version** needs to be used on the notebook (where the driver is located) and on the Spark workers.
-The python version used at the driver and worker side can be adjusted by setting the environment variables `PYSPARK_PYTHON` and/or `PYSPARK_DRIVER_PYTHON`,
+The Python version used on the driver and worker side can be adjusted by setting the environment variables `PYSPARK_PYTHON` and/or `PYSPARK_DRIVER_PYTHON`,
 see [Spark Configuration][spark-conf] for more information.
 
 ```python
@@ -13,17 +13,17 @@ SHELL ["/bin/bash", "-o", "pipefail", "-c"]
 
 USER root
 
-# Install all OS dependencies for Server that starts but lacks all
+# Install all OS dependencies for the Server that starts but lacks all
 # features (e.g., download as all possible file formats)
 RUN apt-get update --yes && \
     apt-get install --yes --no-install-recommends \
     fonts-liberation \
     # - pandoc is used to convert notebooks to html files
-    #   it's not present in aarch64 ubuntu image, so we install it here
+    #   it's not present in the aarch64 Ubuntu image, so we install it here
     pandoc \
     # - run-one - a wrapper script that runs no more
     #   than one unique instance of some command with a unique set of arguments,
-    #   we use `run-one-constantly` to support `RESTARTABLE` option
+    #   we use `run-one-constantly` to support the `RESTARTABLE` option
     run-one && \
     apt-get clean && rm -rf /var/lib/apt/lists/*
@@ -64,7 +64,7 @@ USER root
 RUN fix-permissions /etc/jupyter/
 
 # HEALTHCHECK documentation: https://docs.docker.com/engine/reference/builder/#healthcheck
-# This healtcheck works well for `lab`, `notebook`, `nbclassic`, `server` and `retro` jupyter commands
+# This healtcheck works well for `lab`, `notebook`, `nbclassic`, `server`, and `retro` jupyter commands
 # https://github.com/jupyter/docker-stacks/issues/915#issuecomment-1068528799
 HEALTHCHECK --interval=5s --timeout=3s --start-period=5s --retries=3 \
     CMD /etc/jupyter/docker_healthcheck.py || exit 1
@@ -7,7 +7,7 @@ from pathlib import Path
 
 import requests
 
-# A number of operations below deliberately don't check for possible errors
+# Several operations below deliberately don't check for possible errors
 # As this is a healthcheck, it should succeed or raise an exception on error
 
 runtime_dir = Path("/home/") / os.environ["NB_USER"] / ".local/share/jupyter/runtime/"
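The healthcheck script above locates the running server's runtime info before probing it. A minimal sketch of that discovery step, assuming the `jpserver-*.json` layout implied by the `runtime_dir` line (the function name and the choice to pick the newest file are illustrative, not the repository's code):

```python
# Sketch: read the server URL from the most recently written
# jpserver-*.json file in a Jupyter runtime directory.
import json
from pathlib import Path


def server_url(runtime_dir: Path) -> str:
    # Raises ValueError if no server info file exists, which is
    # acceptable for a healthcheck: any exception means "unhealthy".
    json_file = max(runtime_dir.glob("jpserver-*.json"), key=lambda f: f.stat().st_mtime)
    return json.loads(json_file.read_text())["url"]
```

The real script would then issue an HTTP request against that URL (hence the `import requests` in the hunk) and let any failure surface as a non-zero exit.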
@@ -34,7 +34,7 @@ if "GEN_CERT" in os.environ:
     if not cnf_file.exists():
         cnf_file.write_text(OPENSSL_CONFIG)
 
-    # Generate a certificate if one doesn't exist on disk
+    # Generate a certificate if one doesn't exist on a disk
     subprocess.check_call(
         [
             "openssl",
@@ -34,7 +34,7 @@ command.append(jupyter_command)
 if "NOTEBOOK_ARGS" in os.environ:
     command += shlex.split(os.environ["NOTEBOOK_ARGS"])
 
-# Pass through any other args we were passed on the commandline
+# Pass through any other args we were passed on the command line
 command += sys.argv[1:]
 
 # Execute the command!
@@ -18,12 +18,12 @@ SHELL ["/bin/bash", "-o", "pipefail", "-c"]
 
 USER root
 
-# Install all OS dependencies for Server that starts
+# Install all OS dependencies for the Server that starts
 # but lacks all features (e.g., download as all possible file formats)
 ENV DEBIAN_FRONTEND noninteractive
 RUN apt-get update --yes && \
-    # - apt-get upgrade is run to patch known vulnerabilities in apt-get packages as
-    #   the ubuntu base image is rebuilt too seldom sometimes (less than once a month)
+    # - `apt-get upgrade` is run to patch known vulnerabilities in apt-get packages as
+    #   the Ubuntu base image is rebuilt too seldom sometimes (less than once a month)
     apt-get upgrade --yes && \
     apt-get install --yes --no-install-recommends \
     # - bzip2 is necessary to extract the micromamba executable.
@@ -76,19 +76,19 @@ RUN echo "auth requisite pam_deny.so" >> /etc/pam.d/su && \
 
 USER ${NB_UID}
 
-# Pin python version here, or set it to "default"
+# Pin Python version here, or set it to "default"
 ARG PYTHON_VERSION=3.11
 
 # Setup work directory for backward-compatibility
 RUN mkdir "/home/${NB_USER}/work" && \
     fix-permissions "/home/${NB_USER}"
 
-# Download and install Micromamba, and initialize Conda prefix.
+# Download and install Micromamba, and initialize the Conda prefix.
 #   <https://github.com/mamba-org/mamba#micromamba>
 #   Similar projects using Micromamba:
 #     - Micromamba-Docker: <https://github.com/mamba-org/micromamba-docker>
 #     - repo2docker: <https://github.com/jupyterhub/repo2docker>
-# Install Python, Mamba and jupyter_core
+# Install Python, Mamba, and jupyter_core
 # Cleanup temporary files and remove Micromamba
 # Correct permissions
 # Do all this in a single RUN command to avoid duplicating all of the
@@ -1,16 +1,14 @@
 #!/bin/bash
-# set permissions on a directory
-# after any installation, if a directory needs to be (human) user-writable,
-# run this script on it.
-# It will make everything in the directory owned by the group ${NB_GID}
-# and writable by that group.
+# Set permissions on a directory
+# After any installation, if a directory needs to be (human) user-writable, run this script on it.
+# It will make everything in the directory owned by the group ${NB_GID} and writable by that group.
 # Deployments that want to set a specific user id can preserve permissions
 # by adding the `--group-add users` line to `docker run`.
 
-# uses find to avoid touching files that already have the right permissions,
-# which would cause massive image explosion
+# Uses find to avoid touching files that already have the right permissions,
+# which would cause a massive image explosion
 
-# right permissions are:
+# Right permissions are:
 # group=${NB_GID}
 # AND permissions include group rwX (directory-execute)
 # AND directories have setuid,setgid bits set
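The comment block above states the fix-permissions contract: only entries whose group or mode is wrong get touched, so re-running the script does not rewrite files and bloat image layers. A rough Python rendering of that idea (the real tool is a bash script built on `find`; this sketch, its defaults, and its return value are illustrative, with `NB_GID=100` being the stack's "users" group):

```python
# Illustrative sketch: normalize group ownership and group permissions
# under a directory tree, skipping entries that are already correct.
import os
import stat


def fix_permissions(path: str, gid: int = 100) -> list[str]:
    changed = []
    for root, dirs, files in os.walk(path):
        for name in dirs + files:
            p = os.path.join(root, name)
            st = os.stat(p)
            mode = stat.S_IMODE(st.st_mode)
            want = mode | stat.S_IRGRP | stat.S_IWGRP
            if stat.S_ISDIR(st.st_mode):
                # directories also get group-execute and the setgid bit
                want |= stat.S_IXGRP | stat.S_ISGID
            if st.st_gid != gid or mode != want:
                os.chown(p, -1, gid)  # may require privileges for foreign gids
                os.chmod(p, want)
                changed.append(p)
    return changed
```

Because already-correct entries are skipped, a second invocation is a no-op, which is exactly the layer-size property the comment describes.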
@@ -10,7 +10,7 @@ if [ "$#" -ne 1 ]; then
     return 1
 fi
 
-if [[ ! -d "${1}" ]] ; then
+if [[ ! -d "${1}" ]]; then
     echo "Directory ${1} doesn't exist or is not a directory"
     return 1
 fi
@@ -25,16 +25,16 @@ for f in "${1}/"*; do
             # shellcheck disable=SC1090
             source "${f}"
             # shellcheck disable=SC2181
-            if [ $? -ne 0 ] ; then
+            if [ $? -ne 0 ]; then
                 echo "${f} has failed, continuing execution"
             fi
             ;;
         *)
-            if [ -x "${f}" ] ; then
+            if [ -x "${f}" ]; then
                 echo "Running executable: ${f}"
                 "${f}"
                 # shellcheck disable=SC2181
-                if [ $? -ne 0 ] ; then
+                if [ $? -ne 0 ]; then
                     echo "${f} has failed, continuing execution"
                 fi
             else
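The hook-runner loop above sources `*.sh` files and executes other executable files, logging a failure but continuing with the remaining hooks. A simplified Python equivalent of that control flow (unlike `source`, this runs each `.sh` hook in a child process, so environment changes do not propagate back; the function and its messages are illustrative):

```python
# Sketch: run every hook in a directory, collecting failure messages
# instead of aborting on the first failing hook.
import os
import subprocess
from pathlib import Path


def run_hooks(hooks_dir: str) -> list[str]:
    messages = []
    for f in sorted(Path(hooks_dir).iterdir()):
        if f.suffix == ".sh":
            # the real script sources these into the current shell
            result = subprocess.run(["bash", str(f)])
        elif os.access(f, os.X_OK):
            messages.append(f"Running executable: {f}")
            result = subprocess.run([str(f)])
        else:
            continue
        if result.returncode != 0:
            messages.append(f"{f} has failed, continuing execution")
    return messages
```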
@@ -43,7 +43,7 @@ source /usr/local/bin/run-hooks.sh /usr/local/bin/start-notebook.d
 # things before we run the command passed to start.sh as the desired user
 # (NB_USER).
 #
-if [ "$(id -u)" == 0 ] ; then
+if [ "$(id -u)" == 0 ]; then
     # Environment variables:
     # - NB_USER: the desired username and associated home folder
     # - NB_UID: the desired user id
@@ -51,11 +51,11 @@ if [ "$(id -u)" == 0 ] ; then
     # - NB_GROUP: a group name we want for the group
     # - GRANT_SUDO: a boolean ("1" or "yes") to grant the user sudo rights
     # - CHOWN_HOME: a boolean ("1" or "yes") to chown the user's home folder
-    # - CHOWN_EXTRA: a comma separated list of paths to chown
+    # - CHOWN_EXTRA: a comma-separated list of paths to chown
     # - CHOWN_HOME_OPTS / CHOWN_EXTRA_OPTS: arguments to the chown commands
 
-    # Refit the jovyan user to the desired the user (NB_USER)
-    if id jovyan &> /dev/null ; then
+    # Refit the jovyan user to the desired user (NB_USER)
+    if id jovyan &> /dev/null; then
         if ! usermod --home "/home/${NB_USER}" --login "${NB_USER}" jovyan 2>&1 | grep "no changes" > /dev/null; then
             _log "Updated the jovyan user:"
             _log "- username: jovyan -> ${NB_USER}"
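`CHOWN_EXTRA` above is documented as a comma-separated list of paths. A small sketch of how such a value could be split into individual paths before each one is chown-ed (the whitespace-stripping is an assumption added here for robustness, not behavior taken from the script):

```python
# Sketch: parse a comma-separated CHOWN_EXTRA-style value into paths.
def chown_extra_paths(env: dict[str, str]) -> list[str]:
    raw = env.get("CHOWN_EXTRA", "")
    return [p.strip() for p in raw.split(",") if p.strip()]


print(chown_extra_paths({"CHOWN_EXTRA": "/home/jovyan/work, /opt/data"}))
```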
@@ -78,7 +78,7 @@ if [ "$(id -u)" == 0 ] ; then
         useradd --no-log-init --home "/home/${NB_USER}" --shell /bin/bash --uid "${NB_UID}" --gid "${NB_GID}" --groups 100 "${NB_USER}"
     fi
 
-    # Move or symlink the jovyan home directory to the desired users home
+    # Move or symlink the jovyan home directory to the desired user's home
     # directory if it doesn't already exist, and update the current working
     # directory to the new location if needed.
     if [[ "${NB_USER}" != "jovyan" ]]; then
@@ -106,7 +106,7 @@ if [ "$(id -u)" == 0 ] ; then
         fi
     fi
 
-    # Optionally ensure the desired user get filesystem ownership of it's home
+    # Optionally ensure the desired user gets filesystem ownership of its home
     # folder and/or additional folders
     if [[ "${CHOWN_HOME}" == "1" || "${CHOWN_HOME}" == "yes" ]]; then
         _log "Ensuring /home/${NB_USER} is owned by ${NB_UID}:${NB_GID} ${CHOWN_HOME_OPTS:+(chown options: ${CHOWN_HOME_OPTS})}"
@@ -121,7 +121,7 @@ if [ "$(id -u)" == 0 ] ; then
         done
     fi
 
-    # Update potentially outdated environment variables since image build
+    # Update potentially outdated environment variables since the image build
     export XDG_CACHE_HOME="/home/${NB_USER}/.cache"
 
     # Prepend ${CONDA_DIR}/bin to sudo secure_path
@@ -163,7 +163,7 @@ if [ "$(id -u)" == 0 ] ; then
     #   - We use the `--set-home` flag to set the HOME variable appropriately.
     #
     #   - To reduce the default list of variables deleted by sudo, we could have
-    #     used `env_delete` from /etc/sudoers. It has higher priority than the
+    #     used `env_delete` from /etc/sudoers. It has a higher priority than the
     #     `--preserve-env` flag and the `env_keep` configuration.
     #
     #   - We preserve LD_LIBRARY_PATH, PATH and PYTHONPATH explicitly. Note however that sudo
@@ -186,7 +186,7 @@ else
     # Attempt to ensure the user uid we currently run as has a named entry in
     # the /etc/passwd file, as it avoids software crashing on hard assumptions
     # on such entry. Writing to the /etc/passwd was allowed for the root group
-    # from the Dockerfile during build.
+    # from the Dockerfile during the build.
     #
     # ref: https://github.com/jupyter/docker-stacks/issues/552
     if ! whoami &> /dev/null; then
@@ -13,7 +13,7 @@ SHELL ["/bin/bash", "-o", "pipefail", "-c"]
 
 USER root
 
-# Install all OS dependencies for fully functional Server
+# Install all OS dependencies for a fully functional Server
 RUN apt-get update --yes && \
     apt-get install --yes --no-install-recommends \
     # Common useful utilities

@@ -43,7 +43,7 @@ RUN update-alternatives --install /usr/bin/nano nano /bin/nano-tiny 10
 # Switch back to jovyan to avoid accidental container runs as root
 USER ${NB_UID}
 
-# Add R mimetype option to specify how the plot returns from R to the browser
+# Add an R mimetype option to specify how the plot returns from R to the browser
 COPY --chown=${NB_UID}:${NB_GID} Rprofile.site /opt/conda/lib/R/etc/
 
 # Add setup scripts that may be used by downstream images or inherited images
@@ -1,7 +1,7 @@
 #!/bin/bash
 set -exuo pipefail
 # Requirements:
-# - Run as non-root user
+# - Run as a non-root user
 # - The JULIA_PKGDIR environment variable is set
 # - Julia is already set up, with the setup-julia.bash command
@@ -15,7 +15,7 @@ USER root
 
 # Spark dependencies
 # Default values can be overridden at build time
-# (ARGS are in lower case to distinguish them from ENV)
+# (ARGS are in lowercase to distinguish them from ENV)
 ARG spark_version="3.5.0"
 ARG hadoop_version="3"
 ARG scala_version

@@ -35,7 +35,7 @@ RUN apt-get update --yes && \
 WORKDIR /tmp
 
 # You need to use https://archive.apache.org/dist/ website if you want to download old Spark versions
-# But it seems to be slower, that's why we use recommended site for download
+# But it seems to be slower, that's why we use the recommended site for download
 RUN if [ -z "${scala_version}" ]; then \
         curl --progress-bar --location --output "spark.tgz" \
             "https://dlcdn.apache.org/spark/spark-${APACHE_SPARK_VERSION}/spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz"; \
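When no Scala version is given, the `curl` command above assembles the Spark download URL from the version build args. The same construction in Python (variable names mirror the `ARG`/`ENV` names; the helper itself is an illustration, not code from the repository):

```python
# Sketch: rebuild the dlcdn.apache.org URL the Dockerfile's curl uses
# when scala_version is empty.
def spark_download_url(spark_version: str, hadoop_version: str) -> str:
    return (
        "https://dlcdn.apache.org/spark/"
        f"spark-{spark_version}/spark-{spark_version}-bin-hadoop{hadoop_version}.tgz"
    )


print(spark_download_url("3.5.0", "3"))
```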
@@ -66,7 +66,7 @@ RUN mamba install --yes \
     fix-permissions "${CONDA_DIR}" && \
     fix-permissions "/home/${NB_USER}"
 
-# Install facets which does not have a pip or conda package at the moment
+# Install facets package which does not have a `pip` or `conda-forge` package at the moment
 WORKDIR /tmp
 RUN git clone https://github.com/PAIR-code/facets && \
     jupyter nbclassic-extension install facets/facets-dist/ --sys-prefix && \
@@ -84,7 +84,7 @@ class SHATagger(TaggerInterface):
 ### Manifest
 
 `ManifestHeader` is a build manifest header.
-It contains information about `Build datetime`, `Docker image size`, and `Git commit` info.
+It contains the following sections: `Build timestamp`, `Docker image size`, and `Git commit` info.
 
 All the other manifest classes are inherited from `ManifestInterface`:
@@ -49,7 +49,7 @@ class ManifestHeader:
 
 ## Build Info
 
-- Build datetime: {build_timestamp}
+- Build timestamp: {build_timestamp}
 - Docker image: `{registry}/{owner}/{short_image_name}:{commit_hash_tag}`
 - Docker image size: {image_size}
 - Git commit SHA: [{commit_hash}](https://github.com/jupyter/docker-stacks/commit/{commit_hash})
@@ -51,7 +51,7 @@ def update_monthly_wiki_page(
 
 def get_manifest_timestamp(manifest_file: Path) -> str:
     file_content = manifest_file.read_text()
-    pos = file_content.find("Build datetime: ")
+    pos = file_content.find("Build timestamp: ")
     return file_content[pos + 16 : pos + 36]
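Note that the hunk above leaves the hard-coded slice offset `16` in place, while the renamed prefix `"Build timestamp: "` is 17 characters long. Deriving the offset from the prefix itself avoids that class of mismatch; a hedged sketch of that variant (illustrative, not the repository's code):

```python
# Sketch: extract the 20-character ISO timestamp that follows a known
# prefix, computing the offset from the prefix length instead of a
# hard-coded number.
def get_manifest_timestamp(file_content: str, prefix: str = "Build timestamp: ") -> str:
    pos = file_content.find(prefix)
    if pos == -1:
        return ""
    start = pos + len(prefix)
    return file_content[start : start + 20]  # e.g. 2023-10-01T12:34:56Z
```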
@@ -16,7 +16,7 @@ from tagging.manifests import ManifestHeader, ManifestInterface
 
 LOGGER = logging.getLogger(__name__)
 
-# This would actually be manifest creation timestamp
+# We use a manifest creation timestamp, which happens right after a build
 BUILD_TIMESTAMP = datetime.datetime.utcnow().isoformat()[:-7] + "Z"
 MARKDOWN_LINE_BREAK = "<br />"
@@ -42,7 +42,7 @@ def test_nb_user_change(container: TrackedContainer) -> None:
     # Use sleep, not wait, because the container sleeps forever.
     time.sleep(1)
     LOGGER.info(
-        f"Checking if home folder of {nb_user} contains the hidden '.jupyter' folder with appropriate permissions ..."
+        f"Checking if a home folder of {nb_user} contains the hidden '.jupyter' folder with appropriate permissions ..."
     )
     command = f'stat -c "%F %U %G" /home/{nb_user}/.jupyter'
     expected_output = f"directory {nb_user} users"
@@ -15,8 +15,8 @@ The goal is to detect import errors that can be caused by incompatibilities betw
 This module checks dynamically, through the `CondaPackageHelper`,
 only the requested packages i.e. packages requested by `mamba install` in the `Dockerfile`s.
 This means that it does not check dependencies.
-This choice is a tradeoff to cover the main requirements while achieving reasonable test duration.
-However it could be easily changed (or completed) to cover also dependencies.
+This choice is a tradeoff to cover the main requirements while achieving a reasonable test duration.
+However, it could be easily changed (or completed) to cover dependencies as well.
 Use `package_helper.installed_packages()` instead of `package_helper.requested_packages()`.
 
 Example:
@@ -145,7 +145,7 @@ def _check_import_packages(
     """Test if packages can be imported
 
     Note: using a list of packages instead of a fixture for the list of packages
-    since pytest prevents use of multiple yields
+    since pytest prevents the use of multiple yields
     """
     failures = {}
     LOGGER.info("Testing the import of packages ...")
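The docstring edited above belongs to a test that imports each requested package and records the failures rather than stopping at the first error. A self-contained sketch of that pattern (the package names in the demo call are examples, and the real test resolves the package list through `CondaPackageHelper`):

```python
# Sketch: attempt to import every package in a list and collect the
# failures into a dict instead of raising on the first bad import.
import importlib


def check_imports(packages: list[str]) -> dict[str, str]:
    failures = {}
    for pkg in packages:
        try:
            importlib.import_module(pkg)
        except Exception as err:  # record every kind of import failure
            failures[pkg] = repr(err)
    return failures


print(check_imports(["json", "not_a_real_package_xyz"]))
```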
@@ -83,7 +83,7 @@ def run_source_in_dir(
     cont_data_dir = "/home/jovyan/data"
     # https://forums.docker.com/t/all-files-appear-as-executable-in-file-paths-using-bind-mount/99921
     # Unfortunately, Docker treats all files in mounter dir as executable files
-    # So we make a copy of mounted dir inside a container
+    # So we make a copy of the mounted dir inside a container
     command = (
         "cp -r /home/jovyan/data/ /home/jovyan/data-copy/ &&"
         "source /usr/local/bin/run-hooks.sh /home/jovyan/data-copy/" + command_suffix
@@ -74,7 +74,7 @@ def test_nb_user_change(container: TrackedContainer) -> None:
     ), f"Bad owner for the {nb_user} home folder {output}, expected {expected_output}"
 
     LOGGER.info(
-        f"Checking if home folder of {nb_user} contains the 'work' folder with appropriate permissions ..."
+        f"Checking if a home folder of {nb_user} contains the 'work' folder with appropriate permissions ..."
     )
     command = f'stat -c "%F %U %G" /home/{nb_user}/work'
     expected_output = f"directory {nb_user} users"
@@ -86,7 +86,7 @@ def test_nb_user_change(container: TrackedContainer) -> None:
 
 
 def test_chown_extra(container: TrackedContainer) -> None:
-    """Container should change the UID/GID of a comma separated
+    """Container should change the UID/GID of a comma-separated
     CHOWN_EXTRA list of folders."""
     logs = container.run_and_wait(
         timeout=120,  # chown is slow so give it some time
@@ -184,7 +184,7 @@ def test_group_add(container: TrackedContainer) -> None:
 def test_set_uid(container: TrackedContainer) -> None:
     """Container should run with the specified uid and NB_USER.
     The /home/jovyan directory will not be writable since it's owned by 1000:users.
-    Additionally verify that "--group-add=users" is suggested in a warning to restore
+    Additionally, verify that "--group-add=users" is suggested in a warning to restore
     write access.
     """
     logs = container.run_and_wait(
@@ -270,7 +270,7 @@ def test_jupyter_env_vars_to_unset(
 
 
 def test_secure_path(container: TrackedContainer, tmp_path: pathlib.Path) -> None:
-    """Make sure that the sudo command has conda's python (not system's) on path.
+    """Make sure that the sudo command has conda's python (not system's) on PATH.
     See <https://github.com/jupyter/docker-stacks/issues/1053>.
     """
     d = tmp_path / "data"
@@ -19,7 +19,7 @@ def test_cython(container: TrackedContainer) -> None:
             "start.sh",
             "bash",
             "-c",
-            # We copy our data to temporary folder to be able to modify the directory
+            # We copy our data to a temporary folder to be able to modify the directory
             f"cp -r {cont_data_dir}/ /tmp/test/ && cd /tmp/test && python3 setup.py build_ext",
         ],
     )
@@ -17,7 +17,7 @@ THIS_DIR = Path(__file__).parent.resolve()
     (
         "matplotlib_1.py",
         "test.png",
-        "Test that matplotlib is able to plot a graph and write it as an image ...",
+        "Test that matplotlib can plot a graph and write it as an image ...",
     ),
     (
         "matplotlib_fonts_1.py",