Merge pull request #1269 from mathbunnyru/asalikhov/small_updates

Bunch of small refactorings and updates
Authored by Romain on 2021-04-18 08:08:16 +02:00, committed by GitHub
29 changed files with 50 additions and 54 deletions

View File

@@ -1,12 +1,12 @@
 ---
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v3.1.0
+    rev: v3.4.0
     hooks:
       - id: check-yaml
         files: .*\.(yaml|yml)$
   - repo: https://github.com/adrienverge/yamllint.git
-    rev: v1.23.0
+    rev: v1.26.1
     hooks:
       - id: yamllint
         args: ['-d {extends: relaxed, rules: {line-length: disable}}', '-s']
@@ -17,10 +17,10 @@ repos:
       - id: bashate
         args: ['--ignore=E006']
   - repo: https://gitlab.com/pycqa/flake8
-    rev: 3.8.3
+    rev: 3.9.1
     hooks:
       - id: flake8
   - repo: https://github.com/pre-commit/mirrors-autopep8
-    rev: v1.5.4
+    rev: v1.5.6
     hooks:
       - id: autopep8

View File

@@ -12,6 +12,6 @@ sphinx:
 formats: all
 
 python:
-  version: 3.7
+  version: 3.8
   install:
     - requirements: requirements-dev.txt

View File

@@ -1 +1 @@
-Please see the [Project Jupyter Code of Conduct](https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md).
\ No newline at end of file
+Please see the [Project Jupyter Code of Conduct](https://github.com/jupyter/governance/blob/master/conduct/code_of_conduct.md).

View File

@@ -90,7 +90,7 @@ git-commit: ## commit outstading git changes and push to remote
 	@git config --global user.name "GitHub Actions"
 	@git config --global user.email "actions@users.noreply.github.com"
 	@echo "Publishing outstanding changes in $(LOCAL_PATH) to $(GITHUB_REPOSITORY)"
 	@cd $(LOCAL_PATH) && \
 	git remote add publisher https://$(GITHUB_TOKEN)@github.com/$(GITHUB_REPOSITORY).git && \
 	git checkout master && \
@@ -152,6 +152,8 @@ pull/%: DARGS?=
 pull/%: ## pull a jupyter image
 	docker pull $(DARGS) $(OWNER)/$(notdir $@)
 
+pull-all: $(foreach I,$(ALL_IMAGES),pull/$(I) ) ## pull all images
+
 push/%: DARGS?=
 push/%: ## push all tags for a jupyter image
 	docker push --all-tags $(DARGS) $(OWNER)/$(notdir $@)

View File

@@ -31,7 +31,7 @@ in working on the project.
 ## Jupyter Notebook Deprecation Notice
 
 Following [Jupyter Notebook notice](https://github.com/jupyter/notebook#notice), we encourage users to transition to JupyterLab.
+This can be done by passing the environment variable `JUPYTER_ENABLE_LAB=yes` at container startup,
 more information is available in the [documentation](https://jupyter-docker-stacks.readthedocs.io/en/latest/using/common.html#docker-options).
 At some point, JupyterLab will become the default for all of the Jupyter Docker stack images, however a new environment variable will be introduced to switch back to Jupyter Notebook if needed.

View File

@@ -58,4 +58,4 @@ $(docker run --rm ${IMAGE_NAME} R --silent -e 'installed.packages(.Library)[, c(
 \`\`\`
 $(docker run --rm ${IMAGE_NAME} apt list --installed)
 \`\`\`
-EOF
\ No newline at end of file
+EOF

View File

@@ -57,4 +57,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
\ No newline at end of file
+}

View File

@@ -38,4 +38,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
\ No newline at end of file
+}

View File

@@ -40,4 +40,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
\ No newline at end of file
+}

View File

@@ -60,4 +60,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
\ No newline at end of file
+}

View File

@@ -79,7 +79,7 @@ RUN chmod a+rx /usr/local/bin/fix-permissions
 # hadolint ignore=SC2016
 RUN sed -i 's/^#force_color_prompt=yes/force_color_prompt=yes/' /etc/skel/.bashrc && \
     # Add call to conda init script see https://stackoverflow.com/a/58081608/4413446
     echo 'eval "$(command conda shell.bash hook 2> /dev/null)"' >> /etc/skel/.bashrc
 
 # Create NB_USER with name jovyan user with UID=1000 and in the 'users' group
 # and make sure these dirs are writable by the `users` group.

View File

@@ -62,10 +62,10 @@ RUN conda install --quiet --yes \
     'r-caret=6.0*' \
     'r-crayon=1.4*' \
     'r-devtools=2.4*' \
     'r-forecast=8.14*' \
     'r-hexbin=1.28*' \
     'r-htmltools=0.5*' \
     'r-htmlwidgets=1.5*' \
     'r-irkernel=1.1*' \
     'r-nycflights13=1.0*' \
     'r-randomforest=4.6*' \

View File

@@ -17,4 +17,4 @@ help:
 # Catch-all target: route all unknown targets to Sphinx using the new
 # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
 %: Makefile
-	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
\ No newline at end of file
+	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

View File

@@ -1,3 +1,3 @@
 body div.sphinxsidebarwrapper p.logo {
   text-align: left;
-}
\ No newline at end of file
+}

View File

@@ -22,4 +22,4 @@ your problem.
 * If you have a general question about how to use the Jupyter Docker Stacks in
   your environment, in conjunction with other tools, with customizations, and so
   on, please post your question on the [Jupyter Discourse
-  site](https://discourse.jupyter.org).
\ No newline at end of file
+  site](https://discourse.jupyter.org).

View File

@@ -111,4 +111,4 @@ RUN cd /tmp && echo "hello!"
 [rules]: https://github.com/hadolint/hadolint#rules
 [DL3006]: https://github.com/hadolint/hadolint/wiki/DL3006
 [DL3008]: https://github.com/hadolint/hadolint/wiki/DL3008
-[pre-commit]: https://pre-commit.com/
\ No newline at end of file
+[pre-commit]: https://pre-commit.com/

View File

@@ -4,4 +4,4 @@ We are delighted when members of the Jupyter community want to help translate th
 1. Follow the steps documented on the [Getting Started as a Translator](https://docs.transifex.com/getting-started-1/translators) page.
 2. Look for *jupyter-docker-stacks* when prompted to choose a translation team. Alternatively, visit https://www.transifex.com/project-jupyter/jupyter-docker-stacks-1 after creating your account and request to join the project.
-3. See [Translating with the Web Editor](https://docs.transifex.com/translation/translating-with-the-web-editor) in the Transifex documentation.
\ No newline at end of file
+3. See [Translating with the Web Editor](https://docs.transifex.com/translation/translating-with-the-web-editor) in the Transifex documentation.

View File

@@ -116,7 +116,7 @@ mamba install some-package
 ### Using alternative channels
 
 Conda is configured by default to use only the [`conda-forge`](https://anaconda.org/conda-forge) channel.
+However, alternative channels can be used either one shot by overwriting the default channel in the installation command or by configuring `conda` to use different channels.
 The examples below show how to use the [anaconda default channels](https://repo.anaconda.com/pkgs/main) instead of `conda-forge` to install packages.
@@ -127,4 +127,4 @@ conda install --channel defaults humanize
 conda config --system --prepend channels defaults
 # install a package
 conda install humanize
-```
\ No newline at end of file
+```

View File

@@ -130,7 +130,7 @@ RUN $CONDA_DIR/envs/${conda_env}/bin/python -m ipykernel install --user --name=$
     fix-permissions /home/$NB_USER
 
 # any additional pip installs can be added by uncommenting the following line
 # RUN $CONDA_DIR/envs/${conda_env}/bin/pip install
 
 # prepend conda environment to path
 ENV PATH $CONDA_DIR/envs/${conda_env}/bin:$PATH

View File

@@ -76,7 +76,7 @@ Executing the command: jupyter notebook
 Pressing `Ctrl-C` shuts down the notebook server and immediately destroys the Docker container. Files written to `~/work` in the container remain touched. Any other changes made in the container are lost.
 
 **Example 3** This command pulls the `jupyter/all-spark-notebook` image currently tagged `latest` from Docker Hub if an image tagged `latest` is not already present on the local host. It then starts a container named `notebook` running a JupyterLab server and exposes the server on a randomly selected port.
 
 ```
 docker run -d -P --name notebook jupyter/all-spark-notebook

View File

@@ -82,7 +82,7 @@ sc <- sparkR.session("local")
 # Sum of the first 100 whole numbers
 sdf <- createDataFrame(list(1:100))
 dapplyCollect(sdf,
              function(x)
              { x <- sum(x)}
              )
 # 5050
@@ -102,7 +102,7 @@ conf$spark.sql.catalogImplementation <- "in-memory"
 sc <- spark_connect(master = "local", config = conf)
 
 # Sum of the first 100 whole numbers
 sdf_len(sc, 100, repartition = 1) %>%
     spark_apply(function(e) sum(e))
 # 5050
 ```
@@ -171,7 +171,7 @@ sc <- sparkR.session("spark://master:7077")
 # Sum of the first 100 whole numbers
 sdf <- createDataFrame(list(1:100))
 dapplyCollect(sdf,
              function(x)
              { x <- sum(x)}
              )
 # 5050
@@ -190,7 +190,7 @@ conf$spark.sql.catalogImplementation <- "in-memory"
 sc <- spark_connect(master = "spark://master:7077", config = conf)
 
 # Sum of the first 100 whole numbers
 sdf_len(sc, 100, repartition = 1) %>%
     spark_apply(function(e) sum(e))
 # 5050
 ```
@@ -249,4 +249,4 @@ sess.run(hello)
 
 [sparkr]: https://spark.apache.org/docs/latest/sparkr.html
 [sparklyr]: https://spark.rstudio.com/
-[spark-conf]: https://spark.apache.org/docs/latest/configuration.html
\ No newline at end of file
+[spark-conf]: https://spark.apache.org/docs/latest/configuration.html

View File

@@ -85,7 +85,7 @@ NAME=your-notebook PORT=9001 WORK_VOLUME=our-work notebook/up.sh
 ### How do I run over HTTPS?
 
 To run the notebook server with a self-signed certificate, pass the `--secure` option to the `up.sh` script. You must also provide a password, which will be used to secure the notebook server. You can specify the password by setting the `PASSWORD` environment variable, or by passing it to the `up.sh` script.
 
 ```bash
 PASSWORD=a_secret notebook/up.sh --secure

View File

@@ -41,4 +41,4 @@ include softlayer.makefile
 
 # Preset notebook configurations
 include self-signed.makefile
-include letsencrypt.makefile
\ No newline at end of file
+include letsencrypt.makefile

View File

@@ -146,4 +146,4 @@
 "metadata": {}
 }
 ]
-}
\ No newline at end of file
+}

View File

@@ -48,4 +48,4 @@ $(docker run --rm ${IMAGE_NAME} conda list)
 \`\`\`
 $(docker run --rm ${IMAGE_NAME} apt list --installed)
 \`\`\`
-EOF
\ No newline at end of file
+EOF

View File

@@ -5,4 +5,4 @@ log_cli_level = INFO
 log_cli_format = %(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)
 log_cli_date_format=%Y-%m-%d %H:%M:%S
 markers =
-    info: marks tests as info (deselect with '-m "not info"')
\ No newline at end of file
+    info: marks tests as info (deselect with '-m "not info"')

View File

@@ -42,4 +42,4 @@ $(docker run --rm ${IMAGE_NAME} conda list)
 \`\`\`
 $(docker run --rm ${IMAGE_NAME} apt list --installed)
 \`\`\`
-EOF
\ No newline at end of file
+EOF

View File

@@ -75,7 +75,7 @@ class CondaPackageHelper:
         if self.specs is None:
             LOGGER.info("Grabing the list of specifications ...")
             self.specs = CondaPackageHelper._packages_from_json(
-                self._execute_command(CondaPackageHelper._conda_export_command(True))
+                self._execute_command(CondaPackageHelper._conda_export_command(from_history=True))
             )
         return self.specs
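
The keyword argument makes the call site self-documenting. For context, a minimal sketch of the kind of command a helper like `_conda_export_command` builds (the body below is an assumption for illustration, not the project's actual implementation):

```python
def _conda_export_command(from_history):
    """Build a conda command that lists packages as JSON.

    With from_history=True, `conda env export --from-history` reports only
    the specs the user explicitly requested, not their dependencies.
    """
    cmd = ["conda", "env", "export", "--no-build", "--json"]
    if from_history:
        cmd.append("--from-history")
    return cmd

print(_conda_export_command(from_history=True))
# ['conda', 'env', 'export', '--no-build', '--json', '--from-history']
```

Read back at the call site, `_conda_export_command(from_history=True)` states its intent, whereas a bare positional `True` forced readers to look up the parameter.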
@@ -130,7 +130,7 @@ class CondaPackageHelper:
         return ddict
 
     def check_updatable_packages(self, specifications_only=True):
-        """Check the updatables packages including or not dependencies"""
+        """Check the updatable packages including or not dependencies"""
         specs = self.specified_packages()
         installed = self.installed_packages()
         available = self.available_packages()
@@ -145,9 +145,10 @@ class CondaPackageHelper:
             current = min(inst_vs, key=CondaPackageHelper.semantic_cmp)
             newest = avail_vs[-1]
             if avail_vs and current != newest:
-                if CondaPackageHelper.semantic_cmp(
-                    current
-                ) < CondaPackageHelper.semantic_cmp(newest):
+                if (
+                    CondaPackageHelper.semantic_cmp(current) <
+                    CondaPackageHelper.semantic_cmp(newest)
+                ):
                     self.comparison.append(
                         {"Package": pkg, "Current": current, "Newest": newest}
                     )
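
Both the `min(..., key=...)` call and the rewritten comparison rely on `semantic_cmp` producing an orderable key. Its real implementation lives outside this hunk; a simplified stand-in (an assumption, shown only to make the ordering concrete):

```python
def semantic_cmp(version_string):
    """Key function: map '1.10.2' to (1, 10, 2) so versions compare
    numerically; non-numeric fragments fall back to -1."""
    def try_int(fragment):
        try:
            return int(fragment)
        except ValueError:
            return -1
    return tuple(try_int(f) for f in version_string.split("."))

# A numeric key ranks '1.10' above '1.9'; plain string comparison would not.
assert semantic_cmp("1.9") < semantic_cmp("1.10")
assert min(["1.10", "1.9"], key=semantic_cmp) == "1.9"
```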
@@ -180,10 +181,7 @@ class CondaPackageHelper:
     def get_outdated_summary(self, specifications_only=True):
         """Return a summary of outdated packages"""
-        if specifications_only:
-            nb_packages = len(self.specs)
-        else:
-            nb_packages = len(self.installed)
+        nb_packages = len(self.specs if specifications_only else self.installed)
         nb_updatable = len(self.comparison)
         updatable_ratio = nb_updatable / nb_packages
         return f"{nb_updatable}/{nb_packages} ({updatable_ratio:.0%}) packages could be updated"
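
The returned summary leans on Python's `:.0%` format specifier, which multiplies by 100 and renders a percent sign, so no manual conversion is needed:

```python
nb_updatable, nb_packages = 3, 42
ratio = nb_updatable / nb_packages
print(f"{nb_updatable}/{nb_packages} ({ratio:.0%}) packages could be updated")
# 3/42 (7%) packages could be updated
```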

View File

@@ -7,12 +7,12 @@ test_packages
 This test module tests if R and Python packages installed can be imported.
 It's a basic test aiming to prove that the package is working properly.
 
-The goal is to detect import errors that can be caused by incompatibilities between packages for example:
+The goal is to detect import errors that can be caused by incompatibilities between packages, for example:
 
 - #1012: issue importing `sympy`
 - #966: isssue importing `pyarrow`
 
-This module checks dynmamically, through the `CondaPackageHelper`, only the specified packages i.e. packages requested by `conda install` in the `Dockerfiles`.
+This module checks dynamically, through the `CondaPackageHelper`, only the specified packages i.e. packages requested by `conda install` in the `Dockerfile`s.
 This means that it does not check dependencies. This choice is a tradeoff to cover the main requirements while achieving reasonable test duration.
 However it could be easily changed (or completed) to cover also dependencies `package_helper.installed_packages()` instead of `package_helper.specified_packages()`.
@@ -86,10 +86,7 @@ def packages(package_helper):
 def package_map(package):
     """Perform a mapping between the python package name and the name used for the import"""
-    _package = package
-    if _package in PACKAGE_MAPPING:
-        _package = PACKAGE_MAPPING.get(_package)
-    return _package
+    return PACKAGE_MAPPING.get(package, package)
 
 
 def excluded_package_predicate(package):
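
The one-liner works because `dict.get(key, default)` returns the stored value when the key is present and the default otherwise, replacing the explicit membership test. A self-contained illustration (the mapping entries below are hypothetical, not the project's actual `PACKAGE_MAPPING`):

```python
# Hypothetical entries; the real PACKAGE_MAPPING is defined elsewhere in the module.
PACKAGE_MAPPING = {"pytables": "tables", "umap-learn": "umap"}

def package_map(package):
    """Map a package name to the module name used for the import."""
    return PACKAGE_MAPPING.get(package, package)

assert package_map("pytables") == "tables"  # mapped to its import name
assert package_map("numpy") == "numpy"      # unmapped names pass through unchanged
```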
@@ -136,9 +133,8 @@ def _import_packages(package_helper, filtered_packages, check_function, max_fail
     for package in filtered_packages:
         LOGGER.info(f"Trying to import {package}")
         try:
-            assert (
-                check_function(package_helper, package) == 0
-            ), f"Package [{package}] import failed"
+            assert check_function(package_helper, package) == 0, \
+                f"Package [{package}] import failed"
         except AssertionError as err:
             failures[package] = err
             if len(failures) > max_failures:
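
Both the old parenthesized form and the new backslash continuation bind the failure message correctly; the form to avoid is parenthesizing the condition together with the message, which builds an always-truthy tuple. A quick comparison:

```python
# Pre-refactor style: parentheses around the condition only -- safe.
assert (
    1 + 1 == 2
), "math is broken"

# Post-refactor style: a line continuation -- equivalent behavior.
assert 1 + 1 == 2, \
    "math is broken"

# Anti-pattern: a parenthesized (condition, message) pair is a non-empty
# tuple, hence truthy, so the assertion below could never fail.
# assert (1 + 1 == 3, "math is broken")
```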