Mirror of https://github.com/jupyter/docker-stacks.git (synced 2025-10-18 15:32:56 +00:00)
Fix more grammar issues

@@ -233,7 +233,7 @@ This script is handy when you derive a new Dockerfile from this image and instal
 ### Others
 
 You can bypass the provided scripts and specify an arbitrary start command.
-If you do, keep in mind that features, supported by the `start.sh` script and its kin, will not function (e.g., `GRANT_SUDO`).
+If you do, keep in mind that features supported by the `start.sh` script and its kin will not function (e.g., `GRANT_SUDO`).
 
 ## Conda Environments
 
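For context, the "arbitrary start command" this hunk refers to is whatever you pass after the image name; a minimal sketch, assuming the `quay.io/jupyter/base-notebook` image used elsewhere in these docs:

```bash
# A sketch of bypassing the provided startup scripts: the command given after
# the image name replaces the default start command, so start.sh features
# such as GRANT_SUDO are never applied.
docker run -it --rm quay.io/jupyter/base-notebook bash
```
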
@@ -1,6 +1,6 @@
 FROM quay.io/jupyter/base-notebook
 
-# Name your environment and choose the python version
+# Name your environment and choose the Python version
 ARG env_name=python310
 ARG py_ver=3.10
 
@@ -40,5 +40,5 @@ RUN activate_custom_env_script=/usr/local/bin/before-notebook.d/activate_custom_
 USER ${NB_UID}
 
 # Making this environment default in Terminal
-# You can comment this line to keep the default environment in Terminal
+# You can comment this line to keep the default environment in a Terminal
 RUN echo "conda activate ${env_name}" >> "${HOME}/.bashrc"
 
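Since this Dockerfile declares the `env_name` and `py_ver` build args shown above, the custom-environment image can be built with different values; a sketch (the tag name is illustrative):

```bash
# Build the custom-environment image, overriding the ARGs declared in the
# Dockerfile above to get a Python 3.11 environment instead of 3.10.
docker build --rm \
    --build-arg env_name=python311 \
    --build-arg py_ver=3.11 \
    --tag my-custom-env-notebook .
```
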
@@ -15,7 +15,7 @@ ARG INSTANTCLIENT_MAJOR_VERSION=21
 ARG INSTANTCLIENT_VERSION=${INSTANTCLIENT_MAJOR_VERSION}.11.0.0.0-1
 ARG INSTANTCLIENT_URL=https://download.oracle.com/otn_software/linux/instantclient/2111000
 
-# Then install Oracle SQL Instant client, SQL+Plus, tools and JDBC.
+# Then install Oracle SQL Instant client, SQL+Plus, tools, and JDBC.
 # Note: You may need to change the URL to a newer version.
 # See: https://www.oracle.com/es/database/technologies/instant-client/linux-x86-64-downloads.html
 RUN mkdir "/opt/oracle"
 
@@ -39,7 +39,7 @@ RUN echo "ORACLE_HOME=/usr/lib/oracle/${INSTANTCLIENT_MAJOR_VERSION}/client64" >
     echo "export PATH" >> "${HOME}/.bashrc" && \
     echo "export LD_LIBRARY_PATH" >> "${HOME}/.bashrc"
 
-# Add credentials for /redacted/ using Oracle Db.
+# Add credentials for /redacted/ using Oracle DB.
 WORKDIR /usr/lib/oracle/${INSTANTCLIENT_MAJOR_VERSION}/client64/lib/network/admin/
 # Add a wildcard `[]` on the last letter of the filename to avoid throwing an error if the file does not exist.
 # See: https://stackoverflow.com/questions/31528384/conditional-copy-add-in-dockerfile
 
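Because the Instant Client version and URL are ARGs in this Dockerfile, the "change the URL to a newer version" note above can be handled at build time; a sketch, with illustrative version numbers that should be checked against the Oracle download page:

```bash
# Override the Instant Client ARGs declared in the oracledb Dockerfile.
# The version and URL below are illustrative; pick real values from
# https://www.oracle.com/es/database/technologies/instant-client/linux-x86-64-downloads.html
docker build --rm \
    --build-arg INSTANTCLIENT_MAJOR_VERSION=21 \
    --build-arg INSTANTCLIENT_VERSION=21.12.0.0.0-1 \
    --build-arg INSTANTCLIENT_URL=https://download.oracle.com/otn_software/linux/instantclient/2112000 \
    --tag oracle-notebook .
```
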
@@ -132,7 +132,7 @@ Sometimes it is helpful to run the Jupyter instance behind an nginx proxy, for e
 and want nginx to help improve server performance in managing the connections
 
 Here is a [quick example of NGINX configuration](https://gist.github.com/cboettig/8643341bd3c93b62b5c2) to get started.
-You'll need a server, a `.crt` and `.key` file for your server, and `docker` & `docker-compose` installed.
+You'll need a server, a `.crt`, and a `.key` file for your server, and `docker` & `docker-compose` installed.
 Then download the files at that gist and run `docker-compose up` to test it out.
 Customize the `nginx.conf` file to set the desired paths and add other services.
 
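For a local test of this recipe, the `.crt`/`.key` pair can be a throwaway self-signed one; a sketch (file names and subject are illustrative, and a real certificate should be used in production):

```bash
# Generate a self-signed certificate/key pair for testing the nginx proxy,
# then bring the stack from the gist up.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout server.key -out server.crt \
    -subj "/CN=localhost"
docker-compose up
```
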
@@ -506,9 +506,9 @@ The following recipe demonstrates how to add functionality to read from and writ
 
 You can now use `pyodbc` and `sqlalchemy` to interact with the database.
 
-Pre-built images are hosted in the [realiserad/jupyter-docker-mssql](https://github.com/Realiserad/jupyter-docker-mssql) repository.
+Pre-built images are hosted in the [Realiserad/jupyter-docker-mssql](https://github.com/Realiserad/jupyter-docker-mssql) repository.
 
-## Add Oracle SQL Instant client, SQL\*Plus and other tools (Version 21.x)
+## Add Oracle SQL Instant client, SQL\*Plus, and other tools (Version 21.x)
 
 ```{note}
 This recipe only works for x86_64 architecture.
 
@@ -520,7 +520,7 @@ This recipe installs version `21.11.0.0.0`.
 
 Nonetheless, go to the [Oracle Instant Client Download page](https://www.oracle.com/es/database/technologies/instant-client/linux-x86-64-downloads.html) for the complete list of versions available.
 You may need to perform different steps for older versions;
-the may be explained on the "Installation instructions" section of the Downloads page.
+they may be explained in the "Installation instructions" section of the Downloads page.
 
 ```{literalinclude} recipe_code/oracledb.dockerfile
 :language: docker
 
@@ -170,11 +170,11 @@ Any other changes made in the container will be lost.
 
 ## Using Binder
 
-A [Binder](https://mybinder.org/) is a service that allows you to create and share custom computing environments for projects in version control.
+[Binder](https://mybinder.org/) is a service that allows you to create and share custom computing environments for projects in version control.
 You can use any of the Jupyter Docker Stacks images as a basis for a Binder-compatible Dockerfile.
 See the
 [docker-stacks example](https://mybinder.readthedocs.io/en/latest/examples/sample_repos.html#using-a-docker-image-from-the-jupyter-docker-stacks-repository) and
-[Using a Dockerfile](https://mybinder.readthedocs.io/en/latest/tutorials/dockerfile.html) section in the
+[Using a Dockerfile](https://mybinder.readthedocs.io/en/latest/tutorials/dockerfile.html) sections in the
 [Binder documentation](https://mybinder.readthedocs.io/en/latest/index.html) for instructions.
 
 ## Using JupyterHub
 
@@ -14,7 +14,7 @@ This page provides details about features specific to one or more images.
 Every new spark context that is created is put onto an incrementing port (i.e. 4040, 4041, 4042, etc.), and it might be necessary to open multiple ports.
 ```
 
-For example: `docker run --detach -p 8888:8888 -p 4040:4040 -p 4041:4041 quay.io/jupyter/pyspark-notebook`.
+For example, `docker run --detach -p 8888:8888 -p 4040:4040 -p 4041:4041 quay.io/jupyter/pyspark-notebook`.
 
 #### IPython low-level output capture and forward
 
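Since the Spark UI ports increment one by one, a port range can publish them in a single flag instead of repeating `-p`; a sketch (the range size is illustrative, one port per expected Spark context):

```bash
# Publish the notebook port plus a small range of Spark UI ports (4040-4042)
# in one -p flag, covering up to three concurrent Spark contexts.
docker run --detach -p 8888:8888 -p 4040-4042:4040-4042 \
    quay.io/jupyter/pyspark-notebook
```
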
@@ -53,7 +53,7 @@ You can build a `pyspark-notebook` image with a different `Spark` version by ove
   - This version needs to match the version supported by the Spark distribution used above.
   - See [Spark Overview](https://spark.apache.org/docs/latest/#downloading) and [Ubuntu packages](https://packages.ubuntu.com/search?keywords=openjdk).
-- Starting with _Spark >= 3.2_, the distribution file might contain Scala version.
+- Starting with _Spark >= 3.2_, the distribution file might contain the Scala version.
 
 For example, here is how to build a `pyspark-notebook` image with Spark `3.2.0`, Hadoop `3.2`, and OpenJDK `11`.
 
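A sketch of that build command, assuming the `spark_version`, `hadoop_version`, and `openjdk_version` build args are the ones declared in the `pyspark-notebook` Dockerfile being built (verify the names against your Dockerfile):

```bash
# Build pyspark-notebook with Spark 3.2.0, Hadoop 3.2, and OpenJDK 11 by
# overriding the image's declared build args (tag name is illustrative).
docker build --rm \
    --build-arg spark_version=3.2.0 \
    --build-arg hadoop_version=3.2 \
    --build-arg openjdk_version=11 \
    --tag my-pyspark-notebook .
```
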
@@ -161,7 +161,7 @@ Connection to Spark Cluster on **[Standalone Mode](https://spark.apache.org/docs
 0. Verify that the docker image (check the Dockerfile) and the Spark Cluster, which is being
    deployed, run the same version of Spark.
 1. [Deploy Spark in Standalone Mode](https://spark.apache.org/docs/latest/spark-standalone.html).
-2. Run the Docker container with `--net=host` in a location that is network addressable by all of
+2. Run the Docker container with `--net=host` in a location that is network-addressable by all of
    your Spark workers.
    (This is a [Spark networking requirement](https://spark.apache.org/docs/latest/cluster-overview.html#components).)
 
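A sketch of step 2 from the list above (host networking means no `-p` port mappings are needed; the image tag is the one used throughout these docs):

```bash
# Run the notebook container on the host network so that Spark workers can
# reach the driver directly, per the Spark networking requirement.
docker run --rm --net=host quay.io/jupyter/pyspark-notebook
```
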
@@ -174,7 +174,7 @@ Connection to Spark Cluster on **[Standalone Mode](https://spark.apache.org/docs
 ##### Standalone Mode in Python
 
 The **same Python version** needs to be used on the notebook (where the driver is located) and on the Spark workers.
-The python version used at the driver and worker side can be adjusted by setting the environment variables `PYSPARK_PYTHON` and/or `PYSPARK_DRIVER_PYTHON`,
+The Python version used on the driver and worker side can be adjusted by setting the environment variables `PYSPARK_PYTHON` and/or `PYSPARK_DRIVER_PYTHON`,
 see [Spark Configuration][spark-conf] for more information.
 
 ```python
 
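A sketch of pinning those variables when starting the container (the interpreter path is illustrative; whatever path `PYSPARK_PYTHON` names must resolve to a matching Python version on the workers as well):

```bash
# Set the Spark configuration variables named above so the driver and the
# executors agree on which Python interpreter to use.
docker run --rm --net=host \
    -e PYSPARK_PYTHON=/opt/conda/bin/python \
    -e PYSPARK_DRIVER_PYTHON=/opt/conda/bin/python \
    quay.io/jupyter/pyspark-notebook
```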