From db01bfbd237480227295b2130fd7bcd9a6bf98bf Mon Sep 17 00:00:00 2001
From: Ayaz Salikhov
Date: Sat, 18 Nov 2023 12:27:13 +0100
Subject: [PATCH] Update specifics.md

---
 docs/using/specifics.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/using/specifics.md b/docs/using/specifics.md
index 39776551..3510674d 100644
--- a/docs/using/specifics.md
+++ b/docs/using/specifics.md
@@ -42,7 +42,7 @@ ipython profile create
 You can build a `pyspark-notebook` image with a different `Spark` version by overriding the default value of the following arguments at build time.
 `all-spark-notebook` is inherited from `pyspark-notebook`, so you have to first build `pyspark-notebook` and then `all-spark-notebook` to get the same version in `all-spark-notebook`.
 
-- Spark distribution is defined by the combination of Spark, Hadoop and Scala versions and verified by the package checksum,
+- Spark distribution is defined by the combination of Spark, Hadoop, and Scala versions and verified by the package checksum,
   see [Download Apache Spark](https://spark.apache.org/downloads.html) and the [archive repo](https://archive.apache.org/dist/spark/) for more information.
 - `spark_version`: The Spark version to install (`3.3.0`).
@@ -55,7 +55,7 @@ You can build a `pyspark-notebook` image with a different `Spark` version by ove
 - Starting with _Spark >= 3.2_, the distribution file might contain Scala version.
 
-For example, here is how to build a `pyspark-notebook` image with Spark `3.2.0`, Hadoop `3.2` and OpenJDK `11`.
+For example, here is how to build a `pyspark-notebook` image with Spark `3.2.0`, Hadoop `3.2`, and OpenJDK `11`.
 
 ```{warning}
 This recipe is not tested and might be broken.
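
For reference, the "following arguments" in the patched text are Docker build arguments. Below is a minimal, untested sketch of such a build-arg override; the image tag, the `<checksum>` placeholder, and every argument name other than `spark_version` are assumptions that should be checked against the `pyspark-notebook` Dockerfile in the docker-stacks repository.

```bash
# Sketch of overriding Spark-related build arguments at build time (untested).
# Assumes a checkout of the docker-stacks repository; argument names other
# than spark_version are assumptions, and <checksum> is a placeholder for the
# package checksum published alongside the Spark distribution.
docker build \
    -t custom-pyspark-notebook \
    --build-arg spark_version=3.2.0 \
    --build-arg hadoop_version=3.2 \
    --build-arg spark_checksum=<checksum> \
    --build-arg openjdk_version=11 \
    ./pyspark-notebook

# all-spark-notebook inherits from pyspark-notebook, so it must be rebuilt on
# top of this custom image afterwards to pick up the same Spark version.
```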