* feat: build cuda variants of pytorch
* feat: build with variant tag
* style: remove unused import
* refactor: rename get_prefix params
(cherry picked from commit 12b50af258c2f331d4100fb63fd41ad1a30acb1d)
* revert: drop ROOT_CONTAINER addition from Makefile
(cherry picked from commit f42314513df2855957a05c6ba0c748d2df26d7b0)
* style: use consistent three empty lines in Makefile
(cherry picked from commit 446b45aab37a37720462b5df305ce96b139cf67a)
* refactor: add default value for parent-image
(cherry picked from commit 32955cec99c7202f0ce50647dfc61ec98f57f741)
* revert: use original workflow structure
(cherry picked from commit 68c6744513636ec93d14f9bd0bbd123907efd13b)
* refactor: use single build image step
(cherry picked from commit 5f1ac0aeedcb5969a6d4b2a5bc939817378ab55d)
* fix: run merge tags regardless of repository owner
(cherry picked from commit 3fce366a98adc5db0d127f28ddf3157d13297a0f)
* refactor: build cuda12 instead of cuda tag
(cherry picked from commit 217144ecd322356376f04efb92792a20b4380177)
* docs: add note about CUDA tags to documentation
* refactor: add default value for variant in build-test-upload
* refactor: swap ordering of cuda11/cuda12 variants
* refactor: remove optional str type in arg parser
* fix: add proper env variables to CUDA Dockerfiles
* fix: remove CUDA build for aarch64
* fix: use latest NVIDIA documentation link
* fix: skip aarch64 tags file for CUDA variants
---------
Co-authored-by: zynaa <7562909-zynaa@users.noreply.gitlab.com>
* Automatically install latest pyspark version
* Better text
* Do not use shutil to keep behaviour
* Make setup_script cwd independent
* Use _get_program_version to calculate spark version
* Update setup_spark.py reqs
* Update setup_spark.py
* Add info about HADOOP_VERSION
* Add customization back
* Better text
* Specify build args when they are actually needed
* Better text
* Better code
* Better code
* Better text
* Get rid of warning
* Improve code
* Remove information about checksum
* Better text
* Modify the custom Python kernel
- to activate the custom environment
- for the respective Jupyter Notebook and Jupyter Console
Signed-off-by: Ayaz Salikhov <mathbunnyru@gmail.com>
* Add DL3059 to hadolint ignore list
Signed-off-by: Ayaz Salikhov <mathbunnyru@gmail.com>
* Move hadolint ignore to a single line
* Use python heredoc
* Remove unused print
* Fix style
* Do not hardcode CONDA_DIR
* Update custom_environment.dockerfile
* Use indent=1
* Implement activate_notebook_custom_env.py as a separate script
* Do not call Python manually
---------
Signed-off-by: Ayaz Salikhov <mathbunnyru@gmail.com>
Co-authored-by: Olivier Benz <olivier.benz@b-data.ch>
* Fix conda hook to work in both terminal and Jupyter Notebook
* Fix hook for Jupyter Terminals
* Rename startup hook to have order of precedence
* Try to increase sleep
* Comment making env_name default in custom_environment
* add grpcio and grpcio_status to support Spark Connect
* Sort install list
* Fix package name
* Update pyspark docs with new deps grpcio and grpcio-status
* set grpcio and grpcio-status version as 1.56
* exclude grpcio and grpcio-status in test_packages.py
* Update selecting.md
* Update test_packages.py
* Update Dockerfile
---------
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
* Migrate start-notebook.sh to Python
Based on
> Stop using bash, haha 👍
from https://github.com/jupyter/docker-stacks/issues/1532.
If there's more appetite for this, I'll try to migrate
`start.sh` and `start-singleuser.sh` as well - I think they should
all be merged together. We can remove the `.sh` suffixes for
accuracy, and keep symlinks in place so old config still works. Since
the shebang is what is used to launch the correct interpreter,
the `.sh` doesn't matter.
Will help fix https://github.com/jupyter/docker-stacks/issues/1532,
as I believe all those things are going to be easier to do from
Python than from bash.
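The symlink approach described above can be sketched as follows (paths and the demo script body are illustrative, not the repository's actual layout):

```shell
#!/bin/bash
# Hypothetical demo: the real entrypoint is a Python script whose shebang
# selects the interpreter, so the file itself needs no .sh suffix.
mkdir -p /tmp/demo-bin
cat > /tmp/demo-bin/start-notebook <<'EOF'
#!/usr/bin/env python3
print("started")
EOF
chmod +x /tmp/demo-bin/start-notebook

# Keep a .sh symlink so existing configs that call start-notebook.sh
# keep working unchanged.
ln -sf start-notebook /tmp/demo-bin/start-notebook.sh

# Invoking the old .sh name runs the Python script via the symlink.
/tmp/demo-bin/start-notebook.sh
```

The kernel follows the symlink and then honors the target's shebang, which is why the suffix carries no meaning for execution.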
* Rename start-notebook.sh to start-notebook
* Cleanup start-notebook a little
* Fix typo
* Migrate start-singleuser as well
* Remove unused import
* Run symlink commands as root
* Combine repetitive RUN commands
* Remove multiple args to env
`-u` cannot be set via the shebang, so we must set the
env var instead
* Fix conditional inversion
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
* Fix how start-singleuser is exec'd
* Actually call jupyterhub-singleuser in start-singleuser
* Pass through any additional args we get
* Put .py suffix on the start-* scripts
* Add .sh shims for the start-* scripts
* Document start-notebook.sh and start-singleuser.sh
* Partially test start-notebook.sh
* Reflow warning docs
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>
---------
Co-authored-by: Ayaz Salikhov <mathbunnyru@users.noreply.github.com>