Mirror of https://github.com/jupyterhub/jupyterhub.git
Synced 2025-10-07 10:04:07 +00:00

Compare commits — 2157 commits
    @@ -5,6 +5,5 @@ jupyterhub.sqlite
    jupyterhub_config.py
    node_modules
    docs
    .git
    dist
    build
.flake8 (9 lines changed)

    @@ -3,14 +3,9 @@
     # E: style errors
     # W: style warnings
     # C: complexity
    -# F401: module imported but unused
    -# F403: import *
    -# F811: redefinition of unused `name` from line `N`
    +# D: docstring warnings (unused pydocstyle extension)
     # F841: local variable assigned but never used
    -# E402: module level import not at top of file
    -# I100: Import statements are in the wrong order
    -# I101: Imported names are in the wrong order. Should be
    -ignore = E, C, W, F401, F403, F811, F841, E402, I100, I101, D400
    +ignore = E, C, W, D, F841
     builtins = c, get_config
     exclude =
         .cache,
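To see the effect of the slimmed-down ignore list, flake8 can be run against the package from the repository root so it picks up this config automatically. A minimal sketch — the `jupyterhub` target path and the use of `subprocess` here are assumptions, not part of the change:

```python
# Hypothetical local check: run flake8 from the repo root so it reads the
# repository's .flake8 config; the "jupyterhub" package path is an assumption.
import subprocess

result = subprocess.run(
    ["flake8", "jupyterhub"],
    capture_output=True,
    text=True,
)
print(result.stdout or "no style errors reported")
```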
.github/dependabot.yaml (new file, 57 lines)

    # dependabot.yaml reference: https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
    #
    # Notes:
    # - Status and logs from dependabot are provided at
    #   https://github.com/jupyterhub/jupyterhub/network/updates.
    #
    version: 2
    updates:
      # Maintain dependencies in our GitHub Workflows
      - package-ecosystem: github-actions
        directory: /
        labels: [ci]
        schedule:
          interval: monthly
          time: "05:00"
          timezone: Etc/UTC

      - package-ecosystem: npm
        directory: /
        groups:
          # one big pull request for minor bumps
          npm-minor:
            patterns:
              - "*"
            update-types:
              - minor
              - patch
        schedule:
          interval: monthly

      - package-ecosystem: npm
        directory: /jsx
        groups:
          # one big pull request for minor bumps
          jsx-minor:
            patterns:
              - "*"
            update-types:
              - minor
              - patch
          # group major bumps of react-related dependencies
          jsx-react:
            patterns:
              - "react*"
              - "redux*"
              - "*react"
              - "recompose"
            update-types:
              - major
          # group major bumps of webpack-related dependencies
          jsx-webpack:
            patterns:
              - "webpack*"
              - "@babel/*"
              - "*-loader"
            update-types:
              - major
        schedule:
          interval: monthly
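As a quick local sanity check that the new file parses and defines the intended update jobs, something along these lines can be used. This helper is not part of the repository; it assumes PyYAML is available and is only a sketch:

```python
# Hypothetical helper: parse .github/dependabot.yaml and list the update jobs
# it configures (ecosystem, directory, schedule interval).
import yaml  # assumes PyYAML is installed

with open(".github/dependabot.yaml") as f:
    config = yaml.safe_load(f)

assert config["version"] == 2
for update in config["updates"]:
    print(
        update["package-ecosystem"],
        update["directory"],
        update["schedule"]["interval"],
    )
```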
.github/workflows/registry-overviews.yml (new file, 54 lines)

    name: Update Registry overviews

    env:
      OWNER: ${{ github.repository_owner }}

    on:
      push:
        branches:
          - main
        paths:
          - ".github/workflows/registry-overviews.yml"

          - "README.md"
          - "onbuild/README.md"
          - "demo-image/README.md"
          - "singleuser/README.md"
      workflow_dispatch:

    jobs:
      update-overview:
        runs-on: ubuntu-latest
        name: update-overview (${{matrix.image}})
        if: github.repository_owner == 'jupyterhub'

        steps:
          - name: Checkout Repo ⚡️
            uses: actions/checkout@v4

          - name: Push README to Registry 🐳
            uses: christian-korneck/update-container-description-action@d36005551adeaba9698d8d67a296bd16fa91f8e8 # v1
            env:
              DOCKER_USER: ${{ secrets.DOCKERHUB_USERNAME }}
              DOCKER_PASS: ${{ secrets.DOCKERHUB_TOKEN }}
            with:
              destination_container_repo: ${{ env.OWNER }}/${{ matrix.image }}
              provider: dockerhub
              short_description: ${{ matrix.description }}
              readme_file: ${{ matrix.readme_file }}

        strategy:
          matrix:
            include:
              - image: jupyterhub
                description: "JupyterHub: multi-user Jupyter notebook server"
                readme_file: README.md
              - image: jupyterhub-onbuild
                description: onbuild version of JupyterHub images
                readme_file: onbuild/README.md
              - image: jupyterhub-demo
                description: Demo JupyterHub Docker image with a quick overview of what JupyterHub is and how it works
                readme_file: demo-image/README.md
              - image: singleuser
                description: "single-user docker images for use with JupyterHub and DockerSpawner see also: jupyter/docker-stacks"
                readme_file: singleuser/README.md
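Because the workflow also declares a `workflow_dispatch` trigger, the overview update can be started manually. A minimal sketch using GitHub's workflow-dispatch REST endpoint — the token source, the `requests` library, and the `main` ref are assumptions, not part of this change:

```python
# Hypothetical trigger script: POST to this workflow's workflow_dispatch
# endpoint. GITHUB_TOKEN is assumed to have "actions: write" access.
import os

import requests

token = os.environ["GITHUB_TOKEN"]  # assumption: supplied by the caller
resp = requests.post(
    "https://api.github.com/repos/jupyterhub/jupyterhub/actions/workflows/"
    "registry-overviews.yml/dispatches",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
    json={"ref": "main"},  # assumption: dispatch against the main branch
)
resp.raise_for_status()  # GitHub returns 204 No Content on success
```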
.github/workflows/release.yml (106 lines changed)

    @@ -30,19 +30,21 @@ on:

     jobs:
       build-release:
    -    runs-on: ubuntu-20.04
    +    runs-on: ubuntu-22.04
         steps:
    -      - uses: actions/checkout@v2
    -      - uses: actions/setup-python@v2
    +      - uses: actions/checkout@v4
    +      - uses: actions/setup-python@v5
             with:
    -          python-version: 3.8
    +          python-version: "3.11"
    +          cache: pip

    -      - uses: actions/setup-node@v1
    +      - uses: actions/setup-node@v4
             with:
    -          node-version: "14"
    +          node-version: "20"

    -      - name: install build package
    +      - name: install build requirements
             run: |
               npm install -g yarn
               pip install --upgrade pip
               pip install build
               pip freeze
@@ -52,28 +54,21 @@ jobs:
|
||||
python -m build --sdist --wheel .
|
||||
ls -l dist
|
||||
|
||||
- name: verify wheel
|
||||
- name: verify sdist
|
||||
run: |
|
||||
cd dist
|
||||
pip install ./*.whl
|
||||
# verify data-files are installed where they are found
|
||||
cat <<EOF | python
|
||||
import os
|
||||
from jupyterhub._data import DATA_FILES_PATH
|
||||
print(f"DATA_FILES_PATH={DATA_FILES_PATH}")
|
||||
assert os.path.exists(DATA_FILES_PATH), DATA_FILES_PATH
|
||||
for subpath in (
|
||||
"templates/page.html",
|
||||
"static/css/style.min.css",
|
||||
"static/components/jquery/dist/jquery.js",
|
||||
):
|
||||
path = os.path.join(DATA_FILES_PATH, subpath)
|
||||
assert os.path.exists(path), path
|
||||
print("OK")
|
||||
EOF
|
||||
./ci/check_sdist.py dist/jupyterhub-*.tar.gz
|
||||
|
||||
- name: verify data-files are installed where they are found
|
||||
run: |
|
||||
pip install dist/*.whl
|
||||
./ci/check_installed_data.py
|
||||
|
||||
- name: verify sdist can be installed without npm/yarn
|
||||
run: |
|
||||
docker run --rm -v $PWD/dist:/dist:ro docker.io/library/python:3.9-slim-bullseye bash -c 'pip install /dist/jupyterhub-*.tar.gz'
|
||||
|
||||
# ref: https://github.com/actions/upload-artifact#readme
|
||||
- uses: actions/upload-artifact@v2
|
||||
- uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: jupyterhub-${{ github.sha }}
|
||||
path: "dist/*"
|
||||
@@ -89,7 +84,8 @@ jobs:
|
||||
twine upload --skip-existing dist/*
|
||||
|
||||
publish-docker:
|
||||
runs-on: ubuntu-20.04
|
||||
runs-on: ubuntu-22.04
|
||||
timeout-minutes: 30
|
||||
|
||||
services:
|
||||
# So that we can test this in PRs/branches
|
||||
@@ -102,39 +98,35 @@ jobs:
|
||||
- name: Should we push this image to a public registry?
|
||||
run: |
|
||||
if [ "${{ startsWith(github.ref, 'refs/tags/') || (github.ref == 'refs/heads/main') }}" = "true" ]; then
|
||||
# Empty => Docker Hub
|
||||
echo "REGISTRY=" >> $GITHUB_ENV
|
||||
echo "REGISTRY=quay.io/" >> $GITHUB_ENV
|
||||
else
|
||||
echo "REGISTRY=localhost:5000/" >> $GITHUB_ENV
|
||||
fi
|
||||
|
||||
- uses: actions/checkout@v2
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
# Setup docker to build for multiple platforms, see:
|
||||
# https://github.com/docker/build-push-action/tree/v2.4.0#usage
|
||||
# https://github.com/docker/build-push-action/blob/v2.4.0/docs/advanced/multi-platform.md
|
||||
- name: Set up QEMU (for docker buildx)
|
||||
uses: docker/setup-qemu-action@25f0500ff22e406f7191a2a8ba8cda16901ca018 # associated tag: v1.0.2
|
||||
uses: docker/setup-qemu-action@v3
|
||||
|
||||
- name: Set up Docker Buildx (for multi-arch builds)
|
||||
uses: docker/setup-buildx-action@2a4b53665e15ce7d7049afb11ff1f70ff1610609 # associated tag: v1.1.2
|
||||
uses: docker/setup-buildx-action@v3
|
||||
with:
|
||||
# Allows pushing to registry on localhost:5000
|
||||
driver-opts: network=host
|
||||
|
||||
- name: Setup push rights to Docker Hub
|
||||
# This was setup by...
|
||||
# 1. Creating a Docker Hub service account "jupyterhubbot"
|
||||
# 2. Creating a access token for the service account specific to this
|
||||
# repository: https://hub.docker.com/settings/security
|
||||
# 3. Making the account part of the "bots" team, and granting that team
|
||||
# permissions to push to the relevant images:
|
||||
# https://hub.docker.com/orgs/jupyterhub/teams/bots/permissions
|
||||
# 4. Registering the username and token as a secret for this repo:
|
||||
# https://github.com/jupyterhub/jupyterhub/settings/secrets/actions
|
||||
# 1. Creating a [Robot Account](https://quay.io/organization/jupyterhub?tab=robots) in the JupyterHub
|
||||
# . Quay.io org
|
||||
# 2. Giving it enough permissions to push to the jupyterhub and singleuser images
|
||||
# 3. Putting the robot account's username and password in GitHub actions environment
|
||||
if: env.REGISTRY != 'localhost:5000/'
|
||||
run: |
|
||||
docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" -p "${{ secrets.DOCKERHUB_TOKEN }}"
|
||||
docker login -u "${{ secrets.QUAY_USERNAME }}" -p "${{ secrets.QUAY_PASSWORD }}" "${{ env.REGISTRY }}"
|
||||
docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" -p "${{ secrets.DOCKERHUB_TOKEN }}" docker.io
|
||||
|
||||
# image: jupyterhub/jupyterhub
|
||||
#
|
||||
@@ -147,15 +139,17 @@ jobs:
|
||||
# If GITHUB_TOKEN isn't available (e.g. in PRs) returns no tags [].
|
||||
- name: Get list of jupyterhub tags
|
||||
id: jupyterhubtags
|
||||
uses: jupyterhub/action-major-minor-tag-calculator@v2
|
||||
uses: jupyterhub/action-major-minor-tag-calculator@v3
|
||||
with:
|
||||
githubToken: ${{ secrets.GITHUB_TOKEN }}
|
||||
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub:"
|
||||
prefix: >-
|
||||
${{ env.REGISTRY }}jupyterhub/jupyterhub:
|
||||
jupyterhub/jupyterhub:
|
||||
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub:noref"
|
||||
branchRegex: ^\w[\w-.]*$
|
||||
|
||||
- name: Build and push jupyterhub
|
||||
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f
|
||||
uses: docker/build-push-action@v6
|
||||
with:
|
||||
context: .
|
||||
platforms: linux/amd64,linux/arm64
|
||||
@@ -168,15 +162,17 @@ jobs:
|
||||
#
|
||||
- name: Get list of jupyterhub-onbuild tags
|
||||
id: onbuildtags
|
||||
uses: jupyterhub/action-major-minor-tag-calculator@v2
|
||||
uses: jupyterhub/action-major-minor-tag-calculator@v3
|
||||
with:
|
||||
githubToken: ${{ secrets.GITHUB_TOKEN }}
|
||||
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub-onbuild:"
|
||||
prefix: >-
|
||||
${{ env.REGISTRY }}jupyterhub/jupyterhub-onbuild:
|
||||
jupyterhub/jupyterhub-onbuild:
|
||||
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub-onbuild:noref"
|
||||
branchRegex: ^\w[\w-.]*$
|
||||
|
||||
- name: Build and push jupyterhub-onbuild
|
||||
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f
|
||||
uses: docker/build-push-action@v6
|
||||
with:
|
||||
build-args: |
|
||||
BASE_IMAGE=${{ fromJson(steps.jupyterhubtags.outputs.tags)[0] }}
|
||||
@@ -189,15 +185,17 @@ jobs:
|
||||
#
|
||||
- name: Get list of jupyterhub-demo tags
|
||||
id: demotags
|
||||
uses: jupyterhub/action-major-minor-tag-calculator@v2
|
||||
uses: jupyterhub/action-major-minor-tag-calculator@v3
|
||||
with:
|
||||
githubToken: ${{ secrets.GITHUB_TOKEN }}
|
||||
prefix: "${{ env.REGISTRY }}jupyterhub/jupyterhub-demo:"
|
||||
prefix: >-
|
||||
${{ env.REGISTRY }}jupyterhub/jupyterhub-demo:
|
||||
jupyterhub/jupyterhub-demo:
|
||||
defaultTag: "${{ env.REGISTRY }}jupyterhub/jupyterhub-demo:noref"
|
||||
branchRegex: ^\w[\w-.]*$
|
||||
|
||||
- name: Build and push jupyterhub-demo
|
||||
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f
|
||||
uses: docker/build-push-action@v6
|
||||
with:
|
||||
build-args: |
|
||||
BASE_IMAGE=${{ fromJson(steps.onbuildtags.outputs.tags)[0] }}
|
||||
@@ -213,15 +211,17 @@ jobs:
|
||||
#
|
||||
- name: Get list of jupyterhub/singleuser tags
|
||||
id: singleusertags
|
||||
uses: jupyterhub/action-major-minor-tag-calculator@v2
|
||||
uses: jupyterhub/action-major-minor-tag-calculator@v3
|
||||
with:
|
||||
githubToken: ${{ secrets.GITHUB_TOKEN }}
|
||||
prefix: "${{ env.REGISTRY }}jupyterhub/singleuser:"
|
||||
prefix: >-
|
||||
${{ env.REGISTRY }}jupyterhub/singleuser:
|
||||
jupyterhub/singleuser:
|
||||
defaultTag: "${{ env.REGISTRY }}jupyterhub/singleuser:noref"
|
||||
branchRegex: ^\w[\w-.]*$
|
||||
|
||||
- name: Build and push jupyterhub/singleuser
|
||||
uses: docker/build-push-action@e1b7f96249f2e4c8e4ac1519b9608c0d48944a1f
|
||||
uses: docker/build-push-action@v6
|
||||
with:
|
||||
build-args: |
|
||||
JUPYTERHUB_VERSION=${{ github.ref_type == 'tag' && github.ref_name || format('git:{0}', github.sha) }}
|
||||
|
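(Not part of the workflow above — a minimal local sketch. It assumes the multi-arch images published by this workflow exist on quay.io under a tag such as `latest`; the tag name is illustrative.)

```bash
# Inspect the multi-arch manifest that docker/build-push-action publishes,
# to confirm both linux/amd64 and linux/arm64 variants are present.
docker buildx imagetools inspect quay.io/jupyterhub/jupyterhub:latest
```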
.github/workflows/support-bot.yml (4 lines changed)
@@ -12,7 +12,7 @@ jobs:
  action:
    runs-on: ubuntu-latest
    steps:
      - uses: dessant/support-requests@v2
      - uses: dessant/support-requests@v4
        with:
          github-token: ${{ github.token }}
          support-label: "support"
@@ -25,7 +25,7 @@ jobs:

            Our goal is to sustain a positive experience for both users and developers. We use GitHub issues for specific discussions related to changing a repository's content, and let the forum be where we can more generally help and inspire each other.

            Thanks you for being an active member of our community! :heart:
            Thank you for being an active member of our community! :heart:
          close-issue: true
          lock-issue: false
          issue-lock-reason: "off-topic"
.github/workflows/test-docs.yml (81 lines changed)
@@ -15,15 +15,13 @@ on:
      - "docs/**"
      - "jupyterhub/_version.py"
      - "jupyterhub/scopes.py"
      - ".github/workflows/*"
      - "!.github/workflows/test-docs.yml"
      - ".github/workflows/test-docs.yml"
  push:
    paths:
      - "docs/**"
      - "jupyterhub/_version.py"
      - "jupyterhub/scopes.py"
      - ".github/workflows/*"
      - "!.github/workflows/test-docs.yml"
      - ".github/workflows/test-docs.yml"
    branches-ignore:
      - "dependabot/**"
      - "pre-commit-ci-update-config"
@@ -38,27 +36,82 @@ env:

jobs:
  validate-rest-api-definition:
    runs-on: ubuntu-20.04
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v2
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: npm

      - name: Validate REST API definition
        uses: char0n/swagger-editor-validate@182d1a5d26ff5c2f4f452c43bd55e2c7d8064003
        with:
          definition-file: docs/source/_static/rest-api.yml
        run: |
          npx @redocly/cli lint

  test-docs:
    runs-on: ubuntu-20.04
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - uses: actions/checkout@v4
        with:
          python-version: "3.9"
          # make rediraffecheckdiff requires git history to compare current
          # commit with the main branch and previous releases.
          fetch-depth: 0

      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
          cache: pip
          cache-dependency-path: |
            requirements.txt
            docs/requirements.txt

      - name: Install requirements
        run: |
          pip install -r docs/requirements.txt pytest -e .
          pip install -e . -r docs/requirements.txt pytest

      - name: pytest docs/
        run: |
          pytest docs/

      # readthedocs doesn't halt on warnings,
      # so raise any warnings here
      - name: build docs
        run: |
          cd docs
          make html

      # Output broken and permanently redirected links in a readable format
      - name: check links
        uses: manics/action-sphinx-linkcheck-summary@main
        with:
          docs-dir: docs
          build-dir: docs/_build

      # make rediraffecheckdiff compares files for different changesets
      # these diff targets aren't always available
      # - compare with base ref (usually 'main', always on 'origin') for pull requests
      # - only compare with tags when running against jupyterhub/jupyterhub
      #   to avoid errors on forks, which often lack tags
      - name: check redirects for this PR
        if: github.event_name == 'pull_request'
        run: |
          cd docs
          export REDIRAFFE_BRANCH=origin/${{ github.base_ref }}
          make rediraffecheckdiff

      # this should check currently published 'stable' links for redirects
      - name: check redirects since last release
        if: github.repository == 'jupyterhub/jupyterhub'
        run: |
          cd docs
          export REDIRAFFE_BRANCH=$(git describe --tags --abbrev=0)
          make rediraffecheckdiff

      # longer-term redirect check (fixed version) for older links
      - name: check redirects since 3.0.0
        if: github.repository == 'jupyterhub/jupyterhub'
        run: |
          cd docs
          export REDIRAFFE_BRANCH=3.0.0
          make rediraffecheckdiff
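(Not part of the workflow above — a minimal sketch of running the same redirect check locally, assuming docs/requirements.txt provides the Sphinx and rediraffe tooling the docs Makefile expects, and that `origin/main` is the base branch to compare against.)

```bash
pip install -r docs/requirements.txt
cd docs
export REDIRAFFE_BRANCH=origin/main   # same variable the workflow exports
make rediraffecheckdiff
```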
.github/workflows/test-jsx.yml (84 lines changed)
@@ -19,90 +19,30 @@ on:
      - "**"
  workflow_dispatch:

permissions:
  contents: read

jobs:
  # The ./jsx folder contains React based source code files that are to compile
  # to share/jupyterhub/static/js/admin-react.js. The ./jsx folder includes
  # tests also has tests that this job is meant to run with `yarn test`
  # tests also has tests that this job is meant to run with `npm test`
  # according to the documentation in jsx/README.md.
  test-jsx-admin-react:
    runs-on: ubuntu-20.04
    runs-on: ubuntu-22.04
    timeout-minutes: 5

    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "14"
          node-version: "20"

      - name: Install yarn
        run: |
          npm install -g yarn

      - name: yarn
      - name: install jsx
        run: |
          cd jsx
          yarn
          npm ci

      - name: yarn test
      - name: test
        run: |
          cd jsx
          yarn test

  # The ./jsx folder contains React based source files that are to compile to
  # share/jupyterhub/static/js/admin-react.js. This job makes sure that whatever
  # we have in jsx/src matches the compiled asset that we package and
  # distribute.
  #
  # This job's purpose is to make sure we don't forget to compile changes and to
  # verify nobody sneaks in a change in the hard to review compiled asset.
  #
  # NOTE: In the future we may want to stop version controlling the compiled
  #       artifact and instead generate it whenever we package JupyterHub. If we
  #       do this, we are required to setup node and compile the source code
  #       more often, at the same time we could avoid having this check be made.
  #
  compile-jsx-admin-react:
    runs-on: ubuntu-20.04
    timeout-minutes: 5

    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: "14"

      - name: Install yarn
        run: |
          npm install -g yarn

      - name: yarn
        run: |
          cd jsx
          yarn

      - name: yarn build
        run: |
          cd jsx
          yarn build

      - name: yarn place
        run: |
          cd jsx
          yarn place

      - name: Verify compiled jsx/src matches version controlled artifact
        run: |
          if [[ `git status --porcelain=v1` ]]; then
            echo "The source code in ./jsx compiles to something different than found in ./share/jupyterhub/static/js/admin-react.js!"
            echo
            echo "Please re-compile the source code in ./jsx with the following commands:"
            echo
            echo "yarn"
            echo "yarn build"
            echo "yarn place"
            echo
            echo "See ./jsx/README.md for more details."
            exit 1
          else
            echo "Compilation of jsx/src to share/jupyterhub/static/js/admin-react.js didn't lead to changes."
          fi
          npm test
.github/workflows/test.yml (147 lines changed)
@@ -28,12 +28,15 @@ on:
env:
  # UTF-8 content may be interpreted as ascii and causes errors without this.
  LANG: C.UTF-8
  PYTEST_ADDOPTS: "--verbose --color=yes"
  SQLALCHEMY_WARN_20: "1"

permissions:
  contents: read

jobs:
  # Run "pytest jupyterhub/tests" in various configurations
  pytest:
    runs-on: ubuntu-20.04
    runs-on: ubuntu-22.04
    timeout-minutes: 15

    strategy:
@@ -53,36 +56,57 @@ jobs:
        #   Tests everything when JupyterHub works against a dedicated mysql or
        #   postgresql server.
        #
        # nbclassic:
        # legacy_notebook:
        #   Tests everything when the user instances are started with
        #   notebook instead of jupyter_server.
        #   the legacy notebook server instead of jupyter_server.
        #
        # ssl:
        #   Tests everything using internal SSL connections instead of
        #   unencrypted HTTP
        #
        # main_dependencies:
        #   Tests everything when the we use the latest available dependencies
        #   Tests everything when we use the latest available dependencies
        #   from: traitlets.
        #
        # NOTE: Since only the value of these parameters are presented in the
        #       GitHub UI when the workflow run, we avoid using true/false as
        #       values by instead duplicating the name to signal true.
        # Python versions available at:
        # https://github.com/actions/python-versions/blob/HEAD/versions-manifest.json
        include:
          - python: "3.6"
          - python: "3.8"
            oldest_dependencies: oldest_dependencies
            nbclassic: nbclassic
          - python: "3.6"
            subdomain: subdomain
          - python: "3.7"
            db: mysql
          - python: "3.7"
            ssl: ssl
            legacy_notebook: legacy_notebook
          - python: "3.8"
            db: postgres
          - python: "3.8"
            nbclassic: nbclassic
            jupyter_server: "1.*"
            subset: singleuser
          - python: "3.9"
            db: mysql
          - python: "3.10"
            db: postgres
          - python: "3.12"
            subdomain: subdomain
            serverextension: serverextension
          - python: "3.11"
            ssl: ssl
            serverextension: serverextension
          - python: "3.11"
            jupyverse: jupyverse
            subset: singleuser
          - python: "3.11"
            subdomain: subdomain
            noextension: noextension
            subset: singleuser
          - python: "3.11"
            ssl: ssl
            noextension: noextension
            subset: singleuser
          - python: "3.11"
            browser: browser
          - python: "3.11"
            subdomain: subdomain
            browser: browser
          - python: "3.12"
            main_dependencies: main_dependencies

    steps:
@@ -96,7 +120,7 @@ jobs:
          fi
          if [ "${{ matrix.db }}" == "mysql" ]; then
            echo "MYSQL_HOST=127.0.0.1" >> $GITHUB_ENV
            echo "JUPYTERHUB_TEST_DB_URL=mysql+mysqlconnector://root@127.0.0.1:3306/jupyterhub" >> $GITHUB_ENV
            echo "JUPYTERHUB_TEST_DB_URL=mysql+mysqldb://root@127.0.0.1:3306/jupyterhub" >> $GITHUB_ENV
          fi
          if [ "${{ matrix.ssl }}" == "ssl" ]; then
            echo "SSL_ENABLED=1" >> $GITHUB_ENV
@@ -107,54 +131,77 @@ jobs:
            echo "PGPASSWORD=hub[test/:?" >> $GITHUB_ENV
            echo "JUPYTERHUB_TEST_DB_URL=postgresql://test_user:hub%5Btest%2F%3A%3F@127.0.0.1:5432/jupyterhub" >> $GITHUB_ENV
          fi
          if [ "${{ matrix.jupyter_server }}" != "" ]; then
            echo "JUPYTERHUB_SINGLEUSER_APP=jupyterhub.tests.mockserverapp.MockServerApp" >> $GITHUB_ENV
          if [ "${{ matrix.serverextension }}" != "" ]; then
            echo "JUPYTERHUB_SINGLEUSER_EXTENSION=1" >> $GITHUB_ENV
          elif [ "${{ matrix.noextension }}" != "" ]; then
            echo "JUPYTERHUB_SINGLEUSER_EXTENSION=0" >> $GITHUB_ENV
          fi
      - uses: actions/checkout@v2
      # NOTE: actions/setup-node@v1 make use of a cache within the GitHub base
          if [ "${{ matrix.jupyverse }}" != "" ]; then
            echo "JUPYTERHUB_SINGLEUSER_APP=jupyverse" >> $GITHUB_ENV
          fi
      - uses: actions/checkout@v4
      # NOTE: actions/setup-node@v4 make use of a cache within the GitHub base
      #       environment and setup in a fraction of a second.
      - name: Install Node v14
        uses: actions/setup-node@v1
      - name: Install Node
        uses: actions/setup-node@v4
        with:
          node-version: "14"
      - name: Install Node dependencies
          node-version: "20"
      - name: Install Javascript dependencies
        run: |
          npm install
          npm install -g configurable-http-proxy
          npm install -g configurable-http-proxy yarn
          npm list

      # NOTE: actions/setup-python@v2 make use of a cache within the GitHub base
      # NOTE: actions/setup-python@v5 make use of a cache within the GitHub base
      #       environment and setup in a fraction of a second.
      - name: Install Python ${{ matrix.python }}
        uses: actions/setup-python@v2
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python }}
          python-version: "${{ matrix.python }}"
          cache: pip
          cache-dependency-path: |
            pyproject.toml
            requirements.txt
            ci/oldest-dependencies/requirements.old

      - name: Install Python dependencies
        run: |
          pip install --upgrade pip
          pip install --upgrade . -r dev-requirements.txt

          if [ "${{ matrix.oldest_dependencies }}" != "" ]; then
            # take any dependencies in requirements.txt such as tornado>=5.0
            # and transform them to tornado==5.0 so we can run tests with
            # the earliest-supported versions
            cat requirements.txt | grep '>=' | sed -e 's@>=@==@g' > oldest-requirements.txt
            pip install -r oldest-requirements.txt
            # frozen env with oldest dependencies
            # make sure our `>=` pins really do express our minimum supported versions
            pip install -r ci/oldest-dependencies/requirements.old -e .
          else
            pip install --pre -e ".[test]"
          fi

          if [ "${{ matrix.main_dependencies }}" != "" ]; then
            pip install git+https://github.com/ipython/traitlets#egg=traitlets --force
            # Tests are broken:
            # https://github.com/jupyterhub/jupyterhub/issues/4418
            # pip install git+https://github.com/ipython/traitlets#egg=traitlets --force
            pip install --upgrade --pre sqlalchemy
          fi
          if [ "${{ matrix.nbclassic }}" != "" ]; then
          if [ "${{ matrix.legacy_notebook }}" != "" ]; then
            pip uninstall jupyter_server --yes
            pip install notebook
            pip install 'notebook<7'
          fi
          if [ "${{ matrix.jupyter_server }}" != "" ]; then
            pip install "jupyter_server==${{ matrix.jupyter_server }}"
          fi
          if [ "${{ matrix.jupyverse }}" != "" ]; then
            pip install "jupyverse[jupyterlab,auth-jupyterhub]"
            pip install -e .
          fi
          if [ "${{ matrix.db }}" == "mysql" ]; then
            pip install mysql-connector-python
            pip install mysqlclient
          fi
          if [ "${{ matrix.db }}" == "postgres" ]; then
            pip install psycopg2-binary
          fi
          if [ "${{ matrix.serverextension }}" != "" ]; then
            pip install 'jupyter-server>=2'
          fi

          pip freeze

@@ -199,25 +246,31 @@ jobs:
            DB=postgres bash ci/init-db.sh
          fi

      - name: Configure browser tests
        if: matrix.browser
        run: echo "PYTEST_ADDOPTS=$PYTEST_ADDOPTS -m browser" >> "${GITHUB_ENV}"

      - name: Ensure browsers are installed for playwright
        if: matrix.browser
        run: python -m playwright install --with-deps

      - name: Run pytest
        run: |
          pytest --maxfail=2 --cov=jupyterhub jupyterhub/tests
      - name: Submit codecov report
        run: |
          codecov
          pytest -k "${{ matrix.subset }}" --maxfail=2 --cov=jupyterhub jupyterhub/tests

      - uses: codecov/codecov-action@v4

  docker-build:
    runs-on: ubuntu-20.04
    runs-on: ubuntu-22.04
    timeout-minutes: 20

    steps:
      - uses: actions/checkout@v2
      - uses: actions/checkout@v4

      - name: build images
        run: |
          docker build -t jupyterhub/jupyterhub .
          DOCKER_BUILDKIT=1 docker build -t jupyterhub/jupyterhub .
          docker build -t jupyterhub/jupyterhub-onbuild onbuild
          docker build -t jupyterhub/jupyterhub:alpine -f dockerfiles/Dockerfile.alpine .
          docker build -t jupyterhub/singleuser singleuser

      - name: smoke test jupyterhub
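(Not part of the workflow above — a minimal sketch of reproducing one matrix cell locally, e.g. the browser-marked tests. It assumes a checkout of the repository with a recent Python and Node available; all commands are taken from the workflow itself.)

```bash
pip install --upgrade pip
pip install -e ".[test]"
npm install -g configurable-http-proxy
# browser-marked tests additionally need playwright's browsers
python -m playwright install --with-deps
pytest -m browser --maxfail=2 --cov=jupyterhub jupyterhub/tests
```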
.gitignore (12 lines changed)
@@ -9,16 +9,24 @@ docs/_build
docs/build
docs/source/_static/rest-api
docs/source/rbac/scope-table.md
docs/source/reference/metrics.md

.ipynb_checkpoints
.virtual_documents

jsx/build/
# ignore config file at the top-level of the repo
# but not sub-dirs
/jupyterhub_config.py
jupyterhub_cookie_secret
jupyterhub.sqlite
package-lock.json
jupyterhub.sqlite*
share/jupyterhub/static/components
share/jupyterhub/static/css/style.css
share/jupyterhub/static/css/style.css.map
share/jupyterhub/static/css/style.min.css
share/jupyterhub/static/css/style.min.css.map
share/jupyterhub/static/js/admin-react.js*
*.egg-info
MANIFEST
.coverage
@@ -32,3 +40,5 @@ docs/source/reference/metrics.rst
oldest-requirements.txt
jupyterhub-proxy.pid
examples/server-api/service-token

*.hot-update*
.pre-commit-config.yaml
@@ -8,46 +8,51 @@
# - Run on all files: pre-commit run --all-files
# - Register git hooks: pre-commit install --install-hooks
#

ci:
  # pre-commit.ci will open PRs updating our hooks once a month
  autoupdate_schedule: monthly

repos:
  # Autoformat: Python code, syntax patterns are modernized
  - repo: https://github.com/asottile/pyupgrade
    rev: v2.31.1
  # autoformat and lint Python code
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.3
    hooks:
      - id: pyupgrade
        args:
          - --py36-plus

  # Autoformat: Python code
  - repo: https://github.com/asottile/reorder_python_imports
    rev: v3.0.1
    hooks:
      - id: reorder-python-imports

  # Autoformat: Python code
  - repo: https://github.com/psf/black
    rev: 22.1.0
    hooks:
      - id: black
        args: [--target-version=py36]
      - id: ruff
        types_or:
          - python
          - jupyter
        args: ["--fix", "--show-fixes"]
      - id: ruff-format
        types_or:
          - python
          - jupyter

  # Autoformat: markdown, yaml, javascript (see the file .prettierignore)
  - repo: https://github.com/pre-commit/mirrors-prettier
    rev: v2.5.1
    rev: v4.0.0-alpha.8
    hooks:
      - id: prettier
        exclude: .*/templates/.*

  # autoformat HTML templates
  - repo: https://github.com/djlint/djLint
    rev: v1.35.2
    hooks:
      - id: djlint-reformat-jinja
        files: ".*templates/.*.html"
        types_or: ["html"]
        exclude: redoc.html
      - id: djlint-jinja
        files: ".*templates/.*.html"
        types_or: ["html"]

  # Autoformat and linting, misc. details
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.1.0
    rev: v4.6.0
    hooks:
      - id: end-of-file-fixer
        exclude: share/jupyterhub/static/js/admin-react.js
      - id: requirements-txt-fixer
      - id: check-case-conflict
      - id: check-executables-have-shebangs

  # Linting: Python code (see the file .flake8)
  - repo: https://github.com/PyCQA/flake8
    rev: "4.0.1"
    hooks:
      - id: flake8
.prettierignore
@@ -1,2 +1,4 @@
share/jupyterhub/templates/
share/jupyterhub/static/js/admin-react.js
jupyterhub/singleuser/templates/
docs/source/_templates/
.readthedocs.yaml
@@ -1,20 +1,26 @@
# Configuration on how ReadTheDocs (RTD) builds our documentation
# ref: https://readthedocs.org/projects/jupyterhub/
# ref: https://docs.readthedocs.io/en/stable/config-file/v2.html
#
version: 2

sphinx:
  configuration: docs/source/conf.py

build:
  os: ubuntu-20.04
  os: ubuntu-22.04
  tools:
    nodejs: "16"
    python: "3.9"
    nodejs: "20"
    python: "3.11"

python:
  install:
    - method: pip
      path: .
    - path: .
    - requirements: docs/requirements.txt

formats:
  # Adding htmlzip enables a Downloads section in the rendered website's RTD
  # menu where the html build can be downloaded. This doesn't require any
  # additional configuration in docs/source/conf.py.
  #
  - htmlzip
  - epub
CONTRIBUTING.md (133 lines changed)
@@ -6,134 +6,9 @@ you can follow the [Jupyter contributor guide](https://jupyter.readthedocs.io/en
Make sure to also follow [Project Jupyter's Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md)
for a friendly and welcoming collaborative environment.

## Setting up a development environment
Please see our documentation on

<!--
https://jupyterhub.readthedocs.io/en/stable/contributing/setup.html
contains a lot of the same information. Should we merge the docs and
just have this page link to that one?
-->
- [Setting up a development install](https://jupyterhub.readthedocs.io/en/latest/contributing/setup.html)
- [Testing JupyterHub and linting code](https://jupyterhub.readthedocs.io/en/latest/contributing/tests.html)

JupyterHub requires Python >= 3.5 and nodejs.

As a Python project, a development install of JupyterHub follows standard practices for the basics (steps 1-2).

1. clone the repo
   ```bash
   git clone https://github.com/jupyterhub/jupyterhub
   ```
2. do a development install with pip

   ```bash
   cd jupyterhub
   python3 -m pip install --editable .
   ```

3. install the development requirements,
   which include things like testing tools

   ```bash
   python3 -m pip install -r dev-requirements.txt
   ```

4. install configurable-http-proxy with npm:

   ```bash
   npm install -g configurable-http-proxy
   ```

5. set up pre-commit hooks for automatic code formatting, etc.

   ```bash
   pre-commit install
   ```

   You can also invoke the pre-commit hook manually at any time with

   ```bash
   pre-commit run
   ```

## Contributing

JupyterHub has adopted automatic code formatting so you shouldn't
need to worry too much about your code style.
As long as your code is valid,
the pre-commit hook should take care of how it should look.
You can invoke the pre-commit hook by hand at any time with:

```bash
pre-commit run
```

which should run any autoformatting on your code
and tell you about any errors it couldn't fix automatically.
You may also install [black integration](https://github.com/psf/black#editor-integration)
into your text editor to format code automatically.

If you have already committed files before setting up the pre-commit
hook with `pre-commit install`, you can fix everything up using
`pre-commit run --all-files`. You need to make the fixing commit
yourself after that.

## Testing

It's a good idea to write tests to exercise any new features,
or that trigger any bugs that you have fixed to catch regressions.

You can run the tests with:

```bash
pytest -v
```

in the repo directory. If you want to just run certain tests,
check out the [pytest docs](https://pytest.readthedocs.io/en/latest/usage.html)
for how pytest can be called.
For instance, to test only spawner-related things in the REST API:

```bash
pytest -v -k spawn jupyterhub/tests/test_api.py
```

The tests live in `jupyterhub/tests` and are organized roughly into:

1. `test_api.py` tests the REST API
2. `test_pages.py` tests loading the HTML pages

and other collections of tests for different components.
When writing a new test, there should usually be a test of
similar functionality already written and related tests should
be added nearby.

The fixtures live in `jupyterhub/tests/conftest.py`. There are
fixtures that can be used for JupyterHub components, such as:

- `app`: an instance of JupyterHub with mocked parts
- `auth_state_enabled`: enables persisting auth_state (like authentication tokens)
- `db`: a sqlite in-memory DB session
- `io_loop`: a Tornado event loop
- `event_loop`: a new asyncio event loop
- `user`: creates a new temporary user
- `admin_user`: creates a new temporary admin user
- single user servers
  - `cleanup_after`: allows cleanup of single user servers between tests
- mocked service
  - `MockServiceSpawner`: a spawner that mocks services for testing with a short poll interval
  - `mockservice`: mocked service with no external service url
  - `mockservice_url`: mocked service with a url to test external services

And fixtures to add functionality or spawning behavior:

- `admin_access`: grants admin access
- `no_patience`: sets slow-spawning timeouts to zero
- `slow_spawn`: enables the SlowSpawner (a spawner that takes a few seconds to start)
- `never_spawn`: enables the NeverSpawner (a spawner that will never start)
- `bad_spawn`: enables the BadSpawner (a spawner that fails immediately)
- `slow_bad_spawn`: enables the SlowBadSpawner (a spawner that fails after a short delay)

To read more about fixtures check out the
[pytest docs](https://docs.pytest.org/en/latest/fixture.html)
for how to use the existing fixtures, and how to create new ones.

When in doubt, feel free to [ask](https://gitter.im/jupyterhub/jupyterhub).
If you need some help, feel free to ask on [Gitter](https://gitter.im/jupyterhub/jupyterhub) or [Discourse](https://discourse.jupyter.org/).
Dockerfile (141 lines changed)
@@ -6,7 +6,7 @@
#
# Option 1:
#
# FROM jupyterhub/jupyterhub:latest
# FROM quay.io/jupyterhub/jupyterhub:latest
#
# And put your configuration file jupyterhub_config.py in /srv/jupyterhub/jupyterhub_config.py.
#
@@ -14,88 +14,133 @@
#
# Or you can create your jupyterhub config and database on the host machine, and mount it with:
#
# docker run -v $PWD:/srv/jupyterhub -t jupyterhub/jupyterhub
# docker run -v $PWD:/srv/jupyterhub -t quay.io/jupyterhub/jupyterhub
#
# NOTE
# If you base on jupyterhub/jupyterhub-onbuild
# If you base on quay.io/jupyterhub/jupyterhub-onbuild
# your jupyterhub_config.py will be added automatically
# from your docker directory.

ARG BASE_IMAGE=ubuntu:focal-20200729
FROM $BASE_IMAGE AS builder
######################################################################
# This Dockerfile uses multi-stage builds with optimisations to build
# the JupyterHub wheel on the native architecture only
# https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/

USER root
ARG BASE_IMAGE=ubuntu:22.04

ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update \
 && apt-get install -yq --no-install-recommends \

######################################################################
# The JupyterHub wheel is pure Python so can be built for any platform
# on the native architecture (avoiding QEMU emulation)
FROM --platform=${BUILDPLATFORM:-linux/amd64} $BASE_IMAGE AS jupyterhub-builder

ENV DEBIAN_FRONTEND=noninteractive

# Don't clear apt cache, and don't combine RUN commands, so that cached layers can
# be reused in other stages

RUN apt-get update -qq \
 && apt-get install -yqq --no-install-recommends \
    build-essential \
    ca-certificates \
    curl \
    git \
    gnupg \
    locales \
    python3-dev \
    python3-pip \
    python3-pycurl \
    nodejs \
    npm \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/*

RUN python3 -m pip install --upgrade setuptools pip wheel
    python3-venv \
 && python3 -m pip install --no-cache-dir --upgrade setuptools pip build wheel
# Ubuntu 22.04 comes with Nodejs 12 which is too old for building JupyterHub JS
# It's fine at runtime though (used only by configurable-http-proxy)
ARG NODE_MAJOR=20
RUN mkdir -p /etc/apt/keyrings \
 && curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg \
 && echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list \
 && apt-get update \
 && apt-get install -yqq --no-install-recommends \
    nodejs

WORKDIR /src/jupyterhub
# copy everything except whats in .dockerignore, its a
# compromise between needing to rebuild and maintaining
# what needs to be part of the build
COPY . /src/jupyterhub/
WORKDIR /src/jupyterhub
COPY . .

# Build client component packages (they will be copied into ./share and
# packaged with the built wheel.)
RUN python3 setup.py bdist_wheel
RUN python3 -m pip wheel --wheel-dir wheelhouse dist/*.whl
ARG PIP_CACHE_DIR=/tmp/pip-cache
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
    python3 -m build --wheel

# verify installed files
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
    python3 -m pip install ./dist/*.whl \
 && cd ci \
 && python3 check_installed_data.py

FROM $BASE_IMAGE

USER root
######################################################################
# All other wheels required by JupyterHub, some are platform specific
FROM $BASE_IMAGE AS wheel-builder

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update \
 && apt-get install -yq --no-install-recommends \
RUN apt-get update -qq \
 && apt-get install -yqq --no-install-recommends \
    build-essential \
    ca-certificates \
    curl \
    gnupg \
    locales \
    python3-dev \
    python3-pip \
    python3-pycurl \
    nodejs \
    npm \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/*
    python3-venv \
 && python3 -m pip install --no-cache-dir --upgrade setuptools pip build wheel

ENV SHELL=/bin/bash \
WORKDIR /src/jupyterhub

COPY --from=jupyterhub-builder /src/jupyterhub/dist/*.whl /src/jupyterhub/dist/
ARG PIP_CACHE_DIR=/tmp/pip-cache
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
    python3 -m pip wheel --wheel-dir wheelhouse dist/*.whl


######################################################################
# The final JupyterHub image, platform specific
FROM $BASE_IMAGE AS jupyterhub

ENV DEBIAN_FRONTEND=noninteractive \
    SHELL=/bin/bash \
    LC_ALL=en_US.UTF-8 \
    LANG=en_US.UTF-8 \
    LANGUAGE=en_US.UTF-8

RUN locale-gen $LC_ALL

# always make sure pip is up to date!
RUN python3 -m pip install --no-cache --upgrade setuptools pip

RUN npm install -g configurable-http-proxy@^4.2.0 \
 && rm -rf ~/.npm

# install the wheels we built in the first stage
COPY --from=builder /src/jupyterhub/wheelhouse /tmp/wheelhouse
RUN python3 -m pip install --no-cache /tmp/wheelhouse/*

RUN mkdir -p /srv/jupyterhub/
WORKDIR /srv/jupyterhub/
    LANGUAGE=en_US.UTF-8 \
    PYTHONDONTWRITEBYTECODE=1

EXPOSE 8000

LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
LABEL org.jupyter.service="jupyterhub"

WORKDIR /srv/jupyterhub

RUN apt-get update -qq \
 && apt-get install -yqq --no-install-recommends \
    ca-certificates \
    curl \
    gnupg \
    locales \
    python-is-python3 \
    python3-pip \
    python3-pycurl \
    nodejs \
    npm \
 && locale-gen $LC_ALL \
 && npm install -g configurable-http-proxy@^4.2.0 \
    # clean cache and logs
 && rm -rf /var/lib/apt/lists/* /var/log/* /var/tmp/* ~/.npm
# install the wheels we built in the previous stage
RUN --mount=type=cache,from=wheel-builder,source=/src/jupyterhub/wheelhouse,target=/tmp/wheelhouse \
    # always make sure pip is up to date!
    python3 -m pip install --no-compile --no-cache-dir --upgrade setuptools pip \
 && python3 -m pip install --no-compile --no-cache-dir /tmp/wheelhouse/*

CMD ["jupyterhub"]
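(Not part of the Dockerfile above — a minimal sketch of driving the multi-stage, multi-platform build locally. It assumes Docker with the buildx plugin and QEMU registered for the non-native architecture, as the release workflow sets up; the `jupyterhub:dev` tag is illustrative.)

```bash
# Build for both architectures the release workflow targets;
# drop --platform for a plain single-arch local build.
docker buildx build --platform linux/amd64,linux/arm64 -t jupyterhub:dev .
```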
MANIFEST.in (31 lines changed)
@@ -1,24 +1,13 @@
include README.md
include COPYING.md
include setupegg.py
include bower-lite
include package.json
# using setuptools-scm means we only need to handle _non-tracked files here_

include package-lock.json
include *requirements.txt
include Dockerfile

graft onbuild
graft jupyterhub
graft scripts
# include untracked js/css artifacts, components
graft share
graft singleuser
graft ci

# Documentation
graft docs
prune docs/node_modules

# prune some large unused files from components
# prune some large unused files from components.
# these patterns affect source distributions (sdists)
# we have stricter exclusions from installation in setup.py:get_data_files
prune share/jupyterhub/static/components/bootstrap/dist/css
exclude share/jupyterhub/static/components/bootstrap/dist/fonts/*.svg
prune share/jupyterhub/static/components/font-awesome/css
@@ -28,11 +17,3 @@ prune share/jupyterhub/static/components/jquery/external
prune share/jupyterhub/static/components/jquery/src
prune share/jupyterhub/static/components/moment/lang
prune share/jupyterhub/static/components/moment/min

# Patterns to exclude from any directory
global-exclude *~
global-exclude *.pyc
global-exclude *.pyo
global-exclude .git
global-exclude .ipynb_checkpoints
global-exclude .bower.json
README.md (28 lines changed)
@@ -8,22 +8,12 @@

---

Please note that this repository is participating in a study into the sustainability of open source projects. Data will be gathered about this repository for approximately the next 12 months, starting from 2021-06-11.

Data collected will include the number of contributors, number of PRs, time taken to close/merge these PRs, and issues closed.

For more information, please visit
[our informational page](https://sustainable-open-science-and-software.github.io/) or download our [participant information sheet](https://sustainable-open-science-and-software.github.io/assets/PIS_sustainable_software.pdf).

---

# [JupyterHub](https://github.com/jupyterhub/jupyterhub)

[](https://pypi.python.org/pypi/jupyterhub)
[](https://anaconda.org/conda-forge/jupyterhub)
[](https://jupyterhub.readthedocs.org/en/latest/)
[](https://github.com/jupyterhub/jupyterhub/actions)
[](https://hub.docker.com/r/jupyterhub/jupyterhub/tags)
[](https://codecov.io/gh/jupyterhub/jupyterhub)
[](https://github.com/jupyterhub/jupyterhub/issues)
[](https://discourse.jupyter.org/c/jupyterhub)
@@ -59,14 +49,14 @@ JupyterHub also provides a
[REST API][]
for administration of the Hub and its users.

[rest api]: https://juptyerhub.readthedocs.io/en/latest/reference/rest-api.html
[rest api]: https://jupyterhub.readthedocs.io/en/latest/reference/rest-api.html

## Installation

### Check prerequisites

- A Linux/Unix based system
- [Python](https://www.python.org/downloads/) 3.6 or greater
- [Python](https://www.python.org/downloads/) 3.8 or greater
- [nodejs/npm](https://www.npmjs.com/)

  - If you are using **`conda`**, the nodejs and npm dependencies will be installed for
@@ -127,7 +117,7 @@ more configuration of the system.

## Configuration

The [Getting Started](https://jupyterhub.readthedocs.io/en/latest/getting-started/index.html) section of the
The [Getting Started](https://jupyterhub.readthedocs.io/en/latest/tutorial/index.html#getting-started) section of the
documentation explains the common steps in setting up JupyterHub.

The [**JupyterHub tutorial**](https://github.com/jupyterhub/jupyterhub-tutorial)
@@ -169,10 +159,10 @@ To start the Hub on a specific url and port `10.0.1.2:443` with **https**:

## Docker

A starter [**docker image for JupyterHub**](https://hub.docker.com/r/jupyterhub/jupyterhub/)
A starter [**docker image for JupyterHub**](https://quay.io/repository/jupyterhub/jupyterhub)
gives a baseline deployment of JupyterHub using Docker.

**Important:** This `jupyterhub/jupyterhub` image contains only the Hub itself,
**Important:** This `quay.io/jupyterhub/jupyterhub` image contains only the Hub itself,
with no configuration. In general, one needs to make a derivative image, with
at least a `jupyterhub_config.py` setting up an Authenticator and/or a Spawner.
To run the single-user servers, which may be on the same system as the Hub or
@@ -180,7 +170,7 @@ not, Jupyter Notebook version 4 or greater must be installed.

The JupyterHub docker image can be started with the following command:

    docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub
    docker run -p 8000:8000 -d --name jupyterhub quay.io/jupyterhub/jupyterhub jupyterhub

This command will create a container named `jupyterhub` that you can
**stop and resume** with `docker stop/start`.
@@ -190,7 +180,7 @@ this a good choice for **testing JupyterHub on your desktop or laptop**.

If you want to run docker on a computer that has a public IP then you should
(as in MUST) **secure it with ssl** by adding ssl options to your docker
configuration or by using a ssl enabled proxy.
configuration or by using an ssl enabled proxy.

[Mounting volumes](https://docs.docker.com/engine/admin/volumes/volumes/) will
allow you to **store data outside the docker image (host system) so it will be persistent**, even when you start
@@ -239,9 +229,9 @@ You can also talk with us on our JupyterHub [Gitter](https://gitter.im/jupyterhu

- [Reporting Issues](https://github.com/jupyterhub/jupyterhub/issues)
- [JupyterHub tutorial](https://github.com/jupyterhub/jupyterhub-tutorial)
- [Documentation for JupyterHub](https://jupyterhub.readthedocs.io/en/latest/) | [PDF (latest)](https://media.readthedocs.org/pdf/jupyterhub/latest/jupyterhub.pdf) | [PDF (stable)](https://media.readthedocs.org/pdf/jupyterhub/stable/jupyterhub.pdf)
- [Documentation for JupyterHub](https://jupyterhub.readthedocs.io/en/latest/)
- [Documentation for JupyterHub's REST API][rest api]
- [Documentation for Project Jupyter](http://jupyter.readthedocs.io/en/latest/index.html) | [PDF](https://media.readthedocs.org/pdf/jupyter/latest/jupyter.pdf)
- [Documentation for Project Jupyter](http://jupyter.readthedocs.io/en/latest/index.html)
- [Project Jupyter website](https://jupyter.org)
- [Project Jupyter community](https://jupyter.org/community)
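(Not part of the README diff above — a minimal sketch of the Docker usage it describes, with your own configuration mounted in. The host-side path is illustrative; the in-container location `/srv/jupyterhub/jupyterhub_config.py` is the one the README and Dockerfile mention.)

```bash
# Run the Hub image with a host-side jupyterhub_config.py mounted where the image expects it
docker run -d -p 8000:8000 --name jupyterhub \
  -v "$PWD/jupyterhub_config.py:/srv/jupyterhub/jupyterhub_config.py" \
  quay.io/jupyterhub/jupyterhub jupyterhub
```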
RELEASE.md (49 lines changed)
@@ -1,39 +1,42 @@
# How to make a release

`jupyterhub` is a package [available on
PyPI](https://pypi.org/project/jupyterhub/) and
[conda-forge](https://conda-forge.org/).
These are instructions on how to make a release on PyPI.
The PyPI release is done automatically by CI when a tag is pushed.
`jupyterhub` is a package available on [PyPI][] and [conda-forge][].
These are instructions on how to make a release.

For you to follow along according to these instructions, you need:
## Pre-requisites

- To have push rights to the [jupyterhub GitHub
  repository](https://github.com/jupyterhub/jupyterhub).
- Push rights to [jupyterhub/jupyterhub][]
- Push rights to [conda-forge/jupyterhub-feedstock][]

## Steps to make a release

1. Create a PR updating `docs/source/changelog.md` with [github-activity][] and
   continue only when its merged.

   ```shell
   pip install github-activity

   github-activity --heading-level=3 jupyterhub/jupyterhub
   ```

1. Checkout main and make sure it is up to date.

   ```shell
   ORIGIN=${ORIGIN:-origin} # set to the canonical remote, e.g. 'upstream' if 'origin' is not the official repo
   git checkout main
   git fetch $ORIGIN main
   git reset --hard $ORIGIN/main
   git fetch origin main
   git reset --hard origin/main
   ```

1. Make sure `docs/source/changelog.md` is up-to-date.
   [github-activity][] can help with this.

1. Update the version with `tbump`.
   You can see what will happen without making any changes with `tbump --dry-run ${VERSION}`
1. Update the version, make commits, and push a git tag with `tbump`.

   ```shell
   pip install tbump
   tbump --dry-run ${VERSION}

   tbump ${VERSION}
   ```

   This will tag and publish a release,
   which will be finished on CI.
   Following this, the [CI system][] will build and publish a release.

1. Reset the version back to dev, e.g. `2.1.0.dev` after releasing `2.0.0`

@@ -42,9 +45,11 @@ For you to follow along according to these instructions, you need:
   ```

1. Following the release to PyPI, an automated PR should arrive to
   [conda-forge/jupyterhub-feedstock][],
   check for the tests to succeed on this PR and then merge it to successfully
   update the package for `conda` on the conda-forge channel.
   [conda-forge/jupyterhub-feedstock][] with instructions.

[github-activity]: https://github.com/choldgraf/github-activity
[pypi]: https://pypi.org/project/jupyterhub/
[conda-forge]: https://anaconda.org/conda-forge/jupyterhub
[jupyterhub/jupyterhub]: https://github.com/jupyterhub/jupyterhub
[conda-forge/jupyterhub-feedstock]: https://github.com/conda-forge/jupyterhub-feedstock
[github-activity]: https://github.com/executablebooks/github-activity
[ci system]: https://github.com/jupyterhub/jupyterhub/actions/workflows/release.yml
bower-lite
@@ -7,6 +7,7 @@ bower-lite
Since Bower's on its way out,
stage frontend dependencies from node_modules into components
"""

import json
import os
import shutil
ci/check_installed_data.py (new executable file, 36 lines)
@@ -0,0 +1,36 @@
#!/usr/bin/env python
# Check that installed package contains everything we expect


from pathlib import Path

import jupyterhub
from jupyterhub._data import DATA_FILES_PATH

print("Checking jupyterhub._data", end=" ")
print(f"DATA_FILES_PATH={DATA_FILES_PATH}", end=" ")
DATA_FILES_PATH = Path(DATA_FILES_PATH)
assert DATA_FILES_PATH.is_dir(), DATA_FILES_PATH
for subpath in (
    "templates/spawn.html",
    "static/css/style.min.css",
    "static/components/jquery/dist/jquery.js",
    "static/js/admin-react.js",
):
    path = DATA_FILES_PATH / subpath
    assert path.is_file(), path

print("OK")

print("Checking package_data", end=" ")
jupyterhub_path = Path(jupyterhub.__file__).parent.resolve()
for subpath in (
    "alembic.ini",
    "alembic/versions/833da8570507_rbac.py",
    "event-schemas/server-actions/v1.yaml",
    "singleuser/templates/page.html",
):
    path = jupyterhub_path / subpath
    assert path.is_file(), path

print("OK")
ci/check_sdist.py (new executable file, 27 lines)
@@ -0,0 +1,27 @@
#!/usr/bin/env python
# Check that sdist contains everything we expect

import sys
import tarfile

expected_files = [
    "docs/requirements.txt",
    "jsx/package.json",
    "package.json",
    "README.md",
]

assert len(sys.argv) == 2, "Expected one file"
print(f"Checking {sys.argv[1]}")

tar = tarfile.open(name=sys.argv[1], mode="r:gz")
try:
    # Remove leading jupyterhub-VERSION/
    filelist = {f.partition('/')[2] for f in tar.getnames()}
finally:
    tar.close()

for e in expected_files:
    assert e in filelist, f"{e} not found"

print("OK")
ci/docker-db.sh
@@ -22,7 +22,7 @@ if [[ "$DB" == "mysql" ]]; then
    # ref server: https://hub.docker.com/_/mysql/
    # ref client: https://dev.mysql.com/doc/refman/5.7/en/setting-environment-variables.html
    #
    DOCKER_RUN_ARGS="-p 3306:3306 --env MYSQL_ALLOW_EMPTY_PASSWORD=1 mysql:5.7"
    DOCKER_RUN_ARGS="-p 3306:3306 --env MYSQL_ALLOW_EMPTY_PASSWORD=1 mysql:8.0"
    READINESS_CHECK="mysql --user root --execute \q"
elif [[ "$DB" == "postgres" ]]; then
    # Environment variables can influence both the postgresql server in the
@@ -36,7 +36,7 @@ elif [[ "$DB" == "postgres" ]]; then
    # used by the postgresql client psql, so we configure the user based on how
    # we want to connect.
    #
    DOCKER_RUN_ARGS="-p 5432:5432 --env "POSTGRES_USER=${PGUSER}" --env "POSTGRES_PASSWORD=${PGPASSWORD}" postgres:9.5"
    DOCKER_RUN_ARGS="-p 5432:5432 --env "POSTGRES_USER=${PGUSER}" --env "POSTGRES_PASSWORD=${PGPASSWORD}" postgres:15.1"
    READINESS_CHECK="psql --command \q"
else
    echo '$DB must be mysql or postgres'
ci/init-db.sh
@@ -19,8 +19,9 @@ else
fi

# Configure a set of databases in the database server for upgrade tests
# this list must be in sync with versions in test_db.py:test_upgrade
set -x
for SUFFIX in '' _upgrade_100 _upgrade_122 _upgrade_130; do
for SUFFIX in '' _upgrade_110 _upgrade_122 _upgrade_130 _upgrade_150 _upgrade_211 _upgrade_311; do
    $SQL_CLIENT "DROP DATABASE jupyterhub${SUFFIX};" 2>/dev/null || true
    $SQL_CLIENT "CREATE DATABASE jupyterhub${SUFFIX} ${EXTRA_CREATE_DATABASE_ARGS:-};"
done
ci/oldest-dependencies/oldest-dependencies.txt (new file, 13 lines)
@@ -0,0 +1,13 @@
alembic==1.4
async_generator==1.9
certipy==0.1.2
importlib_metadata==3.6; python_version < '3.10'
jinja2==2.11.0
jupyter_telemetry==0.1.0
oauthlib==3.0
pamela==1.1.0; sys_platform != 'win32'
prometheus_client==0.5.0
psutil==5.6.5; sys_platform == 'win32'
SQLAlchemy==1.4.1
tornado==5.1
traitlets==4.3.2
20
ci/oldest-dependencies/requirements.in
Normal file
20
ci/oldest-dependencies/requirements.in
Normal file
@@ -0,0 +1,20 @@
# oldest-dependencies.txt is autogenerated.
# recreate with:
# cat requirements.txt | grep '>=' | sed -e 's@>=@==@g' > ci/legacy-env/oldest-dependencies.txt
-r ./oldest-dependencies.txt
# then `pip-compile` with Python 3.8
# below are additional pins to make this a working test env
# these are extracted from jupyterhub[test]
beautifulsoup4
coverage
playwright
pytest
pytest-cov
pytest-asyncio==0.17.*
requests-mock
virtualenv

# and any additional pins to make this a working test env
# e.g. pinning down a transitive dependency
notebook==6.*
markupsafe==2.0.*
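The comments above describe how oldest-dependencies.txt is regenerated from the `>=` lower bounds in requirements.txt. A rough Python equivalent of that shell one-liner (the output path is copied from the comment, which may not match the file's final location):

```python
# Rough Python equivalent of: cat requirements.txt | grep '>=' | sed -e 's@>=@==@g'
from pathlib import Path

lines = Path("requirements.txt").read_text().splitlines()
pins = [line.replace(">=", "==") for line in lines if ">=" in line]
Path("ci/legacy-env/oldest-dependencies.txt").write_text("\n".join(pins) + "\n")
```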
ci/oldest-dependencies/requirements.old (new file, 285 lines, autogenerated by pip-compile)
@@ -0,0 +1,285 @@
|
||||
#
|
||||
# This file is autogenerated by pip-compile with Python 3.8
|
||||
# by the following command:
|
||||
#
|
||||
# pip-compile --output-file=requirements.old
|
||||
#
|
||||
alembic==1.4.0
|
||||
# via -r ./oldest-dependencies.txt
|
||||
appnope==0.1.3
|
||||
# via
|
||||
# ipykernel
|
||||
# ipython
|
||||
argon2-cffi==23.1.0
|
||||
# via notebook
|
||||
argon2-cffi-bindings==21.2.0
|
||||
# via argon2-cffi
|
||||
async-generator==1.9
|
||||
# via -r ./oldest-dependencies.txt
|
||||
attrs==23.1.0
|
||||
# via
|
||||
# jsonschema
|
||||
# referencing
|
||||
backcall==0.2.0
|
||||
# via ipython
|
||||
beautifulsoup4==4.12.2
|
||||
# via -r requirements.in
|
||||
bleach==6.0.0
|
||||
# via nbconvert
|
||||
certifi==2023.7.22
|
||||
# via requests
|
||||
certipy==0.1.2
|
||||
# via -r ./oldest-dependencies.txt
|
||||
cffi==1.15.1
|
||||
# via
|
||||
# argon2-cffi-bindings
|
||||
# cryptography
|
||||
charset-normalizer==3.2.0
|
||||
# via requests
|
||||
coverage[toml]==7.3.1
|
||||
# via
|
||||
# -r requirements.in
|
||||
# pytest-cov
|
||||
cryptography==41.0.4
|
||||
# via pyopenssl
|
||||
debugpy==1.8.0
|
||||
# via ipykernel
|
||||
decorator==5.1.1
|
||||
# via
|
||||
# ipython
|
||||
# traitlets
|
||||
defusedxml==0.7.1
|
||||
# via nbconvert
|
||||
distlib==0.3.7
|
||||
# via virtualenv
|
||||
entrypoints==0.4
|
||||
# via
|
||||
# jupyter-client
|
||||
# nbconvert
|
||||
exceptiongroup==1.1.3
|
||||
# via pytest
|
||||
fastjsonschema==2.18.0
|
||||
# via nbformat
|
||||
filelock==3.12.4
|
||||
# via virtualenv
|
||||
greenlet==2.0.2
|
||||
# via
|
||||
# playwright
|
||||
# sqlalchemy
|
||||
idna==3.4
|
||||
# via requests
|
||||
importlib-metadata==3.6.0 ; python_version < "3.10"
|
||||
# via -r ./oldest-dependencies.txt
|
||||
importlib-resources==6.1.0
|
||||
# via
|
||||
# jsonschema
|
||||
# jsonschema-specifications
|
||||
iniconfig==2.0.0
|
||||
# via pytest
|
||||
ipykernel==6.4.2
|
||||
# via notebook
|
||||
ipython==7.34.0
|
||||
# via ipykernel
|
||||
ipython-genutils==0.2.0
|
||||
# via
|
||||
# ipykernel
|
||||
# notebook
|
||||
# traitlets
|
||||
jedi==0.19.0
|
||||
# via ipython
|
||||
jinja2==2.11.0
|
||||
# via
|
||||
# -r ./oldest-dependencies.txt
|
||||
# nbconvert
|
||||
# notebook
|
||||
jsonschema==4.19.1
|
||||
# via
|
||||
# jupyter-telemetry
|
||||
# nbformat
|
||||
jsonschema-specifications==2023.7.1
|
||||
# via jsonschema
|
||||
jupyter-client==7.2.0
|
||||
# via
|
||||
# ipykernel
|
||||
# nbclient
|
||||
# notebook
|
||||
jupyter-core==5.0.0
|
||||
# via
|
||||
# jupyter-client
|
||||
# nbconvert
|
||||
# nbformat
|
||||
# notebook
|
||||
jupyter-telemetry==0.1.0
|
||||
# via -r ./oldest-dependencies.txt
|
||||
jupyterlab-pygments==0.2.2
|
||||
# via nbconvert
|
||||
mako==1.2.4
|
||||
# via alembic
|
||||
markupsafe==2.0.1
|
||||
# via
|
||||
# -r requirements.in
|
||||
# jinja2
|
||||
# mako
|
||||
matplotlib-inline==0.1.6
|
||||
# via
|
||||
# ipykernel
|
||||
# ipython
|
||||
mistune==0.8.4
|
||||
# via nbconvert
|
||||
nbclient==0.5.11
|
||||
# via nbconvert
|
||||
nbconvert==6.0.7
|
||||
# via notebook
|
||||
nbformat==5.3.0
|
||||
# via
|
||||
# nbclient
|
||||
# nbconvert
|
||||
# notebook
|
||||
nest-asyncio==1.5.8
|
||||
# via
|
||||
# jupyter-client
|
||||
# nbclient
|
||||
notebook==6.1.6
|
||||
# via -r requirements.in
|
||||
oauthlib==3.0.0
|
||||
# via -r ./oldest-dependencies.txt
|
||||
packaging==23.1
|
||||
# via pytest
|
||||
pamela==1.1.0 ; sys_platform != "win32"
|
||||
# via -r ./oldest-dependencies.txt
|
||||
pandocfilters==1.5.0
|
||||
# via nbconvert
|
||||
parso==0.8.3
|
||||
# via jedi
|
||||
pexpect==4.8.0
|
||||
# via ipython
|
||||
pickleshare==0.7.5
|
||||
# via ipython
|
||||
pkgutil-resolve-name==1.3.10
|
||||
# via jsonschema
|
||||
platformdirs==3.10.0
|
||||
# via
|
||||
# jupyter-core
|
||||
# virtualenv
|
||||
playwright==1.38.0
|
||||
# via -r requirements.in
|
||||
pluggy==1.3.0
|
||||
# via pytest
|
||||
prometheus-client==0.5.0
|
||||
# via
|
||||
# -r ./oldest-dependencies.txt
|
||||
# notebook
|
||||
prompt-toolkit==3.0.39
|
||||
# via ipython
|
||||
ptyprocess==0.7.0
|
||||
# via
|
||||
# pexpect
|
||||
# terminado
|
||||
pycparser==2.21
|
||||
# via cffi
|
||||
pyee==9.0.4
|
||||
# via playwright
|
||||
pygments==2.16.1
|
||||
# via
|
||||
# ipython
|
||||
# nbconvert
|
||||
pyopenssl==23.2.0
|
||||
# via certipy
|
||||
pytest==7.4.2
|
||||
# via
|
||||
# -r requirements.in
|
||||
# pytest-asyncio
|
||||
# pytest-cov
|
||||
pytest-asyncio==0.17.2
|
||||
# via -r requirements.in
|
||||
pytest-cov==4.1.0
|
||||
# via -r requirements.in
|
||||
python-dateutil==2.8.2
|
||||
# via
|
||||
# alembic
|
||||
# jupyter-client
|
||||
python-editor==1.0.4
|
||||
# via alembic
|
||||
python-json-logger==2.0.7
|
||||
# via jupyter-telemetry
|
||||
pyzmq==25.1.1
|
||||
# via
|
||||
# jupyter-client
|
||||
# notebook
|
||||
referencing==0.30.2
|
||||
# via
|
||||
# jsonschema
|
||||
# jsonschema-specifications
|
||||
requests==2.31.0
|
||||
# via requests-mock
|
||||
requests-mock==1.11.0
|
||||
# via -r requirements.in
|
||||
rpds-py==0.10.3
|
||||
# via
|
||||
# jsonschema
|
||||
# referencing
|
||||
ruamel-yaml==0.17.32
|
||||
# via jupyter-telemetry
|
||||
ruamel-yaml-clib==0.2.7
|
||||
# via ruamel-yaml
|
||||
send2trash==1.8.2
|
||||
# via notebook
|
||||
six==1.16.0
|
||||
# via
|
||||
# bleach
|
||||
# python-dateutil
|
||||
# requests-mock
|
||||
# traitlets
|
||||
soupsieve==2.5
|
||||
# via beautifulsoup4
|
||||
sqlalchemy==1.4.1
|
||||
# via
|
||||
# -r ./oldest-dependencies.txt
|
||||
# alembic
|
||||
terminado==0.13.3
|
||||
# via notebook
|
||||
testpath==0.6.0
|
||||
# via nbconvert
|
||||
tomli==2.0.1
|
||||
# via
|
||||
# coverage
|
||||
# pytest
|
||||
tornado==5.1
|
||||
# via
|
||||
# -r ./oldest-dependencies.txt
|
||||
# ipykernel
|
||||
# jupyter-client
|
||||
# notebook
|
||||
# terminado
|
||||
traitlets==4.3.2
|
||||
# via
|
||||
# -r ./oldest-dependencies.txt
|
||||
# ipykernel
|
||||
# ipython
|
||||
# jupyter-client
|
||||
# jupyter-core
|
||||
# jupyter-telemetry
|
||||
# matplotlib-inline
|
||||
# nbclient
|
||||
# nbconvert
|
||||
# nbformat
|
||||
# notebook
|
||||
typing-extensions==4.8.0
|
||||
# via
|
||||
# playwright
|
||||
# pyee
|
||||
urllib3==2.0.5
|
||||
# via requests
|
||||
virtualenv==20.24.5
|
||||
# via -r requirements.in
|
||||
wcwidth==0.2.6
|
||||
# via prompt-toolkit
|
||||
webencodings==0.5.1
|
||||
# via bleach
|
||||
zipp==3.17.0
|
||||
# via
|
||||
# importlib-metadata
|
||||
# importlib-resources
|
||||
|
||||
# The following packages are considered to be unsafe in a requirements file:
|
||||
# setuptools
|
@@ -3,7 +3,7 @@
 # This should only be used for demo or testing and not as a base image to build on.
 #
 # It includes the notebook package and it uses the DummyAuthenticator and the SimpleLocalProcessSpawner.
-ARG BASE_IMAGE=jupyterhub/jupyterhub-onbuild
+ARG BASE_IMAGE=quay.io/jupyterhub/jupyterhub-onbuild
 FROM ${BASE_IMAGE}

 # Install the notebook package
@@ -1,6 +1,6 @@
 # Configuration file for jupyterhub-demo

-c = get_config()
+c = get_config()  # noqa

 # Use DummyAuthenticator and SimpleSpawner
 c.JupyterHub.spawner_class = "simple"
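The hunk above only shows the spawner line, while the comment also mentions the DummyAuthenticator. A completed sketch of such a demo configuration might look like the following; the authenticator lines and the password are assumptions for illustration, not part of the diff:

```python
# Hypothetical completion of the demo configuration; only the spawner line
# appears in the hunk above, the rest is illustrative.
c = get_config()  # noqa

c.JupyterHub.authenticator_class = "dummy"
c.DummyAuthenticator.password = "a-demo-only-password"
c.JupyterHub.spawner_class = "simple"
```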
@@ -1,21 +0,0 @@
-r requirements.txt
# temporary pin of attrs for jsonschema 0.3.0a1
# seems to be a pip bug
attrs>=17.4.0
beautifulsoup4
codecov
coverage
cryptography
html5lib # needed for beautifulsoup
jupyterlab >=3
mock
pre-commit
pytest>=3.3
pytest-asyncio
pytest-cov
requests-mock
tbump
# blacklist urllib3 releases affected by https://github.com/urllib3/urllib3/issues/1683
# I *think* this should only affect testing, not production
urllib3!=1.25.4,!=1.25.5
virtualenv
@@ -1,14 +0,0 @@
FROM alpine:3.13
ENV LANG=en_US.UTF-8
RUN apk add --no-cache \
    python3 \
    py3-pip \
    py3-ruamel.yaml \
    py3-cryptography \
    py3-sqlalchemy

ARG JUPYTERHUB_VERSION=1.3.0
RUN pip3 install --no-cache jupyterhub==${JUPYTERHUB_VERSION}

USER nobody
CMD ["jupyterhub"]
@@ -1,20 +0,0 @@
## What is Dockerfile.alpine

Dockerfile.alpine contains base image for jupyterhub. It does not work independently, but only as part of a full jupyterhub cluster

## How to use it?

1. A running configurable-http-proxy, whose API is accessible.
2. A jupyterhub_config file.
3. Authentication and other libraries required by the specific jupyterhub_config file.

## Steps to test it outside a cluster

- start configurable-http-proxy in another container
- specify CONFIGPROXY_AUTH_TOKEN env in both containers
- put both containers on the same network (e.g. docker network create jupyterhub; docker run ... --net jupyterhub)
- tell jupyterhub where CHP is (e.g. c.ConfigurableHTTPProxy.api_url = 'http://chp:8001')
- tell jupyterhub not to start the proxy itself (c.ConfigurableHTTPProxy.should_start = False)
- Use dummy authenticator for ease of testing. Update following in jupyterhub_config file
  - c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
  - c.DummyAuthenticator.password = "your strong password"
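The deleted README above lists the configuration needed to run this image against an externally managed configurable-http-proxy. Collected into a single jupyterhub_config.py, those steps might look roughly like this (the `chp` hostname and password come from the README's own examples; the file as a whole is only a sketch, not the project's documented setup):

```python
# Sketch of a jupyterhub_config.py matching the steps in the deleted README above.
c = get_config()  # noqa

# Point JupyterHub at the externally running configurable-http-proxy
c.ConfigurableHTTPProxy.api_url = 'http://chp:8001'
c.ConfigurableHTTPProxy.should_start = False

# Dummy authentication, for testing only
c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
c.DummyAuthenticator.password = "your strong password"
```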
@@ -4,6 +4,11 @@ from jupyterhub._data import DATA_FILES_PATH

 print(f"DATA_FILES_PATH={DATA_FILES_PATH}")

-for sub_path in ("templates", "static/components", "static/css/style.min.css"):
+for sub_path in (
+    "templates",
+    "static/components",
+    "static/css/style.min.css",
+    "static/js/admin-react.js",
+):
     path = os.path.join(DATA_FILES_PATH, sub_path)
     assert os.path.exists(path), path
docs/Makefile
@@ -1,209 +1,62 @@
|
||||
# Makefile for Sphinx documentation
|
||||
#
|
||||
# Makefile for Sphinx documentation generated by sphinx-quickstart
|
||||
# ----------------------------------------------------------------------------
|
||||
|
||||
# You can set these variables from the command line.
|
||||
SPHINXOPTS = "-W"
|
||||
SPHINXBUILD = sphinx-build
|
||||
PAPER =
|
||||
BUILDDIR = build
|
||||
|
||||
# User-friendly check for sphinx-build
|
||||
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
|
||||
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
|
||||
endif
|
||||
|
||||
# Internal variables.
|
||||
PAPEROPT_a4 = -D latex_paper_size=a4
|
||||
PAPEROPT_letter = -D latex_paper_size=letter
|
||||
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
|
||||
# the i18n builder cannot share the environment and doctrees with the others
|
||||
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
|
||||
|
||||
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext
|
||||
# You can set these variables from the command line, and also
|
||||
# from the environment for the first two.
|
||||
SPHINXOPTS ?= --color -W --keep-going
|
||||
SPHINXBUILD ?= sphinx-build
|
||||
SOURCEDIR = source
|
||||
BUILDDIR = _build
|
||||
|
||||
# Put it first so that "make" without argument is like "make help".
|
||||
help:
|
||||
@echo "Please use \`make <target>' where <target> is one of"
|
||||
@echo " html to make standalone HTML files"
|
||||
@echo " dirhtml to make HTML files named index.html in directories"
|
||||
@echo " singlehtml to make a single large HTML file"
|
||||
@echo " pickle to make pickle files"
|
||||
@echo " json to make JSON files"
|
||||
@echo " htmlhelp to make HTML files and a HTML help project"
|
||||
@echo " qthelp to make HTML files and a qthelp project"
|
||||
@echo " applehelp to make an Apple Help Book"
|
||||
@echo " devhelp to make HTML files and a Devhelp project"
|
||||
@echo " epub to make an epub"
|
||||
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
|
||||
@echo " latexpdf to make LaTeX files and run them through pdflatex"
|
||||
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
|
||||
@echo " text to make text files"
|
||||
@echo " man to make manual pages"
|
||||
@echo " texinfo to make Texinfo files"
|
||||
@echo " info to make Texinfo files and run them through makeinfo"
|
||||
@echo " gettext to make PO message catalogs"
|
||||
@echo " changes to make an overview of all changed/added/deprecated items"
|
||||
@echo " xml to make Docutils-native XML files"
|
||||
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
|
||||
@echo " linkcheck to check all external links for integrity"
|
||||
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
|
||||
@echo " coverage to run coverage check of the documentation (if enabled)"
|
||||
@echo " spelling to run spell check on documentation"
|
||||
@echo " metrics to generate documentation for metrics by inspecting the source code"
|
||||
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS)
|
||||
|
||||
clean:
|
||||
rm -rf $(BUILDDIR)/*
|
||||
.PHONY: help Makefile metrics scopes
|
||||
|
||||
metrics: source/reference/metrics.rst
|
||||
# Catch-all target: route all unknown targets to Sphinx using the new
|
||||
# "make mode" option.
|
||||
#
|
||||
# Several sphinx-build commands can be used through this, for example:
|
||||
#
|
||||
# - make clean
|
||||
# - make linkcheck
|
||||
# - make spelling
|
||||
#
|
||||
%: Makefile
|
||||
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS)
|
||||
|
||||
source/reference/metrics.rst: generate-metrics.py
|
||||
python3 generate-metrics.py
|
||||
|
||||
scopes: source/rbac/scope-table.md
|
||||
|
||||
source/rbac/scope-table.md: source/rbac/generate-scope-table.py
|
||||
python3 source/rbac/generate-scope-table.py
|
||||
# Manually added targets - related to code generation
|
||||
# ----------------------------------------------------------------------------
|
||||
|
||||
# For local development:
|
||||
# - builds the html
|
||||
# - NOTE: If the pre-requisites for the html target is updated, also update the
|
||||
# Read The Docs section in docs/source/conf.py.
|
||||
#
|
||||
html: metrics scopes
|
||||
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
|
||||
$(SPHINXBUILD) -b html "$(SOURCEDIR)" "$(BUILDDIR)/html" $(SPHINXOPTS)
|
||||
@echo
|
||||
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
|
||||
|
||||
dirhtml:
|
||||
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
|
||||
@echo
|
||||
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
|
||||
metrics: source/reference/metrics.md
|
||||
source/reference/metrics.md:
|
||||
python3 generate-metrics.py
|
||||
|
||||
singlehtml:
|
||||
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
|
||||
@echo
|
||||
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
|
||||
scopes: source/rbac/scope-table.md
|
||||
source/rbac/scope-table.md:
|
||||
python3 source/rbac/generate-scope-table.py
|
||||
|
||||
pickle:
|
||||
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
|
||||
@echo
|
||||
@echo "Build finished; now you can process the pickle files."
|
||||
|
||||
json:
|
||||
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
|
||||
@echo
|
||||
@echo "Build finished; now you can process the JSON files."
|
||||
# Manually added targets - related to development
|
||||
# ----------------------------------------------------------------------------
|
||||
|
||||
htmlhelp:
|
||||
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
|
||||
@echo
|
||||
@echo "Build finished; now you can run HTML Help Workshop with the" \
|
||||
".hhp project file in $(BUILDDIR)/htmlhelp."
|
||||
|
||||
qthelp:
|
||||
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
|
||||
@echo
|
||||
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
|
||||
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
|
||||
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/JupyterHub.qhcp"
|
||||
@echo "To view the help file:"
|
||||
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/JupyterHub.qhc"
|
||||
|
||||
applehelp:
|
||||
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
|
||||
@echo
|
||||
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
|
||||
@echo "N.B. You won't be able to view it unless you put it in" \
|
||||
"~/Library/Documentation/Help or install it in your application" \
|
||||
"bundle."
|
||||
|
||||
devhelp:
|
||||
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
|
||||
@echo
|
||||
@echo "Build finished."
|
||||
@echo "To view the help file:"
|
||||
@echo "# mkdir -p $$HOME/.local/share/devhelp/JupyterHub"
|
||||
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/JupyterHub"
|
||||
@echo "# devhelp"
|
||||
|
||||
epub:
|
||||
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
|
||||
@echo
|
||||
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
|
||||
|
||||
latex:
|
||||
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
|
||||
@echo
|
||||
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
|
||||
@echo "Run \`make' in that directory to run these through (pdf)latex" \
|
||||
"(use \`make latexpdf' here to do that automatically)."
|
||||
|
||||
latexpdf:
|
||||
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
|
||||
@echo "Running LaTeX files through pdflatex..."
|
||||
$(MAKE) -C $(BUILDDIR)/latex all-pdf
|
||||
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
|
||||
|
||||
latexpdfja:
|
||||
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
|
||||
@echo "Running LaTeX files through platex and dvipdfmx..."
|
||||
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
|
||||
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
|
||||
|
||||
text:
|
||||
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
|
||||
@echo
|
||||
@echo "Build finished. The text files are in $(BUILDDIR)/text."
|
||||
|
||||
man:
|
||||
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
|
||||
@echo
|
||||
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
|
||||
|
||||
texinfo:
|
||||
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
|
||||
@echo
|
||||
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
|
||||
@echo "Run \`make' in that directory to run these through makeinfo" \
|
||||
"(use \`make info' here to do that automatically)."
|
||||
|
||||
info:
|
||||
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
|
||||
@echo "Running Texinfo files through makeinfo..."
|
||||
make -C $(BUILDDIR)/texinfo info
|
||||
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
|
||||
|
||||
gettext:
|
||||
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
|
||||
@echo
|
||||
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
|
||||
|
||||
changes:
|
||||
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
|
||||
@echo
|
||||
@echo "The overview file is in $(BUILDDIR)/changes."
|
||||
|
||||
linkcheck:
|
||||
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
|
||||
@echo
|
||||
@echo "Link check complete; look for any errors in the above output " \
|
||||
"or in $(BUILDDIR)/linkcheck/output.txt."
|
||||
|
||||
spelling:
|
||||
$(SPHINXBUILD) -b spelling $(ALLSPHINXOPTS) $(BUILDDIR)/spelling
|
||||
@echo
|
||||
@echo "Spell check complete; look for any errors in the above output " \
|
||||
"or in $(BUILDDIR)/spelling/output.txt."
|
||||
doctest:
|
||||
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
|
||||
@echo "Testing of doctests in the sources finished, look at the " \
|
||||
"results in $(BUILDDIR)/doctest/output.txt."
|
||||
|
||||
coverage:
|
||||
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
|
||||
@echo "Testing of coverage in the sources finished, look at the " \
|
||||
"results in $(BUILDDIR)/coverage/python.txt."
|
||||
|
||||
xml:
|
||||
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
|
||||
@echo
|
||||
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
|
||||
|
||||
pseudoxml:
|
||||
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
|
||||
@echo
|
||||
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
|
||||
# For local development:
|
||||
# - requires sphinx-autobuild, see
|
||||
# https://sphinxcontrib-spelling.readthedocs.io/en/latest/
|
||||
# - builds and rebuilds html on changes to source, but does not re-generate
|
||||
# metrics/scopes files
|
||||
# - starts a livereload enabled webserver and opens up a browser
|
||||
devenv: html
|
||||
sphinx-autobuild -b html --open-browser "$(SOURCEDIR)" "$(BUILDDIR)/html"
|
||||
|
@@ -1,8 +1,6 @@
 import os
-from os.path import join

-from pytablewriter import RstSimpleTableWriter
-from pytablewriter.style import Style
+from pytablewriter import MarkdownTableWriter

 import jupyterhub.metrics

@@ -12,12 +10,11 @@ HERE = os.path.abspath(os.path.dirname(__file__))
 class Generator:
     @classmethod
     def create_writer(cls, table_name, headers, values):
-        writer = RstSimpleTableWriter()
+        writer = MarkdownTableWriter()
         writer.table_name = table_name
         writer.headers = headers
         writer.value_matrix = values
         writer.margin = 1
-        [writer.set_style(header, Style(align="center")) for header in headers]
         return writer

     def _parse_metrics(self):
@@ -34,18 +31,17 @@ class Generator:
         if not os.path.exists(generated_directory):
             os.makedirs(generated_directory)

-        filename = f"{generated_directory}/metrics.rst"
+        filename = f"{generated_directory}/metrics.md"
         table_name = ""
         headers = ["Type", "Name", "Description"]
         values = self._parse_metrics()
         writer = self.create_writer(table_name, headers, values)

-        title = "List of Prometheus Metrics"
-        underline = "============================"
-        content = f"{title}\n{underline}\n{writer.dumps()}"
         with open(filename, 'w') as f:
-            f.write(content)
-        print(f"Generated {filename}.")
+            f.write("# List of Prometheus Metrics\n\n")
+            f.write(writer.dumps())
+            f.write("\n")
+        print(f"Generated {filename}")


 def main():
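The diff above switches the generated metrics table from reStructuredText to Markdown output. A minimal standalone sketch of the pytablewriter API being used (the metric row is made up, not a real JupyterHub metric):

```python
# Minimal pytablewriter example mirroring the writer setup above;
# the metric row is illustrative only.
from pytablewriter import MarkdownTableWriter

writer = MarkdownTableWriter()
writer.table_name = ""
writer.headers = ["Type", "Name", "Description"]
writer.value_matrix = [["counter", "example_requests_total", "An illustrative counter"]]
writer.margin = 1
print(writer.dumps())
```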
docs/make.bat
@@ -1,263 +1,49 @@
|
||||
@ECHO OFF
|
||||
|
||||
pushd %~dp0
|
||||
|
||||
REM Command file for Sphinx documentation
|
||||
|
||||
if "%SPHINXBUILD%" == "" (
|
||||
set SPHINXBUILD=--color -W --keep-going
|
||||
)
|
||||
if "%SPHINXBUILD%" == "" (
|
||||
set SPHINXBUILD=sphinx-build
|
||||
)
|
||||
set BUILDDIR=build
|
||||
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source
|
||||
set I18NSPHINXOPTS=%SPHINXOPTS% source
|
||||
if NOT "%PAPER%" == "" (
|
||||
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
|
||||
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
|
||||
)
|
||||
set SOURCEDIR=source
|
||||
set BUILDDIR=_build
|
||||
|
||||
if "%1" == "" goto help
|
||||
|
||||
if "%1" == "help" (
|
||||
:help
|
||||
echo.Please use `make ^<target^>` where ^<target^> is one of
|
||||
echo. html to make standalone HTML files
|
||||
echo. dirhtml to make HTML files named index.html in directories
|
||||
echo. singlehtml to make a single large HTML file
|
||||
echo. pickle to make pickle files
|
||||
echo. json to make JSON files
|
||||
echo. htmlhelp to make HTML files and a HTML help project
|
||||
echo. qthelp to make HTML files and a qthelp project
|
||||
echo. devhelp to make HTML files and a Devhelp project
|
||||
echo. epub to make an epub
|
||||
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
|
||||
echo. text to make text files
|
||||
echo. man to make manual pages
|
||||
echo. texinfo to make Texinfo files
|
||||
echo. gettext to make PO message catalogs
|
||||
echo. changes to make an overview over all changed/added/deprecated items
|
||||
echo. xml to make Docutils-native XML files
|
||||
echo. pseudoxml to make pseudoxml-XML files for display purposes
|
||||
echo. linkcheck to check all external links for integrity
|
||||
echo. doctest to run all doctests embedded in the documentation if enabled
|
||||
echo. coverage to run coverage check of the documentation if enabled
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "clean" (
|
||||
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
|
||||
del /q /s %BUILDDIR%\*
|
||||
goto end
|
||||
)
|
||||
if "%1" == "devenv" goto devenv
|
||||
goto default
|
||||
|
||||
|
||||
REM Check if sphinx-build is available and fallback to Python version if any
|
||||
%SPHINXBUILD% 1>NUL 2>NUL
|
||||
if errorlevel 9009 goto sphinx_python
|
||||
goto sphinx_ok
|
||||
|
||||
:sphinx_python
|
||||
|
||||
set SPHINXBUILD=python -m sphinx.__init__
|
||||
%SPHINXBUILD% 2> nul
|
||||
:default
|
||||
%SPHINXBUILD% >NUL 2>NUL
|
||||
if errorlevel 9009 (
|
||||
echo.
|
||||
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
|
||||
echo.installed, then set the SPHINXBUILD environment variable to point
|
||||
echo.to the full path of the 'sphinx-build' executable. Alternatively you
|
||||
echo.may add the Sphinx directory to PATH.
|
||||
echo.
|
||||
echo.If you don't have Sphinx installed, grab it from
|
||||
echo.http://sphinx-doc.org/
|
||||
echo.The 'sphinx-build' command was not found. Open and read README.md!
|
||||
exit /b 1
|
||||
)
|
||||
|
||||
:sphinx_ok
|
||||
%SPHINXBUILD% -M %1 "%SOURCEDIR%" "%BUILDDIR%" %SPHINXOPTS%
|
||||
goto end
|
||||
|
||||
|
||||
if "%1" == "html" (
|
||||
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
|
||||
if errorlevel 1 exit /b 1
|
||||
:help
|
||||
%SPHINXBUILD% -M help "%SOURCEDIR%" "%BUILDDIR%" %SPHINXOPTS%
|
||||
goto end
|
||||
|
||||
|
||||
:devenv
|
||||
sphinx-autobuild >NUL 2>NUL
|
||||
if errorlevel 9009 (
|
||||
echo.
|
||||
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
|
||||
goto end
|
||||
echo.The 'sphinx-autobuild' command was not found. Open and read README.md!
|
||||
exit /b 1
|
||||
)
|
||||
sphinx-autobuild -b html --open-browser "%SOURCEDIR%" "%BUILDDIR%/html"
|
||||
goto end
|
||||
|
||||
if "%1" == "dirhtml" (
|
||||
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "singlehtml" (
|
||||
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "pickle" (
|
||||
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished; now you can process the pickle files.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "json" (
|
||||
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished; now you can process the JSON files.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "htmlhelp" (
|
||||
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished; now you can run HTML Help Workshop with the ^
|
||||
.hhp project file in %BUILDDIR%/htmlhelp.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "qthelp" (
|
||||
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished; now you can run "qcollectiongenerator" with the ^
|
||||
.qhcp project file in %BUILDDIR%/qthelp, like this:
|
||||
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\JupyterHub.qhcp
|
||||
echo.To view the help file:
|
||||
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\JupyterHub.ghc
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "devhelp" (
|
||||
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "epub" (
|
||||
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The epub file is in %BUILDDIR%/epub.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "latex" (
|
||||
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "latexpdf" (
|
||||
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
|
||||
cd %BUILDDIR%/latex
|
||||
make all-pdf
|
||||
cd %~dp0
|
||||
echo.
|
||||
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "latexpdfja" (
|
||||
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
|
||||
cd %BUILDDIR%/latex
|
||||
make all-pdf-ja
|
||||
cd %~dp0
|
||||
echo.
|
||||
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "text" (
|
||||
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The text files are in %BUILDDIR%/text.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "man" (
|
||||
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The manual pages are in %BUILDDIR%/man.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "texinfo" (
|
||||
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "gettext" (
|
||||
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "changes" (
|
||||
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.The overview file is in %BUILDDIR%/changes.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "linkcheck" (
|
||||
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Link check complete; look for any errors in the above output ^
|
||||
or in %BUILDDIR%/linkcheck/output.txt.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "doctest" (
|
||||
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Testing of doctests in the sources finished, look at the ^
|
||||
results in %BUILDDIR%/doctest/output.txt.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "coverage" (
|
||||
%SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Testing of coverage in the sources finished, look at the ^
|
||||
results in %BUILDDIR%/coverage/python.txt.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "xml" (
|
||||
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The XML files are in %BUILDDIR%/xml.
|
||||
goto end
|
||||
)
|
||||
|
||||
if "%1" == "pseudoxml" (
|
||||
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
|
||||
if errorlevel 1 exit /b 1
|
||||
echo.
|
||||
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
|
||||
goto end
|
||||
)
|
||||
|
||||
:end
|
||||
popd
|
||||
|
@@ -1,12 +1,14 @@
-r ../requirements.txt

alabaster_jupyterhub
# docs also require jupyterhub itself to be installed
# don't depend on it here, as that often results in a duplicate
# installation of jupyterhub that's already installed
autodoc-traits
myst-parser
jupyterhub-sphinx-theme
myst-parser>=0.19
pre-commit
pydata-sphinx-theme
pytablewriter>=0.56
ruamel.yaml
sphinx>=1.7
sphinx>=4
sphinx-copybutton
sphinx-jsonschema
sphinxext-opengraph
sphinxext-rediraffe
File diff suppressed because it is too large
docs/source/_templates/page.html (new file, 2 lines)
@@ -0,0 +1,2 @@
{%- set _meta = meta | default({}) %}
{%- extends _meta.page_template | default('!page.html') %}
docs/source/_templates/redoc.html (new file, 32 lines)
@@ -0,0 +1,32 @@
{# djlint: off #}
{%- extends "!layout.html" %}
{# not sure why, but theme CSS prevents scrolling within redoc content
# If this were fixed, we could keep the navbar and footer
#}
{% block css %}
{% endblock css %}
{% block docs_navbar %}
{% endblock docs_navbar %}
{% block footer %}
{% endblock footer %}
{%- block body_tag -%}<body>{%- endblock body_tag %}
{%- block extrahead %}
  {{ super() }}
  <link href="{{ pathto('_static/redoc-fonts.css', 1) }}" rel="stylesheet" />
  <script src="{{ pathto('_static/redoc.js', 1) }}"></script>
{%- endblock extrahead %}
{%- block content %}
  <redoc id="redoc-spec"></redoc>
  <script>
    if (location.protocol === "file:") {
      document.body.innerText = "Rendered API specification doesn't work with file: protocol. Use sphinx-autobuild to do local builds of the docs, served over HTTP."
    } else {
      Redoc.init(
        "{{ pathto('_static/rest-api.yml', 1) }}",
        {{ meta.redoc_options | default ({}) }},
        document.getElementById("redoc-spec"),
      );
    }
  </script>
{%- endblock content %}
{# djlint: on #}
@@ -1,37 +0,0 @@
|
||||
# Common log messages emitted by JupyterHub
|
||||
|
||||
When debugging errors and outages, looking at the logs emitted by
|
||||
JupyterHub is very helpful. This document tries to document some common
|
||||
log messages, and what they mean.
|
||||
|
||||
## Failing suspected API request to not-running server
|
||||
|
||||
### Example
|
||||
|
||||
Your logs might be littered with lines that might look slightly scary
|
||||
|
||||
```
|
||||
[W 2022-03-10 17:25:19.774 JupyterHub base:1349] Failing suspected API request to not-running server: /hub/user/<user-name>/api/metrics/v1
|
||||
```
|
||||
|
||||
### Most likely cause
|
||||
|
||||
This likely means is that the user's server has stopped running but they
|
||||
still have a browser tab open. For example, you might have 3 tabs open, and shut
|
||||
your server down via one. Or you closed your laptop, your server was
|
||||
culled for inactivity, and then you reopen your laptop again! The
|
||||
client side code (JupyterLab, Classic Notebook, etc) does not know
|
||||
yet that the server is dead, and continues to make some API requests.
|
||||
JupyterHub's architecture means that the proxy routes all requests that
|
||||
don't go to a running user server to the hub process itself. The hub
|
||||
process then explicitly returns a failure response, so the client knows
|
||||
that the server is not running anymore. This is used by JupyterLab to
|
||||
tell you your server is not running anymore, and offer you the option
|
||||
to let you restart it.
|
||||
|
||||
Most commonly, you'll see this in reference to the `/api/metrics/v1`
|
||||
URL, used by [jupyter-resource-usage](https://github.com/jupyter-server/jupyter-resource-usage).
|
||||
|
||||
### Actions you can take
|
||||
|
||||
This log message is benign, and there is usually no action for you to take.
|
@@ -1,157 +0,0 @@
|
||||
====================
|
||||
Upgrading JupyterHub
|
||||
====================
|
||||
|
||||
JupyterHub offers easy upgrade pathways between minor versions. This
|
||||
document describes how to do these upgrades.
|
||||
|
||||
If you are using :ref:`a JupyterHub distribution <index/distributions>`, you
|
||||
should consult the distribution's documentation on how to upgrade. This
|
||||
document is if you have set up your own JupyterHub without using a
|
||||
distribution.
|
||||
|
||||
It is long because is pretty detailed! Most likely, upgrading
|
||||
JupyterHub is painless, quick and with minimal user interruption.
|
||||
|
||||
Read the Changelog
|
||||
==================
|
||||
|
||||
The `changelog <../changelog.html>`_ contains information on what has
|
||||
changed with the new JupyterHub release, and any deprecation warnings.
|
||||
Read these notes to familiarize yourself with the coming changes. There
|
||||
might be new releases of authenticators & spawners you are using, so
|
||||
read the changelogs for those too!
|
||||
|
||||
Notify your users
|
||||
=================
|
||||
|
||||
If you are using the default configuration where ``configurable-http-proxy``
|
||||
is managed by JupyterHub, your users will see service disruption during
|
||||
the upgrade process. You should notify them, and pick a time to do the
|
||||
upgrade where they will be least disrupted.
|
||||
|
||||
If you are using a different proxy, or running ``configurable-http-proxy``
|
||||
independent of JupyterHub, your users will be able to continue using notebook
|
||||
servers they had already launched, but will not be able to launch new servers
|
||||
nor sign in.
|
||||
|
||||
|
||||
Backup database & config
|
||||
========================
|
||||
|
||||
Before doing an upgrade, it is critical to back up:
|
||||
|
||||
#. Your JupyterHub database (sqlite by default, or MySQL / Postgres
|
||||
if you used those). If you are using sqlite (the default), you
|
||||
should backup the ``jupyterhub.sqlite`` file.
|
||||
#. Your ``jupyterhub_config.py`` file.
|
||||
#. Your user's home directories. This is unlikely to be affected directly by
|
||||
a JupyterHub upgrade, but we recommend a backup since user data is very
|
||||
critical.
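For the SQLite default mentioned in the first item, a minimal backup sketch (not from the deleted document; the file names are the defaults and may differ in your deployment):

```python
# Minimal sketch of backing up the default SQLite database and the config file
# before an upgrade, using sqlite's online backup API for a consistent copy.
import shutil
import sqlite3

src = sqlite3.connect("jupyterhub.sqlite")
dst = sqlite3.connect("jupyterhub.sqlite.backup")
with dst:
    src.backup(dst)  # consistent copy even if the hub wrote to the DB recently
src.close()
dst.close()

shutil.copy2("jupyterhub_config.py", "jupyterhub_config.py.backup")
```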
|
||||
|
||||
|
||||
Shutdown JupyterHub
|
||||
===================
|
||||
|
||||
Shutdown the JupyterHub process. This would vary depending on how you
|
||||
have set up JupyterHub to run. Most likely, it is using a process
|
||||
supervisor of some sort (``systemd`` or ``supervisord`` or even ``docker``).
|
||||
Use the supervisor specific command to stop the JupyterHub process.
|
||||
|
||||
Upgrade JupyterHub packages
|
||||
===========================
|
||||
|
||||
There are two environments where the ``jupyterhub`` package is installed:
|
||||
|
||||
#. The *hub environment*, which is where the JupyterHub server process
|
||||
runs. This is started with the ``jupyterhub`` command, and is what
|
||||
people generally think of as JupyterHub.
|
||||
|
||||
#. The *notebook user environments*. This is where the user notebook
|
||||
servers are launched from, and is probably custom to your own
|
||||
installation. This could be just one environment (different from the
|
||||
hub environment) that is shared by all users, one environment
|
||||
per user, or same environment as the hub environment. The hub
|
||||
launched the ``jupyterhub-singleuser`` command in this environment,
|
||||
which in turn starts the notebook server.
|
||||
|
||||
You need to make sure the version of the ``jupyterhub`` package matches
|
||||
in both these environments. If you installed ``jupyterhub`` with pip,
|
||||
you can upgrade it with:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
python3 -m pip install --upgrade jupyterhub==<version>
|
||||
|
||||
Where ``<version>`` is the version of JupyterHub you are upgrading to.
|
||||
|
||||
If you used ``conda`` to install ``jupyterhub``, you should upgrade it
|
||||
with:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
conda install -c conda-forge jupyterhub==<version>
|
||||
|
||||
Where ``<version>`` is the version of JupyterHub you are upgrading to.
|
||||
|
||||
You should also check for new releases of the authenticator & spawner you
|
||||
are using. You might wish to upgrade those packages too along with JupyterHub,
|
||||
or upgrade them separately.
|
||||
|
||||
Upgrade JupyterHub database
|
||||
===========================
|
||||
|
||||
Once new packages are installed, you need to upgrade the JupyterHub
|
||||
database. From the hub environment, in the same directory as your
|
||||
``jupyterhub_config.py`` file, you should run:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
jupyterhub upgrade-db
|
||||
|
||||
This should find the location of your database, and run necessary upgrades
|
||||
for it.
|
||||
|
||||
SQLite database disadvantages
|
||||
-----------------------------
|
||||
|
||||
SQLite has some disadvantages when it comes to upgrading JupyterHub. These
|
||||
are:
|
||||
|
||||
- ``upgrade-db`` may not work, and you may need delete your database
|
||||
and start with a fresh one.
|
||||
- ``downgrade-db`` **will not** work if you want to rollback to an
|
||||
earlier version, so backup the ``jupyterhub.sqlite`` file before
|
||||
upgrading
|
||||
|
||||
What happens if I delete my database?
|
||||
-------------------------------------
|
||||
|
||||
Losing the Hub database is often not a big deal. Information that
|
||||
resides only in the Hub database includes:
|
||||
|
||||
- active login tokens (user cookies, service tokens)
|
||||
- users added via JupyterHub UI, instead of config files
|
||||
- info about running servers
|
||||
|
||||
If the following conditions are true, you should be fine clearing the
|
||||
Hub database and starting over:
|
||||
|
||||
- users specified in config file, or login using an external
|
||||
authentication provider (Google, GitHub, LDAP, etc)
|
||||
- user servers are stopped during upgrade
|
||||
- don't mind causing users to login again after upgrade
|
||||
|
||||
Start JupyterHub
|
||||
================
|
||||
|
||||
Once the database upgrade is completed, start the ``jupyterhub``
|
||||
process again.
|
||||
|
||||
#. Log-in and start the server to make sure things work as
|
||||
expected.
|
||||
#. Check the logs for any errors or deprecation warnings. You
|
||||
might have to update your ``jupyterhub_config.py`` file to
|
||||
deal with any deprecated options.
|
||||
|
||||
Congratulations, your JupyterHub has been upgraded!
|
@@ -1,15 +0,0 @@
|
||||
=========================
|
||||
Application configuration
|
||||
=========================
|
||||
|
||||
Module: :mod:`jupyterhub.app`
|
||||
=============================
|
||||
|
||||
.. automodule:: jupyterhub.app
|
||||
|
||||
.. currentmodule:: jupyterhub.app
|
||||
|
||||
:class:`JupyterHub`
|
||||
-------------------
|
||||
|
||||
.. autoconfigurable:: JupyterHub
|
@@ -1,32 +0,0 @@
|
||||
==============
|
||||
Authenticators
|
||||
==============
|
||||
|
||||
Module: :mod:`jupyterhub.auth`
|
||||
==============================
|
||||
|
||||
.. automodule:: jupyterhub.auth
|
||||
|
||||
.. currentmodule:: jupyterhub.auth
|
||||
|
||||
:class:`Authenticator`
|
||||
----------------------
|
||||
|
||||
.. autoconfigurable:: Authenticator
|
||||
:members:
|
||||
|
||||
:class:`LocalAuthenticator`
|
||||
---------------------------
|
||||
|
||||
.. autoconfigurable:: LocalAuthenticator
|
||||
:members:
|
||||
|
||||
:class:`PAMAuthenticator`
|
||||
-------------------------
|
||||
|
||||
.. autoconfigurable:: PAMAuthenticator
|
||||
|
||||
:class:`DummyAuthenticator`
|
||||
---------------------------
|
||||
|
||||
.. autoconfigurable:: DummyAuthenticator
|
@@ -1,33 +0,0 @@
|
||||
.. _api-index:
|
||||
|
||||
##############
|
||||
JupyterHub API
|
||||
##############
|
||||
|
||||
:Release: |release|
|
||||
:Date: |today|
|
||||
|
||||
JupyterHub also provides a REST API for administration of the Hub and users.
|
||||
The documentation on `Using JupyterHub's REST API <../reference/rest.html>`_ provides
|
||||
information on:
|
||||
|
||||
- what you can do with the API
|
||||
- creating an API token
|
||||
- adding API tokens to the config files
|
||||
- making an API request programmatically using the requests library
|
||||
- learning more about JupyterHub's API
|
||||
|
||||
JupyterHub API Reference:
|
||||
|
||||
.. toctree::
|
||||
|
||||
app
|
||||
auth
|
||||
spawner
|
||||
proxy
|
||||
user
|
||||
service
|
||||
services.auth
|
||||
|
||||
|
||||
.. _OpenAPI Initiative: https://www.openapis.org/
|
@@ -1,22 +0,0 @@
|
||||
=======
|
||||
Proxies
|
||||
=======
|
||||
|
||||
Module: :mod:`jupyterhub.proxy`
|
||||
===============================
|
||||
|
||||
.. automodule:: jupyterhub.proxy
|
||||
|
||||
.. currentmodule:: jupyterhub.proxy
|
||||
|
||||
:class:`Proxy`
|
||||
--------------
|
||||
|
||||
.. autoconfigurable:: Proxy
|
||||
:members:
|
||||
|
||||
:class:`ConfigurableHTTPProxy`
|
||||
------------------------------
|
||||
|
||||
.. autoconfigurable:: ConfigurableHTTPProxy
|
||||
:members: debug, auth_token, check_running_interval, api_url, command
|
@@ -1,16 +0,0 @@
|
||||
========
|
||||
Services
|
||||
========
|
||||
|
||||
Module: :mod:`jupyterhub.services.service`
|
||||
==========================================
|
||||
|
||||
.. automodule:: jupyterhub.services.service
|
||||
|
||||
.. currentmodule:: jupyterhub.services.service
|
||||
|
||||
:class:`Service`
|
||||
----------------
|
||||
|
||||
.. autoconfigurable:: Service
|
||||
:members: name, admin, url, api_token, managed, kind, command, cwd, environment, user, oauth_client_id, server, prefix, proxy_spec
|
@@ -1,40 +0,0 @@
|
||||
=======================
|
||||
Services Authentication
|
||||
=======================
|
||||
|
||||
Module: :mod:`jupyterhub.services.auth`
|
||||
=======================================
|
||||
|
||||
.. automodule:: jupyterhub.services.auth
|
||||
|
||||
.. currentmodule:: jupyterhub.services.auth
|
||||
|
||||
|
||||
:class:`HubAuth`
|
||||
----------------
|
||||
|
||||
.. autoconfigurable:: HubAuth
|
||||
:members:
|
||||
|
||||
:class:`HubOAuth`
|
||||
-----------------
|
||||
|
||||
.. autoconfigurable:: HubOAuth
|
||||
:members:
|
||||
|
||||
|
||||
:class:`HubAuthenticated`
|
||||
-------------------------
|
||||
|
||||
.. autoclass:: HubAuthenticated
|
||||
:members:
|
||||
|
||||
:class:`HubOAuthenticated`
|
||||
--------------------------
|
||||
|
||||
.. autoclass:: HubOAuthenticated
|
||||
|
||||
:class:`HubOAuthCallbackHandler`
|
||||
--------------------------------
|
||||
|
||||
.. autoclass:: HubOAuthCallbackHandler
|
@@ -1,21 +0,0 @@
|
||||
========
|
||||
Spawners
|
||||
========
|
||||
|
||||
Module: :mod:`jupyterhub.spawner`
|
||||
=================================
|
||||
|
||||
.. automodule:: jupyterhub.spawner
|
||||
|
||||
.. currentmodule:: jupyterhub.spawner
|
||||
|
||||
:class:`Spawner`
|
||||
----------------
|
||||
|
||||
.. autoconfigurable:: Spawner
|
||||
:members: options_from_form, poll, start, stop, get_args, get_env, get_state, template_namespace, format_string, create_certs, move_certs
|
||||
|
||||
:class:`LocalProcessSpawner`
|
||||
----------------------------
|
||||
|
||||
.. autoconfigurable:: LocalProcessSpawner
|
@@ -1,36 +0,0 @@
|
||||
=====
|
||||
Users
|
||||
=====
|
||||
|
||||
Module: :mod:`jupyterhub.user`
|
||||
==============================
|
||||
|
||||
.. automodule:: jupyterhub.user
|
||||
|
||||
.. currentmodule:: jupyterhub.user
|
||||
|
||||
:class:`UserDict`
|
||||
-----------------
|
||||
|
||||
.. autoclass:: UserDict
|
||||
:members:
|
||||
|
||||
|
||||
:class:`User`
|
||||
-------------
|
||||
|
||||
.. autoclass:: User
|
||||
:members: escaped_name
|
||||
|
||||
.. attribute:: name
|
||||
|
||||
The user's name
|
||||
|
||||
.. attribute:: server
|
||||
|
||||
The user's Server data object if running, None otherwise.
|
||||
Has ``ip``, ``port`` attributes.
|
||||
|
||||
.. attribute:: spawner
|
||||
|
||||
The user's :class:`~.Spawner` instance.
|
File diff suppressed because one or more lines are too long
@@ -1,70 +1,92 @@
|
||||
# Configuration file for Sphinx to build our documentation to HTML.
|
||||
#
|
||||
# Configuration reference: https://www.sphinx-doc.org/en/master/usage/configuration.html
|
||||
#
|
||||
import contextlib
|
||||
import datetime
|
||||
import io
|
||||
import os
|
||||
import sys
|
||||
import re
|
||||
import subprocess
|
||||
from pathlib import Path
|
||||
from urllib.request import urlretrieve
|
||||
|
||||
# Set paths
|
||||
sys.path.insert(0, os.path.abspath('.'))
|
||||
|
||||
# -- General configuration ------------------------------------------------
|
||||
|
||||
# Minimal Sphinx version
|
||||
needs_sphinx = '1.4'
|
||||
|
||||
# Sphinx extension modules
|
||||
extensions = [
|
||||
'sphinx.ext.autodoc',
|
||||
'sphinx.ext.intersphinx',
|
||||
'sphinx.ext.napoleon',
|
||||
'autodoc_traits',
|
||||
'sphinx_copybutton',
|
||||
'sphinx-jsonschema',
|
||||
'myst_parser',
|
||||
]
|
||||
|
||||
myst_heading_anchors = 2
|
||||
myst_enable_extensions = [
|
||||
'colon_fence',
|
||||
'deflist',
|
||||
]
|
||||
# The master toctree document.
|
||||
master_doc = 'index'
|
||||
|
||||
# General information about the project.
|
||||
project = 'JupyterHub'
|
||||
copyright = '2016, Project Jupyter team'
|
||||
author = 'Project Jupyter team'
|
||||
|
||||
# Autopopulate version
|
||||
from os.path import dirname
|
||||
|
||||
docs = dirname(dirname(__file__))
|
||||
root = dirname(docs)
|
||||
sys.path.insert(0, root)
|
||||
from docutils import nodes
|
||||
from ruamel.yaml import YAML
|
||||
from sphinx.directives.other import SphinxDirective
|
||||
from sphinx.util import logging
|
||||
|
||||
import jupyterhub
|
||||
|
||||
# The short X.Y version.
|
||||
version = '%i.%i' % jupyterhub.version_info[:2]
|
||||
# The full version, including alpha/beta/rc tags.
|
||||
release = jupyterhub.__version__
|
||||
|
||||
language = None
|
||||
exclude_patterns = []
|
||||
pygments_style = 'sphinx'
|
||||
todo_include_todos = False
|
||||
|
||||
# Set the default role so we can use `foo` instead of ``foo``
|
||||
default_role = 'literal'
|
||||
|
||||
# -- Config -------------------------------------------------------------
|
||||
from jupyterhub.app import JupyterHub
|
||||
from docutils import nodes
|
||||
from sphinx.directives.other import SphinxDirective
|
||||
from contextlib import redirect_stdout
|
||||
from io import StringIO
|
||||
|
||||
# create a temp instance of JupyterHub just to get the output of the generate-config
|
||||
# and help --all commands.
|
||||
logger = logging.getLogger(__name__)
|
||||
# -- Project information -----------------------------------------------------
|
||||
# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
|
||||
#
|
||||
project = "JupyterHub"
|
||||
author = "Project Jupyter Contributors"
|
||||
copyright = f"{datetime.date.today().year}, {author}"
|
||||
|
||||
|
||||
# -- General Sphinx configuration --------------------------------------------
|
||||
# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
|
||||
#
|
||||
extensions = [
|
||||
"sphinx.ext.autodoc",
|
||||
"sphinx.ext.intersphinx",
|
||||
"sphinx.ext.napoleon",
|
||||
"autodoc_traits",
|
||||
"sphinx_copybutton",
|
||||
"sphinx-jsonschema",
|
||||
"sphinxext.opengraph",
|
||||
"sphinxext.rediraffe",
|
||||
"jupyterhub_sphinx_theme",
|
||||
"myst_parser",
|
||||
]
|
||||
root_doc = "index"
|
||||
source_suffix = [".md"]
|
||||
# default_role let's use use `foo` instead of ``foo`` in rST
|
||||
default_role = "literal"
|
||||
|
||||
docs = Path(__file__).parent.parent.absolute()
|
||||
docs_source = docs / "source"
|
||||
rest_api_yaml = docs_source / "_static" / "rest-api.yml"
|
||||
|
||||
|
||||
# -- MyST configuration ------------------------------------------------------
|
||||
# ref: https://myst-parser.readthedocs.io/en/latest/configuration.html
|
||||
#
|
||||
myst_heading_anchors = 2
|
||||
|
||||
myst_enable_extensions = [
|
||||
# available extensions: https://myst-parser.readthedocs.io/en/latest/syntax/optional.html
|
||||
"attrs_inline",
|
||||
"colon_fence",
|
||||
"deflist",
|
||||
"fieldlist",
|
||||
"substitution",
|
||||
]
|
||||
|
||||
myst_substitutions = {
|
||||
# date example: Dev 07, 2022
|
||||
"date": datetime.date.today().strftime("%b %d, %Y").title(),
|
||||
"node_min": "12",
|
||||
"python_min": "3.8",
|
||||
"version": jupyterhub.__version__,
|
||||
}
|
||||
|
||||
|
||||
# -- Custom directives to generate documentation -----------------------------
|
||||
# ref: https://myst-parser.readthedocs.io/en/latest/syntax/roles-and-directives.html
|
||||
#
|
||||
# We define custom directives to help us generate documentation using Python on
|
||||
# demand when referenced from our documentation files.
|
||||
#
|
||||
|
||||
# Create a temp instance of JupyterHub for use by two separate directive classes
|
||||
# to get the output from using the "--generate-config" and "--help-all" CLI
|
||||
# flags respectively.
|
||||
#
|
||||
jupyterhub_app = JupyterHub()
|
||||
|
||||
|
||||
@@ -81,8 +103,8 @@ class ConfigDirective(SphinxDirective):
|
||||
# The generated configuration file for this version
|
||||
generated_config = jupyterhub_app.generate_config_file()
|
||||
# post-process output
|
||||
home_dir = os.environ['HOME']
|
||||
generated_config = generated_config.replace(home_dir, '$HOME', 1)
|
||||
home_dir = os.environ["HOME"]
|
||||
generated_config = generated_config.replace(home_dir, "$HOME", 1)
|
||||
par = nodes.literal_block(text=generated_config)
|
||||
return [par]
|
||||
|
||||
@@ -98,160 +120,234 @@ class HelpAllDirective(SphinxDirective):
|
||||
|
||||
def run(self):
|
||||
# The output of the help command for this version
|
||||
buffer = StringIO()
|
||||
with redirect_stdout(buffer):
|
||||
jupyterhub_app.print_help('--help-all')
|
||||
buffer = io.StringIO()
|
||||
with contextlib.redirect_stdout(buffer):
|
||||
jupyterhub_app.print_help("--help-all")
|
||||
all_help = buffer.getvalue()
|
||||
# post-process output
|
||||
home_dir = os.environ['HOME']
|
||||
all_help = all_help.replace(home_dir, '$HOME', 1)
|
||||
home_dir = os.environ["HOME"]
|
||||
all_help = all_help.replace(home_dir, "$HOME", 1)
|
||||
par = nodes.literal_block(text=all_help)
|
||||
return [par]
|
||||
|
||||
|
||||
class RestAPILinksDirective(SphinxDirective):
|
||||
"""Directive to populate link targets for the REST API
|
||||
|
||||
The resulting nodes resolve xref targets,
|
||||
but are not actually rendered in the final result
|
||||
which is handled by a custom template.
|
||||
"""
|
||||
|
||||
has_content = False
|
||||
required_arguments = 0
|
||||
optional_arguments = 0
|
||||
final_argument_whitespace = False
|
||||
option_spec = {}
|
||||
|
||||
def run(self):
|
||||
targets = []
|
||||
yaml = YAML(typ="safe")
|
||||
with rest_api_yaml.open() as f:
|
||||
api = yaml.load(f)
|
||||
for path, path_spec in api["paths"].items():
|
||||
for method, operation in path_spec.items():
|
||||
operation_id = operation.get("operationId")
|
||||
if not operation_id:
|
||||
logger.warning(f"No operation id for {method} {path}")
|
||||
continue
|
||||
# 'id' is the id on the page (must match redoc anchor)
|
||||
# 'name' is the name of the ref for use in our documents
|
||||
target = nodes.target(
|
||||
ids=[f"operation/{operation_id}"],
|
||||
names=[f"rest-api-{operation_id}"],
|
||||
)
|
||||
targets.append(target)
|
||||
self.state.document.note_explicit_target(target, target)
|
||||
|
||||
return targets
|
||||
|
||||
|
||||
templates_path = ["_templates"]
|
||||
|
||||
|
||||
def stage_redoc_js(app, exception):
|
||||
"""Download redoc.js to our static files"""
|
||||
if app.builder.name != "html":
|
||||
logger.info(f"Skipping redoc download for builder: {app.builder.name}")
|
||||
return
|
||||
|
||||
out_static = Path(app.builder.outdir) / "_static"
|
||||
|
||||
redoc_version = "2.1.3"
|
||||
redoc_url = (
|
||||
f"https://cdn.redoc.ly/redoc/v{redoc_version}/bundles/redoc.standalone.js"
|
||||
)
|
||||
dest = out_static / "redoc.js"
|
||||
if not dest.exists():
|
||||
logger.info(f"Downloading {redoc_url} -> {dest}")
|
||||
urlretrieve(redoc_url, dest)
|
||||
|
||||
# stage fonts for redoc from google fonts
|
||||
fonts_css_url = "https://fonts.googleapis.com/css?family=Montserrat:300,400,700|Roboto:300,400,700"
|
||||
fonts_css_file = out_static / "redoc-fonts.css"
|
||||
fonts_dir = out_static / "fonts"
|
||||
fonts_dir.mkdir(exist_ok=True)
|
||||
if not fonts_css_file.exists():
|
||||
logger.info(f"Downloading {fonts_css_url} -> {fonts_css_file}")
|
||||
urlretrieve(fonts_css_url, fonts_css_file)
|
||||
|
||||
# For each font external font URL,
|
||||
# download the font and rewrite to a local URL
|
||||
# The downloaded TTF fonts have license info in their metadata
|
||||
with open(fonts_css_file) as f:
|
||||
fonts_css = f.read()
|
||||
|
||||
fonts_css_changed = False
|
||||
for font_url in re.findall(r'url\((https?[^\)]+)\)', fonts_css):
|
||||
fonts_css_changed = True
|
||||
filename = font_url.rpartition("/")[-1]
|
||||
dest = fonts_dir / filename
|
||||
local_url = str(dest.relative_to(fonts_css_file.parent))
|
||||
fonts_css = fonts_css.replace(font_url, local_url)
|
||||
if not dest.exists():
|
||||
logger.info(f"Downloading {font_url} -> {dest}")
|
||||
urlretrieve(font_url, dest)
|
||||
|
||||
if fonts_css_changed:
|
||||
# rewrite font css with local URLs
|
||||
with open(fonts_css_file, "w") as f:
|
||||
logger.info(f"Rewriting URLs in {fonts_css_file}")
|
||||
f.write(fonts_css)
|
||||
|
||||
|
||||
def setup(app):
|
||||
app.add_css_file('custom.css')
|
||||
app.add_directive('jupyterhub-generate-config', ConfigDirective)
|
||||
app.add_directive('jupyterhub-help-all', HelpAllDirective)
|
||||
app.connect("build-finished", stage_redoc_js)
|
||||
app.add_css_file("custom.css")
|
||||
app.add_directive("jupyterhub-generate-config", ConfigDirective)
|
||||
app.add_directive("jupyterhub-help-all", HelpAllDirective)
|
||||
app.add_directive("jupyterhub-rest-api-links", RestAPILinksDirective)
|
||||
|
||||
|
||||
source_suffix = ['.rst', '.md']
|
||||
# source_encoding = 'utf-8-sig'
|
||||
# -- Read The Docs -----------------------------------------------------------
|
||||
#
|
||||
# Since RTD runs sphinx-build directly without running "make html", we run the
|
||||
# pre-requisite steps for "make html" from here if needed.
|
||||
#
|
||||
if os.environ.get("READTHEDOCS"):
|
||||
subprocess.check_call(["make", "metrics", "scopes"], cwd=str(docs))
|
||||
|
||||
# -- Options for HTML output ----------------------------------------------
|
||||
|
||||
# The theme to use for HTML and HTML Help pages.
|
||||
html_theme = 'pydata_sphinx_theme'
|
||||
# -- Spell checking ----------------------------------------------------------
|
||||
# ref: https://sphinxcontrib-spelling.readthedocs.io/en/latest/customize.html#configuration-options
|
||||
#
|
||||
# The "sphinxcontrib.spelling" extension is optionally enabled if its available.
|
||||
#
|
||||
try:
|
||||
import sphinxcontrib.spelling # noqa
|
||||
except ImportError:
|
||||
pass
|
||||
else:
|
||||
extensions.append("sphinxcontrib.spelling")
|
||||
spelling_word_list_filename = "spelling_wordlist.txt"
|
||||
|
||||
html_logo = '_static/images/logo/logo.png'
|
||||
html_favicon = '_static/images/logo/favicon.ico'
|
||||
|
||||
# Paths that contain custom static files (such as style sheets)
|
||||
html_static_path = ['_static']
|
||||
|
||||
htmlhelp_basename = 'JupyterHubdoc'
|
||||
# -- Options for HTML output -------------------------------------------------
|
||||
# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
|
||||
#
|
||||
html_logo = "_static/images/logo/logo.png"
|
||||
html_favicon = "_static/images/logo/favicon.ico"
|
||||
html_static_path = ["_static"]
|
||||
|
||||
html_theme = "jupyterhub_sphinx_theme"
|
||||
html_theme_options = {
|
||||
"icon_links": [
|
||||
{
|
||||
"name": "GitHub",
|
||||
"url": "https://github.com/jupyterhub/jupyterhub",
|
||||
"icon": "fab fa-github-square",
|
||||
},
|
||||
{
|
||||
"name": "Discourse",
|
||||
"url": "https://discourse.jupyter.org/c/jupyterhub/10",
|
||||
"icon": "fab fa-discourse",
|
||||
"icon": "fa-brands fa-github",
|
||||
},
|
||||
],
|
||||
"use_edit_page_button": True,
|
||||
"navbar_align": "left",
|
||||
}
|
||||
|
||||
html_context = {
|
||||
"github_user": "jupyterhub",
|
||||
"github_repo": "jupyterhub",
|
||||
"github_version": "main",
|
||||
"doc_path": "docs",
|
||||
"doc_path": "docs/source",
|
||||
}
|
||||
|
||||
# -- Options for LaTeX output ---------------------------------------------
|
||||
|
||||
latex_elements = {
|
||||
# 'papersize': 'letterpaper',
|
||||
# 'pointsize': '10pt',
|
||||
# 'preamble': '',
|
||||
# 'figure_align': 'htbp',
|
||||
}
|
||||
|
||||
# Grouping the document tree into LaTeX files. List of tuples
|
||||
# (source start file, target name, title,
|
||||
# author, documentclass [howto, manual, or own class]).
|
||||
latex_documents = [
|
||||
(
|
||||
master_doc,
|
||||
'JupyterHub.tex',
|
||||
'JupyterHub Documentation',
|
||||
'Project Jupyter team',
|
||||
'manual',
|
||||
)
|
||||
# -- Options for linkcheck builder -------------------------------------------
|
||||
# ref: https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-the-linkcheck-builder
|
||||
#
|
||||
linkcheck_ignore = [
|
||||
r"(.*)github\.com(.*)#", # javascript based anchors
|
||||
r"(.*)/#%21(.*)/(.*)", # /#!forum/jupyter - encoded anchor edge case
|
||||
r"https?://(.*\.)?example\.(org|com)(/.*)?", # example links
|
||||
r"https://github.com/[^/]*$", # too many github usernames / searches in changelog
|
||||
"https://github.com/jupyterhub/jupyterhub/pull/", # too many PRs in changelog
|
||||
"https://github.com/jupyterhub/jupyterhub/compare/", # too many comparisons in changelog
|
||||
"https://schema.jupyter.org/jupyterhub/.*", # schemas are not published yet
|
||||
r"https?://(localhost|127.0.0.1).*", # ignore localhost references in auto-links
|
||||
r"https://linux.die.net/.*", # linux.die.net seems to block requests from CI with 403 sometimes
|
||||
# don't check links to unpublished advisories
|
||||
r"https://github.com/jupyterhub/jupyterhub/security/advisories/.*",
|
||||
]
|
||||
linkcheck_anchors_ignore = [
|
||||
"/#!",
|
||||
"/#%21",
|
||||
]
|
||||
|
||||
# latex_logo = None
|
||||
# latex_use_parts = False
|
||||
# latex_show_pagerefs = False
|
||||
# latex_show_urls = False
|
||||
# latex_appendices = []
|
||||
# latex_domain_indices = True
|
||||
|
||||
|
||||
# -- manual page output -------------------------------------------------
|
||||
|
||||
# One entry per manual page. List of tuples
|
||||
# (source start file, name, description, authors, manual section).
|
||||
man_pages = [(master_doc, 'jupyterhub', 'JupyterHub Documentation', [author], 1)]
|
||||
|
||||
# man_show_urls = False
|
||||
|
||||
|
||||
# -- Texinfo output -----------------------------------------------------
|
||||
|
||||
# Grouping the document tree into Texinfo files. List of tuples
|
||||
# (source start file, target name, title, author,
|
||||
# dir menu entry, description, category)
|
||||
texinfo_documents = [
|
||||
(
|
||||
master_doc,
|
||||
'JupyterHub',
|
||||
'JupyterHub Documentation',
|
||||
author,
|
||||
'JupyterHub',
|
||||
'One line description of project.',
|
||||
'Miscellaneous',
|
||||
)
|
||||
]
|
||||
|
||||
# texinfo_appendices = []
|
||||
# texinfo_domain_indices = True
|
||||
# texinfo_show_urls = 'footnote'
|
||||
# texinfo_no_detailmenu = False
|
||||
|
||||
|
||||
# -- Epub output --------------------------------------------------------
|
||||
|
||||
# Bibliographic Dublin Core info.
|
||||
epub_title = project
|
||||
epub_author = author
|
||||
epub_publisher = author
|
||||
epub_copyright = copyright
|
||||
|
||||
# A list of files that should not be packed into the epub file.
|
||||
epub_exclude_files = ['search.html']
|
||||
|
||||
# -- Intersphinx ----------------------------------------------------------
|
||||
|
||||
# -- Intersphinx -------------------------------------------------------------
|
||||
# ref: https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html#configuration
|
||||
#
|
||||
intersphinx_mapping = {
|
||||
'python': ('https://docs.python.org/3/', None),
|
||||
'tornado': ('https://www.tornadoweb.org/en/stable/', None),
|
||||
"python": ("https://docs.python.org/3/", None),
|
||||
"tornado": ("https://www.tornadoweb.org/en/stable/", None),
|
||||
"jupyter-server": ("https://jupyter-server.readthedocs.io/en/stable/", None),
|
||||
"nbgitpuller": ("https://nbgitpuller.readthedocs.io/en/latest", None),
|
||||
}
|
||||
|
||||
# -- Read The Docs --------------------------------------------------------
|
||||
# -- Options for the opengraph extension -------------------------------------
|
||||
# ref: https://github.com/wpilibsuite/sphinxext-opengraph#options
|
||||
#
|
||||
# ogp_site_url is set automatically by RTD
|
||||
ogp_image = "_static/logo.png"
|
||||
ogp_use_first_image = True
|
||||
|
||||
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
|
||||
if on_rtd:
|
||||
# readthedocs.org uses their theme by default, so no need to specify it
|
||||
# build both metrics and rest-api, since RTD doesn't run make
|
||||
from subprocess import check_call as sh
|
||||
|
||||
sh(['make', 'metrics', 'scopes'], cwd=docs)
|
||||
# -- Options for the rediraffe extension -------------------------------------
|
||||
# ref: https://github.com/wpilibsuite/sphinxext-rediraffe#readme
|
||||
#
|
||||
# This extension helps us relocate content without breaking links. If a
|
||||
# document is moved internally, a redirect link should be configured as below to
|
||||
# help us not break links.
|
||||
#
|
||||
# The workflow for adding redirects can be as follows:
|
||||
# 1. Change "rediraffe_branch" below to point to the commit/ branch you
|
||||
# want to base off the changes.
|
||||
# 2. Option 1: run "make rediraffecheckdiff"
|
||||
# a. Analyze the output of this command.
|
||||
# b. Manually add the redirect entries to the "redirects.txt" file.
|
||||
# Option 2: run "make rediraffewritediff"
|
||||
# a. rediraffe will then automatically add the obvious redirects to redirects.txt.
|
||||
# b. Analyze the output of the command for broken links.
|
||||
# c. Check the "redirects.txt" file for any files that were moved/ renamed but are not listed.
|
||||
# d. Manually add the redirects that have been mised by the automatic builder to "redirects.txt".
|
||||
# Option 3: Do not use the commands above and, instead, do everything manually - by taking
|
||||
# note of the files you have moved or renamed and adding them to the "redirects.txt" file.
|
||||
#
|
||||
# If you are basing changes off another branch/commit, always change back
|
||||
# rediraffe_branch to main before pushing your changes upstream.
|
||||
#
|
||||
rediraffe_branch = os.environ.get("REDIRAFFE_BRANCH", "main")
|
||||
rediraffe_redirects = "redirects.txt"
|
||||
|
||||
# -- Spell checking -------------------------------------------------------
|
||||
# allow 80% match for autogenerated redirects
|
||||
rediraffe_auto_redirect_perc = 80
|
||||
|
||||
try:
|
||||
import sphinxcontrib.spelling
|
||||
except ImportError:
|
||||
pass
|
||||
else:
|
||||
extensions.append("sphinxcontrib.spelling")
|
||||
|
||||
spelling_word_list_filename = 'spelling_wordlist.txt'
|
||||
# rediraffe_redirects = {
|
||||
# "old-file": "new-folder/new-file-name",
|
||||
# }
|
||||
|
29
docs/source/contributing/community.md
Normal file
@@ -0,0 +1,29 @@
|
||||
(contributing:community)=
|
||||
|
||||
# Community communication channels
|
||||
|
||||
We use different channels of communication for different purposes. Whichever one you use will depend on what kind of communication you want to engage in.
|
||||
|
||||
## Discourse (recommended)
|
||||
|
||||
We use [Discourse](https://discourse.jupyter.org) for online discussions and support questions.
|
||||
You can ask questions here if you are a first-time contributor to the JupyterHub project.
|
||||
Everyone in the Jupyter community is welcome to bring ideas and questions there.
|
||||
|
||||
We recommend that you first use our Discourse as all past and current discussions on it are archived and searchable. Thus, all discussions remain useful and accessible to the whole community.
|
||||
|
||||
## Gitter
|
||||
|
||||
We use [our Gitter channel](https://gitter.im/jupyterhub/jupyterhub) for online, real-time text chat: a place for more ephemeral discussions. If you're not on Discourse, you can drop in here for quick discussions on the fly.
|
||||
|
||||
## GitHub Issues

[GitHub issues](https://docs.github.com/en/issues/tracking-your-work-with-issues/about-issues) are used for most long-form project discussions, bug reports and feature requests.
|
||||
|
||||
- Issues related to a specific authenticator or spawner should be opened in the appropriate repository for the authenticator or spawner.
|
||||
- If you are using a specific JupyterHub distribution (such as [Zero to JupyterHub on Kubernetes](https://github.com/jupyterhub/zero-to-jupyterhub-k8s) or [The Littlest JupyterHub](https://github.com/jupyterhub/the-littlest-jupyterhub/)), you should open issues directly in their repository.
|
||||
- If you cannot find a repository to open your issue in, do not worry! Open the issue in the [main JupyterHub repository](https://github.com/jupyterhub/jupyterhub/) and our community will help you figure it out.
|
||||
|
||||
```{note}
|
||||
Our community is distributed across the world in various timezones, so please be patient if you do not get a response immediately!
|
||||
```
|
@@ -1,30 +0,0 @@
|
||||
.. _contributing/community:
|
||||
|
||||
================================
|
||||
Community communication channels
|
||||
================================
|
||||
|
||||
We use `Discourse <https://discourse.jupyter.org>` for online discussion.
|
||||
Everyone in the Jupyter community is welcome to bring ideas and questions there.
|
||||
In addition, we use `Gitter <https://gitter.im>`_ for online, real-time text chat,
|
||||
a place for more ephemeral discussions.
|
||||
The primary Gitter channel for JupyterHub is `jupyterhub/jupyterhub <https://gitter.im/jupyterhub/jupyterhub>`_.
|
||||
Gitter isn't archived or searchable, so we recommend going to discourse first
|
||||
to make sure that discussions are most useful and accessible to the community.
|
||||
Remember that our community is distributed across the world in various
|
||||
timezones, so be patient if you do not get an answer immediately!
|
||||
|
||||
GitHub issues are used for most long-form project discussions, bug reports
|
||||
and feature requests. Issues related to a specific authenticator or
|
||||
spawner should be directed to the appropriate repository for the
|
||||
authenticator or spawner. If you are using a specific JupyterHub
|
||||
distribution (such as `Zero to JupyterHub on Kubernetes <http://github.com/jupyterhub/zero-to-jupyterhub-k8s>`_
|
||||
or `The Littlest JupyterHub <http://github.com/jupyterhub/the-littlest-jupyterhub/>`_),
|
||||
you should open issues directly in their repository. If you can not
|
||||
find a repository to open your issue in, do not worry! Create it in the `main
|
||||
JupyterHub repository <https://github.com/jupyterhub/jupyterhub/>`_ and our
|
||||
community will help you figure it out.
|
||||
|
||||
A `mailing list <https://groups.google.com/forum/#!forum/jupyter>`_ for all
|
||||
of Project Jupyter exists, along with one for `teaching with Jupyter
|
||||
<https://groups.google.com/forum/#!forum/jupyter-education>`_.
|
@@ -1,3 +1,5 @@
|
||||
(contributing:contributors)=
|
||||
|
||||
# Contributors
|
||||
|
||||
Project Jupyter thanks the following people for their help and
|
||||
@@ -120,3 +122,4 @@ contribution on JupyterHub:
|
||||
- yuvipanda
|
||||
- zoltan-fedor
|
||||
- zonca
|
||||
- Neeraj Natu
|
76
docs/source/contributing/docs.md
Normal file
@@ -0,0 +1,76 @@
|
||||
(contributing:docs)=
|
||||
|
||||
# Contributing Documentation
|
||||
|
||||
Documentation is often more important than code. This page helps
|
||||
you get set up on how to contribute to JupyterHub's documentation.
|
||||
|
||||
## Building documentation locally
|
||||
|
||||
We use [sphinx](https://www.sphinx-doc.org) to build our documentation. It takes
|
||||
our documentation source files (written in [markdown](https://daringfireball.net/projects/markdown/) or [reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html) &
|
||||
stored under the `docs/source` directory) and converts it into various
|
||||
formats for people to read. To make sure the documentation you write or
|
||||
change renders correctly, it is good practice to test it locally.
|
||||
|
||||
1. Make sure you have successfully completed {ref}`contributing:setup`.
|
||||
|
||||
2. Install the packages required to build the docs.
|
||||
|
||||
```bash
|
||||
python3 -m pip install -r docs/requirements.txt
|
||||
```
|
||||
|
||||
3. Build the html version of the docs. This is the most commonly used
|
||||
output format, so verifying it renders correctly is usually good
|
||||
enough.
|
||||
|
||||
```bash
|
||||
cd docs
|
||||
make html
|
||||
```
|
||||
|
||||
This step will display any syntax or formatting errors in the documentation,
|
||||
along with the filename / line number in which they occurred. Fix them,
|
||||
and re-run the `make html` command to re-render the documentation.
|
||||
|
||||
4. View the rendered documentation by opening `_build/html/index.html` in
|
||||
a web browser.
|
||||
|
||||
:::{tip}
|
||||
**On Windows**, you can open a file from the terminal with `start <path-to-file>`.
|
||||
|
||||
**On macOS**, you can do the same with `open <path-to-file>`.
|
||||
|
||||
**On Linux**, you can do the same with `xdg-open <path-to-file>`.
|
||||
|
||||
After opening `index.html` in your browser, you can simply refresh the page whenever
you rebuild the docs via `make html`.
|
||||
:::
|
||||
|
||||
(contributing-docs-conventions)=
|
||||
|
||||
## Documentation conventions
|
||||
|
||||
This section lists various conventions we use in our documentation. This is a
|
||||
living document that grows over time, so feel free to add to it / change it!
|
||||
|
||||
Our documentation does not yet fully conform to these conventions,
so help in making it so would be appreciated!
|
||||
|
||||
### `pip` invocation
|
||||
|
||||
There are many ways to invoke a `pip` command; we recommend the following
|
||||
approach:
|
||||
|
||||
```bash
|
||||
python3 -m pip
|
||||
```
|
||||
|
||||
This invokes pip explicitly using the python3 binary that you are
|
||||
currently using. This is the **recommended way** to invoke pip
|
||||
in our documentation, since it is least likely to cause problems
|
||||
with python3 and pip being from different environments.
|
||||
|
||||
For more information on how to invoke `pip` commands, see
|
||||
[the pip documentation](https://pip.pypa.io/en/stable/).
|
@@ -1,78 +0,0 @@
|
||||
.. _contributing/docs:
|
||||
|
||||
==========================
|
||||
Contributing Documentation
|
||||
==========================
|
||||
|
||||
Documentation is often more important than code. This page helps
|
||||
you get set up on how to contribute documentation to JupyterHub.
|
||||
|
||||
Building documentation locally
|
||||
==============================
|
||||
|
||||
We use `sphinx <http://sphinx-doc.org>`_ to build our documentation. It takes
|
||||
our documentation source files (written in `markdown
|
||||
<https://daringfireball.net/projects/markdown/>`_ or `reStructuredText
|
||||
<https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_ &
|
||||
stored under the ``docs/source`` directory) and converts it into various
|
||||
formats for people to read. To make sure the documentation you write or
|
||||
change renders correctly, it is good practice to test it locally.
|
||||
|
||||
#. Make sure you have successfuly completed :ref:`contributing/setup`.
|
||||
|
||||
#. Install the packages required to build the docs.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
python3 -m pip install -r docs/requirements.txt
|
||||
|
||||
#. Build the html version of the docs. This is the most commonly used
|
||||
output format, so verifying it renders as you should is usually good
|
||||
enough.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
cd docs
|
||||
make html
|
||||
|
||||
This step will display any syntax or formatting errors in the documentation,
|
||||
along with the filename / line number in which they occurred. Fix them,
|
||||
and re-run the ``make html`` command to re-render the documentation.
|
||||
|
||||
#. View the rendered documentation by opening ``build/html/index.html`` in
|
||||
a web browser.
|
||||
|
||||
.. tip::
|
||||
|
||||
On macOS, you can open a file from the terminal with ``open <path-to-file>``.
|
||||
On Linux, you can do the same with ``xdg-open <path-to-file>``.
|
||||
|
||||
|
||||
.. _contributing/docs/conventions:
|
||||
|
||||
Documentation conventions
|
||||
=========================
|
||||
|
||||
This section lists various conventions we use in our documentation. This is a
|
||||
living document that grows over time, so feel free to add to it / change it!
|
||||
|
||||
Our entire documentation does not yet fully conform to these conventions yet,
|
||||
so help in making it so would be appreciated!
|
||||
|
||||
``pip`` invocation
|
||||
------------------
|
||||
|
||||
There are many ways to invoke a ``pip`` command, we recommend the following
|
||||
approach:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
python3 -m pip
|
||||
|
||||
This invokes pip explicitly using the python3 binary that you are
|
||||
currently using. This is the **recommended way** to invoke pip
|
||||
in our documentation, since it is least likely to cause problems
|
||||
with python3 and pip being from different environments.
|
||||
|
||||
For more information on how to invoke ``pip`` commands, see
|
||||
`the pip documentation <https://pip.pypa.io/en/stable/>`_.
|
24
docs/source/contributing/index.md
Normal file
@@ -0,0 +1,24 @@
|
||||
(contributing)=
|
||||
|
||||
# Contributing
|
||||
|
||||
We want you to contribute to JupyterHub in ways that are most exciting
|
||||
and useful to you. We value documentation, testing, bug reporting & code equally,
|
||||
and are glad to have your contributions in whatever form you wish.
|
||||
|
||||
Be sure to first check our [Code of Conduct](https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md)
|
||||
([reporting guidelines](https://github.com/jupyter/governance/blob/HEAD/conduct/reporting_online.md)), which help keep our community welcoming to as many people as possible.
|
||||
|
||||
This section covers information about our community, as well as ways that you can connect and get involved.
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 2
|
||||
|
||||
contributor-list
|
||||
community
|
||||
setup
|
||||
docs
|
||||
tests
|
||||
roadmap
|
||||
security
|
||||
```
|
@@ -1,21 +0,0 @@
|
||||
============
|
||||
Contributing
|
||||
============
|
||||
|
||||
We want you to contribute to JupyterHub in ways that are most exciting
|
||||
& useful to you. We value documentation, testing, bug reporting & code equally,
|
||||
and are glad to have your contributions in whatever form you wish :)
|
||||
|
||||
Our `Code of Conduct <https://github.com/jupyter/governance/blob/HEAD/conduct/code_of_conduct.md>`_
|
||||
(`reporting guidelines <https://github.com/jupyter/governance/blob/HEAD/conduct/reporting_online.md>`_)
|
||||
helps keep our community welcoming to as many people as possible.
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
community
|
||||
setup
|
||||
docs
|
||||
tests
|
||||
roadmap
|
||||
security
|
@@ -1,10 +1,12 @@
|
||||
(contributing:roadmap)=
|
||||
|
||||
# The JupyterHub roadmap
|
||||
|
||||
This roadmap collects "next steps" for JupyterHub. It is about creating a
|
||||
shared understanding of the project's vision and direction amongst
|
||||
the community of users, contributors, and maintainers.
|
||||
The goal is to communicate priorities and upcoming release plans.
|
||||
It is not a aimed at limiting contributions to what is listed here.
|
||||
It is not aimed at limiting contributions to what is listed here.
|
||||
|
||||
## Using the roadmap
|
||||
|
||||
|
11
docs/source/contributing/security.md
Normal file
@@ -0,0 +1,11 @@
|
||||
(contributing:security)=
|
||||
|
||||
# Reporting security issues in Jupyter or JupyterHub
|
||||
|
||||
If you find a security vulnerability in Jupyter or JupyterHub,
|
||||
whether it is a failure of the security model described in [Security Overview](explanation:security)
|
||||
or a failure in implementation,
|
||||
please report it to <mailto:security@ipython.org>.
|
||||
|
||||
If you prefer to encrypt your security reports,
|
||||
you can use {download}`this PGP public key </ipython_security.asc>`.
|
@@ -1,10 +0,0 @@
|
||||
Reporting security issues in Jupyter or JupyterHub
|
||||
==================================================
|
||||
|
||||
If you find a security vulnerability in Jupyter or JupyterHub,
|
||||
whether it is a failure of the security model described in :doc:`../reference/websecurity`
|
||||
or a failure in implementation,
|
||||
please report it to security@ipython.org.
|
||||
|
||||
If you prefer to encrypt your security reports,
|
||||
you can use :download:`this PGP public key </ipython_security.asc>`.
|
242
docs/source/contributing/setup.md
Normal file
@@ -0,0 +1,242 @@
|
||||
(contributing:setup)=
|
||||
|
||||
# Setting up a development install
|
||||
|
||||
## System requirements
|
||||
|
||||
JupyterHub can only run on macOS or Linux operating systems. If you are
|
||||
using Windows, we recommend using [VirtualBox](https://virtualbox.org)
|
||||
or a similar system to run [Ubuntu Linux](https://ubuntu.com) for
|
||||
development.
|
||||
|
||||
### Install Python
|
||||
|
||||
JupyterHub is written in the [Python](https://python.org) programming language and
|
||||
requires that you have at least version {{python_min}} installed locally. If you haven’t
|
||||
installed Python before, the recommended way to install it is to use
|
||||
[Miniforge](https://github.com/conda-forge/miniforge#download).
|
||||
|
||||
### Install nodejs
|
||||
|
||||
[NodeJS {{node_min}}+](https://nodejs.org/en/) is required for building some JavaScript components.
|
||||
`configurable-http-proxy`, the default proxy implementation for JupyterHub, is written in Javascript.
|
||||
If you have not installed NodeJS before, we recommend installing it in the `miniconda` environment you set up for Python.
|
||||
You can do so with `conda install nodejs`.
|
||||
|
||||
Many in the Jupyter community use [`nvm`](https://github.com/nvm-sh/nvm) to
manage node dependencies.
|
||||
|
||||
### Install git
|
||||
|
||||
JupyterHub uses [Git](https://git-scm.com) & [GitHub](https://github.com)
|
||||
for development & collaboration. You need to [install git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) to work on
|
||||
JupyterHub. We also recommend getting a free account on GitHub.com.
|
||||
|
||||
## Setting up a development install
|
||||
|
||||
When developing JupyterHub, you need to be able to make changes and instantly see their effects. To achieve that, a developer install is required.
|
||||
|
||||
:::{note}
|
||||
This guide does not attempt to dictate _how_ development
|
||||
environments should be isolated since that is a personal preference and can
|
||||
be achieved in many ways, for example, `tox`, `conda`, `docker`, etc. See this
|
||||
[forum thread](https://discourse.jupyter.org/t/thoughts-on-using-tox/3497) for
|
||||
a more detailed discussion.
|
||||
:::
|
||||
|
||||
1. Clone the [JupyterHub git repository](https://github.com/jupyterhub/jupyterhub)
|
||||
to your computer.
|
||||
|
||||
```bash
|
||||
git clone https://github.com/jupyterhub/jupyterhub
|
||||
cd jupyterhub
|
||||
```
|
||||
|
||||
2. Make sure the `python` you installed and the `npm` you installed
|
||||
are available to you on the command line.
|
||||
|
||||
```bash
|
||||
python -V
|
||||
```
|
||||
|
||||
This should return a version number greater than or equal to {{python_min}}.
|
||||
|
||||
```bash
|
||||
npm -v
|
||||
```
|
||||
|
||||
This should return a version number greater than or equal to 5.0.
|
||||
|
||||
3. Install `configurable-http-proxy` (required to run and test the default JupyterHub configuration):
|
||||
|
||||
```bash
|
||||
npm install -g configurable-http-proxy
|
||||
```
|
||||
|
||||
If you get an error that says `Error: EACCES: permission denied`, you might need to prefix the command with `sudo`,
since a system-wide install may require elevated permissions.
If you do not have access to `sudo`, you may instead run the following commands:
|
||||
|
||||
```bash
|
||||
npm install configurable-http-proxy
|
||||
export PATH=$PATH:$(pwd)/node_modules/.bin
|
||||
```
|
||||
|
||||
The second line needs to be run every time you open a new terminal.
|
||||
|
||||
If you are using conda you can instead run:
|
||||
|
||||
```bash
|
||||
conda install configurable-http-proxy
|
||||
```
|
||||
|
||||
4. Install an editable version of JupyterHub and its requirements for
|
||||
development and testing. This lets you edit JupyterHub code in a text editor
|
||||
& restart the JupyterHub process to see your code changes immediately.
|
||||
|
||||
```bash
|
||||
python3 -m pip install --editable ".[test]"
|
||||
```
|
||||
|
||||
5. You are now ready to start JupyterHub!
|
||||
|
||||
```bash
|
||||
jupyterhub
|
||||
```
|
||||
|
||||
6. You can access JupyterHub from your browser at
|
||||
`http://localhost:8000` now.
|
||||
|
||||
Happy developing!
|
||||
|
||||
## Using DummyAuthenticator & SimpleLocalProcessSpawner
|
||||
|
||||
To simplify testing of JupyterHub, it is helpful to use
|
||||
{class}`~jupyterhub.auth.DummyAuthenticator` instead of the default JupyterHub
|
||||
authenticator and SimpleLocalProcessSpawner instead of the default spawner.
|
||||
|
||||
There is a sample configuration file that does this in
|
||||
`testing/jupyterhub_config.py`. To launch JupyterHub with this
|
||||
configuration:
|
||||
|
||||
```bash
|
||||
jupyterhub -f testing/jupyterhub_config.py
|
||||
```
|
||||
|
||||
The test configuration enables a few things to make testing easier:
|
||||
|
||||
- use 'dummy' authentication and 'simple' spawner
|
||||
- named servers are enabled
|
||||
- listen only on localhost
|
||||
- 'admin' is an admin user, if you want to test the admin page
|
||||
- disable caching of static files
|
||||
|
||||
The default JupyterHub [authenticator](PAMAuthenticator)
|
||||
& [spawner](LocalProcessSpawner)
|
||||
require your system to have user accounts for each user you want to log in to
|
||||
JupyterHub as.
|
||||
|
||||
DummyAuthenticator allows you to log in with any username & password,
|
||||
while SimpleLocalProcessSpawner allows you to start servers without having to
|
||||
create a Unix user for each JupyterHub user. Together, these make it
|
||||
much easier to test JupyterHub.
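
A minimal sketch of what such a configuration can look like (the bundled
`testing/jupyterhub_config.py` enables more than this; the lines below only show
the authenticator and spawner selection):

```python
# jupyterhub_config.py -- minimal sketch, not the full testing config
c = get_config()  # noqa -- provided by JupyterHub when it loads the config file

# log in with any username & password, no system accounts needed
c.JupyterHub.authenticator_class = "dummy"

# spawn single-user servers as local processes without creating Unix users
c.JupyterHub.spawner_class = "simple"

# listen only on localhost while developing
c.JupyterHub.bind_url = "http://127.0.0.1:8000"
```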
|
||||
|
||||
Tip: If you are working on parts of JupyterHub that are common to all
|
||||
authenticators & spawners, we recommend using both DummyAuthenticator &
|
||||
SimpleLocalProcessSpawner. If you are working on just authenticator-related
|
||||
parts, use only SimpleLocalProcessSpawner. Similarly, if you are working on
|
||||
just spawner-related parts, use only DummyAuthenticator.
|
||||
|
||||
## Building frontend components
|
||||
|
||||
The testing configuration file also disables caching of static files,
|
||||
which allows you to edit and rebuild these files without restarting JupyterHub.
|
||||
|
||||
If you are working on the admin react page, which is in the `jsx` directory, you can run:
|
||||
|
||||
```bash
|
||||
cd jsx
|
||||
npm install
|
||||
npm run build:watch
|
||||
```
|
||||
|
||||
to continuously rebuild the admin page, requiring only a refresh of the page.
|
||||
|
||||
If you are working on the frontend SCSS files, you can run the same `build:watch` command
|
||||
in the _top level_ directory of the repo:
|
||||
|
||||
```bash
|
||||
npm install
|
||||
npm run build:watch
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
This section lists common ways setting up your development environment may
|
||||
fail, and how to fix them. Please add to the list if you encounter yet
|
||||
another way it can fail!
|
||||
|
||||
### `lessc` not found
|
||||
|
||||
If the `python3 -m pip install --editable .` command fails and complains about
|
||||
`lessc` being unavailable, you may need to explicitly install some
|
||||
additional JavaScript dependencies:
|
||||
|
||||
```bash
|
||||
npm install
|
||||
```
|
||||
|
||||
This will fetch client-side JavaScript dependencies necessary to compile
|
||||
CSS.
|
||||
|
||||
You may also need to manually update JavaScript and CSS after some
|
||||
development updates, with:
|
||||
|
||||
```bash
|
||||
python3 setup.py js # fetch updated client-side js
|
||||
python3 setup.py css # recompile CSS from LESS sources
|
||||
python3 setup.py jsx # build React admin app
|
||||
```
|
||||
|
||||
### Failed to bind XXX to `http://127.0.0.1:<port>/<path>`
|
||||
|
||||
This error can happen when there's already an application or a service using this
|
||||
port.
|
||||
|
||||
Use the following command to find out which service is using this port.
|
||||
|
||||
```bash
|
||||
lsof -P -i TCP:<port> -sTCP:LISTEN
|
||||
```
|
||||
|
||||
If nothing shows up, it likely means a system service is using the port but
your current user cannot see it. Re-run the same command with `sudo`:
|
||||
|
||||
```bash
|
||||
sudo lsof -P -i TCP:<port> -sTCP:LISTEN
|
||||
```
|
||||
|
||||
Depending on the result of the above commands, the simplest solution is to
|
||||
configure JupyterHub to use a different port for the service that is failing.
|
||||
|
||||
As an example, the following is a frequently seen issue:
|
||||
|
||||
`Failed to bind hub to http://127.0.0.1:8081/hub/`
|
||||
|
||||
Using the procedure described above, start with:
|
||||
|
||||
```bash
|
||||
lsof -P -i TCP:8081 -sTCP:LISTEN
|
||||
```
|
||||
|
||||
and if nothing shows up:
|
||||
|
||||
```bash
|
||||
sudo lsof -P -i TCP:8081 -sTCP:LISTEN
|
||||
```
|
||||
|
||||
Finally, depending on your findings, you can apply the following change and start JupyterHub again:
|
||||
|
||||
```python
|
||||
c.JupyterHub.hub_port = 9081 # Or any other free port
|
||||
```
|
@@ -1,188 +0,0 @@
|
||||
.. _contributing/setup:
|
||||
|
||||
================================
|
||||
Setting up a development install
|
||||
================================
|
||||
|
||||
System requirements
|
||||
===================
|
||||
|
||||
JupyterHub can only run on MacOS or Linux operating systems. If you are
|
||||
using Windows, we recommend using `VirtualBox <https://virtualbox.org>`_
|
||||
or a similar system to run `Ubuntu Linux <https://ubuntu.com>`_ for
|
||||
development.
|
||||
|
||||
Install Python
|
||||
--------------
|
||||
|
||||
JupyterHub is written in the `Python <https://python.org>`_ programming language, and
|
||||
requires you have at least version 3.5 installed locally. If you haven’t
|
||||
installed Python before, the recommended way to install it is to use
|
||||
`miniconda <https://conda.io/miniconda.html>`_. Remember to get the ‘Python 3’ version,
|
||||
and **not** the ‘Python 2’ version!
|
||||
|
||||
Install nodejs
|
||||
--------------
|
||||
|
||||
``configurable-http-proxy``, the default proxy implementation for
|
||||
JupyterHub, is written in Javascript to run on `NodeJS
|
||||
<https://nodejs.org/en/>`_. If you have not installed nodejs before, we
|
||||
recommend installing it in the ``miniconda`` environment you set up for
|
||||
Python. You can do so with ``conda install nodejs``.
|
||||
|
||||
Install git
|
||||
-----------
|
||||
|
||||
JupyterHub uses `git <https://git-scm.com>`_ & `GitHub <https://github.com>`_
|
||||
for development & collaboration. You need to `install git
|
||||
<https://git-scm.com/book/en/v2/Getting-Started-Installing-Git>`_ to work on
|
||||
JupyterHub. We also recommend getting a free account on GitHub.com.
|
||||
|
||||
Setting up a development install
|
||||
================================
|
||||
|
||||
When developing JupyterHub, you need to make changes to the code & see
|
||||
their effects quickly. You need to do a developer install to make that
|
||||
happen.
|
||||
|
||||
.. note:: This guide does not attempt to dictate *how* development
|
||||
environements should be isolated since that is a personal preference and can
|
||||
be achieved in many ways, for example `tox`, `conda`, `docker`, etc. See this
|
||||
`forum thread <https://discourse.jupyter.org/t/thoughts-on-using-tox/3497>`_ for
|
||||
a more detailed discussion.
|
||||
|
||||
1. Clone the `JupyterHub git repository <https://github.com/jupyterhub/jupyterhub>`_
|
||||
to your computer.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
git clone https://github.com/jupyterhub/jupyterhub
|
||||
cd jupyterhub
|
||||
|
||||
2. Make sure the ``python`` you installed and the ``npm`` you installed
|
||||
are available to you on the command line.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
python -V
|
||||
|
||||
This should return a version number greater than or equal to 3.5.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
npm -v
|
||||
|
||||
This should return a version number greater than or equal to 5.0.
|
||||
|
||||
3. Install ``configurable-http-proxy``. This is required to run
|
||||
JupyterHub.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
npm install -g configurable-http-proxy
|
||||
|
||||
If you get an error that says ``Error: EACCES: permission denied``,
|
||||
you might need to prefix the command with ``sudo``. If you do not
|
||||
have access to sudo, you may instead run the following commands:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
npm install configurable-http-proxy
|
||||
export PATH=$PATH:$(pwd)/node_modules/.bin
|
||||
|
||||
The second line needs to be run every time you open a new terminal.
|
||||
|
||||
4. Install the python packages required for JupyterHub development.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
python3 -m pip install -r dev-requirements.txt
|
||||
python3 -m pip install -r requirements.txt
|
||||
|
||||
5. Setup a database.
|
||||
|
||||
The default database engine is ``sqlite`` so if you are just trying
|
||||
to get up and running quickly for local development that should be
|
||||
available via `python <https://docs.python.org/3.5/library/sqlite3.html>`__.
|
||||
See :doc:`/reference/database` for details on other supported databases.
|
||||
|
||||
6. Install the development version of JupyterHub. This lets you edit
|
||||
JupyterHub code in a text editor & restart the JupyterHub process to
|
||||
see your code changes immediately.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
python3 -m pip install --editable .
|
||||
|
||||
7. You are now ready to start JupyterHub!
|
||||
|
||||
.. code:: bash
|
||||
|
||||
jupyterhub
|
||||
|
||||
8. You can access JupyterHub from your browser at
|
||||
``http://localhost:8000`` now.
|
||||
|
||||
Happy developing!
|
||||
|
||||
Using DummyAuthenticator & SimpleLocalProcessSpawner
|
||||
====================================================
|
||||
|
||||
To simplify testing of JupyterHub, it’s helpful to use
|
||||
:class:`~jupyterhub.auth.DummyAuthenticator` instead of the default JupyterHub
|
||||
authenticator and SimpleLocalProcessSpawner instead of the default spawner.
|
||||
|
||||
There is a sample configuration file that does this in
|
||||
``testing/jupyterhub_config.py``. To launch jupyterhub with this
|
||||
configuration:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
jupyterhub -f testing/jupyterhub_config.py
|
||||
|
||||
The default JupyterHub `authenticator
|
||||
<https://jupyterhub.readthedocs.io/en/stable/reference/authenticators.html#the-default-pam-authenticator>`_
|
||||
& `spawner
|
||||
<https://jupyterhub.readthedocs.io/en/stable/api/spawner.html#localprocessspawner>`_
|
||||
require your system to have user accounts for each user you want to log in to
|
||||
JupyterHub as.
|
||||
|
||||
DummyAuthenticator allows you to log in with any username & password,
|
||||
while SimpleLocalProcessSpawner allows you to start servers without having to
|
||||
create a unix user for each JupyterHub user. Together, these make it
|
||||
much easier to test JupyterHub.
|
||||
|
||||
Tip: If you are working on parts of JupyterHub that are common to all
|
||||
authenticators & spawners, we recommend using both DummyAuthenticator &
|
||||
SimpleLocalProcessSpawner. If you are working on just authenticator related
|
||||
parts, use only SimpleLocalProcessSpawner. Similarly, if you are working on
|
||||
just spawner related parts, use only DummyAuthenticator.
|
||||
|
||||
Troubleshooting
|
||||
===============
|
||||
|
||||
This section lists common ways setting up your development environment may
|
||||
fail, and how to fix them. Please add to the list if you encounter yet
|
||||
another way it can fail!
|
||||
|
||||
``lessc`` not found
|
||||
-------------------
|
||||
|
||||
If the ``python3 -m pip install --editable .`` command fails and complains about
|
||||
``lessc`` being unavailable, you may need to explicitly install some
|
||||
additional JavaScript dependencies:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
npm install
|
||||
|
||||
This will fetch client-side JavaScript dependencies necessary to compile
|
||||
CSS.
|
||||
|
||||
You may also need to manually update JavaScript and CSS after some
|
||||
development updates, with:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
python3 setup.py js # fetch updated client-side js
|
||||
python3 setup.py css # recompile CSS from LESS sources
|
157
docs/source/contributing/tests.md
Normal file
@@ -0,0 +1,157 @@
|
||||
(contributing-tests)=
|
||||
|
||||
# Testing JupyterHub and linting code
|
||||
|
||||
Unit tests help to validate that JupyterHub works the way we think it does,
|
||||
and continues to do so when changes occur. They also help communicate
|
||||
precisely what we expect our code to do.
|
||||
|
||||
JupyterHub uses [pytest](https://pytest.org) for all the tests. You
|
||||
can find them under the [jupyterhub/tests](https://github.com/jupyterhub/jupyterhub/tree/main/jupyterhub/tests) directory in the git repository.
|
||||
|
||||
## Running the tests
|
||||
|
||||
1. Make sure you have completed {ref}`contributing:setup`.
|
||||
Once you are done, you should be able to run `jupyterhub` from the command line and access it from your web browser.
|
||||
This ensures that the dev environment is properly set up for tests to run.
|
||||
|
||||
2. You can run all tests in JupyterHub
|
||||
|
||||
```bash
|
||||
pytest -v jupyterhub/tests
|
||||
```
|
||||
|
||||
This should display progress as it runs all the tests, printing
|
||||
information about any test failures as they occur.
|
||||
|
||||
If you wish to confirm test coverage, run the tests with the `--cov` flag:
|
||||
|
||||
```bash
|
||||
pytest -v --cov=jupyterhub jupyterhub/tests
|
||||
```
|
||||
|
||||
3. You can also run tests in just a specific file:
|
||||
|
||||
```bash
|
||||
pytest -v jupyterhub/tests/<test-file-name>
|
||||
```
|
||||
|
||||
4. To run a specific test only, you can do:
|
||||
|
||||
```bash
|
||||
pytest -v jupyterhub/tests/<test-file-name>::<test-name>
|
||||
```
|
||||
|
||||
This runs the test with function name `<test-name>` defined in
|
||||
`<test-file-name>`. This is very useful when you are iteratively
|
||||
developing a single test.
|
||||
|
||||
For example, to run the test `test_shutdown` in the file `test_api.py`,
|
||||
you would run:
|
||||
|
||||
```bash
|
||||
pytest -v jupyterhub/tests/test_api.py::test_shutdown
|
||||
```
|
||||
|
||||
For more details, refer to the [pytest usage documentation](https://pytest.readthedocs.io/en/latest/usage.html).
|
||||
|
||||
## Test organisation
|
||||
|
||||
The tests live in `jupyterhub/tests` and are organized roughly into:
|
||||
|
||||
1. `test_api.py` tests the REST API
|
||||
2. `test_pages.py` tests loading the HTML pages
|
||||
|
||||
and other collections of tests for different components.
|
||||
When writing a new test, there should usually be a test of
|
||||
similar functionality already written and related tests should
|
||||
be added nearby.
|
||||
|
||||
The fixtures live in `jupyterhub/tests/conftest.py`. There are
|
||||
fixtures that can be used for JupyterHub components, such as:
|
||||
|
||||
- `app`: an instance of JupyterHub with mocked parts
|
||||
- `auth_state_enabled`: enables persisting auth_state (like authentication tokens)
|
||||
- `db`: a sqlite in-memory DB session
|
||||
- `io_loop`: a Tornado event loop
|
||||
- `event_loop`: a new asyncio event loop
|
||||
- `user`: creates a new temporary user
|
||||
- `admin_user`: creates a new temporary admin user
|
||||
- single user servers
  - `cleanup_after`: allows cleanup of single user servers between tests
- mocked service
  - `MockServiceSpawner`: a spawner that mocks services for testing with a short poll interval
  - `mockservice`: mocked service with no external service url
  - `mockservice_url`: mocked service with a url to test external services
|
||||
|
||||
And fixtures to add functionality or spawning behavior:
|
||||
|
||||
- `admin_access`: grants admin access
|
||||
- `no_patience`: sets slow-spawning timeouts to zero
|
||||
- `slow_spawn`: enables the SlowSpawner (a spawner that takes a few seconds to start)
|
||||
- `never_spawn`: enables the NeverSpawner (a spawner that will never start)
|
||||
- `bad_spawn`: enables the BadSpawner (a spawner that fails immediately)
|
||||
- `slow_bad_spawn`: enables the SlowBadSpawner (a spawner that fails after a short delay)
|
||||
|
||||
Refer to the [pytest fixtures documentation](https://pytest.readthedocs.io/en/latest/fixture.html) to learn how to use existing fixtures and how to create new ones.
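
As a quick, illustrative sketch (assuming a hypothetical test module placed under
`jupyterhub/tests/`), a test simply requests the fixtures it needs as arguments:

```python
async def test_temporary_user_is_created(app, user):
    # "app" is the mocked JupyterHub instance, "user" a freshly created temporary user
    assert user.name  # the fixture generates a unique username
    assert app.db is not None  # the mocked Hub exposes its database session
```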
|
||||
|
||||
### The Pytest-Asyncio Plugin
|
||||
|
||||
When testing the various JupyterHub components and their various implementations, it sometimes becomes necessary to have a running instance of JupyterHub to test against.
|
||||
The [`app`](https://github.com/jupyterhub/jupyterhub/blob/270b61992143b29af8c2fab90c4ed32f2f6fe209/jupyterhub/tests/conftest.py#L60) fixture mocks a JupyterHub application for use in testing by:
|
||||
|
||||
- enabling ssl if internal certificates are available
|
||||
- creating an instance of [MockHub](https://github.com/jupyterhub/jupyterhub/blob/270b61992143b29af8c2fab90c4ed32f2f6fe209/jupyterhub/tests/mocking.py#L221) using any provided configurations as arguments
|
||||
- initializing the mocked instance
|
||||
- starting the mocked instance
|
||||
- finally, a registered finalizer function performs a cleanup and stops the mocked instance
|
||||
|
||||
The JupyterHub test suite uses the [pytest-asyncio plugin](https://pytest-asyncio.readthedocs.io/en/latest/) that handles [event-loop](https://docs.python.org/3/library/asyncio-eventloop.html) integration in [Tornado](https://www.tornadoweb.org/en/stable/) applications. This allows for the use of top-level awaits when calling async functions or [fixtures](https://docs.pytest.org/en/6.2.x/fixture.html#what-fixtures-are) during testing. All test functions and fixtures labelled as `async` will run on the same event loop.
|
||||
|
||||
```{note}
|
||||
With the introduction of [top-level awaits](https://piccolo-orm.com/blog/top-level-await-in-python/), the use of the `io_loop` fixture of the [pytest-tornado plugin](https://www.tornadoweb.org/en/stable/ioloop.html) is no longer necessary. It was initially used to call coroutines. With the upgrades made to `pytest-asyncio`, this usage is now deprecated. It is now only utilized within the JupyterHub test suite to ensure complete cleanup of resources used during testing, such as open file descriptors. This is demonstrated in this [pull request](https://github.com/jupyterhub/jupyterhub/pull/4332).
|
||||
More information is provided below.
|
||||
```
|
||||
|
||||
One of the general goals of the [JupyterHub Pytest Plugin project](https://github.com/jupyterhub/pytest-jupyterhub) is to ensure the MockHub cleanup fully closes and stops all utilized resources during testing, so the use of the `io_loop` fixture for teardown is not necessary. This was highlighted in this [issue](https://github.com/jupyterhub/pytest-jupyterhub/issues/30).
|
||||
|
||||
For more information on asyncio and event-loops, here are some resources:
|
||||
|
||||
- **Read**: [Introduction to the Python event loop](https://www.pythontutorial.net/python-concurrency/python-event-loop)
|
||||
- **Read**: [Overview of Async IO in Python 3.7](https://stackabuse.com/overview-of-async-io-in-python-3-7)
|
||||
- **Watch**: [Asyncio: Understanding Async / Await in Python](https://www.youtube.com/watch?v=bs9tlDFWWdQ)
|
||||
- **Watch**: [Learn Python's AsyncIO #2 - The Event Loop](https://www.youtube.com/watch?v=E7Yn5biBZ58)
|
||||
|
||||
## Troubleshooting Test Failures
|
||||
|
||||
### All the tests are failing
|
||||
|
||||
Make sure you have completed all the steps in {ref}`contributing:setup` successfully, and are able to access JupyterHub from your browser at http://localhost:8000 after starting `jupyterhub` in your command line.
|
||||
|
||||
## Code formatting and linting
|
||||
|
||||
JupyterHub automatically enforces code formatting. This means that pull requests
|
||||
with changes breaking this formatting will receive a commit from pre-commit.ci
|
||||
automatically.
|
||||
|
||||
To automatically format code locally, you can install pre-commit and register a
_git hook_ that automatically checks the formatting with pre-commit before you
make a commit.
|
||||
|
||||
```bash
|
||||
pip install pre-commit
|
||||
pre-commit install --install-hooks
|
||||
```
|
||||
|
||||
To run pre-commit manually you would do:
|
||||
|
||||
```bash
|
||||
# check for changes to code not yet committed
|
||||
pre-commit run
|
||||
|
||||
# check for changes also in already committed code
|
||||
pre-commit run --all-files
|
||||
```
|
||||
|
||||
You may also install [black integration](https://github.com/psf/black#editor-integration)
|
||||
into your text editor to format code automatically.
|
@@ -1,68 +0,0 @@
|
||||
.. _contributing/tests:
|
||||
|
||||
==================
|
||||
Testing JupyterHub
|
||||
==================
|
||||
|
||||
Unit test help validate that JupyterHub works the way we think it does,
|
||||
and continues to do so when changes occur. They also help communicate
|
||||
precisely what we expect our code to do.
|
||||
|
||||
JupyterHub uses `pytest <https://pytest.org>`_ for all our tests. You
|
||||
can find them under ``jupyterhub/tests`` directory in the git repository.
|
||||
|
||||
Running the tests
|
||||
==================
|
||||
|
||||
#. Make sure you have completed :ref:`contributing/setup`. You should be able
|
||||
to start ``jupyterhub`` from the commandline & access it from your
|
||||
web browser. This ensures that the dev environment is properly set
|
||||
up for tests to run.
|
||||
|
||||
#. You can run all tests in JupyterHub
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pytest -v jupyterhub/tests
|
||||
|
||||
This should display progress as it runs all the tests, printing
|
||||
information about any test failures as they occur.
|
||||
|
||||
If you wish to confirm test coverage the run tests with the `--cov` flag:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pytest -v --cov=jupyterhub jupyterhub/tests
|
||||
|
||||
#. You can also run tests in just a specific file:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pytest -v jupyterhub/tests/<test-file-name>
|
||||
|
||||
#. To run a specific test only, you can do:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pytest -v jupyterhub/tests/<test-file-name>::<test-name>
|
||||
|
||||
This runs the test with function name ``<test-name>`` defined in
|
||||
``<test-file-name>``. This is very useful when you are iteratively
|
||||
developing a single test.
|
||||
|
||||
For example, to run the test ``test_shutdown`` in the file ``test_api.py``,
|
||||
you would run:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pytest -v jupyterhub/tests/test_api.py::test_shutdown
|
||||
|
||||
|
||||
Troubleshooting Test Failures
|
||||
=============================
|
||||
|
||||
All the tests are failing
|
||||
-------------------------
|
||||
|
||||
Make sure you have completed all the steps in :ref:`contributing/setup` successfully, and
|
||||
can launch ``jupyterhub`` from the terminal.
|
@@ -1,46 +0,0 @@
|
||||
Eventlogging and Telemetry
|
||||
==========================
|
||||
|
||||
JupyterHub can be configured to record structured events from a running server using Jupyter's `Telemetry System`_. The types of events that JupyterHub emits are defined by `JSON schemas`_ listed at the bottom of this page_.
|
||||
|
||||
.. _logging: https://docs.python.org/3/library/logging.html
|
||||
.. _`Telemetry System`: https://github.com/jupyter/telemetry
|
||||
.. _`JSON schemas`: https://json-schema.org/
|
||||
|
||||
How to emit events
|
||||
------------------
|
||||
|
||||
Event logging is handled by its ``Eventlog`` object. This leverages Python's standing logging_ library to emit, filter, and collect event data.
|
||||
|
||||
|
||||
To begin recording events, you'll need to set two configurations:
|
||||
|
||||
1. ``handlers``: tells the EventLog *where* to route your events. This trait is a list of Python logging handlers that route events to
|
||||
2. ``allows_schemas``: tells the EventLog *which* events should be recorded. No events are emitted by default; all recorded events must be listed here.
|
||||
|
||||
Here's a basic example:
|
||||
|
||||
.. code-block::
|
||||
|
||||
import logging
|
||||
|
||||
c.EventLog.handlers = [
|
||||
logging.FileHandler('event.log'),
|
||||
]
|
||||
|
||||
c.EventLog.allowed_schemas = [
|
||||
'hub.jupyter.org/server-action'
|
||||
]
|
||||
|
||||
The output is a file, ``"event.log"``, with events recorded as JSON data.
|
||||
|
||||
|
||||
.. _page:
|
||||
|
||||
Event schemas
|
||||
-------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
server-actions.rst
|
310
docs/source/explanation/capacity-planning.md
Normal file
@@ -0,0 +1,310 @@
|
||||
(explanation:capacity-planning)=
|
||||
|
||||
# Capacity planning
|
||||
|
||||
General capacity planning advice for JupyterHub is hard to give,
|
||||
because it depends almost entirely on what your users are doing,
|
||||
and what JupyterHub users do varies _wildly_ in terms of resource consumption.
|
||||
|
||||
**There is no single answer to "I have X users, what resources do I need?" or "How many users can I support with this machine?"**
|
||||
|
||||
Here are three _typical_ Jupyter use patterns that require vastly different resources:
|
||||
|
||||
- **Learning**: negligible resources because computation is mostly idle,
|
||||
e.g. students learning programming for the first time
|
||||
- **Production code**: very intense, sustained load, e.g. training machine learning models
|
||||
- **Bursting**: _mostly_ idle, but needs a lot of resources for short periods of time
|
||||
(interactive research often looks like this)
|
||||
|
||||
But just because there's no single answer doesn't mean we can't help.
|
||||
So we have gathered here some useful information to help you make your decisions
|
||||
about what resources you need based on how your users work,
|
||||
including the relative invariants in terms of resources that JupyterHub itself needs.
|
||||
|
||||
## JupyterHub infrastructure
|
||||
|
||||
JupyterHub consists of a few components that are always running.
|
||||
These take up very little resources,
|
||||
especially relative to the resources consumed by users when you have more than a few.
|
||||
|
||||
As an example, an instance of mybinder.org (running JupyterHub 1.5.0),
|
||||
typically running ~100-150 users, has:
|
||||
|
||||
| Component | CPU (mean/peak) | Memory (mean/peak) |
|
||||
| --------- | --------------- | ------------------ |
|
||||
| Hub | 4% / 13% | (230 MB / 260 MB) |
|
||||
| Proxy | 6% / 13% | (47 MB / 65 MB) |
|
||||
|
||||
So it would be pretty generous to allocate ~25% of one CPU core
|
||||
and ~500MB of RAM to overall JupyterHub infrastructure.
|
||||
|
||||
The rest is going to be up to your users.
|
||||
Per-user overhead from JupyterHub is typically negligible
|
||||
up to at least a few hundred concurrent active users.
|
||||
|
||||
```{figure} /images/mybinder-hub-components-cpu-memory.png
|
||||
JupyterHub component resource usage for mybinder.org.
|
||||
```
|
||||
|
||||
## Factors to consider
|
||||
|
||||
### Static vs elastic resources
|
||||
|
||||
A big factor in planning resources is:
|
||||
**how much does it cost to change your mind?**
|
||||
If you are using a single shared machine with local storage,
|
||||
migrating to a new one because it turns out your users don't fit might be very costly.
|
||||
You will have to get a new machine, set it up, and maybe even migrate user data.
|
||||
|
||||
On the other hand, if you are using ephemeral resources,
|
||||
such as node pools in Kubernetes,
|
||||
changing resource types costs close to nothing
|
||||
because nodes can automatically be added or removed as needed.
|
||||
|
||||
Take that cost into account when you are picking how much memory or CPU to allocate to users.
|
||||
|
||||
Static resources (like [the-littlest-jupyterhub][]) provide **stable, predictable costs**,
while elastic resources (like [zero-to-jupyterhub][]) tend to provide **lower overall costs**
(especially when deployed with monitoring allowing cost optimizations over time)
that are **less predictable**.
|
||||
|
||||
[the-littlest-jupyterhub]: https://the-littlest-jupyterhub.readthedocs.io
|
||||
|
||||
[zero-to-jupyterhub]: https://z2jh.jupyter.org
|
||||
|
||||
(limits-requests)=
|
||||
|
||||
### Limit vs Request for resources
|
||||
|
||||
Many scheduling tools like Kubernetes have two separate ways of allocating resources to users.
|
||||
A **Request** or **Reservation** describes how much resources are _set aside_ for each user.
|
||||
Often, this doesn't have any practical effect other than deciding when a given machine is considered 'full'.
|
||||
If you are using expandable resources like an autoscaling Kubernetes cluster,
|
||||
a new node must be launched and added to the pool if you 'request' more resources than fit on currently running nodes (a cluster **scale-up event**).
|
||||
If you are running on a single VM, this describes how many users you can run at the same time, full stop.
|
||||
|
||||
A **Limit**, on the other hand, enforces a limit to how much resources any given user can consume.
|
||||
For more information on what happens when users try to exceed their limits, see [](oversubscription).
|
||||
|
||||
In the strictest, safest case, you can have these two numbers be the same.
|
||||
That means that each user is _limited_ to fit within the resources allocated to it.
|
||||
This avoids **[oversubscription](oversubscription)** of resources (allowing use of more than you have available),
|
||||
at the expense (in a literal, this-costs-money sense) of reserving lots of usually-idle capacity.
|
||||
|
||||
However, you often find that a small fraction of users use more resources than others.
|
||||
In this case you may give users limits that _go beyond the amount of resources requested_.
|
||||
This is called **oversubscribing** the resources available to users.
|
||||
|
||||
Having a gap between the request and the limit means you can fit a number of _typical_ users on a node (based on the request),
|
||||
but still limit how much a runaway user can gobble up for themselves.
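In JupyterHub itself, these two notions map onto the spawner's `*_guarantee` (request) and `*_limit` traits. Here is a minimal, illustrative sketch of `jupyterhub_config.py`, assuming a spawner (such as KubeSpawner) that actually enforces these traits; the bare LocalProcessSpawner does not:

```python
# jupyterhub_config.py - illustrative values only; enforcement depends on the spawner.
c.Spawner.mem_guarantee = "1G"  # 'request': memory set aside per user
c.Spawner.mem_limit = "3G"      # 'limit': hard cap a single user may consume
c.Spawner.cpu_guarantee = 0.5   # fraction of a core reserved per user
c.Spawner.cpu_limit = 1         # at most one full core per user
```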
|
||||
|
||||
(oversubscription)=
|
||||
|
||||
### Oversubscribed CPU is okay, running out of memory is bad
|
||||
|
||||
An important consideration when assigning resources to users is: **What happens when users need more than I've given them?**
|
||||
|
||||
A good summary to keep in mind:
|
||||
|
||||
> When tasks don't get enough CPU, things are slow.
|
||||
> When they don't get enough memory, things are broken.
|
||||
|
||||
This means it's **very important that users have enough memory**,
|
||||
but much less important that they always have exclusive access to all the CPU they can use.
|
||||
|
||||
This relates to [Limits and Requests](limits-requests),
|
||||
because these are the consequences of your limits and/or requests not matching what users actually try to use.
|
||||
|
||||
A table of mismatched resource allocation situations and their consequences:
|
||||
|
||||
| issue | consequence |
|
||||
| -------------------------------------------------------- | ------------------------------------------------------------------------------------- |
|
||||
| Requests too high | Unnecessarily high cost and/or low capacity. |
|
||||
| CPU limit too low | Poor performance experienced by users |
|
||||
| CPU oversubscribed (too-low request + too-high limit) | Poor performance across the system; may crash, if severe |
|
||||
| Memory limit too low | Servers killed by Out-of-Memory Killer (OOM); lost work for users |
|
||||
| Memory oversubscribed (too-low request + too-high limit) | System memory exhaustion - all kinds of hangs and crashes and weird errors. Very bad. |
|
||||
|
||||
Note that the 'oversubscribed' problem case is where the request is lower than _typical_ usage,
|
||||
meaning that the total reserved resources isn't enough for the total _actual_ consumption.
|
||||
This doesn't mean that _all_ your users exceed the request,
|
||||
just that the _limit_ gives enough room for the _average_ user to exceed the request.
|
||||
|
||||
All of these considerations are important _per node_.
|
||||
Larger nodes mean more users per node, and therefore more users to average over.
|
||||
It also means more chances for multiple outliers on the same node.
|
||||
|
||||
### Example case for oversubscribing memory
|
||||
|
||||
Take for example, this system and sampling of user behavior:
|
||||
|
||||
- System memory = 8G
|
||||
- memory request = 1G, limit = 3G
|
||||
- typical 'heavy' user: 2G
|
||||
- typical 'light' user: 0.5G
|
||||
|
||||
This will assign 8 users to those 8G of RAM (remember: only requests are used for deciding when a machine is 'full').
|
||||
As long as those 8 users' total _actual_ usage is under 8G, everything is fine.
|
||||
But the _limit_ allows a total of 24G to be used,
|
||||
which would be a mess if everyone used their full limit.
|
||||
But _not_ everyone uses the full limit, which is the point!
|
||||
|
||||
This pattern is fine if 1/8 of your users are 'heavy' because _typical_ usage will be ~0.7G,
|
||||
and your total usage will be ~5.5G (`1 × 2 + 7 × 0.5 = 5.5`).
|
||||
|
||||
But if _50%_ of your users are 'heavy' you have a problem because that means your users will be trying to use 10G (`4 × 2 + 4 × 0.5 = 10`),
|
||||
which you don't have.
|
||||
|
||||
You can make guesses at these numbers, but the only _real_ way to get them is to measure (see [](measuring)).
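If you want to sanity-check estimates like these, a few lines of back-of-the-envelope arithmetic are enough; this sketch simply reproduces the illustrative numbers from the example above:

```python
# Back-of-the-envelope check of the example above (illustrative numbers).
node_memory_gb = 8
request_gb = 1
users_per_node = node_memory_gb // request_gb  # 8 users fit, judged by requests alone

heavy_gb, light_gb = 2.0, 0.5
for heavy_fraction in (1 / 8, 1 / 2):
    n_heavy = round(users_per_node * heavy_fraction)
    expected = n_heavy * heavy_gb + (users_per_node - n_heavy) * light_gb
    print(f"{n_heavy} heavy users -> ~{expected:.1f}G expected vs {node_memory_gb}G available")
```

With 1 heavy user out of 8 this prints ~5.5G (fits); with 4 out of 8 it prints ~10.0G, which exceeds the node.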
|
||||
|
||||
### CPU:memory ratio
|
||||
|
||||
Most of the time, you'll find that only one resource is the limiting factor for your users.
|
||||
Most often it's memory, but for certain tasks, it could be CPU (or even GPUs).
|
||||
|
||||
Many cloud deployments have just one or a few fixed ratios of cpu to memory
|
||||
(e.g. 'general purpose', 'high memory', and 'high cpu').
|
||||
Setting your secondary resource allocation according to this ratio,
after selecting the more important limit, results in a balanced resource allocation;
a small worked example follows the table below.
|
||||
|
||||
For instance, some of Google Cloud's ratios are:
|
||||
|
||||
| node type | GB RAM / CPU core |
|
||||
| ----------- | ----------------- |
|
||||
| n2-highmem | 8 |
|
||||
| n2-standard | 4 |
|
||||
| n2-highcpu | 1 |
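For example, once you have settled on a memory limit, the matching CPU allocation is just that limit divided by your node type's GB-per-core ratio. A small sketch, using an illustrative 2G memory limit and the ratios from the table above:

```python
# Derive a CPU allocation from a chosen memory limit and the node's GB-per-core ratio.
mem_limit_gb = 2
gb_per_core = {"n2-highmem": 8, "n2-standard": 4, "n2-highcpu": 1}

for node_type, ratio in gb_per_core.items():
    print(f"{node_type}: ~{mem_limit_gb / ratio:g} CPU per user matches a {mem_limit_gb}G memory limit")
```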
|
||||
|
||||
(idleness)=
|
||||
|
||||
### Idleness
|
||||
|
||||
Jupyter being an interactive tool means people tend to spend a lot more time reading and thinking than actually running resource-intensive code.
|
||||
This significantly affects how much _cpu_ resources a typical active user needs,
|
||||
but often does not significantly affect the _memory_.
|
||||
|
||||
Ways to think about this:
|
||||
|
||||
- More idle users means unused CPU.
|
||||
This generally means setting your CPU _limit_ higher than your CPU _request_.
|
||||
- What do your users do when they _are_ running code?
|
||||
Is it typically single-threaded local computation in a notebook?
|
||||
If so, there's little reason to set a limit higher than 1 CPU core.
|
||||
- Do typical computations take a long time, or just a few seconds?
|
||||
Longer typical computations means it's more likely for users to be trying to use the CPU at the same moment,
|
||||
suggesting a higher _request_.
|
||||
- Even with idle users, parallel computation adds up quickly - one user fully loading 4 cores and 3 using almost nothing still averages to about a full CPU core per user.
|
||||
- Long-running intense computations suggest higher requests.
|
||||
|
||||
Again, using mybinder.org as an example—we run around 100 users on 8-core nodes,
|
||||
and still see fairly _low_ overall CPU usage on each user node.
|
||||
The limit here is actually Kubernetes' pods per node, not memory _or_ CPU.
|
||||
This is likely an extreme case, as many Binder users come from clicking links on webpages
|
||||
without any actual intention of running code.
|
||||
|
||||
```{figure} /images/mybinder-load5.png
|
||||
mybinder.org node CPU usage is low with 50-150 users sharing just 8 cores
|
||||
```
|
||||
|
||||
### Concurrent users and culling idle servers
|
||||
|
||||
Related to [](idleness), all of these resource consumptions and limits are calculated based on **concurrently active users**,
|
||||
not total users.
|
||||
You might have 10,000 users of your JupyterHub deployment, but only 100 of them running at any given time.
|
||||
That 100 is the main number you need to use for your capacity planning.
|
||||
JupyterHub costs scale very little based on the number of _total_ users,
|
||||
up to a point.
|
||||
|
||||
There are two important definitions for **active user**:
|
||||
|
||||
- Are they _actually_ there (i.e. a human interacting with Jupyter, or running code that might be left running unattended)
|
||||
- Is their server running (this is where resource reservations and limits are actually applied)
|
||||
|
||||
Connecting those two definitions (how long are servers running if their humans aren't using them) is an important area of deployment configuration, usually implemented via the [JupyterHub idle culler service][idle-culler].
|
||||
|
||||
[idle-culler]: https://github.com/jupyterhub/jupyterhub-idle-culler
|
||||
|
||||
There are a lot of considerations when it comes to culling idle users that will depend on your deployment:
|
||||
|
||||
- How much does it save me to shut down user servers? (e.g. keeping an elastic cluster small, or keeping a fixed-size deployment available to active users)
|
||||
- How much does it cost my users to have their servers shut down? (e.g. lost work if shutdown prematurely)
|
||||
- How easy do I want it to be for users to keep their servers running? (e.g. Do they want to run unattended simulations overnight? Do you want them to?)
|
||||
|
||||
Like many other things in this guide, there are many correct answers leading to different configuration choices.
|
||||
For more detail on culling configuration and considerations, consult the [JupyterHub idle culler documentation][idle-culler].
|
||||
|
||||
## More tips
|
||||
|
||||
### Start strict and generous, then measure
|
||||
|
||||
A good tip, in general, is to give your users as much resources as you can afford that you think they _might_ use.
|
||||
Then, use resource usage metrics like Prometheus to analyze what your users _actually_ need,
|
||||
and tune accordingly.
|
||||
Remember: **Limits affect your user experience and stability. Requests mostly affect your costs**.
|
||||
|
||||
For example, a sensible starting point (lacking any other information) might be:
|
||||
|
||||
```yaml
|
||||
request:
|
||||
cpu: 0.5
|
||||
mem: 2G
|
||||
limit:
|
||||
cpu: 1
|
||||
mem: 2G
|
||||
```
|
||||
|
||||
(more memory if significant computations are likely - machine learning models, data analysis, etc.)
|
||||
|
||||
Some actions you might take based on what you observe:
|
||||
|
||||
- If you see out-of-memory killer events, increase the limit (or talk to your users!)
|
||||
- If you see typical memory well below your limit, reduce the request (but not the limit)
|
||||
- If _nobody_ uses that much memory, reduce your limit
|
||||
- If CPU is your limiting scheduling factor and your CPUs are mostly idle,
|
||||
reduce the cpu request (maybe even to 0!).
|
||||
- If CPU usage continues to be low, increase the limit to 2 or 4 to allow bursts of parallel execution.
|
||||
|
||||
(measuring)=
|
||||
|
||||
### Measuring user resource consumption
|
||||
|
||||
It is _highly_ recommended to deploy monitoring services such as [Prometheus][]
|
||||
and [Grafana][] to get a view of your users' resource usage.
|
||||
This is the only way to truly know what your users need.
|
||||
|
||||
JupyterHub has some experimental [grafana dashboards][] you can use as a starting point,
|
||||
to keep an eye on your resource usage.
|
||||
Here are some sample charts (again from mybinder.org),
|
||||
showing >90% of users using less than 10% CPU and 200MB,
|
||||
but a few outliers near the limit of 1 CPU and 2GB of RAM.
|
||||
This is the kind of information you can use to tune your requests and limits.
|
||||
|
||||

|
||||
|
||||
[prometheus]: https://prometheus.io
|
||||
[grafana]: https://grafana.com
|
||||
[grafana dashboards]: https://github.com/jupyterhub/grafana-dashboards
|
||||
|
||||
### Measuring costs
|
||||
|
||||
Measuring costs may be as important as measuring your users' activity.
|
||||
If you are using a cloud provider, you can often use cost thresholds and quotas to instruct them to notify you if your costs are too high,
|
||||
e.g. "Have AWS send me an email if I hit X spending trajectory on week 3 of the month."
|
||||
You can then use this information to tune your resources based on what you can afford.
|
||||
You can mix this information with user resource consumption to figure out if you have a problem,
|
||||
e.g. "my users really do need X resources, but I can only afford to give them 80% of X."
|
||||
This information may prove useful when asking your budget-approving folks for more funds.
|
||||
|
||||
### Additional resources
|
||||
|
||||
There are lots of other resources for cost and capacity planning that may be specific to JupyterHub and/or your cloud provider.
|
||||
|
||||
Here are some useful links to other resources:
|
||||
|
||||
- [Zero to JupyterHub](https://z2jh.jupyter.org) documentation on
|
||||
- [projecting costs](https://z2jh.jupyter.org/en/latest/administrator/cost.html)
|
||||
- [configuring user resources](https://z2jh.jupyter.org/en/latest/jupyterhub/customizing/user-resources.html)
|
||||
- Cloud platform cost calculators:
|
||||
- [Google Cloud](https://cloud.google.com/products/calculator/)
|
||||
- [Amazon AWS](https://calculator.aws)
|
||||
- [Microsoft Azure](https://azure.microsoft.com/en-us/pricing/calculator/)
|
430
docs/source/explanation/concepts.md
Normal file
@@ -0,0 +1,430 @@
|
||||
(explanation:concepts)=
|
||||
|
||||
# JupyterHub: A conceptual overview
|
||||
|
||||
```{warning}
|
||||
This page could be missing cross-links to other parts of
|
||||
the documentation. You can help by adding them!
|
||||
```
|
||||
|
||||
JupyterHub is not what you think it is. Most things you think are
|
||||
part of JupyterHub are actually handled by some other component, for
|
||||
example the spawner or notebook server itself, and it's not always
|
||||
obvious how the parts relate. The knowledge contained here hasn't
|
||||
been assembled in one place before, and is essential to understand
|
||||
when setting up a sufficiently complex Jupyter(Hub) setup.
|
||||
|
||||
This document was originally written to assist in debugging: very
|
||||
often, the actual problem is not where one thinks it is and thus
|
||||
people can't easily debug. In order to tell this story, we start at
|
||||
JupyterHub and go all the way down to the fundamental components of
|
||||
Jupyter.
|
||||
|
||||
In this document, we occasionally leave things out or bend the truth
|
||||
where it helps in explanation, and give our explanations in terms of
|
||||
Python even though Jupyter itself is language-neutral. The "(&)"
|
||||
symbol highlights important points where this page leaves out or bends
|
||||
the truth for simplification of explanation, but there is more if you
|
||||
dig deeper.
|
||||
|
||||
This guide is long, but after reading it you will know all of the major
|
||||
components in the Jupyter ecosystem and everything else you read
|
||||
should make sense.
|
||||
|
||||
## What is Jupyter?
|
||||
|
||||
Before we get too far, let's remember what our end goal is. A
|
||||
**Jupyter Notebook** is nothing more than a Python(&) process
|
||||
which is getting commands from a web browser and displaying the output
|
||||
via that browser. What the process actually sees is roughly like
|
||||
getting commands on standard input(&) and writing to standard
|
||||
output(&). There is nothing intrinsically special about this process -
it can do anything a normal Python process can do, and nothing more.
|
||||
The **Jupyter kernel** handles capturing output and converting things
|
||||
such as graphics to a form usable by the browser.
|
||||
|
||||
Everything we explain below is building up to this, going through many
|
||||
different layers which give you many ways of customizing how this
|
||||
process runs.
|
||||
|
||||
## JupyterHub
|
||||
|
||||
**JupyterHub** is the central piece that provides multi-user
|
||||
login capabilities. Despite this, the end user only briefly interacts with
|
||||
JupyterHub and most of the actual Jupyter session does not relate to
|
||||
the hub at all: the hub mainly handles authentication and creating (JupyterHub calls it "spawning") the
|
||||
single-user server. In short, anything which is related to _starting_
|
||||
the user's workspace/environment is about JupyterHub, anything about
|
||||
_running_ usually isn't.
|
||||
|
||||
If you have problems connecting the authentication, spawning, and the
|
||||
proxy (explained below), the issue is usually with JupyterHub. To
|
||||
debug, JupyterHub has extensive logs which get printed to its console
|
||||
and can be used to discover most problems.
|
||||
|
||||
The main pieces of JupyterHub are:
|
||||
|
||||
### Authenticator
|
||||
|
||||
JupyterHub itself doesn't actually manage your users. It has a
|
||||
database of users, but it is usually connected with some other system
|
||||
that manages the usernames and passwords. When someone tries to log
|
||||
in to JupyterHub, it asks the
|
||||
**authenticator**([basics](authenticators),
|
||||
[reference](../reference/authenticators)) if the
|
||||
username/password is valid(&). The authenticator returns a username(&),
|
||||
which is passed on to the spawner, which has to use it to start that
|
||||
user's environment. The authenticator can also return user
|
||||
groups and admin status of users, so that JupyterHub can do some
|
||||
higher-level management.
|
||||
|
||||
The following authenticators are included with JupyterHub:
|
||||
|
||||
- **PAMAuthenticator** uses the standard Unix/Linux operating system
|
||||
functions to check users. Roughly, if someone already has access to
|
||||
the machine (they can log in by ssh), they will be able to log in to
|
||||
JupyterHub without any other setup. Thus, JupyterHub fills the role
|
||||
of an ssh server, but provides a web-browser based way to access the
|
||||
machine.
|
||||
|
||||
There are [plenty of others to choose from](https://github.com/jupyterhub/jupyterhub/wiki/Authenticators).
|
||||
You can connect to almost any other existing service to manage your
|
||||
users. You either use all users from this other service (e.g. your
|
||||
company), or enable only the allowed users (e.g. your group's
|
||||
GitHub usernames). Some other popular authenticators include:
|
||||
|
||||
- **OAuthenticator** uses the standard OAuth protocol to verify users.
|
||||
For example, you can easily use GitHub to authenticate your users -
people have a "click to log in with GitHub" button. This is often
done with an allowlist to only allow certain users.
|
||||
|
||||
- **NativeAuthenticator** actually stores and validates its own
|
||||
usernames and passwords, unlike most other authenticators. Thus,
|
||||
you can manage all your users within JupyterHub only.
|
||||
|
||||
- There are authenticators for LTI (learning management systems),
|
||||
Shibboleth, Kerberos - and so on.
|
||||
|
||||
The authenticator is configured with the
|
||||
`c.JupyterHub.authenticator_class` configuration option in the
|
||||
`jupyterhub_config.py` file.
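As an illustrative sketch only (the exact class path depends on which authenticator you install):

```python
# jupyterhub_config.py - pick one authenticator class.
# Default: check against local system users via PAM.
c.JupyterHub.authenticator_class = "jupyterhub.auth.PAMAuthenticator"

# Example alternative (assumes the separate oauthenticator package is installed):
# c.JupyterHub.authenticator_class = "oauthenticator.github.GitHubOAuthenticator"
```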
|
||||
|
||||
The authenticator runs internally to the Hub process but communicates
|
||||
with outside services.
|
||||
|
||||
If you have trouble logging in, this is usually a problem of the
|
||||
authenticator. The authenticator logs are part of the JupyterHub
|
||||
logs, but there may also be relevant information in whatever external
|
||||
services you are using.
|
||||
|
||||
### Spawner
|
||||
|
||||
The **spawner** ([basics](spawners),
|
||||
[reference](../reference/spawners)) is the real core of
|
||||
JupyterHub: when someone wants a notebook server, the spawner allocates
|
||||
resources and starts the server. The notebook server could run on the
|
||||
same machine as JupyterHub, on another machine, on some cloud service,
|
||||
or more. Administrators can limit resources (CPU, memory) or isolate users
|
||||
from each other - if the spawner supports it. Conversely, a spawner that is
not configured properly may apply no limits at all and allow any user to
access other users' files.
|
||||
|
||||
Some basic spawners included in JupyterHub are:
|
||||
|
||||
- **LocalProcessSpawner** is built into JupyterHub. Upon launch it tries
|
||||
to switch users to the given username (`su` (&)) and start the
|
||||
notebook server. It requires that the hub be run as root (because
|
||||
only root has permission to start processes as other user IDs).
|
||||
LocalProcessSpawner is no different than a user logging in with
|
||||
something like `ssh` and running `jupyter notebook`. PAMAuthenticator and
|
||||
LocalProcessSpawner is the most basic way of using JupyterHub (and
|
||||
what it does out of the box) and makes the hub not too dissimilar to
|
||||
an advanced ssh server.
|
||||
|
||||
There are [many more advanced spawners](/reference/spawners), and to
|
||||
show the diversity of spawning strategies, some are listed below:
|
||||
|
||||
- **SudoSpawner** is like LocalProcessSpawner but lets you run
|
||||
JupyterHub without root. `sudo` has to be configured to allow the
|
||||
hub's user to run processes under other user IDs.
|
||||
|
||||
- **SystemdSpawner** uses Systemd to start other processes. It can
|
||||
isolate users from each other and provide resource limiting.
|
||||
|
||||
- **DockerSpawner** runs stuff in Docker, a containerization system.
|
||||
This lets you fully isolate users, limit CPU, memory, and provide
|
||||
other container images to fully customize the environment.
|
||||
|
||||
- **KubeSpawner** runs on Kubernetes, a cloud orchestration
|
||||
system. The spawner can easily limit users and provide cloud
|
||||
scaling - but the spawner doesn't actually do that, Kubernetes
|
||||
does. The spawner just tells Kubernetes what to do. If you want to
|
||||
get KubeSpawner to do something, first you would figure out how to
|
||||
do it in Kubernetes, then figure out how to tell KubeSpawner to tell
|
||||
Kubernetes that. Actually... this is true for most spawners.
|
||||
|
||||
- **BatchSpawner** runs on computer clusters with batch job scheduling
|
||||
systems (e.g. Slurm, HTCondor, PBS, etc.). The user processes are run
as batch jobs, having access to all the data and software that the
users normally would.
|
||||
|
||||
In short, spawners are the interface to the rest of the operating
|
||||
system, and to configure them right you need to know a bit about how
|
||||
the corresponding operating system service works.
|
||||
|
||||
The spawner is responsible for the environment of the single-user
|
||||
notebook servers (described in the next section). In the end, it just
|
||||
makes a choice about how to start these processes: for example, the
|
||||
Docker spawner starts a normal Docker container and runs the right
|
||||
command inside of it. Thus, the spawner is responsible for setting
|
||||
what kind of software and data is available to the user.
|
||||
|
||||
The spawner runs internally to the Hub process but communicates with
|
||||
outside services. It is configured by `c.JupyterHub.spawner_class` in
|
||||
`jupyterhub_config.py`.
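A minimal sketch, with class paths shown purely for illustration:

```python
# jupyterhub_config.py - pick one spawner class.
# Default: run each server as a local process under the user's account.
c.JupyterHub.spawner_class = "jupyterhub.spawner.LocalProcessSpawner"

# Example alternative (assumes the dockerspawner package is installed):
# c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
```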
|
||||
|
||||
If a user tries to launch a notebook server and it doesn't work, the
|
||||
error is usually with the spawner or the notebook server (as described
|
||||
in the next section). Each spawner outputs some logs to the main
|
||||
JupyterHub logs, but may also have logs in other places depending on
|
||||
what services it interacts with (for example, the Docker spawner
|
||||
puts logs in the Docker system services, and Kubernetes logs are available through
the `kubectl` API).
|
||||
|
||||
### Proxy
|
||||
|
||||
The JupyterHub **proxy** relays connections between the users
|
||||
and their single-user notebook servers. What this basically means is
|
||||
that the hub itself can shut down and the proxy can continue to
|
||||
allow users to communicate with their notebook servers. (This
|
||||
further emphasizes that the hub is responsible for starting, not
|
||||
running, the notebooks). By default, the hub starts the proxy
|
||||
automatically
|
||||
and stops the proxy when the hub stops (so that connections get
|
||||
interrupted). But when you [configure the proxy to run
|
||||
separately](howto:separate-proxy),
|
||||
users' connections will continue to work even without the hub.
|
||||
|
||||
The default proxy is **ConfigurableHttpProxy** which is simple but
|
||||
effective. A more advanced option is the [**Traefik Proxy**](https://blog.jupyter.org/introducing-traefikproxy-a-new-jupyterhub-proxy-based-on-traefik-4839e972faf6),
|
||||
which gives you redundancy and high-availability.
|
||||
|
||||
When users "connect to JupyterHub", they _always_ first connect to the
|
||||
proxy and the proxy relays the connection to the hub. Thus, the proxy
|
||||
is responsible for SSL and accepting connections from the rest of the
|
||||
internet. The user uses the hub to authenticate and start the server,
|
||||
and then the hub connects back to the proxy to adjust the proxy routes
|
||||
for the user's server (e.g. the web path `/user/someone` redirects to
|
||||
the server of someone at a certain internal address). The proxy has
|
||||
to be able to internally connect to both the hub and all the
|
||||
single-user servers.
|
||||
|
||||
The proxy always runs as a separate process to JupyterHub (even though
|
||||
JupyterHub can start it for you). JupyterHub has one set of
|
||||
configuration options for the proxy addresses (`bind_url`) and one for
|
||||
the hub (`hub_bind_url`). If `bind_url` is given, it is just passed to
|
||||
the automatic proxy to tell it what to do.
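A sketch of the two addresses; the values shown correspond to the usual defaults (public port 8000, hub on localhost port 8081):

```python
# jupyterhub_config.py
c.JupyterHub.bind_url = "http://:8000"               # public-facing address, handled by the proxy
c.JupyterHub.hub_bind_url = "http://127.0.0.1:8081"  # internal address the hub itself listens on
```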
|
||||
|
||||
If you have problems after users are redirected to their single-user
|
||||
notebook servers, or making the first connection to the hub, it is
|
||||
usually caused by the proxy. The ConfigurableHttpProxy's logs are
|
||||
mixed with JupyterHub's logs if it's started through the hub (the
|
||||
default case); otherwise they come from whatever system runs the proxy (if you
|
||||
do configure it, you'll know).
|
||||
|
||||
### Services
|
||||
|
||||
JupyterHub has the concept of **services** ([basics](tutorial:services),
|
||||
[reference](services-reference)), which are other web services
|
||||
started by the hub, but otherwise are not necessarily related to the
|
||||
hub itself. They are often used to do things related to Jupyter
|
||||
(things that the user interacts with, usually not the hub), but could
|
||||
always be run some other way. Running from the hub provides an easy
|
||||
way to get Hub API tokens and authenticate users against the hub. It
|
||||
can also automatically add a proxy route to forward web requests to
|
||||
that service.
|
||||
|
||||
A common example of a service is the [cull idle
|
||||
servers](https://github.com/jupyterhub/jupyterhub-idle-culler)
|
||||
service. When started by the hub, it automatically gets admin API
|
||||
tokens. It uses the API to list all running servers, compare against
|
||||
activity timeouts, and shut down servers exceeding the limits. Even
|
||||
though this is an intrinsic part of JupyterHub, it is only loosely
|
||||
coupled and running as a service provides convenience of
|
||||
authentication - it could just as well be run some other way, with a
|
||||
manually provided API token.
|
||||
|
||||
The configuration option `c.JupyterHub.services` is used to start
|
||||
services from the hub.
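For illustration, registering the idle culler as a hub-managed service might look roughly like this sketch (recent JupyterHub versions additionally require granting the service permissions through roles):

```python
# jupyterhub_config.py - sketch of a hub-managed service (the idle culler).
import sys

c.JupyterHub.services = [
    {
        "name": "idle-culler",
        "command": [sys.executable, "-m", "jupyterhub_idle_culler", "--timeout=3600"],
    },
]
```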
|
||||
|
||||
When a service is started from JupyterHub automatically, its logs are
|
||||
included in the JupyterHub logs.
|
||||
|
||||
## Single-user notebook server
|
||||
|
||||
The **single-user notebook server** is the same thing you get by
|
||||
running `jupyter notebook` or `jupyter lab` from the command line -
|
||||
the actual Jupyter user interface for a single person.
|
||||
|
||||
The role of the spawner is to start this server - basically, running
|
||||
the command `jupyter notebook`. Actually it doesn't run that, it runs
|
||||
`jupyterhub-singleuser` which first communicates with the hub to say
|
||||
"I'm alive" before running a completely normal Jupyter server. The
|
||||
single-user server can be JupyterLab or classic notebooks. By this
|
||||
point, the hub is almost completely out of the picture (the web
|
||||
traffic is going through proxy unchanged). Also by this time, the
|
||||
spawner has already decided the environment which this single-user
|
||||
server will have and the single-user server has to deal with that.
|
||||
|
||||
The spawner starts the server using `jupyterhub-singleuser` with some
|
||||
environment variables like `JUPYTERHUB_API_TOKEN` and
|
||||
`JUPYTERHUB_BASE_URL` which tell the single-user server how to connect
|
||||
back to the hub in order to say that it's ready.
|
||||
|
||||
The single-user server options are **JupyterLab** and **classic
|
||||
Jupyter Notebook**. They both run through the same backend server process--the web
|
||||
frontend is an option when it is starting. The spawner can choose the
|
||||
command line when it starts the single-user server. Extensions are a
|
||||
property of the single-user server (in two parts: there can be a part
|
||||
that runs in the Python server process, and parts that run in
|
||||
javascript in lab or notebook).
|
||||
|
||||
If one wants to install software for users, it is not a matter of
|
||||
"installing it for JupyerHub" - it's a matter of installing it for the
|
||||
single-user server, which might be the same environment as the hub,
|
||||
but not necessarily. (see below - it's a matter of the kernels!)
|
||||
|
||||
After the single-user notebook server is started, any errors are only
|
||||
an issue of the single-user notebook server. Sometimes, it seems like
|
||||
the spawner is failing, but really the spawner is working but the
|
||||
single-user notebook server dies right away (in this case, you need to
|
||||
find the problem with the single-user server and adjust the spawner to
|
||||
start it correctly or fix the environment). This can happen, for
|
||||
example, if the spawner doesn't set an environment variable or doesn't
|
||||
provide storage.
|
||||
|
||||
The single-user server's logs are printed to stdout/stderr, and the
|
||||
spawner decides where those streams are directed, so if you
|
||||
notice problems at this phase you need to check your spawner for
|
||||
instructions for accessing the single-user logs. For example, the
|
||||
LocalProcessSpawner logs are just outputted to the same JupyterHub
|
||||
output logs, the SystemdSpawner logs are
|
||||
written to the Systemd journal, Docker and Kubernetes logs are written
|
||||
to Docker and Kubernetes respectively, and batchspawner output goes to
|
||||
the normal output places of batch jobs and is an explicit
|
||||
configuration option of the spawner.
|
||||
|
||||
**(Jupyter) Notebook** is the classic interface, where each notebook
|
||||
opens in a separate tab. It is traditionally started by `jupyter
|
||||
notebook`.
|
||||
|
||||
**JupyterLab** is the new interface, where multiple notebooks are
|
||||
openable in the same tab in an IDE-like environment. It is
|
||||
traditionally started with `jupyter lab`. Both Notebook and Lab use
|
||||
the same `.ipynb` file format.
|
||||
|
||||
JupyterLab is run through the same server file, but at a path `/lab`
|
||||
instead of `/tree`. Thus, they can be active at the same time in the
|
||||
backend and you can switch between them at runtime by changing your
|
||||
URL path.
|
||||
|
||||
Extensions need to be re-written for JupyterLab (if moving from
|
||||
classic notebooks). But, the server-side of the extensions can be
|
||||
shared by both.
|
||||
|
||||
## Kernel
|
||||
|
||||
The commands you run in the notebook session are not executed in the same process as
|
||||
the notebook itself, but in a separate **Jupyter kernel**. There are [many
|
||||
kernels
|
||||
available](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels).
|
||||
|
||||
As a basic approximation, a **Jupyter kernel** is a process which
|
||||
accepts commands (cells that are run) and returns the output to
|
||||
Jupyter to display. One example is the **IPython Jupyter kernel**,
|
||||
which runs Python. There is nothing special about it, it can be
|
||||
considered a normal Python process. The kernel process can be
|
||||
approximated in UNIX terms as a process that takes commands on stdin
|
||||
and returns stuff on stdout(&). Obviously, it's more because it has
|
||||
to be able to disentangle all the possible outputs, such as figures,
|
||||
and present it to the user in a web browser.
|
||||
|
||||
Kernel communication is via the ZeroMQ protocol on the local
|
||||
computer. Kernels are separate processes from the main single-user
|
||||
notebook server (and thus obviously, different from the JupyterHub
|
||||
process and everything else). By default (and unless you do something
|
||||
special), kernels share the same environment as the notebook server
|
||||
(data, resource limits, permissions, user id, etc.). But they _can_
|
||||
run in a separate Python environment from the single-user server
|
||||
(search `--prefix` in the [ipykernel installation
|
||||
instructions](https://ipython.readthedocs.io/en/stable/install/kernel_install.html)).
|
||||
There are also more fancy techniques such as the [Jupyter Kernel
|
||||
Gateway](https://jupyter-kernel-gateway.readthedocs.io/) and [Enterprise
|
||||
Gateway](https://jupyter-enterprise-gateway.readthedocs.io/), which
|
||||
allow you to run the kernels on a different machine and possibly with
|
||||
a different environment.
|
||||
|
||||
A kernel doesn't just execute its language - cell magics such as `%`,
|
||||
`%%`, and `!` are a property of the kernel - in particular, these are
|
||||
IPython kernel commands and don't necessarily work in any other
|
||||
kernel unless that kernel specifically supports them.
|
||||
|
||||
Kernels are yet _another_ layer of configurability.
|
||||
Each kernel can run a different programming language, with different
|
||||
software, and so on. By default, they would run in the same
|
||||
environment as the single-user notebook server, and the most common
|
||||
other way they are configured is by
|
||||
running in different Python virtual environments or conda
|
||||
environments. They can be started and killed independently (there is
|
||||
normally one per notebook you have open). The kernel uses
|
||||
most of your memory and CPU when running Jupyter - the rest of the web
|
||||
interface has a small footprint.
|
||||
|
||||
You can list your installed kernels with `jupyter kernelspec list`.
|
||||
If you look at one of `kernel.json` files in those directories, you
|
||||
will see exactly what command is run. These are normally
|
||||
automatically made by the kernels, but can be edited as needed. [The
|
||||
spec](https://jupyter-client.readthedocs.io/en/stable/kernels.html)
|
||||
tells you even more.
|
||||
|
||||
The kernel normally has to be reachable by the single-user notebook server
|
||||
but the gateways mentioned above can get around that limitation.
|
||||
|
||||
If you get problems with "Kernel died" or some other error in a single
|
||||
notebook but the single-user notebook server stays working, it is
|
||||
usually a problem with the kernel. It could be that you are trying to
|
||||
use more resources than you are allowed and the symptom is the kernel
|
||||
getting killed. It could be that it crashes for some other reason.
|
||||
In these cases, you need to find the kernel logs and investigate.
|
||||
|
||||
The debug logs for the kernel are normally mixed in with the
|
||||
single-user notebook server logs.
|
||||
|
||||
## JupyterHub distributions
|
||||
|
||||
There are several "distributions" which automatically install all of
|
||||
the things above and configure them for a certain purpose. They are
|
||||
good ways to get started, but if you have custom needs, eventually it
|
||||
may become hard to adapt them to your requirements.
|
||||
|
||||
- [**Zero to JupyterHub with
|
||||
Kubernetes**](https://zero-to-jupyterhub.readthedocs.io/) installs
|
||||
an entire scalable system using Kubernetes. Uses KubeSpawner,
|
||||
....Authenticator, ....
|
||||
|
||||
- [**The Littlest JupyterHub**](https://tljh.jupyter.org/) installs JupyterHub on a single system
|
||||
using SystemdSpawner and NativeAuthenticator (which manages users
|
||||
itself).
|
||||
|
||||
- [**JupyterHub the hard way**](https://github.com/jupyterhub/jupyterhub-the-hard-way/blob/master/docs/installation-guide-hard.md)
|
||||
takes you through everything yourself. It is a natural companion to
|
||||
this guide, since you get to experience every little bit.
|
||||
|
||||
## What's next?
|
||||
|
||||
Now you know everything. Well, you know how everything relates, but
|
||||
there are still plenty of details, implementations, and exceptions.
|
||||
When setting up JupyterHub, the first step is to consider the above
|
||||
layers, decide the right option for each of them, then begin putting
|
||||
everything together.
|
183
docs/source/explanation/database.md
Normal file
@@ -0,0 +1,183 @@
|
||||
(explanation:hub-database)=
|
||||
|
||||
# The Hub's Database
|
||||
|
||||
JupyterHub uses a database to store information about users, services, and other data needed for operating the Hub.
|
||||
This is the **state** of the Hub.
|
||||
|
||||
## Why does JupyterHub have a database?
|
||||
|
||||
JupyterHub is a **stateful** application (more on that 'state' later).
|
||||
Updating JupyterHub's configuration or upgrading the version of JupyterHub requires restarting the JupyterHub process to apply the changes.
|
||||
We want to minimize the disruption caused by restarting the Hub process, so it can be a mundane, frequent, routine activity.
|
||||
Storing state information outside the process for later retrieval is necessary for this, and one of the main things databases are for.
|
||||
|
||||
A lot of the operations in JupyterHub are also **relationships**, which is exactly what SQL databases are great at.
|
||||
For example:
|
||||
|
||||
- Given an API token, what user is making the request?
|
||||
- Which users don't have running servers?
|
||||
- Which servers belong to user X?
|
||||
- Which users have not been active in the last 24 hours?
|
||||
|
||||
Finally, a database allows us to have more information stored without needing it all loaded in memory,
|
||||
e.g. supporting a large number (several thousands) of inactive users.
|
||||
|
||||
## What's in the database?
|
||||
|
||||
The short answer of what's in the JupyterHub database is "everything."
|
||||
JupyterHub's **state** lives in the database.
|
||||
That is, everything JupyterHub needs to be aware of to function that _doesn't_ come from the configuration files, such as
|
||||
|
||||
- users, roles, role assignments
|
||||
- state, urls of running servers
|
||||
- Hashed API tokens
|
||||
- Short-lived state related to OAuth flow
|
||||
- Timestamps for when users, tokens, and servers were last used
|
||||
|
||||
### What's _not_ in the database
|
||||
|
||||
Not _quite_ all of JupyterHub's state is in the database.
|
||||
This mostly involves transient state, such as the 'pending' transitions of Spawners (starting, stopping, etc.).
|
||||
Anything not in the database must be reconstructed on Hub restart, and the only sources of information to do that are the database and JupyterHub configuration file(s).
|
||||
|
||||
## How does JupyterHub use the database?
|
||||
|
||||
JupyterHub makes some _unusual_ choices in how it connects to the database.
|
||||
These choices represent trade-offs favoring single-process simplicity and performance at the expense of horizontal scalability (multiple Hub instances).
|
||||
|
||||
We often say that the Hub 'owns' the database.
|
||||
This ownership means that we assume the Hub is the only process that will talk to the database.
|
||||
This assumption enables us to make several caching optimizations that dramatically improve JupyterHub's performance (i.e. data written recently to the database can be read from memory instead of fetched again from the database) that would not work if multiple processes could be interacting with the database at the same time.
|
||||
|
||||
Database operations are also synchronous, so while JupyterHub is waiting on a database operation, it cannot respond to other requests.
|
||||
This allows us to avoid complex locking mechanisms, because transaction races can only occur during an `await`, so we only need to make sure we've completed any given transaction before the next `await` in a given request.
|
||||
|
||||
:::{note}
|
||||
We are slowly working to remove these assumptions, and moving to a more traditional db session per-request pattern.
|
||||
This will enable multiple Hub instances and enable scaling JupyterHub, but will significantly reduce the number of active users a single Hub instance can serve.
|
||||
:::
|
||||
|
||||
### Database performance in a typical request
|
||||
|
||||
Most authenticated requests to JupyterHub involve a few database transactions:
|
||||
|
||||
1. look up the authenticated user (e.g. look up token by hash, then resolve owner and permissions)
|
||||
2. record activity
|
||||
3. perform any relevant changes involved in processing the request (e.g. create the records for a running server when starting one)
|
||||
|
||||
This means that the database is involved in almost every request, but only in quite small, simple queries, e.g.:
|
||||
|
||||
- lookup one token by hash
|
||||
- lookup one user by name
|
||||
- list tokens or servers for one user (typically 1-10)
|
||||
- etc.
|
||||
|
||||
### The database as a limiting factor
|
||||
|
||||
As a result of the above transactions in most requests, database performance is the _leading_ factor in JupyterHub's baseline requests-per-second performance, but that cost does not scale significantly with the number of users, active or otherwise.
|
||||
However, the database is _rarely_ a limiting factor in JupyterHub performance in a practical sense, because the main thing JupyterHub does is start, stop, and monitor whole servers, which take far more time than any small database transaction, no matter how many records you have or how slow your database is (within reason).
|
||||
Additionally, there is usually _very_ little load on the database itself.
|
||||
|
||||
By far the most taxing activity on the database is the 'list all users' endpoint, primarily used by the [idle-culling service](https://github.com/jupyterhub/jupyterhub-idle-culler).
|
||||
Database-based optimizations have been added to make even these operations feasible for large numbers of users:
|
||||
|
||||
1. State filtering on [GET /hub/api/users?state=active](rest-api-get-users),
|
||||
which limits the number of results in the query to only the relevant subset (added in JupyterHub 1.3), rather than all users.
|
||||
2. [Pagination](api-pagination) of all list endpoints, allowing the request of a large number of resources to be more fairly balanced with other Hub activities across multiple requests (added in 2.0).
|
||||
|
||||
:::{note}
|
||||
It's important to note, when discussing performance and limiting factors, that all of this only applies to requests to `/hub/...`.
|
||||
The Hub and its database are not involved in most requests to single-user servers (`/user/...`), which is by design, and largely motivated by the fact that the Hub itself doesn't _need_ to be fast because its operations are infrequent and large.
|
||||
:::
|
||||
|
||||
## Database backends
|
||||
|
||||
JupyterHub supports a variety of database backends via [SQLAlchemy][].
|
||||
The default is sqlite, which works great for many cases, but you should be able to use many backends supported by SQLAlchemy.
|
||||
Usually, this will mean PostgreSQL or MySQL, both of which are officially supported and well tested with JupyterHub, but others may work as well.
|
||||
See [SQLAlchemy's docs][sqlalchemy-dialect] for how to connect to different database backends.
|
||||
Doing so generally involves:
|
||||
|
||||
1. installing a Python package that provides a client implementation, and
|
||||
2. setting [](JupyterHub.db_url) to connect to your database with the specified implementation
|
||||
|
||||
[sqlalchemy-dialect]: https://docs.sqlalchemy.org/en/20/dialects/
|
||||
[sqlalchemy]: https://www.sqlalchemy.org
|
||||
|
||||
### Default backend: SQLite
|
||||
|
||||
The default database backend for JupyterHub is [SQLite](https://sqlite.org).
|
||||
We have chosen SQLite as JupyterHub's default because it's simple (the 'database' is a single file) and ubiquitous (it is in the Python standard library).
|
||||
It works very well for testing, small deployments, and workshops.
|
||||
|
||||
For production systems, SQLite has some disadvantages when used with JupyterHub:
|
||||
|
||||
- `upgrade-db` may not always work, and you may need to start with a fresh database
|
||||
- `downgrade-db` **will not** work if you want to roll back to an earlier
  version, so back up the `jupyterhub.sqlite` file before upgrading (JupyterHub automatically creates a date-stamped backup file when upgrading sqlite)
|
||||
|
||||
The sqlite documentation provides a helpful page about [when to use SQLite and
|
||||
where traditional RDBMS may be a better choice](https://sqlite.org/whentouse.html).
|
||||
|
||||
### Picking your database backend (PostgreSQL, MySQL)
|
||||
|
||||
When running a long term deployment or a production system, we recommend using a full-fledged relational database, such as [PostgreSQL](https://www.postgresql.org) or [MySQL](https://www.mysql.com), that supports the SQL `ALTER TABLE` statement, which is used in some database upgrade steps.
|
||||
|
||||
In general, you select your database backend with [](JupyterHub.db_url), and can further configure it (usually not necessary) with [](JupyterHub.db_kwargs).
|
||||
|
||||
## Notes and Tips
|
||||
|
||||
### SQLite
|
||||
|
||||
The SQLite database should not be used on NFS. SQLite uses reader/writer locks
|
||||
to control access to the database. This locking mechanism might not work
|
||||
correctly if the database file is kept on an NFS filesystem. This is because
|
||||
`fcntl()` file locking is broken on many NFS implementations. Therefore, you
|
||||
should avoid putting SQLite database files on NFS, since it does not cope well with
multiple processes trying to access the file at the same time.
|
||||
|
||||
### PostgreSQL
|
||||
|
||||
We recommend using PostgreSQL for production if you are unsure whether to use
|
||||
MySQL or PostgreSQL or if you do not have a strong preference.
|
||||
There is additional configuration required for MySQL that is not needed for PostgreSQL.
|
||||
|
||||
For example, to connect to a PostgreSQL database with psycopg2:
|
||||
|
||||
1. install psycopg2: `pip install psycopg2` (or `psycopg2-binary` to avoid compilation, which is [not recommended for production][psycopg2-binary])
|
||||
2. set authentication via environment variables `PGUSER` and `PGPASSWORD`
|
||||
3. configure [](JupyterHub.db_url):
|
||||
|
||||
```python
|
||||
c.JupyterHub.db_url = "postgresql+psycopg2://my-postgres-server:5432/my-db-name"
|
||||
```
|
||||
|
||||
[psycopg2-binary]: https://www.psycopg.org/docs/install.html#psycopg-vs-psycopg-binary
|
||||
|
||||
### MySQL / MariaDB
|
||||
|
||||
- You should probably use the `pymysql` or `mysqlclient` sqlalchemy provider, or another backend [recommended by sqlalchemy](https://docs.sqlalchemy.org/en/20/dialects/mysql.html#dialect-mysql)
|
||||
- You also need to set `pool_recycle` to some value (typically 60 - 300, JupyterHub will default to 60)
|
||||
which depends on your MySQL setup. This is necessary since MySQL kills
|
||||
connections serverside if they've been idle for a while, and the connection
|
||||
from the hub will be idle for longer than most connections. This behavior
|
||||
will lead to frustrating 'the connection has gone away' errors from
|
||||
sqlalchemy if `pool_recycle` is not set.
|
||||
- If you use `utf8mb4` collation with MySQL earlier than 5.7.7 or MariaDB
|
||||
earlier than 10.2.1 you may get an `1709, Index column size too large` error.
|
||||
To fix this you need to set `innodb_large_prefix` to enabled and
|
||||
`innodb_file_format` to `Barracuda` to allow for the index sizes jupyterhub
|
||||
uses. `row_format` will be set to `DYNAMIC` as long as those options are set
|
||||
correctly. Later versions of MariaDB and MySQL should set these values by
|
||||
default, as well as have a default `DYNAMIC` `row_format` and pose no trouble
|
||||
to users.
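As a sketch of overriding `pool_recycle` (JupyterHub already defaults it to 60 for MySQL, so this is only needed if your server's idle timeout calls for a different value):

```python
# jupyterhub_config.py - pass extra keyword arguments to the database engine.
c.JupyterHub.db_kwargs = {"pool_recycle": 300}
```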
|
||||
|
||||
For example, to connect to a mysql database with mysqlclient:
|
||||
|
||||
1. install mysqlclient: `pip install mysqlclient`
|
||||
2. configure [](JupyterHub.db_url):
|
||||
|
||||
```python
|
||||
c.JupyterHub.db_url = "mysql+mysqldb://myuser:mypassword@my-sql-server:3306/my-db-name"
|
||||
```
|
17
docs/source/explanation/index.md
Normal file
@@ -0,0 +1,17 @@
|
||||
(explanation)=
|
||||
|
||||
# Explanation
|
||||
|
||||
_Explanation_ documentation provides big-picture descriptions of how JupyterHub works. This section is meant to build your understanding of particular topics.
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 1
|
||||
|
||||
concepts
|
||||
capacity-planning
|
||||
database
|
||||
websecurity
|
||||
oauth
|
||||
singleuser
|
||||
../rbac/index
|
||||
```
|
@@ -1,26 +1,28 @@
|
||||
(explanation:hub-oauth)=
|
||||
|
||||
# JupyterHub and OAuth
|
||||
|
||||
JupyterHub uses OAuth 2 internally as a mechanism for authenticating users.
|
||||
JupyterHub uses [OAuth 2](https://oauth.net/2/) as an internal mechanism for authenticating users.
|
||||
As such, JupyterHub itself always functions as an OAuth **provider**.
|
||||
More on what that means [below](oauth-terms).
|
||||
You can find out more about what that means [below](oauth-terms).
|
||||
|
||||
Additionally, JupyterHub is _often_ deployed with [oauthenticator](https://oauthenticator.readthedocs.io),
|
||||
Additionally, JupyterHub is _often_ deployed with [OAuthenticator](https://oauthenticator.readthedocs.io),
|
||||
where an external identity provider, such as GitHub or KeyCloak, is used to authenticate users.
|
||||
When this is the case, there are _two_ nested oauth flows:
|
||||
an _internal_ oauth flow where JupyterHub is the **provider**,
|
||||
and and _external_ oauth flow, where JupyterHub is a **client**.
|
||||
When this is the case, there are _two_ nested OAuth flows:
|
||||
an _internal_ OAuth flow where JupyterHub is the **provider**,
|
||||
and an _external_ OAuth flow, where JupyterHub is the **client**.
|
||||
|
||||
This means that when you are using JupyterHub, there is always _at least one_ and often two layers of OAuth involved in a user logging in and accessing their server.
|
||||
|
||||
Some relevant points:
|
||||
The following points are noteworthy:
|
||||
|
||||
- Single-user servers _never_ need to communicate with or be aware of the upstream provider configured in your Authenticator.
|
||||
As far as they are concerned, only JupyterHub is an OAuth provider,
|
||||
As far as the servers are concerned, only JupyterHub is an OAuth provider,
|
||||
and how users authenticate with the Hub itself is irrelevant.
|
||||
- When talking to a single-user server,
|
||||
- When interacting with a single-user server,
|
||||
there are ~always two tokens:
|
||||
a token issued to the server itself to communicate with the Hub API,
|
||||
and a second per-user token in the browser to represent the completed login process and authorized permissions.
|
||||
first, a token issued to the server itself to communicate with the Hub API,
|
||||
and second, a per-user token in the browser to represent the completed login process and authorized permissions.
|
||||
More on this [later](two-tokens).
|
||||
|
||||
(oauth-terms)=
|
||||
@@ -28,66 +30,66 @@ Some relevant points:
|
||||
## Key OAuth terms
|
||||
|
||||
Here are some key definitions to keep in mind when we are talking about OAuth.
|
||||
You can also read more detail [here](https://www.oauth.com/oauth2-servers/definitions/).
|
||||
You can also read more in detail [here](https://www.oauth.com/oauth2-servers/definitions/).
|
||||
|
||||
- **provider** the entity responsible for managing identity and authorization,
|
||||
- **provider**: The entity responsible for managing identity and authorization;
|
||||
always a web server.
|
||||
JupyterHub is _always_ an oauth provider for JupyterHub's components.
|
||||
When OAuthenticator is used, an external service, such as GitHub or KeyCloak, is also an oauth provider.
|
||||
- **client** An entity that requests OAuth **tokens** on a user's behalf,
|
||||
JupyterHub is _always_ an OAuth provider for JupyterHub's components.
|
||||
When OAuthenticator is used, an external service, such as GitHub or KeyCloak, is also an OAuth provider.
|
||||
- **client**: An entity that requests OAuth **tokens** on a user's behalf;
|
||||
generally a web server of some kind.
|
||||
OAuth **clients** are services that _delegate_ authentication and/or authorization
|
||||
to an OAuth **provider**.
|
||||
JupyterHub _services_ or single-user _servers_ are OAuth **clients** of the JupyterHub **provider**.
|
||||
When OAuthenticator is used, JupyterHub is itself _also_ an OAuth **client** for the external oauth **provider**, e.g. GitHub.
|
||||
- **browser** A user's web browser, which makes requests and stores things like cookies
|
||||
- **token** The secret value used to represent a user's authorization. This is the final product of the OAuth process.
|
||||
- **code** A short-lived temporary secret that the **client** exchanges
|
||||
for a **token** at the conclusion of oauth,
|
||||
in what's generally called the "oauth callback handler."
|
||||
When OAuthenticator is used, JupyterHub is itself _also_ an OAuth **client** for the external OAuth **provider**, e.g. GitHub.
|
||||
- **browser**: A user's web browser, which makes requests and stores things like cookies.
|
||||
- **token**: The secret value used to represent a user's authorization. This is the final product of the OAuth process.
|
||||
- **code**: A short-lived temporary secret that the **client** exchanges
|
||||
for a **token** at the conclusion of OAuth,
|
||||
in what's generally called the "OAuth callback handler."
|
||||
|
||||
## One OAuth flow
|
||||
|
||||
OAuth **flow** is what we call the sequence of HTTP requests involved in authenticating a user and issuing a token, ultimately used for authorized access to a service or single-user server.
|
||||
OAuth **flow** is what we call the sequence of HTTP requests involved in authenticating a user and issuing a token, ultimately used for authorizing access to a service or single-user server.
|
||||
|
||||
A single oauth flow generally goes like this:
|
||||
A single OAuth flow typically goes like this:
|
||||
|
||||
### OAuth request and redirect
|
||||
|
||||
1. A **browser** makes an HTTP request to an oauth **client**.
|
||||
2. There are no credentials, so the client _redirects_ the browser to an "authorize" page on the oauth **provider** with some extra information:
|
||||
- the oauth **client id** of the client itself
|
||||
- the **redirect uri** to be redirected back to after completion
|
||||
1. A **browser** makes an HTTP request to an OAuth **client**.
|
||||
2. There are no credentials, so the client _redirects_ the browser to an "authorize" page on the OAuth **provider** with some extra information:
|
||||
- the OAuth **client ID** of the client itself.
|
||||
- the **redirect URI** to be redirected back to after completion.
|
||||
- the **scopes** requested, which the user should be presented with to confirm.
|
||||
This is the "X would like to be able to Y on your behalf. Allow this?" page you see on all the "Login with ..." pages around the Internet.
|
||||
3. During this authorize step,
|
||||
the browser must be _authenticated_ with the provider.
|
||||
This is often already stored in a cookie,
|
||||
but if not, the provider webapp must begin its _own_ authentication process before serving the authorization page.
|
||||
This _may_ even begin another oauth flow!
|
||||
This _may_ even begin another OAuth flow!
|
||||
4. After the user tells the provider that they want to proceed with the authorization,
|
||||
the provider records this authorization in a short-lived record called an **oauth code**.
|
||||
5. Finally, the oauth provider redirects the browser _back_ to the oauth client's "redirect uri"
|
||||
(or "oauth callback uri"),
|
||||
with the oauth code in a url parameter.
|
||||
the provider records this authorization in a short-lived record called an **OAuth code**.
|
||||
5. Finally, the OAuth provider redirects the browser _back_ to the OAuth client's "redirect URI"
|
||||
(or "OAuth callback URI"),
|
||||
with the OAuth code in a URL parameter.
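To make the steps above concrete, here is a rough sketch of the URLs carried by the two redirects; every name and value below is illustrative, not taken from any real provider or deployment.

```python
# Illustrative only: approximate shape of the authorize redirect (step 2)
# and the callback redirect (step 5); all values are made up.
authorize_url = (
    "https://provider.example/oauth2/authorize"
    "?client_id=my-client-id"
    "&redirect_uri=https%3A%2F%2Fclient.example%2Foauth_callback"
    "&response_type=code"
    "&scope=read%3Auser"
)
callback_url = "https://client.example/oauth_callback?code=abc123"
```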
|
||||
|
||||
That's the end of the requests made between the **browser** and the **provider**.
|
||||
That marks the end of the requests made between the **browser** and the **provider**.
|
||||
|
||||
### State after redirect
|
||||
|
||||
At this point:
|
||||
|
||||
- The browser is authenticated with the _provider_
|
||||
- The user's authorized permissions are recorded in an _oauth code_
|
||||
- The _provider_ knows that the given oauth client's requested permissions have been granted, but the client doesn't know this yet.
|
||||
- All requests so far have been made directly by the browser.
|
||||
No requests have originated at the client or provider.
|
||||
- The browser is authenticated with the _provider_.
|
||||
- The user's authorized permissions are recorded in an _OAuth code_.
|
||||
- The _provider_ knows that the permissions requested by the OAuth client have been granted, but the client doesn't know this yet.
|
||||
- All the requests so far have been made directly by the browser.
|
||||
No requests have originated from the client or provider.
|
||||
|
||||
### OAuth client handles the callback request
|
||||
|
||||
Now we get to finish the OAuth process.
|
||||
Let's dig into what the oauth client does when it handles
|
||||
the oauth callback request with the
|
||||
At this stage, we get to finish the OAuth process.
|
||||
Let's dig into what the OAuth client does when it handles
|
||||
the OAuth callback request.
|
||||
|
||||
- The OAuth client receives the _code_ and makes an API request to the _provider_ to exchange the code for a real _token_.
|
||||
This is the first direct request between the OAuth _client_ and the _provider_.
|
||||
@@ -95,12 +97,12 @@ the oauth callback request with the
|
||||
makes a second API request to the _provider_
|
||||
to retrieve information about the owner of the token (the user).
|
||||
This is the step where behavior diverges for different OAuth providers.
|
||||
Up to this point, all oauth providers are the same, following the oauth specification.
|
||||
However, oauth does not define a standard for exchanging tokens for information about their owner or permissions ([OpenID Connect](https://openid.net/connect/) does that),
|
||||
Up to this point, all OAuth providers are the same, following the OAuth specification.
|
||||
However, OAuth does not define a standard for exchanging a token for information about its owner or permissions ([OpenID Connect](https://openid.net/connect/) does that),
|
||||
so this step may be different for each OAuth provider.
|
||||
- Finally, the oauth client stores its own record that the user is authorized in a cookie.
|
||||
- Finally, the OAuth client stores its own record that the user is authorized in a cookie.
|
||||
This could be the token itself, or any other appropriate representation of successful authentication.
|
||||
- Last of all, now that credentials have been established,
|
||||
- Now that credentials have been established,
|
||||
the browser can be redirected to the _original_ URL where it started,
|
||||
to try the request again.
|
||||
If the client wasn't able to keep track of the original URL all this time
|
||||
@@ -113,24 +115,24 @@ So that's _one_ OAuth process.
|
||||
|
||||
## Full sequence of OAuth in JupyterHub
|
||||
|
||||
Let's go through the above oauth process in JupyterHub,
|
||||
with specific examples of each HTTP request and what information is contained.
|
||||
For bonus points, we are using the double-oauth example of JupyterHub configured with GitHubOAuthenticator.
|
||||
Let's go through the above OAuth process in JupyterHub,
|
||||
with specific examples of each HTTP request and what information it contains.
|
||||
For bonus points, we are using the double-OAuth example of JupyterHub configured with GitHubOAuthenticator.
|
||||
|
||||
To disambiguate, we will call the OAuth process where JupyterHub is the **provider** "internal oauth,"
|
||||
and the one with JupyterHub as a **client** "external oauth."
|
||||
To disambiguate, we will call the OAuth process where JupyterHub is the **provider** "internal OAuth,"
|
||||
and the one with JupyterHub as a **client** "external OAuth."
|
||||
|
||||
Our starting point:
|
||||
|
||||
- a user's single-user server is running. Let's call the user `danez`
|
||||
- jupyterhub is running with GitHub as an oauth provider (this means two full instances of oauth),
|
||||
- Danez has a fresh browser session with no cookies yet
|
||||
- JupyterHub is running with GitHub as an OAuth provider (this means two full instances of OAuth),
|
||||
- Danez has a fresh browser session with no cookies yet.
|
||||
|
||||
First request:
|
||||
|
||||
- browser->single-user server running JupyterLab or Jupyter Classic
|
||||
- `GET /user/danez/notebooks/mynotebook.ipynb`
|
||||
- no credentials, so single-user server (as an oauth **client**) starts internal oauth process with JupyterHub (the **provider**)
|
||||
- no credentials, so single-user server (as an OAuth **client**) starts internal OAuth process with JupyterHub (the **provider**)
|
||||
- response: 302 redirect -> `/hub/api/oauth2/authorize`
|
||||
with:
|
||||
- client-id=`jupyterhub-user-danez`
|
||||
@@ -138,9 +140,9 @@ First request:
|
||||
|
||||
Second request, following redirect:
|
||||
|
||||
- browser->jupyterhub
|
||||
- browser->JupyterHub
|
||||
- `GET /hub/api/oauth2/authorize`
|
||||
- no credentials, so jupyterhub starts external oauth process _with GitHub_
|
||||
- no credentials, so JupyterHub starts external OAuth process _with GitHub_
|
||||
- response: 302 redirect -> `https://github.com/login/oauth/authorize`
|
||||
with:
|
||||
- client-id=`jupyterhub-client-uuid`
|
||||
@@ -154,8 +156,8 @@ c.JupyterHub.authenticator_class = 'github'
|
||||
```
|
||||
|
||||
That means authenticating a request to the Hub itself starts
|
||||
a _second_, external oauth process with GitHub as a provider.
|
||||
This external oauth process is optional, though.
|
||||
a _second_, external OAuth process with GitHub as a provider.
|
||||
This external OAuth process is optional, though.
|
||||
If you were using the default username+password PAMAuthenticator,
|
||||
this redirect would have been to `/hub/login` instead, to present the user
|
||||
with a login form.
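For comparison, a deployment that keeps the default username+password login would configure something like the following in `jupyterhub_config.py`, in which case only JupyterHub's internal OAuth is involved:

```python
# sketch: the default PAMAuthenticator, i.e. no external OAuth provider
c.JupyterHub.authenticator_class = 'pam'
```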
|
||||
@@ -171,7 +173,7 @@ Here, GitHub prompts for login and asks for confirmation of authorization
|
||||
After successful authorization
|
||||
(either by looking up a pre-existing authorization,
|
||||
or recording it via form submission)
|
||||
GitHub issues an **oauth code** and redirects to `/hub/oauth_callback?code=github-code`
|
||||
GitHub issues an **OAuth code** and redirects to `/hub/oauth_callback?code=github-code`
|
||||
|
||||
Next request:
|
||||
|
||||
@@ -184,7 +186,7 @@ The first:
|
||||
|
||||
- JupyterHub->GitHub
|
||||
- `POST https://github.com/login/oauth/access_token`
|
||||
- request made with oauth **code** from url parameter
|
||||
- request made with OAuth **code** from URL parameter
|
||||
- response includes an access **token**
|
||||
|
||||
The second:
|
||||
@@ -194,9 +196,9 @@ The second:
|
||||
- request made with access **token** in the `Authorization` header
|
||||
- response is the user model, including username, email, etc.
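A rough sketch of these two requests is below. This is purely illustrative, not JupyterHub's actual implementation, and the client id, secret, and code values are made up.

```python
import requests

# 1. exchange the short-lived code for an access token
token_resp = requests.post(
    "https://github.com/login/oauth/access_token",
    data={
        "client_id": "jupyterhub-client-uuid",  # illustrative
        "client_secret": "client-secret",       # illustrative
        "code": "github-code",                  # from the callback URL
    },
    headers={"Accept": "application/json"},
)
access_token = token_resp.json()["access_token"]

# 2. use the token to fetch the user model from GitHub's API
user = requests.get(
    "https://api.github.com/user",
    headers={"Authorization": f"token {access_token}"},
).json()
```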
|
||||
|
||||
Now the external oauth callback request completes with:
|
||||
Now the external OAuth callback request completes with:
|
||||
|
||||
- set cookie on `/hub/` path, recording jupyterhub authentication so we don't need to do external oauth with GitHub again for a while
|
||||
- set cookie on `/hub/` path, recording JupyterHub authentication so we don't need to do external OAuth with GitHub again for a while
|
||||
- redirect -> `/hub/api/oauth2/authorize`
|
||||
|
||||
🎉 At this point, we have completed our first OAuth flow! 🎉
|
||||
@@ -211,14 +213,14 @@ Now, we get our first repeated request:
|
||||
2. automatically accepts authorization (shortcut taken when a user is visiting their own server)
|
||||
- redirect -> `/user/danez/oauth_callback?code=jupyterhub-code`
|
||||
|
||||
Here, we start the same oauth callback process as before, but at Danez's single-user server for the _internal_ oauth
|
||||
Here, we start the same OAuth callback process as before, but at Danez's single-user server for the _internal_ OAuth.
|
||||
|
||||
- browser->single-user server
|
||||
- `GET /user/danez/oauth_callback`
|
||||
|
||||
(in handler)
|
||||
|
||||
Inside the internal oauth callback handler,
|
||||
Inside the internal OAuth callback handler,
|
||||
Danez's server makes two API requests to JupyterHub:
|
||||
|
||||
The first:
|
||||
@@ -255,7 +257,7 @@ To authenticate this request, the single token stored in the encrypted cookie is
|
||||
|
||||
If the user model matches who should be allowed (e.g. Danez),
|
||||
then the request is allowed.
|
||||
See {doc}`../rbac/scopes` for how JupyterHub uses scopes to determine authorized access to servers and services.
|
||||
See [Scopes in JupyterHub](jupyterhub-scopes) for how JupyterHub uses scopes to determine authorized access to servers and services.
|
||||
|
||||
_the end_
|
||||
|
||||
@@ -271,15 +273,15 @@ To handle this, OAuth tokens and the various places they are stored can _expire_
|
||||
which should have the same effect as no credentials,
|
||||
and trigger the authorization process again.
|
||||
|
||||
In JupyterHub's internal oauth, we have these layers of information that can go stale:
|
||||
In JupyterHub's internal OAuth, we have these layers of information that can go stale:
|
||||
|
||||
- The oauth client has a **cache** of Hub responses for tokens,
|
||||
- The OAuth client has a **cache** of Hub responses for tokens,
|
||||
so it doesn't need to make API requests to the Hub for every request it receives.
|
||||
This cache has an expiry of five minutes by default,
|
||||
and is governed by the configuration `HubAuth.cache_max_age` in the single-user server.
|
||||
- The internal oauth token is stored in a cookie, which has its own expiry (default: 14 days),
|
||||
- The internal OAuth token is stored in a cookie, which has its own expiry (default: 14 days),
|
||||
governed by `JupyterHub.cookie_max_age_days`.
|
||||
- The internal oauth token can also itself expire,
|
||||
- The internal OAuth token itself can also expire,
|
||||
which is by default the same as the cookie expiry,
|
||||
since it makes sense for the token itself and the place it is stored to expire at the same time.
|
||||
This is governed by `JupyterHub.cookie_max_age_days` first,
|
||||
@@ -317,9 +319,9 @@ triggering the external login process anew before letting a user proceed.
|
||||
- If the token has expired, but is still in the cookie:
|
||||
when the token response cache expires,
|
||||
the next time the server asks the hub about the token,
|
||||
no user will be identified and the internal oauth process begins again.
|
||||
no user will be identified and the internal OAuth process begins again.
|
||||
- If the token _cookie_ expires, the next browser request will be made with no credentials,
|
||||
and the internal oauth process will begin again.
|
||||
and the internal OAuth process will begin again.
|
||||
This will usually have the form of a transparent redirect browsers won't notice.
|
||||
However, if this occurs on an API request in a long-lived page visit
|
||||
such as a JupyterLab session, the API request may fail and require
|
||||
@@ -352,7 +354,7 @@ Logging out of JupyterHub means clearing and revoking many of these credentials:
|
||||
### A tale of two tokens
|
||||
|
||||
**TODO**: discuss API token issued to server at startup ($JUPYTERHUB_API_TOKEN)
|
||||
and oauth-issued token in the cookie,
|
||||
and OAuth-issued token in the cookie,
|
||||
and some details of how JupyterLab currently deals with that.
|
||||
They are different, and JupyterLab should be making requests using the token from the cookie,
|
||||
not the token from the server,
|
109
docs/source/explanation/singleuser.md
Normal file
@@ -0,0 +1,109 @@
|
||||
(explanation:singleuser)=
|
||||
|
||||
# The JupyterHub single-user server
|
||||
|
||||
When a user logs into JupyterHub, they get a 'server', which we usually call the **single-user server**, because it's a server that's meant for a single JupyterHub user.
|
||||
Each JupyterHub user gets a different one (or more than one!).
|
||||
|
||||
A single-user server is a process running somewhere that is:
|
||||
|
||||
1. accessible over http[s],
|
||||
2. authenticated via JupyterHub using OAuth 2.0,
|
||||
3. started by a [Spawner](spawners), and
|
||||
4. 'owned' by a single JupyterHub user
|
||||
|
||||
## The single-user server command
|
||||
|
||||
The Spawner's default single-user server startup command, `jupyterhub-singleuser`, launches `jupyter-server`, the same program used when you run `jupyter lab` on your laptop.
|
||||
(_It can also launch the legacy `jupyter-notebook` server_).
|
||||
That's why JupyterHub looks familiar to folks who are already using Jupyter at home or elsewhere.
|
||||
It's the same!
|
||||
`jupyterhub-singleuser` _customizes_ that program to change (approximately) one thing: **authenticate requests with JupyterHub**.
|
||||
|
||||
(singleuser-auth)=
|
||||
|
||||
## Single-user server authentication
|
||||
|
||||
Implementation-wise, JupyterHub single-user servers are a special case of {ref}`services-reference`
|
||||
and as such use the same (OAuth) authentication mechanism (more on OAuth in JupyterHub at [](oauth)).
|
||||
This is primarily implemented in the {class}`~.HubOAuth` class.
|
||||
|
||||
This code resides in the `jupyterhub.singleuser` subpackage of JupyterHub.
|
||||
The main tasks of this code are to:
|
||||
|
||||
1. resolve a JupyterHub token to a JupyterHub user (authenticate)
|
||||
2. check permissions (`access:servers`) for the token to make sure the request should be allowed (authorize)
|
||||
3. if not authorized, begin the OAuth process with a redirect to the Hub
|
||||
4. after login, store OAuth tokens in a cookie only used by this single-user server
|
||||
5. implement logout to clear the cookie
|
||||
|
||||
Most of this is implemented in the {class}`~.HubOAuth` class. `jupyterhub.singleuser` is responsible for _adapting_ the base Jupyter Server to use HubOAuth for these tasks.
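To get a feel for what that delegation looks like, here is a minimal sketch of a Tornado handler that hands authentication off to the Hub using the same machinery (the `HubOAuthenticated` mixin from `jupyterhub.services.auth`). The handler itself is hypothetical, and a real service also needs the matching OAuth callback handler registered:

```python
from jupyterhub.services.auth import HubOAuthenticated
from tornado import web


class WhoAmIHandler(HubOAuthenticated, web.RequestHandler):
    """Hypothetical handler that delegates authentication to the Hub."""

    @web.authenticated  # redirects to the Hub's OAuth flow if not logged in
    def get(self):
        # current_user is the JupyterHub user model resolved from the token
        self.write(self.current_user)
```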
|
||||
|
||||
### JupyterHub authentication extension
|
||||
|
||||
By default, `jupyter-server` uses its own cookie to authenticate.
|
||||
If that cookie is not present, the server redirects you to a login page and asks you to enter a password or token.
|
||||
|
||||
Jupyter Server 2.0 introduces two new _APIs_ for customizing authentication: the [IdentityProvider](inv:jupyter-server#jupyter_server.auth.IdentityProvider) and the [Authorizer](inv:jupyter-server#jupyter_server.auth.Authorizer).
|
||||
More information can be found in the [Jupyter Server documentation](https://jupyter-server.readthedocs.io).
|
||||
|
||||
JupyterHub implements these APIs in `jupyterhub.singleuser.extension`.
|
||||
|
||||
The IdentityProvider is responsible for _authenticating_ requests.
|
||||
In JupyterHub, that means extracting OAuth tokens from the request and resolving them to a JupyterHub user.
|
||||
|
||||
The Authorizer is a _separate_ API for _authorizing_ actions on particular resources.
|
||||
Because the JupyterHub IdentityProvider only allows _authenticating_ users who already have the necessary `access:servers` permission to access the server, the default Authorizer only contains a redundant check for this same permission, and ignores the resource inputs.
|
||||
However, specifying a _custom_ Authorizer allows for granular permissions, such as read-only access to subsets of a shared server.
|
||||
|
||||
### JupyterHub authentication via subclass
|
||||
|
||||
Prior to Jupyter Server 2 (i.e. Jupyter Server 1.x or the legacy `jupyter-notebook` server), JupyterHub authentication is applied via _subclass_.
|
||||
Originally a subclass of `NotebookApp`,
|
||||
this approach works with both `jupyter-server` and `jupyter-notebook`.
|
||||
Instead of using the extension mechanisms above,
|
||||
the server application is _subclassed_. This worked well in the `jupyter-notebook` days,
|
||||
but doesn't fit well with Jupyter Server's extension-based architecture.
|
||||
|
||||
### Selecting jupyterhub-singleuser implementation
|
||||
|
||||
Using the JupyterHub singleuser-server extension is the default behavior of JupyterHub 4 with Jupyter Server 2; otherwise, the subclass approach is taken.
|
||||
|
||||
You can opt-out of the extension by setting the environment variable `JUPYTERHUB_SINGLEUSER_EXTENSION=0`:
|
||||
|
||||
```python
|
||||
c.Spawner.environment.update(
|
||||
{
|
||||
"JUPYTERHUB_SINGLEUSER_EXTENSION": "0",
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
The subclass approach will also be taken if you've opted to use the classic notebook server with:
|
||||
|
||||
```
|
||||
JUPYTERHUB_SINGLEUSER_APP=notebook
|
||||
```
|
||||
|
||||
which was introduced in JupyterHub 2.
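If you prefer to set this through the Spawner, mirroring the pattern above, it might look like:

```python
c.Spawner.environment.update(
    {
        # use the classic notebook server (subclass-based implementation)
        "JUPYTERHUB_SINGLEUSER_APP": "notebook",
    }
)
```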
|
||||
|
||||
## Other customizations
|
||||
|
||||
`jupyterhub-singleuser` makes other small customizations to how the single-user server behaves:
|
||||
|
||||
1. logs activity on the single-user server, used in [idle-culling](https://github.com/jupyterhub/jupyterhub-idle-culler).
|
||||
2. disables some features that don't make sense in JupyterHub (trash, retrying ports)
|
||||
3. loads options such as URLs and SSL configuration from the environment
|
||||
4. customizes logging for consistency with JupyterHub logs
|
||||
|
||||
## Running a single-user server that's not `jupyterhub-singleuser`
|
||||
|
||||
By default, `jupyterhub-singleuser` is the same `jupyter-server` used by JupyterLab, Jupyter notebook (>= 7), etc.
|
||||
But technically, all JupyterHub cares about is that it is:
|
||||
|
||||
1. an http server at the prescribed URL, accessible from the Hub and proxy, and
|
||||
2. authenticated via [OAuth](oauth) with the Hub (it doesn't even have to do this, if you want to do your own authentication, as is done in BinderHub)
|
||||
|
||||
which means that you can customize JupyterHub to launch _any_ web application that meets these criteria, by following the specifications in {ref}`services-reference`.
|
||||
|
||||
Most of the time, though, it's easier to use [jupyter-server-proxy](https://jupyter-server-proxy.readthedocs.io) if you want to launch additional web applications in JupyterHub.
|
274
docs/source/explanation/websecurity.md
Normal file
@@ -0,0 +1,274 @@
|
||||
(explanation:security)=
|
||||
|
||||
# Security Overview
|
||||
|
||||
The **Security Overview** section helps you learn about:
|
||||
|
||||
- the design of JupyterHub with respect to web security
|
||||
- the semi-trusted user
|
||||
- the available mitigations to protect untrusted users from each other
|
||||
- the value of periodic security audits
|
||||
|
||||
This overview also helps you obtain a deeper understanding of how JupyterHub
|
||||
works.
|
||||
|
||||
## Semi-trusted and untrusted users
|
||||
|
||||
JupyterHub is designed to be a _simple multi-user server for modestly sized
|
||||
groups_ of **semi-trusted** users. While the design reflects serving
|
||||
semi-trusted users, JupyterHub can also be suitable for serving **untrusted** users,
|
||||
but **is not suitable for untrusted users** in its default configuration.
|
||||
|
||||
As a result, using JupyterHub with **untrusted** users means more work by the
|
||||
administrator, since much care is required to secure a Hub, with extra caution on
|
||||
protecting users from each other.
|
||||
|
||||
One aspect of JupyterHub's _design simplicity_ for **semi-trusted** users is that
|
||||
the Hub and single-user servers are placed in a _single domain_, behind a
|
||||
[_proxy_][configurable-http-proxy]. If the Hub is serving untrusted
|
||||
users, many of the web's cross-site protections are not applied between
|
||||
single-user servers and the Hub, or between single-user servers and each
|
||||
other, since browsers see the whole thing (proxy, Hub, and single user
|
||||
servers) as a single website (i.e. single domain).
|
||||
|
||||
## Protect users from each other
|
||||
|
||||
To protect users from each other, a user must **never** be able to write arbitrary
|
||||
HTML and serve it to another user on the Hub's domain. This is prevented by JupyterHub's
|
||||
authentication setup because only the owner of a given single-user notebook server is
|
||||
allowed to view user-authored pages served by the given single-user notebook
|
||||
server.
|
||||
|
||||
To protect all users from each other, JupyterHub administrators must
|
||||
ensure that:
|
||||
|
||||
- A user **does not have permission** to modify their single-user notebook server,
|
||||
including:
|
||||
- the installation of new packages in the Python environment that runs
|
||||
their single-user server;
|
||||
- the creation of new files in any `PATH` directory that precedes the
|
||||
directory containing `jupyterhub-singleuser` (if the `PATH` is used
|
||||
to resolve the single-user executable instead of using an absolute path);
|
||||
- the modification of environment variables (e.g. PATH, PYTHONPATH) for
|
||||
their single-user server;
|
||||
- the modification of the configuration of the notebook server
|
||||
(the `~/.jupyter` or `JUPYTER_CONFIG_DIR` directory).
|
||||
- unrestricted selection of the base environment (e.g. the image used in container-based Spawners)
|
||||
|
||||
If any additional services are run on the same domain as the Hub, the services
|
||||
**must never** display user-authored HTML that is neither _sanitized_ nor _sandboxed_
|
||||
to any user that lacks authentication as the author of a file.
|
||||
|
||||
### Sharing access to servers
|
||||
|
||||
Because sharing access to servers (via `access:servers` scopes or the sharing feature in JupyterHub 5) by definition means users can serve each other files, enabling sharing is not suitable for untrusted users without also enabling per-user domains.
|
||||
|
||||
JupyterHub does not enable any sharing by default.
|
||||
|
||||
## Mitigate security issues
|
||||
|
||||
Several approaches to mitigating security issues with configuration
|
||||
options provided by JupyterHub include:
|
||||
|
||||
(subdomains)=
|
||||
|
||||
### Enable user subdomains
|
||||
|
||||
JupyterHub provides the ability to run single-user servers on their own
|
||||
domains. This means the cross-origin protections between servers have the
|
||||
desired effect, and user servers and the Hub are protected from each other.
|
||||
|
||||
**Subdomains are the only way to reliably isolate user servers from each other.**
|
||||
|
||||
To enable subdomains, set:
|
||||
|
||||
```python
|
||||
c.JupyterHub.subdomain_host = "https://jupyter.example.org"
|
||||
```
|
||||
|
||||
When subdomains are enabled, each user's single-user server will be at e.g. `https://username.jupyter.example.org`.
|
||||
This also requires all user subdomains to point to the same address,
|
||||
which is most easily accomplished with wildcard DNS, where a single A record points to your server and a wildcard CNAME record points to your A record:
|
||||
|
||||
```
|
||||
A jupyter.example.org 192.168.1.123
|
||||
CNAME *.jupyter.example.org jupyter.example.org
|
||||
```
|
||||
|
||||
Since this spreads the service across multiple domains, you will likely need wildcard SSL as well,
|
||||
matching `*.jupyter.example.org`.
|
||||
|
||||
Unfortunately, for many institutional domains, wildcard DNS and SSL may not be available.
|
||||
|
||||
We also **strongly encourage** serving JupyterHub and user content on a domain that is _not_ a subdomain of any sensitive content.
|
||||
For reasoning, see [GitHub's discussion of moving user content to github.io from \*.github.com](https://github.blog/2013-04-09-yummy-cookies-across-domains/).
|
||||
|
||||
**If you do plan to serve untrusted users, enabling subdomains is highly encouraged**,
|
||||
as it resolves many security issues that are difficult or impossible to avoid when JupyterHub is on a single domain.
|
||||
|
||||
:::{important}
|
||||
JupyterHub makes no guarantees about protecting users from each other unless subdomains are enabled.
|
||||
|
||||
If you want to protect users from each other, you **_must_** enable per-user domains.
|
||||
:::
|
||||
|
||||
### Disable user config
|
||||
|
||||
If subdomains are unavailable or undesirable, JupyterHub provides a
|
||||
configuration option `Spawner.disable_user_config = True`, which can be set to prevent
|
||||
the user-owned configuration files from being loaded. After implementing this
|
||||
option, `PATH`s and package installation are the other things that the
|
||||
admin must enforce.
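For example, in `jupyterhub_config.py`:

```python
# do not load config from users' ~/.jupyter (or JUPYTER_CONFIG_DIR)
c.Spawner.disable_user_config = True
```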
|
||||
|
||||
### Prevent spawners from evaluating shell configuration files
|
||||
|
||||
For most Spawners, `PATH` is not something users can influence, but it's important that
|
||||
the Spawner should _not_ evaluate shell configuration files prior to launching the server.
|
||||
|
||||
### Isolate packages in a read-only environment
|
||||
|
||||
The user must not have permission to install packages into the environment where the singleuser-server runs.
|
||||
On a shared system, package isolation is most easily handled by running the single-user server in
|
||||
a root-owned virtualenv with disabled system-site-packages.
|
||||
The user must not have permission to install packages into this environment.
|
||||
The same principle extends to the images used by container-based deployments.
|
||||
If users can select the images in which their servers run, they can disable all security for their own servers.
|
||||
|
||||
It is important to note that the control over the environment is only required for the
|
||||
single-user server, and not the environment(s) in which the users' kernel(s)
|
||||
may run. Installing additional packages in the kernel environment does not
|
||||
pose additional risk to the web application's security.
|
||||
|
||||
### Encrypt internal connections with SSL/TLS
|
||||
|
||||
By default, all communications within JupyterHub—between the proxy, hub, and single-user
|
||||
notebooks—are performed unencrypted. Setting the `internal_ssl` flag in
|
||||
`jupyterhub_config.py` secures the aforementioned routes. Turning this
|
||||
feature on does require that the enabled `Spawner` can use the certificates
|
||||
generated by the `Hub` (the default `LocalProcessSpawner` can, for instance).
|
||||
|
||||
It is also important to note that this encryption **does not** cover the
|
||||
`zmq tcp` sockets between the Notebook client and kernel yet. While users cannot
|
||||
submit arbitrary commands to another user's kernel, they can bind to these
|
||||
sockets and listen. When serving untrusted users, this eavesdropping can be
|
||||
mitigated by setting `KernelManager.transport` to `ipc`. This applies standard
|
||||
Unix permissions to the communication sockets thereby restricting
|
||||
communication to the socket owner. The `internal_ssl` option will eventually
|
||||
extend to securing the `tcp` sockets as well.
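As a sketch, the two settings mentioned above look like this; the first belongs in `jupyterhub_config.py`, the second in the single-user server's Jupyter configuration:

```python
# jupyterhub_config.py: encrypt proxy <-> hub <-> single-user traffic
c.JupyterHub.internal_ssl = True

# single-user Jupyter config: use Unix (ipc) sockets for kernel traffic,
# so standard file permissions restrict who can listen
c.KernelManager.transport = 'ipc'
```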
|
||||
|
||||
### Mitigating same-origin deployments
|
||||
|
||||
While per-user domains are **required** for robust protection of users from each other,
|
||||
you can mitigate many (but not all) cross-user issues.
|
||||
First, it is critical that users cannot modify their server environments, as described above.
|
||||
Second, it is important that users do not have `access:servers` permission to any server other than their own.
|
||||
|
||||
If users can access each others' servers, additional security measures must be enabled, some of which come with distinct user-experience costs.
|
||||
|
||||
Without the [Same-Origin Policy] (SOP) protecting user servers from each other,
|
||||
each user server is considered a trusted origin for requests to each other user server (and the Hub itself).
|
||||
Servers _cannot_ meaningfully distinguish requests originating from other user servers,
|
||||
because SOP implies a great deal of trust, losing many restrictions applied to cross-origin requests.
|
||||
|
||||
That means pages served from each user server can:
|
||||
|
||||
1. arbitrarily modify the path in the Referer
|
||||
2. make fully authorized requests with cookies
|
||||
3. access full page contents served from the hub or other servers via popups
|
||||
|
||||
JupyterHub uses distinct XSRF tokens stored in cookies on each server path to attempt to limit requests across servers.
|
||||
This has limitations because not all requests are protected by these XSRF tokens,
|
||||
and unless additional measures are taken, the XSRF tokens from other user prefixes may be retrieved.
|
||||
|
||||
[Same-Origin Policy]: https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
|
||||
|
||||
For example:
|
||||
|
||||
- `Content-Security-Policy` header must prohibit popups and iframes from the same origin.
|
||||
The following Content-Security-Policy rules are _insecure_ and readily enable users to access each others' servers:
|
||||
|
||||
- `frame-ancestors: 'self'`
|
||||
- `frame-ancestors: '*'`
|
||||
- `sandbox allow-popups`
|
||||
|
||||
- Ideally, pages should use the strictest `Content-Security-Policy: sandbox` available,
|
||||
but this is not feasible in general for JupyterLab pages, which need at least `sandbox allow-same-origin allow-scripts` to work.
|
||||
|
||||
The default Content-Security-Policy for single-user servers is
|
||||
|
||||
```
|
||||
frame-ancestors: 'none'
|
||||
```
|
||||
|
||||
which prohibits iframe embedding, but not pop-ups.
|
||||
|
||||
A more secure Content-Security-Policy that has some costs to user experience is:
|
||||
|
||||
```
|
||||
frame-ancestors: 'none'; sandbox allow-same-origin allow-scripts
|
||||
```
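One way to apply such a policy, as a sketch and assuming administrators control the user environment, is through the Jupyter Server `tornado_settings` headers in a `jupyter_server_config.py` shipped with that environment:

```python
# jupyter_server_config.py in the (admin-controlled) user environment
c.ServerApp.tornado_settings = {
    "headers": {
        "Content-Security-Policy": (
            "frame-ancestors 'none'; sandbox allow-same-origin allow-scripts"
        )
    }
}
```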
|
||||
|
||||
`allow-popups` is not disabled by default because disabling it breaks legitimate functionality, like "Open this in a new tab", and the "JupyterHub Control Panel" menu item.
|
||||
To reiterate, the right way to avoid these issues is to enable per-user domains, where none of these concerns come up.
|
||||
|
||||
Note: even this level of protection requires that administrators maintain full control over the user server environment.
|
||||
If users can modify their server environment, these methods are ineffective, as users can readily disable them.
|
||||
|
||||
### Cookie tossing
|
||||
|
||||
Cookie tossing is a technique where another server on a subdomain or peer subdomain can set a cookie
|
||||
which will be read on another domain.
|
||||
This is not relevant unless there are other user-controlled servers on a peer domain.
|
||||
|
||||
"Domain-locked" cookies avoid this issue, but have their own restrictions:
|
||||
|
||||
- JupyterHub must be served over HTTPS
|
||||
- All secure cookies must be set on `/`, not on sub-paths, which means they are shared by all JupyterHub components in a single-domain deployment.
|
||||
|
||||
As a result, this option is only recommended when per-user subdomains are enabled,
|
||||
to prevent sending all jupyterhub cookies to all user servers.
|
||||
|
||||
To enable domain-locked cookies, set:
|
||||
|
||||
```python
|
||||
c.JupyterHub.cookie_host_prefix_enabled = True
|
||||
```
|
||||
|
||||
```{versionadded} 4.1
|
||||
|
||||
```
|
||||
|
||||
### Forced-login
|
||||
|
||||
Jupyter servers can share links with `?token=...`.
|
||||
JupyterHub prior to 5.0 will accept this request and persist the token for future requests.
|
||||
This is useful for enabling admins to create 'fully authenticated' links bypassing login.
|
||||
However, it also means users can share their own links that will log other users into their own servers,
|
||||
enabling them to serve each other notebooks and other arbitrary HTML, depending on server configuration.
|
||||
|
||||
```{versionadded} 4.1
|
||||
Setting environment variable `JUPYTERHUB_ALLOW_TOKEN_IN_URL=0` in the single-user environment can opt out of accepting token auth in URL parameters.
|
||||
```
|
||||
|
||||
```{versionadded} 5.0
|
||||
Accepting tokens in URLs is disabled by default, and `JUPYTERHUB_ALLOW_TOKEN_IN_URL=1` environment variable must be set to _allow_ token auth in URL parameters.
|
||||
```
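The variable can be set through the Spawner; as a sketch:

```python
c.Spawner.environment.update(
    {
        # explicitly disable token-in-URL auth (the default in JupyterHub 5.0+)
        "JUPYTERHUB_ALLOW_TOKEN_IN_URL": "0",
    }
)
```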
|
||||
|
||||
## Security audits
|
||||
|
||||
We recommend that you do periodic reviews of your deployment's security. It's
|
||||
good practice to keep [JupyterHub](https://readthedocs.org/projects/jupyterhub/), [configurable-http-proxy][], and [nodejs
|
||||
versions](https://github.com/nodejs/Release) up to date.
|
||||
|
||||
A handy website for testing your deployment is
|
||||
[Qualys' SSL analyzer tool](https://www.ssllabs.com/ssltest/analyze.html).
|
||||
|
||||
[configurable-http-proxy]: https://github.com/jupyterhub/configurable-http-proxy
|
||||
|
||||
## Vulnerability reporting
|
||||
|
||||
If you believe you have found a security vulnerability in JupyterHub, or any
|
||||
Jupyter project, please report it to
|
||||
[security@ipython.org](mailto:security@ipython.org). If you prefer to encrypt
|
||||
your security reports, you can use [this PGP public
|
||||
key](https://jupyter.org/assets/ipython_security.asc).
|
78
docs/source/faq/faq.md
Normal file
@@ -0,0 +1,78 @@
|
||||
(faq)=
|
||||
|
||||
# Frequently asked questions
|
||||
|
||||
## How do I share links to notebooks?
|
||||
|
||||
Sharing links to notebooks is a common activity,
|
||||
and can look different depending on what you mean by 'share.'
|
||||
Your first instinct might be to copy the URL you see in the browser,
|
||||
e.g. `jupyterhub.example/user/yourname/notebooks/coolthing.ipynb`,
|
||||
but this usually won't work, depending on the permissions of the person you share the link with.
|
||||
|
||||
Unfortunately, 'share' means at least a few things to people in a JupyterHub context.
|
||||
We'll cover 3 common cases here, when they are applicable, and what assumptions they make:
|
||||
|
||||
1. sharing links that will open the same file on the visitor's own server
|
||||
2. sharing links that will bring the visitor to _your_ server (e.g. for real-time collaboration, or RTC)
|
||||
3. publishing notebooks and sharing links that will download the notebook into the user's server
|
||||
|
||||
### link to the same file on the visitor's server
|
||||
|
||||
This is for the case where you have JupyterHub on a shared (or sufficiently similar) filesystem, where you want to share a link that will cause users to login and start their _own_ server, to view or edit the file.
|
||||
|
||||
**Assumption:** the same path on someone else's server is valid and points to the same file
|
||||
|
||||
This is useful in e.g. classes where you know students have certain files in certain locations, or collaborations where you know you have a shared filesystem where everyone has access to the same files.
|
||||
|
||||
A link should look like `https://jupyterhub.example/hub/user-redirect/lab/tree/foo.ipynb`.
|
||||
You can hand-craft these URLs from the URL you are looking at, where you see `/user/name/lab/tree/foo.ipynb` use `/hub/user-redirect/lab/tree/foo.ipynb` (replace `/user/name/` with `/hub/user-redirect/`).
|
||||
Or you can use JupyterLab's "copy shareable link" in the context menu in the file browser:
|
||||
|
||||

|
||||
|
||||
which will produce a correct URL with `/hub/user-redirect/` in it.
|
||||
|
||||
### link to the file on your server
|
||||
|
||||
This is for the case where you both want to be using _your_ server, e.g. for real-time collaboration (RTC).
|
||||
|
||||
**Assumption:** the user has (or should have) access to your server.
|
||||
|
||||
**Assumption:** your server is running _or_ the user has permission to start it.
|
||||
|
||||
By default, JupyterHub users don't have access to each other's servers, but JupyterHub 2.0 administrators can grant users limited access permissions to each other's servers.
|
||||
If the visitor doesn't have access to the server, these links will result in a 403 Permission Denied error.
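If you are the administrator and want to grant such access, one approach (a sketch assuming JupyterHub 2.0+ RBAC; the role and user names are hypothetical) is a custom role in `jupyterhub_config.py`:

```python
c.JupyterHub.load_roles = [
    {
        "name": "access-yourname-server",            # hypothetical role name
        "scopes": ["access:servers!user=yourname"],  # only yourname's servers
        "users": ["collaborator"],                   # hypothetical collaborator
    }
]
```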
|
||||
|
||||
In many cases, for this situation you can copy the link in your URL bar (`/user/yourname/lab`), or you can add `/tree/path/to/specific/notebook.ipynb` to open a specific file.
|
||||
|
||||
The [jupyterlab-link-share] JupyterLab extension generates these links, and can even _grant_ other users access to your server.
|
||||
|
||||
[jupyterlab-link-share]: https://github.com/jupyterlab-contrib/jupyterlab-link-share
|
||||
|
||||
:::{warning}
|
||||
Note that the way the extension _grants_ access is by handing over credentials that allow the other user to **_BECOME YOU_**.
|
||||
This is usually not appropriate in JupyterHub.
|
||||
:::
|
||||
|
||||
### link to a published copy
|
||||
|
||||
Another way to 'share' notebooks is to publish copies, e.g. pushing the notebook to a git repository and sharing a download link.
|
||||
This way is especially useful for course materials,
|
||||
where no assumptions are necessary about the user's environment,
|
||||
except for having one package installed.
|
||||
|
||||
**Assumption:** The [nbgitpuller](inv:nbgitpuller#index) server extension is installed
|
||||
|
||||
Unlike the other two methods, nbgitpuller doesn't provide an extension to copy a shareable link for the document you're currently looking at,
|
||||
but it does provide a [link generator](inv:nbgitpuller#link),
|
||||
which uses the `user-redirect` approach above.
|
||||
|
||||
When visiting an nbgitpuller link:
|
||||
|
||||
- The visitor will be directed to their own server
|
||||
- Your repo will be cloned (or updated if it's already been cloned)
|
||||
- and then the file opened when it's ready
|
||||
|
||||
[nbgitpuller]: https://nbgitpuller.readthedocs.io
|
||||
[nbgitpuller-link]: https://nbgitpuller.readthedocs.io/en/latest/link.html
|
11
docs/source/faq/index.md
Normal file
@@ -0,0 +1,11 @@
|
||||
# FAQs
|
||||
|
||||
Find answers to some of the most frequently-asked questions around JupyterHub and how it works.
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 2
|
||||
|
||||
faq
|
||||
institutional-faq
|
||||
troubleshooting
|
||||
```
|
@@ -1,3 +1,5 @@
|
||||
(faq:institutional)=
|
||||
|
||||
# Institutional FAQ
|
||||
|
||||
This page contains common questions from users of JupyterHub,
|
||||
@@ -8,10 +10,16 @@ broken down by their roles within organizations.
|
||||
### Is it appropriate for adoption within a larger institutional context?
|
||||
|
||||
Yes! JupyterHub has been used at-scale for large pools of users, as well
|
||||
as complex and high-performance computing. For example, UC Berkeley uses
|
||||
JupyterHub for its Data Science Education Program courses (serving over
|
||||
3,000 students). The Pangeo project uses JupyterHub to provide access
|
||||
to scalable cloud computing with Dask. JupyterHub is stable and customizable
|
||||
as complex and high-performance computing.
|
||||
For example,
|
||||
|
||||
- UC Berkeley uses
|
||||
JupyterHub for its Data Science Education Program courses (serving over
|
||||
3,000 students).
|
||||
- The Pangeo project uses JupyterHub to provide access
|
||||
to scalable cloud computing with Dask.
|
||||
|
||||
JupyterHub is stable and customizable
|
||||
to the use-cases of large organizations.
|
||||
|
||||
### I keep hearing about Jupyter Notebook, JupyterLab, and now JupyterHub. What’s the difference?
|
||||
@@ -26,7 +34,7 @@ Here is a quick breakdown of these three tools:
|
||||
has several extensions that are tailored for using Jupyter Notebooks, as well as extensions
|
||||
for other parts of the data science stack.
|
||||
- **JupyterHub** is an application that manages interactive computing sessions for **multiple users**.
|
||||
It also connects them with infrastructure those users wish to access. It can provide
|
||||
It also connects users with infrastructure they wish to access. It can provide
|
||||
remote access to Jupyter Notebooks and JupyterLab for many people.
|
||||
|
||||
## For management
|
||||
@@ -35,7 +43,7 @@ Here is a quick breakdown of these three tools:
|
||||
|
||||
JupyterHub provides a shared platform for data science and collaboration.
|
||||
It allows users to utilize familiar data science workflows (such as the scientific Python stack,
|
||||
the R tidyverse, and Jupyter Notebooks) on institutional infrastructure. It also allows administrators
|
||||
the R tidyverse, and Jupyter Notebooks) on institutional infrastructure. It also gives administrators
|
||||
some control over access to resources, security, environments, and authentication.
|
||||
|
||||
### Is JupyterHub mature? Why should we trust it?
|
||||
@@ -58,14 +66,14 @@ industry, and government research labs. It is most-commonly used by two kinds of
|
||||
Here is a sample of organizations that use JupyterHub:
|
||||
|
||||
- **Universities and colleges**: UC Berkeley, UC San Diego, Cal Poly SLO, Harvard University, University of Chicago,
|
||||
University of Oslo, University of Sheffield, Université Paris Sud, University of Versailles
|
||||
University of Oslo, University of Sheffield, Université Paris Sud, University of Versailles, University of Portland
|
||||
- **Research laboratories**: NASA, NCAR, NOAA, the Large Synoptic Survey Telescope, Brookhaven National Lab,
|
||||
Minnesota Supercomputing Institute, ALCF, CERN, Lawrence Livermore National Laboratory
|
||||
Minnesota Supercomputing Institute, ALCF, CERN, Lawrence Livermore National Laboratory, HUNT
|
||||
- **Online communities**: Pangeo, Quantopian, mybinder.org, MathHub, Open Humans
|
||||
- **Computing infrastructure providers**: NERSC, San Diego Supercomputing Center, Compute Canada
|
||||
- **Companies**: Capital One, SANDVIK code, Globus
|
||||
|
||||
See the [Gallery of JupyterHub deployments](../gallery-jhub-deployments.md) for
|
||||
See the [Gallery of JupyterHub deployments](gallery-of-deployments) for
|
||||
a more complete list of JupyterHub deployments at institutions.
|
||||
|
||||
### How does JupyterHub compare with hosted products, like Google Colaboratory, RStudio.cloud, or Anaconda Enterprise?
|
||||
@@ -78,7 +86,7 @@ gives administrators more control over their setup and hardware.
|
||||
|
||||
Because JupyterHub is an open-source, community-driven tool, it can be extended and
|
||||
modified to fit an institution's needs. It plays nicely with the open source data science
|
||||
stack, and can serve a variety of computing enviroments, user interfaces, and
|
||||
stack, and can serve a variety of computing environments, user interfaces, and
|
||||
computational hardware. It can also be deployed anywhere - on enterprise cloud infrastructure, on
|
||||
High-Performance-Computing machines, on local hardware, or even on a single laptop, which
|
||||
is not possible with most other tools for shared interactive computing.
|
||||
@@ -99,12 +107,12 @@ that we currently suggest are:
|
||||
guide that runs on Kubernetes. Better for larger or dynamic user groups (50-10,000) or more complex
|
||||
compute/data needs.
|
||||
- [The Littlest JupyterHub](https://tljh.jupyter.org) is a lightweight JupyterHub that runs on a single
|
||||
single machine (in the cloud or under your desk). Better for smaller user groups (4-80) or more
|
||||
machine (in the cloud or under your desk). Better for smaller user groups (4-80) or more
|
||||
lightweight computational resources.
|
||||
|
||||
### Does JupyterHub run well in the cloud?
|
||||
|
||||
Yes - most deployments of JupyterHub are run via cloud infrastructure and on a variety of cloud providers.
|
||||
**Yes** - most deployments of JupyterHub are run via cloud infrastructure and on a variety of cloud providers.
|
||||
Depending on the distribution of JupyterHub that you'd like to use, you can also connect your JupyterHub
|
||||
deployment with a number of other cloud-native services so that users have access to other resources from
|
||||
their interactive computing sessions.
|
||||
@@ -118,14 +126,15 @@ as more resources are needed - allowing you to utilize the benefits of a flexibl
|
||||
|
||||
### Is JupyterHub secure?
|
||||
|
||||
The short answer: yes. JupyterHub as a standalone application has been battle-tested at an institutional
|
||||
The short answer: yes.
|
||||
JupyterHub as a standalone application has been battle-tested at an institutional
|
||||
level for several years, and makes a number of "default" security decisions that are reasonable for most
|
||||
users.
|
||||
|
||||
- For security considerations in the base JupyterHub application,
|
||||
[see the JupyterHub security page](https://jupyterhub.readthedocs.io/en/stable/reference/websecurity.html).
|
||||
[see the JupyterHub security page](explanation:security).
|
||||
- For security considerations when deploying JupyterHub on Kubernetes, see the
|
||||
[JupyterHub on Kubernetes security page](https://zero-to-jupyterhub.readthedocs.io/en/latest/security.html).
|
||||
[JupyterHub on Kubernetes security page](https://z2jh.jupyter.org/en/latest/security.html).
|
||||
|
||||
The longer answer: it depends on your deployment. Because JupyterHub is very flexible, it can be used
|
||||
in a variety of deployment setups. This often entails connecting your JupyterHub to **other** infrastructure
|
||||
@@ -134,11 +143,11 @@ in these cases, and the security of your JupyterHub deployment will often depend
|
||||
|
||||
If you are worried about security, don't hesitate to reach out to the JupyterHub community in the
|
||||
[Jupyter Community Forum](https://discourse.jupyter.org/c/jupyterhub). This community of practice has many
|
||||
individuals with experience running secure JupyterHub deployments.
|
||||
individuals with experience running secure JupyterHub deployments and will be very glad to help you out.
|
||||
|
||||
### Does JupyterHub provide computing or data infrastructure?
|
||||
|
||||
No - JupyterHub manages user sessions and can _control_ computing infrastructure, but it does not provide these
|
||||
**No** - JupyterHub manages user sessions and can _control_ computing infrastructure, but it does not provide these
|
||||
things itself. You are expected to run JupyterHub on your own infrastructure (local or in the cloud). Moreover,
|
||||
JupyterHub has no internal concept of "data", but is designed to be able to communicate with data repositories
|
||||
(again, either locally or remotely) for use within interactive computing sessions.
|
||||
@@ -191,7 +200,7 @@ complex computing infrastructures from the interactive sessions of a JupyterHub.
|
||||
This is highly configurable by the administrator. If you wish for your users to have simple
|
||||
data analytics environments for prototyping and light data exploring, you can restrict their
|
||||
memory and CPU based on the resources that you have available. If you'd like your JupyterHub
|
||||
to serve as a gateway to high-performance compute or data resources, you may increase the
|
||||
to serve as a gateway to high-performance computing or data resources, you may increase the
|
||||
resources available on user machines, or connect them with computing infrastructures elsewhere.
|
||||
|
||||
### Can I customize the look and feel of a JupyterHub?
|
@@ -1,35 +1,11 @@
|
||||
(faq:troubleshooting)=
|
||||
|
||||
# Troubleshooting
|
||||
|
||||
When troubleshooting, you may see unexpected behaviors or receive an error
|
||||
message. This section provide links for identifying the cause of the
|
||||
message. This section provides links for identifying the cause of the
|
||||
problem and how to resolve it.
|
||||
|
||||
[_Behavior_](#behavior)
|
||||
|
||||
- JupyterHub proxy fails to start
|
||||
- sudospawner fails to run
|
||||
- What is the default behavior when none of the lists (admin, allowed,
|
||||
allowed groups) are set?
|
||||
- JupyterHub Docker container not accessible at localhost
|
||||
|
||||
[_Errors_](#errors)
|
||||
|
||||
- 500 error after spawning my single-user server
|
||||
|
||||
[_How do I...?_](#how-do-i)
|
||||
|
||||
- Use a chained SSL certificate
|
||||
- Install JupyterHub without a network connection
|
||||
- I want access to the whole filesystem, but still default users to their home directory
|
||||
- How do I increase the number of pySpark executors on YARN?
|
||||
- How do I use JupyterLab's prerelease version with JupyterHub?
|
||||
- How do I set up JupyterHub for a workshop (when users are not known ahead of time)?
|
||||
- How do I set up rotating daily logs?
|
||||
- Toree integration with HDFS rack awareness script
|
||||
- Where do I find Docker images and Dockerfiles related to JupyterHub?
|
||||
|
||||
[_Troubleshooting commands_](#troubleshooting-commands)
|
||||
|
||||
## Behavior
|
||||
|
||||
### JupyterHub proxy fails to start
|
||||
@@ -40,9 +16,9 @@ If you have tried to start the JupyterHub proxy and it fails to start:
|
||||
`c.JupyterHub.ip = '*'`; if it is, try `c.JupyterHub.ip = ''`
|
||||
- Try starting with `jupyterhub --ip=0.0.0.0`
|
||||
|
||||
**Note**: If this occurs on Ubuntu/Debian, check that the you are using a
|
||||
recent version of node. Some versions of Ubuntu/Debian come with a version
|
||||
of node that is very old, and it is necessary to update node.
|
||||
**Note**: If this occurs on Ubuntu/Debian, check that you are using a
|
||||
recent version of [Node](https://nodejs.org). Some versions of Ubuntu/Debian come with a very old version
|
||||
of Node and it is necessary to update Node.
|
||||
|
||||
### sudospawner fails to run
|
||||
|
||||
@@ -61,24 +37,24 @@ to the config file, `jupyterhub_config.py`.
|
||||
### What is the default behavior when none of the lists (admin, allowed, allowed groups) are set?
|
||||
|
||||
When nothing is given for these lists, there will be no admins, and all users
|
||||
who can authenticate on the system (i.e. all the unix users on the server with
|
||||
who can authenticate on the system (i.e. all the Unix users on the server with
|
||||
a password) will be allowed to start a server. The allowed username set lets you limit
|
||||
this to a particular set of users, and admin_users lets you specify who
|
||||
among them may use the admin interface (not necessary, unless you need to do
|
||||
things like inspect other users' servers, or modify the user list at runtime).
|
||||
things like inspect other users' servers or modify the user list at runtime).
|
||||
|
||||
### JupyterHub Docker container not accessible at localhost
|
||||
### JupyterHub Docker container is not accessible at localhost
|
||||
|
||||
Even though the command to start your Docker container exposes port 8000
|
||||
(`docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub`),
|
||||
it is possible that the IP address itself is not accessible/visible. As a result
|
||||
(`docker run -p 8000:8000 -d --name jupyterhub quay.io/jupyterhub/jupyterhub jupyterhub`),
|
||||
it is possible that the IP address itself is not accessible/visible. As a result,
|
||||
when you try http://localhost:8000 in your browser, you are unable to connect
|
||||
even though the container is running properly. One workaround is to explicitly
|
||||
tell Jupyterhub to start at `0.0.0.0` which is visible to everyone. Try this
|
||||
command:
|
||||
`docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub --ip 0.0.0.0 --port 8000`
|
||||
`docker run -p 8000:8000 -d --name jupyterhub quay.io/jupyterhub/jupyterhub jupyterhub --ip 0.0.0.0 --port 8000`
|
||||
|
||||
### How can I kill ports from JupyterHub managed services that have been orphaned?
|
||||
### How can I kill ports from JupyterHub-managed services that have been orphaned?
|
||||
|
||||
I started JupyterHub + nbgrader on the same host without containers. When I try to restart JupyterHub + nbgrader with this configuration, errors appear that the service accounts cannot start because the ports are being used.
|
||||
|
||||
@@ -92,12 +68,12 @@ Where `<service_port>` is the port used by the nbgrader course service. This con
|
||||
|
||||
### Why am I getting a Spawn failed error message?
|
||||
|
||||
After successfully logging in to JupyterHub with a compatible authenticators, I get a 'Spawn failed' error message in the browser. The JupyterHub logs have `jupyterhub KeyError: "getpwnam(): name not found: <my_user_name>`.
|
||||
After successfully logging in to JupyterHub with a compatible authenticator, I get a 'Spawn failed' error message in the browser. The JupyterHub logs have `jupyterhub KeyError: "getpwnam(): name not found: <my_user_name>`.
|
||||
|
||||
This issue occurs when the authenticator requires a local system user to exist. In these cases, you need to use a spawner
|
||||
that does not require an existing system user account, such as `DockerSpawner` or `KubeSpawner`.
|
||||
|
||||
### How can I run JupyterHub with sudo but use my current env vars and virtualenv location?
|
||||
### How can I run JupyterHub with sudo but use my current environment variables and virtualenv location?
|
||||
|
||||
When launching JupyterHub with `sudo jupyterhub` I get import errors and my environment variables don't work.
|
||||
|
||||
@@ -109,25 +85,11 @@ sudo MY_ENV=abc123 \
|
||||
/srv/jupyterhub/jupyterhub
|
||||
```
|
||||
|
||||
### How can I view the logs for JupyterHub or the user's Notebook servers when using the DockerSpawner?
|
||||
|
||||
Use `docker logs <container>` where `<container>` is the container name defined within `docker-compose.yml`. For example, to view the logs of the JupyterHub container use:
|
||||
|
||||
docker logs hub
|
||||
|
||||
By default, the user's notebook server is named `jupyter-<username>` where `username` is the user's username within JupyterHub's db. So if you wanted to see the logs for user `foo` you would use:
|
||||
|
||||
docker logs jupyter-foo
|
||||
|
||||
You can also tail logs to view them in real time using the `-f` option:
|
||||
|
||||
docker logs -f hub
|
||||
|
||||
## Errors
|
||||
|
||||
### 500 error after spawning my single-user server
|
||||
### Error 500 after spawning my single-user server
|
||||
|
||||
You receive a 500 error when accessing the URL `/user/<your_name>/...`.
|
||||
You receive a 500 error while accessing the URL `/user/<your_name>/...`.
|
||||
This is often seen when your single-user server cannot verify your user cookie
|
||||
with the Hub.
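To rule out basic connectivity problems, you can check that the Hub API answers from the machine running the single-user server (a sketch; the address assumes the default Hub API URL):

```bash
curl -i http://127.0.0.1:8081/hub/api
```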
|
||||
|
||||
@@ -153,9 +115,9 @@ If everything is working, the response logged will be similar to this:
|
||||
|
||||
You should see a similar 200 message, as above, in the Hub log when you first
|
||||
visit your single-user notebook server. If you don't see this message in the log, it
|
||||
may mean that your single-user notebook server isn't connecting to your Hub.
|
||||
may mean that your single-user notebook server is not connecting to your Hub.
|
||||
|
||||
If you see 403 (forbidden) like this, it's likely a token problem:
|
||||
If you see 403 (forbidden) like this, it is likely a token problem:
|
||||
|
||||
```
|
||||
403 GET /hub/api/authorizations/cookie/jupyterhub-token-name/[secret] (@10.0.1.4) 4.14ms
|
||||
@@ -185,10 +147,10 @@ If you receive a 403 error, the API token for the single-user server is likely
|
||||
invalid. Commonly, the 403 error is caused by resetting the JupyterHub
|
||||
database (either removing jupyterhub.sqlite or some other action) while
|
||||
leaving single-user servers running. This happens most frequently when using
|
||||
DockerSpawner, because Docker's default behavior is to stop/start containers
|
||||
which resets the JupyterHub database, rather than destroying and recreating
|
||||
DockerSpawner because Docker's default behavior is to stop/start containers
|
||||
that reset the JupyterHub database, rather than destroying and recreating
|
||||
the container every time. This means that the same API token is used by the
|
||||
server for its whole life, until the container is rebuilt.
|
||||
server for its whole life until the container is rebuilt.
|
||||
|
||||
The fix for this Docker case is to remove any Docker containers seeing this
|
||||
issue (typically all containers created before a certain point in time):
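A minimal sketch, assuming DockerSpawner's default `jupyter-<username>` container naming:

```bash
# Stop and remove the affected single-user containers
docker rm -f $(docker ps -aq --filter "name=jupyter-")
```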
|
||||
@@ -201,28 +163,28 @@ your server again.
|
||||
|
||||
##### Proxy settings (403 GET)
|
||||
|
||||
When your whole JupyterHub sits behind a organization proxy (_not_ a reverse proxy like NGINX as part of your setup and _not_ the configurable-http-proxy) the environment variables `HTTP_PROXY`, `HTTPS_PROXY`, `http_proxy` and `https_proxy` might be set. This confuses the jupyterhub-singleuser servers: When connecting to the Hub for authorization they connect via the proxy instead of directly connecting to the Hub on localhost. The proxy might deny the request (403 GET). This results in the singleuser server thinking it has a wrong auth token. To circumvent this you should add `<hub_url>,<hub_ip>,localhost,127.0.0.1` to the environment variables `NO_PROXY` and `no_proxy`.
|
||||
When your whole JupyterHub sits behind an organization proxy (_not_ a reverse proxy like NGINX as part of your setup and _not_ the configurable-http-proxy) the environment variables `HTTP_PROXY`, `HTTPS_PROXY`, `http_proxy`, and `https_proxy` might be set. This confuses the JupyterHub single-user servers: When connecting to the Hub for authorization they connect via the proxy instead of directly connecting to the Hub on localhost. The proxy might deny the request (403 GET). This results in the single-user server thinking it has the wrong auth token. To circumvent this you should add `<hub_url>,<hub_ip>,localhost,127.0.0.1` to the environment variables `NO_PROXY` and `no_proxy`.
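For example, in the environment that launches the single-user servers (the hostname and IP below are placeholders):

```bash
export NO_PROXY="hub.example.org,10.0.1.4,localhost,127.0.0.1"
export no_proxy="$NO_PROXY"
```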
|
||||
|
||||
### Launching Jupyter Notebooks to run as an externally managed JupyterHub service with the `jupyterhub-singleuser` command returns a `JUPYTERHUB_API_TOKEN` error
|
||||
|
||||
[JupyterHub services](https://jupyterhub.readthedocs.io/en/stable/reference/services.html) allow processes to interact with JupyterHub's REST API. Example use-cases include:
|
||||
{ref}`services-reference` allow processes to interact with JupyterHub's REST API. Example use-cases include:
|
||||
|
||||
- **Secure Testing**: provide a canonical Jupyter Notebook for testing production data to reduce the number of entry points into production systems.
|
||||
- **Grading Assignments**: provide access to shared Jupyter Notebooks that may be used for management tasks such grading assignments.
|
||||
- **Grading Assignments**: provide access to shared Jupyter Notebooks that may be used for management tasks such as grading assignments.
|
||||
- **Private Dashboards**: share dashboards with certain group members.
|
||||
|
||||
If possible, try to run the Jupyter Notebook as an externally managed service with one of the provided [jupyter/docker-stacks](https://github.com/jupyter/docker-stacks).
|
||||
|
||||
Standard JupyterHub installations include a [jupyterhub-singleuser](https://github.com/jupyterhub/jupyterhub/blob/9fdab027daa32c9017845572ad9d5ba1722dbc53/setup.py#L116) command which is built from the `jupyterhub.singleuser:main` method. The `jupyterhub-singleuser` command is the default command when JupyterHub launches single-user Jupyter Notebooks. One of the goals of this command is to make sure the version of JupyterHub installed within the Jupyter Notebook coincides with the version of the JupyterHub server itself.
|
||||
|
||||
If you launch a Jupyter Notebook with the `jupyterhub-singleuser` command directly from the command line the Jupyter Notebook won't have access to the `JUPYTERHUB_API_TOKEN` and will return:
|
||||
If you launch a Jupyter Notebook with the `jupyterhub-singleuser` command directly from the command line, the Jupyter Notebook won't have access to the `JUPYTERHUB_API_TOKEN` and will return:
|
||||
|
||||
```
|
||||
JUPYTERHUB_API_TOKEN env is required to run jupyterhub-singleuser.
|
||||
Did you launch it manually?
|
||||
```
|
||||
|
||||
If you plan on testing `jupyterhub-singleuser` independently from JupyterHub, then you can set the api token environment variable. For example, if were to run the single-user Jupyter Notebook on the host, then:
|
||||
If you plan on testing `jupyterhub-singleuser` independently from JupyterHub, then you can set the API token environment variable. For example, if you were to run the single-user Jupyter Notebook on the host, then:
|
||||
|
||||
export JUPYTERHUB_API_TOKEN=my_secret_token
|
||||
jupyterhub-singleuser
|
||||
@@ -236,6 +198,23 @@ With a docker container, pass in the environment variable with the run command:
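For example, something along these lines (the image name is a placeholder):

```bash
docker run -it -e JUPYTERHUB_API_TOKEN=my_secret_token \
  quay.io/jupyterhub/singleuser:latest
```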
|
||||
|
||||
[This example](https://github.com/jupyterhub/jupyterhub/tree/HEAD/examples/service-notebook/external) demonstrates how to combine the use of the `jupyterhub-singleuser` environment variables when launching a Notebook as an externally managed service.
|
||||
|
||||
### Jupyter Notebook/Lab can be launched, but notebooks seem to hang when trying to execute a cell
|
||||
|
||||
This often occurs when your browser is unable to open a websocket connection to a Jupyter kernel.
|
||||
|
||||
#### Diagnose
|
||||
|
||||
Open your browser console, e.g. [Chrome](https://developer.chrome.com/docs/devtools/console), [Firefox](https://firefox-source-docs.mozilla.org/devtools-user/web_console/).
|
||||
If you see errors related to opening websockets this is likely to be the problem.
|
||||
|
||||
#### Solutions
|
||||
|
||||
This could be caused by anything related to the network between your computer/browser and the server running JupyterHub, such as:
|
||||
|
||||
- reverse proxies (see {ref}`howto:config:reverse-proxy` for example configurations)
|
||||
- anti-virus or firewalls running on your computer or JupyterHub server
|
||||
- transparent proxies running on your network
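If a reverse proxy sits in the path, make sure it forwards the WebSocket upgrade headers. A minimal nginx fragment for the JupyterHub `location` block (it assumes the `map`-based `$connection_upgrade` variable from the reverse-proxy example in these docs):

```bash
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
```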
|
||||
|
||||
## How do I...?
|
||||
|
||||
### Use a chained SSL certificate
|
||||
@@ -243,7 +222,7 @@ With a docker container, pass in the environment variable with the run command:
|
||||
Some certificate providers, i.e. Entrust, may provide you with a chained
|
||||
certificate that contains multiple files. If you are using a chained
|
||||
certificate you will need to concatenate the individual files by appending the
|
||||
chain cert and root cert to your host cert:
|
||||
chained cert and root cert to your host cert:
|
||||
|
||||
cat your_host.crt chain.crt root.crt > your_host-chained.crt
|
||||
|
||||
@@ -256,7 +235,7 @@ You would then set in your `jupyterhub_config.py` file the `ssl_key` and
|
||||
#### Example
|
||||
|
||||
Your certificate provider gives you the following files: `example_host.crt`,
|
||||
`Entrust_L1Kroot.txt` and `Entrust_Root.txt`.
|
||||
`Entrust_L1Kroot.txt`, and `Entrust_Root.txt`.
|
||||
|
||||
Concatenate the files appending the chain cert and root cert to your host cert:
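With those file names, that would look something like:

```bash
cat example_host.crt Entrust_L1Kroot.txt Entrust_Root.txt > example_host-chained.crt
```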
|
||||
|
||||
@@ -289,7 +268,7 @@ with npmbox:
|
||||
python3 -m pip wheel jupyterhub
|
||||
npmbox configurable-http-proxy
|
||||
|
||||
### I want access to the whole filesystem, but still default users to their home directory
|
||||
### I want access to the whole filesystem and still default users to their home directory
|
||||
|
||||
Setting the following in `jupyterhub_config.py` will configure access to
|
||||
the entire filesystem and set the default to the user's home directory.
|
||||
@@ -297,18 +276,7 @@ the entire filesystem and set the default to the user's home directory.
|
||||
c.Spawner.notebook_dir = '/'
|
||||
c.Spawner.default_url = '/home/%U' # %U will be replaced with the username
|
||||
|
||||
### How do I increase the number of pySpark executors on YARN?
|
||||
|
||||
From the command line, pySpark executors can be configured using a command
|
||||
similar to this one:
|
||||
|
||||
pyspark --total-executor-cores 2 --executor-memory 1G
|
||||
|
||||
[Cloudera documentation for configuring spark on YARN applications](https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_running_spark_on_yarn.html#spark_on_yarn_config_apps)
|
||||
provides additional information. The [pySpark configuration documentation](https://spark.apache.org/docs/0.9.0/configuration.html)
|
||||
is also helpful for programmatic configuration examples.
|
||||
|
||||
### How do I use JupyterLab's prerelease version with JupyterHub?
|
||||
### How do I use JupyterLab's pre-release version with JupyterHub?
|
||||
|
||||
While JupyterLab is still under active development, we have had users
|
||||
ask about how to try out JupyterLab with JupyterHub.
|
||||
@@ -321,7 +289,7 @@ For instance:
|
||||
python3 -m pip install jupyterlab
|
||||
jupyter serverextension enable --py jupyterlab --sys-prefix
|
||||
|
||||
The important thing is that jupyterlab is installed and enabled in the
|
||||
The important thing is that JupyterLab is installed and enabled in the
|
||||
single-user notebook server environment. For system users, this means
|
||||
system-wide, as indicated above. For Docker containers, it means inside
|
||||
the single-user docker image, etc.
|
||||
@@ -334,14 +302,14 @@ notebook servers to default to JupyterLab:
|
||||
### How do I set up JupyterHub for a workshop (when users are not known ahead of time)?
|
||||
|
||||
1. Set up JupyterHub using OAuthenticator for GitHub authentication
|
||||
2. Configure admin list to have workshop leaders be listed with administrator privileges.
|
||||
2. Configure the admin list to have workshop leaders listed with administrator privileges.
|
||||
|
||||
Users will need a GitHub account to login and be authenticated by the Hub.
|
||||
Users will need a GitHub account to log in and be authenticated by the Hub.
|
||||
|
||||
### How do I set up rotating daily logs?
|
||||
|
||||
You can do this with [logrotate](https://linux.die.net/man/8/logrotate),
|
||||
or pipe to `logger` to use syslog instead of directly to a file.
|
||||
or pipe to `logger` to use Syslog instead of directly to a file.
|
||||
|
||||
For example, with this logrotate config file:
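A minimal sketch of such a config (the log path and retention are assumptions):

```bash
/var/log/jupyterhub.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
}
```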
|
||||
|
||||
@@ -362,34 +330,9 @@ Or use syslog:
|
||||
|
||||
jupyterhub | logger -t jupyterhub
|
||||
|
||||
## Troubleshooting commands
|
||||
|
||||
The following commands provide additional detail about installed packages,
|
||||
versions, and system information that may be helpful when troubleshooting
|
||||
a JupyterHub deployment. The commands are:
|
||||
|
||||
- System and deployment information
|
||||
|
||||
```bash
|
||||
jupyter troubleshoot
|
||||
```
|
||||
|
||||
- Kernel information
|
||||
|
||||
```bash
|
||||
jupyter kernelspec list
|
||||
```
|
||||
|
||||
- Debug logs when running JupyterHub
|
||||
|
||||
```bash
|
||||
jupyterhub --debug
|
||||
```
|
||||
|
||||
### Toree integration with HDFS rack awareness script
|
||||
|
||||
The Apache Toree kernel will an issue, when running with JupyterHub, if the standard HDFS
|
||||
rack awareness script is used. This will materialize in the logs as a repeated WARN:
|
||||
The Apache Toree kernel will have an issue when running with JupyterHub if the standard HDFS rack awareness script is used. This will materialize in the logs as a repeated WARN:
|
||||
|
||||
```bash
|
||||
16/11/29 16:24:20 WARN ScriptBasedMapping: Exception running /etc/hadoop/conf/topology_script.py some.ip.address
|
||||
@@ -410,10 +353,49 @@ In order to resolve this issue, there are two potential options.
|
||||
|
||||
### Where do I find Docker images and Dockerfiles related to JupyterHub?
|
||||
|
||||
Docker images can be found at the [JupyterHub organization on DockerHub](https://hub.docker.com/u/jupyterhub/).
|
||||
The Docker image [jupyterhub/singleuser](https://hub.docker.com/r/jupyterhub/singleuser/)
|
||||
provides an example single user notebook server for use with DockerSpawner.
|
||||
Docker images can be found at the [JupyterHub organization on Quay.io](https://quay.io/organization/jupyterhub).
|
||||
The Docker image [jupyterhub/singleuser](https://quay.io/repository/jupyterhub/singleuser)
|
||||
provides an example single-user notebook server for use with DockerSpawner.
|
||||
|
||||
Additional single user notebook server images can be found at the [Jupyter
|
||||
organization on DockerHub](https://hub.docker.com/r/jupyter/) and information
|
||||
Additional single-user notebook server images can be found at the [Jupyter
|
||||
organization on Quay.io](https://quay.io/organization/jupyter) and information
|
||||
about each image at the [jupyter/docker-stacks repo](https://github.com/jupyter/docker-stacks).
|
||||
|
||||
### How can I view the logs for JupyterHub or the user's Notebook servers when using the DockerSpawner?
|
||||
|
||||
Use `docker logs <container>` where `<container>` is the container name defined within `docker-compose.yml`. For example, to view the logs of the JupyterHub container use:
|
||||
|
||||
docker logs hub
|
||||
|
||||
By default, the user's notebook server is named `jupyter-<username>` where `username` is the user's username within JupyterHub's database.
|
||||
So if you wanted to see the logs for user `foo` you would use:
|
||||
|
||||
docker logs jupyter-foo
|
||||
|
||||
You can also tail logs to view them in real-time using the `-f` option:
|
||||
|
||||
docker logs -f hub
|
||||
|
||||
## Troubleshooting commands
|
||||
|
||||
The following commands provide additional detail about installed packages,
|
||||
versions, and system information that may be helpful when troubleshooting
|
||||
a JupyterHub deployment. The commands are:
|
||||
|
||||
- System and deployment information
|
||||
|
||||
```bash
|
||||
jupyter troubleshoot
|
||||
```
|
||||
|
||||
- Kernel information
|
||||
|
||||
```bash
|
||||
jupyter kernelspec list
|
||||
```
|
||||
|
||||
- Debug logs when running JupyterHub
|
||||
|
||||
```bash
|
||||
jupyterhub --debug
|
||||
```
|
@@ -1,131 +0,0 @@
|
||||
# Authentication and User Basics
|
||||
|
||||
The default Authenticator uses [PAM][] to authenticate system users with
|
||||
their username and password. With the default Authenticator, any user
|
||||
with an account and password on the system will be allowed to login.
|
||||
|
||||
## Create a set of allowed users
|
||||
|
||||
You can restrict which users are allowed to login with a set,
|
||||
`Authenticator.allowed_users`:
|
||||
|
||||
```python
|
||||
c.Authenticator.allowed_users = {'mal', 'zoe', 'inara', 'kaylee'}
|
||||
```
|
||||
|
||||
Users in the `allowed_users` set are added to the Hub database when the Hub is
|
||||
started.
|
||||
|
||||
```{warning}
|
||||
If this configuration value is not set, then **all authenticated users will be allowed into your hub**.
|
||||
```
|
||||
|
||||
## Configure admins (`admin_users`)
|
||||
|
||||
```{note}
|
||||
As of JupyterHub 2.0, the full permissions of `admin_users`
|
||||
should not be required.
|
||||
Instead, you can assign [roles][] to users or groups
|
||||
with only the scopes they require.
|
||||
```
|
||||
|
||||
Admin users of JupyterHub, `admin_users`, can add and remove users from
|
||||
the user `allowed_users` set. `admin_users` can take actions on other users'
|
||||
behalf, such as stopping and restarting their servers.
|
||||
|
||||
A set of initial admin users, `admin_users` can be configured as follows:
|
||||
|
||||
```python
|
||||
c.Authenticator.admin_users = {'mal', 'zoe'}
|
||||
```
|
||||
|
||||
Users in the admin set are automatically added to the user `allowed_users` set,
|
||||
if they are not already present.
|
||||
|
||||
Each authenticator may have different ways of determining whether a user is an
|
||||
administrator. By default JupyterHub uses the PAMAuthenticator which provides the
|
||||
`admin_groups` option and can set administrator status based on a user
|
||||
group. For example we can let any user in the `wheel` group be admin:
|
||||
|
||||
```python
|
||||
c.PAMAuthenticator.admin_groups = {'wheel'}
|
||||
```
|
||||
|
||||
## Give admin access to other users' notebook servers (`admin_access`)
|
||||
|
||||
Since the default `JupyterHub.admin_access` setting is `False`, the admins
|
||||
do not have permission to log in to the single user notebook servers
|
||||
owned by _other users_. If `JupyterHub.admin_access` is set to `True`,
|
||||
then admins have permission to log in _as other users_ on their
|
||||
respective machines, for debugging. **As a courtesy, you should make
|
||||
sure your users know if admin_access is enabled.**
|
||||
|
||||
## Add or remove users from the Hub
|
||||
|
||||
Users can be added to and removed from the Hub via either the admin
|
||||
panel or the REST API. When a user is **added**, the user will be
|
||||
automatically added to the `allowed_users` set and database. Restarting the Hub
|
||||
will not require manually updating the `allowed_users` set in your config file,
|
||||
as the users will be loaded from the database.
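For example, with the REST API (a sketch; it assumes an API token with admin rights and the default Hub API URL):

```bash
# Add a user
curl -X POST -H "Authorization: token $ADMIN_TOKEN" \
  http://127.0.0.1:8081/hub/api/users/newuser
# Remove a user
curl -X DELETE -H "Authorization: token $ADMIN_TOKEN" \
  http://127.0.0.1:8081/hub/api/users/newuser
```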
|
||||
|
||||
After starting the Hub once, it is not sufficient to **remove** a user
|
||||
from the allowed users set in your config file. You must also remove the user
|
||||
from the Hub's database, either by deleting the user from JupyterHub's
|
||||
admin page, or you can clear the `jupyterhub.sqlite` database and start
|
||||
fresh.
|
||||
|
||||
## Use LocalAuthenticator to create system users
|
||||
|
||||
The `LocalAuthenticator` is a special kind of authenticator that has
|
||||
the ability to manage users on the local system. When you try to add a
|
||||
new user to the Hub, a `LocalAuthenticator` will check if the user
|
||||
already exists. If you set the configuration value, `create_system_users`,
|
||||
to `True` in the configuration file, the `LocalAuthenticator` has
|
||||
the privileges to add users to the system. The setting in the config
|
||||
file is:
|
||||
|
||||
```python
|
||||
c.LocalAuthenticator.create_system_users = True
|
||||
```
|
||||
|
||||
Adding a user to the Hub that doesn't already exist on the system will
|
||||
result in the Hub creating that user via the system `adduser` command
|
||||
line tool. This option is typically used on hosted deployments of
|
||||
JupyterHub, to avoid the need to manually create all your users before
|
||||
launching the service. This approach is not recommended when running
|
||||
JupyterHub in situations where JupyterHub users map directly onto the
|
||||
system's UNIX users.
|
||||
|
||||
## Use OAuthenticator to support OAuth with popular service providers
|
||||
|
||||
JupyterHub's [OAuthenticator][] currently supports the following
|
||||
popular services:
|
||||
|
||||
- Auth0
|
||||
- Azure AD
|
||||
- Bitbucket
|
||||
- CILogon
|
||||
- GitHub
|
||||
- GitLab
|
||||
- Globus
|
||||
- Google
|
||||
- MediaWiki
|
||||
- Okpy
|
||||
- OpenShift
|
||||
|
||||
A generic implementation, which you can use for OAuth authentication
|
||||
with any provider, is also available.
|
||||
|
||||
## Use DummyAuthenticator for testing
|
||||
|
||||
The `DummyAuthenticator` is a simple authenticator that
|
||||
allows for any username/password unless a global password has been set. If
|
||||
set, it will allow for any username as long as the correct password is provided.
|
||||
To set a global password, add this to the config file:
|
||||
|
||||
```python
|
||||
c.DummyAuthenticator.password = "some_password"
|
||||
```
|
||||
|
||||
[pam]: https://en.wikipedia.org/wiki/Pluggable_authentication_module
|
||||
[oauthenticator]: https://github.com/jupyterhub/oauthenticator
|
@@ -1,35 +0,0 @@
|
||||
# Frequently asked questions
|
||||
|
||||
## How do I share links to notebooks?
|
||||
|
||||
In short, where you see `/user/name/notebooks/foo.ipynb` use `/hub/user-redirect/notebooks/foo.ipynb` (replace `/user/name` with `/hub/user-redirect`).
|
||||
|
||||
Sharing links to notebooks is a common activity,
|
||||
and can look different based on what you mean.
|
||||
Your first instinct might be to copy the URL you see in the browser,
|
||||
e.g. `hub.jupyter.org/user/yourname/notebooks/coolthing.ipynb`.
|
||||
However, let's break down what this URL means:
|
||||
|
||||
`hub.jupyter.org/user/yourname/` is the URL prefix handled by _your server_,
|
||||
which means that sharing this URL is asking the person you share the link with
|
||||
to come to _your server_ and look at the exact same file.
|
||||
In most circumstances, this is forbidden by permissions because the person you share with does not have access to your server.
|
||||
What actually happens when someone visits this URL will depend on whether your server is running and other factors.
|
||||
|
||||
But what is our actual goal?
|
||||
A typical situation is that you have some shared or common filesystem,
|
||||
such that the same path corresponds to the same document
|
||||
(either the exact same document or another copy of it).
|
||||
Typically, what folks want when they do sharing like this
|
||||
is for each visitor to open the same file _on their own server_,
|
||||
so Breq would open `/user/breq/notebooks/foo.ipynb` and
|
||||
Seivarden would open `/user/seivarden/notebooks/foo.ipynb`, etc.
|
||||
|
||||
JupyterHub has a special URL that does exactly this!
|
||||
It's called `/hub/user-redirect/...`.
|
||||
So if you replace `/user/yourname` in your URL bar
|
||||
with `/hub/user-redirect` any visitor should get the same
|
||||
URL on their own server, rather than visiting yours.
|
||||
|
||||
In JupyterLab 2.0, this should also be the result of the "Copy Shareable Link"
|
||||
action in the file browser.
|
@@ -1,19 +0,0 @@
|
||||
Get Started
|
||||
===========
|
||||
|
||||
This section covers how to configure and customize JupyterHub for your
|
||||
needs. It contains information about authentication, networking, security, and
|
||||
other topics that are relevant to individuals or organizations deploying their
|
||||
own JupyterHub.
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
config-basics
|
||||
networking-basics
|
||||
security-basics
|
||||
authenticators-users-basics
|
||||
spawners-basics
|
||||
services-basics
|
||||
faq
|
||||
institutional-faq
|
@@ -1,261 +0,0 @@
|
||||
Security settings
|
||||
=================
|
||||
|
||||
.. important::
|
||||
|
||||
You should not run JupyterHub without SSL encryption on a public network.
|
||||
|
||||
Security is the most important aspect of configuring Jupyter. Three
|
||||
configuration settings are the main aspects of security configuration:
|
||||
|
||||
1. :ref:`SSL encryption <ssl-encryption>` (to enable HTTPS)
|
||||
2. :ref:`Cookie secret <cookie-secret>` (a key for encrypting browser cookies)
|
||||
3. Proxy :ref:`authentication token <authentication-token>` (used for the Hub and
|
||||
other services to authenticate to the Proxy)
|
||||
|
||||
The Hub hashes all secrets (e.g., auth tokens) before storing them in its
|
||||
database. A loss of control over read-access to the database should have
|
||||
minimal impact on your deployment; if your database has been compromised, it
|
||||
is still a good idea to revoke existing tokens.
|
||||
|
||||
.. _ssl-encryption:
|
||||
|
||||
Enabling SSL encryption
|
||||
-----------------------
|
||||
|
||||
Since JupyterHub includes authentication and allows arbitrary code execution,
|
||||
you should not run it without SSL (HTTPS).
|
||||
|
||||
Using an SSL certificate
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This will require you to obtain an official, trusted SSL certificate or create a
|
||||
self-signed certificate. Once you have obtained and installed a key and
|
||||
certificate you need to specify their locations in the ``jupyterhub_config.py``
|
||||
configuration file as follows:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
c.JupyterHub.ssl_key = '/path/to/my.key'
|
||||
c.JupyterHub.ssl_cert = '/path/to/my.cert'
|
||||
|
||||
|
||||
Some cert files also contain the key, in which case only the cert is needed. It
|
||||
is important that these files be put in a secure location on your server, where
|
||||
they are not readable by regular users.
|
||||
|
||||
If you are using a **chain certificate**, see also chained certificate for SSL
|
||||
in the JupyterHub `Troubleshooting FAQ <../troubleshooting.html>`_.
|
||||
|
||||
Using letsencrypt
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
It is also possible to use `letsencrypt <https://letsencrypt.org/>`_ to obtain
|
||||
a free, trusted SSL certificate. If you run letsencrypt using the default
|
||||
options, the needed configuration is (replace ``mydomain.tld`` by your fully
|
||||
qualified domain name):
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/{mydomain.tld}/privkey.pem'
|
||||
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/{mydomain.tld}/fullchain.pem'
|
||||
|
||||
If the fully qualified domain name (FQDN) is ``example.com``, the following
|
||||
would be the needed configuration:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/example.com/privkey.pem'
|
||||
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/example.com/fullchain.pem'
|
||||
|
||||
|
||||
If SSL termination happens outside of the Hub
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In certain cases, for example if the hub is running behind a reverse proxy, and
|
||||
`SSL termination is being provided by NGINX <https://www.nginx.com/resources/admin-guide/nginx-ssl-termination/>`_,
|
||||
it is reasonable to run the hub without SSL.
|
||||
|
||||
To achieve this, simply omit the configuration settings
|
||||
``c.JupyterHub.ssl_key`` and ``c.JupyterHub.ssl_cert``
|
||||
(setting them to ``None`` does not have the same effect, and is an error).
|
||||
|
||||
.. _authentication-token:
|
||||
|
||||
Proxy authentication token
|
||||
--------------------------
|
||||
|
||||
The Hub authenticates its requests to the Proxy using a secret token that
|
||||
the Hub and Proxy agree upon. Note that this applies to the default
|
||||
``ConfigurableHTTPProxy`` implementation. Not all proxy implementations
|
||||
use an auth token.
|
||||
|
||||
The value of this token should be a random string (for example, generated by
|
||||
``openssl rand -hex 32``). You can store it in the configuration file or an
|
||||
environment variable.
|
||||
|
||||
Generating and storing token in the configuration file
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You can set the value in the configuration file, ``jupyterhub_config.py``:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
c.ConfigurableHTTPProxy.api_token = 'abc123...' # any random string
|
||||
|
||||
Generating and storing as an environment variable
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You can pass the value of the proxy authentication token to the Hub and Proxy
|
||||
using the ``CONFIGPROXY_AUTH_TOKEN`` environment variable:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
export CONFIGPROXY_AUTH_TOKEN=$(openssl rand -hex 32)
|
||||
|
||||
This environment variable needs to be visible to the Hub and Proxy.
|
||||
|
||||
Default if token is not set
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If you don't set the Proxy authentication token, the Hub will generate a random
|
||||
key itself, which means that any time you restart the Hub you **must also
|
||||
restart the Proxy**. If the proxy is a subprocess of the Hub, this should happen
|
||||
automatically (this is the default configuration).
|
||||
|
||||
.. _cookie-secret:
|
||||
|
||||
Cookie secret
|
||||
-------------
|
||||
|
||||
The cookie secret is an encryption key, used to encrypt the browser cookies
|
||||
which are used for authentication. Three common methods are described for
|
||||
generating and configuring the cookie secret.
|
||||
|
||||
Generating and storing as a cookie secret file
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The cookie secret should be 32 random bytes, encoded as hex, and is typically
|
||||
stored in a ``jupyterhub_cookie_secret`` file. An example command to generate the
|
||||
``jupyterhub_cookie_secret`` file is:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
openssl rand -hex 32 > /srv/jupyterhub/jupyterhub_cookie_secret
|
||||
|
||||
In most deployments of JupyterHub, you should point this to a secure location on
|
||||
the file system, such as ``/srv/jupyterhub/jupyterhub_cookie_secret``.
|
||||
|
||||
The location of the ``jupyterhub_cookie_secret`` file can be specified in the
|
||||
``jupyterhub_config.py`` file as follows:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
c.JupyterHub.cookie_secret_file = '/srv/jupyterhub/jupyterhub_cookie_secret'
|
||||
|
||||
If the cookie secret file doesn't exist when the Hub starts, a new cookie
|
||||
secret is generated and stored in the file. The file must not be readable by
|
||||
``group`` or ``other`` or the server won't start. The recommended permissions
|
||||
for the cookie secret file are ``600`` (owner-only rw).
|
||||
|
||||
Generating and storing as an environment variable
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If you would like to avoid the need for files, the value can be loaded in the
|
||||
Hub process from the ``JPY_COOKIE_SECRET`` environment variable, which is a
|
||||
hex-encoded string. You can set it this way:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
export JPY_COOKIE_SECRET=$(openssl rand -hex 32)
|
||||
|
||||
For security reasons, this environment variable should only be visible to the
|
||||
Hub. If you set it dynamically as above, all users will be logged out each time
|
||||
the Hub starts.
|
||||
|
||||
Generating and storing as a binary string
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You can also set the cookie secret in the configuration file
|
||||
itself, ``jupyterhub_config.py``, as a binary string:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
c.JupyterHub.cookie_secret = bytes.fromhex('64 CHAR HEX STRING')
|
||||
|
||||
|
||||
.. important::
|
||||
|
||||
If the cookie secret value changes for the Hub, all single-user notebook
|
||||
servers must also be restarted.
|
||||
|
||||
.. _cookies:
|
||||
|
||||
Cookies used by JupyterHub authentication
|
||||
-----------------------------------------
|
||||
|
||||
The following cookies are used by the Hub for handling user authentication.
|
||||
|
||||
This section was created based on this post_ from Discourse.
|
||||
|
||||
.. _post: https://discourse.jupyter.org/t/how-to-force-re-login-for-users/1998/6
|
||||
|
||||
jupyterhub-hub-login
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This is the login token used when visiting Hub-served pages that are
|
||||
protected by authentication such as the main home, the spawn form, etc.
|
||||
If this cookie is set, then the user is logged in.
|
||||
|
||||
Resetting the Hub cookie secret effectively revokes this cookie.
|
||||
|
||||
This cookie is restricted to the path ``/hub/``.
|
||||
|
||||
jupyterhub-user-<username>
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This is the cookie used for authenticating with a single-user server.
|
||||
It is set by the single-user server after OAuth with the Hub.
|
||||
|
||||
Effectively the same as ``jupyterhub-hub-login``, but for the
|
||||
single-user server instead of the Hub. It contains an OAuth access token,
|
||||
which is checked with the Hub to authenticate the browser.
|
||||
|
||||
Each OAuth access token is associated with a session id (see ``jupyterhub-session-id`` section
|
||||
below).
|
||||
|
||||
To avoid hitting the Hub on every request, the authentication response
|
||||
is cached. To avoid a stale cache, the cache key is composed of both
the token and the session id.
|
||||
|
||||
Resetting the Hub cookie secret effectively revokes this cookie.
|
||||
|
||||
This cookie is restricted to the path ``/user/<username>``, so that
|
||||
only the user’s server receives it.
|
||||
|
||||
jupyterhub-session-id
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This is a random string, meaningless in itself, and the only cookie
|
||||
shared by the Hub and single-user servers.
|
||||
|
||||
Its sole purpose is to coordinate logout of the multiple OAuth cookies.
|
||||
|
||||
This cookie is set to ``/`` so all endpoints can receive it, or clear it, etc.
|
||||
|
||||
jupyterhub-user-<username>-oauth-state
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
A short-lived cookie, used solely to store and validate OAuth state.
|
||||
It is only set while OAuth between the single-user server and the Hub
|
||||
is processing.
|
||||
|
||||
If you use your browser development tools, you should see this cookie
|
||||
for a very brief moment before you are logged in,
|
||||
with an expiration date shorter than ``jupyterhub-hub-login`` or
|
||||
``jupyterhub-user-<username>``.
|
||||
|
||||
This cookie should not exist after you have successfully logged in.
|
||||
|
||||
This cookie is restricted to the path ``/user/<username>``, so that only
|
||||
the user’s server receives it.
|
@@ -1,4 +1,4 @@
|
||||
(api-only)=
|
||||
(howto:api-only)=
|
||||
|
||||
# Deploying JupyterHub in "API only mode"
|
||||
|
@@ -1,3 +1,5 @@
|
||||
(howto:config:gh-oauth)=
|
||||
|
||||
# Configure GitHub OAuth
|
||||
|
||||
In this example, we show a configuration file for a fairly standard JupyterHub
|
||||
@@ -5,15 +7,15 @@ deployment with the following assumptions:
|
||||
|
||||
- Running JupyterHub on a single cloud server
|
||||
- Using SSL on the standard HTTPS port 443
|
||||
- Using GitHub OAuth (using oauthenticator) for login
|
||||
- Using GitHub OAuth (using [OAuthenticator](https://oauthenticator.readthedocs.io/en/latest)) for login
|
||||
- Using the default spawner (to configure other spawners, uncomment and edit
|
||||
`spawner_class` as well as follow the instructions for your desired spawner)
|
||||
- Users exist locally on the server
|
||||
- Users' notebooks to be served from `~/assignments` to allow users to browse
|
||||
for notebooks within other users' home directories
|
||||
- You want the landing page for each user to be a `Welcome.ipynb` notebook in
|
||||
their assignments directory.
|
||||
- All runtime files are put into `/srv/jupyterhub` and log files in `/var/log`.
|
||||
their assignments directory
|
||||
- All runtime files are put into `/srv/jupyterhub` and log files in `/var/log`
|
||||
|
||||
The `jupyterhub_config.py` file would have these settings:
|
||||
|
||||
@@ -69,7 +71,7 @@ c.Spawner.args = ['--NotebookApp.default_url=/notebooks/Welcome.ipynb']
|
||||
```
|
||||
|
||||
Using the GitHub Authenticator requires a few additional
|
||||
environment variable to be set prior to launching JupyterHub:
|
||||
environment variables to be set prior to launching JupyterHub:
|
||||
|
||||
```bash
|
||||
export GITHUB_CLIENT_ID=github_id
|
||||
@@ -79,3 +81,5 @@ export CONFIGPROXY_AUTH_TOKEN=super-secret
|
||||
# append log output to log file /var/log/jupyterhub.log
|
||||
jupyterhub -f /etc/jupyterhub/jupyterhub_config.py &>> /var/log/jupyterhub.log
|
||||
```
|
||||
|
||||
Visit the [Github OAuthenticator reference](https://oauthenticator.readthedocs.io/en/latest/api/gen/oauthenticator.github.html) to see the full list of options for configuring Github OAuth with JupyterHub.
|
@@ -1,3 +1,5 @@
|
||||
(howto:config:reverse-proxy)=
|
||||
|
||||
# Using a reverse proxy
|
||||
|
||||
In the following example, we show configuration files for a JupyterHub server
|
||||
@@ -14,7 +16,7 @@ satisfy the following:
|
||||
- After testing, the server in question should be able to score at least an A on the
|
||||
Qualys SSL Labs [SSL Server Test](https://www.ssllabs.com/ssltest/)
|
||||
|
||||
Let's start out with needed JupyterHub configuration in `jupyterhub_config.py`:
|
||||
Let's start out with the needed JupyterHub configuration in `jupyterhub_config.py`:
|
||||
|
||||
```python
|
||||
# Force the proxy to only listen to connections to 127.0.0.1 (on port 8000)
|
||||
@@ -30,15 +32,15 @@ This can take a few minutes:
|
||||
openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096
|
||||
```
|
||||
|
||||
## nginx
|
||||
## Nginx
|
||||
|
||||
This **`nginx` config file** is fairly standard fare except for the two
|
||||
`location` blocks within the main section for HUB.DOMAIN.tld.
|
||||
To create a new site for jupyterhub in your nginx config, make a new file
|
||||
To create a new site for jupyterhub in your Nginx config, make a new file
|
||||
in `sites.enabled`, e.g. `/etc/nginx/sites.enabled/jupyterhub.conf`:
|
||||
|
||||
```bash
|
||||
# top-level http config for websocket headers
|
||||
# Top-level HTTP config for WebSocket headers
|
||||
# If Upgrade is defined, Connection = upgrade
|
||||
# If Upgrade is empty, Connection = close
|
||||
map $http_upgrade $connection_upgrade {
|
||||
@@ -51,7 +53,7 @@ server {
|
||||
listen 80;
|
||||
server_name HUB.DOMAIN.TLD;
|
||||
|
||||
# Tell all requests to port 80 to be 302 redirected to HTTPS
|
||||
# Redirect the request to HTTPS
|
||||
return 302 https://$host$request_uri;
|
||||
}
|
||||
|
||||
@@ -75,11 +77,11 @@ server {
|
||||
ssl_stapling_verify on;
|
||||
add_header Strict-Transport-Security max-age=15768000;
|
||||
|
||||
# Managing literal requests to the JupyterHub front end
|
||||
# Managing literal requests to the JupyterHub frontend
|
||||
location / {
|
||||
proxy_pass http://127.0.0.1:8000;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header Host $http_host;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
|
||||
# websocket headers
|
||||
@@ -101,10 +103,10 @@ server {
|
||||
If `nginx` is not running on port 443, substitute `$http_host` for `$host` on
|
||||
the lines setting the `Host` header.
|
||||
|
||||
`nginx` will now be the front facing element of JupyterHub on `443` which means
|
||||
`nginx` will now be the front-facing element of JupyterHub on `443` which means
|
||||
it is also free to bind other servers, like `NO_HUB.DOMAIN.TLD` to the same port
|
||||
on the same machine and network interface. In fact, one can simply use the same
|
||||
server blocks as above for `NO_HUB` and simply add line for the root directory
|
||||
server blocks as above for `NO_HUB` and simply add a line for the root directory
|
||||
of the site as well as the applicable location call:
|
||||
|
||||
```bash
|
||||
@@ -112,7 +114,7 @@ server {
|
||||
listen 80;
|
||||
server_name NO_HUB.DOMAIN.TLD;
|
||||
|
||||
# Tell all requests to port 80 to be 302 redirected to HTTPS
|
||||
# Redirect the request to HTTPS
|
||||
return 302 https://$host$request_uri;
|
||||
}
|
||||
|
||||
@@ -143,12 +145,12 @@ Now restart `nginx`, restart the JupyterHub, and enjoy accessing
|
||||
`https://HUB.DOMAIN.TLD` while serving other content securely on
|
||||
`https://NO_HUB.DOMAIN.TLD`.
|
||||
|
||||
### SELinux permissions for nginx
|
||||
### SELinux permissions for Nginx
|
||||
|
||||
On distributions with SELinux enabled (e.g. Fedora), one may encounter permission errors
|
||||
when the nginx service is started.
|
||||
when the Nginx service is started.
|
||||
|
||||
We need to allow nginx to perform network relay and connect to the jupyterhub port. The
|
||||
We need to allow Nginx to perform network relay and connect to the JupyterHub port. The
|
||||
following commands do that:
|
||||
|
||||
```bash
|
||||
@@ -157,26 +159,26 @@ setsebool -P httpd_can_network_relay 1
|
||||
setsebool -P httpd_can_network_connect 1
|
||||
```
|
||||
|
||||
Replace 8000 with the port the jupyterhub server is running from.
|
||||
Replace 8000 with the port the JupyterHub server is running from.
|
||||
|
||||
## Apache
|
||||
|
||||
As with nginx above, you can use [Apache](https://httpd.apache.org) as the reverse proxy.
|
||||
First, we will need to enable the apache modules that we are going to need:
|
||||
As with Nginx above, you can use [Apache](https://httpd.apache.org) as the reverse proxy.
|
||||
First, we will need to enable the Apache modules that we are going to need:
|
||||
|
||||
```bash
|
||||
a2enmod ssl rewrite proxy headers proxy_http proxy_wstunnel
|
||||
```
|
||||
|
||||
Our Apache configuration is equivalent to the nginx configuration above:
|
||||
Our Apache configuration is equivalent to the Nginx configuration above:
|
||||
|
||||
- Redirect HTTP to HTTPS
|
||||
- Good SSL Configuration
|
||||
- Support for websockets on any proxied URL
|
||||
- Support for WebSocket on any proxied URL
|
||||
- JupyterHub is running locally at http://127.0.0.1:8000
|
||||
|
||||
```bash
|
||||
# redirect HTTP to HTTPS
|
||||
# Redirect HTTP to HTTPS
|
||||
Listen 80
|
||||
<VirtualHost HUB.DOMAIN.TLD:80>
|
||||
ServerName HUB.DOMAIN.TLD
|
||||
@@ -188,26 +190,26 @@ Listen 443
|
||||
|
||||
ServerName HUB.DOMAIN.TLD
|
||||
|
||||
# enable HTTP/2, if available
|
||||
# Enable HTTP/2, if available
|
||||
Protocols h2 http/1.1
|
||||
|
||||
# HTTP Strict Transport Security (mod_headers is required) (63072000 seconds)
|
||||
Header always set Strict-Transport-Security "max-age=63072000"
|
||||
|
||||
# configure SSL
|
||||
# Configure SSL
|
||||
SSLEngine on
|
||||
SSLCertificateFile /etc/letsencrypt/live/HUB.DOMAIN.TLD/fullchain.pem
|
||||
SSLCertificateKeyFile /etc/letsencrypt/live/HUB.DOMAIN.TLD/privkey.pem
|
||||
SSLOpenSSLConfCmd DHParameters /etc/ssl/certs/dhparam.pem
|
||||
|
||||
# intermediate configuration from ssl-config.mozilla.org (2022-03-03)
|
||||
# Please note, that this configuration might be out-dated - please update it accordingly using https://ssl-config.mozilla.org/
|
||||
# Intermediate configuration from SSL-config.mozilla.org (2022-03-03)
|
||||
# Please note, that this configuration might be outdated - please update it accordingly using https://ssl-config.mozilla.org/
|
||||
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
|
||||
SSLCipherSuite ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
|
||||
SSLHonorCipherOrder off
|
||||
SSLSessionTickets off
|
||||
|
||||
# Use RewriteEngine to handle websocket connection upgrades
|
||||
# Use RewriteEngine to handle WebSocket connection upgrades
|
||||
RewriteEngine On
|
||||
RewriteCond %{HTTP:Connection} Upgrade [NC]
|
||||
RewriteCond %{HTTP:Upgrade} websocket [NC]
|
||||
@@ -224,15 +226,15 @@ Listen 443
|
||||
</VirtualHost>
|
||||
```
|
||||
|
||||
In case of the need to run the jupyterhub under /jhub/ or other location please use the below configurations:
|
||||
In case of the need to run JupyterHub under /jhub/ or another location please use the below configurations:
|
||||
|
||||
- JupyterHub running locally at http://127.0.0.1:8000/jhub/ or other location
|
||||
|
||||
httpd.conf amendments:
|
||||
|
||||
```bash
|
||||
RewriteRule /jhub/(.*) ws://127.0.0.1:8000/jhub/$1 [NE,P,L]
|
||||
RewriteRule /jhub/(.*) http://127.0.0.1:8000/jhub/$1 [NE,P,L]
|
||||
RewriteRule /jhub/(.*) ws://127.0.0.1:8000/jhub/$1 [P,L]
|
||||
RewriteRule /jhub/(.*) http://127.0.0.1:8000/jhub/$1 [P,L]
|
||||
|
||||
ProxyPass /jhub/ http://127.0.0.1:8000/jhub/
|
||||
ProxyPassReverse /jhub/ http://127.0.0.1:8000/jhub/
|
||||
@@ -240,8 +242,8 @@ httpd.conf amendments:
|
||||
|
||||
jupyterhub_config.py amendments:
|
||||
|
||||
```bash
|
||||
--The public facing URL of the whole JupyterHub application.
|
||||
--This is the address on which the proxy will bind. Sets protocol, ip, base_url
|
||||
c.JupyterHub.bind_url = 'http://127.0.0.1:8000/jhub/'
|
||||
```python
|
||||
# The public facing URL of the whole JupyterHub application.
|
||||
# This is the address on which the proxy will bind. Sets protocol, IP, base_url
|
||||
c.JupyterHub.bind_url = 'http://127.0.0.1:8000/jhub/'
|
||||
```
|
@@ -1,3 +1,5 @@
|
||||
(howto:config:no-sudo)=
|
||||
|
||||
# Run JupyterHub without root privileges using `sudo`
|
||||
|
||||
**Note:** Setting up `sudo` permissions involves many pieces of system
|
||||
@@ -6,14 +8,14 @@ Only do this if you are very sure you must.
|
||||
|
||||
## Overview
|
||||
|
||||
There are many Authenticators and Spawners available for JupyterHub. Some, such
|
||||
as DockerSpawner or OAuthenticator, do not need any elevated permissions. This
|
||||
There are many [Authenticators](authenticators) and [Spawners](spawners) available for JupyterHub. Some, such
|
||||
as [DockerSpawner](https://github.com/jupyterhub/dockerspawner) or [OAuthenticator](https://github.com/jupyterhub/oauthenticator), do not need any elevated permissions. This
|
||||
document describes how to get the full default behavior of JupyterHub while
|
||||
running notebook servers as real system users on a shared system without
|
||||
running notebook servers as real system users on a shared system, without
|
||||
running the Hub itself as root.
|
||||
|
||||
Since JupyterHub needs to spawn processes as other users, the simplest way
|
||||
is to run it as root, spawning user servers with [setuid](http://linux.die.net/man/2/setuid).
|
||||
is to run it as root, spawning user servers with [setuid](https://linux.die.net/man/2/setuid).
|
||||
But this isn't especially safe, because you have a process running on the
|
||||
public web as root.
|
||||
|
||||
@@ -69,7 +71,8 @@ Cmnd_Alias JUPYTER_CMD = /usr/local/bin/sudospawner
|
||||
rhea ALL=(JUPYTER_USERS) NOPASSWD:JUPYTER_CMD
|
||||
```
|
||||
|
||||
It might be useful to modify `secure_path` to add commands in path.
|
||||
It might be useful to modify `secure_path` to add commands in path. (Search for
`secure_path` in the [sudo docs](https://www.sudo.ws/man/1.8.14/sudoers.man.html).)
|
||||
|
||||
As an alternative to adding every user to the `/etc/sudoers` file, you can
|
||||
use a group in the last line above, instead of `JUPYTER_USERS`:
|
||||
@@ -90,7 +93,7 @@ $ adduser -G jupyterhub newuser
|
||||
Test that the new user doesn't need to enter a password to run the sudospawner
|
||||
command.
|
||||
|
||||
This should prompt for your password to switch to rhea, but _not_ prompt for
|
||||
This should prompt for your password to switch to `rhea`, but _not_ prompt for
|
||||
any password for the second switch. It should show some help output about
|
||||
logging options:
|
||||
|
||||
@@ -113,13 +116,13 @@ sudo: a password is required
|
||||
|
||||
## Enable PAM for non-root
|
||||
|
||||
By default, [PAM authentication](http://en.wikipedia.org/wiki/Pluggable_authentication_module)
|
||||
By default, [PAM authentication](https://en.wikipedia.org/wiki/Pluggable_authentication_module)
|
||||
is used by JupyterHub. To use PAM, the process may need to be able to read
|
||||
the shadow password database.
|
||||
|
||||
### Shadow group (Linux)
|
||||
|
||||
**Note:** On Fedora based distributions there is no clear way to configure
|
||||
**Note:** On [Fedora based distributions](https://fedoraproject.org/wiki/List_of_Fedora_remixes) there is no clear way to configure
|
||||
the PAM database to allow sufficient access for authenticating with the target user's password
|
||||
from JupyterHub. As a workaround, we recommend using an
|
||||
[alternative authentication method](https://github.com/jupyterhub/jupyterhub/wiki/Authenticators).
|
||||
@@ -150,7 +153,7 @@ We want our new user to be able to read the shadow passwords, so add it to the s
|
||||
$ sudo usermod -a -G shadow rhea
|
||||
```
|
||||
|
||||
If you want jupyterhub to serve pages on a restricted port (such as port 80 for http),
|
||||
If you want jupyterhub to serve pages on a restricted port (such as port 80 for HTTP),
|
||||
then you will need to give `node` permission to do so:
|
||||
|
||||
```bash
|
||||
@@ -158,16 +161,18 @@ sudo setcap 'cap_net_bind_service=+ep' /usr/bin/node
|
||||
```
|
||||
|
||||
However, you may want to further understand the consequences of this.
|
||||
([Further reading](https://man7.org/linux/man-pages/man7/capabilities.7.html))
|
||||
|
||||
You may also be interested in limiting the amount of CPU any process can use
|
||||
on your server. `cpulimit` is a useful tool that is available for many Linux
|
||||
distributions' packaging system. This can be used to keep any user's process
|
||||
from using too many CPU cycles. You can configure it according to [these
|
||||
instructions](http://ubuntuforums.org/showthread.php?t=992706).
|
||||
instructions](https://ubuntuforums.org/showthread.php?t=992706).
|
||||
|
||||
### Shadow group (FreeBSD)
|
||||
|
||||
**NOTE:** This has not been tested and may not work as expected.
|
||||
**NOTE:** This has not been tested on FreeBSD and may not work as expected on
|
||||
the FreeBSD platform. _Do not use in production without verifying that it works properly!_
|
||||
|
||||
```bash
|
||||
$ ls -l /etc/spwd.db /etc/master.passwd
|
||||
@@ -225,8 +230,8 @@ And try logging in.
|
||||
|
||||
## Troubleshooting: SELinux
|
||||
|
||||
If you still get a generic `Permission denied` `PermissionError`, it's possible SELinux is blocking you.
|
||||
Here's how you can make a module to allow this.
|
||||
If you still get a generic `Permission denied` `PermissionError`, it's possible SELinux is blocking you.
|
||||
Here's how you can make a module to resolve this.
|
||||
First, put this in a file named `sudo_exec_selinux.te`:
|
||||
|
||||
```bash
|
||||
@@ -253,6 +258,6 @@ $ semodule -i sudo_exec_selinux.pp
|
||||
## Troubleshooting: PAM session errors
|
||||
|
||||
If the PAM authentication doesn't work and you see errors for
|
||||
`login:session-auth`, or similar, considering updating to a more recent version
|
||||
`login:session-auth`, or similar, consider updating to a more recent version
|
||||
of jupyterhub and disabling the opening of PAM sessions with
|
||||
`c.PAMAuthenticator.open_sessions=False`.
|
265
docs/source/howto/configuration/config-user-env.md
Normal file
@@ -0,0 +1,265 @@
|
||||
(howto:config:user-env)=
|
||||
|
||||
# Configuring user environments
|
||||
|
||||
To deploy JupyterHub means you are providing Jupyter notebook environments for
|
||||
multiple users. Often, this includes a desire to configure the user
|
||||
environment in a custom way.
|
||||
|
||||
Since the `jupyterhub-singleuser` server extends the standard Jupyter notebook
|
||||
server, most configuration and documentation that applies to Jupyter Notebook
|
||||
applies to the single-user environments. Configuration of user environments
|
||||
typically does not occur through JupyterHub itself, but rather through system-wide
|
||||
configuration of Jupyter, which is inherited by `jupyterhub-singleuser`.
|
||||
|
||||
**Tip:** When searching for configuration tips for JupyterHub user environments, you might want to remove JupyterHub from your search, because far more people configure Jupyter than JupyterHub, and the configuration is the same.
|
||||
|
||||
This section will focus on user environments, which includes the following:
|
||||
|
||||
- [Installing packages](#installing-packages)
|
||||
- [Configuring Jupyter and IPython](#configuring-jupyter-and-ipython)
|
||||
- [Installing kernelspecs](#installing-kernelspecs)
|
||||
- [Using containers vs. multi-user hosts](#multi-user-hosts-vs-containers)
|
||||
|
||||
## Installing packages
|
||||
|
||||
To make packages available to users, you will typically install packages system-wide or in a shared environment.
|
||||
|
||||
This installation location should always be in the same environment where
|
||||
`jupyterhub-singleuser` itself is installed in, and must be _readable and
|
||||
executable_ by your users. If you want your users to be able to install additional
|
||||
packages, the installation location must also be _writable_ by your users.
|
||||
|
||||
If you are using a standard Python installation on your system, use the following command:
|
||||
|
||||
```bash
|
||||
sudo python3 -m pip install numpy
|
||||
```
|
||||
|
||||
to install the numpy package in the default Python 3 environment on your system
|
||||
(typically `/usr/local`).
|
||||
|
||||
You may also use conda to install packages. If you do, you should make sure
|
||||
that the conda environment has appropriate permissions for users to be able to
|
||||
run Python code in the env. The env must be _readable and executable_ by all
|
||||
users. Additionally, it must be _writable_ if you want users to install
|
||||
additional packages.
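A sketch of installing into a shared conda environment (the `/opt/conda` prefix is an assumption):

```bash
sudo /opt/conda/bin/conda install --yes --prefix /opt/conda numpy
# keep the env readable and executable by all users
sudo chmod -R o+rX /opt/conda
```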
|
||||
|
||||
## Configuring Jupyter and IPython
|
||||
|
||||
[Jupyter](https://jupyter-notebook.readthedocs.io/en/stable/configuring/config_overview.html)
|
||||
and [IPython](https://ipython.readthedocs.io/en/stable/development/config.html)
|
||||
have their own configuration systems.
|
||||
|
||||
As a JupyterHub administrator, you will typically want to install and configure environments for all JupyterHub users. For example, let's say you wish for each student in a class to have the same user environment configuration.
|
||||
|
||||
Jupyter and IPython support **"system-wide"** locations for configuration, which is the logical place to put global configuration that you want to affect all users. It's generally more efficient to configure user environments "system-wide", and it's a good practice to avoid creating files in the users' home directories.
|
||||
The typical locations for these config files are:
|
||||
|
||||
- **system-wide** in `/etc/{jupyter|ipython}`
|
||||
- **env-wide** (environment wide) in `{sys.prefix}/etc/{jupyter|ipython}`.
|
||||
|
||||
### Jupyter environment configuration priority
|
||||
|
||||
When Jupyter runs in an environment (conda or virtualenv), it prefers to load configuration from the environment over each user's own configuration (e.g. in `~/.jupyter`).
|
||||
This may cause issues if you use a _shared_ conda environment or virtualenv for users, because e.g. jupyterlab may try to write information like workspaces or settings to the environment instead of the user's own directory.
|
||||
This could fail with something like `Permission denied: $PREFIX/etc/jupyter/lab`.
|
||||
|
||||
To avoid this issue, set `JUPYTER_PREFER_ENV_PATH=0` in the user environment:
|
||||
|
||||
```python
|
||||
c.Spawner.environment.update(
|
||||
{
|
||||
"JUPYTER_PREFER_ENV_PATH": "0",
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
which tells Jupyter to prefer _user_ configuration paths (e.g. in `~/.jupyter`) to configuration set in the environment.
|
||||
|
||||
### Example: Enable an extension system-wide
|
||||
|
||||
For example, to enable the `cython` IPython extension for all of your users, create the file `/etc/ipython/ipython_config.py`:
|
||||
|
||||
```python
|
||||
c.InteractiveShellApp.extensions.append("cython")
|
||||
```
|
||||
|
||||
### Example: Enable a Jupyter notebook configuration setting for all users
|
||||
|
||||
:::{note}
|
||||
These examples configure the Jupyter ServerApp, which is used by JupyterLab, the default in JupyterHub 2.0.
|
||||
|
||||
If you are using the classic Jupyter Notebook server,
|
||||
the same things should work,
|
||||
with the following substitutions:
|
||||
|
||||
- Search for `jupyter_server_config`, and replace with `jupyter_notebook_config`
|
||||
- Search for `ServerApp`, and replace with `NotebookApp`
|
||||
|
||||
:::
|
||||
|
||||
To enable Jupyter notebook's internal idle-shutdown behavior (requires notebook ≥ 5.4), set the following in the `/etc/jupyter/jupyter_server_config.py` file:
|
||||
|
||||
```python
|
||||
# shutdown the server after no activity for an hour
|
||||
c.ServerApp.shutdown_no_activity_timeout = 60 * 60
|
||||
# shutdown kernels after no activity for 20 minutes
|
||||
c.MappingKernelManager.cull_idle_timeout = 20 * 60
|
||||
# check for idle kernels every two minutes
|
||||
c.MappingKernelManager.cull_interval = 2 * 60
|
||||
```
|
||||
|
||||
## Installing kernelspecs
|
||||
|
||||
You may have multiple Jupyter kernels installed and want to make sure that they are available to all of your users. This means installing kernelspecs either system-wide (e.g. in /usr/local/) or in the `sys.prefix` of JupyterHub
|
||||
itself.
|
||||
|
||||
Jupyter kernelspec installation is system-wide by default, but some kernels
|
||||
may default to installing kernelspecs in your home directory. These will need
|
||||
to be moved system-wide to ensure that they are accessible.
|
||||
|
||||
To see where your kernelspecs are, you can use the following command:
|
||||
|
||||
```bash
|
||||
jupyter kernelspec list
|
||||
```
|
||||
|
||||
### Example: Installing kernels system-wide
|
||||
|
||||
Let's assume that I have Python 2 and Python 3 environments that I want to make available. I can install their specs **system-wide** (in `/usr/local`) using the following command:
|
||||
|
||||
```bash
|
||||
/path/to/python3 -m ipykernel install --prefix=/usr/local
|
||||
/path/to/python2 -m ipykernel install --prefix=/usr/local
|
||||
```
|
||||
|
||||
## Multi-user hosts vs. Containers
|
||||
|
||||
There are two broad categories of user environments that depend on what
|
||||
Spawner you choose:
|
||||
|
||||
- Multi-user hosts (shared system)
|
||||
- Container-based
|
||||
|
||||
How you configure user environments for each category can differ a bit
|
||||
depending on what Spawner you are using.
|
||||
|
||||
The first category is a **shared system (multi-user host)** where
|
||||
each user has a JupyterHub account and a home directory, and is
|
||||
a real system user. In this example, shared configuration and installation
|
||||
must be in a 'system-wide' location, such as `/etc/`, or `/usr/local`
|
||||
or a custom prefix such as `/opt/conda`.
|
||||
|
||||
When JupyterHub uses **container-based** Spawners (e.g. KubeSpawner or
|
||||
DockerSpawner), the 'system-wide' environment is really the container image used for users.
|
||||
|
||||
In both cases, you want to _avoid putting configuration in user home
|
||||
directories_ because users can change those configuration settings. Also, home directories typically persist once they are created, thereby making it difficult for admins to update later.
|
||||
|
||||
## Named servers
|
||||
|
||||
By default, in a JupyterHub deployment, each user has one server only.
|
||||
|
||||
JupyterHub can, however, have multiple servers per user.
|
||||
This is mostly useful in deployments where users can configure the environment in which their server will start (e.g. resource requests on an HPC cluster), so that a given user can have multiple configurations running at the same time, without having to stop and restart their own server.
|
||||
|
||||
To allow named servers, include this code snippet in your config file:
|
||||
|
||||
```python
|
||||
c.JupyterHub.allow_named_servers = True
|
||||
```
|
||||
|
||||
Named servers were implemented in the REST API in JupyterHub 0.8,
|
||||
and JupyterHub 1.0 introduced a UI for managing named servers via the user home page:
|
||||
|
||||

|
||||
|
||||
as well as the admin page:
|
||||
|
||||

|
||||
|
||||
Named servers can be accessed, created, started, stopped, and deleted
|
||||
from these pages. Activity tracking is now per server as well.
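Named servers can also be managed programmatically through the REST API. Below is a minimal sketch (not from the original docs) using the `POST` and `DELETE` `/users/:username/servers/:servername` endpoints; the hub URL, username, server name, and token are placeholders:

```python
import requests

hub_api = "http://127.0.0.1:8081/hub/api"  # example hub API URL
headers = {"Authorization": "token <api-token>"}

# start a named server called "analysis" for user "laudna"
r = requests.post(f"{hub_api}/users/laudna/servers/analysis", headers=headers)
r.raise_for_status()

# stop the same named server (DELETE stops it; see the REST API docs for removal)
r = requests.delete(f"{hub_api}/users/laudna/servers/analysis", headers=headers)
r.raise_for_status()
```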
|
||||
|
||||
To limit the number of **named servers** per user to a constant value, include this code snippet in your config file:
|
||||
|
||||
```python
|
||||
c.JupyterHub.named_server_limit_per_user = 5
|
||||
```
|
||||
|
||||
Alternatively, to use a callable/awaitable based on the handler object, include this code snippet in your config file:
|
||||
|
||||
```python
|
||||
def named_server_limit_per_user_fn(handler):
|
||||
user = handler.current_user
|
||||
if user and user.admin:
|
||||
return 0
|
||||
return 5
|
||||
|
||||
c.JupyterHub.named_server_limit_per_user = named_server_limit_per_user_fn
|
||||
```
|
||||
|
||||
This can be useful for quota service implementations. The example above limits the number of named servers for non-admin users only.
|
||||
|
||||
If `named_server_limit_per_user` is set to `0`, no limit is enforced.
|
||||
|
||||
When using named servers, Spawners may need additional configuration to take the `servername` into account. Whilst `KubeSpawner` takes the `servername` into account by default in [`pod_name_template`](https://jupyterhub-kubespawner.readthedocs.io/en/latest/spawner.html#kubespawner.KubeSpawner.pod_name_template), other Spawners may not. Check the documentation for the specific Spawner to see how singleuser servers are named; for example, in `DockerSpawner` this involves modifying the [`name_template`](https://jupyterhub-dockerspawner.readthedocs.io/en/latest/api/index.html) setting to include `servername`, e.g. `"{prefix}-{username}-{servername}"`.
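For instance, a hedged `DockerSpawner` sketch (the template string is taken from the paragraph above; check the DockerSpawner documentation for the exact option name in your version):

```python
# jupyterhub_config.py
# include the server name so each named server gets its own container
c.DockerSpawner.name_template = "{prefix}-{username}-{servername}"
```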
|
||||
|
||||
(classic-notebook-ui)=
|
||||
|
||||
## Switching back to the classic notebook
|
||||
|
||||
By default, the single-user server launches JupyterLab,
|
||||
which is based on [Jupyter Server][].
|
||||
|
||||
This is the default server when running JupyterHub ≥ 2.0.
|
||||
To switch to using the legacy Jupyter Notebook server (notebook < 7.0), you can set the `JUPYTERHUB_SINGLEUSER_APP` environment variable
|
||||
(in the single-user environment) to:
|
||||
|
||||
```bash
|
||||
export JUPYTERHUB_SINGLEUSER_APP='notebook.notebookapp.NotebookApp'
|
||||
```
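If you would rather set this from the Hub side than in the user environment itself, one option (a sketch, reusing the `Spawner.environment` pattern shown earlier in this document) is:

```python
# jupyterhub_config.py
c.Spawner.environment.update(
    {
        "JUPYTERHUB_SINGLEUSER_APP": "notebook.notebookapp.NotebookApp",
    }
)
```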
|
||||
|
||||
:::{note}
|
||||
|
||||
```
|
||||
JUPYTERHUB_SINGLEUSER_APP='notebook.notebookapp.NotebookApp'
|
||||
```
|
||||
|
||||
is only valid for notebook < 7. notebook v7 is based on jupyter-server,
|
||||
and the default jupyter-server application must be used.
|
||||
Selecting the new notebook UI is no longer a matter of selecting the server app to launch,
|
||||
but only the default URL for users to visit.
|
||||
To use notebook v7 with JupyterHub, leave the default singleuser app config alone (or specify `JUPYTERHUB_SINGLEUSER_APP=jupyter-server`) and set the default _URL_ for user servers:
|
||||
|
||||
```python
|
||||
c.Spawner.default_url = '/tree/'
|
||||
```
|
||||
|
||||
:::
|
||||
|
||||
[jupyter server]: https://jupyter-server.readthedocs.io
|
||||
[jupyter notebook]: https://jupyter-notebook.readthedocs.io
|
||||
|
||||
:::{versionchanged} 2.0
|
||||
|
||||
JupyterLab is now the default single-user UI, if available,
|
||||
which is based on the [Jupyter Server][],
|
||||
no longer the legacy [Jupyter Notebook][] server.
|
||||
JupyterHub prior to 2.0 launched the legacy notebook server (`jupyter notebook`),
|
||||
and the Jupyter server could be selected by specifying the following:
|
||||
|
||||
```python
|
||||
# jupyterhub_config.py
|
||||
c.Spawner.cmd = ["jupyter-labhub"]
|
||||
```
|
||||
|
||||
Alternatively, for an otherwise customized Jupyter Server app,
|
||||
set the environment variable using the following command:
|
||||
|
||||
```bash
|
||||
export JUPYTERHUB_SINGLEUSER_APP='jupyter_server.serverapp.ServerApp'
|
||||
```
|
||||
|
||||
:::
|
34
docs/source/howto/index.md
Normal file
34
docs/source/howto/index.md
Normal file
@@ -0,0 +1,34 @@
|
||||
# How-to
|
||||
|
||||
The _How-to_ guides provide practical step-by-step details to help you achieve a particular goal. They are useful when you are trying to get something done, but they require you to understand and adapt the steps to your specific use case.
|
||||
|
||||
Use the following guides:
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 1
|
||||
|
||||
api-only
|
||||
proxy
|
||||
rest
|
||||
separate-proxy
|
||||
templates
|
||||
upgrading
|
||||
log-messages
|
||||
|
||||
```
|
||||
|
||||
(config-examples)=
|
||||
|
||||
## Configuration
|
||||
|
||||
The following guides provide examples, including configuration files and tips, for the
|
||||
following:
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 1
|
||||
|
||||
configuration/config-user-env
|
||||
configuration/config-ghoauth
|
||||
configuration/config-proxy
|
||||
configuration/config-sudo
|
||||
```
|
74
docs/source/howto/log-messages.md
Normal file
74
docs/source/howto/log-messages.md
Normal file
@@ -0,0 +1,74 @@
|
||||
(howto:log-messages)=
|
||||
|
||||
# Interpreting common log messages
|
||||
|
||||
When debugging errors and outages, looking at the logs emitted by
|
||||
JupyterHub is very helpful. This document intends to describe some common
|
||||
log messages, what they mean, the most common causes that generate them, and some possible ways to fix them.
|
||||
|
||||
## Failing suspected API request to not-running server
|
||||
|
||||
### Example
|
||||
|
||||
Your logs might be littered with lines that look scary:
|
||||
|
||||
```
|
||||
[W 2022-03-10 17:25:19.774 JupyterHub base:1349] Failing suspected API request to not-running server: /hub/user/<user-name>/api/metrics/v1
|
||||
```
|
||||
|
||||
### Cause
|
||||
|
||||
This likely means that the user's server has stopped running but they
|
||||
still have a browser tab open. For example, you might have 3 tabs open and you shut
|
||||
the server down via one.
|
||||
Another possible reason could be that you closed your laptop and the server was culled for inactivity, then reopened the laptop!
|
||||
However, the client-side code (JupyterLab, Classic Notebook, etc.) doesn't realize the server has shut down and continues to make some API requests.
|
||||
|
||||
JupyterHub's architecture means that the proxy routes all requests that
|
||||
don't go to a running user server to the hub process itself. The hub
|
||||
process then explicitly returns a failure response, so the client knows
|
||||
that the server is not running anymore. This is used by JupyterLab to
|
||||
inform the user that the server is not running anymore, and provide an option
|
||||
to restart it.
|
||||
|
||||
Most commonly, you'll see this in reference to the `/api/metrics/v1`
|
||||
URL, used by [jupyter-resource-usage](https://github.com/jupyter-server/jupyter-resource-usage).
|
||||
|
||||
### Actions you can take
|
||||
|
||||
This log message is benign, and there is usually no action for you to take.
|
||||
|
||||
## JupyterHub Singleuser Version mismatch
|
||||
|
||||
### Example
|
||||
|
||||
```
|
||||
jupyterhub version 1.5.0 != jupyterhub-singleuser version 1.3.0. This could cause failure to authenticate and result in redirect loops!
|
||||
```
|
||||
|
||||
### Cause
|
||||
|
||||
JupyterHub requires the `jupyterhub` python package installed inside the image or
|
||||
environment the user server starts in. This message indicates that the version of
|
||||
the `jupyterhub` package installed inside the user image or environment is not
|
||||
the same as the JupyterHub server's version itself. This is not necessarily always a
|
||||
problem - some version drift is mostly acceptable, and the only two known cases of
|
||||
breakage are across the 0.7 and 2.0 version releases. In those cases, issues pop
|
||||
up immediately after upgrading your version of JupyterHub, so **always check the JupyterHub
|
||||
changelog before upgrading!** The primary problems this _could_ cause are:
|
||||
|
||||
1. Infinite redirect loops after the user server starts
|
||||
2. Missing expected environment variables in the user server once it starts
|
||||
3. Failure for the started user server to authenticate with the JupyterHub server -
|
||||
note that this is _not_ the same as _user authentication_ failing!
|
||||
|
||||
However, for the most part, unless you are seeing these specific issues, the log
|
||||
message should be counted as a warning to get the `jupyterhub` package versions
|
||||
aligned, rather than as an indicator of an existing problem.
|
||||
|
||||
### Actions you can take
|
||||
|
||||
Upgrade the version of the `jupyterhub` package in your user environment or image
|
||||
so that it matches the version of JupyterHub running your JupyterHub server! If you
|
||||
are using the [zero-to-jupyterhub](https://z2jh.jupyter.org) helm chart, you can find the appropriate
|
||||
version of the `jupyterhub` package to install in your user image [here](https://jupyterhub.github.io/helm-chart/).
|
@@ -1,3 +1,5 @@
|
||||
(howto:custom-proxy)=
|
||||
|
||||
# Writing a custom Proxy implementation
|
||||
|
||||
JupyterHub 0.8 introduced the ability to write a custom implementation of the
|
||||
@@ -7,9 +9,12 @@ Hub manages by default as a subprocess (it can be run externally, as well, and
|
||||
typically is in production deployments).
|
||||
|
||||
The upside to CHP, and why we use it by default, is that it's easy to install
|
||||
and run (if you have nodejs, you are set!). The downsides are that it's a
|
||||
single process and does not support any persistence of the routing table. So
|
||||
if the proxy process dies, your whole JupyterHub instance is inaccessible
|
||||
and run (if you have nodejs, you are set!). The downsides are that
|
||||
|
||||
- it's a single process and
|
||||
- does not support any persistence of the routing table.
|
||||
|
||||
So if the proxy process dies, your whole JupyterHub instance is inaccessible
|
||||
until the Hub notices, restarts the proxy, and restores the routing table. For
|
||||
deployments that want to avoid such a single point of failure, or leverage
|
||||
existing proxy infrastructure in their chosen deployment (such as Kubernetes
|
||||
@@ -138,7 +143,7 @@ async def delete_route(self, routespec):
|
||||
|
||||
For retrieval, you only _need_ to implement a single method that retrieves all
|
||||
routes. The return value for this function should be a dictionary, keyed by
|
||||
`routespect`, of dicts whose keys are the same three arguments passed to
|
||||
`routespec`, of dicts whose keys are the same three arguments passed to
|
||||
`add_route` (`routespec`, `target`, `data`)
|
||||
|
||||
```python
|
||||
@@ -204,7 +209,7 @@ setup(
|
||||
```
|
||||
|
||||
If you have added this metadata to your package,
|
||||
users can select your proxy with the configuration:
|
||||
admins can select your authenticator with the configuration:
|
||||
|
||||
```python
|
||||
c.JupyterHub.proxy_class = 'mything'
|
||||
@@ -216,7 +221,7 @@ instead of the full
|
||||
c.JupyterHub.proxy_class = 'mypackage:MyProxy'
|
||||
```
|
||||
|
||||
previously required.
|
||||
as previously required.
|
||||
Additionally, configurable attributes for your proxy will
|
||||
appear in jupyterhub help output and auto-generated configuration files
|
||||
via `jupyterhub --generate-config`.
|
@@ -1,61 +1,90 @@
|
||||
(rest-api)=
|
||||
(howto:rest-api)=
|
||||
|
||||
# Using JupyterHub's REST API
|
||||
|
||||
This section will give you information on:
|
||||
|
||||
- what you can do with the API
|
||||
- create an API token
|
||||
- add API tokens to the config files
|
||||
- make an API request programmatically using the requests library
|
||||
- learn more about JupyterHub's API
|
||||
- What you can do with the API
|
||||
- How to create an API token
|
||||
- Assigning permissions to a token
|
||||
- Updating to admin services
|
||||
- Making an API request programmatically using the requests library
|
||||
- Paginating API requests
|
||||
- Enabling users to spawn multiple named-servers via the API
|
||||
- Learn more about JupyterHub's API
|
||||
|
||||
## What you can do with the API
|
||||
|
||||
Using the [JupyterHub REST API][], you can perform actions on the Hub,
|
||||
such as:
|
||||
|
||||
- checking which users are active
|
||||
- adding or removing users
|
||||
- stopping or starting single user notebook servers
|
||||
- authenticating services
|
||||
- communicating with an individual Jupyter server's REST API
|
||||
|
||||
A [REST](https://en.wikipedia.org/wiki/Representational_state_transfer)
|
||||
Before we discuss about JupyterHub's REST API, you can learn about [REST APIs here](https://en.wikipedia.org/wiki/Representational_state_transfer). A REST
|
||||
API provides a standard way for users to get and send information to the
|
||||
Hub.
|
||||
|
||||
## What you can do with the API
|
||||
|
||||
Using the [JupyterHub REST API](jupyterhub-rest-API), you can perform actions on the Hub,
|
||||
such as:
|
||||
|
||||
- Checking which users are active
|
||||
- Adding or removing users
|
||||
- Adding or removing services
|
||||
- Stopping or starting single user notebook servers
|
||||
- Authenticating services
|
||||
- Communicating with an individual Jupyter server's REST API
|
||||
|
||||
## Create an API token
|
||||
|
||||
To send requests using JupyterHub API, you must pass an API token with
|
||||
To send requests using the JupyterHub API, you must pass an API token with
|
||||
the request.
|
||||
|
||||
The preferred way of generating an API token is:
|
||||
While JupyterHub is running, any JupyterHub user can request a token via the `token` page.
|
||||
This is accessible via a `token` link in the top nav bar from the JupyterHub home page,
|
||||
or at the URL `/hub/token`.
|
||||
|
||||
:::{figure-md}
|
||||
|
||||

|
||||
|
||||
JupyterHub's API token page
|
||||
:::
|
||||
|
||||
:::{figure-md}
|
||||

|
||||
|
||||
JupyterHub's token page after successfully requesting a token.
|
||||
|
||||
:::
|
||||
|
||||
### Register API tokens via configuration
|
||||
|
||||
Sometimes, you'll want to pre-generate a token for access to JupyterHub,
|
||||
typically for use by external services,
|
||||
so that both JupyterHub and the service have access to the same value.
|
||||
|
||||
First, you need to generate a good random secret.
|
||||
A good way of generating an API token is by running:
|
||||
|
||||
```bash
|
||||
openssl rand -hex 32
|
||||
```
|
||||
|
||||
This `openssl` command generates a potential token that can then be
|
||||
added to JupyterHub using `.api_tokens` configuration setting in
|
||||
`jupyterhub_config.py`.
|
||||
This `openssl` command generates a random token that can be added to the JupyterHub configuration in `jupyterhub_config.py`.
|
||||
|
||||
Alternatively, use the `jupyterhub token` command to generate a token
|
||||
for a specific hub user by passing the 'username':
|
||||
For external services, this would be registered with JupyterHub via configuration:
|
||||
|
||||
```bash
|
||||
jupyterhub token <username>
|
||||
```python
|
||||
c.JupyterHub.services = [
|
||||
{
|
||||
"name": "my-service",
|
||||
"api_token": the_secret_value,
|
||||
},
|
||||
]
|
||||
```
|
||||
|
||||
This command generates a random string to use as a token and registers
|
||||
it for the given user with the Hub's database.
|
||||
At this point, requests authenticated with the token will be associated with The service `my-service`.
|
||||
|
||||
In [version 0.8.0](../changelog.md), a token request page for
|
||||
generating an API token is available from the JupyterHub user interface:
|
||||
```{note}
|
||||
You can also load additional tokens for users via the `JupyterHub.api_tokens` configuration.
|
||||
|
||||

|
||||
|
||||

|
||||
However, this option has been deprecated since the introduction of services.
|
||||
```
|
||||
|
||||
## Assigning permissions to a token
|
||||
|
||||
@@ -67,25 +96,63 @@ Prior to JupyterHub 2.0, there were two levels of permissions:
|
||||
where a token would always have full permissions to do whatever its owner could do.
|
||||
|
||||
In JupyterHub 2.0,
|
||||
specific permissions are now defined as 'scopes',
|
||||
specific permissions are now defined as '**scopes**',
|
||||
and can be assigned both at the user/service level,
|
||||
and at the individual token level.
|
||||
The previous behavior is represented by the scope `inherit`,
|
||||
and is still the default behavior for requesting a token if limited permissions are not specified.
|
||||
|
||||
This allows e.g. a user with full admin permissions to request a token with limited permissions.
|
||||
|
||||
### Updating to admin services
|
||||
In JupyterHub 5.0, you can specify scopes for a token when requesting it via the `/hub/tokens` page as a space-separated list.
|
||||
In JupyterHub 3.0 and later, you can also request tokens with limited scopes via the JupyterHub API (provided you already have a token!):
|
||||
|
||||
```python
|
||||
import json
|
||||
from urllib.parse import quote
|
||||
|
||||
import requests
|
||||
|
||||
def request_token(
|
||||
username, *, api_token, scopes=None, expires_in=0, hub_url="http://127.0.0.1:8081"
|
||||
):
|
||||
"""Request a new token for a user"""
|
||||
request_body = {}
|
||||
if expires_in:
|
||||
request_body["expires_in"] = expires_in
|
||||
if scopes:
|
||||
request_body["scopes"] = scopes
|
||||
url = hub_url.rstrip("/") + f"/hub/api/users/{quote(username)}/tokens"
|
||||
r = requests.post(
|
||||
url,
|
||||
data=json.dumps(request_body),
|
||||
headers={"Authorization": f"token {api_token}"},
|
||||
)
|
||||
if r.status_code >= 400:
|
||||
# extract error message for nicer error messages
|
||||
r.reason = r.json().get("message", r.text)
|
||||
r.raise_for_status()
|
||||
# response is a dict and will include the token itself in the 'token' field,
|
||||
# as well as other fields about the token
|
||||
return r.json()
|
||||
|
||||
request_token("myusername", scopes=["list:users"], api_token="abc123")
|
||||
```
|
||||
|
||||
## Updating to admin services
|
||||
|
||||
```{note}
|
||||
The `api_tokens` configuration has been softly deprecated since the introduction of services.
|
||||
We have no plans to remove it,
|
||||
but deployments are encouraged to use service configuration instead.
|
||||
```
|
||||
|
||||
If you have been using `api_tokens` to create an admin user
|
||||
and a token for that user to perform some automations,
|
||||
the services mechanism may be a better fit.
|
||||
If you have the following configuration:
|
||||
and the token for that user to perform some automations, then
|
||||
the services' mechanism may be a better fit if you have the following configuration:
|
||||
|
||||
```python
|
||||
c.JupyterHub.admin_users = {"service-admin",}
|
||||
c.JupyterHub.admin_users = {"service-admin"}
|
||||
c.JupyterHub.api_tokens = {
|
||||
"secret-token": "service-admin",
|
||||
}
|
||||
@@ -103,9 +170,8 @@ c.JupyterHub.services = [
|
||||
},
|
||||
]
|
||||
|
||||
# roles are new in JupyterHub 2.0
|
||||
# prior to 2.0, only 'admin': True or False
|
||||
# was available
|
||||
# roles were introduced in JupyterHub 2.0
|
||||
# prior to 2.0, only "admin": True or False was available
|
||||
|
||||
c.JupyterHub.load_roles = [
|
||||
{
|
||||
@@ -125,7 +191,7 @@ c.JupyterHub.load_roles = [
|
||||
The token will have the permissions listed in the role
|
||||
(see [scopes][] for a list of available permissions),
|
||||
but there will no longer be a user account created to house it.
|
||||
The main noticeable difference is that there will be no notebook server associated with the account
|
||||
The main noticeable difference between a user and a service is that there will be no notebook server associated with the account
|
||||
and the service will not show up in the various user list pages and APIs.
|
||||
|
||||
## Make an API request
|
||||
@@ -136,9 +202,8 @@ Authorization header.
|
||||
### Use requests
|
||||
|
||||
Using the popular Python [requests](https://docs.python-requests.org)
|
||||
library, here's example code to make an API request for the users of a JupyterHub
|
||||
deployment. An API GET request is made, and the request sends an API token for
|
||||
authorization. The response contains information about the users:
|
||||
library, an API GET request is made to [/users](rest-api-get-users), and the request sends an API token for
|
||||
authorization. The response contains information about the users, here's example code to make an API request for the users of a JupyterHub deployment
|
||||
|
||||
```python
|
||||
import requests
|
||||
@@ -155,7 +220,7 @@ r.raise_for_status()
|
||||
users = r.json()
|
||||
```
|
||||
|
||||
This example provides a slightly more complicated request, yet the
|
||||
This example provides a slightly more complicated request (to [/groups/formgrade-data301/users](rest-api-post-group-users)), yet the
|
||||
process is very similar:
|
||||
|
||||
```python
|
||||
@@ -176,7 +241,8 @@ r.json()
|
||||
```
|
||||
|
||||
The same API token can also authorize access to the [Jupyter Notebook REST API][]
|
||||
provided by notebook servers managed by JupyterHub if it has the necessary `access:users:servers` scope:
|
||||
|
||||
provided by notebook servers managed by JupyterHub if it has the necessary `access:servers` scope.
|
||||
|
||||
(api-pagination)=
|
||||
|
||||
@@ -188,7 +254,7 @@ provided by notebook servers managed by JupyterHub if it has the necessary `acce
|
||||
|
||||
Pagination is available through the `offset` and `limit` query parameters on
|
||||
list endpoints, which can be used to return ideally sized windows of results.
|
||||
Here's example code demonstrating pagination on the `GET /users`
|
||||
Here's example code demonstrating pagination on the [`GET /users`](rest-api-get-users)
|
||||
endpoint to fetch the first 20 records.
|
||||
|
||||
```python
|
||||
@@ -245,7 +311,7 @@ with your request, in which case a response will look like:
|
||||
|
||||
where the list results (same as pre-2.0) will be in `items`,
|
||||
and pagination info will be in `_pagination`.
|
||||
The `next` field will include the offset, limit, and URL for requesting the next page.
|
||||
The `next` field will include the `offset`, `limit`, and `url` for requesting the next page.
|
||||
`next` will be `null` if there is no next page.
|
||||
|
||||
Pagination is governed by two configuration options:
|
||||
@@ -259,7 +325,7 @@ Pagination is enabled on the `GET /users`, `GET /groups`, and `GET /proxy` REST
|
||||
|
||||
## Enabling users to spawn multiple named-servers via the API
|
||||
|
||||
With JupyterHub version 0.8, support for multiple servers per user has landed.
|
||||
Support for multiple servers per user was introduced in JupyterHub [version 0.8.](changelog)
|
||||
Prior to that, each user could only launch a single default server via the API
|
||||
like this:
|
||||
|
||||
@@ -275,9 +341,9 @@ First you must enable named-servers by including the following setting in the `j
|
||||
|
||||
`c.JupyterHub.allow_named_servers = True`
|
||||
|
||||
If using the [zero-to-jupyterhub-k8s](https://github.com/jupyterhub/zero-to-jupyterhub-k8s) set-up to run JupyterHub,
|
||||
If you are using the [zero-to-jupyterhub-k8s](https://github.com/jupyterhub/zero-to-jupyterhub-k8s) set-up to run JupyterHub,
|
||||
then instead of editing the `jupyterhub_config.py` file directly, you could pass
|
||||
the following as part of the `config.yaml` file, as per the [tutorial](https://zero-to-jupyterhub.readthedocs.io/en/latest/):
|
||||
the following as part of the `config.yaml` file, as per the [tutorial](https://z2jh.jupyter.org/en/latest/):
|
||||
|
||||
```bash
|
||||
hub:
|
||||
@@ -287,12 +353,18 @@ hub:
|
||||
|
||||
With that setting in place, a new named-server is activated like this:
|
||||
|
||||
```{parsed-literal}
|
||||
[POST /api/users/:username/servers/:servername](rest-api-post-user-server-name)
|
||||
```
|
||||
|
||||
e.g.
|
||||
|
||||
```bash
|
||||
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverA>"
|
||||
curl -X POST -H "Authorization: token <token>" "http://127.0.0.1:8081/hub/api/users/<user>/servers/<serverB>"
|
||||
```
|
||||
|
||||
The same servers can be stopped by substituting `DELETE` for `POST` above.
|
||||
The same servers can be [stopped](rest-api-delete-user-server-name) by substituting `DELETE` for `POST` above.
|
||||
|
||||
### Some caveats for using named-servers
|
||||
|
||||
@@ -303,8 +375,9 @@ or kubernetes pods.
|
||||
|
||||
## Learn more about the API
|
||||
|
||||
You can see the full [JupyterHub REST API][] for details.
|
||||
You can see the full [JupyterHub REST API](jupyterhub-rest-api) for more details.
|
||||
|
||||
[openapi initiative]: https://www.openapis.org/
|
||||
[jupyterhub rest api]: ./rest-api
|
||||
[scopes]: ../rbac/scopes.md
|
||||
[jupyter notebook rest api]: https://petstore3.swagger.io/?url=https://raw.githubusercontent.com/jupyter/notebook/HEAD/notebook/services/api/api.yaml
|
@@ -1,8 +1,10 @@
|
||||
(howto:separate-proxy)=
|
||||
|
||||
# Running proxy separately from the hub
|
||||
|
||||
## Background
|
||||
|
||||
The thing which users directly connect to is the proxy, by default
|
||||
The thing which users directly connect to is the proxy, which by default is
|
||||
`configurable-http-proxy`. The proxy either redirects users to the
|
||||
hub (for login and managing servers), or to their own single-user
|
||||
servers. Thus, as long as the proxy stays running, access to existing
|
||||
@@ -10,16 +12,15 @@ servers continues, even if the hub itself restarts or goes down.
|
||||
|
||||
When you first configure the hub, you may not even realize this
|
||||
because the proxy is automatically managed by the hub. This is great
|
||||
for getting started and even most use, but everytime you restart the
|
||||
hub, all user connections also get restarted. But it's also simple to
|
||||
for getting started and even most use-cases, although, everytime you restart the
|
||||
hub, all user connections are also restarted. However, it is also simple to
|
||||
run the proxy as a service separate from the hub, so that you are free
|
||||
to reconfigure the hub while only interrupting users who are currently
|
||||
actively starting the hub.
|
||||
to reconfigure the hub while only interrupting users who are waiting for their notebook server to start.
|
||||
starting their notebook server.
|
||||
|
||||
The default JupyterHub proxy is
|
||||
[configurable-http-proxy](https://github.com/jupyterhub/configurable-http-proxy),
|
||||
and that page has some docs. If you are using a different proxy, such
|
||||
as Traefik, these instructions are probably not relevant to you.
|
||||
[configurable-http-proxy](https://github.com/jupyterhub/configurable-http-proxy). If you are using a different proxy, such
|
||||
as [Traefik](https://github.com/traefik/traefik), these instructions are probably not relevant to you.
|
||||
|
||||
## Configuration options
|
||||
|
||||
@@ -40,9 +41,14 @@ set to the URL which the hub uses to connect _to the proxy's API_.
|
||||
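For reference, a minimal sketch of the hub-side settings for an externally managed proxy (option names are from JupyterHub's `ConfigurableHTTPProxy` configuration; the token is a placeholder and must match the one given to the proxy):

```python
# jupyterhub_config.py
c.ConfigurableHTTPProxy.should_start = False  # do not let the hub launch the proxy
c.ConfigurableHTTPProxy.api_url = "http://127.0.0.1:8001"  # where the proxy's API listens
c.ConfigurableHTTPProxy.auth_token = "<shared-secret>"
```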
## Proxy configuration
|
||||
|
||||
You need to configure a service to start the proxy. An example
|
||||
command line for this is `configurable-http-proxy --ip=127.0.0.1 --port=8000 --api-ip=127.0.0.1 --api-port=8001 --default-target=http://localhost:8081 --error-target=http://localhost:8081/hub/error`. (Details for how to
|
||||
do this is out of scope for this tutorial - for example it might be a
|
||||
systemd service on within another docker cotainer). The proxy has no
|
||||
command line argument for this is:
|
||||
|
||||
```bash
|
||||
$ configurable-http-proxy --ip=127.0.0.1 --port=8000 --api-ip=127.0.0.1 --api-port=8001 --default-target=http://localhost:8081 --error-target=http://localhost:8081/hub/error
|
||||
```
|
||||
|
||||
(Details on how to do this is out of the scope of this tutorial. For example, it might be a
|
||||
systemd service configured within another docker container). The proxy has no
|
||||
configuration files, all configuration is via the command line and
|
||||
environment variables.
|
||||
|
||||
@@ -57,14 +63,14 @@ match the token given to `c.ConfigurableHTTPProxy.auth_token`.
|
||||
|
||||
You should check the [configurable-http-proxy
|
||||
options](https://github.com/jupyterhub/configurable-http-proxy) to see
|
||||
what other options are needed, for example SSL options. Note that
|
||||
these are configured in the hub if the hub is starting the proxy - you
|
||||
need to move the options to here.
|
||||
what other options are needed, for example, SSL options. Note that
|
||||
these options are configured in the hub if the hub is starting the proxy, so you
|
||||
need to configure the options there.
|
||||
|
||||
## Docker image
|
||||
|
||||
You can use [jupyterhub configurable-http-proxy docker
|
||||
image](https://hub.docker.com/r/jupyterhub/configurable-http-proxy/)
|
||||
image](https://quay.io/repository/jupyterhub/configurable-http-proxy)
|
||||
to run the proxy.
|
||||
|
||||
## See also
|
@@ -1,28 +1,31 @@
|
||||
(howto:templates)=
|
||||
|
||||
# Working with templates and UI
|
||||
|
||||
The pages of the JupyterHub application are generated from
|
||||
[Jinja](http://jinja.pocoo.org/) templates. These allow the header, for
|
||||
[Jinja](https://jinja.palletsprojects.com) templates. These allow the header, for
|
||||
example, to be defined once and incorporated into all pages. By providing
|
||||
your own templates, you can have complete control over JupyterHub's
|
||||
your own template(s), you can have complete control over JupyterHub's
|
||||
appearance.
|
||||
|
||||
## Custom Templates
|
||||
|
||||
JupyterHub will look for custom templates in all of the paths in the
|
||||
`JupyterHub.template_paths` configuration option, falling back on the
|
||||
JupyterHub will look for custom templates in all paths included in the
|
||||
`JupyterHub.template_paths` configuration option, falling back on these
|
||||
[default templates](https://github.com/jupyterhub/jupyterhub/tree/HEAD/share/jupyterhub/templates)
|
||||
if no custom template with that name is found. This fallback
|
||||
behavior is new in version 0.9; previous versions searched only those paths
|
||||
if no custom template(s) with specified name(s) are found. This fallback
|
||||
behavior is new in version 0.9; previous versions searched only the paths
|
||||
explicitly included in `template_paths`. You may override as many
|
||||
or as few templates as you desire.
|
||||
|
||||
## Extending Templates
|
||||
|
||||
Jinja provides a mechanism to [extend templates](http://jinja.pocoo.org/docs/2.10/templates/#template-inheritance).
|
||||
A base template can define a `block`, and child templates can replace or
|
||||
supplement the material in the block. The
|
||||
[JupyterHub templates](https://github.com/jupyterhub/jupyterhub/tree/HEAD/share/jupyterhub/templates)
|
||||
make extensive use of blocks, which allows you to customize parts of the
|
||||
Jinja provides a mechanism to [extend templates](https://jinja.palletsprojects.com/en/3.0.x/templates/#template-inheritance).
|
||||
|
||||
A base template can define `block`(s) within itself that child templates can fill up or
|
||||
supply content to. The
|
||||
[JupyterHub default templates](https://github.com/jupyterhub/jupyterhub/tree/HEAD/share/jupyterhub/templates)
|
||||
make extensive use of blocks, thus allowing you to customize parts of the
|
||||
interface easily.
|
||||
|
||||
In general, a child template can extend a base template, `page.html`, by beginning with:
|
||||
@@ -40,15 +43,15 @@ file with this block:
|
||||
{% extends "templates/page.html" %}
|
||||
```
|
||||
|
||||
By defining `block`s with same name as in the base template, child templates
|
||||
By defining `block`s with the same name as in the base template, child templates
|
||||
can replace those sections with custom content. The content from the base
|
||||
template can be included with the `{{ super() }}` directive.
|
||||
template can be included in the child template with the `{{ super() }}` directive.
|
||||
|
||||
### Example
|
||||
|
||||
To add an additional message to the spawn-pending page, below the existing
|
||||
text about the server starting up, place this content in a file named
|
||||
`spawn_pending.html` in a directory included in the
|
||||
text about the server starting up, place the content below in a file named
|
||||
`spawn_pending.html`. This directory must also be included in the
|
||||
`JupyterHub.template_paths` configuration option.
|
||||
|
||||
```html
|
||||
@@ -61,7 +64,7 @@ text about the server starting up, place this content in a file named
|
||||
|
||||
To add announcements to be displayed on a page, you have two options:
|
||||
|
||||
- Extend the page templates as described above
|
||||
- [Extend the page templates as described above](#extending-templates)
|
||||
- Use configuration variables
|
||||
|
||||
### Announcement Configuration Variables
|
||||
@@ -71,10 +74,10 @@ the top of all pages. The more specific variables
|
||||
`announcement_login`, `announcement_spawn`, `announcement_home`, and
|
||||
`announcement_logout` are more specific and only show on their
|
||||
respective pages (overriding the global `announcement` variable).
|
||||
Note that changing these variables require a restart, unlike direct
|
||||
Note that changing these variables requires a restart, unlike direct
|
||||
template extension.
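As a sketch (assuming these announcement variables are supplied through `JupyterHub.template_vars`; the messages are only examples), the configuration could look like:

```python
# jupyterhub_config.py
c.JupyterHub.template_vars = {
    "announcement": "Scheduled maintenance this Saturday, 02:00-04:00 UTC",
    "announcement_login": "Log in with your regular account.",
}
```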
|
||||
|
||||
You can get the same effect by extending templates, which allows you
|
||||
Alternatively, you can get the same effect by extending templates, which allows you
|
||||
to update the messages without restarting. Set
|
||||
`c.JupyterHub.template_paths` as mentioned above, and then create a
|
||||
template (for example, `login.html`) with:
|
||||
@@ -84,5 +87,5 @@ template (for example, `login.html`) with:
|
||||
```
|
||||
|
||||
Extending `page.html` puts the message on all pages, but note that
|
||||
extending `page.html` take precedence over an extension of a specific
|
||||
extending `page.html` takes precedence over an extension of a specific
|
||||
page (unlike the variable-based approach above).
|
144
docs/source/howto/upgrading-v5.md
Normal file
144
docs/source/howto/upgrading-v5.md
Normal file
@@ -0,0 +1,144 @@
|
||||
(howto:upgrading-v5)=
|
||||
|
||||
# Upgrading to JupyterHub 5
|
||||
|
||||
This document describes the considerations specific to upgrading to JupyterHub 5.
|
||||
For general upgrading tips, see the [docs on upgrading jupyterhub](upgrading).
|
||||
|
||||
You can see the [changelog](changelog) for more detailed information.
|
||||
|
||||
## Python version
|
||||
|
||||
JupyterHub 5 requires Python 3.8.
|
||||
Make sure you have at least Python 3.8 in your user and hub environments before upgrading.
|
||||
|
||||
## Database upgrades
|
||||
|
||||
JupyterHub 5 does have a database schema upgrade,
|
||||
so you should back up your database and run `jupyterhub upgrade-db` after upgrading and before starting JupyterHub.
|
||||
The updated schema only adds some columns, so it should not be too disruptive to roll back if you need to.
|
||||
|
||||
## User subdomains
|
||||
|
||||
All JupyterHub deployments which care about protecting users from each other are encouraged to enable per-user domains, if possible,
|
||||
as this provides the best isolation between user servers.
|
||||
|
||||
To enable subdomains, set:
|
||||
|
||||
```python
|
||||
c.JupyterHub.subdomain_host = "https://myjupyterhub.example.org"
|
||||
```
|
||||
|
||||
If you were using subdomains before, some user servers and all services will be on different hosts in the default configuration.
|
||||
|
||||
JupyterHub 5 allows complete customization of the subdomain scheme via the new {attr}`.JupyterHub.subdomain_hook`,
|
||||
and changes the default subdomain scheme.
|
||||
|
||||
|
||||
You can provide a completely custom subdomain scheme, or select one of two default implementations by name: `idna` or `legacy`. `idna` is the default.
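As an illustration, a custom hook might look roughly like this (a sketch only: the `(name, domain, kind)` signature reflects my reading of the JupyterHub 5 configuration reference, and the naming scheme shown is arbitrary):

```python
# jupyterhub_config.py
def my_subdomain_hook(name, domain, kind):
    """Return the host to use for a user or service.

    `kind` is "user" or "service"; `domain` is the hub's domain.
    """
    if kind == "user":
        return f"u-{name}.{domain}"
    return f"{name}--{kind}.{domain}"

c.JupyterHub.subdomain_hook = my_subdomain_hook
```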
|
||||
|
||||
The new default behavior can be selected explicitly via:
|
||||
|
||||
```python
|
||||
c.JupyterHub.subdomain_hook = "idna"
|
||||
```
|
||||
|
||||
Or to delay any changes to URLs for your users, you can opt-in to the pre-5.0 behavior with:
|
||||
|
||||
```python
|
||||
c.JupyterHub.subdomain_hook = "legacy"
|
||||
```
|
||||
|
||||
The key differences of the new `idna` scheme:
|
||||
|
||||
- It should always produce valid domains, regardless of username (not true for the legacy scheme when using characters that might need escaping or usernames that are long)
|
||||
- Each service gets its own subdomain ending in `--service.` (e.g. `bells--service.`) rather than sharing `services.`
|
||||
|
||||
Below is a table of examples of users and services with their domains with the old and new scheme, assuming the configuration:
|
||||
|
||||
```python
|
||||
c.JupyterHub.subdomain_host = "https://jupyter.example.org"
|
||||
```
|
||||
|
||||
| kind | name | legacy | idna |
|
||||
| ------- | ------------------ | ---------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
|
||||
| user | laudna | `laudna.jupyter.example.org` | `laudna.jupyter.example.org` |
|
||||
| service | bells | `services.jupyter.example.org` | `bells--service.jupyter.example.org` |
|
||||
| user | jester@mighty.nein | `jester_40mighty.nein.jupyter.example.org` (may not work!) | `u-jestermi--8037680.jupyter.example.org` (not as pretty, but guaranteed to be valid and not collide) |
|
||||
|
||||
## Tokens in URLs
|
||||
|
||||
JupyterHub 5 does not accept `?token=...` URLs by default in single-user servers.
|
||||
These URLs allow one user to force another to log in as them,
|
||||
which can be the start of an inter-user attack.
|
||||
|
||||
There is a valid use case for producing links which allow starting a fully authenticated session,
|
||||
so you may still opt in to this behavior by setting:
|
||||
|
||||
```python
|
||||
c.Spawner.environment.update({"JUPYTERHUB_ALLOW_TOKEN_IN_URL": "1"})
|
||||
```
|
||||
|
||||
if you are not concerned about protecting your users from each other.
|
||||
If you have subdomains enabled, the threat is substantially reduced.
|
||||
|
||||
## Sharing
|
||||
|
||||
The big new feature in JupyterHub 5.0 is sharing.
|
||||
Check it out in [the sharing docs](sharing-tutorial).
|
||||
|
||||
## Authenticator.allow_all and allow_existing_users
|
||||
|
||||
Prior to JupyterHub 5, JupyterHub Authenticators had the _implicit_ default behavior to allow any user who successfully authenticates to login **if no users are explicitly allowed** (i.e. `allowed_users` is empty on the base class).
|
||||
This behavior was considered a too-permissive default in Authenticators that source large user pools like OAuthenticator, which would accept e.g. all users with a Google account by default.
|
||||
As a result, OAuthenticator 16 introduced two configuration options: `allow_all` and `allow_existing_users`.
|
||||
|
||||
JupyterHub 5 adopts these options for all Authenticators:
|
||||
|
||||
1. `Authenticator.allow_all` (default: False)
|
||||
2. `Authenticator.allow_existing_users` (default: True if allowed_users is non-empty, False otherwise)
|
||||
|
||||
having the effect that _some_ allow configuration is required for anyone to be able to login.
|
||||
If you want to preserve the pre-5.0 behavior with no explicit `allow` configuration, set:
|
||||
|
||||
```python
|
||||
c.Authenticator.allow_all = True
|
||||
```
|
||||
|
||||
`allow_existing_users` defaults are meant to be backward-compatible, but you can now _explicitly_ allow or not based on presence in the database by setting `Authenticator.allow_existing_users` to True or False.
|
||||
|
||||
:::{seealso}
|
||||
|
||||
[Authenticator config docs](authenticators) for details on these and other Authenticator options.
|
||||
:::
|
||||
|
||||
## Bootstrap 5
|
||||
|
||||
JupyterHub uses the CSS framework [bootstrap](https://getbootstrap.com), which is upgraded from 3.4 to 5.3.
|
||||
If you don't have any custom HTML templates, you are likely to only see relatively minor aesthetic changes.
|
||||
If you have custom HTML templates or spawner options forms, they may need some updating to look right.
|
||||
|
||||
See the bootstrap documentation. Since we upgraded two major versions, you might need to look at both v4 and v5 documentation for what has changed since 3.x:
|
||||
|
||||
- [migrating to v4](https://getbootstrap.com/docs/4.6/migration/)
|
||||
- [migrating to v5](https://getbootstrap.com/docs/5.3/migration/)
|
||||
|
||||
If you customized the JupyterHub CSS by recompiling from LESS files, note that bootstrap has migrated to SCSS.
|
||||
You can start by autoconverting your LESS to SCSS (it's not that different) with [less2sass](https://github.com/ekryski/less2sass):
|
||||
|
||||
```bash
|
||||
npm install --global less2scss
|
||||
# converts less/foo.less to scss/foo.scss
|
||||
less2scss --src ./less --dst ./scss
|
||||
```
|
||||
|
||||
Bootstrap also allows configuring things with [CSS variables](https://getbootstrap.com/docs/5.3/customize/css-variables/), so depending on what you have customized, you may be able to get away with just adding a CSS file defining variables without rebuilding the whole SCSS.
|
||||
|
||||
## groups required with Authenticator.manage_groups
|
||||
|
||||
Setting `Authenticator.manage_groups = True` allows the Authenticator to manage group membership by returning `groups` from the authentication model.
|
||||
However, this option is available even on Authenticators that do not support it, which led to confusion.
|
||||
Starting with JupyterHub 5, if `manage_groups` is True, `authenticate` _must_ return a `groups` field; otherwise, an error is raised.
|
||||
This prevents confusion when managed groups are enabled but the Authenticator does not actually implement them.
|
||||
|
||||
If an Authenticator _does_ support managing groups but was not providing a `groups` field in order to leave membership unmodified, it must specify `"groups": None` to make this explicit instead of implicit (this is backward-compatible).
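For illustration, here is a sketch of a hypothetical Authenticator using managed groups (class name, credential check, and group names are made up):

```python
from jupyterhub.auth import Authenticator


class MyAuthenticator(Authenticator):
    """Hypothetical Authenticator illustrating managed groups."""

    manage_groups = True

    async def authenticate(self, handler, data):
        # ...verify credentials here (omitted)...
        return {
            "name": data["username"],
            # a list of group names sets the user's managed group membership;
            # use "groups": None to leave membership unmodified
            "groups": ["students"],
        }
```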
|
149
docs/source/howto/upgrading.md
Normal file
149
docs/source/howto/upgrading.md
Normal file
@@ -0,0 +1,149 @@
|
||||
(howto:upgrading-jupyterhub)=
|
||||
|
||||
# Upgrading JupyterHub
|
||||
|
||||
JupyterHub offers easy upgrade pathways between minor versions. This
|
||||
document describes how to do these upgrades.
|
||||
|
||||
If you are using {ref}`a JupyterHub distribution <index/distributions>`, you
|
||||
should consult the distribution's documentation on how to upgrade. This documentation is
|
||||
for those who have set up their JupyterHub without using a distribution.
|
||||
|
||||
This documentation is lengthy because it is quite detailed. Most likely, upgrading
|
||||
JupyterHub is painless, quick, and causes minimal user interruption.
|
||||
|
||||
The steps are discussed in detail, so if you get stuck at any step you can always refer to this guide.
|
||||
|
||||
For specific version migrations:
|
||||
|
||||
```{toctree}
|
||||
:maxdepth: 1
|
||||
|
||||
./upgrading-v5
|
||||
```
|
||||
|
||||
## Read the Changelog
|
||||
|
||||
The [changelog](changelog) contains information on what has
|
||||
changed with the new JupyterHub release and any deprecation warnings.
|
||||
Read these notes to familiarize yourself with the coming changes. There
|
||||
might be new releases of the authenticators & spawners you use, so
|
||||
read the changelogs for those too!
|
||||
|
||||
## Notify your users
|
||||
|
||||
If you use the default configuration where `configurable-http-proxy`
|
||||
is managed by JupyterHub, your users will see service disruption during
|
||||
the upgrade process. You should notify them, and pick a time to do the
|
||||
upgrade where they will be least disrupted.
|
||||
|
||||
If you use a different proxy or run `configurable-http-proxy`
|
||||
independent of JupyterHub, your users will be able to continue using notebook
|
||||
servers they had already launched, but will not be able to launch new servers or sign in.
|
||||
|
||||
## Backup database & config
|
||||
|
||||
Before doing an upgrade, it is critical to back up:
|
||||
|
||||
1. Your JupyterHub database (SQLite by default, or MySQL / Postgres if you used those).
|
||||
If you use SQLite (the default), you should backup the `jupyterhub.sqlite` file.
|
||||
2. Your `jupyterhub_config.py` file.
|
||||
3. Your users' home directories. This is unlikely to be affected directly by
|
||||
a JupyterHub upgrade, but we recommend a backup since user data is critical.
|
||||
|
||||
## Shut down JupyterHub
|
||||
|
||||
Shut down the JupyterHub process. This would vary depending on how you
|
||||
have set up JupyterHub to run. It is most likely using a process
|
||||
supervisor of some sort (`systemd` or `supervisord` or even `docker`).
|
||||
Use the supervisor-specific command to stop the JupyterHub process.
|
||||
|
||||
## Upgrade JupyterHub packages
|
||||
|
||||
There are two environments where the `jupyterhub` package is installed:
|
||||
|
||||
1. The _hub environment_: where the JupyterHub server process
|
||||
runs. This is started with the `jupyterhub` command, and is what
|
||||
people generally think of as JupyterHub.
|
||||
2. The _notebook user environments_: where the user notebook
|
||||
servers are launched from, and is probably custom to your own
|
||||
installation. This could be just one environment (different from the
|
||||
hub environment) that is shared by all users, one environment
|
||||
per user, or the same environment as the hub environment. The hub
|
||||
launches the `jupyterhub-singleuser` command in this environment,
|
||||
which in turn starts the notebook server.
|
||||
|
||||
You need to make sure the version of the `jupyterhub` package matches
|
||||
in both these environments. If you installed `jupyterhub` with pip,
|
||||
you can upgrade it with:
|
||||
|
||||
```bash
|
||||
python3 -m pip install --upgrade jupyterhub==<version>
|
||||
```
|
||||
|
||||
Where `<version>` is the version of JupyterHub you are upgrading to.
|
||||
|
||||
If you used `conda` to install `jupyterhub`, you should upgrade it
|
||||
with:
|
||||
|
||||
```bash
|
||||
conda install -c conda-forge jupyterhub==<version>
|
||||
```
|
||||
|
||||
You should also check for new releases of the authenticator & spawner you
|
||||
are using. You might wish to upgrade those packages, too, along with JupyterHub
|
||||
or upgrade them separately.
|
||||
|
||||
## Upgrade JupyterHub database
|
||||
|
||||
Once new packages are installed, you need to upgrade the JupyterHub
|
||||
database. From the hub environment, in the same directory as your
|
||||
`jupyterhub_config.py` file, you should run:
|
||||
|
||||
```bash
|
||||
jupyterhub upgrade-db
|
||||
```
|
||||
|
||||
This should find the location of your database, and run the necessary upgrades
|
||||
for it.
|
||||
|
||||
### SQLite database disadvantages
|
||||
|
||||
SQLite has some disadvantages when it comes to upgrading JupyterHub. These
|
||||
are:
|
||||
|
||||
- `upgrade-db` may not work, and you may need to delete your database
|
||||
and start with a fresh one.
|
||||
- `downgrade-db` **will not** work if you want to rollback to an
|
||||
earlier version, so backup the `jupyterhub.sqlite` file before
|
||||
upgrading.
|
||||
|
||||
### What happens if I delete my database?
|
||||
|
||||
Losing the Hub database is often not a big deal. Information that
|
||||
resides only in the Hub database includes:
|
||||
|
||||
- active login tokens (user cookies, service tokens)
|
||||
- users added via JupyterHub UI, instead of config files
|
||||
- info about running servers
|
||||
|
||||
If the following conditions are true, you should be fine clearing the
|
||||
Hub database and starting over:
|
||||
|
||||
- your users are specified in the config file, or log in using an external
|
||||
authentication provider (Google, GitHub, LDAP, etc)
|
||||
- user servers are stopped during the upgrade
|
||||
- you don't mind causing users to log in again after the upgrade
|
||||
|
||||
## Start JupyterHub
|
||||
|
||||
Once the database upgrade is completed, start the `jupyterhub`
|
||||
process again.
|
||||
|
||||
1. Log in and start the server to make sure things work as
|
||||
expected.
|
||||
2. Check the logs for any errors or deprecation warnings. You
|
||||
might have to update your `jupyterhub_config.py` file to
|
||||
deal with any deprecated options.
|
||||
|
||||
Congratulations, your JupyterHub has been upgraded!
|
BIN
docs/source/images/collaboration-admin-ui.png
Normal file
BIN
docs/source/images/collaboration-admin-ui.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 141 KiB |
BIN
docs/source/images/dropdown-details-3.0.png
Normal file
BIN
docs/source/images/dropdown-details-3.0.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 1.1 MiB |
BIN
docs/source/images/jupyterlab-rtc.png
Normal file
BIN
docs/source/images/jupyterlab-rtc.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 81 KiB |
BIN
docs/source/images/mybinder-hub-components-cpu-memory.png
Normal file
BIN
docs/source/images/mybinder-hub-components-cpu-memory.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 1017 KiB |
BIN
docs/source/images/mybinder-load5.png
Normal file
BIN
docs/source/images/mybinder-load5.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 607 KiB |
BIN
docs/source/images/mybinder-user-resources.png
Normal file
BIN
docs/source/images/mybinder-user-resources.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 1.8 MiB |
Some files were not shown because too many files have changed in this diff.