Compare commits

...

427 Commits
k8s ... main

Author SHA1 Message Date
a889a9f0b8 up
All checks were successful
/ build (push) Successful in 1m23s
2025-06-22 16:32:50 +02:00
71fc44acc5 change provider
All checks were successful
/ build (push) Successful in 42s
2025-04-01 22:09:36 +02:00
36cd9266b3 remove links
All checks were successful
/ build (push) Successful in 3m55s
2025-04-01 22:04:13 +02:00
21ff0825cc up python
All checks were successful
/ build (push) Successful in 37s
2025-03-04 20:07:53 +01:00
c7cdd65f87 add link
All checks were successful
/ build (push) Successful in 3m51s
2025-03-03 20:37:47 +01:00
3b2adea40f up ver
All checks were successful
/ build (push) Successful in 34s
2025-03-02 22:21:00 +01:00
8e82c90734 use python asyncio
All checks were successful
/ build (push) Successful in 43s
2025-03-02 22:19:41 +01:00
ea7529c443 cleanup asp deps
All checks were successful
/ build (push) Successful in 35s
2025-03-02 22:05:30 +01:00
d83d53180d up theme
All checks were successful
/ build (push) Successful in 32s
2025-03-02 21:57:32 +01:00
511fdbce7c fix asp db score
All checks were successful
/ build (push) Successful in 1m25s
2025-03-02 21:54:05 +01:00
e2bdc520dd remove x
All checks were successful
/ build (push) Successful in 1m56s
2025-01-11 19:17:24 +01:00
d5eeda3bab add 2025 tests
All checks were successful
/ build (push) Successful in 1m28s
2024-12-26 19:15:41 +01:00
bcb10ec9ad up
All checks were successful
/ build (push) Successful in 1m25s
2024-12-06 18:47:43 +01:00
0d760d43d6 up
All checks were successful
/ build (push) Successful in 44s
2024-11-30 19:00:11 +01:00
78236841f0 fix author
All checks were successful
/ build (push) Successful in 35s
2024-11-30 18:57:45 +01:00
b3239b9d96 fix author
All checks were successful
/ build (push) Successful in 1m21s
2024-11-30 18:55:07 +01:00
6442e21ba7 update bench results
Some checks failed
/ build (push) Failing after 1m12s
2024-09-07 23:11:59 +02:00
7204fa7c1b up theme
All checks were successful
/ build (push) Successful in 35s
2024-09-07 19:01:40 +02:00
8b01419070 remove mysql test for simplicity
All checks were successful
/ build (push) Successful in 1m34s
2024-09-07 18:59:38 +02:00
761fbd2074 auto release
All checks were successful
/ build (push) Successful in 38s
2024-08-22 21:51:24 +02:00
8c6d0000f9 auto release
All checks were successful
/ build (push) Successful in 33s
2024-08-22 21:43:38 +02:00
47a6edf37e auto release
All checks were successful
/ build (push) Successful in 45s
2024-08-22 21:26:58 +02:00
c310333396 auto release
All checks were successful
/ build (push) Successful in 35s
2024-08-22 18:27:18 +02:00
94f980d399 auto release
All checks were successful
/ build (push) Successful in 1m45s
2024-08-22 17:58:29 +02:00
dced346102 build
All checks were successful
/ build (push) Has been skipped
/ deploy (push) Successful in 16s
2024-08-14 17:55:35 +02:00
3e2f7059b5 build
Some checks failed
/ deploy (push) Failing after 0s
/ build (push) Failing after 3s
2024-08-14 17:54:47 +02:00
877e2b5c5b build
Some checks failed
/ deploy (push) Failing after 1s
/ build (push) Has been cancelled
2024-08-14 17:54:33 +02:00
66edc2016a build
Some checks failed
/ build (push) Successful in 31s
/ deploy (push) Failing after 1s
2024-08-14 17:53:19 +02:00
08de9a59b9 build
Some checks failed
/ build (push) Has been skipped
/ deploy (push) Failing after 5s
2024-08-14 17:52:04 +02:00
e86caebaf5 build
All checks were successful
/ build (push) Has been skipped
/ deploy (push) Successful in 16s
2024-08-14 17:49:58 +02:00
88315c0b8e build
Some checks failed
/ deploy (push) Failing after 10s
/ build (push) Has been skipped
2024-08-14 17:47:37 +02:00
4a281b9502 build
Some checks failed
/ build (push) Has been skipped
/ deploy (push) Failing after 9s
2024-08-14 17:45:41 +02:00
59d47e6938 build
Some checks failed
/ deploy (push) Failing after 5s
/ build (push) Has been skipped
2024-08-14 17:43:49 +02:00
1756ef7c54 build
Some checks failed
/ deploy (push) Failing after 5s
/ build (push) Has been skipped
2024-08-14 17:41:27 +02:00
aa4b8199f6 build
Some checks failed
/ deploy (push) Failing after 5s
/ build (push) Has been skipped
2024-08-14 17:38:32 +02:00
69df8decb9 build
Some checks failed
/ deploy (push) Has been skipped
/ build (push) Has been cancelled
2024-08-14 17:38:14 +02:00
b084807eb7 build
All checks were successful
/ build (push) Has been skipped
/ deploy (push) Successful in 18s
2024-08-14 17:32:35 +02:00
ee90948a73 build
Some checks failed
/ build (push) Successful in 30s
/ deploy (push) Failing after 1s
2024-08-14 17:28:22 +02:00
014d8296eb build
Some checks failed
/ build (push) Has been skipped
/ deploy (push) Failing after 5s
2024-08-14 17:26:16 +02:00
8df54d0495 build
Some checks failed
/ build (push) Has been skipped
/ deploy (push) Failing after 6s
2024-08-14 17:11:37 +02:00
c0a93ea0cc build
Some checks failed
/ build (push) Has been skipped
/ deploy (push) Failing after 5s
2024-08-14 17:02:29 +02:00
6edc7790dc build
Some checks failed
/ deploy (push) Failing after 0s
/ build (push) Failing after 2s
2024-08-14 17:02:07 +02:00
f2114e001b build
All checks were successful
/ deploy (push) Has been skipped
/ build (push) Successful in 41s
2024-08-14 17:01:22 +02:00
1b715c2a8d build
Some checks failed
/ deploy (push) Failing after 1s
/ build (push) Successful in 33s
2024-08-14 16:57:29 +02:00
07ca3dd3a5 build
All checks were successful
/ build (push) Has been skipped
/ deploy (push) Successful in 10s
2024-08-14 16:25:52 +02:00
a333928d39 build
All checks were successful
/ build (push) Successful in 1m47s
2024-08-14 16:12:09 +02:00
eadcddc659 up 2024-08-08 20:58:59 +02:00
ad9b1cd38d try gitea action
All checks were successful
/ build (push) Successful in 16s
2024-08-03 12:00:23 +02:00
4789bb5c03 try gitea action
All checks were successful
/ build (push) Successful in 36s
2024-08-03 11:59:15 +02:00
c57c660f9f try gitea action
All checks were successful
/ build (push) Successful in 16s
2024-08-03 11:24:04 +02:00
2c66d6c165 try gitea action
All checks were successful
/ build (push) Successful in 40s
2024-08-02 21:03:54 +02:00
cd025a9291 try gitea action
All checks were successful
/ build (push) Successful in 2m10s
2024-08-02 20:58:47 +02:00
4f5ff56f9e try gitea action
All checks were successful
/ build (push) Successful in 1m26s
2024-08-02 19:41:37 +02:00
728da376f7 try gitea action
Some checks failed
/ build (push) Has been cancelled
2024-08-02 19:40:39 +02:00
e87401928e try gitea action
All checks were successful
/ build (push) Successful in 34s
2024-08-01 22:39:14 +02:00
aa84e6e182 try gitea action
All checks were successful
/ build (push) Successful in 32s
2024-08-01 22:38:16 +02:00
d2085e2305 try gitea action
Some checks failed
/ build (push) Failing after 17s
2024-08-01 22:36:07 +02:00
7fefb3d53b try gitea action
All checks were successful
/ build (push) Successful in 15s
2024-08-01 22:21:08 +02:00
2304d88224 try gitea action
All checks were successful
/ build (push) Successful in 36s
2024-08-01 22:15:39 +02:00
2c3c115517 try gitea action
All checks were successful
/ build (push) Successful in 37s
2024-08-01 22:13:57 +02:00
64191136a2 try gitea action
Some checks failed
/ build (push) Failing after 18s
2024-08-01 22:08:38 +02:00
2798b4f3bd try gitea action
Some checks failed
/ build (push) Failing after 3s
2024-08-01 21:40:52 +02:00
5d4bfe05de try gitea action
Some checks failed
/ build (push) Failing after 3s
2024-08-01 21:36:54 +02:00
e8daa92fba try gitea action
Some checks failed
/ build (push) Failing after 3s
2024-08-01 21:32:56 +02:00
f3317ba2ac try gitea action
Some checks failed
/ build (push) Failing after 4s
2024-08-01 21:27:09 +02:00
ed89f99265 try gitea action
Some checks failed
/ build (push) Failing after 2s
2024-08-01 21:26:24 +02:00
278a4e57e9 try gitea action
Some checks failed
/ build (push) Failing after 2s
2024-08-01 21:25:07 +02:00
6e4c44e746 try gitea action
Some checks failed
/ build (push) Failing after 4s
2024-08-01 21:22:49 +02:00
933d29d395 try gitea action
All checks were successful
/ build (push) Successful in 16s
/ deploy (push) Successful in 51s
2024-07-31 22:32:33 +02:00
6a15af8e82 try gitea action
All checks were successful
/ build (push) Successful in 15s
/ deploy (push) Has been skipped
2024-07-31 22:29:29 +02:00
c2c3763976 try gitea action
All checks were successful
/ build (push) Successful in 16s
/ deploy (push) Has been skipped
2024-07-31 22:25:52 +02:00
bafbda6a6f try gitea action
All checks were successful
/ build (push) Successful in 16s
/ deploy (push) Has been skipped
2024-07-31 22:19:32 +02:00
b4f6b8cc98 try gitea action
All checks were successful
/ build (push) Successful in 15s
/ deploy (push) Successful in 42s
2024-07-31 22:17:57 +02:00
22a456e8b0 try gitea action
All checks were successful
/ build (push) Successful in 15s
2024-07-31 22:10:12 +02:00
a4dcae26c2 try gitea action
Some checks failed
/ deploy (push) Has been skipped
/ build (push) Failing after 21s
2024-07-31 22:07:20 +02:00
4b19c14aae try gitea action
Some checks failed
/ deploy (push) Failing after 9s
/ build (push) Successful in 13s
2024-07-31 21:58:25 +02:00
94485bdb36 try gitea action
All checks were successful
/ build (push) Successful in 1m2s
2024-07-30 22:20:56 +02:00
1137607c76 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog
All checks were successful
/ build (push) Successful in 34s
2024-07-30 22:13:12 +02:00
a5654cc14d try gitea action 2024-07-30 22:13:07 +02:00
94ea21a73f bump to 1.0.201 [ci skip] 2024-07-30 20:10:38 +00:00
5bd92dc914 try gitea action
Some checks failed
/ build (push) Failing after 38s
2024-07-30 22:06:51 +02:00
17e2cb3f0a try gitea action
All checks were successful
/ build (push) Successful in 38s
2024-07-30 21:40:33 +02:00
00f14787d3 try gitea action
Some checks failed
/ build (push) Failing after 19s
2024-07-30 21:35:22 +02:00
a50ceadff2 try gitea action
All checks were successful
/ build (push) Successful in 33s
2024-07-30 21:32:53 +02:00
518bb37bfe try gitea action
Some checks failed
/ build (push) Failing after 18s
2024-07-30 21:28:37 +02:00
598e5996ea try gitea action
Some checks failed
/ build (push) Has been cancelled
2024-07-30 21:28:21 +02:00
c2ae40c859 try gitea action
Some checks failed
/ build (push) Failing after 31s
2024-07-30 21:25:38 +02:00
0d6e5d6e5e try gitea action
All checks were successful
/ build (push) Successful in 15s
2024-07-30 21:22:33 +02:00
1216493667 try gitea action
All checks were successful
/ build (push) Successful in 15s
2024-07-30 21:21:46 +02:00
44c643d9d4 try gitea action
All checks were successful
/ build (push) Successful in 1m20s
2024-07-30 21:19:53 +02:00
2f73fc3dd1 try gitea action
Some checks failed
/ build (push) Failing after 2m56s
2024-07-30 21:13:55 +02:00
c10960326f try gitea action
Some checks failed
/ build (push) Failing after 1m47s
2024-07-30 21:10:56 +02:00
b893e539b3 try gitea action
Some checks failed
/ build (push) Failing after 1m24s
2024-07-30 21:08:48 +02:00
234c4f8500 try gitea action
Some checks failed
/ build (push) Failing after 1m23s
2024-07-30 21:06:06 +02:00
124841452e try gitea action
Some checks failed
/ build (push) Failing after 2m58s
2024-07-30 21:02:26 +02:00
ec2f1b96d2 try gitea action
Some checks failed
/ build (push) Failing after 18s
2024-07-30 20:53:30 +02:00
37e9c90b1b try gitea action
Some checks failed
/ build (push) Failing after 19s
2024-07-30 20:21:30 +02:00
e49c37d92c try gitea action
Some checks failed
/ build (push) Failing after 19s
2024-07-30 20:19:19 +02:00
c0beda72c8 try gitea action
Some checks failed
/ build (push) Failing after 18s
2024-07-30 20:17:49 +02:00
9f13ebfe5e try gitea action
Some checks failed
/ build (push) Failing after 50s
2024-07-30 20:15:40 +02:00
edfa564b04 try gitea action
Some checks failed
/ build (push) Failing after 18s
2024-07-30 20:13:28 +02:00
52c19aff99 try gitea action
Some checks failed
/ build (push) Failing after 44s
2024-07-30 20:01:41 +02:00
6bd545062b try gitea action
All checks were successful
/ build (push) Successful in 14s
2024-07-30 19:37:06 +02:00
5de35478b9 try gitea action
All checks were successful
/ build (push) Successful in 1m18s
2024-07-30 19:35:33 +02:00
e68313b5d4 try gitea action
All checks were successful
/ build (push) Successful in 1m18s
2024-07-30 19:33:52 +02:00
530cf8bac7 try gitea action
All checks were successful
/ build (push) Successful in 1m17s
2024-07-30 19:32:13 +02:00
62939c50b9 try gitea action
Some checks failed
/ build (push) Failing after 37s
2024-07-30 19:26:49 +02:00
d503e7c9fb try gitea action
All checks were successful
/ build (push) Successful in 1m17s
2024-07-30 19:19:51 +02:00
41112b725d try gitea action
All checks were successful
/ build (push) Successful in 1m16s
2024-07-30 19:17:56 +02:00
b3b922eb73 try gitea action
Some checks failed
/ build (push) Failing after 16s
2024-07-30 19:16:30 +02:00
77114a7684 try gitea action
All checks were successful
/ build (push) Successful in 6s
2024-07-30 19:07:07 +02:00
7b752648f2 try gitea action
All checks were successful
/ Explore-Gitea-Actions (push) Successful in 7s
2024-07-30 19:06:00 +02:00
af6d0b2640 try gitea action
All checks were successful
Gitea Actions Demo / Explore-Gitea-Actions (push) Successful in 7s
2024-07-30 19:05:16 +02:00
5fe53742cf try gitea action
All checks were successful
Gitea Actions Demo / Explore-Gitea-Actions (push) Successful in 6s
2024-07-30 19:03:03 +02:00
59d5812ef4 try gitea action
All checks were successful
Gitea Actions Demo / Explore-Gitea-Actions (push) Successful in 38s
2024-07-30 19:01:54 +02:00
3099ef273f try gitea action
All checks were successful
Gitea Actions Demo / Explore-Gitea-Actions (push) Successful in 38s
2024-07-30 18:46:54 +02:00
f7f493e007 bump to 1.0.200 [ci skip] 2024-06-16 16:42:21 +00:00
a05dbbbe93 woodpecker 2024-06-16 17:17:28 +02:00
fa6da5c600 woodpecker 2024-06-16 17:16:46 +02:00
11b9ef8b59 woodpecker
All checks were successful
ci/woodpecker/manual/woodpecker Pipeline was successful
ci/woodpecker/push/woodpecker Pipeline was successful
2024-06-16 17:08:08 +02:00
cc81f4b020 woodpecker
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
2024-06-16 17:02:12 +02:00
43a8afeaa7 woodpecker
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
2024-06-16 16:54:48 +02:00
97355aaecb woodpecker
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
2024-06-16 16:32:57 +02:00
0c602dbd2f woodpecker
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/manual/woodpecker Pipeline was successful
2024-06-16 15:57:28 +02:00
451695b861 woodpecker
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
2024-06-16 15:56:39 +02:00
1d7af0506d woodpecker
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
2024-06-16 15:52:49 +02:00
eeb112557a bump to 1.0.199 [ci skip] 2024-06-09 11:38:10 +00:00
c92cc3655b test 2024-06-09 13:35:56 +02:00
e145ec2e51 test 2024-06-09 13:35:51 +02:00
f596ad54ef bump to 1.0.198 [ci skip] 2024-06-07 18:52:42 +00:00
cf41ae8e4c up ver 2024-06-07 20:49:37 +02:00
105cb0de40 bump to 1.0.197 [ci skip] 2024-05-27 15:27:27 +00:00
52443678bc bump to 1.0.196 [ci skip] 2024-05-25 10:39:55 +00:00
0100eb545e Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-05-25 12:37:16 +02:00
d6e39155d3 use set_list 2024-05-25 12:37:11 +02:00
0afa52c595 bump to 1.0.195 [ci skip] 2024-05-19 20:25:32 +00:00
6a5f2e45cd Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-05-19 22:24:21 +02:00
3712ad0fc4 add real pp 2024-05-19 22:24:17 +02:00
c2328e0df1 bump to 1.0.194 [ci skip] 2024-05-19 15:17:43 +00:00
ac238c9d2e bump to 1.0.193 [ci skip] 2024-05-19 14:48:48 +00:00
62641ed7f8 bump to 1.0.192 [ci skip] 2024-05-19 14:47:37 +00:00
7d61056d6b put badge 2024-05-19 16:42:00 +02:00
43a1d8f914 bump to 1.0.191 [ci skip] 2024-05-15 21:08:21 +00:00
185bbe8b29 add traefik crds 2024-05-15 23:05:57 +02:00
bfd1d1ca03 bump to 1.0.190 [ci skip] 2024-05-13 19:09:25 +00:00
58f9034236 Merge branch 'main' of gitea.okami101.io:adr1enbe4udou1n/blog 2024-05-13 21:06:43 +02:00
6be7706196 up flux v2.3 2024-05-13 21:06:28 +02:00
052815301b bump to 1.0.189 [ci skip] 2024-05-07 19:47:51 +00:00
b0ba21d08d bump to 1.0.188 [ci skip] 2024-04-30 18:34:12 +00:00
ba99a5be09 up traefik v3 2024-04-30 20:32:23 +02:00
f837c8b71b bump to 1.0.187 [ci skip] 2024-04-22 19:22:59 +00:00
dc145a1b4c Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-22 21:17:45 +02:00
650cb92129 loki single binary mode 2024-04-22 21:17:42 +02:00
f0cfc765b1 bump to 1.0.186 [ci skip] 2024-04-18 17:37:38 +00:00
68847d422f Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-18 19:35:30 +02:00
365301d2b6 up theme 2024-04-18 19:35:09 +02:00
88cb4929f9 bump to 1.0.185 [ci skip] 2024-04-18 17:30:29 +00:00
ef30d9fb96 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-18 19:28:46 +02:00
56660d32ba up 2024-04-18 19:28:41 +02:00
6057606a2d bump to 1.0.184 [ci skip] 2024-04-14 10:01:58 +00:00
6e727ac414 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-14 12:00:55 +02:00
9bc67ea68c up helm vers 2024-04-14 12:00:52 +02:00
ac15d5e4f1 bump to 1.0.183 [ci skip] 2024-04-14 09:56:56 +00:00
43ab62c469 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-14 11:55:27 +02:00
f943e015ef up helm vers 2024-04-14 11:55:24 +02:00
d8e9b029a0 bump to 1.0.182 [ci skip] 2024-04-14 09:39:54 +00:00
e8584452b0 Merge branches 'main' and 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-14 11:38:39 +02:00
b97c2c28f7 upgrade to loki v3 2024-04-14 11:38:35 +02:00
ac5d72ebea bump to 1.0.181 [ci skip] 2024-04-14 09:18:59 +00:00
2dfb5944e4 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-14 11:17:09 +02:00
6c48b0b87c up helm vers 2024-04-14 11:17:05 +02:00
0457a5a46d bump to 1.0.180 [ci skip] 2024-04-14 08:46:36 +00:00
b663158503 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-14 10:45:39 +02:00
e15002e8bb cleanup 2024-04-14 10:45:35 +02:00
1f8a8cdc9c bump to 1.0.179 [ci skip] 2024-04-14 08:34:34 +00:00
8809f02868 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-14 10:32:48 +02:00
b1a9642bc4 cleanup 2024-04-14 10:32:45 +02:00
49a984f19a bump to 1.0.178 [ci skip] 2024-04-14 08:24:35 +00:00
b495d5022b Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-14 10:22:55 +02:00
a454792968 fix labels 2024-04-14 10:22:52 +02:00
1f9e6ab47f bump to 1.0.177 [ci skip] 2024-04-14 08:21:54 +00:00
13aa16f516 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-14 10:20:08 +02:00
f92b11eb34 fix labels 2024-04-14 10:20:05 +02:00
673dae791a bump to 1.0.176 [ci skip] 2024-04-13 23:25:25 +00:00
fb38f7f3d2 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-14 01:24:02 +02:00
4a8928e9f6 up vers 2024-04-14 01:23:58 +02:00
6ac2a94dd7 bump to 1.0.175 [ci skip] 2024-04-13 22:53:18 +00:00
593c8950b0 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-14 00:51:22 +02:00
3f5ea711ac up vers 2024-04-14 00:51:19 +02:00
6a641bf71b bump to 1.0.174 [ci skip] 2024-04-13 18:42:05 +00:00
7ed9b159d9 up traefik version 2024-04-13 20:40:40 +02:00
1e0382079f bump to 1.0.173 [ci skip] 2024-04-13 18:30:56 +00:00
b31e739c3f Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-13 20:29:11 +02:00
31bcc0c105 fix system upgrade 2024-04-13 20:29:08 +02:00
d369834ef0 bump to 1.0.172 [ci skip] 2024-04-13 16:57:06 +00:00
87f13fc687 change smtp host scw 2024-04-13 18:55:22 +02:00
9a8c5ce0ab bump to 1.0.171 [ci skip] 2024-04-13 12:20:56 +00:00
c1d32438f7 bump to 1.0.170 [ci skip] 2024-04-03 19:45:02 +00:00
a17c02f82f ccl 2024-04-03 21:43:02 +02:00
fa41a363fa bump to 1.0.169 [ci skip] 2024-04-03 19:27:53 +00:00
06abcda4f3 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-03 21:26:22 +02:00
18a8bd7760 fix loki bottleneck 2024-04-03 21:26:19 +02:00
0abb839ae3 bump to 1.0.168 [ci skip] 2024-04-03 19:24:37 +00:00
49137f84f2 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-03 21:22:44 +02:00
189d883c55 fix loki bottleneck 2024-04-03 21:22:40 +02:00
759c916022 bump to 1.0.167 [ci skip] 2024-04-03 19:08:16 +00:00
58f02c4e9f Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-04-03 21:06:34 +02:00
ffc6628d58 fix worker 2024-04-03 21:06:30 +02:00
f882cef83e bump to 1.0.166 [ci skip] 2024-04-02 21:25:43 +00:00
b227e5bf75 give all config for DIY 2024-04-02 23:24:38 +02:00
3608909712 bump to 1.0.165 [ci skip] 2024-04-01 20:49:16 +00:00
eb9ac1e28e rewrite conclusion 2024-04-01 22:47:50 +02:00
7cad7ec1bf fix finally symfony frankenphp 2024-04-01 21:57:47 +02:00
17fb386b50 bump to 1.0.164 [ci skip] 2024-03-31 15:50:02 +00:00
99d315698d Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-03-31 17:48:51 +02:00
c70128aa0a add octane 2024-03-31 17:48:46 +02:00
0ece9082b2 bump to 1.0.163 [ci skip] 2024-03-30 18:26:49 +00:00
08562ed2e3 bump to 1.0.162 [ci skip] 2024-03-23 14:30:48 +00:00
a06e1642ad Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-03-23 15:25:46 +01:00
c8c7df99fd change laravel openapi pkg 2024-03-23 15:25:41 +01:00
96c5b9d779 bump to 1.0.161 [ci skip] 2024-03-17 15:34:54 +00:00
a089b9d047 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-03-17 16:33:24 +01:00
5a877d009c up theme 2024-03-17 16:33:22 +01:00
ca8aa110b0 bump to 1.0.160 [ci skip] 2024-03-17 15:26:48 +00:00
b18e070571 bump to 1.0.159 [ci skip] 2024-03-17 15:25:54 +00:00
696fe88e58 up theme
All checks were successful
ci/woodpecker/manual/woodpecker Pipeline was successful
2024-03-17 16:20:37 +01:00
308eaf1d2c laravel 11 2024-03-17 16:20:09 +01:00
aed1c6b3e7 bump to 1.0.158 [ci skip] 2024-02-24 15:15:48 +00:00
ae99e271e7 up theme 2024-02-24 16:10:59 +01:00
1e3b2fa6cc bump to 1.0.157 [ci skip] 2024-01-28 10:42:06 +00:00
a2c801ef38 Merge branch 'main' of int.okami101.io:adr1enbe4udou1n/blog 2024-01-28 11:40:39 +01:00
3708d2e63f cl 2024-01-28 11:40:36 +01:00
f12ae7e262 bump to 1.0.156 [ci skip] 2024-01-28 10:38:26 +00:00
ba21e75995 bump to 1.0.155 [ci skip] 2024-01-28 10:37:24 +00:00
281ed1ccb3 redirect 2024-01-28 11:36:38 +01:00
6ac7775750 Merge branch 'main' of int.okami101.io:adr1enbe4udou1n/blog 2024-01-28 11:36:00 +01:00
7cec61b2f3 redirect 2024-01-28 11:35:56 +01:00
87878d1347 bump to 1.0.154 [ci skip] 2024-01-28 10:29:02 +00:00
f31598f640 cleanup 2024-01-28 11:28:16 +01:00
2387612618 bump to 1.0.153 [ci skip] 2024-01-28 10:15:51 +00:00
07af978507 Merge branch 'main' of int.okami101.io:adr1enbe4udou1n/blog 2024-01-28 11:08:05 +01:00
0e985070c7 int vcs 2024-01-28 11:08:01 +01:00
e67aa74eb2 bump to 1.0.152 [ci skip] 2024-01-28 10:06:25 +00:00
36da769f0e int vcs 2024-01-28 11:00:58 +01:00
99a632a891 bump to 1.0.151 [ci skip] 2024-01-09 20:02:38 +00:00
c0b84228f0 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-01-09 21:01:24 +01:00
d7e8336db0 fix toleration 2024-01-09 21:01:21 +01:00
7f28697c08 bump to 1.0.150 [ci skip] 2024-01-07 12:07:30 +00:00
e9bbdd223d Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-01-07 13:06:23 +01:00
b1ee5d0d1f n8n cache 2024-01-07 13:06:16 +01:00
2f5baad20e bump to 1.0.149 [ci skip] 2024-01-07 11:54:42 +00:00
fdcb305cb4 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-01-07 12:53:17 +01:00
1596f0670c fix namespace apiversion 2024-01-07 12:53:11 +01:00
db71e8014d bump to 1.0.148 [ci skip] 2024-01-03 18:06:35 +00:00
a9b40dde4d Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-01-03 19:05:40 +01:00
d9945977d1 fix tls-sans 2024-01-03 19:05:30 +01:00
96ef1da81c bump to 1.0.147 [ci skip] 2024-01-01 13:38:34 +00:00
f577a65e70 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2024-01-01 14:37:28 +01:00
6d3c345e75 add redirects support 2024-01-01 14:37:24 +01:00
4ecb854b2a bump to 1.0.146 [ci skip] 2023-12-28 20:19:59 +00:00
5902732262 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-28 21:19:08 +01:00
850cb48a32 update remarks 2023-12-28 21:19:05 +01:00
47948f2a87 bump to 1.0.145 [ci skip] 2023-12-28 19:48:16 +00:00
b06875819f Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-28 20:47:15 +01:00
1c4d080839 automatized bench data 2023-12-28 20:47:11 +01:00
9a6f7553c0 bump to 1.0.144 [ci skip] 2023-12-28 11:42:02 +00:00
141b2b1ac5 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-28 12:41:02 +01:00
82f975d893 fix k3s inputs 2023-12-28 12:40:58 +01:00
469553a994 bump to 1.0.143 [ci skip] 2023-12-27 16:46:02 +00:00
ac43d0d07a Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-27 17:45:20 +01:00
84729e0130 typo 2023-12-27 17:45:18 +01:00
3c58db60fd bump to 1.0.142 [ci skip] 2023-12-27 16:03:01 +00:00
10afc1a61d Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-27 17:01:58 +01:00
25c6d94a3d wip 2023-12-27 17:01:55 +01:00
b9d5b47581 bump to 1.0.141 [ci skip] 2023-12-27 15:51:45 +00:00
b033d5c785 bump to 1.0.140 [ci skip] 2023-12-27 15:51:29 +00:00
92afcff6f9 fix scenario 1 php result 2023-12-27 16:50:49 +01:00
88d7007036 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-27 16:50:34 +01:00
24949710b7 fix scenario 1 php result 2023-12-27 16:50:31 +01:00
3f4868216b bump to 1.0.139 [ci skip] 2023-12-27 14:55:31 +00:00
9070fe442f Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-27 15:54:41 +01:00
7ef13c90b9 remove article 2023-12-27 15:54:39 +01:00
1df70303eb bump to 1.0.138 [ci skip] 2023-12-27 14:33:38 +00:00
d00420efb0 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-27 15:32:47 +01:00
2331c93cf5 label 2023-12-27 15:32:44 +01:00
5cf088c911 bump to 1.0.137 [ci skip] 2023-12-27 14:24:55 +00:00
3ddc9cf9a6 label 2023-12-27 15:23:59 +01:00
375aec9175 bump to 1.0.136 [ci skip] 2023-12-27 14:19:56 +00:00
25c8c4c5ee Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-27 15:19:05 +01:00
8ba3b480eb fix doctrine mysql test. Now it performs slower than with MySQL 2023-12-27 15:19:03 +01:00
16cf5cadd7 bump to 1.0.135 [ci skip] 2023-12-27 00:03:49 +00:00
633813b5b4 Update content/posts/22-web-api-benchmarks-2024/index.md 2023-12-27 00:02:58 +00:00
b1d8d32ba8 bump to 1.0.134 [ci skip] 2023-12-26 18:21:53 +00:00
d6115ba7db Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-26 19:20:57 +01:00
d6e9170fb9 fix chart resp 2023-12-26 19:20:54 +01:00
a82f292bd7 bump to 1.0.133 [ci skip] 2023-12-26 18:03:42 +00:00
68e74f3e05 wip charts 2023-12-26 19:01:36 +01:00
cacb6f78bb wip charts 2023-12-26 17:23:30 +01:00
aad3615542 wip charts 2023-12-26 00:16:47 +01:00
30116c360d wip charts 2023-12-25 23:58:53 +01:00
548a84ebe3 wip charts 2023-12-25 23:49:57 +01:00
db7655c4f8 wip charts 2023-12-25 23:38:13 +01:00
e7e5ec9586 wip charts 2023-12-25 23:20:27 +01:00
d040374dbc wip charts 2023-12-25 22:35:34 +01:00
1a5f7aea75 wip charts 2023-12-25 22:24:43 +01:00
a241e91d8c wip charts 2023-12-25 20:57:19 +01:00
a8e608070f wip charts 2023-12-25 19:43:35 +01:00
15bf34c299 wip charts 2023-12-25 19:01:48 +01:00
f5f3b033bb wip charts 2023-12-25 17:00:47 +01:00
a9dacbb3a2 refactor ts 2023-12-25 16:24:40 +01:00
f93bf3c26f refactor ts 2023-12-25 13:19:37 +01:00
cca4ebc90e wip bench 2023-12-24 23:15:25 +01:00
df7d31290f art 2023-12-24 19:59:08 +01:00
7df54a5b33 bump to 1.0.132 [ci skip] 2023-12-21 14:48:11 +00:00
a251cc4df2 bump to 1.0.131 [ci skip] 2023-12-21 14:40:57 +00:00
b5753b8681 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-21 15:39:52 +01:00
32ef3c0fec dont use auto ot 2023-12-21 15:39:49 +01:00
47c58dfdd3 bump to 1.0.130 [ci skip] 2023-12-21 14:12:01 +00:00
79bba89e68 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-21 15:11:00 +01:00
67923abd1f use .net 8 app 2023-12-21 15:10:58 +01:00
48769dac97 bump to 1.0.129 [ci skip] 2023-12-21 13:48:28 +00:00
5426360d04 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-21 14:47:32 +01:00
f4cd8d1123 change k3s conf 2023-12-21 14:47:29 +01:00
ebc4d0b6f9 bump to 1.0.128 [ci skip] 2023-12-21 12:42:13 +00:00
f6c10850ac Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-21 13:41:00 +01:00
32e1d9d53d merge k3s config 2023-12-21 13:40:57 +01:00
e4438ece84 bump to 1.0.127 [ci skip] 2023-12-18 17:32:41 +00:00
d9b143cdec bump to 1.0.126 [ci skip] 2023-12-18 17:31:41 +00:00
536e262226 bump to 1.0.125 [ci skip] 2023-12-16 21:20:21 +00:00
1b1449ce51 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-16 22:17:10 +01:00
d9ddee9fd8 sf7 2023-12-16 22:17:07 +01:00
e3022703d3 bump to 1.0.124 [ci skip] 2023-12-15 20:26:47 +00:00
51b3f5bc94 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-12-15 21:24:25 +01:00
fbe01c5de3 java 2023-12-15 21:24:22 +01:00
93c1206041 bump to 1.0.123 [ci skip] 2023-11-26 13:56:55 +00:00
34ac768d17 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-11-26 14:55:03 +01:00
239d5ac202 acc 2023-11-26 14:55:00 +01:00
55d0b867a9 bump to 1.0.122 [ci skip] 2023-11-26 13:50:21 +00:00
75b7684337 up 2023-11-26 14:48:25 +01:00
f6a31fb75d bump to 1.0.121 [ci skip] 2023-11-26 12:23:41 +00:00
2a8db86536 add bastion 2023-11-26 13:21:43 +01:00
33aa481c87 bump to 1.0.120 [ci skip] 2023-11-23 18:05:38 +00:00
a971aabc95 up theme 2023-11-23 19:03:32 +01:00
097b6169c2 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-11-23 19:02:17 +01:00
fd75702fb7 up work version 2023-11-23 19:02:13 +01:00
f1a175a7b7 bump to 1.0.119 [ci skip] 2023-11-01 12:41:40 +00:00
9a28d870f7 up congo theme 2023-11-01 13:38:18 +01:00
0a3cd51090 bump to 1.0.118 [ci skip] 2023-10-29 17:51:59 +00:00
210cb3102d bump to 1.0.117 [ci skip] 2023-10-29 17:51:30 +00:00
b5599be1eb up 2023-10-29 18:49:49 +01:00
4ac61e1294 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-10-29 18:48:52 +01:00
3a57bbf6f1 up 2023-10-29 18:48:49 +01:00
1a295cc401 bump to 1.0.116 [ci skip] 2023-10-29 17:17:04 +00:00
52ba6d9ea4 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-10-29 18:15:29 +01:00
0b18290797 fastapi poetry 2023-10-29 18:15:26 +01:00
1cfa1b4cb7 bump to 1.0.115 [ci skip] 2023-10-29 17:01:20 +00:00
ad6a31b71b Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-10-29 17:59:26 +01:00
71ffe8531b python 3.12 2023-10-29 17:59:22 +01:00
f6bacfa5d6 bump to 1.0.114 [ci skip] 2023-10-28 12:50:45 +00:00
598c34f9fe Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-10-28 14:48:53 +02:00
009cc3d5eb use ruff 2023-10-28 14:48:35 +02:00
0243b9f26e bump to 1.0.113 [ci skip] 2023-10-01 12:40:53 +00:00
e12fdfb3f7 add loki ring update 2023-10-01 14:38:00 +02:00
b447e476f1 bump to 1.0.112 [ci skip] 2023-09-24 15:33:02 +00:00
92df3cbaf1 wtf 2023-09-24 17:30:57 +02:00
aa7b5d6c14 wtf 2023-09-24 17:30:44 +02:00
67f047b1e4 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-09-24 17:27:18 +02:00
5882a96ff3 wtf 2023-09-24 17:27:15 +02:00
cbf3a88b83 bump to 1.0.111 [ci skip] 2023-09-24 15:26:40 +00:00
f78d791730 wtf 2023-09-24 17:24:16 +02:00
cf23988636 wtf 2023-09-24 17:22:48 +02:00
1e6795ae27 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-09-24 17:20:28 +02:00
318b03d1eb wtf 2023-09-24 17:20:25 +02:00
7050abbed0 bump to 1.0.110 [ci skip] 2023-09-24 15:19:37 +00:00
1787b4a2ac wtf 2023-09-24 17:17:36 +02:00
d6d236f143 bump to 1.0.109 [ci skip] 2023-09-24 15:10:34 +00:00
91cbf70f40 use flux for sonarqube 2023-09-24 17:01:19 +02:00
8b0efa3b60 bump to 1.0.108 [ci skip] 2023-09-16 13:46:19 +00:00
0c4ba0a562 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-09-16 15:44:36 +02:00
50f21191f2 gitea links 2023-09-16 15:44:33 +02:00
bdc6ba81cd bump to 1.0.107 [ci skip] 2023-09-16 11:39:43 +00:00
12405f9ac4 bump to 1.0.106 [ci skip] 2023-09-16 10:59:44 +00:00
3669b8afde bump to 1.0.105 [ci skip] 2023-09-10 11:01:28 +00:00
f3990d2de6 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-09-10 12:59:38 +02:00
84b703efa0 add if exists 2023-09-10 12:59:36 +02:00
8c5497b92b bump to 1.0.104 [ci skip] 2023-09-10 09:09:57 +00:00
d168ca0414 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-09-10 11:08:15 +02:00
94b521c4ff add strict local 2023-09-10 11:08:13 +02:00
767a9c7b52 bump to 1.0.103 [ci skip] 2023-09-09 16:03:23 +00:00
2a8446c72d Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-09-09 18:01:35 +02:00
c54908dbe6 redis 2023-09-09 18:01:32 +02:00
0570f3610b bump to 1.0.102 [ci skip] 2023-09-09 15:59:59 +00:00
0e68a34e6d redis 2023-09-09 17:57:57 +02:00
78a62ea7f1 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-09-09 17:57:26 +02:00
0ffe508858 remove helm exporter 2023-09-09 17:57:22 +02:00
24ef84162f bump to 1.0.101 [ci skip] 2023-09-09 14:43:37 +00:00
db03e71f2f Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-09-09 16:41:50 +02:00
2ef95db920 remove helm exporter 2023-09-09 16:41:47 +02:00
5631c459c8 bump to 1.0.100 [ci skip] 2023-09-09 14:38:17 +00:00
ef68fb6854 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-09-09 16:36:01 +02:00
0252b1186e use redis cluster instead 2023-09-09 16:35:58 +02:00
aca9cde58e bump to 1.0.99 [ci skip] 2023-09-03 11:21:24 +00:00
1a661ada20 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-09-03 13:19:21 +02:00
17394a99b7 pkg 2023-09-03 13:19:07 +02:00
6b190f1a33 bump to 1.0.98 [ci skip] 2023-09-02 16:09:36 +00:00
de5d063ca9 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-09-02 18:02:39 +02:00
7e88dda273 enc status 2023-09-02 18:02:36 +02:00
5984a9e1cf bump to 1.0.97 [ci skip] 2023-09-02 15:54:06 +00:00
e09cbb2cd1 uniformize 2023-09-02 17:52:25 +02:00
2b7ad1304d add metrics & encrypt 2023-09-02 17:50:36 +02:00
9a13ade068 add metrics & encrypt 2023-09-02 17:50:06 +02:00
a0cc73a7e9 bump to 1.0.96 [ci skip] 2023-09-02 14:35:35 +00:00
52d5591f17 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-09-02 16:33:44 +02:00
bc30cbb870 add cni 2023-09-02 16:33:41 +02:00
ae41c9409b bump to 1.0.95 [ci skip] 2023-08-31 13:23:18 +00:00
de974c8d32 Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-08-31 15:21:17 +02:00
122c054f20 add tips 2023-08-31 15:21:01 +02:00
37a4f9d00d bump to 1.0.94 [ci skip] 2023-08-30 19:36:30 +00:00
161d16242c Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-08-30 21:34:30 +02:00
a77fc3e9d8 fix dashboards links 2023-08-30 21:34:27 +02:00
c0585a7f05 bump to 1.0.93 [ci skip] 2023-08-30 19:26:27 +00:00
d0f5c1eddd Merge branch 'main' of ssh.okami101.io:adr1enbe4udou1n/blog 2023-08-30 21:24:28 +02:00
b02b6a2b6c fix dashboards links 2023-08-30 21:24:24 +02:00
70c60216c2 fix dashboards links 2023-08-30 21:24:21 +02:00
ff3b57126a bump to 1.0.92 [ci skip] 2023-08-30 19:09:25 +00:00
78d8b640a4 Merge branch 'k8s' 2023-08-30 21:06:48 +02:00
2a47eb58b0 bump to 1.0.91 [ci skip] 2023-08-18 20:07:00 +00:00
42 changed files with 6412 additions and 1045 deletions


@@ -0,0 +1,26 @@
on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true
      - uses: actions/cache@v4
        with:
          path: resources
          key: ${{ runner.os }}-resources
      - uses: peaceiris/actions-hugo@v3
        with:
          extended: true
      - name: Build
        run: hugo --minify
      - uses: https://gitea.okami101.io/okami101/actions/docker@main
        with:
          password: ${{ secrets.CONTAINER_REGISTRY_PASSWORD }}
          gitea-token: ${{ secrets.RELEASE_TOKEN }}
          release: true

.gitignore vendored

@@ -1,2 +1,3 @@
node_modules
resources
resources
public


@@ -1,3 +1,7 @@
FROM nginx:alpine
RUN sed -i 's/^\(.*\)http {/\1http {\n map_hash_bucket_size 128;\n/' /etc/nginx/nginx.conf
COPY nginx/ /etc/nginx/conf.d/
COPY public /usr/share/nginx/html
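The `sed` line in this Dockerfile injects a `map_hash_bucket_size` directive right after the `http {` opening of `nginx.conf`. As a quick local sanity check, the same substitution can be replayed against a minimal sample config (the `/tmp` path is purely illustrative):

```shell
# Create a minimal nginx.conf sample, then apply the same substitution
# as the Dockerfile and show the directive injected inside the http block.
# Uses GNU sed (-i in-place edit).
printf 'events {}\nhttp {\n}\n' > /tmp/nginx-sample.conf
sed -i 's/^\(.*\)http {/\1http {\n    map_hash_bucket_size 128;\n/' /tmp/nginx-sample.conf
grep -A1 'http {' /tmp/nginx-sample.conf   # shows the injected directive under http {
```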

File diff suppressed because it is too large.


@@ -6,10 +6,11 @@ baseURL = "https://blog.okami101.io"
languageCode = "en"
defaultContentLanguage = "en"
theme = "congo"
title = "Okami101 Blog"
# copyright = "Copy, _right?_ :thinking_face:"
timeout = "120s"
enableEmoji = true
enableRobotsTXT = true
@@ -20,16 +21,5 @@ ignoreFiles = ['_data/*']
[outputs]
home = ["HTML", "RSS", "JSON"]
[author]
name = "Adrien Beaudouin"
image = "author.jpg"
bio = "A senior web developer @janze"
links = [
{ email = "mailto:adrien@okami101.io" },
{ github = "https://github.com/adr1enbe4udou1n" },
{ linkedin = "https://linkedin.com/in/adr1enbe4udou1n" },
{ twitter = "https://twitter.com/adr1enbe4udou1n" },
]
[permalinks]
posts = "/:year/:month/:title/"


@@ -61,3 +61,14 @@ excludedKinds = ["taxonomy", "term"]
# bing = ""
# pinterest = ""
# yandex = ""
[author]
name = "Adrien Beaudouin"
image = "author.jpg"
bio = "A senior web developer @janze"
links = [
{ email = "mailto:adrien@okami101.io" },
{ github = "https://github.com/adr1enbe4udou1n" },
{ linkedin = "https://linkedin.com/in/adr1enbe4udou1n" },
{ bluesky = "https://bsky.app/profile/adr1enbe4udou1n.bsky.social" },
]


@@ -4,5 +4,5 @@ description: "This is adr1enbe4udou1n blog."
---
{{< lead >}}
A 🧔🌍💻 aka senior web developer @Bretagne 🇫🇷
A 🧔🌍💻 aka senior test web developer @Bretagne 🇫🇷
{{< /lead >}}


@@ -18,7 +18,7 @@ I can develop proper API design following [**DDD / Hexa**](https://en.wikipedia.
I encourage `TDD` or at least proper **integration tests** on any backend framework, following the **AAA** (*Arrange Act Assert*) principle:
* `PHPUnit` or [`Pest`](https://pestphp.com/) for *PHP*
* [`NUnit.net`](https://nunit.org/) or [`xUnit.net`](https://xunit.net/) with [`Fluent Assertions`](https://github.com/fluentassertions/fluentassertions) for *.NET Core*
* [`NUnit.net`](https://nunit.org/) or [`xUnit.net`](https://xunit.net/) for *.NET Core*
* `JUnit` with [`REST Assured`](https://rest-assured.io/) for *Spring Boot*
* `Jest` and `pytest` on respective *NodeJS* and *Python* stacks
@@ -52,9 +52,9 @@ Some notes of this blog:
* Kubernetes infrastructure completely managed with [`Terraform`](https://github.com/adr1enbe4udou1n/terraform-kube-okami) 🌴
* **HA** setup using **Hetzner LB**, targeting 2 worker nodes, with **Postgres cluster** (managed on the same Kubernetes cluster)
* `Traefik` as reverse proxy, configured for HA 🛣️
* Source code on my own [`Gitea`](https://gitea.okami101.io/adr1enbe4udou1n/blog)
* Compiled by my own [`Concourse`](https://concourse.okami101.io) instance as a final docker container image into self-hosted private registry (**CI** 🏗️)
* Automatically deployed by `Flux CD v2` to the Kubernetes cluster from [central Git source](https://gitea.okami101.io/okami101/flux-source/src/branch/main/okami/deploy-blog.yaml) (**CD** 🚀)
* Source code on my own [`Gitea`](https://about.gitea.com/)
* Compiled by my own [`Concourse`](https://concourse-ci.org/) instance as a final docker container image into self-hosted private registry (**CI** 🏗️)
* Automatically deployed by `Flux CD v2` to the Kubernetes cluster (**CD** 🚀)
* Tracked with [`Umami`](https://umami.is/) 📈
All the above tools are 💯% self-hosted! I'm just sadly missing my own Proxmox homelab, because no fiber 😿


@@ -1,13 +1,12 @@
[`ASP.NET Core 7`](https://docs.microsoft.com/aspnet/core/) implementation, following `DDD` principle, implemented with `Hexa architecture` and `CQRS` pattern. [Swashbuckle](https://github.com/domaindrivendev/Swashbuckle.AspNetCore) is used as default main OpenAPI generator that perfectly integrates into the code.
[`ASP.NET Core 8`](https://docs.microsoft.com/aspnet/core/) implementation, using minimal APIs, mature since 8.0, following `DDD` principle, implemented with `Hexa architecture` and `CQRS` pattern. [Swashbuckle](https://github.com/domaindrivendev/Swashbuckle.AspNetCore) is used as default main OpenAPI generator.
Main packages involved:
* [Carter](https://github.com/CarterCommunity/Carter/) for seamless endpoints grouping
* [EF Core](https://docs.microsoft.com/ef/) as strongly typed ORM
* [MediatR](https://github.com/jbogard/MediatR) for easy mediator implementation. It allows strong decoupling between all ASP.NET controllers and the final application, which is cut into small queries and commands
* [Fluent Validation](https://fluentvalidation.net/) for strongly typed validation
* [dotnet-format](https://github.com/dotnet/format) as official formatter
* [xUnit.net](https://xunit.net/) as test framework
* [Fluent Assertions](https://fluentassertions.com/) for strongly typed assertions within the API
* [Respawn](https://github.com/jbogard/Respawn) for optimal integration tests isolation
* [Bogus](https://github.com/bchavez/Bogus) for strongly typed fake data generator
* [Bullseye](https://github.com/adamralph/bullseye) as a nice CLI publisher tool with dependency graph


@@ -1,4 +1,4 @@
[`FastAPI`](https://fastapi.tiangolo.com/) implementation under the latest `Python 3.11` with [Pipenv](https://pypi.org/project/pipenv/) as package manager.
[`FastAPI`](https://fastapi.tiangolo.com/) implementation under the latest `Python 3.12` with [Poetry](https://python-poetry.org/) as package manager.
It's based on [pydantic](https://pydantic-docs.helpmanual.io/), an essential component that allows proper OpenAPI generation and data validations while bringing advanced type hints.
@@ -8,7 +8,6 @@ Main packages involved:
* [SQLAlchemy 2](https://www.sqlalchemy.org/) with [Alembic](https://alembic.sqlalchemy.org/en/latest/) for schema migration
* [python-jose](https://github.com/mpdavis/python-jose) as JWT implementation
* [Faker](https://faker.readthedocs.io/en/master/) as dummy data generator
* [autoflake](https://pypi.org/project/autoflake/) and [isort](https://pycqa.github.io/isort/) for clean imports
* [Flake8](https://flake8.pycqa.org/en/latest/) and [Black](https://black.readthedocs.io/en/stable/) as respective code linter and powerful code formatter
* [Ruff](https://docs.astral.sh/ruff/) as extremely fast linter and code formatter written in rust, a perfect drop-in replacement for flake8, isort and black
* [mypy](http://mypy-lang.org/) as advanced static analyzer
* [pytest](https://docs.pytest.org) as main test framework


@@ -1,4 +1,4 @@
[`Laravel 10`](https://laravel.com/) implementation on `PHP 8.2` with extensive usage of the latest attributes support. The particularity of this framework is to give you almost all you need to quickly develop any complex application, so minimal external packages are needed.
[`Laravel 11`](https://laravel.com/) implementation on `PHP 8.3` with extensive usage of the latest attributes support. The particularity of this framework is to give you almost all you need to quickly develop any complex application, so minimal external packages are needed.
I obviously made usage of **Eloquent** as a very expressive **Active Record** ORM, and the Laravel factories system based on [PHP Faker](https://fakerphp.github.io/) is already perfect as a dummy data generator.
@@ -8,7 +8,7 @@ Main packages involved:
* [PHP JWT](https://github.com/lcobucci/jwt) as JWT implementation, with proper integration to Laravel using custom guard
* [Laravel Routes Attribute](https://github.com/spatie/laravel-route-attributes) for Laravel routing that leverage on last PHP 8 attributes feature
* [Laravel OpenAPI](https://github.com/vyuldashev/laravel-openapi) that also uses PHP 8 attributes for API documentation
* [Laravel OpenAPI](https://github.com/DarkaOnLine/L5-Swagger) that also uses PHP 8 attributes for API documentation
* [Laravel IDE Helper](https://github.com/barryvdh/laravel-ide-helper) for proper IDE integration, perfectly suited for **VS Code** with [Intelephense](https://marketplace.visualstudio.com/items?itemName=bmewburn.vscode-intelephense-client) extension
* [PHP CS Fixer](https://github.com/FriendsOfPHP/PHP-CS-Fixer) as formatter with Laravel style guide
* [Larastan](https://github.com/nunomaduro/larastan), a Laravel wrapper of [PHPStan](https://phpstan.org/), as advanced code static analyzer


@@ -1,4 +1,4 @@
[`NestJS 9`](https://nestjs.com/) implementation under `NodeJS` using [`Typescript`](https://www.typescriptlang.org/) and [`pnpm`](https://pnpm.io/) as fast package manager. It relies by default on [`express`](https://github.com/expressjs/express) as NodeJS HTTP server implementation. NestJS offers a nice OpenAPI documentation generator thanks to Typescript which provides strong typing.
[`NestJS 10`](https://nestjs.com/) implementation under `Node.js 20` using [`Typescript 5`](https://www.typescriptlang.org/) and [`pnpm`](https://pnpm.io/) as fast package manager. It relies by default on [`express`](https://github.com/expressjs/express) as NodeJS HTTP server implementation. NestJS offers a nice OpenAPI documentation generator thanks to Typescript which provides strong typing.
Main packages involved:


@@ -12,7 +12,5 @@ Main purpose of this projects is to have personal extensive API training on mult
* Proper seeder / faker for quick starting with filled DB
* Separated RW / RO database connections for maximizing performance between these 2 contexts
* Proper suited QA + production Dockerfile
* Complete CI on Kubernetes with [Concourse](https://concourse.okami101.io/)
* Complete CI on Kubernetes with [Concourse CI](https://concourse-ci.org/)
* Automatic CD on Kubernetes using [Flux](https://fluxcd.io/)
See complete production deployment manifests [here](https://gitea.okami101.io/okami101/flux-source/src/branch/main/conduit), allowing **GitOps** management.


@@ -1,4 +1,4 @@
[`Spring Boot 3`](https://spring.io/projects/spring-boot) implementation using `Gradle 8` & `Java 17+`. Similar to the [official Spring Boot implementation](https://github.com/gothinkster/spring-boot-realworld-example-app) but with usage of `Spring Data JPA` instead of `MyBatis`. [Here is another nice one](https://github.com/raeperd/realworld-springboot-java) that explicitly follows `DDD`.
[`Spring Boot 3.2`](https://spring.io/projects/spring-boot) implementation using `Gradle 8` & `Java 21`. Similar to the [official Spring Boot implementation](https://github.com/gothinkster/spring-boot-realworld-example-app) but with usage of `Spring Data JPA` instead of `MyBatis`. [Here is another nice one](https://github.com/raeperd/realworld-springboot-java) that explicitly follows `DDD`.
Main packages involved:


@@ -1,9 +1,10 @@
[`Symfony 6.3`](https://symfony.com/) implementation on `PHP 8.2` that supports PHP 8 attributes. I excluded the usage of [API Platform](https://api-platform.com/) here, which is a very powerful API CRUD generator but really not well suited for a truly customized API, in my taste.
[`Symfony 7`](https://symfony.com/) implementation on `PHP 8.3` that supports PHP 8 attributes, using [API Platform](https://api-platform.com/).
Contrary to Laravel, the usage of a **DataMapper** pattern ORM involves classic POPO models. The additional usage of plain PHP DTO classes facilitates the OpenAPI spec models generation without writing all schemas by hand. On the downside, the Nelmio package is far more verbose than the Laravel OpenAPI version.
Main packages involved:
* [API Platform](https://api-platform.com/) as API framework
* [Doctrine](https://www.doctrine-project.org/) as **DataMapper** ORM
* [SensioFrameworkExtraBundle](https://github.com/sensiolabs/SensioFrameworkExtraBundle) for ParamConverter helper with Doctrine
* [FOSRestBundle](https://github.com/FriendsOfSymfony/FOSRestBundle) only for some helpers as DTO automatic converters and validation


@@ -345,7 +345,7 @@ Set proper `GF_DATABASE_PASSWORD` and deploy. Database migration should be autom
### Docker Swarm dashboard
For the best show-case scenario of Grafana, let's import an [existing dashboard](https://grafana.com/grafana/dashboards/11939) suited for a complete Swarm monitoring overview.
For the best show-case scenario of Grafana, let's import an [existing dashboard](https://grafana.com/dashboards/11939) suited for a complete Swarm monitoring overview.
First we need to add Prometheus as the main metrics data source. Go to the *Configuration > Data source* menu and click on *Add data source*. Select Prometheus and set the internal docker Prometheus URL, which should be `http://prometheus:9090`. A success message should appear when saving.
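As an alternative to clicking through the UI, the same data source can be declared with Grafana's file-based provisioning. This is only a sketch: the file path and data source name are assumptions, the URL is the one given above.

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yaml (illustrative path)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```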


@@ -39,7 +39,7 @@ For better fluidity, here is the expected list of variables you'll need to prepa
| `s3_bucket` | kuberocks | |
| `s3_access_key` | xxx | |
| `s3_secret_key` | xxx | |
| `smtp_host` | smtp-relay.brevo.com | |
| `smtp_host` | smtp.tem.scw.cloud | |
| `smtp_port` | 587 | |
| `smtp_user` | <me@kube.rocks> | |
| `smtp_password` | xxx | |
@@ -77,6 +77,7 @@ Here are the pros and cons of each module:
| | [Kube Hetzner](https://registry.terraform.io/modules/kube-hetzner/kube-hetzner/hcloud/latest) | [Okami101 K3s](https://registry.terraform.io/modules/okami101/k3s) |
| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Support** | Strong community | Just intended as a reusable starter-kit |
| **CNI support** | Choice between Flannel, Cilium, Calico | Flannel only, while supporting network encryption with `enable_wireguard` variable, set `flannel-backend` to `none` if installing other CNI |
| **Included helms** | Traefik, Longhorn, Cert Manager, Kured | None, just the K3s initial setup, as it's generally preferable to manage these helm dependencies in a separate terraform project, allowing easier upgrading |
| **Hetzner integration** | Complete, use [Hcloud Controller](https://github.com/hetznercloud/hcloud-cloud-controller-manager) internally, allowing dynamic Load Balancing, autoscaling, cleaner node deletion | Basic, public Load Balancer is statically managed by the nodepool configuration, no autoscaling support |
| **OS** | openSUSE MicroOS, optimized for container workloads | Debian 11 or Ubuntu 22.04 |
@@ -86,6 +87,7 @@ Here are the pros and cons of each module:
| **Upgrade** | You may need to follow new versions regularly | As a simple starter-kit, no need to support all community problems, so very few updates |
| **Quality** | Use many hacks to satisfy all community needs, plenty of remote-exec and file provisioner which is not recommended by HashiCorp themselves | Use standard **cloud-config** for initial provisioning, then **Salt** for cluster OS management |
| **Security** | Needs an SSH private key because of local provisioners, and SSH port opened to every node | Require only public SSH key, minimized opened SSH ports to only controllers, use SSH jump from a controller to access any internal worker node |
| **Bastion** | No real bastion support | Dedicated bastion host support with preinstalled WireGuard VPN, ideal for internal access to critical services like Kube API, longhorn, etc. |
| **Reusability** | Vendor locked to Hetzner Cloud | Easy to adapt for a different cloud provider as long as it supports **cloud-config** (as 99% of them) |
To sum up, choose the Kube Hetzner module if:
@@ -171,14 +173,14 @@ module "hcloud_kube" {
k3s_channel = "stable"
tls_sans = ["cp.kube.rocks"]
disabled_components = ["traefik"]
kubelet_args = [
"eviction-hard=memory.available<250Mi"
]
etcd_s3_backup = {
control_planes_custom_config = {
tls-san = ["cp.kube.rocks"]
disable = ["traefik"]
etcd-s3 = true
etcd-s3-endpoint = "s3.fr-par.scw.cloud"
etcd-s3-access-key = var.s3_access_key
etcd-s3-secret-key = var.s3_secret_key
@@ -216,7 +218,7 @@ output "ssh_config" {
}
```
{{</ highlight >}}
{{< /highlight >}}
#### Explanation
@@ -240,7 +242,7 @@ At any case, consider any leak of writeable Hetzner Cloud token as a **Game Over
3. Sniff any data from the cluster that comes to the compromised server, including secrets, thanks to the new agent.
4. Get access to remote S3 backups.
In order to mitigate any risk of critical data leak, you may use data encryption whenever possible. K3s offers it [natively for etcd](https://docs.k3s.io/security/secrets-encryption). Longhorn also offers it [natively for volumes](https://longhorn.io/docs/latest/advanced-resources/security/volume-encryption/) (including backups).
In order to mitigate any risk of critical data leak, you may use data encryption whenever possible. K3s offers it natively [for etcd](https://docs.k3s.io/security/secrets-encryption) and [for networking using the WireGuard flannel option](https://docs.k3s.io/installation/network-options). Longhorn also offers it [natively for volumes](https://longhorn.io/docs/latest/advanced-resources/security/volume-encryption/) (including backups).
{{</ tab >}}
{{< tab tabName="Global" >}}
@@ -266,7 +268,7 @@ Why not `debian-12` ? Because it's sadly not yet supported by [Salt project](htt
{{< alert >}}
`nfs-common` package is required for Longhorn in order to support RWX volumes.
{{</ alert >}}
{{< /alert >}}
`cluster_name` is the node's name prefix and will have the format `{cluster_name}-{pool_name}-{index}`, for example `kube-storage-01`. `cluster_user` is the username (UID 1000) for SSH access with sudo rights. The `root` user is disabled for remote access security reasons.
@@ -276,17 +278,12 @@ Why not `debian-12` ? Because it's sadly not yet supported by [Salt project](htt
```tf
k3s_channel = "stable"
tls_sans = ["cp.kube.rocks"]
disabled_components = ["traefik"]
kubelet_args = [
"eviction-hard=memory.available<250Mi"
]
```
This is the K3s specific configuration, where you can choose the channel (stable or latest), the TLS SANs, and the kubelet arguments.
I'm disabling the included Traefik because we'll use the more flexible official Helm chart later.
This is the K3s specific configuration, where you can choose the channel (stable or latest), and the kubelet arguments.
I also prefer to increase the eviction threshold to 250Mi, in order to avoid the OS OOM killer.
@@ -294,7 +291,10 @@ I also prefer increase the eviction threshold to 250Mi, in order to avoid OS OOM
{{< tab tabName="Backup" >}}
```tf
etcd_s3_backup = {
control_planes_custom_config = {
tls-san = ["cp.kube.rocks"]
disable = ["traefik"]
etcd-s3 = true
etcd-s3-endpoint = "s3.fr-par.scw.cloud"
etcd-s3-access-key = var.s3_access_key
etcd-s3-secret-key = var.s3_secret_key
@@ -304,7 +304,11 @@ etcd_s3_backup = {
}
```
This will enable automatic daily backup of the etcd database on an S3 bucket, which is useful for faster disaster recovery. See the official guide [here](https://docs.k3s.io/datastore/backup-restore).
Here is some specific additional configuration for K3s servers.
I'm disabling the included Traefik because we'll use the more flexible official Helm chart later.
We're adding automatic daily backup of the etcd database on an S3 bucket, which is useful for faster disaster recovery. See the official guide [here](https://docs.k3s.io/datastore/backup-restore).
{{</ tab >}}
{{< tab tabName="Cluster" >}}
@@ -354,6 +358,42 @@ Will print the SSH config access after cluster creation.
{{</ tab >}}
{{</ tabs >}}
#### ETCD and network encryption by default
You may need to enable etcd and network encryption in order to prevent any data leak in case a server is compromised. You can easily do so by adding the following variables:
{{< highlight host="demo-kube-hcloud" file="kube.tf" >}}
```tf
module "hcloud_kube" {
//...
# You need to install WireGuard package on all nodes
server_packages = ["wireguard"]
control_planes_custom_config = {
//...
flannel-backend = "wireguard-native"
secrets-encryption = true,
}
//...
}
```
{{< /highlight >}}
You can check the ETCD encryption status with `sudo k3s secrets-encrypt status`:
```txt
Encryption Status: Enabled
Current Rotation Stage: start
Server Encryption Hashes: All hashes match
Active Key Type Name
------ -------- ----
* AES-CBC aescbckey
```
#### Inputs
As input variables, you have the choice to use environment variables or a separate `terraform.tfvars` file.
@@ -365,17 +405,17 @@ As input variables, you have the choice to use environment variables or separate
```tf
hcloud_token = "xxx"
my_public_ssh_keys = [
my_ip_addresses = [
"82.82.82.82/32"
]
my_ip_addresses = [
my_public_ssh_keys = [
"ssh-ed25519 xxx"
]
s3_access_key = "xxx"
s3_secret_key = "xxx"
```
{{</ highlight >}}
{{< /highlight >}}
{{</ tab >}}
{{< tab tabName="Environment variables" >}}
@@ -440,7 +480,7 @@ Merge above SSH config into your `~/.ssh/config` file, then test the connection
{{< alert >}}
If you get "Connection refused", it's probably because the server is still on cloud-init phase. Wait a few minutes and try again. Be sure to have the same public IPs as the one you whitelisted in the Terraform variables. You can edit them and reapply the Terraform configuration at any moment.
{{</ alert >}}
{{< /alert >}}
Before using K3s, let's enable Salt for OS management by typing `sudo salt-key -A -y`. This will accept all pending keys, and allow Salt to connect to all nodes. To upgrade all nodes at once, just type `sudo salt '*' pkg.upgrade`.
@@ -455,7 +495,7 @@ From the controller, copy `/etc/rancher/k3s/k3s.yaml` on your machine located ou
{{< alert >}}
If `~/.kube/config` already exists, you have to properly [merge the new config into it](https://able8.medium.com/how-to-merge-multiple-kubeconfig-files-into-one-36fc987c2e2f). You can use `kubectl config view --flatten` for that.
Then use `kubectl config use-context kube` to switch to your new cluster.
{{</ alert >}}
{{< /alert >}}
Type `kubectl get nodes` and you should see the 2 nodes of your cluster in **Ready** state.
@@ -491,7 +531,7 @@ agent_nodepools = [
]
```
{{</ highlight >}}
{{< /highlight >}}
Then apply the Terraform configuration again. After a few minutes, you should see 2 new nodes in **Ready** state.
@@ -505,7 +545,7 @@ kube-worker-03 Ready <none> 25s v1.27.4+k3s1
{{< alert >}}
You'll have to use `sudo salt-key -A -y` each time you add a new node to the cluster for global OS management.
{{</ alert >}}
{{< /alert >}}
#### Deleting workers
@@ -515,7 +555,7 @@ To finalize the deletion, delete the node from the cluster with `krm no kube-wor
{{< alert >}}
If the node has some workloads running, you'll have to consider a proper [draining](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) before deleting it.
{{</ alert >}}
{{< /alert >}}
## 1st check ✅


@@ -25,7 +25,7 @@ terraform {
}
```
{{</ highlight >}}
{{< /highlight >}}
Let's begin with automatic upgrades management.
@@ -34,8 +34,8 @@ Let's begin with automatic upgrades management.
Before we move on to the next steps, we need to install critical monitoring CRDs that will be used by many components for monitoring, a subject that will be covered later.
```sh
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.67.1/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.67.1/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
ka https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml --server-side
ka https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml --server-side
```
### Automatic reboot
@@ -47,7 +47,7 @@ When OS kernel is upgraded, the system needs to be rebooted to apply it. This is
```tf
resource "helm_release" "kubereboot" {
chart = "kured"
version = "5.1.0"
version = "5.4.5"
repository = "https://kubereboot.github.io/charts"
name = "kured"
@ -75,7 +75,7 @@ resource "helm_release" "kubereboot" {
}
```
{{</ highlight >}}
{{< /highlight >}}
For every `helm_release` resource you'll see in this guide, you may check the latest chart version available. Example for `kured`:
@@ -100,11 +100,13 @@ However, as Terraform doesn't offer a proper way to apply a remote multi-documen
{{< alert >}}
Don't push yourself to get fully 100% GitOps everywhere if the remedy brings far more code complexity. Sometimes simple documentation of manual steps in a README is better.
{{</ alert >}}
{{< /alert >}}
```sh
k create ns system-upgrade
# installing system-upgrade-controller
ka https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
ka https://github.com/rancher/system-upgrade-controller/releases/latest/download/crd.yaml
# checking system-upgrade-controller deployment status
kg deploy -n system-upgrade
```
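Shortcuts like `k`, `ka`, `kak`, `kg` and `krm` used throughout these snippets look like common kubectl aliases. Here is a plausible sketch of their definitions, inferred purely from how they are used above (an assumption, not an official alias set):

```shell
# Plausible definitions for the kubectl shortcuts used in this guide.
# Inferred from usage in the snippets; adjust to your own alias conventions.
k()   { kubectl "$@"; }            # plain kubectl
ka()  { kubectl apply -f "$@"; }   # apply a manifest file or URL
kak() { kubectl apply -k "$@"; }   # apply a kustomization directory
kg()  { kubectl get "$@"; }        # get resources
krm() { kubectl delete "$@"; }     # delete resources, e.g. `krm no <node>`
type k ka kak kg krm > /dev/null && echo "shortcuts defined"
```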
@@ -187,19 +189,25 @@ resource "kubernetes_manifest" "agent_plan" {
}
```
{{</ highlight >}}
{{< /highlight >}}
{{< alert >}}
You may set the same channel as previous step for hcloud cluster creation.
{{</ alert >}}
{{< /alert >}}
## External access
Now it's time to expose our cluster to the outside world. We'll use Traefik as ingress controller and cert-manager for SSL certificates management.
Now it's time to expose our cluster to the outside world. We'll use Traefik v3 as ingress controller and cert-manager for SSL certificates management.
### Traefik
Apply following file:
Apply CRDs:
```sh
kak https://github.com/traefik/traefik-helm-chart/traefik/crds/ --server-side
```
Then apply the following:
{{< highlight host="demo-kube-k3s" file="traefik.tf" >}}
@@ -216,25 +224,31 @@ resource "kubernetes_namespace_v1" "traefik" {
resource "helm_release" "traefik" {
chart = "traefik"
version = "24.0.0"
version = "28.0.0"
repository = "https://traefik.github.io/charts"
name = "traefik"
namespace = kubernetes_namespace_v1.traefik.metadata[0].name
set {
name = "ports.web.redirectTo"
name = "ports.web.redirectTo.port"
value = "websecure"
}
set {
set_list {
name = "ports.websecure.forwardedHeaders.trustedIPs"
value = "{127.0.0.1/32,10.0.0.0/8}"
value = [
"127.0.0.1/32",
"10.0.0.0/8"
]
}
set {
set_list {
name = "ports.websecure.proxyProtocol.trustedIPs"
value = "{127.0.0.1/32,10.0.0.0/8}"
value = [
"127.0.0.1/32",
"10.0.0.0/8"
]
}
set {
@ -259,9 +273,9 @@ resource "helm_release" "traefik" {
}
```
{{</ highlight >}}
{{< /highlight >}}
`ports.web.redirectTo` will redirect all HTTP traffic to HTTPS.
`ports.web.redirectTo.port` will redirect all HTTP traffic to HTTPS.
`forwardedHeaders` and `proxyProtocol` will allow Traefik to get the real client IP.
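The same settings can also be expressed directly as Helm values instead of individual `set`/`set_list` blocks. This is a sketch mirroring the options above; the YAML layout simply follows the dotted value paths used in the Terraform code:

```yaml
ports:
  web:
    redirectTo:
      port: websecure
  websecure:
    forwardedHeaders:
      trustedIPs:
        - 127.0.0.1/32
        - 10.0.0.0/8
    proxyProtocol:
      trustedIPs:
        - 127.0.0.1/32
        - 10.0.0.0/8
```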
@ -317,14 +331,14 @@ resource "hcloud_load_balancer_service" "https_service" {
}
```
{{</ highlight >}}
{{< /highlight >}}
Use `hcloud load-balancer-type list` to get the list of available load balancer types.
{{< alert >}}
Don't forget to add `hcloud_load_balancer_service` resource for each service (aka port) you want to serve.
We use `tcp` protocol as Traefik will handle SSL termination. Set `proxyprotocol` to true to allow Traefik to get real IP of clients.
{{</ alert >}}
{{< /alert >}}
Once applied, use `hcloud load-balancer list` to get the public IP of the load balancer and try to curl it. You should be properly redirected to HTTPS and get a certificate error. It's time to get SSL certificates.
@@ -333,7 +347,7 @@ Once applied, use `hcloud load-balancer list` to get the public IP of the load ba
We need to install cert-manager for proper distributed SSL management. First, install its CRDs manually.
```sh
ka https://github.com/cert-manager/cert-manager/releases/download/v1.12.3/cert-manager.crds.yaml
ka https://github.com/cert-manager/cert-manager/releases/download/v1.15.0/cert-manager.crds.yaml
```
Then apply the following Terraform code.
@@ -349,7 +363,7 @@ resource "kubernetes_namespace_v1" "cert_manager" {
resource "helm_release" "cert_manager" {
chart = "cert-manager"
version = "v1.12.3"
version = "v1.15.0"
repository = "https://charts.jetstack.io"
name = "cert-manager"
@ -362,12 +376,12 @@ resource "helm_release" "cert_manager" {
}
```
{{</ highlight >}}
{{< /highlight >}}
{{< alert >}}
You can use the `installCRDs` option to install CRDs automatically. But uninstalling cert-manager will then delete all associated resources, including generated certificates. That's why I generally prefer to install CRDs manually.
As always we enable `prometheus.servicemonitor.enabled` to allow Prometheus to scrape cert-manager metrics.
{{</ alert >}}
{{< /alert >}}
All should be ok with `kg deploy -n cert-manager`.
@ -377,7 +391,7 @@ We'll use [DNS01 challenge](https://cert-manager.io/docs/configuration/acme/dns0
{{< alert >}}
You may use a DNS provider supported by cert-manager. Check the [list of supported providers](https://cert-manager.io/docs/configuration/acme/dns01/#supported-dns01-providers). As cert-manager is highly extensible, you can easily create your own provider with some effort. Check [available contrib webhooks](https://cert-manager.io/docs/configuration/acme/dns01/#webhook).
{{</ alert >}}
{{< /alert >}}
First prepare variables and set them accordingly:
@ -398,7 +412,7 @@ variable "dns_api_token" {
}
```
{{</ highlight >}}
{{< /highlight >}}
{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}}
@ -408,7 +422,7 @@ domain = "kube.rocks"
dns_api_token = "xxx"
```
{{</ highlight >}}
{{< /highlight >}}
Then we need to create a default `Certificate` k8s resource associated with a valid `ClusterIssuer` resource that will manage its generation. Apply the following Terraform code to issue the new wildcard certificate for your domain.
@ -484,12 +498,12 @@ resource "kubernetes_manifest" "tls_certificate" {
}
```
{{</ highlight >}}
{{< /highlight >}}
{{< alert >}}
You can set `acme.privateKeySecretRef.name` to **letsencrypt-staging** for testing purposes and avoid wasting your LE quota.
Set `privateKey.rotationPolicy` to `Always` to ensure that the certificate will be [renewed automatically](https://cert-manager.io/docs/usage/certificate/) 30 days before expiry, without downtime.
{{</ alert >}}
{{< /alert >}}
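For reference, the two options from the note above map to the following `Certificate` fields. This is only a sketch of the relevant spec fragment; the secret and issuer names are assumptions, the real resource is defined in the Terraform code above.

```yaml
# Fragment of a cert-manager Certificate spec — names are placeholders
spec:
  secretName: wildcard-tls          # assumed target secret name
  issuerRef:
    name: letsencrypt               # or letsencrypt-staging for testing
    kind: ClusterIssuer
  privateKey:
    rotationPolicy: Always          # rotate the private key on each renewal
  dnsNames:
    - "kube.rocks"
    - "*.kube.rocks"
```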
In the meantime, go to your DNS provider and add a new `*.kube.rocks` entry pointing to the load balancer IP.
@ -530,7 +544,7 @@ resource "null_resource" "encrypted_admin_password" {
}
```
{{</ highlight >}}
{{< /highlight >}}
{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}}
@ -540,11 +554,11 @@ http_password = "xxx"
whitelisted_ips = ["82.82.82.82"]
```
{{</ highlight >}}
{{< /highlight >}}
{{< alert >}}
Note on `encrypted_admin_password`: we generate a bcrypt hash of the password, compatible with HTTP basic auth, and keep the original to avoid regenerating it each time.
{{</ alert >}}
{{< /alert >}}
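As a sketch, such a hash can be produced directly with Terraform's built-in `bcrypt()` function; the actual resource used in this guide may differ in details.

```tf
# Sketch only — assumes Terraform's built-in bcrypt() function
resource "null_resource" "encrypted_admin_password" {
  triggers = {
    orig     = var.http_password         # keep original to detect changes
    password = bcrypt(var.http_password) # bcrypt hash for HTTP basic auth
  }

  lifecycle {
    # bcrypt() yields a new salt on every run; ignore to avoid churn
    ignore_changes = [triggers]
  }
}
```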
Then apply the following Terraform code:
@ -554,9 +568,9 @@ Then apply the following Terraform code:
resource "helm_release" "traefik" {
//...
set {
set_list {
name = "ingressRoute.dashboard.entryPoints"
value = "{websecure}"
value = ["websecure"]
}
set {
@ -611,7 +625,7 @@ resource "kubernetes_manifest" "traefik_middleware_ip" {
namespace = kubernetes_namespace_v1.traefik.metadata[0].name
}
spec = {
ipWhiteList = {
ipAllowList = {
sourceRange = var.whitelisted_ips
}
}
@ -619,7 +633,7 @@ resource "kubernetes_manifest" "traefik_middleware_ip" {
}
```
{{</ highlight >}}
{{< /highlight >}}
Now go to `https://traefik.kube.rocks` and you should be asked for credentials. After login, you should see the dashboard.
@ -640,7 +654,7 @@ resource "kubernetes_manifest" "traefik_middleware_ip" {
manifest = {
//...
spec = {
ipWhiteList = {
ipAllowList = {
sourceRange = var.whitelisted_ips
ipStrategy = {
depth = 1
@ -651,7 +665,7 @@ resource "kubernetes_manifest" "traefik_middleware_ip" {
}
```
{{</ highlight >}}
{{< /highlight >}}
In the case of Cloudflare, you may also need to trust the [Cloudflare IP ranges](https://www.cloudflare.com/ips-v4) in addition to the Hetzner load balancer. Just set `ports.websecure.forwardedHeaders.trustedIPs` and `ports.websecure.proxyProtocol.trustedIPs` accordingly.
@ -664,7 +678,7 @@ variable "cloudflare_ips" {
}
```
{{</ highlight >}}
{{< /highlight >}}
{{< highlight host="demo-kube-k3s" file="traefik.tf" >}}
@ -676,19 +690,19 @@ locals {
resource "helm_release" "traefik" {
//...
set {
set_list {
name = "ports.websecure.forwardedHeaders.trustedIPs"
value = "{${join(",", local.trusted_ips)}}"
value = local.trusted_ips
}
set {
set_list {
name = "ports.websecure.proxyProtocol.trustedIPs"
value = "{${join(",", local.trusted_ips)}}"
value = local.trusted_ips
}
}
```
{{</ highlight >}}
{{< /highlight >}}
Or, for testing purposes, set `ports.websecure.forwardedHeaders.insecure` and `ports.websecure.proxyProtocol.insecure` to true.


@ -19,7 +19,7 @@ In Kubernetes world, the most difficult while essential part is probably the sto
If you are not familiar with Kubernetes storage, you must at least be aware of the pros and cons of `RWO` and `RWX` volumes when creating a `PVC`.
In general `RWO` is more performant, but only one pod can mount it, while `RWX` is slower, but allow sharing between multiple pods.
`RWO` is a single node volume, and `RWX` is a shared volume between multiple nodes.
{{</ alert >}}
{{< /alert >}}
`K3s` comes with a built-in `local-path` provisioner, which is the most performant `RWO` solution by directly using local NVMe SSD. But it's neither resilient nor scalable. I think it's a good solution for data you don't consider critical.
@ -126,7 +126,7 @@ The volume is of course automatically mounted on each node reboot, it's done via
{{< alert >}}
Note that if you create the volume at the same time as the node pool, Hetzner doesn't seem to mount the volume automatically. So it's preferable to create the node pool first, then add the volume as soon as the node is in ready state. You can always detach / re-attach volumes manually through the UI, which will force a proper remount.
{{</ alert >}}
{{< /alert >}}
### Longhorn variables
@ -200,7 +200,7 @@ resource "kubernetes_secret_v1" "longhorn_backup_credential" {
resource "helm_release" "longhorn" {
chart = "longhorn"
version = "1.5.1"
version = "1.6.1"
repository = "https://charts.longhorn.io"
name = "longhorn"
@ -254,7 +254,7 @@ resource "helm_release" "longhorn" {
Set both `persistence.defaultClassReplicaCount` (used for Kubernetes configuration in longhorn storage class) and `defaultSettings.defaultReplicaCount` (for volumes created from the UI) to 2 as we have 2 storage nodes.
The toleration is required to allow Longhorn pods (managers and drivers) to be scheduled on storage nodes in addition to workers.
Note that we need Longhorn deployed on workers too, otherwise pods scheduled on these nodes can't be attached to Longhorn volumes.
{{</ alert >}}
{{< /alert >}}
Use `kgpo -n longhorn-system -o wide` to check that Longhorn pods are correctly running on storage nodes as well as worker nodes. You should have `instance-manager` deployed on each node.
@ -342,7 +342,7 @@ resource "kubernetes_manifest" "longhorn_ingress" {
{{< alert >}}
It's vital that you have at least IP and AUTH middlewares with a strong password for Longhorn UI access, as it concerns the most critical part of the cluster.
Of course, you can skip this ingress and directly use `kpf svc/longhorn-frontend -n longhorn-system 8000:80` to access Longhorn UI securely.
{{</ alert >}}
{{< /alert >}}
### Nodes and volumes configuration
@ -358,7 +358,7 @@ Type this commands for both storage nodes or use Longhorn UI from **Node** tab:
```sh
# get the default-disk-xxx identifier
kg nodes.longhorn.io okami-storage-01 -n longhorn-system -o yaml
kg nodes.longhorn.io kube-storage-0x -n longhorn-system -o yaml
# patch main default-disk-xxx as fast storage
k patch nodes.longhorn.io kube-storage-0x -n longhorn-system --type=merge --patch '{"spec": {"disks": {"default-disk-xxx": {"tags": ["fast"]}}}}'
# add a new schedulable disk by adding HC_Volume_XXXXXXXX path
@ -386,6 +386,7 @@ resource "kubernetes_storage_class_v1" "longhorn_fast" {
fromBackup = ""
fsType = "ext4"
diskSelector = "fast"
dataLocality = "strict-local"
}
}
```
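A claim using this new storage class could look like the following sketch (the claim name and size are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data            # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce        # Longhorn RWO volume
  storageClassName: longhorn-fast
  resources:
    requests:
      storage: 10Gi
```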
@ -476,7 +477,7 @@ resource "kubernetes_secret_v1" "postgresql_auth" {
resource "helm_release" "postgresql" {
chart = "postgresql"
version = var.chart_postgresql_version
version = "15.2.5"
repository = "https://charts.bitnami.com/bitnami"
name = "postgresql"
@ -507,11 +508,6 @@ resource "helm_release" "postgresql" {
value = "replication"
}
set {
name = "architecture"
value = "replication"
}
set {
name = "metrics.enabled"
value = "true"
@ -576,25 +572,25 @@ resource "helm_release" "postgresql" {
}
```
{{</ highlight >}}
{{< /highlight >}}
{{< alert >}}
Don't forget to use fast storage by setting `primary.persistence.storageClass` and `readReplicas.persistence.storageClass` accordingly.
{{</ alert >}}
{{< /alert >}}
Now check that PostgreSQL pods are correctly running on storage nodes with `kgpo -n postgres -o wide`.
```txt
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
postgresql-primary-0 2/2 Running 0 151m 10.42.5.253 okami-storage-01 <none> <none>
postgresql-read-0 2/2 Running 0 152m 10.42.2.216 okami-storage-02 <none> <none>
postgresql-primary-0 2/2 Running 0 151m 10.42.5.253 kube-storage-01 <none> <none>
postgresql-read-0 2/2 Running 0 152m 10.42.2.216 kube-storage-02 <none> <none>
```
And that's it, we have a replicated PostgreSQL cluster ready to use! Go to the Longhorn UI and make sure that 2 volumes are created on the fast disk under the **Volume** menu.
## Redis cluster
After PostgreSQL, set up a master/slave redis is a piece of cake. You may prefer [redis cluster](https://redis.io/docs/management/scaling/) by using [Bitnami redis cluster](https://artifacthub.io/packages/helm/bitnami/redis-cluster), but it [doesn't work](https://github.com/bitnami/charts/issues/12901) at the time of writing this guide.
After PostgreSQL, setting up a Redis cluster is a piece of cake. Let's use [Bitnami redis](https://artifacthub.io/packages/helm/bitnami/redis) with [Sentinel](https://redis.io/docs/management/sentinel/).
### Redis variables
@ -640,17 +636,12 @@ resource "kubernetes_secret_v1" "redis_auth" {
resource "helm_release" "redis" {
chart = "redis"
version = "17.15.6"
version = "19.1.0"
repository = "https://charts.bitnami.com/bitnami"
name = "redis"
namespace = kubernetes_namespace_v1.redis.metadata[0].name
set {
name = "architecture"
value = "standalone"
}
set {
name = "auth.existingSecret"
value = kubernetes_secret_v1.redis_auth.metadata[0].name
@ -672,67 +663,25 @@ resource "helm_release" "redis" {
}
set {
name = "master.tolerations[0].key"
value = "node-role.kubernetes.io/storage"
}
set {
name = "master.tolerations[0].effect"
value = "NoSchedule"
}
set {
name = "master.nodeSelector.node-role\\.kubernetes\\.io/primary"
type = "string"
name = "sentinel.enabled"
value = "true"
}
set {
name = "master.persistence.size"
value = "10Gi"
}
set {
name = "master.persistence.storageClass"
value = "longhorn-fast"
name = "replica.persistence.enabled"
value = "false"
}
set {
name = "replica.replicaCount"
value = "1"
}
set {
name = "replica.tolerations[0].key"
value = "node-role.kubernetes.io/storage"
}
set {
name = "replica.tolerations[0].effect"
value = "NoSchedule"
}
set {
name = "replica.nodeSelector.node-role\\.kubernetes\\.io/read"
type = "string"
value = "true"
}
set {
name = "replica.persistence.size"
value = "10Gi"
}
set {
name = "replica.persistence.storageClass"
value = "longhorn-fast"
value = "3"
}
}
```
{{< /highlight >}}
And that's it, job done ! Always check that Redis pods are correctly running on storage nodes with `kgpo -n redis -o wide` and volumes are ready on Longhorn.
And that's it, job done! Check that all 3 Redis nodes are correctly running on worker nodes with `kgpo -n redis -o wide`. Thanks to Sentinel, Redis is highly available and resilient.
## Backups
@ -805,7 +754,7 @@ Configure this variable according to your needs.
If you need a regular dump of your database without requiring a dedicated Kubernetes `CronJob`, you can simply use the following crontab line on the control plane node:
```sh
0 */8 * * * root /usr/local/bin/k3s kubectl exec sts/postgresql-primary -n postgres -- /bin/sh -c 'PGUSER="okami" PGPASSWORD="$POSTGRES_PASSWORD" pg_dumpall -c | gzip > /bitnami/postgresql/dump_$(date "+\%H")h.sql.gz'
0 */8 * * * root /usr/local/bin/k3s kubectl exec sts/postgresql-primary -n postgres -- /bin/sh -c 'PGUSER="okami" PGPASSWORD="$POSTGRES_PASSWORD" pg_dumpall -c --if-exists | gzip > /bitnami/postgresql/dump_$(date "+\%H")h.sql.gz'
```
It will generate 3 daily dumps, one every 8 hours, on the same primary db volume, allowing easy `psql` restore from the same container.
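As a quick sketch, the crontab line names each dump after the current hour, so a restore from the same volume boils down to picking the right file. The path and user below are the ones used above; the restore command is printed rather than executed, as it must run inside the primary container.

```shell
# Reproduce the dump filename pattern used by the crontab line
fname="dump_$(date "+%H")h.sql.gz"
echo "$fname"

# Restore command to run inside the primary container (printed, not executed)
echo "gunzip -c /bitnami/postgresql/$fname | psql -U okami"
```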


@ -125,7 +125,8 @@ provider "flux" {
}
resource "flux_bootstrap_git" "this" {
path = "clusters/demo"
path = "clusters/demo"
embedded_manifests = true
components_extra = [
"image-reflector-controller",
@ -152,7 +153,7 @@ Open `demo-kube-flux` project and create helm deployment for sealed secret.
```yaml
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: sealed-secrets
@ -161,7 +162,7 @@ spec:
interval: 1h0m0s
url: https://bitnami-labs.github.io/sealed-secrets
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: sealed-secrets
@ -352,7 +353,7 @@ Let's try some app that require a bit more configuration and real database conne
{{< highlight host="demo-kube-flux" file="clusters/demo/n8n/deploy-n8n.yaml" >}}
```yaml
apiVersion: apps/v1
apiVersion: v1
kind: Namespace
metadata:
name: n8n
@ -423,10 +424,14 @@ spec:
volumeMounts:
- name: n8n-data
mountPath: /home/node/.n8n
- name: n8n-cache
mountPath: /home/node/.cache
volumes:
- name: n8n-data
persistentVolumeClaim:
claimName: n8n-data
- name: n8n-cache
emptyDir: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
@ -517,7 +522,7 @@ Let's try a final candidate with NocoDB, an Airtable-like generator for Postgres
{{< highlight host="demo-kube-flux" file="clusters/demo/nocodb/deploy-nocodb.yaml" >}}
```yaml
apiVersion: apps/v1
apiVersion: v1
kind: Namespace
metadata:
name: nocodb


@ -69,7 +69,7 @@ resource "kubernetes_namespace_v1" "monitoring" {
resource "helm_release" "kube_prometheus_stack" {
chart = "kube-prometheus-stack"
version = "49.2.0"
version = "58.1.0"
repository = "https://prometheus-community.github.io/helm-charts"
name = "kube-prometheus-stack"
@ -112,12 +112,12 @@ resource "helm_release" "kube_prometheus_stack" {
set {
name = "prometheus.prometheusSpec.tolerations[0].key"
value = "node-role.kubernetes.io/storage"
value = "node-role.kubernetes.io/monitor"
}
set {
name = "prometheus.prometheusSpec.tolerations[0].operator"
value = "Exists"
name = "prometheus.prometheusSpec.tolerations[0].effect"
value = "NoSchedule"
}
set {
@ -159,6 +159,10 @@ Important notes:
* As we don't set any storage class, the default one will be used, which is `local-path` when using K3s. If you want to use Longhorn instead and benefit from automatic monitoring backup, you can set it with `...volumeClaimTemplate.spec.storageClassName`. But don't forget to deploy the Longhorn manager by adding the monitor toleration.
* As it's a huge chart, I want to minimize dependencies by disabling Grafana, as I prefer to manage it separately. However, in this case we may set `grafana.forceDeployDatasources` and `grafana.forceDeployDashboards` to `true` in order to benefit from all included Kubernetes dashboards and automatic Prometheus datasource injection, and deploy them to config maps that can be used for the next Grafana install by provisioning.
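For illustration, these Grafana options translate to `set` blocks like the following inside the `kube_prometheus_stack` release (a sketch, not the full resource):

```tf
resource "helm_release" "kube_prometheus_stack" {
  //...

  set {
    name  = "grafana.enabled"
    value = "false"
  }

  set {
    name  = "grafana.forceDeployDatasources"
    value = "true"
  }

  set {
    name  = "grafana.forceDeployDashboards"
    value = "true"
  }
}
```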
{{< alert >}}
As Terraform plans become slower and slower, you can apply one single resource by using the `-target` option. For example, to apply only the Prometheus stack, use `terraform apply -target=helm_release.kube_prometheus_stack`. It will save you a lot of time when testing.
{{< /alert >}}
And finally the ingress for external access:
{{< highlight host="demo-kube-k3s" file="monitoring.tf" >}}
@ -292,7 +296,7 @@ Create `grafana` database through pgAdmin with same user and according `grafana_
{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}}
```tf
smtp_host = "smtp.mailgun.org"
smtp_host = "smtp.tem.scw.cloud"
smtp_port = "587"
smtp_user = "xxx"
smtp_password = "xxx"
@ -307,7 +311,7 @@ Apply next configuration to Terraform project:
```tf
resource "helm_release" "grafana" {
chart = "grafana"
version = "6.58.9"
version = "7.3.8"
repository = "https://grafana.github.io/helm-charts"
name = "grafana"
@ -432,10 +436,6 @@ If you go to `https://grafana.kube.rocks/dashboards`, you should see a many dash
* Prometheus and Grafana itself stats
* Flux stats
{{< alert >}}
Some other core components like etcd, scheduler, proxy, and controller manager need to have metrics enabled to be scraped. See K3s docs or [this issue](https://github.com/k3s-io/k3s/issues/3619)
{{< /alert >}}
#### Prometheus
[![Prometheus](dashboards-prometheus.png)](dashboards-prometheus.png)
@ -466,7 +466,7 @@ You can easily import some additional dashboards by importing them from Grafana
#### Traefik
[Link](https://grafana.com/grafana/17346)
[Link](https://grafana.com/dashboards/17346)
[![Traefik](dashboards-traefik.png)](dashboards-traefik.png)
@ -478,31 +478,61 @@ You can easily import some additional dashboards by importing them from Grafana
#### Longhorn
[Link](https://grafana.com/grafana/16888)
[Link](https://grafana.com/dashboards/16888)
[![Longhorn](dashboards-longhorn.png)](dashboards-longhorn.png)
#### PostgreSQL
[Link](https://grafana.com/grafana/9628)
[Link](https://grafana.com/dashboards/9628)
[![PostgreSQL](dashboards-postgresql.png)](dashboards-postgresql.png)
#### Redis
[Link](https://grafana.com/grafana/dashboards/763)
[Link](https://grafana.com/dashboards/763)
[![Redis](dashboards-redis.png)](dashboards-redis.png)
#### Other core components
Some other core components like etcd, scheduler, proxy, and controller manager need to have metrics enabled to be scraped. See K3s docs or [this issue](https://github.com/k3s-io/k3s/issues/3619).
From the Terraform Hcloud project, use `control_planes_custom_config` to expose all remaining metrics endpoints:
{{< highlight host="demo-kube-hcloud" file="kube.tf" >}}
```tf
module "hcloud_kube" {
//...
control_planes_custom_config = {
//...
etcd-expose-metrics = true,
kube-scheduler-arg = "bind-address=0.0.0.0",
kube-controller-manager-arg = "bind-address=0.0.0.0",
kube-proxy-arg = "metrics-bind-address=0.0.0.0",
}
//...
}
```
{{< /highlight >}}
{{< alert >}}
As the above config applies only at cluster initialization, you may directly change `/etc/rancher/k3s/config.yaml` instead and restart the K3s server.
{{< /alert >}}
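For reference, the same settings in `/etc/rancher/k3s/config.yaml` form would look like this sketch, assuming K3s' standard flag-to-YAML mapping:

```yaml
# /etc/rancher/k3s/config.yaml — sketch of the equivalent settings
etcd-expose-metrics: true
kube-scheduler-arg:
  - bind-address=0.0.0.0
kube-controller-manager-arg:
  - bind-address=0.0.0.0
kube-proxy-arg:
  - metrics-bind-address=0.0.0.0
```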
## Logging
Last but not least, we need to add a logging stack. The most popular one is [Elastic Stack](https://www.elastic.co/elastic-stack), but it's very resource intensive. A more lightweight option is to use [Loki](https://grafana.com/oss/loki/), also part of Grafana Labs.
In order to work on scalable mode, we need to have a S3 storage backend. We will reuse same S3 compatible storage as longhorn backup here, but it's recommended to use a separate bucket and credentials.
We need an S3 storage backend for long-term storage. We will reuse the same S3 compatible storage as the Longhorn backup here, but it's recommended to use a separate bucket and credentials.
### Loki
Let's install it now:
Let's install it in single binary mode:
{{< highlight host="demo-kube-k3s" file="logging.tf" >}}
@ -515,7 +545,7 @@ resource "kubernetes_namespace_v1" "logging" {
resource "helm_release" "loki" {
chart = "loki"
version = "5.15.0"
version = "6.2.0"
repository = "https://grafana.github.io/helm-charts"
name = "loki"
@ -531,6 +561,11 @@ resource "helm_release" "loki" {
value = "true"
}
set {
name = "loki.compactor.delete_request_store"
value = "s3"
}
set {
name = "loki.limits_config.retention_period"
value = "24h"
@ -572,34 +607,78 @@ resource "helm_release" "loki" {
}
set {
name = "read.replicas"
name = "loki.commonConfig.replication_factor"
value = "1"
}
set {
name = "loki.schemaConfig.configs[0].from"
value = "2024-01-01"
}
set {
name = "loki.schemaConfig.configs[0].store"
value = "tsdb"
}
set {
name = "loki.schemaConfig.configs[0].object_store"
value = "s3"
}
set {
name = "loki.schemaConfig.configs[0].schema"
value = "v13"
}
set {
name = "loki.schemaConfig.configs[0].index.prefix"
value = "index_"
}
set {
name = "loki.schemaConfig.configs[0].index.period"
value = "24h"
}
set {
name = "deploymentMode"
value = "SingleBinary"
}
set {
name = "read.replicas"
value = "0"
}
set {
name = "backend.replicas"
value = "1"
value = "0"
}
set {
name = "write.replicas"
value = "2"
value = "0"
}
set {
name = "write.tolerations[0].key"
value = "node-role.kubernetes.io/storage"
name = "singleBinary.replicas"
value = "1"
}
set {
name = "write.tolerations[0].effect"
name = "singleBinary.tolerations[0].key"
value = "node-role.kubernetes.io/monitor"
}
set {
name = "singleBinary.tolerations[0].effect"
value = "NoSchedule"
}
set {
name = "write.nodeSelector.node-role\\.kubernetes\\.io/storage"
type = "string"
value = "true"
name = "singleBinary.nodeSelector.node\\.kubernetes\\.io/server-usage"
value = "monitor"
}
set {
@ -626,6 +705,21 @@ resource "helm_release" "loki" {
name = "test.enabled"
value = "false"
}
set {
name = "chunksCache.enabled"
value = "false"
}
set {
name = "resultsCache.enabled"
value = "false"
}
set {
name = "lokiCanary.enabled"
value = "false"
}
}
```
@ -642,7 +736,7 @@ Okay so Loki is running but not fed, for that we'll deploy [Promtail](https://gr
```tf
resource "helm_release" "promtail" {
chart = "promtail"
version = "6.15.0"
version = "6.15.5"
repository = "https://grafana.github.io/helm-charts"
name = "promtail"
@ -715,107 +809,6 @@ We have nothing more to do, all dashboards are already provided by Loki Helm cha
[![Loki explore](dashboards-loki.png)](dashboards-loki.png)
## Helm Exporter
We have installed many Helm charts so far, but how do we manage upgrade plans? We need to be aware of new versions and security fixes. For that, we can use Helm Exporter:
{{< highlight host="demo-kube-k3s" file="monitoring.tf" >}}
```tf
resource "helm_release" "helm_exporter" {
chart = "helm-exporter"
version = "1.2.5+1cbc9c5"
repository = "https://shanestarcher.com/helm-charts"
name = "helm-exporter"
namespace = kubernetes_namespace_v1.monitoring.metadata[0].name
set {
name = "serviceMonitor.create"
value = "true"
}
set {
name = "grafanaDashboard.enabled"
value = "true"
}
set {
name = "grafanaDashboard.grafanaDashboard.namespace"
value = kubernetes_namespace_v1.monitoring.metadata[0].name
}
values = [
file("values/helm-exporter-values.yaml")
]
}
```
{{< /highlight >}}
As the Helm Exporter config is a bit tedious, it's more straightforward to use a separate Helm values file. Here is a sample configuration for scraping all the charts we'll need:
{{< highlight host="demo-kube-k3s" file="values/helm-exporter-values.yaml" >}}
```yaml
config:
helmRegistries:
registryNames:
- bitnami
override:
- registry:
url: "https://concourse-charts.storage.googleapis.com"
charts:
- concourse
- registry:
url: "https://dl.gitea.io/charts"
charts:
- gitea
- registry:
url: "https://grafana.github.io/helm-charts"
charts:
- grafana
- loki
- promtail
- tempo
- registry:
url: "https://charts.longhorn.io"
charts:
- longhorn
- registry:
url: "https://charts.jetstack.io"
charts:
- cert-manager
- registry:
url: "https://traefik.github.io/charts"
charts:
- traefik
- registry:
url: "https://bitnami-labs.github.io/sealed-secrets"
charts:
- sealed-secrets
- registry:
url: "https://prometheus-community.github.io/helm-charts"
charts:
- kube-prometheus-stack
- registry:
url: "https://SonarSource.github.io/helm-chart-sonarqube"
charts:
- sonarqube
- registry:
url: "https://kubereboot.github.io/charts"
charts:
- kured
- registry:
url: "https://shanestarcher.com/helm-charts"
charts:
- helm-exporter
```
{{< /highlight >}}
You can easily start from the provisioned dashboard and customize it to use `helm_chart_outdated` instead of `helm_chart_info` to list all outdated charts.
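For example, a minimal PromQL query on the exporter's metric (the metric name comes from the exporter; any grouping labels you add are assumptions, check the scraped output for exact label names):

```txt
# Number of charts with a newer version available
count(helm_chart_outdated)
```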
## 5th check ✅
We now have a full monitoring suite with a performant logging collector! That was a pretty massive subject. At this stage, you have a good starting point to run many apps on your cluster with high scalability and observability. We are done with the pure **operational** part. It's finally time to tackle the **building** part for a complete development stack. Go to the [next part]({{< ref "/posts/16-a-beautiful-gitops-day-6" >}}) to begin with continuous integration.


@ -62,7 +62,7 @@ Then the Helm chart itself:
```tf
locals {
redis_connection = "redis://:${urlencode(var.redis_password)}@redis-master.redis:6379/0"
redis_connection = "redis://:${urlencode(var.redis_password)}@redis.redis:6379/0"
}
resource "kubernetes_namespace_v1" "gitea" {
@ -73,7 +73,7 @@ resource "kubernetes_namespace_v1" "gitea" {
resource "helm_release" "gitea" {
chart = "gitea"
version = "9.2.0"
version = "10.1.4"
repository = "https://dl.gitea.io/charts"
name = "gitea"
@ -303,7 +303,7 @@ You should be able to log in `https://gitea.kube.rocks` with chosen admin creden
### Push a basic Web API project
Let's generate a basic .NET Web API project. Create a new dotnet project like following (you may install [last .NET SDK](https://dotnet.microsoft.com/en-us/download)):
Let's generate a basic .NET Web API project. Create a new .NET 8 project as follows (you may need to install the [.NET 8 SDK](https://dotnet.microsoft.com/en-us/download)):
```sh
mkdir kuberocks-demo
@ -311,7 +311,7 @@ cd kuberocks-demo
dotnet new sln
dotnet new gitignore
dotnet new editorconfig
dotnet new webapi -o src/KubeRocks.WebApi
dotnet new webapi -o src/KubeRocks.WebApi --use-controllers
dotnet sln add src/KubeRocks.WebApi
git init
git add .
@ -359,7 +359,7 @@ resource "helm_release" "traefik" {
}
set {
name = "ports.ssh.expose"
name = "ports.ssh.expose.default"
value = "true"
}
@ -414,7 +414,7 @@ Now retry pull again and it should work seamlessly !
### Gitea monitoring
[Link](https://grafana.com/grafana/dashboards/17802)
[Link](https://grafana.com/dashboards/17802)
[![Gitea monitoring](gitea-monitoring.png)](gitea-monitoring.png)
@ -510,7 +510,7 @@ resource "kubernetes_namespace_v1" "concourse" {
resource "helm_release" "concourse" {
chart = "concourse"
version = "17.2.0"
version = "17.3.1"
repository = "https://concourse-charts.storage.googleapis.com"
name = "concourse"


@ -193,7 +193,7 @@ Firstly create following files in root of your repo that we'll use for building
{{< highlight host="kuberocks-demo" file="Dockerfile" >}}
```Dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:7.0
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /publish
COPY /publish .
@ -253,7 +253,7 @@ jobs:
type: registry-image
source:
repository: mcr.microsoft.com/dotnet/sdk
tag: "7.0"
tag: "8.0"
inputs:
- name: source-code
path: .
@ -432,7 +432,7 @@ Let's define the image update automation task for main Flux repository:
{{< highlight host="demo-kube-flux" file="clusters/demo/flux-add-ons/image-update-automation.yaml" >}}
```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta1
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageUpdateAutomation
metadata:
name: flux-system
@ -465,7 +465,7 @@ Now we need to tell Image Reflector how to scan the repository, as well as the a
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/images-demo.yaml" >}}
```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta1
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
name: demo
@ -476,7 +476,7 @@ spec:
secretRef:
name: dockerconfigjson
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
name: demo


@ -120,8 +120,9 @@ The last step but not least for a total integration with our monitored Kubernete
Installing minimal ASP.NET Core metrics is really a no-brainer:
```sh
dotnet add src/KubeRocks.WebApi package OpenTelemetry.AutoInstrumentation --prerelease
dotnet add src/KubeRocks.WebApi package OpenTelemetry.Extensions.Hosting --prerelease
dotnet add src/KubeRocks.WebApi package OpenTelemetry.Instrumentation.AspNetCore --prerelease
dotnet add src/KubeRocks.WebApi package OpenTelemetry.Instrumentation.EntityFrameworkCore --prerelease
dotnet add src/KubeRocks.WebApi package OpenTelemetry.Exporter.Prometheus.AspNetCore --prerelease
```
@ -135,7 +136,11 @@ builder.Services.AddOpenTelemetry()
{
b
.AddAspNetCoreInstrumentation()
.AddPrometheusExporter();
.AddPrometheusExporter()
.AddMeter(
"Microsoft.AspNetCore.Hosting",
"Microsoft.AspNetCore.Server.Kestrel"
);
});
var app = builder.Build();
@ -149,9 +154,7 @@ app.UseOpenTelemetryPrometheusScrapingEndpoint();
Relaunch the app and go to `https://demo.kube.rocks/metrics` to confirm it's working. It should show metrics after each endpoint call; simply try `https://demo.kube.rocks/Articles`.
{{< alert >}}
.NET metrics are currently pretty basic, but the next .NET 8 version will provide far better metrics from internal components allowing some [useful dashboard](https://github.com/JamesNK/aspnetcore-grafana).
{{< /alert >}}
Now you can easily import ASP.NET [specific Grafana dashboards](https://github.com/dotnet/aspire/tree/main/src/Grafana) for visualization.
#### Hide internal endpoints
@ -270,7 +273,7 @@ resource "kubernetes_namespace_v1" "tracing" {
resource "helm_release" "tempo" {
chart = "tempo"
version = "1.5.1"
version = "1.7.2"
repository = "https://grafana.github.io/helm-charts"
name = "tempo"
@ -344,6 +347,7 @@ Use the *Test* button on `https://grafana.kube.rocks/connections/datasources/edi
Let's first add another instrumentation package, specialized for the Npgsql driver used by EF Core, to translate queries to PostgreSQL:
```sh
dotnet add src/KubeRocks.WebApi package OpenTelemetry.Exporter.OpenTelemetryProtocol --prerelease
dotnet add src/KubeRocks.WebApi package Npgsql.OpenTelemetry
```


@ -19,105 +19,109 @@ SonarQube is leading the code metrics industry for a long time, embracing full O
SonarQube has its dedicated Helm chart, which is perfect for us. However, it's the most resource-hungry component of our development stack so far (because it's built with Java? End of troll), so be sure to deploy it on an almost empty node (which should be ok with 3 workers), maybe a dedicated one. In fact, it's the last Helm chart for this tutorial, I promise!
Create dedicated database for SonarQube same as usual.
Create a dedicated database for SonarQube as usual, then we can use Flux for deployment.
{{< highlight host="demo-kube-k3s" file="main.tf" >}}
{{< highlight host="demo-kube-flux" file="clusters/demo/sonarqube/deploy-sonarqube.yaml" >}}
```tf
variable "sonarqube_db_password" {
type = string
sensitive = true
}
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: sonarqube
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: sonarqube
namespace: sonarqube
spec:
interval: 1h0m0s
url: https://SonarSource.github.io/helm-chart-sonarqube
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: sonarqube
namespace: sonarqube
spec:
chart:
spec:
chart: sonarqube
reconcileStrategy: ChartVersion
sourceRef:
kind: HelmRepository
name: sonarqube
version: ">=10.0.0"
interval: 1m
releaseName: sonarqube
targetNamespace: sonarqube
values:
resources:
limits:
cpu: 1000m
memory: 2Gi
requests:
cpu: 500m
memory: 2Gi
prometheusMonitoring:
podMonitor:
enabled: true
namespace: sonarqube
monitoringPasscode: null
monitoringPasscodeSecretName: sonarqube-secret
monitoringPasscodeSecretKey: monitoring-passcode
jdbcOverwrite:
enable: true
jdbcUrl: jdbc:postgresql://postgresql-primary.postgres/sonarqube
jdbcUsername: sonarqube
jdbcSecretName: sonarqube-secret
jdbcSecretPasswordKey: db-password
postgresql:
enabled: false
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: sonarqube
namespace: sonarqube
spec:
entryPoints:
- websecure
routes:
- match: Host(`sonarqube.kube.rocks`)
kind: Rule
services:
- name: sonarqube-sonarqube
port: http
```
{{< /highlight >}}
{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}}
Here are the secrets to adapt to your needs:
```tf
sonarqube_db_password = "xxx"
{{< highlight host="demo-kube-flux" file="clusters/demo/sonarqube/secret-sonarqube.yaml" >}}
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sonarqube-secret
  namespace: sonarqube
type: Opaque
data:
  db-password: YWRtaW4=
  monitoring-passcode: YWRtaW4=
```
{{< /highlight >}}
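Note the `data` values are only base64-encoded, not encrypted (`YWRtaW4=` is `admin`, to be replaced by strong passwords). Before committing, the sealing step could look like this (a sketch, assuming the `kubeseal` CLI and the sealed-secrets controller set up in part 4 under the `flux-system` namespace; adapt the controller name and namespace to your setup):

```sh
# encode a real password for the secret's data fields
echo -n 'my-strong-password' | base64

# encrypt the plain secret into a sealed one, then drop the original
kubeseal --format=yaml \
  --controller-name=sealed-secrets \
  --controller-namespace=flux-system \
  < secret-sonarqube.yaml > sealed-secret-sonarqube.yaml
rm secret-sonarqube.yaml
```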
As seen in part 4 of this guide, seal these secrets with `kubeseal` under `sealed-secret-sonarqube.yaml` and delete the original secret file.

{{< highlight host="demo-kube-k3s" file="sonarqube.tf" >}}
```tf
resource "kubernetes_namespace_v1" "sonarqube" {
  metadata {
    name = "sonarqube"
  }
}

resource "helm_release" "sonarqube" {
  chart      = "sonarqube"
  version    = "10.1.0+628"
  repository = "https://SonarSource.github.io/helm-chart-sonarqube"

  name      = "sonarqube"
  namespace = kubernetes_namespace_v1.sonarqube.metadata[0].name

  set {
    name  = "prometheusMonitoring.podMonitor.enabled"
    value = "true"
  }

  set {
    name  = "postgresql.enabled"
    value = "false"
  }

  set {
    name  = "jdbcOverwrite.enabled"
    value = "true"
  }

  set {
    name  = "jdbcOverwrite.jdbcUrl"
    value = "jdbc:postgresql://postgresql-primary.postgres/sonarqube"
  }

  set {
    name  = "jdbcOverwrite.jdbcUsername"
    value = "sonarqube"
  }

  set {
    name  = "jdbcOverwrite.jdbcPassword"
    value = var.sonarqube_db_password
  }
}

resource "kubernetes_manifest" "sonarqube_ingress" {
  manifest = {
    apiVersion = "traefik.io/v1alpha1"
    kind       = "IngressRoute"
    metadata = {
      name      = "sonarqube"
      namespace = kubernetes_namespace_v1.sonarqube.metadata[0].name
    }
    spec = {
      entryPoints = ["websecure"]
      routes = [
        {
          match = "Host(`sonarqube.${var.domain}`)"
          kind  = "Rule"
          services = [
            {
              name = "sonarqube-sonarqube"
              port = "http"
            }
          ]
        }
      ]
    }
  }
}
```
{{< /highlight >}}
Be sure to disable the PostgreSQL subchart and use our self-hosted cluster with both `postgresql.enabled` and `jdbcOverwrite.enabled`. If needed, set proper `tolerations` and `nodeSelector` to deploy it on a dedicated node.
Inside Helm values, be sure to disable the PostgreSQL subchart and use our self-hosted cluster with both `postgresql.enabled` and `jdbcOverwrite.enabled`. If needed, set proper `tolerations` and `nodeSelector` to deploy it on a dedicated node.
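Once everything is pushed, you can force the reconciliation and follow the rollout instead of waiting for the next Flux sync (a sketch assuming the `flux` CLI is installed; resource names follow the manifests above):

```sh
flux reconcile source git flux-system
flux reconcile helmrelease sonarqube -n sonarqube
kubectl get pods -n sonarqube -w
```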
The installation takes several minutes, so be patient. Once done, you can access SonarQube at `https://sonarqube.kube.rocks` and log in with `admin` / `admin`.
@ -584,6 +588,10 @@ public class ArticlesListTests : TestBase
Ensure all tests pass with `dotnet test`.
{{< alert >}}
You may be interested in [Testcontainers](https://testcontainers.com/) for native support of containers inside code, including parallelism.
{{< /alert >}}
### CI tests & code coverage
Now we need to integrate the tests into our CI pipeline. As we are testing against a real database, create a new `demo_test` database through pgAdmin with basic `test` / `test` credentials.
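If you prefer the command line over pgAdmin, the equivalent SQL is straightforward (a sketch; run it against the primary with a superuser account):

```sql
CREATE ROLE test LOGIN PASSWORD 'test';
CREATE DATABASE demo_test OWNER test;
```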


@ -152,7 +152,7 @@ vus............................: 7 min=7 max=30
vus_max........................: 30 min=30 max=30
```
As we use Prometheus for outputting the result, we can visualize it easily with Grafana. You just have to import [this dashboard](https://grafana.com/grafana/dashboards/18030-official-k6-test-result/):
As we use Prometheus for outputting the result, we can visualize it easily with Grafana. You just have to import [this dashboard](https://grafana.com/dashboards/18030):
[![Grafana](grafana-k6.png)](grafana-k6.png)
@ -880,7 +880,7 @@ After push all CI should build correctly. Then the image policy for auto update:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/images-demo-ui.yaml" >}}
```yml
apiVersion: image.toolkit.fluxcd.io/v1beta1
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
name: demo-ui
@ -891,7 +891,7 @@ spec:
secretRef:
name: dockerconfigjson
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
name: demo-ui
@ -931,7 +931,7 @@ spec:
- name: dockerconfigjson
containers:
- name: front
image: gitea.okami101.io/kuberocks/demo-ui:latest # {"$imagepolicy": "flux-system:image-demo-ui"}
image: gitea.kube.rocks/kuberocks/demo-ui:latest # {"$imagepolicy": "flux-system:image-demo-ui"}
ports:
- containerPort: 80
---



@ -4,8 +4,64 @@ description: "Some boring stuf."
layout: "simple"
---
## What We Collect and Receive
## Introduction
In order for us to provide you the best possible experience on our websites, we need to collect and process certain information. Depending on your use of the Services, that may include:
Welcome to **Okami101**. We are committed to protecting your privacy. This Privacy Policy explains how we handle any personal data that may be collected when you visit our blog site. While we do not collect user information for tracking or marketing purposes, we use certain third-party services to ensure the security, functionality, and analytics of our site. This policy outlines our approach to privacy and how we ensure compliance with the General Data Protection Regulation (GDPR).
* **Usage data** — when you visit our site, we will store: the website from which you visited us from, the parts of our site you visit, the date and duration of your visit, your anonymized IP address, information from the device (device type, operating system, screen resolution, language, country you are located in, and web browser type) you used during your visit, and more. We process this usage data in Umami for statistical purposes, to improve our site and to recognize and stop any misuse.
## Data Collection
### Personal Data
We do not collect, store, or process any personal data from our users for marketing or tracking purposes. However, we do process user IP addresses strictly for security purposes and use anonymized analytics data.
### IP Addresses
We use IP addresses solely for the purpose of preventing attacks and ensuring the security of our site. This is done through CrowdSec, a participative security solution that offers crowdsourced protection against malicious IPs. Your IP address may be processed to identify and mitigate potential security threats.
### Cookies
Our blog does not use cookies to track or identify visitors for our purposes. However, Cloudflare may use cookies to deliver its services effectively. These cookies are essential for security purposes and to improve site performance. Additionally, Umami, our analytics provider, does not use cookies and ensures user privacy.
### Log Files
We do not maintain log files of visitors to our site. However, Cloudflare and CrowdSec may collect log data for security and operational purposes, including IP addresses, browser types, and other technical information.
## Third-Party Services
### Cloudflare
We use Cloudflare for web security and performance optimization. Cloudflare may collect and process certain data as part of its service. This data processing is governed by Cloudflare's Privacy Policy, which can be found [here](https://www.cloudflare.com/privacypolicy/).
### CrowdSec
We use CrowdSec to enhance our site's security by protecting against malicious IP addresses. CrowdSec processes IP addresses to identify and mitigate security threats. The data handling practices of CrowdSec are governed by CrowdSec's Privacy Policy, which can be found [here](https://crowdsec.net/privacy-policy).
### Umami
We use Umami, a fully GDPR-compliant Google Analytics alternative, to gather anonymized analytics data about our site's usage. Umami does not use cookies or collect personally identifiable information. The data collected by Umami helps us understand site traffic and usage patterns without compromising user privacy. For more information, you can refer to Umami's privacy policy [here](https://umami.is/docs/).
### giscus
We use giscus, a GitHub-based commenting system, to manage comments on our blog posts. When you post a comment using giscus, you are interacting with GitHub's platform. This means your comment data, including your GitHub username and any other information you choose to share, is processed by GitHub. The data handling practices for giscus are governed by GitHub's Privacy Policy, which can be found [here](https://docs.github.com/en/site-policy/privacy-policies/github-privacy-statement).
## Third-Party Links
Our blog may contain links to other websites. Please be aware that we are not responsible for the privacy practices of other sites. We encourage you to read the privacy statements of each website that collects personal information.
## Data Protection Rights
Since we only process personal data (IP addresses) for security purposes and use anonymized analytics, your data protection rights are limited in this context. However, for any concerns or questions about data processed by Cloudflare, CrowdSec, giscus (GitHub), or Umami, please refer to their respective privacy policies.
## Contact Us
If you have any questions or concerns about our privacy practices or this policy, please contact us at <adrien@okami101.io>.
## Changes to This Privacy Policy
We may update our Privacy Policy from time to time. Any changes will be posted on this page with an updated effective date. We encourage you to review this policy periodically for any changes.
Effective Date: **19/05/2024**
---
By using our blog, you agree to the terms of this Privacy Policy. Thank you for visiting **Okami101**!


@ -70,11 +70,8 @@
title: Vuetify Admin
date: 11/2020
repo: okami101/vuetify-admin
demo: https://va-demo.okami101.io/
docs: https://www.okami101.io/vuetify-admin
- name: laravel-rad-stack
title: Laravel RAD Stack
date: 10/2021
repo: adr1enbe4udou1n/laravel-rad-stack
demo: https://laravel-rad-stack.okami101.io/


@ -74,12 +74,8 @@
{{ end }}
<div class="flex items-center justify-between">
<div class="flex items-center gap-4">
<img src="/kube.png" width="30" height="30" alt="Kubernetes"
title="Run on K3s over Hetzner Cloud" />
<a href="https://concourse.okami101.io/teams/main/pipelines/okami-blog" target="_blank">
<img src="https://concourse.okami101.io/api/v1/teams/main/pipelines/okami-blog/badge" />
</a>
<img src="/talos-logo.svg" width="30" height="30" alt="Talos Linux"
title="Run on Talos Linux over Hetzner Cloud" />
</div>
<div class="hidden lg:block">
{{/* Copyright */}}
@ -90,7 +86,7 @@
{{- else }}
&copy;
{{ now.Format "2006" }}
{{ .Site.Author.name | markdownify | emojify }}
{{ .Site.Params.Author.name | markdownify | emojify }}
{{- end }}
</p>
{{ end }}


@ -6,9 +6,9 @@
>
<header class="flex flex-col items-center mb-3">
<h1 class="text-4xl font-extrabold">
{{ .Site.Author.name | default .Site.Title }}
{{ .Site.Params.Author.name | default .Site.Title }}
</h1>
{{ with .Site.Author.headline }}
{{ with .Site.Params.Author.headline }}
<h2 class="text-xl text-neutral-500 dark:text-neutral-400">
{{ . | markdownify | emojify }}
</h2>


@ -0,0 +1,58 @@
<div class="chart">
  {{ $id := delimit (shuffle (seq 1 9)) "" }}
  <canvas id="{{ $id }}" height="350"></canvas>
  <script type="text/javascript">
    window.addEventListener("DOMContentLoaded", (event) => {
      const ctx = document.getElementById("{{ $id }}");
      const chart = new Chart(ctx, {
        {{ if eq (.Get "type") "timeseries" }}
        type: 'line',
        options: {
          maintainAspectRatio: false,
          plugins: {
            title: {
              display: true,
              text: {{ .Get "title" }},
            },
          },
          scales: {
            x: {
              ticks: {
                autoSkip: true,
                callback: function(val, index) {
                  return this.getLabelForValue(val) + 's'
                },
              }
            },
            y: {
              {{ if .Get "stacked" }}
              stacked: {{ .Get "stacked" }},
              {{ end }}
              beginAtZero: true,
              {{ if .Get "max" }}
              suggestedMax: {{ .Get "max" }},
              {{ end }}
            }
          },
        },
        data: {
          labels: [
            {{ if .Get "step" }}
            {{ range seq 0 (.Get "step") 90 }}
            {{ . }},
            {{ end }}
            {{ else }}
            {{ range seq 0 90 }}
            {{ . }},
            {{ end }}
            {{ end }}
          ],
          datasets: {{ .Inner | safeJS }}
        }
        {{ else }}
        {{ .Inner | safeJS }}
        {{ end }}
      });
    });
  </script>
</div>
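For illustration, a hypothetical call of this shortcode from a post could look like this (parameter names taken from the template above; the dataset is shortened for brevity, while `labels` actually generates one point per second up to 90):

```md
{{< chart type="timeseries" title="Requests per second" max="500" >}}
[
  { label: "sample-run", data: [0, 120, 240, 360, 480] }
]
{{< /chart >}}
```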


@ -49,12 +49,7 @@
{{ readFile (print "data/works/" .name ".md") | markdownify }}
</div>
<div class="flex justify-center gap-4">
{{ partial "button.html" (dict "text" (partial "icon.html" "github") "href" (print
"https://github.com/" .repo) "color" .color) }}
{{ if .ci }}
{{ partial "button.html" (dict "text" (partial "icon.html" "bug") "href" (print
"https://concourse.okami101.io/teams/main/pipelines/" .ci) "color" .color) }}
{{ end }}
{{ partial "button.html" (dict "text" (partial "icon.html" "github") "href" (print "https://github.com/" .repo) "color" .color) }}
{{ if .demo }}
{{ partial "button.html" (dict "text" "Demo" "href" .demo "color" .color) }}
{{ end }}

nginx/default.conf Normal file

@ -0,0 +1,16 @@
server {
    listen 80;
    listen [::]:80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;

        # $new_uri comes from the map declared in redirects.conf,
        # loaded at http level alongside this file
        if ($new_uri != "") {
            rewrite ^(.*)$ $new_uri permanent;
        }
    }

    error_page 404 /404.html;
}

nginx/redirects.conf Normal file

@ -0,0 +1,3 @@
map $request_uri $new_uri {
    /2023/12/a-2024-benchmark-of-main-web-apis-frameworks/ /2023/12/a-2024-benchmark-of-main-web-api-frameworks/;
}
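Since `map` is only valid at the `http` level, both files must land in `conf.d`, which the stock `nginx.conf` includes at that level. A minimal sketch of wiring them into the official image (assuming the generated site sits in `public/`):

```Dockerfile
FROM nginx:alpine
# the map must be loaded before the server block that reads $new_uri
COPY nginx/redirects.conf /etc/nginx/conf.d/redirects.conf
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY public /usr/share/nginx/html
```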

Binary file not shown.


static/talos-logo.svg Normal file

@ -0,0 +1,3 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- Generator: Adobe Illustrator 23.0.3, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 203.74 226.05"><defs><style>.cls-1{fill:url(#linear-gradient);}.cls-2{fill:url(#linear-gradient-2);}.cls-3{fill:url(#linear-gradient-3);}.cls-4{fill:url(#linear-gradient-4);}.cls-5{fill:url(#linear-gradient-5);}</style><linearGradient id="linear-gradient" x1="101.85" y1="-15.19" x2="101.85" y2="237.81" gradientUnits="userSpaceOnUse"><stop offset="0" stop-color="#ffd200"/><stop offset="0.06" stop-color="#ffb500"/><stop offset="0.14" stop-color="#ff8c00"/><stop offset="0.21" stop-color="#ff7300"/><stop offset="0.26" stop-color="#ff6a00"/><stop offset="0.33" stop-color="#fc4f0e"/><stop offset="0.43" stop-color="#f92f1e"/><stop offset="0.51" stop-color="#f81b27"/><stop offset="0.57" stop-color="#f7142b"/><stop offset="0.68" stop-color="#df162e"/><stop offset="0.79" stop-color="#af1a38"/><stop offset="1" stop-color="#4b214c"/></linearGradient><linearGradient id="linear-gradient-2" x1="24.84" y1="-15.19" x2="24.84" y2="237.81" xlink:href="#linear-gradient"/><linearGradient id="linear-gradient-3" x1="178.9" y1="-15.19" x2="178.9" y2="237.81" xlink:href="#linear-gradient"/><linearGradient id="linear-gradient-4" x1="145.06" y1="-15.19" x2="145.06" y2="237.81" xlink:href="#linear-gradient"/><linearGradient id="linear-gradient-5" x1="58.64" y1="-15.19" x2="58.64" y2="237.81" xlink:href="#linear-gradient"/></defs><g id="Layer_2" data-name="Layer 2"><g id="Layer_1-2" data-name="Layer 1"><path class="cls-1" d="M101.89,226.05c2.85,0,5.67-.15,8.46-.35V.35c-2.8-.21-5.62-.35-8.48-.35s-5.7.14-8.52.35V225.69c2.81.21,5.64.35,8.5.36Z"/><path class="cls-2" d="M11.56,50.9,9.12,48.47A112.82,112.82,0,0,0,.2,63.61c29.42,29.89,32.52,44.31,32.48,49.14C32.57,125,17.58,144.21,0,162a113.69,113.69,0,0,0,8.84,15.15c1-1,1.95-1.92,2.92-2.9,25.37-25.54,37.77-45.61,37.92-61.38S37.36,77,11.56,50.9Z"/><path class="cls-3" 
d="M192,174.29l2.92,2.9A113.69,113.69,0,0,0,203.74,162c-17.57-17.83-32.56-37.09-32.68-49.29-.11-11.9,14.79-31.15,32.46-49.18a112.88,112.88,0,0,0-8.9-15.1l-2.44,2.43c-25.8,26.05-38.27,46.34-38.12,62S166.61,148.75,192,174.29Z"/><path class="cls-4" d="M140.68,112.83c0-22,9.81-58.58,24.92-93.15A113,113,0,0,0,150.45,11c-16.54,37.27-26.78,76.91-26.78,101.87,0,24.15,11.09,64.23,27.93,101.7a113,113,0,0,0,14.84-8.77C150.85,170.73,140.68,134.07,140.68,112.83Z"/><path class="cls-5" d="M80,112.83C80,87.74,69.35,47.88,53,11.07a112.76,112.76,0,0,0-14.93,8.64C53.21,54.26,63,90.85,63,112.83c0,21.23-10.17,57.88-25.76,92.91a113.66,113.66,0,0,0,14.84,8.77C68.94,177.05,80,137,80,112.83Z"/></g></g></svg>



@ -1 +0,0 @@
1.0.90