proofreading

This commit is contained in:
2023-08-30 18:34:56 +02:00
parent 5231f69c57
commit d11c237c68
2 changed files with 37 additions and 29 deletions

View File

@ -123,10 +123,10 @@ Filesystem Size Used Avail Use% Mounted on
/dev/sdb 20G 24K 19,5G 1% /mnt/HC_Volume_XXXXXXXX
```
The volume is of course automatically mounted on each node reboot, it's done via `fstab`.
The volume is of course automatically mounted on each node reboot via `/etc/fstab`, typically with an entry like the one shown below. Keep note of the `/mnt/HC_Volume_XXXXXXXX` path on both storage nodes, as we'll reuse it later for the Longhorn configuration.
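If you want to double-check the mount, a quick look at `/etc/fstab` on a storage node should show something like this (the exact device ID and options are illustrative and may differ on your setup):

```sh
# check the entry Hetzner added for the attached volume
grep HC_Volume /etc/fstab
# expected format (illustrative):
# /dev/disk/by-id/scsi-0HC_Volume_XXXXXXXX /mnt/HC_Volume_XXXXXXXX ext4 discard,nofail,defaults 0 0
```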
{{< alert >}}
Note as if you set volume in same time as node pool creation, Hetzner doesn't seem to automatically mount the volume. So it's preferable to create the node pool first, then add the volume as soon as the node in ready state. You can always delete and recreate volume by commenting then uncommenting `volume_size` variable, which will force a remount properly.
Note that if you create the volume at the same time as the node pool, Hetzner doesn't seem to mount the volume automatically. So it's preferable to create the node pool first, then add the volume as soon as the node is in ready state. You can always detach / re-attach volumes manually through the UI, which will force a proper remount.
{{</ alert >}}
### Longhorn variables
@ -296,7 +296,7 @@ Monitoring will have dedicated post later.
### Ingress
Now we only have to expose Longhorn UI to the world. We'll use `IngressRoute` provided by Traefik.
Now we only have to expose the Longhorn UI. We'll use the `IngressRoute` CRD provided by Traefik.
{{< highlight host="demo-kube-k3s" file="longhorn.tf" >}}
@ -347,24 +347,26 @@ Of course, you can skip this ingress and directly use `kpf svc/longhorn-frontend
### Nodes and volumes configuration
Longhorn is now installed and accessible, but we still have to configure it. Let's disable volume scheduling on worker nodes, as we want to use only storage nodes for it. All can be done via Longhorn UI but let's do more automatable way.
Longhorn is now installed and accessible, but we still have to configure it. Let's disable volume scheduling on worker nodes, as we want to use only storage nodes for it. All of this can be done via the Longhorn UI, but let's do it the CLI way.
```sh
k patch nodes.longhorn.io kube-worker-01 kube-worker-02 kube-worker-03 -n longhorn-system --type=merge --patch '{"spec": {"allowScheduling": false}}'
```
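A quick way to confirm the patch took effect is to list the Longhorn nodes with their scheduling flag (same `kg` alias as everywhere else in this guide):

```sh
kg nodes.longhorn.io -n longhorn-system -o custom-columns='NAME:.metadata.name,SCHEDULING:.spec.allowScheduling'
```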
By default, Longhorn use local disk for storage, which is great for high IOPS critical workloads as databases, but we want also use our expandable dedicated block volume as default for large dataset.
By default, Longhorn uses the local disk for storage, which is great for high IOPS critical workloads such as databases, but we also want to use our expandable dedicated block volume as the default for larger datasets.
Type these commands for both storage nodes, or use the Longhorn UI from the **Node** tab:
```sh
# patch main disk as fast storage, set default-disk-xxx accordingly
# get the default-disk-xxx identifier
kg nodes.longhorn.io kube-storage-0x -n longhorn-system -o yaml
# patch main default-disk-xxx as fast storage
k patch nodes.longhorn.io kube-storage-0x -n longhorn-system --type=merge --patch '{"spec": {"disks": {"default-disk-xxx": {"tags": ["fast"]}}}}'
# add a new schedulable disk, set HC_Volume_XXXXXXXX accordingly to mounted volume
# add a new schedulable disk, pointing to the mounted HC_Volume_XXXXXXXX path
k patch nodes.longhorn.io kube-storage-0x -n longhorn-system --type=merge --patch '{"spec": {"disks": {"disk-mnt": {"allowScheduling": true, "evictionRequested": false, "path": "/mnt/HC_Volume_XXXXXXXX/", "storageReserved": 0}}}}'
```
Now all that's left is to create a dedicated storage class for fast local volumes. We'll use it for IOPS critical statefulset workloads like PostgreSQL and Redis. Let's apply nest `StorageClass` configuration and check it with `kg sc`:
Now all that's left is to create a dedicated storage class for fast local volumes. We'll use it for IOPS critical statefulset workloads like PostgreSQL and Redis. Let's apply the next `StorageClass` configuration and check it with `kg sc`:
{{< highlight host="demo-kube-k3s" file="longhorn.tf" >}}
@ -391,13 +393,17 @@ resource "kubernetes_storage_class_v1" "longhorn_fast" {
{{< /highlight >}}
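As a quick sanity check, you can create a throwaway PVC against the new class. The class name `longhorn-fast` below is an assumption matching the Terraform resource name above, so adjust it if yours differs:

```sh
# request a small volume on the fast storage class, verify it binds, then clean up
cat <<EOF | k apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-fast
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn-fast
  resources:
    requests:
      storage: 1Gi
EOF
kg pvc test-fast -n default
k delete pvc test-fast -n default
```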
Longhorn is now ready for block and fast local volumes creation.
Longhorn is now ready for volume creation on both the dedicated block volume and the fast local disks.
{{< alert >}}
If you need automatically encrypted volumes, which is highly recommended for critical data, add `encrypted: "true"` under the `parameters` section. You'll need to [set up a proper encryption](https://longhorn.io/docs/latest/advanced-resources/security/volume-encryption/) passphrase inside a k8s `Secret`. On top of that, backups will be encrypted as well, so you don't have to worry about that part.
{{< /alert >}}
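As a rough sketch of what the linked documentation describes: the global passphrase lives in a `Secret` inside `longhorn-system` (the name `longhorn-crypto` here is arbitrary, and the `CRYPTO_KEY_*` keys should be double-checked against your Longhorn version), and the encrypted `StorageClass` must then reference that secret through its `csi.storage.k8s.io/*-secret-name` parameters:

```sh
# global encryption passphrase consumed by Longhorn (values are placeholders)
k -n longhorn-system create secret generic longhorn-crypto \
  --from-literal=CRYPTO_KEY_PROVIDER=secret \
  --from-literal=CRYPTO_KEY_VALUE='use-a-strong-passphrase-here'
```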
[![Longhorn UI](longhorn-ui.png)](longhorn-ui.png)
## PostgreSQL with replication
Now it's time to set up some critical statefulset persistence workloads, and firstly a PostgreSQL cluster with replication.
Now it's time to set up some critical persistent statefulset workloads. Let's begin with a PostgreSQL cluster with replication.
### PostgreSQL variables
@ -435,11 +441,11 @@ pgsql_admin_password = "xxx"
pgsql_replication_password = "xxx"
```
{{< /highlight >}}}
{{< /highlight >}}
### PostgreSQL installation
Before continue it's important to identify which storage node will serve the primary database, and which one will serve the replica.
Before continuing, it's important to identify which storage node will serve the primary database and which one will serve the replica, by adding these labels:
```sh
k label nodes kube-storage-01 node-role.kubernetes.io/primary=true
@ -585,11 +591,11 @@ postgresql-primary-0 2/2 Running 0 151m 10.42.5.253 oka
postgresql-read-0 2/2 Running 0 152m 10.42.2.216 okami-storage-02 <none> <none>
```
And that it, we have replicated PostgreSQL cluster ready to use ! Go to longhorn UI and be sure that 2 volumes are created on fast disk under **Volume** menu.
And that's it, we have a replicated PostgreSQL cluster ready to use! Go to the Longhorn UI and check under the **Volume** menu that 2 volumes were created on the fast disk, or do the same check from the CLI as shown below.
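The Longhorn objects are plain CRDs, so the same check works without the UI:

```sh
kg volumes.longhorn.io -n longhorn-system
```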
## Redis cluster
After PostgreSQL, set up a redis cluster is a piece of cake.
After PostgreSQL, setting up a master/slave Redis is a piece of cake. You may prefer a [redis cluster](https://redis.io/docs/management/scaling/) by using the [Bitnami redis cluster](https://artifacthub.io/packages/helm/bitnami/redis-cluster) chart, but it [doesn't work](https://github.com/bitnami/charts/issues/12901) at the time of writing this guide.
### Redis variables
@ -610,7 +616,7 @@ variable "redis_password" {
redis_password = "xxx"
```
{{< /highlight >}}}
{{< /highlight >}}
### Redis installation
@ -731,7 +737,7 @@ And that's it, job done ! Always check that Redis pods are correctly running on
## Backups
Final essential steps is to set up s3 backup for volumes. We already configured S3 backup on [longhorn variables step](#longhorn-variables), so we only have to configure backup strategy. We can use UI for that, but don't we are GitOps ? So let's do it with Terraform.
The final essential step is to set up S3 backups for volumes. We already configured the S3 backup location in the [Longhorn variables step](#longhorn-variables), so we only have to configure the backup strategy. We could use the UI for that, but aren't we GitOps? So let's do it with Terraform.
{{< highlight host="demo-kube-k3s" file="longhorn.tf" >}}
@ -789,15 +795,15 @@ Bam it's done ! After apply, check trough UI under **Recurring Job** menu if bac
Thanks to GitOps, the default backup strategy described by `job_backups` is set in stone and self-explanatory:
* Daily backup until 7 days
* Weekly backup until 4 weeks
* Monthly backup until 3 months
* Daily backups kept for **7 days**
* Weekly backups kept for **4 weeks**
* Monthly backups kept for **3 months**
Configure this variable according to your needs.
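If you want to confirm from the CLI that the jobs were created after `terraform apply`, they're exposed as Longhorn `RecurringJob` CRDs (column paths below follow that CRD's spec fields):

```sh
kg recurringjobs.longhorn.io -n longhorn-system -o custom-columns='NAME:.metadata.name,TASK:.spec.task,CRON:.spec.cron,RETAIN:.spec.retain'
```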
### DB dumps
If you need some regular dump of your database without requiring Kubernetes `CronJob`, you can simply use following crontab line on control plane node:
If you need regular dumps of your database without requiring a dedicated Kubernetes `CronJob`, you can simply use the following crontab line on the control plane node:
```sh
0 */8 * * * root /usr/local/bin/k3s kubectl exec sts/postgresql-primary -n postgres -- /bin/sh -c 'PGUSER="okami" PGPASSWORD="$POSTGRES_PASSWORD" pg_dumpall -c | gzip > /bitnami/postgresql/dump_$(date "+\%H")h.sql.gz'

View File

@ -14,24 +14,24 @@ This is the **Part IV** of more global topic tutorial. [Back to guide summary]({
## Flux
In GitOps world, 2 tools are in lead for CD in k8s: Flux and ArgoCD. As Flux is CLI first and more lightweight, it's my personal goto. You may ask why don't continue with actual k8s Terraform project ?
In the GitOps world, 2 tools are leading for CD in k8s: **Flux** and **ArgoCD**. As Flux is CLI first and more lightweight, it's my personal goto. You may wonder: why not just continue with the current k3s Terraform project?
You've already noticed that by adding more and more Helm dependencies to Terraform, the plan time increases, as does the state file size. So it's not very scalable.
It's the perfect moment to draw a clear line between **IaC** and **CD**. IaC is for infrastructure, CD is for application. So to resume our GitOps stack:
It's the perfect moment to draw a clear line between **IaC** (Infrastructure as Code) and **CD** (Continuous Delivery). IaC is for infrastructure, CD is for applications. So to summarize our GitOps stack:
1. IaC for Hcloud cluster initialization (*the basement*): **Terraform**
2. IaC for cluster configuration (*the walls*): **Helm** through **Terraform**
3. CD for application deployment (*the furniture*): **Flux**
2. IaC for Kubernetes configuration (*the walls*): **Helm** through **Terraform**
3. CD for any application deployments (*the furniture*): **Flux**
{{< alert >}}
You can probably eliminate with some efforts the 2nd stack by using both `Kube-Hetzner`, which take care of ingress and storage, and using Flux directly for the remaining helms like database cluster. Or maybe you can also add custom helms to `Kube-Hetzner` ?
With some effort, you can probably eliminate the 2nd layer by using `Kube-Hetzner`, which takes care of ingress and storage, and by using Flux directly for the remaining Helm charts like the database clusters. Or maybe you can also add custom Helm charts to `Kube-Hetzner`.
But as it increases complexity and dependency problems, I personally prefer to keep a clear separation between the middle layer and the rest, as it's more straightforward for me. Just a matter of taste 🥮
{{< /alert >}}
### Flux bootstrap
Create a dedicated Git repository for Flux somewhere, I'm using Github, which is just a matter of:
Create a dedicated Git repository for Flux somewhere. I'm using GitHub, which with [its CLI](https://cli.github.com/) is just a matter of:
```sh
gh repo create demo-kube-flux --private --add-readme
@ -324,6 +324,8 @@ data:
{{< /highlight >}}
Now be sure to encrypt it with `kubeseal` and remove the original file:
```sh
cat clusters/demo/postgres/secret-pgadmin.yaml | kubeseal --format=yaml --cert=pub-sealed-secrets.pem > clusters/demo/postgres/sealed-secret-pgadmin.yaml
rm clusters/demo/postgres/secret-pgadmin.yaml
@ -337,10 +339,10 @@ You may use [VSCode extension](https://github.com/codecontemplator/vscode-kubese
Push it, wait a minute, then go to `pgadmin.kube.rocks` and log in with the chosen credentials. Now try to register a new server with `postgresql-primary.postgres` as hostname, and fill the rest with the PostgreSQL credentials from the previous installation. It should work!
{{< alert >}}
If you won't wait, do `flux reconcile kustomization flux-system --with-source` (require `flux-cli`). It also allows easy debugging by printing any syntax error in your manifests. It applies for every push from the flux repo.
If you don't want to wait after each code push, do `flux reconcile kustomization flux-system --with-source` (requires `flux-cli`). It also allows easy debugging by printing any syntax errors in your manifests.
{{< /alert >}}
You can test the read replica too by register a new server using the hostname `postgresql-read.postgres`. Try to do some update on primary and check that it's replicated on read replica. Any modification on replicas should be rejected as well.
You can test the read replica too by registering a new server using the hostname `postgresql-read.postgres`. Try to do some updates on the primary and check that they're replicated on the read replica. Any modification on the replica should be rejected, as it runs in read-only transaction mode.
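For a quick check without pgAdmin, you can also ask each instance whether it's in recovery mode, reusing the same in-pod credentials pattern as the dump crontab above:

```sh
# the primary should answer 'f', the read replica 't'
k exec sts/postgresql-primary -n postgres -- /bin/sh -c 'PGUSER="okami" PGPASSWORD="$POSTGRES_PASSWORD" psql -d postgres -tAc "SELECT pg_is_in_recovery();"'
k exec sts/postgresql-read -n postgres -- /bin/sh -c 'PGUSER="okami" PGPASSWORD="$POSTGRES_PASSWORD" psql -d postgres -tAc "SELECT pg_is_in_recovery();"'
```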
Now it's time to deploy some useful apps.
@ -505,7 +507,7 @@ data:
{{< /highlight >}}
Before continue go to pgAdmin and create `n8n` DB and set `n8n` user with proper credentials as owner.
While writing these secrets, also create the `n8n` DB and the `n8n` user with proper credentials as its owner, either through pgAdmin or directly with `psql` as sketched below.
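A possible one-liner for that, assuming the `okami` admin user from the PostgreSQL installation has the rights to create roles and databases (replace `xxx` by the password you put in the n8n secret):

```sh
k exec sts/postgresql-primary -n postgres -- /bin/sh -c 'PGUSER="okami" PGPASSWORD="$POSTGRES_PASSWORD" psql -d postgres -c "CREATE ROLE n8n LOGIN PASSWORD '\''xxx'\''" -c "CREATE DATABASE n8n OWNER n8n"'
```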
Then don't forget to seal the secrets and remove the original files, the same way as for pgAdmin. Once pushed, n8n should deploy, automatically migrate the DB, and soon after `n8n.kube.rocks` should be available, allowing you to create your 1st account.