init k8s guide
@ -105,9 +105,9 @@ For this guide, I'll consider using the starter kit as it's more suited for tuto

### 1st Terraform project

Let's initialize a basic cluster setup. Create an empty folder (I name it `demo-kube-hcloud` here) for our Terraform project, and create the following `kube.tf` file:

{{< highlight host="demo-kube-hcloud" file="kube.tf" >}}

```tf
terraform {
@ -233,8 +233,6 @@ I'm using a local backend for simplicity, but for teams sharing, you may use mor

Treat the Terraform state very carefully and keep it in a secured place, as it's the only source of truth for your cluster. If it leaks, consider the cluster as **compromised and activate your DRP (disaster recovery plan)**. The first vital action is at least to renew the Hetzner Cloud and S3 tokens immediately.

{{< alert >}}

In any case, consider any leak of a writeable Hetzner Cloud token as a **Game Over**. Even if the attacker has no direct access to existing servers, mainly because the cluster SSH private key as well as the kube config are not stored in the Terraform state, they still have full control of the infrastructure and can do the following:

1. Create a new server in the same cluster network with its own SSH access.
@ -242,8 +240,6 @@ At any case, consider any leak of writeable Hetzner Cloud token as a **Game Over
3. Sniff any data from the cluster that reaches the compromised server, including secrets, thanks to the new agent.
4. Get access to remote S3 backups.

{{</ alert >}}

In order to mitigate any risk of critical data leak, use data encryption whenever possible. K3s offers it [natively for etcd](https://docs.k3s.io/security/secrets-encryption). Longhorn (covered later) also offers it [natively for volumes](https://longhorn.io/docs/latest/advanced-resources/security/volume-encryption/) (including backups).
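
As a quick sanity check, you can verify on a controller node that K3s secrets encryption is active (a minimal sketch, assuming SSH access to the node):

```sh
# On a controller node: show the current state of K3s secrets encryption
sudo k3s secrets-encrypt status
```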

{{</ tab >}}
@ -365,7 +361,7 @@ As input variables, you have the choice to use environment variables or separate
{{< tabs >}}
{{< tab tabName="terraform.tfvars file" >}}

{{< highlight host="demo-kube-hcloud" file="terraform.tfvars" >}}

```tf
hcloud_token = "xxx"
@ -427,6 +423,17 @@ Host kube-worker-01
  ProxyJump kube
```

#### Git-able project

As we are GitOps, you'll need to version the Terraform project. With a proper gitignore generator tool like [gitignore.io](https://docs.gitignore.io/install/command-line), it's just a matter of:

```sh
git init
gig terraform
```

And the project is ready to be pushed to any Git repository.

#### Cluster access

Merge the above SSH config into your `~/.ssh/config` file, then test the connection with `ssh kube`.
@ -446,8 +453,7 @@ It's time to log in to K3s and check the cluster status from local.
From the controller, copy `/etc/rancher/k3s/k3s.yaml` to your machine located outside the cluster as `~/.kube/config`. Then replace the value of the `server` field with the IP or name of your K3s server. `kubectl` can now manage your K3s cluster.
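
A minimal sketch of that copy, assuming the `kube` SSH alias from the config above and `cp.kube.rocks` as the reachable controller address (adapt both to your setup; reading `k3s.yaml` may require root on the node):

```sh
# Fetch the kubeconfig from the controller
scp kube:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# Point the server field to the controller instead of 127.0.0.1
sed -i 's/127.0.0.1/cp.kube.rocks/' ~/.kube/config
chmod 600 ~/.kube/config
```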

{{< alert >}}
If `~/.kube/config` already exists, you have to properly [merge the new config inside it](https://able8.medium.com/how-to-merge-multiple-kubeconfig-files-into-one-36fc987c2e2f). You can use `kubectl config view --flatten` for that.
Then use `kubectl config use-context kube` to switch to your new cluster.
{{</ alert >}}
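
A possible way to do that merge, assuming the new kubeconfig was first saved as `~/.kube/k3s.yaml` (the file name is just an example):

```sh
# Merge both configs into a single flattened file, then switch context
KUBECONFIG=~/.kube/config:~/.kube/k3s.yaml kubectl config view --flatten > /tmp/config.merged
mv /tmp/config.merged ~/.kube/config
kubectl config use-context kube
```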
@ -459,14 +465,14 @@ kube-controller-01 Ready control-plane,etcd,master 153m v1.27.4+k3s1
kube-worker-01 Ready <none> 152m v1.27.4+k3s1
```

#### Kubectl Aliases

As we'll use `kubectl` a lot, I highly encourage you to use aliases for better productivity:

* <https://github.com/ahmetb/kubectl-aliases> for bash
* <https://github.com/shanoor/kubectl-aliases-powershell> for PowerShell

After the install, the equivalent of `kubectl get nodes` is `kgno`.
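
A few more examples of what these aliases expand to (illustrative only):

```sh
kgno                  # kubectl get nodes
kgpo -n kube-system   # kubectl get pods -n kube-system
kaf deployment.yaml   # kubectl apply -f deployment.yaml
```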

#### Test adding new workers

@ -16,7 +16,7 @@ This is the **Part III** of more global topic tutorial. [Back to first part]({{<

For this part, let's create a new Terraform project dedicated to Kubernetes infrastructure provisioning. Start from scratch with a new empty folder and the following `main.tf` file, then run `terraform init`.

{{< highlight host="demo-kube-k3s" file="main.tf" >}}

```tf
terraform {
@ -41,7 +41,7 @@ kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-oper

When the OS kernel is upgraded, the system needs to be rebooted to apply it. This is a critical operation for a Kubernetes cluster as it can cause downtime. To avoid this, we'll use [kured](https://github.com/kubereboot/kured), which takes care of cordoning & draining nodes before rebooting them one by one.

{{< highlight host="demo-kube-k3s" file="kured.tf" >}}

```tf
resource "helm_release" "kubereboot" {
@ -101,7 +101,7 @@ kg deploy -n system-upgrade

Next apply the following upgrade plans for servers and agents.

{{< highlight host="demo-kube-k3s" file="plans.tf" >}}

```tf
resource "kubernetes_manifest" "server_plan" {
@ -191,7 +191,7 @@ Now it's time to expose our cluster to the outside world. We'll use Traefik as i

Apply the following file:

{{< highlight host="demo-kube-k3s" file="traefik.tf" >}}

```tf
locals {
@ -267,7 +267,7 @@ traefik LoadBalancer 10.43.134.216 10.0.0.2,10.0.1.1,10.0.1.2,10.0.1.3 8

External IPs are the private IPs of all nodes. In order to access them, we only need to put a load balancer in front of the workers. It's time to get back to our 1st Terraform project.

{{< highlight host="demo-kube-hcloud" file="kube.tf" >}}

```tf
//...
@ -305,8 +305,7 @@ resource "hcloud_load_balancer_service" "https_service" {
Use `hcloud load-balancer-type list` to get the list of available load balancer types.

{{< alert >}}
Don't forget to add a `hcloud_load_balancer_service` resource for each service (aka port) you want to serve.
We use the `tcp` protocol as Traefik will handle SSL termination. Set `proxyprotocol` to true to allow Traefik to get the real IP of clients.
{{</ alert >}}
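
Once applied, you can check the resulting load balancer and its services from the CLI (assuming the `hcloud` CLI is configured for your project; replace the name with your own):

```sh
hcloud load-balancer list
hcloud load-balancer describe <your-lb-name>
```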
@ -322,7 +321,7 @@ ka https://github.com/cert-manager/cert-manager/releases/download/v1.12.3/cert-m

Then apply the following Terraform code.

{{< highlight host="demo-kube-k3s" file="cert-manager.tf" >}}

```tf
resource "kubernetes_namespace_v1" "cert_manager" {
@ -365,7 +364,7 @@ You may use a DNS provider that is supported by cert-manager. Check the [list of

First prepare the variables and set them accordingly:

{{< highlight host="demo-kube-k3s" file="main.tf" >}}

```tf
variable "domain" {
@ -384,7 +383,7 @@ variable "dns_api_token" {

{{</ highlight >}}

{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}}

```tf
acme_email = "me@kube.rocks"
@ -396,7 +395,7 @@ dns_api_token = "xxx"

Then we need to create a default `Certificate` k8s resource associated with a valid `ClusterIssuer` resource that will manage its generation. Apply the following Terraform code for issuing the new wildcard certificate for your domain.

{{< highlight host="demo-kube-k3s" file="certificates.tf" >}}

```tf
resource "kubernetes_secret_v1" "cloudflare_api_token" {
@ -471,10 +470,8 @@ resource "kubernetes_manifest" "tls_certificate" {
{{</ highlight >}}

{{< alert >}}

You can set `acme.privateKeySecretRef.name` to **letsencrypt-staging** for testing purposes and not waste the limited LE quota.
Set `privateKey.rotationPolicy` to **Always** to ensure that the certificate will be [renewed automatically](https://cert-manager.io/docs/usage/certificate/) 30 days before it expires, without downtime.

{{</ alert >}}

In the meantime, go to your DNS provider and add a new `*.kube.rocks` entry pointing to the load balancer IP.
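
After a few minutes, you can verify both the DNS entry and the certificate issuance (resource names below are assumptions, adapt them to what you actually applied):

```sh
# DNS propagation check for the wildcard entry
dig +short test.kube.rocks

# cert-manager status: the Certificate should become Ready once the DNS01 challenge passes
kg certificate -A
k describe clusterissuer letsencrypt
```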
@ -489,7 +486,7 @@ First the auth variables :

First prepare the variables and set them accordingly:

{{< highlight host="demo-kube-k3s" file="main.tf" >}}

```tf
variable "http_username" {
@ -520,7 +517,7 @@ resource "null_resource" "encrypted_admin_password" {

{{</ highlight >}}

{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}}

```tf
http_username = "admin"
@ -536,7 +533,7 @@ Note on encrypted_admin_password, we generate a bcrypt hash of the password comp
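
If you prefer to generate or verify such a bcrypt hash manually instead of through Terraform, `htpasswd` from `apache2-utils` can do it (just an illustration, the password is a placeholder):

```sh
htpasswd -nbB admin 'my-password'
```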

Then apply the following Terraform code:

{{< highlight host="demo-kube-k3s" file="traefik.tf" >}}

```tf
resource "helm_release" "traefik" {
@ -615,10 +612,11 @@ Now go to `https://traefik.kube.rocks` and you should be asked for credentials.

This validates that the `auth` and `ip` middlewares are working properly.
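
You can also check it quickly from the CLI, expecting a `401` without credentials and a `200` with them (or a `403` if the IP middleware blocks you, which is exactly what the next section addresses):

```sh
curl -I https://traefik.kube.rocks/dashboard/
curl -I -u admin:my-password https://traefik.kube.rocks/dashboard/
```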

#### Forbidden troubleshooting

If you get `Forbidden`, it's because `middleware-ip` can't get your real IP. Try disabling it first to confirm you have dashboard access with credentials, then re-enable it by changing the [IP strategy](https://doc.traefik.io/traefik/middlewares/http/ipwhitelist/#ipstrategy). For example, if you're behind another reverse proxy like Cloudflare, increment `depth` to 1:

{{< highlight host="demo-kube-k3s" file="traefik.tf" >}}

```tf
//...
@ -640,7 +638,44 @@ resource "kubernetes_manifest" "traefik_middleware_ip" {

{{</ highlight >}}

In the case of Cloudflare, you may also need to trust the [Cloudflare IP ranges](https://www.cloudflare.com/ips-v4) in addition to the Hetzner load balancer. Just set `ports.websecure.forwardedHeaders.trustedIPs` and `ports.websecure.proxyProtocol.trustedIPs` accordingly.

{{< highlight host="demo-kube-k3s" file="main.tf" >}}

```tf
variable "cloudflare_ips" {
  type      = list(string)
  sensitive = true
}
```

{{</ highlight >}}
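
To fill `cloudflare_ips`, the current ranges can be fetched from the list linked above:

```sh
curl -s https://www.cloudflare.com/ips-v4
```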

{{< highlight host="demo-kube-k3s" file="traefik.tf" >}}

```tf
locals {
  trusted_ips = concat(["127.0.0.1/32", "10.0.0.0/8"], var.cloudflare_ips)
}

resource "helm_release" "traefik" {
  //...

  set {
    name  = "ports.websecure.forwardedHeaders.trustedIPs"
    value = "{${join(",", local.trusted_ips)}}"
  }

  set {
    name  = "ports.websecure.proxyProtocol.trustedIPs"
    value = "{${join(",", local.trusted_ips)}}"
  }
}
```

{{</ highlight >}}

Or for testing purposes, set `ports.websecure.forwardedHeaders.insecure` and `ports.websecure.proxyProtocol.insecure` to true.

## 2nd check ✅

@ -64,7 +64,7 @@ storage-02 --> streaming

Let's get back to our 1st Hcloud Terraform project and add a new node pool for storage:

{{< highlight host="demo-kube-hcloud" file="kube.tf" >}}

```tf
module "hcloud_kube" {
@ -95,7 +95,7 @@ Be sure to have labels and taints correctly set, as we'll use them later for Lon

After `terraform apply`, check that the new storage nodes are ready with `kgno`. Now we'll also apply a configurable dedicated block volume on each node for more flexible space management.

{{< highlight host="demo-kube-hcloud" file="kube.tf" >}}

```tf
module "hcloud_kube" {
@ -133,7 +133,7 @@ Note as if you set volume in same time as node pool creation, Hetzner doesn't se

Let's add S3-related variables in order to preconfigure Longhorn backup:

{{< highlight host="demo-kube-k3s" file="main.tf" >}}

```tf
variable "s3_endpoint" {
@ -161,7 +161,7 @@ variable "s3_secret_key" {

{{< /highlight >}}

{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}}

```tf
s3_endpoint = "s3.fr-par.scw.cloud"
@ -177,7 +177,7 @@ s3_secret_key = "xxx"

Return to the 2nd Kubernetes Terraform project and add the Longhorn installation:

{{< highlight host="demo-kube-k3s" file="longhorn.tf" >}}

```tf
resource "kubernetes_namespace_v1" "longhorn" {
@ -263,7 +263,7 @@ Use `kgpo -n longhorn-system -o wide` to check that Longhorn pods are correctly

The Longhorn Helm chart doesn't include Prometheus integration yet; in this case all we have to do is deploy a `ServiceMonitor`, which allows metrics scraping of Longhorn pods.

{{< highlight host="demo-kube-k3s" file="longhorn.tf" >}}

```tf
resource "kubernetes_manifest" "longhorn_service_monitor" {
@ -298,7 +298,7 @@ Monitoring will have dedicated post later.

Now we only have to expose the Longhorn UI to the world. We'll use the `IngressRoute` provided by Traefik.

{{< highlight host="demo-kube-k3s" file="longhorn.tf" >}}

```tf
resource "kubernetes_manifest" "longhorn_ingress" {
@ -366,7 +366,7 @@ k patch nodes.longhorn.io kube-storage-0x -n longhorn-system --type=merge --patc

Now all that's left is to create a dedicated storage class for fast local volumes. We'll use it for IOPS-critical statefulset workloads like PostgreSQL and Redis. Let's apply the next `StorageClass` configuration and check it with `kg sc`:

{{< highlight host="demo-kube-k3s" file="longhorn.tf" >}}

```tf
resource "kubernetes_storage_class_v1" "longhorn_fast" {
@ -401,7 +401,7 @@ Now it's time to set up some critical statefulset persistence workloads, and fir

### PostgreSQL variables

{{< highlight host="demo-kube-k3s" file="main.tf" >}}

```tf
variable "pgsql_user" {
@ -426,7 +426,7 @@ variable "pgsql_replication_password" {

{{< /highlight >}}

{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}}

```tf
pgsql_user = "kube"
@ -448,7 +448,7 @@ k label nodes kube-storage-02 node-role.kubernetes.io/read=true

We can finally apply the next Terraform configuration:

{{< highlight host="demo-kube-k3s" file="postgresql.tf" >}}

```tf
resource "kubernetes_namespace_v1" "postgres" {
@ -593,7 +593,7 @@ After PostgreSQL, set up a redis cluster is a piece of cake.

### Redis variables

{{< highlight host="demo-kube-k3s" file="main.tf" >}}

```tf
variable "redis_password" {
@ -604,7 +604,7 @@ variable "redis_password" {

{{< /highlight >}}

{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}}

```tf
redis_password = "xxx"
@ -614,7 +614,7 @@ redis_password = "xxx"

### Redis installation

{{< highlight host="demo-kube-k3s" file="redis.tf" >}}

```tf
resource "kubernetes_namespace_v1" "redis" {
@ -733,7 +733,7 @@ And that's it, job done ! Always check that Redis pods are correctly running on

The final essential step is to set up S3 backup for volumes. We already configured S3 backup in the [Longhorn variables step](#longhorn-variables), so we only have to configure the backup strategy. We could use the UI for that, but aren't we GitOps? So let's do it with Terraform.

{{< highlight host="demo-kube-k3s" file="longhorn.tf" >}}

```tf
locals {
@ -14,7 +14,30 @@ This is the **Part V** of more global topic tutorial. [Back to first part]({{< r

## Flux

In the GitOps world, 2 tools are in the lead for CD in k8s: Flux and ArgoCD. As Flux is CLI-first and more lightweight, it's my personal go-to. You may ask why we don't simply continue with the current k8s Terraform project?

You have already noted that by adding more and more Helm dependencies to Terraform, the plan time increases, as does the state file size. So it's not very scalable.

It's the perfect moment to draw a clear line between **IaC** and **CD**. IaC is for infrastructure, CD is for applications. So to summarize our GitOps stack:

1. IaC for Hcloud cluster initialization (*the basement*): **Terraform**
2. IaC for cluster configuration (*the walls*): **Helm** through **Terraform**
3. CD for application deployment (*the furniture*): **Flux**

{{< alert >}}
With some effort, you can probably eliminate the 2nd layer by using `Kube-Hetzner`, which takes care of ingress and storage, and using Flux directly for the remaining Helm charts like the database cluster. Or maybe you can also add custom Helm charts to `Kube-Hetzner`?
But as it increases complexity and dependency problems, I personally prefer to keep a clear separation between the middle layer and the rest, as it's more straightforward for me. Just a matter of taste 🥮
{{< /alert >}}

### Flux installation

Create a dedicated Git repository for Flux somewhere. I'm using GitHub, where it's just a matter of:

```sh
gh repo create demo-kube-flux --private
gh repo clone demo-kube-flux
cd demo-kube-flux
```
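
From there, a typical bootstrap looks like the following (a sketch only: the exact repository layout used in the rest of this guide may differ, and a `GITHUB_TOKEN` with repo scope is required):

```sh
flux bootstrap github \
  --owner=$GITHUB_USER \
  --repository=demo-kube-flux \
  --private=true \
  --path=clusters/demo
```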

## PgAdmin

@ -1,6 +1,6 @@
---
title: "Setup a HA Kubernetes cluster Part VI - CI tools"
date: 2023-10-06
description: "Follow this opinionated guide as starter-kit for your own Kubernetes platform..."
tags: ["kubernetes", "postgresql", "longhorn"]
draft: true
@ -1,6 +1,6 @@
---
title: "Setup a HA Kubernetes cluster Part VII - Monitoring Stack"
date: 2023-10-07
description: "Follow this opinionated guide as starter-kit for your own Kubernetes platform..."
tags: ["kubernetes", "prometheus", "loki", "grafana"]
draft: true
@ -1,6 +1,6 @@
---
title: "Setup a HA Kubernetes cluster Part VIII - Load testing & tracing"
date: 2023-10-08
description: "Follow this opinionated guide as starter-kit for your own Kubernetes platform..."
tags: ["kubernetes", "k6", "jaeger"]
draft: true