init k8s guide


@@ -143,9 +143,118 @@ Note that we'll use `components_extra` to add `image-reflector-controller` and `image-automation-controller`
After applying this, use `kg deploy -n flux-system` to check that Flux is correctly installed and running.
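For reference, a quick sanity check could look like this (assuming `kg` is your alias for `kubectl get`, as used throughout this guide):

```sh
# list Flux controllers; all deployments should be Ready
kubectl get deploy -n flux-system

# with components_extra, you should see image-reflector-controller and
# image-automation-controller alongside the 4 default controllers
```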
### Test with pgAdmin
A first typical example is pgAdmin, a web UI for Postgres. We'll use it to manage our database cluster. It requires a local PVC to store its user data and settings.
{{< highlight host="demo-kube-flux" file="clusters/demo/postgres/kustomization.yaml" >}}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deploy-pgadmin.yaml
```
{{< /highlight >}}
{{< highlight host="demo-kube-flux" file="clusters/demo/postgres/deploy-pgadmin.yaml" >}}
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
  namespace: postgres
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      securityContext:
        runAsUser: 5050
        runAsGroup: 5050
        fsGroup: 5050
        fsGroupChangePolicy: "OnRootMismatch"
      containers:
        - name: pgadmin
          image: dpage/pgadmin4:latest
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: admin@kube.rocks
            - name: PGADMIN_DEFAULT_PASSWORD
              value: kuberocks
          volumeMounts:
            - name: pgadmin-data
              mountPath: /var/lib/pgadmin
      volumes:
        - name: pgadmin-data
          persistentVolumeClaim:
            claimName: pgadmin-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgadmin-data
  namespace: postgres
spec:
  resources:
    requests:
      storage: 128Mi
  volumeMode: Filesystem
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin
  namespace: postgres
spec:
  selector:
    app: pgadmin
  ports:
    - port: 80
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: pgadmin
  namespace: postgres
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`pgadmin.kube.rocks`)
      kind: Rule
      middlewares:
        - name: middleware-ip
          namespace: traefik
      services:
        - name: pgadmin
          port: 80
```
{{< /highlight >}}
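Optionally, you can check from the CLI that the PVC is bound and the rollout is finished before heading to the browser (plain `kubectl`, nothing specific to this guide):

```sh
kubectl get pvc pgadmin-data -n postgres
kubectl rollout status deployment/pgadmin -n postgres
```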
Wait a few minutes, then go to `pgadmin.kube.rocks` and log in with the default credentials. Don't forget to change them immediately to a real password, as they're stored in pgAdmin's local DB. Now try to register a new server with `postgresql-primary.postgres` as the hostname, and the rest with your PostgreSQL credentials from the previous installation. It should work!
You can test the read replica too by registering a new server using the hostname `postgresql-read.postgres`. Try some updates on the primary and check that they're replicated on the read replica; any write on the replica should be rejected, as shown in the sketch below.
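If you prefer to verify replication from a terminal, here is a minimal sketch using a throwaway psql pod; the service hostnames come from above, but the image tag and credentials are assumptions to adapt:

```sh
# spawn a temporary Postgres client pod
kubectl run psql-client --rm -it --image=postgres:15 -n postgres -- bash

# inside the pod: write on the primary...
psql -h postgresql-primary.postgres -U postgres -c \
  "CREATE TABLE IF NOT EXISTS replication_test (id int); INSERT INTO replication_test VALUES (1);"

# ...read it back from the replica...
psql -h postgresql-read.postgres -U postgres -c "SELECT * FROM replication_test;"

# ...and confirm writes are rejected on the replica
# (expected: "cannot execute INSERT in a read-only transaction")
psql -h postgresql-read.postgres -U postgres -c "INSERT INTO replication_test VALUES (2);"
```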
## Install some no-code tools
### Managing secrets
Before continuing with some more advanced apps, we should talk about secrets. As always with GitOps, secure secrets management is critical. Nobody wants to expose sensitive data in a Git repository. An easy-to-go solution is [Bitnami Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets), which deploys a dedicated controller in your cluster that automatically decrypts sealed secrets.
Open the `demo-kube-flux` project and create a Helm deployment for Sealed Secrets.
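As a minimal sketch, a Flux `HelmRepository` + `HelmRelease` pair for the controller could look like this (file location, chart version, and values are assumptions to adapt; `fullnameOverride` keeps the controller name that `kubeseal` looks for by default):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: sealed-secrets
  namespace: flux-system
spec:
  interval: 1h
  url: https://bitnami-labs.github.io/sealed-secrets
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: sealed-secrets
  namespace: flux-system
spec:
  interval: 5m
  chart:
    spec:
      chart: sealed-secrets
      sourceRef:
        kind: HelmRepository
        name: sealed-secrets
  values:
    fullnameOverride: sealed-secrets-controller
```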
@@ -218,153 +327,7 @@ curl http://localhost:8080/v1/cert.pem > pub-sealed-secrets.pem
By the way, install the client with `brew install kubeseal` (Mac / Linux) or `scoop install kubeseal` (Windows).
{{< /alert >}}
## Install some tools
It's now finally time to install some tools to help us in our CD journey.
### pgAdmin
A first good example is typically pgAdmin, a web UI for Postgres. We'll use it to manage our database cluster. It requires a local PVC to store its user data and settings.
{{< highlight host="demo-kube-flux" file="clusters/demo/postgres/kustomization.yaml" >}}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deploy-pgadmin.yaml
- sealed-secret-pgadmin.yaml
```
{{< /highlight >}}
{{< highlight host="demo-kube-flux" file="clusters/demo/postgres/deploy-pgadmin.yaml" >}}
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
  namespace: postgres
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      securityContext:
        runAsUser: 5050
        runAsGroup: 5050
        fsGroup: 5050
        fsGroupChangePolicy: "OnRootMismatch"
      containers:
        - name: pgadmin
          image: dpage/pgadmin4:latest
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              valueFrom:
                secretKeyRef:
                  name: pgadmin-auth
                  key: default-email
            - name: PGADMIN_DEFAULT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgadmin-auth
                  key: default-password
          volumeMounts:
            - name: pgadmin-data
              mountPath: /var/lib/pgadmin
      volumes:
        - name: pgadmin-data
          persistentVolumeClaim:
            claimName: pgadmin-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgadmin-data
  namespace: postgres
spec:
  resources:
    requests:
      storage: 128Mi
  volumeMode: Filesystem
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin
  namespace: postgres
spec:
  selector:
    app: pgadmin
  ports:
    - port: 80
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: pgadmin
  namespace: postgres
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`pgadmin.kube.rocks`)
      kind: Rule
      middlewares:
        - name: middleware-ip
          namespace: traefik
      services:
        - name: pgadmin
          port: 80
```
{{< /highlight >}}
Here are the secrets to adapt to your needs:
{{< highlight host="demo-kube-flux" file="clusters/demo/postgres/secret-pgadmin.yaml" >}}
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pgadmin-auth
  namespace: postgres
type: Opaque
data:
  default-email: YWRtaW5Aa3ViZS5yb2Nrcw==
  default-password: YWRtaW4=
```
{{< /highlight >}}
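The values under `data` must be base64-encoded, for example:

```sh
echo -n "admin@kube.rocks" | base64
# YWRtaW5Aa3ViZS5yb2Nrcw==
```

Then seal the secret and remove the original file: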
```sh
cat clusters/demo/postgres/secret-pgadmin.yaml | kubeseal --format=yaml --cert=pub-sealed-secrets.pem > clusters/demo/postgres/sealed-secret-pgadmin.yaml
rm clusters/demo/postgres/secret-pgadmin.yaml
```
{{< alert >}}
Don't forget to remove the original secret file before committing, for obvious reasons! If it's too late, consider the password leaked and regenerate a new one.
You may use the [VSCode extension](https://github.com/codecontemplator/vscode-kubeseal) to seal secrets directly from the editor.
{{< /alert >}}
Wait a few minutes, then go to `pgadmin.kube.rocks` and log in with the chosen credentials. Now try to register a new server with `postgresql-primary.postgres` as the hostname, and the rest with your PostgreSQL credentials from the previous installation. It should work!
You can test the read replica too by registering a new server using the hostname `postgresql-read.postgres`. Try some updates on the primary and check that they're replicated on the read replica; any write on the replica should be rejected as well.
It's now finally time to install some useful tools to help us in our CD journey.
### n8n
@@ -421,10 +384,6 @@ spec:
value: "5678"
- name: NODE_ENV
value: production
- name: N8N_METRICS
value: "true"
- name: QUEUE_HEALTH_CHECK_ACTIVE
value: "true"
- name: WEBHOOK_URL
value: https://n8n.kube.rocks/
- name: DB_TYPE
@@ -493,18 +452,6 @@ spec:
  ports:
    - port: 5678
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: metrics
  namespace: n8n
spec:
  endpoints:
    - targetPort: 5678
  selector:
    matchLabels:
      app: n8n
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
@@ -523,8 +470,6 @@ spec:
{{< /highlight >}}
Because n8n supports metrics when `N8N_METRICS` is set to `true`, note that we add a `ServiceMonitor` to allow Prometheus to scrape them (next chapter).
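You can quickly verify that the metrics endpoint responds once the pod is up (the service name `n8n` is an assumption based on the surrounding manifests):

```sh
kubectl port-forward svc/n8n 5678:5678 -n n8n &
curl -s http://localhost:5678/metrics | head
```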
Here are the secrets to adapt to your needs:
{{< highlight host="demo-kube-flux" file="clusters/demo/n8n/secret-n8n-db.yaml" >}}
@@ -558,6 +503,20 @@ data:
{{< /highlight >}}
Now you have to seal these secrets with `kubeseal` and remove the original files. Type this in the project root:
```sh
cat clusters/demo/n8n/secret-n8n-db.yaml | kubeseal --format=yaml --cert=pub-sealed-secrets.pem > clusters/demo/n8n/sealed-secret-n8n-db.yaml
rm clusters/demo/n8n/secret-n8n-db.yaml
```
Do the same for the SMTP secret.
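For instance (the SMTP secret file name is an assumption; use the one you created):

```sh
cat clusters/demo/n8n/secret-n8n-smtp.yaml | kubeseal --format=yaml --cert=pub-sealed-secrets.pem > clusters/demo/n8n/sealed-secret-n8n-smtp.yaml
rm clusters/demo/n8n/secret-n8n-smtp.yaml
```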
{{< alert >}}
Don't forget to remove the original secret file before committing, for obvious reasons! If it's too late, consider the password leaked and regenerate a new one.
You may use the [VSCode extension](https://github.com/codecontemplator/vscode-kubeseal) to seal secrets directly from the editor.
{{< /alert >}}
Before continuing, go to pgAdmin, create the `n8n` DB, and set an `n8n` user with proper credentials as its owner.
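If you prefer the CLI over the pgAdmin UI, here is an equivalent sketch with `psql` (the password is yours to choose and must match the sealed DB secret):

```sh
# run from a pod inside the cluster, e.g. the temporary psql client shown earlier
psql -h postgresql-primary.postgres -U postgres -c "CREATE ROLE n8n WITH LOGIN PASSWORD 'xxx';"
psql -h postgresql-primary.postgres -U postgres -c "CREATE DATABASE n8n OWNER n8n;"
```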
Then don't forget to seal the secrets and remove the original files, the same way as for pgAdmin. Once pushed, n8n should deploy and automatically migrate the DB; soon after, `n8n.kube.rocks` should be available, allowing you to create your first account.
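To follow the deployment, a couple of commands help (assuming the `flux` CLI is installed locally, and `kg` aliases `kubectl get`):

```sh
# force Flux to pick up the new commit immediately
flux reconcile kustomization flux-system --with-source

# then watch the n8n rollout
kg deploy -n n8n
```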


@@ -12,13 +12,258 @@ Be free from AWS/Azure/GCP by building a production grade On-Premise Kubernetes
This is **Part VII** of a more global topic tutorial. [Back to first part]({{< ref "/posts/10-build-your-own-kubernetes-cluster" >}}) for the intro.
## Self-hosted VCS
It's finally time to build our CI stack. Let's start with a self-hosted VCS. We'll use [Gitea](https://gitea.io/), a lightweight GitHub clone that is far less resource intensive than GitLab. You can of course skip this entire chapter and stay with GitHub/GitLab if you prefer, but one of the goals of this tutorial is to maximize self-hosting, so let's go!
As I consider CI to be part of the infrastructure, I'll use the dedicated Terraform project for Helm releases management. But again, it's up to you; if you prefer using Flux, it'll work too.
### Gitea
The Gitea Helm Chart is a bit tricky to configure properly. Let's begin with some additional required variables:
{{< highlight host="demo-kube-k3s" file="main.tf" >}}
```tf
variable "gitea_db_password" {
type = string
sensitive = true
}
```
{{< /highlight >}}
{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}}
```tf
gitea_db_password = "xxx"
```
{{< /highlight >}}
{{< highlight host="demo-kube-k3s" file="gitea.tf" >}}
```tf
locals {
  redis_connection = "redis://:${urlencode(var.redis_password)}@redis-master.redis:6379/0"
}

resource "kubernetes_namespace_v1" "gitea" {
  metadata {
    name = "gitea"
  }
}

resource "helm_release" "gitea" {
  chart      = "gitea"
  version    = "9.2.0"
  repository = "https://dl.gitea.io/charts"
  name       = "gitea"
  namespace  = kubernetes_namespace_v1.gitea.metadata[0].name

  set {
    name  = "strategy.type"
    value = "Recreate"
  }

  set {
    name  = "postgresql-ha.enabled"
    value = "false"
  }

  set {
    name  = "redis-cluster.enabled"
    value = "false"
  }

  set {
    name  = "persistence.storageClass"
    value = "longhorn"
  }

  set {
    name  = "persistence.size"
    value = "2Gi"
  }

  set {
    name  = "gitea.config.server.DOMAIN"
    value = "gitea.${var.domain}"
  }

  set {
    name  = "gitea.config.server.SSH_DOMAIN"
    value = "ssh.${var.domain}"
  }

  set {
    name  = "gitea.config.server.ROOT_URL"
    value = "https://gitea.${var.domain}"
  }

  set {
    name  = "gitea.config.database.DB_TYPE"
    value = "postgres"
  }

  set {
    name  = "gitea.config.database.HOST"
    value = "postgresql-primary.postgres"
  }

  set {
    name  = "gitea.config.database.NAME"
    value = "gitea"
  }

  set {
    name  = "gitea.config.database.USER"
    value = "gitea"
  }

  set {
    name  = "gitea.config.database.PASSWD"
    value = var.gitea_db_password
  }

  set {
    name  = "gitea.config.indexer.REPO_INDEXER_ENABLED"
    value = "true"
  }

  set {
    name  = "gitea.config.mailer.ENABLED"
    value = "true"
  }

  set {
    name  = "gitea.config.mailer.FROM"
    value = "gitea@${var.domain}"
  }

  set {
    name  = "gitea.config.mailer.SMTP_ADDR"
    value = var.smtp_host
  }

  set {
    name  = "gitea.config.mailer.SMTP_PORT"
    value = var.smtp_port
  }

  set {
    name  = "gitea.config.mailer.USER"
    value = var.smtp_user
  }

  set {
    name  = "gitea.config.mailer.PASSWD"
    value = var.smtp_password
  }

  set {
    name  = "gitea.config.cache.ADAPTER"
    value = "redis"
  }

  set {
    name  = "gitea.config.cache.HOST"
    value = local.redis_connection
  }

  set {
    name  = "gitea.config.session.PROVIDER"
    value = "redis"
  }

  set {
    name  = "gitea.config.session.PROVIDER_CONFIG"
    value = local.redis_connection
  }

  set {
    name  = "gitea.config.queue.TYPE"
    value = "redis"
  }

  set {
    name  = "gitea.config.queue.CONN_STR"
    value = local.redis_connection
  }

  set {
    name  = "gitea.config.service.DISABLE_REGISTRATION"
    value = "true"
  }

  set {
    name  = "gitea.config.repository.DEFAULT_BRANCH"
    value = "main"
  }

  set {
    name  = "gitea.config.metrics.ENABLED_ISSUE_BY_REPOSITORY"
    value = "true"
  }

  set {
    name  = "gitea.config.metrics.ENABLED_ISSUE_BY_LABEL"
    value = "true"
  }

  set {
    name  = "gitea.config.webhook.ALLOWED_HOST_LIST"
    value = "*"
  }
}
```
{{< /highlight >}}
Note that we disable the included Redis and PostgreSQL, because we use our own Redis and PostgreSQL clusters. We'll try to get a working SSH service too (a possible approach is sketched after the ingress below).
The related ingress:
{{< highlight host="demo-kube-k3s" file="gitea.tf" >}}
```tf
resource "kubernetes_manifest" "gitea_ingress" {
manifest = {
apiVersion = "traefik.io/v1alpha1"
kind = "IngressRoute"
metadata = {
name = "gitea-http"
namespace = kubernetes_namespace_v1.gitea.metadata[0].name
}
spec = {
entryPoints = ["websecure"]
routes = [
{
match = "Host(`gitea.${var.domain}`)"
kind = "Rule"
services = [
{
name = "gitea-http"
port = 3000
}
]
}
]
}
}
}
```
{{< /highlight >}}
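For SSH access, one possible approach is a Traefik `IngressRouteTCP` pointing at the chart's SSH service; this sketch assumes you have declared a dedicated `ssh` entry point on Traefik, which is not covered here:

```tf
resource "kubernetes_manifest" "gitea_ingress_ssh" {
  manifest = {
    apiVersion = "traefik.io/v1alpha1"
    kind       = "IngressRouteTCP"
    metadata = {
      name      = "gitea-ssh"
      namespace = kubernetes_namespace_v1.gitea.metadata[0].name
    }
    spec = {
      entryPoints = ["ssh"]
      routes = [
        {
          # plain TCP, so no SNI-based routing is possible here
          match = "HostSNI(`*`)"
          services = [
            {
              name = "gitea-ssh"
              port = 22
            }
          ]
        }
      ]
    }
  }
}
```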
Log in at `https://gitea.kube.rocks` with the default credentials *gitea_admin / r8sA8CPHD9!bt6d*, and **change them immediately**.
### Push our first app
## CI
### Concourse CI
* Automatic build on commit
* Push to Gitea Container Registry