---
title: "Setup a HA Kubernetes cluster for less than $60 / month"
date: 2023-10-01
description: "Follow this opinionated guide as starter-kit for your own Kubernetes platform..."
tags: ["kubernetes"]
---
Then go to `https://concourse.kube.rocks` and log in with the chosen credentials.
## Workflow

It's now time to step back and think about how we'll use our CI. Our goal is to build our above dotnet Web API with Concourse CI as a container image, ready to deploy to our cluster through Flux, which completes the CI/CD pipeline. To summarize the scenario:

1. Concourse CI checks the repo periodically (pull model) for newly pushed code and triggers a build if applicable
2. When the container image build passes, Concourse CI pushes the new image to our private registry, which is already handled by Gitea
3. Flux, which can perfectly live in a different cluster, checks the registry periodically (pull model); if a new image tag is detected, it deploys it automatically to our cluster

{{< alert >}}
Although the default pull model, which generally checks every minute, is the most secure and configuration-free approach, it's possible to use a secured webhook instead in order to reduce the time between code push and deployment.
{{< /alert >}}
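For illustration, here's a minimal sketch of what the push-based variant could look like on the Concourse side. The `webhook_token` field is standard Concourse resource configuration; the token value and the Gitea webhook pointing at the check endpoint are assumptions to adapt to your setup:

```yml
# Minimal sketch (assumption: any shared secret of your choice as token).
resources:
  - name: source-code
    type: git
    webhook_token: ((webhook-token))
    source:
      uri: ((git.url))/kuberocks/demo
      branch: main

# Then add a Gitea webhook (push events) targeting the Concourse check endpoint:
# https://concourse.kube.rocks/api/v1/teams/main/pipelines/demo/resources/source-code/check/webhook?webhook_token=<token>
```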
The pipeline flow is pretty straightforward:
{{< mermaid >}}
graph RL
  subgraph R [Private registry]
    C[/Container Registry/]
  end
  S -- scan --> R
  S -- push --> J[(Flux repository)]
  subgraph CD
    D{Flux} -- check --> J
    D -- deploy --> E((Kube API))
  end
  subgraph S [Image Scanner]
    I[Image Reflector] -- trigger --> H[Image Automation]
  end
  subgraph CI
    A{Concourse} -- check --> B[(Code repository)]
    A -- push --> C
    F((Worker)) -- build --> A
  end
{{< /mermaid >}}
### The credentials

We need to:

1. Give Concourse read/write access to our Gitea and registry. Note that we need write access to the code repository because Concourse has to store the new image tag. We'll use the [semver resource](https://github.com/concourse/semver-resource) for that.
2. Give Flux read registry credentials for regular image tag checking, as well as to Kubernetes in order to allow image pulling from the private registry.

Let's create 2 new users on Gitea: `concourse` with admin access and `container` as a standard user. Store these credentials in new variables:
{{< highlight host="demo-kube-k3s" file="main.tf" >}}

```tf
variable "concourse_git_username" {
  type = string
}

variable "concourse_git_password" {
  type      = string
  sensitive = true
}

variable "container_registry_username" {
  type = string
}

variable "container_registry_password" {
  type      = string
  sensitive = true
}
```

{{< /highlight >}}
{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}}

```tf
concourse_git_username      = "concourse"
concourse_git_password      = "xxx"
container_registry_username = "container"
container_registry_password = "xxx"
```

{{< /highlight >}}
Apply the credentials for Concourse:

{{< highlight host="demo-kube-k3s" file="concourse.tf" >}}

```tf
resource "kubernetes_secret_v1" "concourse_registry" {
  metadata {
    name      = "registry"
    namespace = "concourse-main"
  }

  data = {
    name     = "gitea.${var.domain}"
    username = var.concourse_git_username
    password = var.concourse_git_password
  }

  depends_on = [
    helm_release.concourse
  ]
}

resource "kubernetes_secret_v1" "concourse_git" {
  metadata {
    name      = "git"
    namespace = "concourse-main"
  }

  data = {
    url            = "https://gitea.${var.domain}"
    username       = var.concourse_git_username
    password       = var.concourse_git_password
    git-user       = "Concourse CI <concourse@kube.rocks>"
    commit-message = "bump to %version% [ci skip]"
  }

  depends_on = [
    helm_release.concourse
  ]
}
```

{{< /highlight >}}
Note that we use the `concourse-main` namespace, already created by the Concourse Helm installer, which is a dedicated namespace for the default team `main`. Because of that, we should keep `depends_on` to ensure the namespace is created before the secrets.

{{< alert >}}
Don't forget the `[ci skip]` in the commit message used for version bumping, otherwise you'll get an infinite loop of builds!
{{< /alert >}}

Then do the same for Flux and the namespace that will receive the app:
{{< highlight host="demo-kube-k3s" file="flux.tf" >}}

```tf
resource "kubernetes_secret_v1" "image_pull_secrets" {
  for_each = toset(["flux-system", "kuberocks"])
  metadata {
    name      = "dockerconfigjson"
    namespace = each.value
  }

  type = "kubernetes.io/dockerconfigjson"

  data = {
    ".dockerconfigjson" = jsonencode({
      auths = {
        "gitea.${var.domain}" = {
          auth = base64encode("${var.container_registry_username}:${var.container_registry_password}")
        }
      }
    })
  }
}
```

{{< /highlight >}}

{{< alert >}}
Create the `kuberocks` namespace first with `k create namespace kuberocks`, or you'll get an error.
{{< /alert >}}
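If you prefer to stay declarative, you could instead let Flux create the namespace from a manifest committed to the repo; a minimal sketch (the file path is only a suggestion):

```yaml
# clusters/demo/kuberocks/namespace.yaml (hypothetical path)
apiVersion: v1
kind: Namespace
metadata:
  name: kuberocks
```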
### Build and push the container image

Now that all required credentials are in place, we have to tell Concourse how to check our repo and build our container image. This is done through a pipeline, which is a specific Concourse YAML file.

#### The Dockerfile

First create the following files in the root of your repo, which we'll use for building a production-ready container image:
{{< highlight host="kuberocks-demo" file=".dockerignore" >}}

```txt
**/bin/
**/obj/
```

{{< /highlight >}}

{{< highlight host="kuberocks-demo" file="Dockerfile" >}}

```Dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:7.0

WORKDIR /publish
COPY /publish .

EXPOSE 80
ENTRYPOINT ["dotnet", "KubeRocksDemo.dll"]
```

{{< /highlight >}}
#### The pipeline

Let's reuse our flux repository and create a file `pipelines/demo.yaml` with the following content:
{{< highlight host="demo-kube-flux" file="pipelines/demo.yaml" >}}

```yml
resources:
  - name: version
    type: semver
    source:
      driver: git
      uri: ((git.url))/kuberocks/demo
      branch: main
      file: version
      username: ((git.username))
      password: ((git.password))
      git_user: ((git.git-user))
      commit_message: ((git.commit-message))
  - name: source-code
    type: git
    icon: coffee
    source:
      uri: ((git.url))/kuberocks/demo
      branch: main
      username: ((git.username))
      password: ((git.password))
  - name: docker-image
    type: registry-image
    icon: docker
    source:
      repository: ((registry.name))/kuberocks/demo
      tag: latest
      username: ((registry.username))
      password: ((registry.password))

jobs:
  - name: build
    plan:
      - get: source-code
        trigger: true

      - task: build-source
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: mcr.microsoft.com/dotnet/sdk
              tag: "7.0"
          inputs:
            - name: source-code
              path: .
          outputs:
            - name: binaries
              path: publish
          caches:
            - path: /root/.nuget/packages
          run:
            path: /bin/sh
            args:
              - -ec
              - |
                dotnet format --verify-no-changes
                dotnet build -c Release
                dotnet publish src/KubeRocks.WebApi -c Release -o publish --no-restore --no-build

      - task: build-image
        privileged: true
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: concourse/oci-build-task
          inputs:
            - name: source-code
              path: .
            - name: binaries
              path: publish
          outputs:
            - name: image
          run:
            path: build
      - put: version
        params: { bump: patch }

      - put: docker-image
        params:
          additional_tags: version/number
          image: image/image.tar
```

{{< /highlight >}}
A bit verbose compared to other CIs, but it gets the job done. That's the price of maximum flexibility. Now in order to apply it we need to install the `fly` CLI tool, just a matter of `scoop install concourse-fly` on Windows. Then:
```sh
# login to your Concourse instance
fly -t kuberocks login -c https://concourse.kube.rocks

# create the pipeline and activate it
fly -t kuberocks set-pipeline -p demo -c pipelines/demo.yaml
fly -t kuberocks unpause-pipeline -p demo
```
A build will be triggered immediately. You can follow it on the Concourse UI.

![](concourse-pipeline.png)

If everything is ok, check `https://gitea.kube.rocks/admin/packages`: you should see a new image tag in your registry! A new `version` file is automatically pushed to the code repo in order to keep track of the image tag version.

![](concourse-build.png)
#### Automatic pipeline update

If you don't want to use the fly CLI for every pipeline update, you may be interested in the `set_pipeline` feature. Create the following file:
{{< highlight host="demo-kube-flux" file="pipelines/main.yaml" >}}

```yml
resources:
  - name: ci
    type: git
    icon: git
    source:
      uri: https://github.com/kuberocks/demo-kube-flux

jobs:
  - name: configure-pipelines
    plan:
      - get: ci
        trigger: true
      - set_pipeline: demo
        file: ci/pipelines/demo.yaml
```

{{< /highlight >}}
Then apply it:

```sh
fly -t kuberocks set-pipeline -p main -c pipelines/main.yaml
```

Now you can manually trigger the pipeline, or wait for the next check, and it will update the demo pipeline automatically. If you're using a private repo for your pipelines, you may need to add a new secret for the git credentials and set `username` and `password` accordingly.

You almost never need the fly CLI anymore, except for adding new pipelines! You can even go further with `set_pipeline: self`, which is still an experimental feature.
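For the record, a self-updating main pipeline could look like the following sketch; `set_pipeline: self` is the experimental syntax, the rest mirrors the pipeline above:

```yml
jobs:
  - name: configure-pipelines
    plan:
      - get: ci
        trigger: true
      # experimental: the main pipeline updates itself from its own definition
      - set_pipeline: self
        file: ci/pipelines/main.yaml
      - set_pipeline: demo
        file: ci/pipelines/demo.yaml
```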
### The deployment

If you followed the previous parts of this tutorial, you should have a clue about how to deploy your app. Let's deploy it with Flux:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}}

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: kuberocks
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      imagePullSecrets:
        - name: dockerconfigjson
      containers:
        - name: api
          image: gitea.kube.rocks/kuberocks/demo:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo
  namespace: kuberocks
  labels:
    app: demo
spec:
  selector:
    app: demo
  ports:
    - name: http
      port: 80
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: demo
  namespace: kuberocks
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`demo.kube.rocks`)
      kind: Rule
      services:
        - name: demo
          port: http
```

{{< /highlight >}}
Note that we have set `imagePullSecrets` in order to use the previously created credentials for private registry access. The rest is pretty straightforward. Once pushed, after about 1 minute, you should see your app deployed at `https://demo.kube.rocks`. Check the API response on `https://demo.kube.rocks/WeatherForecast`.

However, one last thing is missing: the automatic deployment.

#### Image automation

If you checked the above flowchart, you'll note that image automation is a separate process from Flux that only scans the registry for new image tags and pushes any new tag to the Flux repository. Flux then detects the new commit in the Git repository, including the new tag, and automatically deploys it to K8s.

By default, if no strategy is set, K8s will do a **rolling deployment**, i.e. create the new replica first before terminating the old one. This will prevent any downtime, on the condition that you also set a **readiness probe** in your pod spec, which is a later topic.
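As a quick teaser of that later topic, a readiness probe sketch could look like this; reusing `/WeatherForecast` is just a convenient assumption since our API already exposes it, a dedicated health endpoint would be cleaner:

```yaml
# Sketch: only route traffic to a replica once it actually answers HTTP.
containers:
  - name: api
    image: gitea.kube.rocks/kuberocks/demo:latest
    ports:
      - containerPort: 80
    readinessProbe:
      httpGet:
        path: /WeatherForecast # assumption: existing endpoint as health check
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```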
Let's define the image update automation task for the main Flux repository:
{{< highlight host="demo-kube-flux" file="clusters/demo/flux-add-ons/image-update-automation.yaml" >}}

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  git:
    checkout:
      ref:
        branch: main
    commit:
      author:
        email: fluxcdbot@kube.rocks
        name: fluxcdbot
      messageTemplate: "{{range .Updated.Images}}{{println .}}{{end}}"
    push:
      branch: main
  update:
    path: ./clusters/demo
    strategy: Setters
```

{{< /highlight >}}
Now we need to tell Image Reflector how to scan the repository, as well as the attached policy for tag updates:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/images-demo.yaml" >}}

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: demo
  namespace: flux-system
spec:
  image: gitea.kube.rocks/kuberocks/demo
  interval: 1m0s
  secretRef:
    name: dockerconfigjson
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: demo
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: demo
    namespace: flux-system
  policy:
    semver:
      range: 0.0.x
```

{{< /highlight >}}

{{< alert >}}
As usual, don't forget `dockerconfigjson` for private registry access.
{{< /alert >}}
And finally edit the deployment to use the policy by adding a specific marker next to the image tag:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}}

```yaml
# ...
      containers:
        - name: api
          image: gitea.kube.rocks/kuberocks/demo:latest # {"$imagepolicy": "flux-system:demo"}
# ...
```

{{< /highlight >}}
It tells Image Automation where to update the tag in the Flux repository. The format is `{"$imagepolicy": "<policy-namespace>:<policy-name>"}`.

Push the changes and wait for about 1 minute, then pull the flux repo. You should see a new commit coming in, and `latest` should be replaced by an explicit tag like so:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}}

```yaml
# ...
      containers:
        - name: api
          image: gitea.kube.rocks/kuberocks/demo:0.0.1 # {"$imagepolicy": "flux-system:demo"}
# ...
```

{{< /highlight >}}
Check that the pod has been correctly updated with `kgpo -n kuberocks`. Use `kd -n kuberocks deploy/demo` to verify that the explicit tag is used instead of `latest`:
```txt
Pod Template:
  Labels:  app=demo
  Containers:
   api:
    Image:      gitea.kube.rocks/kuberocks/demo:0.0.1
    Port:       80/TCP
```
### Retest the whole workflow

Damn, I think we're done 🎉! It's time to retest the full process. Add a new controller endpoint to our demo project and push the code:
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Controllers/WeatherForecastController.cs" >}}

```cs
//...
public class WeatherForecastController : ControllerBase
{
    //...

    [HttpGet("{id}", Name = "GetWeatherForecastById")]
    public WeatherForecast GetById(int id)
    {
        return new WeatherForecast
        {
            Date = DateOnly.FromDateTime(DateTime.Now.AddDays(id)),
            TemperatureC = Random.Shared.Next(-20, 55),
            Summary = Summaries[Random.Shared.Next(Summaries.Length)]
        };
    }
}
```

{{< /highlight >}}
Wait for the pod to be updated, then check the new endpoint `https://demo.kube.rocks/WeatherForecast/1`. The API should return a new random weather forecast for tomorrow's date.

## 6th check ✅

We have everything we need for app building with automatic deployment! Go to the [next part]({{< ref "/posts/17-build-your-own-kubernetes-cluster-part-8" >}}) for advanced tracing / load testing!
---
title: "Setup a HA Kubernetes cluster Part XI - Load testing & Frontend"
date: 2023-10-10
description: "Follow this opinionated guide as starter-kit for your own Kubernetes platform..."
tags: ["kubernetes", "testing", "sonarqube", "load-testing", "k6"]
draft: true
---

{{< lead >}}
Be free from AWS/Azure/GCP by building a production grade On-Premise Kubernetes cluster on a cheap VPS provider, fully GitOps managed, and with complete CI/CD tools 🎉
{{< /lead >}}

This is **Part X** of a more global tutorial. [Back to first part]({{< ref "/posts/10-build-your-own-kubernetes-cluster" >}}) for the intro.
## Load testing

When it comes to load testing, k6 is a perfect tool for the job and integrates with many real-time series databases like Prometheus or InfluxDB. As we already have Prometheus, let's use it and spare ourselves a separate InfluxDB installation. First be sure to allow remote write by enabling `enableRemoteWriteReceiver` in the Prometheus Helm chart. It should already be done if you followed this tutorial.
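For reference, assuming Prometheus was installed via the kube-prometheus-stack chart as in the earlier parts, the flag lives roughly here in the Helm values:

```yaml
# Sketch of the relevant kube-prometheus-stack values.
prometheus:
  prometheusSpec:
    # accept remote-write pushes, e.g. from k6
    enableRemoteWriteReceiver: true
```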
### K6

We'll reuse our flux repo and add some manifests for defining the load testing scenario. First describe the scenario inside a `ConfigMap`: it scrapes all articles and then each individual article:
{{< highlight host="demo-kube-flux" file="jobs/demo-k6.yaml" >}}

```yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scenario
  namespace: kuberocks
data:
  script.js: |
    import http from "k6/http";
    import { check } from "k6";

    export default function () {
      const size = 10;
      let page = 1;

      let articles = []

      do {
        const res = http.get(`${__ENV.API_URL}/Articles?page=${page}&size=${size}`);
        check(res, {
          "status is 200": (r) => r.status == 200,
        });

        articles = res.json().articles;
        page++;

        articles.forEach((article) => {
          const res = http.get(`${__ENV.API_URL}/Articles/${article.slug}`);
          check(res, {
            "status is 200": (r) => r.status == 200,
          });
        });
      }
      while (articles.length > 0);
    }
```

{{< /highlight >}}
Then add the k6 `Job` in the same file, configured for Prometheus output and mounting the above scenario:
{{< highlight host="demo-kube-flux" file="jobs/demo-k6.yaml" >}}

```yml
#...
---
apiVersion: batch/v1
kind: Job
metadata:
  name: k6
  namespace: kuberocks
spec:
  ttlSecondsAfterFinished: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: run
          image: grafana/k6
          env:
            - name: API_URL
              value: https://demo.kube.rocks/api
            - name: K6_VUS
              value: "30"
            - name: K6_DURATION
              value: 1m
            - name: K6_PROMETHEUS_RW_SERVER_URL
              value: http://prometheus-operated.monitoring:9090/api/v1/write
          command:
            ["k6", "run", "-o", "experimental-prometheus-rw", "script.js"]
          volumeMounts:
            - name: scenario
              mountPath: /home/k6
      tolerations:
        - key: node-role.kubernetes.io/runner
          operator: Exists
          effect: NoSchedule
      nodeSelector:
        node-role.kubernetes.io/runner: "true"
      volumes:
        - name: scenario
          configMap:
            name: scenario
```

{{< /highlight >}}
Use appropriate `tolerations` and `nodeSelector` to run the load test on a node that has free CPU resources. You can play with the `K6_VUS` and `K6_DURATION` environment variables in order to change the load level.
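If you prefer a ramp-up over a constant level, k6 also accepts staged load through the `K6_STAGES` environment variable; a hypothetical variant of the Job environment:

```yml
# Sketch: replace the fixed K6_VUS/K6_DURATION pair with ramping stages
# ("duration:target-VUs", comma-separated).
env:
  - name: API_URL
    value: https://demo.kube.rocks/api
  - name: K6_STAGES
    value: "30s:10,1m:30,30s:0"
```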
Then you can launch the job with `ka jobs/demo-k6.yaml`. Check quickly that the job is running via `klo -n kuberocks job/k6`:
```txt
          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: script.js
     output: Prometheus remote write (http://prometheus-operated.monitoring:9090/api/v1/write)

  scenarios: (100.00%) 1 scenario, 30 max VUs, 1m30s max duration (incl. graceful stop):
           * default: 30 looping VUs for 1m0s (gracefulStop: 30s)
```
After 1 minute of running, the job should finish and show some raw results:
```txt
     ✓ status is 200

     checks.........................: 100.00% ✓ 17748    ✗ 0
     data_received..................: 404 MB  6.3 MB/s
     data_sent......................: 1.7 MB  26 kB/s
     http_req_blocked...............: avg=242.43µs min=223ns   med=728ns   max=191.27ms p(90)=1.39µs   p(95)=1.62µs
     http_req_connecting............: avg=13.13µs  min=0s      med=0s      max=9.48ms   p(90)=0s       p(95)=0s
     http_req_duration..............: avg=104.22ms min=28.9ms  med=93.45ms max=609.86ms p(90)=162.04ms p(95)=198.93ms
       { expected_response:true }...: avg=104.22ms min=28.9ms  med=93.45ms max=609.86ms p(90)=162.04ms p(95)=198.93ms
     http_req_failed................: 0.00%   ✓ 0        ✗ 17748
     http_req_receiving.............: avg=13.76ms  min=32.71µs med=6.49ms  max=353.13ms p(90)=36.04ms  p(95)=51.36ms
     http_req_sending...............: avg=230.04µs min=29.79µs med=93.16µs max=25.75ms  p(90)=201.92µs p(95)=353.61µs
     http_req_tls_handshaking.......: avg=200.57µs min=0s      med=0s      max=166.91ms p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=90.22ms  min=14.91ms med=80.76ms max=609.39ms p(90)=138.3ms  p(95)=169.24ms
     http_reqs......................: 17748   276.81409/s
     iteration_duration.............: avg=5.39s    min=3.97s   med=5.35s   max=7.44s    p(90)=5.94s    p(95)=6.84s
     iterations.....................: 348     5.427727/s
     vus............................: 7       min=7      max=30
     vus_max........................: 30      min=30     max=30
```
As we use Prometheus for outputting the results, we can visualize them easily with Grafana. You just have to import [this dashboard](https://grafana.com/grafana/dashboards/18030-official-k6-test-result/):

![](grafana-k6.png)

As we use Kubernetes, increasing the load capacity horizontally is dead easy. Go to the deployment configuration of the demo app to increase the replicas count, do the same for Traefik, and compare the results.
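In GitOps fashion that's a one-line change in the Flux repo; a sketch against our existing deployment manifest (scaling Traefik depends on how its Helm values are managed):

```yaml
# clusters/demo/kuberocks/deploy-demo.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: kuberocks
spec:
  replicas: 3 # bumped from 1 to spread the load
  # ...
```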
### Load balancing database

So far, we have only load balanced the stateless API, but what about the database part? We have set up a replicated PostgreSQL cluster, yet we make no use of the replica, which sits sadly idle. To change that we have to distinguish write queries from scalable read queries.

We can make use of the Bitnami [PostgreSQL HA](https://artifacthub.io/packages/helm/bitnami/postgresql-ha) chart instead of the simple one. It adds a new component, [Pgpool-II](https://pgpool.net/mediawiki/index.php/Main_Page), as the main load balancer with failover detection. It's able to separate write queries from read queries in real time and send them to the master or the replica. The advantage: it works natively for all apps without any changes. The cons: it consumes far more resources and adds a new component to maintain.

A second solution is to separate the query typologies where it counts: in the application. It requires some code changes, but it's clearly a far more efficient solution. Let's do it this way.

As Npgsql supports load balancing [natively](https://www.npgsql.org/doc/failover-and-load-balancing.html), we don't need to add any Kubernetes service. We just have to create a clear distinction between read and write queries. One simple way is to create a separate RO `DbContext`:
{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Contexts/AppRoDbContext.cs" >}}

```cs
namespace KubeRocks.Application.Contexts;

using KubeRocks.Application.Entities;

using Microsoft.EntityFrameworkCore;

public class AppRoDbContext : DbContext
{
    public DbSet<User> Users => Set<User>();
    public DbSet<Article> Articles => Set<Article>();
    public DbSet<Comment> Comments => Set<Comment>();

    public AppRoDbContext(DbContextOptions<AppRoDbContext> options) : base(options)
    {
    }
}
```

{{< /highlight >}}
Register it in DI:
{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Extensions/ServiceExtensions.cs" >}}

```cs
public static class ServiceExtensions
{
    public static IServiceCollection AddKubeRocksServices(this IServiceCollection services, IConfiguration configuration)
    {
        return services
            //...
            .AddDbContext<AppRoDbContext>((options) =>
            {
                options.UseNpgsql(
                    configuration.GetConnectionString("DefaultRoConnection")
                    ??
                    configuration.GetConnectionString("DefaultConnection")
                );
            });
    }
}
```

{{< /highlight >}}
We fall back to the RW connection string if the RO one is not defined. Then use it in the `ArticlesController`, which has only read endpoints:
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Controllers/ArticlesController.cs" >}}

```cs
//...

public class ArticlesController
{
    private readonly AppRoDbContext _context;

    //...

    public ArticlesController(AppRoDbContext context)
    {
        _context = context;
    }

    //...
}
```

{{< /highlight >}}
Push and let it pass the CI. In the meantime, add the new RO connection:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}}

```yaml
# ...
spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: api
          # ...
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: demo-db
                  key: password
            - name: ConnectionStrings__DefaultConnection
              value: Host=postgresql-primary.postgres;Username=demo;Password='$(DB_PASSWORD)';Database=demo;
            - name: ConnectionStrings__DefaultRoConnection
              value: Host=postgresql-primary.postgres,postgresql-read.postgres;Username=demo;Password='$(DB_PASSWORD)';Database=demo;Load Balance Hosts=true;
#...
```

{{< /highlight >}}
We simply have to add multiple hosts like `postgresql-primary.postgres,postgresql-read.postgres` for the RO connection string and enable LB mode with `Load Balance Hosts=true`.

Once deployed, relaunch a load test with k6 and admire the DB load balancing in action on both storage servers, either with `htop` or directly via the compute pods by namespace in Grafana.

![](grafana-db-lb.png)
## Frontend

Let's finish this guide with a quick look at SPA frontend development as a separate project from the backend.

### Vue TS

Create a new Vue.js project from the [vitesse starter kit](https://github.com/antfu/vitesse-lite) (be sure to have pnpm, just a matter of `scoop/brew install pnpm`):
```sh
npx degit antfu/vitesse-lite kuberocks-demo-ui
cd kuberocks-demo-ui
git init
git add .
git commit -m "Initial commit"
pnpm i
pnpm dev
```
It should launch the app at `http://localhost:3333/`. Create a new `kuberocks-demo-ui` Gitea repo and push this code into it. Now let's get the API calls working quickly.

### Get around CORS and HTTPS with YARP

As always when the frontend is separated from the backend, we have to deal with CORS. But I prefer to have one single URL for frontend + backend and to get rid of the CORS problem by simply calling the API under the `/api` path. Moreover, it'll be production ready without the need to manage any `Vite` variable for the API URL, and we'll get HTTPS provided by dotnet. Back to the API project:
```sh
dotnet add src/KubeRocks.WebApi package Yarp.ReverseProxy
```
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}}

```cs
//...

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));

//...

var app = builder.Build();

app.MapReverseProxy();

//...

app.UseRouting();

//...
```

{{< /highlight >}}
Note that we must add `app.UseRouting();` too in order to get Swagger UI working.

The proxy configuration (only for development):
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/appsettings.Development.json" >}}

```json
{
  //...
  "ReverseProxy": {
    "Routes": {
      "ServerRouteApi": {
        "ClusterId": "Server",
        "Match": {
          "Path": "/api/{**catch-all}"
        },
        "Transforms": [
          {
            "PathRemovePrefix": "/api"
          }
        ]
      },
      "ClientRoute": {
        "ClusterId": "Client",
        "Match": {
          "Path": "{**catch-all}"
        }
      }
    },
    "Clusters": {
      "Client": {
        "Destinations": {
          "Client1": {
            "Address": "http://localhost:3333"
          }
        }
      },
      "Server": {
        "Destinations": {
          "Server1": {
            "Address": "https://localhost:7159"
          }
        }
      }
    }
  }
}
```

{{< /highlight >}}
Now your frontend app should appear under `https://localhost:7159`, and API calls under `https://localhost:7159/api`. We now benefit from HTTPS for the whole app. Push the API code.

### Typescript API generator

As we use OpenAPI, it's possible to generate a TypeScript client for API calls. Add these packages:

```sh
pnpm add openapi-typescript -D
pnpm add openapi-typescript-fetch
```

Before generating the client model, go back to the backend to force attributes as required by default when they are not nullable with `Swashbuckle.AspNetCore`:
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Filters/RequiredNotNullableSchemaFilter.cs" >}}

```cs
using Microsoft.OpenApi.Models;

using Swashbuckle.AspNetCore.SwaggerGen;

namespace KubeRocks.WebApi.Filters;

public class RequiredNotNullableSchemaFilter : ISchemaFilter
{
    public void Apply(OpenApiSchema schema, SchemaFilterContext context)
    {
        if (schema.Properties is null)
        {
            return;
        }

        var notNullableProperties = schema
            .Properties
            .Where(x => !x.Value.Nullable && !schema.Required.Contains(x.Key))
            .ToList();

        foreach (var property in notNullableProperties)
        {
            schema.Required.Add(property.Key);
        }
    }
}
```

{{< /highlight >}}
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}}

```cs
//...

builder.Services.AddSwaggerGen(o =>
{
    o.SupportNonNullableReferenceTypes();
    o.SchemaFilter<RequiredNotNullableSchemaFilter>();
});

//...
```

{{< /highlight >}}
You should now have proper required attributes for models in the Swagger UI:

![](swagger-ui-nullable.png)

{{< alert >}}
Sadly, without this boring step, many attributes would be nullable in the generated TypeScript models, which leads to headaches on the client side by forcing us to handle nullability everywhere.
{{< /alert >}}

Now generate the models:
{{< highlight host="kuberocks-demo-ui" file="package.json" >}}

```json
{
  //...
  "scripts": {
    //...
    "openapi": "openapi-typescript http://localhost:5123/api/v1/swagger.json --output src/api/openapi.ts"
  },
  //...
}
```

{{< /highlight >}}
Use the HTTP version of swagger, as you'll otherwise get a self-signed certificate error. Then use `pnpm openapi` to generate the full TS model. Finally, describe the API fetchers like so:
{{< highlight host="kuberocks-demo-ui" file="src/api/index.ts" >}}

```ts
import { Fetcher } from 'openapi-typescript-fetch'

import type { components, paths } from './openapi'

const fetcher = Fetcher.for<paths>()

type ArticleList = components['schemas']['ArticleListDto']
type Article = components['schemas']['ArticleDto']

const getArticles = fetcher.path('/api/Articles').method('get').create()
const getArticleBySlug = fetcher.path('/api/Articles/{slug}').method('get').create()

export type { Article, ArticleList }
export {
  getArticles,
  getArticleBySlug,
}
```

{{< /highlight >}}
We are now fully type-compliant with the API.

### Call the API

Let's create pretty basic list + detail Vue pages:
{{< highlight host="kuberocks-demo-ui" file="src/pages/articles/index.vue" >}}

```vue
<script lang="ts" setup>
import { getArticles } from '~/api'
import type { ArticleList } from '~/api'

const articles = ref<ArticleList[]>([])

async function loadArticles() {
  const { data } = await getArticles({
    page: 1,
    size: 10,
  })

  articles.value = data.articles
}

loadArticles()
</script>

<template>
  <RouterLink
    v-for="(article, i) in articles"
    :key="i"
    :to="`/articles/${article.slug}`"
  >
    <h3>{{ article.title }}</h3>
  </RouterLink>
</template>
```

{{< /highlight >}}
{{< highlight host="kuberocks-demo-ui" file="src/pages/articles/[slug].vue" >}}

```vue
<script lang="ts" setup>
import { getArticleBySlug } from '~/api'
import type { Article } from '~/api'

const props = defineProps<{ slug: string }>()

const article = ref<Article>()

const router = useRouter()

async function getArticle() {
  const { data } = await getArticleBySlug({ slug: props.slug })

  article.value = data
}

getArticle()
</script>

<template>
  <div v-if="article">
    <h1>{{ article.title }}</h1>
    <p>{{ article.description }}</p>
    <div>{{ article.body }}</div>
    <div>
      <button m-3 mt-8 text-sm btn @click="router.back()">
        Back
      </button>
    </div>
  </div>
</template>
```

{{< /highlight >}}

It should work flawlessly.
### Frontend CI/CD

The frontend CI is far simpler than the backend one. Create a new `demo-ui` pipeline:
{{< highlight host="demo-kube-flux" file="pipelines/demo-ui.yaml" >}}

```yml
resources:
  - name: version
    type: semver
    source:
      driver: git
      uri: ((git.url))/kuberocks/demo-ui
      branch: main
      file: version
      username: ((git.username))
      password: ((git.password))
      git_user: ((git.git-user))
      commit_message: ((git.commit-message))
  - name: source-code
    type: git
    icon: coffee
    source:
      uri: ((git.url))/kuberocks/demo-ui
      branch: main
      username: ((git.username))
      password: ((git.password))
  - name: docker-image
    type: registry-image
    icon: docker
    source:
      repository: ((registry.name))/kuberocks/demo-ui
      tag: latest
      username: ((registry.username))
      password: ((registry.password))

jobs:
  - name: build
    plan:
      - get: source-code
        trigger: true

      - task: build-source
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: node
              tag: 18-buster
          inputs:
            - name: source-code
              path: .
          outputs:
            - name: dist
              path: dist
          caches:
            - path: .pnpm-store
          run:
            path: /bin/sh
            args:
              - -ec
              - |
                corepack enable
                corepack prepare pnpm@latest-8 --activate
                pnpm config set store-dir .pnpm-store
                pnpm i
                pnpm lint
                pnpm build

      - task: build-image
        privileged: true
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: concourse/oci-build-task
          inputs:
            - name: source-code
              path: .
            - name: dist
              path: dist
          outputs:
            - name: image
          run:
            path: build
      - put: version
        params: { bump: patch }
      - put: docker-image
        params:
          additional_tags: version/number
          image: image/image.tar
```

{{< /highlight >}}
{{< highlight host="demo-kube-flux" file="pipelines/main.yaml" >}}

```yml
#...

jobs:
  - name: configure-pipelines
    plan:
      #...
      - set_pipeline: demo-ui
        file: ci/pipelines/demo-ui.yaml
```

{{< /highlight >}}

Apply it and put this nginx `Dockerfile` in the frontend project root:
{{< highlight host="kuberocks-demo-ui" file="Dockerfile" >}}

```Dockerfile
FROM nginx:alpine

COPY docker/nginx.conf /etc/nginx/conf.d/default.conf
COPY dist /usr/share/nginx/html
```

{{< /highlight >}}

After pushing, the whole CI should build correctly. Then add the image policy for auto update:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/images-demo-ui.yaml" >}}

```yml
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: demo-ui
  namespace: flux-system
spec:
  image: gitea.kube.rocks/kuberocks/demo-ui
  interval: 1m0s
  secretRef:
    name: dockerconfigjson
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: demo-ui
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: demo-ui
    namespace: flux-system
  policy:
    semver:
      range: 0.0.x
```

{{< /highlight >}}

The deployment:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo-ui.yaml" >}}

```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-ui
  namespace: kuberocks
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-ui
  template:
    metadata:
      labels:
        app: demo-ui
    spec:
      imagePullSecrets:
        - name: dockerconfigjson
      containers:
        - name: front
          image: gitea.kube.rocks/kuberocks/demo-ui:latest # {"$imagepolicy": "flux-system:demo-ui"}
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-ui
  namespace: kuberocks
spec:
  selector:
    app: demo-ui
  ports:
    - name: http
      port: 80
```

{{< /highlight >}}

After pushing, the demo UI container should be deployed. The very last step is to add a new route to the existing `IngressRoute` for the frontend:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}}

```yaml
#...
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
#...
spec:
  #...
  routes:
    - match: Host(`demo.kube.rocks`)
      kind: Rule
      services:
        - name: demo-ui
          port: http
    - match: Host(`demo.kube.rocks`) && PathPrefix(`/api`)
      #...
```

{{< /highlight >}}
Go to `https://demo.kube.rocks` to confirm that both the front & back apps are correctly connected!

![](frontend.png)

## Final check 🎊🏁🎊

Congratulations if you made it this far!!!

We have taken a fairly complete tour of Kubernetes cluster building in full GitOps mode.