diff --git a/content/posts/10-build-your-own-kubernetes-cluster/index.md b/content/posts/10-build-your-own-kubernetes-cluster/index.md index 96aaabb..4694aaa 100644 --- a/content/posts/10-build-your-own-kubernetes-cluster/index.md +++ b/content/posts/10-build-your-own-kubernetes-cluster/index.md @@ -1,5 +1,5 @@ --- -title: "Setup a HA Kubernetes cluster for less than $60 / month" +title: "Setup a HA Kubernetes cluster for less than $60 by month" date: 2023-10-01 description: "Follow this opinionated guide as starter-kit for your own Kubernetes platform..." tags: ["kubernetes"] diff --git a/content/posts/16-build-your-own-kubernetes-cluster-part-7/index.md b/content/posts/16-build-your-own-kubernetes-cluster-part-7/index.md index ca615ed..295be75 100644 --- a/content/posts/16-build-your-own-kubernetes-cluster-part-7/index.md +++ b/content/posts/16-build-your-own-kubernetes-cluster-part-7/index.md @@ -638,558 +638,6 @@ You may set `worker.replicas` as the number of nodes in your runner pool. As usu Then go to `https://concourse.kube.rocks` and log in with chosen credentials. -## Workflow - -It's now time to step back and think about how we'll use our CI. Our goal is to build our above dotnet Web API with Concourse CI as a container image, ready to deploy to our cluster through Flux. So we finish the complete CI/CD pipeline. To resume the scenario: - -1. Concourse CI check the repo periodically (pull model) for new code pushed and trigger a build if applicable -2. When container image build passed, Concourse CI push the new image to our private registry, which is already take care by Gitea -3. Flux, which can perfectly be in a different cluster, check the registry periodically (pull model), if new image tag detected, it will deploy it automatically to our cluster - -{{< alert >}} -Although it's the most secured way and configuration less, instead of default pull model, which is generally a check every minute, it's possible secured WebHook instead in order to reduce time between code push and deployment. -{{< /alert >}} - -The flow pipeline is pretty straightforward: - -{{< mermaid >}} -graph RL - subgraph R [Private registry] - C[/Container Registry/] - end - S -- scan --> R - S -- push --> J[(Flux repository)] - subgraph CD - D{Flux} -- check --> J - D -- deploy --> E((Kube API)) - end - subgraph S [Image Scanner] - I[Image Reflector] -- trigger --> H[Image Automation] - end - subgraph CI - A{Concourse} -- check --> B[(Code repository)] - A -- push --> C - F((Worker)) -- build --> A - end -{{< /mermaid >}} - -### The credentials - -We need to: - -1. Give read/write access to our Gitea and registry for Concourse. Note as we need write access in code repository for concourse because we need to store the new image tag. We'll using [semver resource](https://github.com/concourse/semver-resource) for that. -2. Give read registry credentials to Flux for regular image tag checking as well as Kubernetes in order to allow image pulling from the private registry. - -Let's create 2 new user `concourse` with admin acces and `container` as standard user on Gitea. 
Store these credentials on new variables: - -{{< highlight host="demo-kube-k3s" file="main.tf" >}} - -```tf -variable "concourse_git_username" { - type = string -} - -variable "concourse_git_password" { - type = string - sensitive = true -} - -variable "container_registry_username" { - type = string -} - -variable "container_registry_password" { - type = string - sensitive = true -} -``` - -{{< /highlight >}} - -{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}} - -```tf -concourse_git_username = "concourse" -concourse_git_password = "xxx" -container_registry_username = "container" -container_registry_password = "xxx" -``` - -{{< /highlight >}} - -Apply the credentials for Concourse: - -{{< highlight host="demo-kube-k3s" file="concourse.tf" >}} - -```tf -resource "kubernetes_secret_v1" "concourse_registry" { - metadata { - name = "registry" - namespace = "concourse-main" - } - - data = { - name = "gitea.${var.domain}" - username = var.concourse_git_username - password = var.concourse_git_password - } - - depends_on = [ - helm_release.concourse - ] -} - -resource "kubernetes_secret_v1" "concourse_git" { - metadata { - name = "git" - namespace = "concourse-main" - } - - data = { - url = "https://gitea.${var.domain}" - username = var.concourse_git_username - password = var.concourse_git_password - git-user = "Concourse CI " - commit-message = "bump to %version% [ci skip]" - } - - depends_on = [ - helm_release.concourse - ] -} -``` - -{{< /highlight >}} - -Note as we use `concourse-main` namespace, already created by Concourse Helm installer, which is a dedicated namespace for the default team `main`. Because of that, we should keep `depends_on` to ensure the namespace is created before the secrets. - -{{< alert >}} -Don't forget the `[ci skip]` in commit message, which is the commit for version bumping, otherwise you'll have an infinite loop of builds ! -{{< /alert >}} - -Then same for Flux and the namespace that will receive the app: - -{{< highlight host="demo-kube-k3s" file="flux.tf" >}} - -```tf -resource "kubernetes_secret_v1" "image_pull_secrets" { - for_each = toset(["flux-system", "kuberocks"]) - metadata { - name = "dockerconfigjson" - namespace = each.value - } - - type = "kubernetes.io/dockerconfigjson" - - data = { - ".dockerconfigjson" = jsonencode({ - auths = { - "gitea.${var.domain}" = { - auth = base64encode("${var.container_registry_username}:${var.container_registry_password}") - } - } - }) - } -} -``` - -{{< /highlight >}} - -{{< alert >}} -Create the namespace `kuberocks` first by `k create namespace kuberocks`, or you'll get an error. -{{< /alert >}} - -### Build and push the container image - -Now that all required credentials are in place, we have to tell Concourse how to check our repo and build our container image. This is done through a pipeline, which is a specific Concourse YAML file. - -#### The Dockerfile - -Firstly create following files in root of your repo that we'll use for building a production ready container image: - -{{< highlight host="kuberocks-demo" file=".dockerignore" >}} - -```txt -**/bin/ -**/obj/ -``` - -{{< /highlight >}} - -{{< highlight host="kuberocks-demo" file="Dockerfile" >}} - -```Dockerfile -FROM mcr.microsoft.com/dotnet/aspnet:7.0 - -WORKDIR /publish -COPY /publish . 
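# note: the publish folder is produced beforehand by the CI build task (dotnet publish -o publish), not inside this image build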
- -EXPOSE 80 -ENTRYPOINT ["dotnet", "KubeRocksDemo.dll"] -``` - -{{< /highlight >}} - -#### The pipeline - -Let's reuse our flux repository and create a file `pipelines/demo.yaml` with following content: - -{{< highlight host="demo-kube-flux" file="pipelines/demo.yaml" >}} - -```tf -resources: - - name: version - type: semver - source: - driver: git - uri: ((git.url))/kuberocks/demo - branch: main - file: version - username: ((git.username)) - password: ((git.password)) - git_user: ((git.git-user)) - commit_message: ((git.commit-message)) - - name: source-code - type: git - icon: coffee - source: - uri: ((git.url))/kuberocks/demo - branch: main - username: ((git.username)) - password: ((git.password)) - - name: docker-image - type: registry-image - icon: docker - source: - repository: ((registry.name))/kuberocks/demo - tag: latest - username: ((registry.username)) - password: ((registry.password)) - -jobs: - - name: build - plan: - - get: source-code - trigger: true - - - task: build-source - config: - platform: linux - image_resource: - type: registry-image - source: - repository: mcr.microsoft.com/dotnet/sdk - tag: "7.0" - inputs: - - name: source-code - path: . - outputs: - - name: binaries - path: publish - caches: - - path: /root/.nuget/packages - run: - path: /bin/sh - args: - - -ec - - | - dotnet format --verify-no-changes - dotnet build -c Release - dotnet publish src/KubeRocks.WebApi -c Release -o publish --no-restore --no-build - - - task: build-image - privileged: true - config: - platform: linux - image_resource: - type: registry-image - source: - repository: concourse/oci-build-task - inputs: - - name: source-code - path: . - - name: binaries - path: publish - outputs: - - name: image - run: - path: build - - put: version - params: { bump: patch } - - - put: docker-image - params: - additional_tags: version/number - image: image/image.tar -``` - -{{< /highlight >}} - -A bit verbose compared to other CI, but it gets the job done. The price of maximum flexibility. Now in order to apply it we may need to install `fly` CLI tool. Just a matter of `scoop install concourse-fly` on Windows. Then: - -```sh -# login to your Concourse instance -fly -t kuberocks login -c https://concourse.kube.rocks - -# create the pipeline and active it -fly -t kuberocks set-pipeline -p demo -c pipelines/demo.yaml -fly -t kuberocks unpause-pipeline -p demo -``` - -A build will be trigger immediately. You can follow it on Concourse UI. - -[![Concourse pipeline](concourse-pipeline.png)](concourse-pipeline.png) - -If everything is ok, check in `https://gitea.kube.rocks/admin/packages`, you should see a new image tag on your registry ! A new file `version` is automatically pushed in code repo in order to keep tracking of the image tag version. - -[![Concourse build](concourse-build.png)](concourse-build.png) - -#### Automatic pipeline update - -If you don't want to use fly CLI every time for any pipeline update, you maybe interested in `set_pipeline` feature. 
Create following file: - -{{< highlight host="demo-kube-flux" file="pipelines/main.yaml" >}} - -```tf -resources: - - name: ci - type: git - icon: git - source: - uri: https://github.com/kuberocks/demo-kube-flux - -jobs: - - name: configure-pipelines - plan: - - get: ci - trigger: true - - set_pipeline: demo - file: ci/pipelines/demo.yaml -``` - -{{< /highlight >}} - -Then apply it: - -```sh -fly -t kuberocks set-pipeline -p main -c pipelines/main.yaml -``` - -Now you can manually trigger the pipeline, or wait for the next check, and it will update the demo pipeline automatically. If you're using a private repo for your pipelines, you may need to add a new secret for the git credentials and set `username` and `password` accordingly. - -You almost no need of fly CLI anymore, except for adding new pipelines ! You can even go further with `set_pipeline: self` which is always an experimental feature. - -### The deployment - -If you followed the previous parts of this tutorial, you should have clue about how to deploy your app. Let's create deploy it with Flux: - -{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: demo - namespace: kuberocks -spec: - replicas: 1 - selector: - matchLabels: - app: demo - template: - metadata: - labels: - app: demo - spec: - imagePullSecrets: - - name: dockerconfigjson - containers: - - name: api - image: gitea.kube.rocks/kuberocks/demo:latest - ports: - - containerPort: 80 ---- -apiVersion: v1 -kind: Service -metadata: - name: demo - namespace: kuberocks - labels: - app: demo -spec: - selector: - app: demo - ports: - - name: http - port: 80 ---- -apiVersion: traefik.io/v1alpha1 -kind: IngressRoute -metadata: - name: demo - namespace: kuberocks -spec: - entryPoints: - - websecure - routes: - - match: Host(`demo.kube.rocks`) - kind: Rule - services: - - name: demo - port: http -``` - -{{< /highlight >}} - -Note as we have set `imagePullSecrets` in order to use fetch previously created credentials for private registry access. The rest is pretty straightforward. Once pushed, after about 1 minute, you should see your app deployed in `https://demo.kube.rocks`. Check the API response on `https://demo.kube.rocks/WeatherForecast`. - -However, one last thing is missing: the automatic deployment. - -#### Image automation - -If you checked the above flowchart, you'll note that Image automation is a separate process from Flux that only scan the registry for new image tags and push any new tag to Flux repository. Then Flux will detect the new commit in Git repository, including the new tag, and automatically deploy it to K8s. - -By default, if not any strategy is set, K8s will do a **rolling deployment**, i.e. creating new replica firstly be terminating the old one. This will prevent any downtime on the condition of you set as well **readiness probe** in your pod spec, which is a later topic. 
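
If you want to watch a rollout while it happens, here is a quick sketch with plain kubectl commands (adapt namespace and deployment name if yours differ):

```sh
# follow the rolling update triggered by the new image tag
kubectl -n kuberocks rollout status deployment/demo

# confirm the deployment strategy currently in use (RollingUpdate by default)
kubectl -n kuberocks get deployment demo -o jsonpath='{.spec.strategy}'
```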
- -Let's define the image update automation task for main Flux repository: - -{{< highlight host="demo-kube-flux" file="clusters/demo/flux-add-ons/image-update-automation.yaml" >}} - -```yaml -apiVersion: image.toolkit.fluxcd.io/v1beta1 -kind: ImageUpdateAutomation -metadata: - name: flux-system - namespace: flux-system -spec: - interval: 1m0s - sourceRef: - kind: GitRepository - name: flux-system - git: - checkout: - ref: - branch: main - commit: - author: - email: fluxcdbot@kube.rocks - name: fluxcdbot - messageTemplate: "{{range .Updated.Images}}{{println .}}{{end}}" - push: - branch: main - update: - path: ./clusters/demo - strategy: Setters -``` - -{{< /highlight >}} - -Now we need to Image Reflector how to scan the repository, as well as the attached policy for tag update: - -{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/images-demo.yaml" >}} - -```yaml -apiVersion: image.toolkit.fluxcd.io/v1beta1 -kind: ImageRepository -metadata: - name: demo - namespace: flux-system -spec: - image: gitea.kube.rocks/kuberocks/demo - interval: 1m0s - secretRef: - name: dockerconfigjson ---- -apiVersion: image.toolkit.fluxcd.io/v1beta1 -kind: ImagePolicy -metadata: - name: demo - namespace: flux-system -spec: - imageRepositoryRef: - name: demo - namespace: flux-system - policy: - semver: - range: 0.0.x -``` - -{{< /highlight >}} - -{{< alert >}} -As usual, don't forget `dockerconfigjson` for private registry access. -{{< /alert >}} - -And finally edit the deployment to use the policy by adding a specific marker next to the image tag: - -{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} - -```yaml -# ... - containers: - - name: api - image: gitea.kube.rocks/kuberocks/demo:latest # {"$imagepolicy": "flux-system:demo"} -# ... -``` - -{{< /highlight >}} - -It will tell to `Image Automation` where to update the tag in the Flux repository. The format is `{"$imagepolicy": ":"}`. - -Push the changes and wait for about 1 minute then pull the flux repo. You should see a new commit coming and `latest` should be replaced by an explicit tag like so: - -{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} - -```yaml -# ... - containers: - - name: api - image: gitea.kube.rocks/kuberocks/demo:0.0.1 # {"$imagepolicy": "flux-system:demo"} -# ... -``` - -{{< /highlight >}} - -Check if the pod as been correctly updated with `kgpo -n kuberocks`. Use `kd -n kuberocks deploy/demo` to check if the same tag is here and no `latest`. - -```txt -Pod Template: - Labels: app=demo - Containers: - api: - Image: gitea.kube.rocks/kuberocks/demo:0.0.1 - Port: 80/TCP -``` - -### Retest all workflow - -Damn, I think we're done 🎉 ! It's time retest the full process. Add new controller endpoint from our demo project and push the code: - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Controllers/WeatherForecastController.cs" >}} - -```cs -//... -public class WeatherForecastController : ControllerBase -{ - //... - - [HttpGet("{id}", Name = "GetWeatherForecastById")] - public WeatherForecast GetById(int id) - { - return new WeatherForecast - { - Date = DateOnly.FromDateTime(DateTime.Now.AddDays(id)), - TemperatureC = Random.Shared.Next(-20, 55), - Summary = Summaries[Random.Shared.Next(Summaries.Length)] - }; - } -} -``` - -{{< /highlight >}} - -Wait the pod to be updated, then check the new endpoint `https://demo.kube.rocks/WeatherForecast/1`. The API should return a new unique random weather forecast with the tomorrow date. 
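
If you prefer the terminal for this final check, a quick sketch (assuming curl and kubectl access to the cluster):

```sh
# call the new endpoint (any id should work)
curl -s https://demo.kube.rocks/WeatherForecast/1

# double check which image tag the deployment is now running
kubectl -n kuberocks get deployment demo \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```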
- ## 6th check ✅ -We have everything we need for app building with automatic deployment ! Go [next part]({{< ref "/posts/17-build-your-own-kubernetes-cluster-part-8" >}}) for advanced tracing / load testing ! +We have everything we need for app building with automatic deployment ! Go [next part]({{< ref "/posts/17-build-your-own-kubernetes-cluster-part-8" >}}) for creating a complete CI/CD workflow ! diff --git a/content/posts/16-build-your-own-kubernetes-cluster-part-7/concourse-build.png b/content/posts/17-build-your-own-kubernetes-cluster-part-8/concourse-build.png similarity index 100% rename from content/posts/16-build-your-own-kubernetes-cluster-part-7/concourse-build.png rename to content/posts/17-build-your-own-kubernetes-cluster-part-8/concourse-build.png diff --git a/content/posts/16-build-your-own-kubernetes-cluster-part-7/concourse-pipeline.png b/content/posts/17-build-your-own-kubernetes-cluster-part-8/concourse-pipeline.png similarity index 100% rename from content/posts/16-build-your-own-kubernetes-cluster-part-7/concourse-pipeline.png rename to content/posts/17-build-your-own-kubernetes-cluster-part-8/concourse-pipeline.png diff --git a/content/posts/17-build-your-own-kubernetes-cluster-part-8/index.md b/content/posts/17-build-your-own-kubernetes-cluster-part-8/index.md index cd80100..583a0d4 100644 --- a/content/posts/17-build-your-own-kubernetes-cluster-part-8/index.md +++ b/content/posts/17-build-your-own-kubernetes-cluster-part-8/index.md @@ -1,5 +1,5 @@ --- -title: "Setup a HA Kubernetes cluster Part VIII - Further development & OpenTelemetry" +title: "Setup a HA Kubernetes cluster Part VIII - Create a CI+CD workflow" date: 2023-10-08 description: "Follow this opinionated guide as starter-kit for your own Kubernetes platform..." tags: ["kubernetes", "development", "opentelemetry", "tracing", "tempo"] @@ -12,1117 +12,560 @@ Be free from AWS/Azure/GCP by building a production grade On-Premise Kubernetes This is the **Part VIII** of more global topic tutorial. [Back to first part]({{< ref "/posts/10-build-your-own-kubernetes-cluster" >}}) for intro. -## Real DB App sample +## Workflow -Before go any further, let's add some DB usage to our sample app. We'll use the classical `Articles<->Authors<->Comments` relationships. First create `docker-compose.yml` file in root of demo project: +It's now time to step back and think about how we'll use our CI. Our goal is to build our above dotnet Web API with Concourse CI as a container image, ready to deploy to our cluster through Flux. So we finish the complete CI/CD pipeline. To resume the scenario: -{{< highlight host="kuberocks-demo" file="docker-compose.yml" >}} - -```yaml -version: "3" - -services: - db: - image: postgres:15 - environment: - POSTGRES_USER: main - POSTGRES_PASSWORD: main - POSTGRES_DB: main - ports: - - 5432:5432 -``` - -{{< /highlight >}} - -Launch it with `docker compose up -d` and check database running with `docker ps`. - -Time to create basic code that list plenty of articles from an API endpoint. Go back to `kuberocks-demo` and create a new separate project dedicated to app logic: - -```sh -dotnet new classlib -o src/KubeRocks.Application -dotnet sln add src/KubeRocks.Application -dotnet add src/KubeRocks.WebApi reference src/KubeRocks.Application - -dotnet add src/KubeRocks.Application package Microsoft.EntityFrameworkCore -dotnet add src/KubeRocks.Application package Npgsql.EntityFrameworkCore.PostgreSQL -dotnet add src/KubeRocks.WebApi package Microsoft.EntityFrameworkCore.Design -``` +1. 
Concourse CI check the repo periodically (pull model) for new code pushed and trigger a build if applicable +2. When container image build passed, Concourse CI push the new image to our private registry, which is already take care by Gitea +3. Flux, which can perfectly be in a different cluster, check the registry periodically (pull model), if new image tag detected, it will deploy it automatically to our cluster {{< alert >}} -This is not a DDD course ! We will keep it simple and focus on Kubernetes part. +Although it's the most secured way and configuration less, instead of default pull model, which is generally a check every minute, it's possible secured WebHook instead in order to reduce time between code push and deployment. {{< /alert >}} -### Define the entities +The flow pipeline is pretty straightforward: -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Entities/Article.cs" >}} +{{< mermaid >}} +graph RL + subgraph R [Private registry] + C[/Container Registry/] + end + S -- scan --> R + S -- push --> J[(Flux repository)] + subgraph CD + D{Flux} -- check --> J + D -- deploy --> E((Kube API)) + end + subgraph S [Image Scanner] + I[Image Reflector] -- trigger --> H[Image Automation] + end + subgraph CI + A{Concourse} -- check --> B[(Code repository)] + A -- push --> C + F((Worker)) -- build --> A + end +{{< /mermaid >}} -```cs -using System.ComponentModel.DataAnnotations; +## CI part -namespace KubeRocks.Application.Entities; +### The credentials -public class Article -{ - public int Id { get; set; } +We need to: - public required User Author { get; set; } +1. Give read/write access to our Gitea and registry for Concourse. Note as we need write access in code repository for concourse because we need to store the new image tag. We'll using [semver resource](https://github.com/concourse/semver-resource) for that. +2. Give read registry credentials to Flux for regular image tag checking as well as Kubernetes in order to allow image pulling from the private registry. - [MaxLength(255)] - public required string Title { get; set; } - [MaxLength(255)] - public required string Slug { get; set; } - public required string Description { get; set; } - public required string Body { get; set; } +Let's create 2 new user `concourse` with admin acces and `container` as standard user on Gitea. 
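
If you prefer to script this instead of clicking through the Gitea UI, Gitea exposes an admin API. A rough sketch, assuming you already generated an admin access token (emails and passwords below are placeholders to adapt):

```sh
export GITEA_URL=https://gitea.kube.rocks
export GITEA_TOKEN=xxx # an admin access token from your Gitea profile

# create the two service accounts through Gitea's admin API
curl -X POST "$GITEA_URL/api/v1/admin/users" \
  -H "Authorization: token $GITEA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"username":"concourse","email":"concourse@kube.rocks","password":"xxx","must_change_password":false}'

curl -X POST "$GITEA_URL/api/v1/admin/users" \
  -H "Authorization: token $GITEA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"username":"container","email":"container@kube.rocks","password":"xxx","must_change_password":false}'

# then promote the concourse user to admin from the Gitea site administration UI
```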
Store these credentials on new variables: - public DateTime CreatedAt { get; set; } = DateTime.UtcNow; - public DateTime UpdatedAt { get; set; } = DateTime.UtcNow; +{{< highlight host="demo-kube-k3s" file="main.tf" >}} - public ICollection Comments { get; } = new List(); +```tf +variable "concourse_git_username" { + type = string +} + +variable "concourse_git_password" { + type = string + sensitive = true +} + +variable "container_registry_username" { + type = string +} + +variable "container_registry_password" { + type = string + sensitive = true } ``` {{< /highlight >}} -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Entities/Comment.cs" >}} +{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}} -```cs -namespace KubeRocks.Application.Entities; - -public class Comment -{ - public int Id { get; set; } - - public required Article Article { get; set; } - public required User Author { get; set; } - - public required string Body { get; set; } - - public DateTime CreatedAt { get; set; } = DateTime.UtcNow; -} +```tf +concourse_git_username = "concourse" +concourse_git_password = "xxx" +container_registry_username = "container" +container_registry_password = "xxx" ``` {{< /highlight >}} -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Entities/User.cs" >}} +Apply the credentials for Concourse: -```cs -using System.ComponentModel.DataAnnotations; +{{< highlight host="demo-kube-k3s" file="concourse.tf" >}} -namespace KubeRocks.Application.Entities; - -public class User -{ - public int Id { get; set; } - - [MaxLength(255)] - public required string Name { get; set; } - - [MaxLength(255)] - public required string Email { get; set; } - - public ICollection
<Article> Articles { get; } = new List<Article>
(); - public ICollection Comments { get; } = new List(); -} -``` - -{{< /highlight >}} - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Contexts/AppDbContext.cs" >}} - -```cs -namespace KubeRocks.Application.Contexts; - -using KubeRocks.Application.Entities; -using Microsoft.EntityFrameworkCore; - -public class AppDbContext : DbContext -{ - public DbSet Users => Set(); - public DbSet
<Article> Articles => Set<Article>
(); - public DbSet Comments => Set(); - - public AppDbContext(DbContextOptions options) : base(options) - { - } - - protected override void OnModelCreating(ModelBuilder modelBuilder) - { - base.OnModelCreating(modelBuilder); - - modelBuilder.Entity() - .HasIndex(u => u.Email).IsUnique() - ; - - modelBuilder.Entity
() - .HasIndex(u => u.Slug).IsUnique() - ; - } -} -``` - -{{< /highlight >}} - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Extensions/ServiceExtensions.cs" >}} - -```cs -using KubeRocks.Application.Contexts; -using Microsoft.EntityFrameworkCore; -using Microsoft.Extensions.Configuration; -using Microsoft.Extensions.DependencyInjection; - -namespace KubeRocks.Application.Extensions; - -public static class ServiceExtensions -{ - public static IServiceCollection AddKubeRocksServices(this IServiceCollection services, IConfiguration configuration) - { - return services.AddDbContext((options) => - { - options.UseNpgsql(configuration.GetConnectionString("DefaultConnection")); - }); - } -} -``` - -{{< /highlight >}} - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} - -```cs -using KubeRocks.Application.Extensions; - -//... - -// Add services to the container. -builder.Services.AddKubeRocksServices(builder.Configuration); - -//... -``` - -{{< /highlight >}} - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/appsettings.Development.json" >}} - -```json -{ - //... - "ConnectionStrings": { - "DefaultConnection": "Host=localhost;Username=main;Password=main;Database=main;" +```tf +resource "kubernetes_secret_v1" "concourse_registry" { + metadata { + name = "registry" + namespace = "concourse-main" } -} -``` -{{< /highlight >}} - -Now as all models are created, we can generate migrations and update database accordingly: - -```sh -dotnet new tool-manifest -dotnet tool install dotnet-ef - -dotnet dotnet-ef -p src/KubeRocks.Application -s src/KubeRocks.WebApi migrations add InitialCreate -dotnet dotnet-ef -p src/KubeRocks.Application -s src/KubeRocks.WebApi database update -``` - -### Inject some dummy data - -We'll use Bogus on a separate console project: - -```sh -dotnet new console -o src/KubeRocks.Console -dotnet sln add src/KubeRocks.Console -dotnet add src/KubeRocks.WebApi reference src/KubeRocks.Application -dotnet add src/KubeRocks.Console package Bogus -dotnet add src/KubeRocks.Console package ConsoleAppFramework -dotnet add src/KubeRocks.Console package Respawn -``` - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Console/appsettings.json" >}} - -```json -{ - "ConnectionStrings": { - "DefaultConnection": "Host=localhost;Username=main;Password=main;Database=main;" + data = { + name = "gitea.${var.domain}" + username = var.concourse_git_username + password = var.concourse_git_password } + + depends_on = [ + helm_release.concourse + ] +} + +resource "kubernetes_secret_v1" "concourse_git" { + metadata { + name = "git" + namespace = "concourse-main" + } + + data = { + url = "https://gitea.${var.domain}" + username = var.concourse_git_username + password = var.concourse_git_password + git-user = "Concourse CI " + commit-message = "bump to %version% [ci skip]" + } + + depends_on = [ + helm_release.concourse + ] } ``` {{< /highlight >}} -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Console/KubeRocks.Console.csproj" >}} +Note as we use `concourse-main` namespace, already created by Concourse Helm installer, which is a dedicated namespace for the default team `main`. Because of that, we should keep `depends_on` to ensure the namespace is created before the secrets. -```xml - +{{< alert >}} +Don't forget the `[ci skip]` in commit message, which is the commit for version bumping, otherwise you'll have an infinite loop of builds ! 
+{{< /alert >}} - +Then same for Flux and the namespace that will receive the app: - - - $(MSBuildProjectDirectory) - +{{< highlight host="demo-kube-k3s" file="flux.tf" >}} - - - PreserveNewest - - +```tf +resource "kubernetes_secret_v1" "image_pull_secrets" { + for_each = toset(["flux-system", "kuberocks"]) + metadata { + name = "dockerconfigjson" + namespace = each.value + } - -``` + type = "kubernetes.io/dockerconfigjson" -{{< /highlight >}} - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Console/Commands/DbCommand.cs" >}} - -```cs -using Bogus; -using KubeRocks.Application.Contexts; -using KubeRocks.Application.Entities; -using Microsoft.EntityFrameworkCore; -using Npgsql; -using Respawn; -using Respawn.Graph; - -namespace KubeRocks.Console.Commands; - -[Command("db")] -public class DbCommand : ConsoleAppBase -{ - private readonly AppDbContext _context; - - public DbCommand(AppDbContext context) - { - _context = context; - } - - [Command("migrate", "Migrate database")] - public async Task Migrate() - { - await _context.Database.MigrateAsync(); - } - - [Command("fresh", "Wipe data")] - public async Task FreshData() - { - await Migrate(); - - using var conn = new NpgsqlConnection(_context.Database.GetConnectionString()); - - await conn.OpenAsync(); - - var respawner = await Respawner.CreateAsync(conn, new RespawnerOptions - { - TablesToIgnore = new Table[] { "__EFMigrationsHistory" }, - DbAdapter = DbAdapter.Postgres - }); - - await respawner.ResetAsync(conn); - } - - [Command("seed", "Fake data")] - public async Task SeedData() - { - await Migrate(); - await FreshData(); - - var users = new Faker() - .RuleFor(m => m.Name, f => f.Person.FullName) - .RuleFor(m => m.Email, f => f.Person.Email) - .Generate(50); - - await _context.Users.AddRangeAsync(users); - await _context.SaveChangesAsync(); - - var articles = new Faker
() - .RuleFor(a => a.Title, f => f.Lorem.Sentence().TrimEnd('.')) - .RuleFor(a => a.Description, f => f.Lorem.Paragraphs(1)) - .RuleFor(a => a.Body, f => f.Lorem.Paragraphs(5)) - .RuleFor(a => a.Author, f => f.PickRandom(users)) - .RuleFor(a => a.CreatedAt, f => f.Date.Recent(90).ToUniversalTime()) - .RuleFor(a => a.Slug, (f, a) => a.Title.Replace(" ", "-").ToLowerInvariant()) - .Generate(500) - .Select(a => - { - new Faker() - .RuleFor(a => a.Body, f => f.Lorem.Paragraphs(2)) - .RuleFor(a => a.Author, f => f.PickRandom(users)) - .RuleFor(a => a.CreatedAt, f => f.Date.Recent(7).ToUniversalTime()) - .Generate(new Faker().Random.Number(10)) - .ForEach(c => a.Comments.Add(c)); - - return a; - }); - - await _context.Articles.AddRangeAsync(articles); - await _context.SaveChangesAsync(); - } -} -``` - -{{< /highlight >}} - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Console/Program.cs" >}} - -```cs -using KubeRocks.Application.Extensions; -using KubeRocks.Console.Commands; - -var builder = ConsoleApp.CreateBuilder(args); - -builder.ConfigureServices((ctx, services) => -{ - services.AddKubeRocksServices(ctx.Configuration); -}); - -var app = builder.Build(); - -app.AddSubCommands(); - -app.Run(); -``` - -{{< /highlight >}} - -Then launch the command: - -```sh -dotnet run --project src/KubeRocks.Console db seed -``` - -Ensure with your favorite DB client that data is correctly inserted. - -### Define endpoint access - -All that's left is to create the endpoint. Let's define all DTO first: - -```sh -dotnet add src/KubeRocks.WebApi package Mapster -``` - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Models/ArticleListDto.cs" >}} - -```cs -namespace KubeRocks.WebApi.Models; - -public class ArticleListDto -{ - public required string Title { get; set; } - - public required string Slug { get; set; } - - public required string Description { get; set; } - - public required string Body { get; set; } - - public DateTime CreatedAt { get; set; } - - public DateTime UpdatedAt { get; set; } - - public required AuthorDto Author { get; set; } -} -``` - -{{< /highlight >}} - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Models/ArticleDto.cs" >}} - -```cs -namespace KubeRocks.WebApi.Models; - -public class ArticleDto : ArticleListDto -{ - public List Comments { get; set; } = new(); -} -``` - -{{< /highlight >}} - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Models/AuthorDto.cs" >}} - -```cs -namespace KubeRocks.WebApi.Models; - -public class AuthorDto -{ - public required string Name { get; set; } -} -``` - -{{< /highlight >}} - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Models/CommentDto.cs" >}} - -```cs -namespace KubeRocks.WebApi.Models; - -public class CommentDto -{ - public required string Body { get; set; } - - public DateTime CreatedAt { get; set; } - - public required AuthorDto Author { get; set; } -} -``` - -{{< /highlight >}} - -And finally the controller: - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Controllers/ArticlesController.cs" >}} - -```cs -using KubeRocks.Application.Contexts; -using KubeRocks.WebApi.Models; -using Mapster; -using Microsoft.AspNetCore.Mvc; -using Microsoft.EntityFrameworkCore; - -namespace KubeRocks.WebApi.Controllers; - -[ApiController] -[Route("[controller]")] -public class ArticlesController -{ - private readonly AppDbContext _context; - - public record ArticlesResponse(IEnumerable Articles, int ArticlesCount); - - public ArticlesController(AppDbContext context) - { - 
_context = context; - } - - [HttpGet(Name = "GetArticles")] - public async Task Get([FromQuery] int page = 1, [FromQuery] int size = 10) - { - var articles = await _context.Articles - .OrderByDescending(a => a.Id) - .Skip((page - 1) * size) - .Take(size) - .ProjectToType() - .ToListAsync(); - - var articlesCount = await _context.Articles.CountAsync(); - - return new ArticlesResponse(articles, articlesCount); - } - - [HttpGet("{slug}", Name = "GetArticleBySlug")] - public async Task> GetBySlug(string slug) - { - var article = await _context.Articles - .Include(a => a.Author) - .Include(a => a.Comments.OrderByDescending(c => c.Id)) - .ThenInclude(c => c.Author) - .FirstOrDefaultAsync(a => a.Slug == slug); - - if (article is null) - { - return new NotFoundResult(); + data = { + ".dockerconfigjson" = jsonencode({ + auths = { + "gitea.${var.domain}" = { + auth = base64encode("${var.container_registry_username}:${var.container_registry_password}") } - - return article.Adapt(); - } -} -``` - -{{< /highlight >}} - -Launch the app and check that `/Articles` and `/Articles/{slug}` endpoints are working as expected. - -## Production grade deployment - -### Database connection - -It's time to connect our app to the production database. Create a demo DB & user through pgAdmin and create the appropriate secret: - -{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/secrets-demo-db.yaml" >}} - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: demo-db -type: Opaque -data: - password: ZGVtbw== -``` - -{{< /highlight >}} - -Generate the according sealed secret like previously chapters with `kubeseal` under `sealed-secret-demo-db.yaml` file and delete `secret-demo-db.yaml`. - -```sh -cat clusters/demo/kuberocks/secret-demo.yaml | kubeseal --format=yaml --cert=pub-sealed-secrets.pem > clusters/demo/kuberocks/sealed-secret-demo.yaml -rm clusters/demo/kuberocks/secret-demo.yaml -``` - -Let's inject the appropriate connection string as environment variable: - -{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} - -```yaml -# ... -spec: - # ... - template: - # ... - spec: - # ... - containers: - - name: api - # ... - env: - - name: DB_PASSWORD - valueFrom: - secretKeyRef: - name: demo-db - key: password - - name: ConnectionStrings__DefaultConnection - value: Host=postgresql-primary.postgres;Username=demo;Password='$(DB_PASSWORD)';Database=demo; -#... -``` - -{{< /highlight >}} - -### Database migration - -The DB connection should be done, but the database isn't migrated yet, the easiest is to add a migration step directly in startup app: - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} - -```cs -// ... -var app = builder.Build(); - -using var scope = app.Services.CreateScope(); -await using var dbContext = scope.ServiceProvider.GetRequiredService(); -await dbContext.Database.MigrateAsync(); - -// ... -``` - -{{< /highlight >}} - -The database should be migrated on first app launch on next deploy. Go to `https://demo.kube.rocks/Articles` to confirm all is ok. It should return next empty response: - -```json -{ - articles: [] - articlesCount: 0 -} -``` - -{{< alert >}} -Don't hesitate to abuse of `klo -n kuberocks deploy/demo` to debug any troubleshooting when pod is on error state. -{{< /alert >}} - -### Database seeding - -We'll try to seed the database directly from local. 
Change temporarily the connection string in `appsettings.json` to point to the production database: - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Console/appsettings.json" >}} - -```json -{ - "ConnectionStrings": { - "DefaultConnection": "Host=localhost:54321;Username=demo;Password='xxx';Database=demo;" + } + }) } } ``` {{< /highlight >}} -Then: - -```sh -# forward the production database port to local -kpf svc/postgresql -n postgres 54321:tcp-postgresql -# launch the seeding command -dotnet run --project src/KubeRocks.Console db seed -``` - {{< alert >}} -We may obviously never do this on real production database, but as it's only for seeding, it will never concern them. +Create the namespace `kuberocks` first by `k create namespace kuberocks`, or you'll get an error. {{< /alert >}} -Return to `https://demo.kube.rocks/Articles` to confirm articles are correctly returned. +### The Dockerfile -### Better logging with Serilog +Now that all required credentials are in place, we have to tell Concourse how to check our repo and build our container image. This is done through a pipeline, which is a specific Concourse YAML file. -Default ASP.NET logging are not very standard, let's add Serilog for real requests logging with duration and status code: +Firstly create following files in root of your repo that we'll use for building a production ready container image: -```sh -dotnet add src/KubeRocks.WebApi package Serilog.AspNetCore -``` +{{< highlight host="kuberocks-demo" file=".dockerignore" >}} -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} - -```cs -// ... - -builder.Host.UseSerilog((ctx, cfg) => cfg - .ReadFrom.Configuration(ctx.Configuration) - .WriteTo.Console() -); - -var app = builder.Build(); - -app.UseSerilogRequestLogging(); - -// ... +```txt +**/bin/ +**/obj/ ``` {{< /highlight >}} -Then filtering through Loki stack should by far better. +{{< highlight host="kuberocks-demo" file="Dockerfile" >}} -### Liveness & readiness +```Dockerfile +FROM mcr.microsoft.com/dotnet/aspnet:7.0 -All real production app should have liveness & readiness probes. It generally consists on particular URL which return the current health app status. We'll also include the DB access health. Let's add the standard `/healthz` endpoint, which is dead simple in ASP.NET Core: +WORKDIR /publish +COPY /publish . -```sh -dotnet add src/KubeRocks.WebApi package Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore -``` - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} - -```cs -// ... - -builder.Services - .AddHealthChecks() - .AddDbContextCheck(); - -var app = builder.Build(); - -// ... - -app.MapControllers(); -app.MapHealthChecks("/healthz"); - -app.Run(); +EXPOSE 80 +ENTRYPOINT ["dotnet", "KubeRocksDemo.dll"] ``` {{< /highlight >}} -And you're done ! Go to `https://demo.kube.rocks/healthz` to confirm it's working. Try to stop the database with `docker compose stop` and check the healthz endpoint again, it should return `503` status code. +### The pipeline -{{< alert >}} -The `Microsoft.Extensions.Diagnostics.HealthChecks` package is very extensible and you can add any custom check to enrich the health app status. 
-{{< /alert >}} +Let's reuse our flux repository and create a file `pipelines/demo.yaml` with following content: -And finally the probes: +{{< highlight host="demo-kube-flux" file="pipelines/demo.yaml" >}} + +```tf +resources: + - name: version + type: semver + source: + driver: git + uri: ((git.url))/kuberocks/demo + branch: main + file: version + username: ((git.username)) + password: ((git.password)) + git_user: ((git.git-user)) + commit_message: ((git.commit-message)) + - name: source-code + type: git + icon: coffee + source: + uri: ((git.url))/kuberocks/demo + branch: main + username: ((git.username)) + password: ((git.password)) + - name: docker-image + type: registry-image + icon: docker + source: + repository: ((registry.name))/kuberocks/demo + tag: latest + username: ((registry.username)) + password: ((registry.password)) + +jobs: + - name: build + plan: + - get: source-code + trigger: true + + - task: build-source + config: + platform: linux + image_resource: + type: registry-image + source: + repository: mcr.microsoft.com/dotnet/sdk + tag: "7.0" + inputs: + - name: source-code + path: . + outputs: + - name: binaries + path: publish + caches: + - path: /root/.nuget/packages + run: + path: /bin/sh + args: + - -ec + - | + dotnet format --verify-no-changes + dotnet build -c Release + dotnet publish src/KubeRocks.WebApi -c Release -o publish --no-restore --no-build + + - task: build-image + privileged: true + config: + platform: linux + image_resource: + type: registry-image + source: + repository: concourse/oci-build-task + inputs: + - name: source-code + path: . + - name: binaries + path: publish + outputs: + - name: image + run: + path: build + - put: version + params: { bump: patch } + + - put: docker-image + params: + additional_tags: version/number + image: image/image.tar +``` + +{{< /highlight >}} + +A bit verbose compared to other CI, but it gets the job done. The price of maximum flexibility. Now in order to apply it we may need to install `fly` CLI tool. Just a matter of `scoop install concourse-fly` on Windows. Then: + +```sh +# login to your Concourse instance +fly -t kuberocks login -c https://concourse.kube.rocks + +# create the pipeline and active it +fly -t kuberocks set-pipeline -p demo -c pipelines/demo.yaml +fly -t kuberocks unpause-pipeline -p demo +``` + +A build will be trigger immediately. You can follow it on Concourse UI. + +[![Concourse pipeline](concourse-pipeline.png)](concourse-pipeline.png) + +If everything is ok, check in `https://gitea.kube.rocks/admin/packages`, you should see a new image tag on your registry ! A new file `version` is automatically pushed in code repo in order to keep tracking of the image tag version. + +[![Concourse build](concourse-build.png)](concourse-build.png) + +### Automatic pipeline update + +If you don't want to use fly CLI every time for any pipeline update, you maybe interested in `set_pipeline` feature. Create following file: + +{{< highlight host="demo-kube-flux" file="pipelines/main.yaml" >}} + +```tf +resources: + - name: ci + type: git + icon: git + source: + uri: https://github.com/kuberocks/demo-kube-flux + +jobs: + - name: configure-pipelines + plan: + - get: ci + trigger: true + - set_pipeline: demo + file: ci/pipelines/demo.yaml +``` + +{{< /highlight >}} + +Then apply it: + +```sh +fly -t kuberocks set-pipeline -p main -c pipelines/main.yaml +``` + +Now you can manually trigger the pipeline, or wait for the next check, and it will update the demo pipeline automatically. 
If you're using a private repo for your pipelines, you may need to add a new secret for the git credentials and set `username` and `password` accordingly. + +You almost no need of fly CLI anymore, except for adding new pipelines ! You can even go further with `set_pipeline: self` which is always an experimental feature. + +## CD part + +### The deployment + +If you followed the previous parts of this tutorial, you should have clue about how to deploy your app. Let's create deploy it with Flux: {{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} ```yaml -# ... -spec: - # ... - template: - # ... - spec: - # ... - containers: - - name: api - # ... - livenessProbe: - httpGet: - path: /healthz - port: 80 - initialDelaySeconds: 10 - periodSeconds: 10 - readinessProbe: - httpGet: - path: /healthz - port: 80 - initialDelaySeconds: 10 - periodSeconds: 10 -``` - -{{< /highlight >}} - -{{< alert >}} -Be aware of difference between `liveness` and `readiness` probes. The first one is used to restart the pod if it's not responding, the second one is used to tell the pod is not ready to receive traffic, which is vital for preventing any downtime. -When **Rolling Update** strategy is used (the default), the old pod is not killed until the new one is ready (aka healthy). -{{< /alert >}} - -## Telemetry - -The last step but not least missing for a total integration with our monitored Kubernetes cluster is to add some telemetry to our app. We'll use `OpenTelemetry` for that, which becomes the standard library for metrics and tracing, by providing good integration to many languages. - -### Application metrics - -Install minimal ASP.NET Core metrics is really a no-brainer: - -```sh -dotnet add src/KubeRocks.WebApi package OpenTelemetry.AutoInstrumentation --prerelease -dotnet add src/KubeRocks.WebApi package OpenTelemetry.Extensions.Hosting --prerelease -dotnet add src/KubeRocks.WebApi package OpenTelemetry.Exporter.Prometheus.AspNetCore --prerelease -``` - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} - -```cs -//... - -builder.Services.AddOpenTelemetry() - .WithMetrics(b => - { - b - .AddAspNetCoreInstrumentation() - .AddPrometheusExporter(); - }); - -var app = builder.Build(); - -app.UseOpenTelemetryPrometheusScrapingEndpoint(); - -//... -``` - -{{< /highlight >}} - -Relaunch app and go to `https://demo.kube.rocks/metrics` to confirm it's working. It should show metrics after each endpoint call, simply try `https://demo.kube.rocks/Articles`. - -{{< alert >}} -.NET metrics are currently pretty basic, but the next .NET 8 version will provide far better metrics from internal components allowing some [useful dashboard](https://github.com/JamesNK/aspnetcore-grafana). -{{< /alert >}} - -#### Hide internal endpoints - -After push, you should see `/metrics` live. Let's step back and exclude this internal path from external public access. We have 2 options: - -* Force on the app side to listen only on private network on `/metrics` and `/healthz` endpoints -* Push all the app logic under `/api` path and let Traefik to include only this path - -Let's do the option 2. Add the `api/` prefix to controllers to expose: - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Controllers/ArticlesController.cs" >}} - -```cs -//... -[ApiController] -[Route("api/[controller]")] -public class ArticlesController { - //... 
-} -``` - -{{< /highlight >}} - -Let's move Swagger UI under `/api` path too: - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} - -```cs -//... - -if (app.Environment.IsDevelopment()) -{ - app.UseSwagger(c => - { - c.RouteTemplate = "/api/{documentName}/swagger.json"; - }); - app.UseSwaggerUI(c => - { - c.SwaggerEndpoint("v1/swagger.json", "KubeRocks v1"); - c.RoutePrefix = "api"; - }); -} - -//... -``` - -{{< /highlight >}} - -{{< alert >}} -You may use ASP.NET API versioning, which work the same way with [versioning URL path](https://github.com/dotnet/aspnet-api-versioning/wiki/Versioning-via-the-URL-Path). -{{< /alert >}} - -All is left is to include only the endpoints under `/api` prefix on Traefik IngressRoute: - -{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} - -```yaml -#... -apiVersion: traefik.io/v1alpha1 -kind: IngressRoute -#... -spec: - #... - routes: - - match: Host(`demo.kube.rocks`) && PathPrefix(`/api`) - #... -``` - -{{< /highlight >}} - -Now the new URL is `https://demo.kube.rocks/api/Articles`. Any path different from `api` will return the Traefik 404 page, and internal paths as `https://demo.kube.rocks/metrics` is not accessible anymore. An other additional advantage of this config, it's simple to put a separated frontend project under `/` path, which can use the under API without any CORS problem natively. - -#### Prometheus integration - -It's only a matter of new ServiceMonitor config: - -{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} - -```yaml ---- -apiVersion: monitoring.coreos.com/v1 -kind: ServiceMonitor +apiVersion: apps/v1 +kind: Deployment metadata: name: demo namespace: kuberocks spec: - endpoints: - - targetPort: 80 + replicas: 1 selector: matchLabels: app: demo + template: + metadata: + labels: + app: demo + spec: + imagePullSecrets: + - name: dockerconfigjson + containers: + - name: api + image: gitea.kube.rocks/kuberocks/demo:latest + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: demo + namespace: kuberocks + labels: + app: demo +spec: + selector: + app: demo + ports: + - name: http + port: 80 +--- +apiVersion: traefik.io/v1alpha1 +kind: IngressRoute +metadata: + name: demo + namespace: kuberocks +spec: + entryPoints: + - websecure + routes: + - match: Host(`demo.kube.rocks`) + kind: Rule + services: + - name: demo + port: http ``` {{< /highlight >}} -After some time, You can finally use the Prometheus dashboard to query your app metrics. Use `{namespace="kuberocks",job="demo"}` PromQL query to list all available metrics: +Note as we have set `imagePullSecrets` in order to use fetch previously created credentials for private registry access. The rest is pretty straightforward. Once pushed, after about 1 minute, you should see your app deployed in `https://demo.kube.rocks`. Check the API response on `https://demo.kube.rocks/WeatherForecast`. -[![Prometheus metrics](prometheus-graph.png)](prometheus-graph.png) +However, one last thing is missing: the automatic deployment. -### Application tracing +### Image automation -A more useful case for OpenTelemetry is to integrate it to a tracing backend. [Tempo](https://grafana.com/oss/tempo/) is a good candidate, which is a free open-source alternative to Jaeger, simpler to install by requiring a simple s3 as storage, and compatible to many protocols as Jaeger, OTLP, Zipkin. 
+If you checked the above flowchart, you'll note that Image automation is a separate process from Flux that only scan the registry for new image tags and push any new tag to Flux repository. Then Flux will detect the new commit in Git repository, including the new tag, and automatically deploy it to K8s. -#### Installing Tempo +By default, if not any strategy is set, K8s will do a **rolling deployment**, i.e. creating new replica firstly be terminating the old one. This will prevent any downtime on the condition of you set as well **readiness probe** in your pod spec, which is a later topic. -It's another Helm Chart to install as well as the related grafana datasource: +Let's define the image update automation task for main Flux repository: -{{< highlight host="demo-kube-k3s" file="tracing.tf" >}} +{{< highlight host="demo-kube-flux" file="clusters/demo/flux-add-ons/image-update-automation.yaml" >}} -```tf -resource "kubernetes_namespace_v1" "tracing" { - metadata { - name = "tracing" - } -} - -resource "helm_release" "tempo" { - chart = "tempo" - version = "1.5.1" - repository = "https://grafana.github.io/helm-charts" - - name = "tempo" - namespace = kubernetes_namespace_v1.tracing.metadata[0].name - - set { - name = "tempo.storage.trace.backend" - value = "s3" - } - - set { - name = "tempo.storage.trace.s3.bucket" - value = var.s3_bucket - } - - set { - name = "tempo.storage.trace.s3.endpoint" - value = var.s3_endpoint - } - - set { - name = "tempo.storage.trace.s3.region" - value = var.s3_region - } - - set { - name = "tempo.storage.trace.s3.access_key" - value = var.s3_access_key - } - - set { - name = "tempo.storage.trace.s3.secret_key" - value = var.s3_secret_key - } - - set { - name = "serviceMonitor.enabled" - value = "true" - } -} - -resource "kubernetes_config_map_v1" "tempo_grafana_datasource" { - metadata { - name = "tempo-grafana-datasource" - namespace = kubernetes_namespace_v1.monitoring.metadata[0].name - labels = { - grafana_datasource = "1" - } - } - - data = { - "datasource.yaml" = <}} -#### OpenTelemetry +Now we need to Image Reflector how to scan the repository, as well as the attached policy for tag update: -Let's firstly add another instrumentation package specialized for Npgsql driver used by EF Core to translate queries to PostgreSQL: +{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/images-demo.yaml" >}} -```sh -dotnet add src/KubeRocks.WebApi package Npgsql.OpenTelemetry -``` - -Then bridge all needed instrumentation as well as the OTLP exporter: - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} - -```cs -//... - -builder.Services.AddOpenTelemetry() - //... - .WithTracing(b => - { - b - .SetResourceBuilder(ResourceBuilder - .CreateDefault() - .AddService("KubeRocks.Demo") - .AddTelemetrySdk() - ) - .AddAspNetCoreInstrumentation(b => - { - b.Filter = ctx => - { - return ctx.Request.Path.StartsWithSegments("/api"); - }; - }) - .AddEntityFrameworkCoreInstrumentation() - .AddNpgsql() - .AddOtlpExporter(); - }); - -//... 
+```yaml +apiVersion: image.toolkit.fluxcd.io/v1beta1 +kind: ImageRepository +metadata: + name: demo + namespace: flux-system +spec: + image: gitea.kube.rocks/kuberocks/demo + interval: 1m0s + secretRef: + name: dockerconfigjson +--- +apiVersion: image.toolkit.fluxcd.io/v1beta1 +kind: ImagePolicy +metadata: + name: demo + namespace: flux-system +spec: + imageRepositoryRef: + name: demo + namespace: flux-system + policy: + semver: + range: 0.0.x ``` {{< /highlight >}} -Then add the exporter endpoint config in order to push traces to Tempo: +{{< alert >}} +As usual, don't forget `dockerconfigjson` for private registry access. +{{< /alert >}} + +And finally edit the deployment to use the policy by adding a specific marker next to the image tag: {{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} ```yaml -#... -spec: - #... - template: - #... - spec: - #... +# ... containers: - name: api - #... - env: - #... - - name: OTEL_EXPORTER_OTLP_ENDPOINT - value: http://tempo.tracing:4317 + image: gitea.kube.rocks/kuberocks/demo:latest # {"$imagepolicy": "flux-system:demo"} +# ... ``` {{< /highlight >}} -Call some API URLs and get back to Grafana / Explore, select Tempo data source and search for query traces. You should see something like this: +It will tell to `Image Automation` where to update the tag in the Flux repository. The format is `{"$imagepolicy": ":"}`. -[![Tempo search](tempo-search.png)](tempo-search.png) +Push the changes and wait for about 1 minute then pull the flux repo. You should see a new commit coming and `latest` should be replaced by an explicit tag like so: -Click on one specific trace to get details. You can go through HTTP requests, EF Core time response, and even underline SQL queries thanks to Npgsql instrumentation: +{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} -[![Tempo traces](tempo-trace.png)](tempo-trace.png) - -#### Correlation with Loki - -It would be nice to have directly access to trace from logs through Loki search, as it's clearly a more seamless way than searching inside Tempo. - -For that we need to do 2 things : - -* Add the `TraceId` to logs in order to correlate trace with log. In ASP.NET Core, a `TraceId` correspond to a unique request, allowing isolation analyze for each request. -* Create a link in Grafana from the generated `TraceId` inside log and the detail Tempo view trace. - -So firstly, let's take care of the app part by attaching the OpenTelemetry TraceId to Serilog: - -```sh -dotnet add src/KubeRocks.WebApi package Serilog.Enrichers.Span +```yaml +# ... + containers: + - name: api + image: gitea.kube.rocks/kuberocks/demo:0.0.1 # {"$imagepolicy": "flux-system:demo"} +# ... ``` -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} +{{< /highlight >}} + +Check if the pod as been correctly updated with `kgpo -n kuberocks`. Use `kd -n kuberocks deploy/demo` to check if the same tag is here and no `latest`. + +```txt +Pod Template: + Labels: app=demo + Containers: + api: + Image: gitea.kube.rocks/kuberocks/demo:0.0.1 + Port: 80/TCP +``` + +### Retest all workflow + +Damn, I think we're done 🎉 ! It's time retest the full process. Add new controller endpoint from our demo project and push the code: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Controllers/WeatherForecastController.cs" >}} ```cs //... +public class WeatherForecastController : ControllerBase +{ + //... 
-builder.Host.UseSerilog((ctx, cfg) => cfg - .ReadFrom.Configuration(ctx.Configuration) - .Enrich.WithSpan() - .WriteTo.Console( - outputTemplate: "[{Timestamp:HH:mm:ss} {Level:u3}] |{TraceId}| {Message:lj}{NewLine}{Exception}" - ) -); - -//... -``` - -{{< /highlight >}} - -It should now generate that kind of logs: - -```txt -[23:22:57 INF] |aa51c7254aaa10a3f679a511444a5da5| HTTP GET /api/Articles responded 200 in 301.7052 ms -``` - -Now Let's adapt the Loki datasource by creating a derived field inside `jsonData` property: - -{{< highlight host="demo-kube-k3s" file="logging.tf" >}} - -```tf -resource "kubernetes_config_map_v1" "loki_grafana_datasource" { - #... - - data = { - "datasource.yaml" = <}} -This where the magic happens. The `\|(\w+)\|` regex will match and extract the `TraceId` inside the log, which is inside pipes, and create a link to Tempo trace detail view. - -[![Derived fields](loki-derived-fields.png)](loki-derived-fields.png) - -This will give us the nice link button as soon as you you click a log detail: - -[![Derived fields](loki-tempo-link.png)](loki-tempo-link.png) +Wait the pod to be updated, then check the new endpoint `https://demo.kube.rocks/WeatherForecast/1`. The API should return a new unique random weather forecast with the tomorrow date. ## 7th check ✅ -We have done for the basic functional telemetry ! There are infinite things to cover in this subject, but it's enough for this endless guide. Go [next part]({{< ref "/posts/18-build-your-own-kubernetes-cluster-part-9" >}}) for the final part with testing, code metrics, code coverage, and load testing ! +We have done for the set-up of our automated CI/CD workflow process. Go [next part]({{< ref "/posts/18-build-your-own-kubernetes-cluster-part-9" >}}) for going further with a real DB app that handle automatic migrations & monitoring integration with OpenTelemetry ! diff --git a/content/posts/18-build-your-own-kubernetes-cluster-part-9/index.md b/content/posts/18-build-your-own-kubernetes-cluster-part-9/index.md index 605e761..9cfec9b 100644 --- a/content/posts/18-build-your-own-kubernetes-cluster-part-9/index.md +++ b/content/posts/18-build-your-own-kubernetes-cluster-part-9/index.md @@ -1,8 +1,8 @@ --- -title: "Setup a HA Kubernetes cluster Part IX - Feature testing, code metrics & code coverage" -date: 2023-10-09 +title: "Setup a HA Kubernetes cluster Part IX - DB usage & Tracing with OpenTelemetry" +date: 2023-10-08 description: "Follow this opinionated guide as starter-kit for your own Kubernetes platform..." -tags: ["kubernetes", "testing", "sonarqube", "xunit"] +tags: ["kubernetes", "development", "opentelemetry", "tracing", "tempo"] draft: true --- @@ -10,319 +10,11 @@ draft: true Be free from AWS/Azure/GCP by building a production grade On-Premise Kubernetes cluster on cheap VPS provider, fully GitOps managed, and with complete CI/CD tools 🎉 {{< /lead >}} -This is the **Part IX** of more global topic tutorial. [Back to first part]({{< ref "/posts/10-build-your-own-kubernetes-cluster" >}}) for intro. +This is the **Part VIII** of more global topic tutorial. [Back to first part]({{< ref "/posts/10-build-your-own-kubernetes-cluster" >}}) for intro. -## Code Metrics +## Real DB App sample -SonarQube is leading the code metrics industry for a long time, embracing full Open Core model, and the community edition it's completely free of charge even for commercial use. It covers advanced code analysis, code coverage, code duplication, code smells, security vulnerabilities, etc. 
It ensures high quality code and help to keep it that way. - -### SonarQube installation - -SonarQube as its dedicated Helm chart which perfect for us. However, it's the most resource hungry component of our development stack so far (because Java project ? End of troll), so be sure to deploy it on almost empty free node, maybe a dedicated one. In fact, it's the last Helm chart for this tutorial, I promise! - -Create dedicated database for SonarQube same as usual. - -{{< highlight host="demo-kube-k3s" file="main.tf" >}} - -```tf -variable "sonarqube_db_password" { - type = string - sensitive = true -} -``` - -{{< /highlight >}} - -{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}} - -```tf -sonarqube_db_password = "xxx" -``` - -{{< /highlight >}} - -{{< highlight host="demo-kube-k3s" file="sonarqube.tf" >}} - -```tf -resource "kubernetes_namespace_v1" "sonarqube" { - metadata { - name = "sonarqube" - } -} - -resource "helm_release" "sonarqube" { - chart = "sonarqube" - version = "10.1.0+628" - repository = "https://SonarSource.github.io/helm-chart-sonarqube" - - name = "sonarqube" - namespace = kubernetes_namespace_v1.sonarqube.metadata[0].name - - set { - name = "prometheusMonitoring.podMonitor.enabled" - value = "true" - } - - set { - name = "postgresql.enabled" - value = "false" - } - - set { - name = "jdbcOverwrite.enabled" - value = "true" - } - - set { - name = "jdbcOverwrite.jdbcUrl" - value = "jdbc:postgresql://postgresql-primary.postgres/sonarqube" - } - - set { - name = "jdbcOverwrite.jdbcUsername" - value = "sonarqube" - } - - set { - name = "jdbcOverwrite.jdbcPassword" - value = var.sonarqube_db_password - } -} - -resource "kubernetes_manifest" "sonarqube_ingress" { - manifest = { - apiVersion = "traefik.io/v1alpha1" - kind = "IngressRoute" - metadata = { - name = "sonarqube" - namespace = kubernetes_namespace_v1.sonarqube.metadata[0].name - } - spec = { - entryPoints = ["websecure"] - routes = [ - { - match = "Host(`sonarqube.${var.domain}`)" - kind = "Rule" - services = [ - { - name = "sonarqube-sonarqube" - port = "http" - } - ] - } - ] - } - } -} -``` - -{{< /highlight >}} - -Be sure to disable the PostgreSQL sub chart and use our self-hosted cluster with both `postgresql.enabled` and `jdbcOverwrite.enabled`. If needed, set proper `tolerations` and `nodeSelector` for deploying on a dedicated node. - -The installation take many minutes, be patient. Once done, you can access SonarQube on `https://sonarqube.kube.rocks` and login with `admin` / `admin`. - -### Project configuration - -Firstly create a new project and retain the project key which is his identifier. Then create a **global analysis token** named `Concourse CI` that will be used for CI integration from your user account under `/account/security`. - -Now we need to create a Kubernetes secret which contains this token value for Concourse CI, for usage inside the pipeline. The token is the one generated above. 
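If you prefer scripting over clicking through the UI, both steps can also be done with the SonarQube Web API. Here is a minimal sketch using curl (endpoints taken from the SonarQube Web API documentation, double check them against your server version; the project key `KubeRocks-Demo` is the one we'll reuse in the pipeline):

```sh
SONAR_URL=https://sonarqube.kube.rocks

# Create the project (the key must match the one given to sonarscanner later)
curl -s -u 'admin:xxx' -X POST \
  "$SONAR_URL/api/projects/create?project=KubeRocks-Demo&name=KubeRocks-Demo"

# Generate the global analysis token named "Concourse CI"
curl -s -u 'admin:xxx' -X POST \
  "$SONAR_URL/api/user_tokens/generate?name=Concourse%20CI&type=GLOBAL_ANALYSIS_TOKEN"
```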
- -Add a new concourse terraform variable for the token: - -{{< highlight host="demo-kube-k3s" file="main.tf" >}} - -```tf -variable "concourse_analysis_token" { - type = string - sensitive = true -} -``` - -{{< /highlight >}} - -{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}} - -```tf -concourse_analysis_token = "xxx" -``` - -{{< /highlight >}} - -The secret: - -{{< highlight host="demo-kube-k3s" file="concourse.tf" >}} - -```tf -resource "kubernetes_secret_v1" "concourse_sonarqube" { - metadata { - name = "sonarqube" - namespace = "concourse-main" - } - - data = { - url = "https://sonarqube.${var.domain}" - analysis-token = var.concourse_analysis_token - } - - depends_on = [ - helm_release.concourse - ] -} -``` - -{{< /highlight >}} - -We are ready to tackle the pipeline for integration. - -### SonarScanner for .NET - -As we use a dotnet project, we will use the official SonarQube scanner for .net. But sadly, as it's only a .NET CLI wrapper, it requires a java runtime to run and there is no official SonarQube docker image which contains both .NET SDK and Java runtime. But we have a CI now, so we can build our own QA image on our own private registry. - -Create a new Gitea repo dedicated for any custom docker images with this one single Dockerfile: - -{{< highlight host="demo-kube-images" file="dotnet-qa.dockerfile" >}} - -```Dockerfile -FROM mcr.microsoft.com/dotnet/sdk:7.0 - -RUN apt-get update && apt-get install -y ca-certificates-java && apt-get install -y \ - openjdk-17-jre-headless \ - unzip \ - && rm -rf /var/lib/apt/lists/* - -RUN dotnet tool install --global dotnet-sonarscanner -RUN dotnet tool install --global dotnet-coverage - -ENV PATH="${PATH}:/root/.dotnet/tools" -``` - -{{< /highlight >}} - -Note as we add the `dotnet-sonarscanner` tool to the path, we can use it directly in the pipeline without any extra step. I'll also add `dotnet-coverage` global tool for code coverage generation that we'll use later. - -Then the pipeline: - -{{< highlight host="demo-kube-flux" file="pipelines/images.yaml" >}} - -```yml -resources: - - name: docker-images-git - type: git - icon: coffee - source: - uri: https://gitea.kube.rocks/kuberocks/docker-images - branch: main - - name: dotnet-qa-image - type: registry-image - icon: docker - source: - repository: ((registry.name))/kuberocks/dotnet-qa - tag: "7.0" - username: ((registry.username)) - password: ((registry.password)) - -jobs: - - name: dotnet-qa - plan: - - get: docker-images-git - - task: build-image - privileged: true - config: - platform: linux - image_resource: - type: registry-image - source: - repository: concourse/oci-build-task - inputs: - - name: docker-images-git - outputs: - - name: image - params: - DOCKERFILE: docker-images-git/dotnet-qa.dockerfile - run: - path: build - - put: dotnet-qa-image - params: - image: image/image.tar -``` - -{{< /highlight >}} - -Update the `main.yaml` pipeline to add the new job, then trigger it manually from Concourse UI to add the new above pipeline: - -{{< highlight host="demo-kube-flux" file="pipelines/main.yaml" >}} - -```tf -#... - -jobs: - - name: configure-pipelines - plan: - #... - - set_pipeline: images - file: ci/pipelines/images.yaml -``` - -{{< /highlight >}} - -The pipeline should now start and build the image, trigger it manually if needed on Concourse UI. Once done, you can check it on your Gitea container packages that the new image `gitea.kube.rocks/kuberocks/dotnet-qa` is here. 
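If the new job doesn't show up or start by itself, you can drive it from your workstation with the `fly` CLI and then verify the pushed image directly against the Gitea registry. A quick sketch, assuming a `fly` target already logged in to your Concourse instance (the target name `kr` is arbitrary):

```sh
# Trigger the image build manually and follow its output
fly -t kr trigger-job -j images/dotnet-qa --watch

# Then confirm the image is pullable from the private registry
docker login gitea.kube.rocks
docker pull gitea.kube.rocks/kuberocks/dotnet-qa:7.0
```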
- -### Concourse pipeline integration - -It's finally time to reuse this QA image in our Concourse demo project pipeline. Update it accordingly: - -{{< highlight host="demo-kube-flux" file="pipelines/demo.yaml" >}} - -```yml -#... - -jobs: - - name: build - plan: - - get: source-code - trigger: true - - - task: build-source - config: - platform: linux - image_resource: - type: registry-image - source: - repository: ((registry.name))/kuberocks/dotnet-qa - tag: "7.0" - username: ((registry.username)) - password: ((registry.password)) - #... - run: - path: /bin/sh - args: - - -ec - - | - dotnet format --verify-no-changes - - dotnet sonarscanner begin /k:"KubeRocks-Demo" /d:sonar.host.url="((sonarqube.url))" /d:sonar.token="((sonarqube.analysis-token))" - dotnet build -c Release - dotnet sonarscanner end /d:sonar.token="((sonarqube.analysis-token))" - - dotnet publish src/KubeRocks.WebApi -c Release -o publish --no-restore --no-build - - #... -``` - -{{< /highlight >}} - -Note as we now use the `dotnet-qa` image and surround the build step by `dotnet sonarscanner begin` and `dotnet sonarscanner end` commands with appropriate credentials allowing Sonar CLI to send report to our SonarQube instance. Trigger the pipeline manually, all should pass, and the result will be pushed to SonarQube. - -[![SonarQube](sonarqube-dashboard.png)](sonarqube-dashboard.png) - -## Feature testing - -Let's cover the feature testing by calling the API against a real database. This is the opportunity to cover the code coverage as well. - -### xUnit - -First add a dedicated database for test in the docker compose file as we won't interfere with the development database: +Before go any further, let's add some DB usage to our sample app. We'll use the classical `Articles<->Authors<->Comments` relationships. First create `docker-compose.yml` file in root of demo project: {{< highlight host="kuberocks-demo" file="docker-compose.yml" >}} @@ -330,117 +22,291 @@ First add a dedicated database for test in the docker compose file as we won't i version: "3" services: - #... - - db_test: + db: image: postgres:15 environment: POSTGRES_USER: main POSTGRES_PASSWORD: main POSTGRES_DB: main ports: - - 54320:5432 + - 5432:5432 ``` {{< /highlight >}} -Expose the startup service of minimal API: +Launch it with `docker compose up -d` and check database running with `docker ps`. -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} +Time to create basic code that list plenty of articles from an API endpoint. Go back to `kuberocks-demo` and create a new separate project dedicated to app logic: + +```sh +dotnet new classlib -o src/KubeRocks.Application +dotnet sln add src/KubeRocks.Application +dotnet add src/KubeRocks.WebApi reference src/KubeRocks.Application + +dotnet add src/KubeRocks.Application package Microsoft.EntityFrameworkCore +dotnet add src/KubeRocks.Application package Npgsql.EntityFrameworkCore.PostgreSQL +dotnet add src/KubeRocks.WebApi package Microsoft.EntityFrameworkCore.Design +``` + +{{< alert >}} +This is not a DDD course ! We will keep it simple and focus on Kubernetes part. +{{< /alert >}} + +### Define the entities + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Entities/Article.cs" >}} ```cs -//... 
+using System.ComponentModel.DataAnnotations; -public partial class Program +namespace KubeRocks.Application.Entities; + +public class Article { - protected Program() { } + public int Id { get; set; } + + public required User Author { get; set; } + + [MaxLength(255)] + public required string Title { get; set; } + [MaxLength(255)] + public required string Slug { get; set; } + public required string Description { get; set; } + public required string Body { get; set; } + + public DateTime CreatedAt { get; set; } = DateTime.UtcNow; + public DateTime UpdatedAt { get; set; } = DateTime.UtcNow; + + public ICollection Comments { get; } = new List(); } ``` {{< /highlight >}} -Then add a testing JSON environment file for accessing our database `db_test` from the docker-compose.yml: +{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Entities/Comment.cs" >}} -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/appsettings.Testing.json" >}} +```cs +namespace KubeRocks.Application.Entities; + +public class Comment +{ + public int Id { get; set; } + + public required Article Article { get; set; } + public required User Author { get; set; } + + public required string Body { get; set; } + + public DateTime CreatedAt { get; set; } = DateTime.UtcNow; +} +``` + +{{< /highlight >}} + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Entities/User.cs" >}} + +```cs +using System.ComponentModel.DataAnnotations; + +namespace KubeRocks.Application.Entities; + +public class User +{ + public int Id { get; set; } + + [MaxLength(255)] + public required string Name { get; set; } + + [MaxLength(255)] + public required string Email { get; set; } + + public ICollection
<Article> Articles { get; } = new List<Article>
(); + public ICollection Comments { get; } = new List(); +} +``` + +{{< /highlight >}} + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Contexts/AppDbContext.cs" >}} + +```cs +namespace KubeRocks.Application.Contexts; + +using KubeRocks.Application.Entities; +using Microsoft.EntityFrameworkCore; + +public class AppDbContext : DbContext +{ + public DbSet Users => Set(); + public DbSet
<Article> Articles => Set<Article>
(); + public DbSet Comments => Set(); + + public AppDbContext(DbContextOptions options) : base(options) + { + } + + protected override void OnModelCreating(ModelBuilder modelBuilder) + { + base.OnModelCreating(modelBuilder); + + modelBuilder.Entity() + .HasIndex(u => u.Email).IsUnique() + ; + + modelBuilder.Entity
() + .HasIndex(u => u.Slug).IsUnique() + ; + } +} +``` + +{{< /highlight >}} + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Extensions/ServiceExtensions.cs" >}} + +```cs +using KubeRocks.Application.Contexts; +using Microsoft.EntityFrameworkCore; +using Microsoft.Extensions.Configuration; +using Microsoft.Extensions.DependencyInjection; + +namespace KubeRocks.Application.Extensions; + +public static class ServiceExtensions +{ + public static IServiceCollection AddKubeRocksServices(this IServiceCollection services, IConfiguration configuration) + { + return services.AddDbContext((options) => + { + options.UseNpgsql(configuration.GetConnectionString("DefaultConnection")); + }); + } +} +``` + +{{< /highlight >}} + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} + +```cs +using KubeRocks.Application.Extensions; + +//... + +// Add services to the container. +builder.Services.AddKubeRocksServices(builder.Configuration); + +//... +``` + +{{< /highlight >}} + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/appsettings.Development.json" >}} ```json { + //... "ConnectionStrings": { - "DefaultConnection": "Host=localhost:54320;Username=main;Password=main;Database=main;" + "DefaultConnection": "Host=localhost;Username=main;Password=main;Database=main;" } } ``` {{< /highlight >}} -Now the test project: +Now as all models are created, we can generate migrations and update database accordingly: ```sh -dotnet new xunit -o tests/KubeRocks.FeatureTests -dotnet sln add tests/KubeRocks.FeatureTests -dotnet add tests/KubeRocks.FeatureTests reference src/KubeRocks.WebApi -dotnet add tests/KubeRocks.FeatureTests package Microsoft.AspNetCore.Mvc.Testing -dotnet add tests/KubeRocks.FeatureTests package Respawn -dotnet add tests/KubeRocks.FeatureTests package FluentAssertions +dotnet new tool-manifest +dotnet tool install dotnet-ef + +dotnet dotnet-ef -p src/KubeRocks.Application -s src/KubeRocks.WebApi migrations add InitialCreate +dotnet dotnet-ef -p src/KubeRocks.Application -s src/KubeRocks.WebApi database update ``` -The `WebApplicationFactory` that will use our testing environment: +### Inject some dummy data -{{< highlight host="kuberocks-demo" file="tests/KubeRocks.FeatureTests/KubeRocksApiFactory.cs" >}} +We'll use Bogus on a separate console project: -```cs -using Microsoft.AspNetCore.Mvc.Testing; -using Microsoft.Extensions.Hosting; +```sh +dotnet new console -o src/KubeRocks.Console +dotnet sln add src/KubeRocks.Console +dotnet add src/KubeRocks.WebApi reference src/KubeRocks.Application +dotnet add src/KubeRocks.Console package Bogus +dotnet add src/KubeRocks.Console package ConsoleAppFramework +dotnet add src/KubeRocks.Console package Respawn +``` -namespace KubeRocks.FeatureTests; +{{< highlight host="kuberocks-demo" file="src/KubeRocks.Console/appsettings.json" >}} -public class KubeRocksApiFactory : WebApplicationFactory +```json { - protected override IHost CreateHost(IHostBuilder builder) - { - builder.UseEnvironment("Testing"); - - return base.CreateHost(builder); - } + "ConnectionStrings": { + "DefaultConnection": "Host=localhost;Username=main;Password=main;Database=main;" + } } ``` {{< /highlight >}} -The base test class for all test classes that manages database cleanup thanks to `Respawn`: +{{< highlight host="kuberocks-demo" file="src/KubeRocks.Console/KubeRocks.Console.csproj" >}} -{{< highlight host="kuberocks-demo" file="tests/KubeRocks.FeatureTests/TestBase.cs" >}} +```xml + + + + + + + $(MSBuildProjectDirectory) + 
+ + + + PreserveNewest + + + + +``` + +{{< /highlight >}} + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.Console/Commands/DbCommand.cs" >}} ```cs +using Bogus; using KubeRocks.Application.Contexts; - +using KubeRocks.Application.Entities; using Microsoft.EntityFrameworkCore; -using Microsoft.Extensions.DependencyInjection; - using Npgsql; - using Respawn; using Respawn.Graph; -namespace KubeRocks.FeatureTests; +namespace KubeRocks.Console.Commands; -[Collection("Sequencial")] -public class TestBase : IClassFixture, IAsyncLifetime +[Command("db")] +public class DbCommand : ConsoleAppBase { - protected KubeRocksApiFactory Factory { get; private set; } + private readonly AppDbContext _context; - protected TestBase(KubeRocksApiFactory factory) + public DbCommand(AppDbContext context) { - Factory = factory; + _context = context; } - public async Task RefreshDatabase() + [Command("migrate", "Migrate database")] + public async Task Migrate() { - using var scope = Factory.Services.CreateScope(); + await _context.Database.MigrateAsync(); + } - using var conn = new NpgsqlConnection( - scope.ServiceProvider.GetRequiredService().Database.GetConnectionString() - ); + [Command("fresh", "Wipe data")] + public async Task FreshData() + { + await Migrate(); + + using var conn = new NpgsqlConnection(_context.Database.GetConnectionString()); await conn.OpenAsync(); @@ -453,299 +319,810 @@ public class TestBase : IClassFixture, IAsyncLifetime await respawner.ResetAsync(conn); } - public Task InitializeAsync() + [Command("seed", "Fake data")] + public async Task SeedData() { - return RefreshDatabase(); - } + await Migrate(); + await FreshData(); - public Task DisposeAsync() - { - return Task.CompletedTask; + var users = new Faker() + .RuleFor(m => m.Name, f => f.Person.FullName) + .RuleFor(m => m.Email, f => f.Person.Email) + .Generate(50); + + await _context.Users.AddRangeAsync(users); + await _context.SaveChangesAsync(); + + var articles = new Faker
() + .RuleFor(a => a.Title, f => f.Lorem.Sentence().TrimEnd('.')) + .RuleFor(a => a.Description, f => f.Lorem.Paragraphs(1)) + .RuleFor(a => a.Body, f => f.Lorem.Paragraphs(5)) + .RuleFor(a => a.Author, f => f.PickRandom(users)) + .RuleFor(a => a.CreatedAt, f => f.Date.Recent(90).ToUniversalTime()) + .RuleFor(a => a.Slug, (f, a) => a.Title.Replace(" ", "-").ToLowerInvariant()) + .Generate(500) + .Select(a => + { + new Faker() + .RuleFor(a => a.Body, f => f.Lorem.Paragraphs(2)) + .RuleFor(a => a.Author, f => f.PickRandom(users)) + .RuleFor(a => a.CreatedAt, f => f.Date.Recent(7).ToUniversalTime()) + .Generate(new Faker().Random.Number(10)) + .ForEach(c => a.Comments.Add(c)); + + return a; + }); + + await _context.Articles.AddRangeAsync(articles); + await _context.SaveChangesAsync(); } } ``` {{< /highlight >}} -Note the `Collection` attribute that will force the test classes to run sequentially, required as we will use the same database for all tests. - -Finally, the tests for the 2 endpoints of our articles controller: - -{{< highlight host="kuberocks-demo" file="tests/KubeRocks.FeatureTests/Articles/ArticlesListTests.cs" >}} +{{< highlight host="kuberocks-demo" file="src/KubeRocks.Console/Program.cs" >}} ```cs -using System.Net.Http.Json; +using KubeRocks.Application.Extensions; +using KubeRocks.Console.Commands; -using FluentAssertions; +var builder = ConsoleApp.CreateBuilder(args); -using KubeRocks.Application.Contexts; -using KubeRocks.Application.Entities; -using KubeRocks.WebApi.Models; - -using Microsoft.Extensions.DependencyInjection; - -using static KubeRocks.WebApi.Controllers.ArticlesController; - -namespace KubeRocks.FeatureTests.Articles; - -public class ArticlesListTests : TestBase +builder.ConfigureServices((ctx, services) => { - public ArticlesListTests(KubeRocksApiFactory factory) : base(factory) { } + services.AddKubeRocksServices(ctx.Configuration); +}); - [Fact] - public async Task Can_Paginate_Articles() +var app = builder.Build(); + +app.AddSubCommands(); + +app.Run(); +``` + +{{< /highlight >}} + +Then launch the command: + +```sh +dotnet run --project src/KubeRocks.Console db seed +``` + +Ensure with your favorite DB client that data is correctly inserted. + +### Define endpoint access + +All that's left is to create the endpoint. 
Let's define all DTO first: + +```sh +dotnet add src/KubeRocks.WebApi package Mapster +``` + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Models/ArticleListDto.cs" >}} + +```cs +namespace KubeRocks.WebApi.Models; + +public class ArticleListDto +{ + public required string Title { get; set; } + + public required string Slug { get; set; } + + public required string Description { get; set; } + + public required string Body { get; set; } + + public DateTime CreatedAt { get; set; } + + public DateTime UpdatedAt { get; set; } + + public required AuthorDto Author { get; set; } +} +``` + +{{< /highlight >}} + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Models/ArticleDto.cs" >}} + +```cs +namespace KubeRocks.WebApi.Models; + +public class ArticleDto : ArticleListDto +{ + public List Comments { get; set; } = new(); +} +``` + +{{< /highlight >}} + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Models/AuthorDto.cs" >}} + +```cs +namespace KubeRocks.WebApi.Models; + +public class AuthorDto +{ + public required string Name { get; set; } +} +``` + +{{< /highlight >}} + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Models/CommentDto.cs" >}} + +```cs +namespace KubeRocks.WebApi.Models; + +public class CommentDto +{ + public required string Body { get; set; } + + public DateTime CreatedAt { get; set; } + + public required AuthorDto Author { get; set; } +} +``` + +{{< /highlight >}} + +And finally the controller: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Controllers/ArticlesController.cs" >}} + +```cs +using KubeRocks.Application.Contexts; +using KubeRocks.WebApi.Models; +using Mapster; +using Microsoft.AspNetCore.Mvc; +using Microsoft.EntityFrameworkCore; + +namespace KubeRocks.WebApi.Controllers; + +[ApiController] +[Route("[controller]")] +public class ArticlesController +{ + private readonly AppDbContext _context; + + public record ArticlesResponse(IEnumerable Articles, int ArticlesCount); + + public ArticlesController(AppDbContext context) { - using (var scope = Factory.Services.CreateScope()) - { - var db = scope.ServiceProvider.GetRequiredService(); - - var user = db.Users.Add(new User - { - Name = "John Doe", - Email = "john.doe@email.com" - }); - - db.Articles.AddRange(Enumerable.Range(1, 50).Select(i => new Article - { - Title = $"Test Title {i}", - Slug = $"test-title-{i}", - Description = "Test Description", - Body = "Test Body", - Author = user.Entity, - })); - - await db.SaveChangesAsync(); - } - - var response = await Factory.CreateClient().GetAsync("/api/Articles?page=1&size=20"); - - response.EnsureSuccessStatusCode(); - - var body = (await response.Content.ReadFromJsonAsync())!; - - body.Articles.Count().Should().Be(20); - body.ArticlesCount.Should().Be(50); - - body.Articles.First().Should().BeEquivalentTo(new - { - Title = "Test Title 50", - Description = "Test Description", - Body = "Test Body", - Author = new - { - Name = "John Doe" - }, - }); + _context = context; } - [Fact] - public async Task Can_Get_Article() + [HttpGet(Name = "GetArticles")] + public async Task Get([FromQuery] int page = 1, [FromQuery] int size = 10) { - using (var scope = Factory.Services.CreateScope()) + var articles = await _context.Articles + .OrderByDescending(a => a.Id) + .Skip((page - 1) * size) + .Take(size) + .ProjectToType() + .ToListAsync(); + + var articlesCount = await _context.Articles.CountAsync(); + + return new ArticlesResponse(articles, articlesCount); + } + + [HttpGet("{slug}", Name = 
"GetArticleBySlug")] + public async Task> GetBySlug(string slug) + { + var article = await _context.Articles + .Include(a => a.Author) + .Include(a => a.Comments.OrderByDescending(c => c.Id)) + .ThenInclude(c => c.Author) + .FirstOrDefaultAsync(a => a.Slug == slug); + + if (article is null) { - var db = scope.ServiceProvider.GetRequiredService(); - - db.Articles.Add(new Article - { - Title = $"Test Title", - Slug = $"test-title", - Description = "Test Description", - Body = "Test Body", - Author = new User - { - Name = "John Doe", - Email = "john.doe@email.com" - } - }); - - await db.SaveChangesAsync(); + return new NotFoundResult(); } - var response = await Factory.CreateClient().GetAsync($"/api/Articles/test-title"); - - response.EnsureSuccessStatusCode(); - - var body = (await response.Content.ReadFromJsonAsync())!; - - body.Should().BeEquivalentTo(new - { - Title = "Test Title", - Description = "Test Description", - Body = "Test Body", - Author = new - { - Name = "John Doe" - }, - }); + return article.Adapt(); } } ``` {{< /highlight >}} -Ensure all tests passes with `dotnet test`. +Launch the app and check that `/Articles` and `/Articles/{slug}` endpoints are working as expected. -### CI tests & code coverage +## Production grade deployment -Now we need to integrate the tests in our CI pipeline. As we testing with a real database, create a new `demo_test` database through pgAdmin with basic `test` / `test` credentials. +### Database connection -{{< alert >}} -In real world scenario, you should use a dedicated database for testing, and not the same as production. -{{< /alert >}} +It's time to connect our app to the production database. Create a demo DB & user through pgAdmin and create the appropriate secret: -Let's edit the pipeline accordingly for tests: +{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/secrets-demo-db.yaml" >}} -{{< highlight host="demo-kube-flux" file="pipelines/demo.yaml" >}} +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: demo-db +type: Opaque +data: + password: ZGVtbw== +``` -```yml -#... +{{< /highlight >}} -jobs: - - name: build - plan: - #... +Generate the according sealed secret like previously chapters with `kubeseal` under `sealed-secret-demo-db.yaml` file and delete `secret-demo-db.yaml`. - - task: build-source - config: - #... - params: - ConnectionStrings__DefaultConnection: "Host=postgres-primary.postgres;Username=test;Password=test;Database=demo_test" - run: - path: /bin/sh - args: - - -ec - - | - dotnet format --verify-no-changes +```sh +cat clusters/demo/kuberocks/secret-demo.yaml | kubeseal --format=yaml --cert=pub-sealed-secrets.pem > clusters/demo/kuberocks/sealed-secret-demo.yaml +rm clusters/demo/kuberocks/secret-demo.yaml +``` - dotnet sonarscanner begin /k:"KubeRocks-Demo" /d:sonar.host.url="((sonarqube.url))" /d:sonar.token="((sonarqube.analysis-token))" /d:sonar.cs.vscoveragexml.reportsPaths=coverage.xml - dotnet build -c Release - dotnet-coverage collect 'dotnet test -c Release --no-restore --no-build --verbosity=normal' -f xml -o 'coverage.xml' - dotnet sonarscanner end /d:sonar.token="((sonarqube.analysis-token))" +Let's inject the appropriate connection string as environment variable: - dotnet publish src/KubeRocks.WebApi -c Release -o publish --no-restore --no-build +{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} +```yaml +# ... +spec: + # ... + template: + # ... + spec: + # ... + containers: + - name: api + # ... 
+ env: + - name: DB_PASSWORD + valueFrom: + secretKeyRef: + name: demo-db + key: password + - name: ConnectionStrings__DefaultConnection + value: Host=postgresql-primary.postgres;Username=demo;Password='$(DB_PASSWORD)';Database=demo; #... ``` {{< /highlight >}} -Note as we already include code coverage by using `dotnet-coverage` tool. Don't forget to precise the path of `coverage.xml` to `sonarscanner` CLI too. It's time to push our code with tests or trigger the pipeline manually to test our integration tests. +### Database migration -If all goes well, you should see the tests results on SonarQube with some coverage done: - -[![SonarQube](sonarqube-tests.png)](sonarqube-tests.png) - -Coverage detail: - -[![SonarQube](sonarqube-cc.png)](sonarqube-cc.png) - -You may exclude some files from analysis by adding some project properties: - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/KubeRocks.Application.csproj" >}} - -```xml - - - - - - appsettings.Testing.json - - - -``` - -{{< /highlight >}} - -Same for coverage: - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/KubeRocks.Application.csproj" >}} - -```xml - - - - - - Migrations/**/* - - - -``` - -{{< /highlight >}} - -### Sonar Analyzer - -You can enforce many default sonar rules by using [Sonar Analyzer](https://github.com/SonarSource/sonar-dotnet) directly locally before any code push. - -Create this file at the root of your solution for enabling Sonar Analyzer globally: - -{{< highlight host="kuberocks-demo" file="Directory.Build.props" >}} - -```xml - - - latest-Recommended - true - true - - - - - -``` - -{{< /highlight >}} - -Any rule violation is treated as error at project building, which block the CI before execution of tests. Use `latest-All` as `AnalysisLevel` for psychopath mode. - -At this stage as soon this file is added, you should see some errors at building. If you use VSCode with correct C# extension, these errors will be highlighted directly in the editor. Here are some fixes: +The DB connection should be done, but the database isn't migrated yet, the easiest is to add a migration step directly in startup app: {{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} ```cs +// ... +var app = builder.Build(); + +using var scope = app.Services.CreateScope(); +await using var dbContext = scope.ServiceProvider.GetRequiredService(); +await dbContext.Database.MigrateAsync(); + +// ... +``` + +{{< /highlight >}} + +The database should be migrated on first app launch on next deploy. Go to `https://demo.kube.rocks/Articles` to confirm all is ok. It should return next empty response: + +```json +{ + articles: [] + articlesCount: 0 +} +``` + +{{< alert >}} +Don't hesitate to abuse of `klo -n kuberocks deploy/demo` to debug any troubleshooting when pod is on error state. +{{< /alert >}} + +### Database seeding + +We'll try to seed the database directly from local. 
Change temporarily the connection string in `appsettings.json` to point to the production database: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.Console/appsettings.json" >}} + +```json +{ + "ConnectionStrings": { + "DefaultConnection": "Host=localhost:54321;Username=demo;Password='xxx';Database=demo;" + } +} +``` + +{{< /highlight >}} + +Then: + +```sh +# forward the production database port to local +kpf svc/postgresql -n postgres 54321:tcp-postgresql +# launch the seeding command +dotnet run --project src/KubeRocks.Console db seed +``` + +{{< alert >}} +We may obviously never do this on real production database, but as it's only for seeding, it will never concern them. +{{< /alert >}} + +Return to `https://demo.kube.rocks/Articles` to confirm articles are correctly returned. + +### Better logging with Serilog + +Default ASP.NET logging are not very standard, let's add Serilog for real requests logging with duration and status code: + +```sh +dotnet add src/KubeRocks.WebApi package Serilog.AspNetCore +``` + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} + +```cs +// ... + +builder.Host.UseSerilog((ctx, cfg) => cfg + .ReadFrom.Configuration(ctx.Configuration) + .WriteTo.Console() +); + +var app = builder.Build(); + +app.UseSerilogRequestLogging(); + +// ... +``` + +{{< /highlight >}} + +Then filtering through Loki stack should by far better. + +### Liveness & readiness + +All real production app should have liveness & readiness probes. It generally consists on particular URL which return the current health app status. We'll also include the DB access health. Let's add the standard `/healthz` endpoint, which is dead simple in ASP.NET Core: + +```sh +dotnet add src/KubeRocks.WebApi package Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore +``` + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} + +```cs +// ... + +builder.Services + .AddHealthChecks() + .AddDbContextCheck(); + +var app = builder.Build(); + +// ... + +app.MapControllers(); +app.MapHealthChecks("/healthz"); + +app.Run(); +``` + +{{< /highlight >}} + +And you're done ! Go to `https://demo.kube.rocks/healthz` to confirm it's working. Try to stop the database with `docker compose stop` and check the healthz endpoint again, it should return `503` status code. + +{{< alert >}} +The `Microsoft.Extensions.Diagnostics.HealthChecks` package is very extensible and you can add any custom check to enrich the health app status. +{{< /alert >}} + +And finally the probes: + +{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} + +```yaml +# ... +spec: + # ... + template: + # ... + spec: + # ... + containers: + - name: api + # ... + livenessProbe: + httpGet: + path: /healthz + port: 80 + initialDelaySeconds: 10 + periodSeconds: 10 + readinessProbe: + httpGet: + path: /healthz + port: 80 + initialDelaySeconds: 10 + periodSeconds: 10 +``` + +{{< /highlight >}} + +{{< alert >}} +Be aware of difference between `liveness` and `readiness` probes. The first one is used to restart the pod if it's not responding, the second one is used to tell the pod is not ready to receive traffic, which is vital for preventing any downtime. +When **Rolling Update** strategy is used (the default), the old pod is not killed until the new one is ready (aka healthy). +{{< /alert >}} + +## Telemetry + +The last step but not least missing for a total integration with our monitored Kubernetes cluster is to add some telemetry to our app. 
We'll use `OpenTelemetry` for that, which becomes the standard library for metrics and tracing, by providing good integration to many languages. + +### Application metrics + +Install minimal ASP.NET Core metrics is really a no-brainer: + +```sh +dotnet add src/KubeRocks.WebApi package OpenTelemetry.AutoInstrumentation --prerelease +dotnet add src/KubeRocks.WebApi package OpenTelemetry.Extensions.Hosting --prerelease +dotnet add src/KubeRocks.WebApi package OpenTelemetry.Exporter.Prometheus.AspNetCore --prerelease +``` + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} + +```cs +//... + +builder.Services.AddOpenTelemetry() + .WithMetrics(b => + { + b + .AddAspNetCoreInstrumentation() + .AddPrometheusExporter(); + }); + +var app = builder.Build(); + +app.UseOpenTelemetryPrometheusScrapingEndpoint(); + +//... +``` + +{{< /highlight >}} + +Relaunch app and go to `https://demo.kube.rocks/metrics` to confirm it's working. It should show metrics after each endpoint call, simply try `https://demo.kube.rocks/Articles`. + +{{< alert >}} +.NET metrics are currently pretty basic, but the next .NET 8 version will provide far better metrics from internal components allowing some [useful dashboard](https://github.com/JamesNK/aspnetcore-grafana). +{{< /alert >}} + +#### Hide internal endpoints + +After push, you should see `/metrics` live. Let's step back and exclude this internal path from external public access. We have 2 options: + +* Force on the app side to listen only on private network on `/metrics` and `/healthz` endpoints +* Push all the app logic under `/api` path and let Traefik to include only this path + +Let's do the option 2. Add the `api/` prefix to controllers to expose: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Controllers/ArticlesController.cs" >}} + +```cs +//... +[ApiController] +[Route("api/[controller]")] +public class ArticlesController { + //... +} +``` + +{{< /highlight >}} + +Let's move Swagger UI under `/api` path too: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} + +```cs +//... + +if (app.Environment.IsDevelopment()) +{ + app.UseSwagger(c => + { + c.RouteTemplate = "/api/{documentName}/swagger.json"; + }); + app.UseSwaggerUI(c => + { + c.SwaggerEndpoint("v1/swagger.json", "KubeRocks v1"); + c.RoutePrefix = "api"; + }); +} + +//... +``` + +{{< /highlight >}} + +{{< alert >}} +You may use ASP.NET API versioning, which work the same way with [versioning URL path](https://github.com/dotnet/aspnet-api-versioning/wiki/Versioning-via-the-URL-Path). +{{< /alert >}} + +All is left is to include only the endpoints under `/api` prefix on Traefik IngressRoute: + +{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} + +```yaml #... +apiVersion: traefik.io/v1alpha1 +kind: IngressRoute +#... +spec: + #... + routes: + - match: Host(`demo.kube.rocks`) && PathPrefix(`/api`) + #... +``` + +{{< /highlight >}} + +Now the new URL is `https://demo.kube.rocks/api/Articles`. Any path different from `api` will return the Traefik 404 page, and internal paths as `https://demo.kube.rocks/metrics` is not accessible anymore. An other additional advantage of this config, it's simple to put a separated frontend project under `/` path, which can use the under API without any CORS problem natively. 
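A quick way to validate this routing from outside the cluster is to compare both paths; purely a sanity check, assuming the app is already deployed behind Traefik:

```sh
# The API stays reachable under the /api prefix...
curl -i https://demo.kube.rocks/api/Articles

# ...while internal endpoints now answer with the Traefik 404 page
curl -i https://demo.kube.rocks/metrics
curl -i https://demo.kube.rocks/healthz
```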
+ +#### Prometheus integration + +It's only a matter of new ServiceMonitor config: + +{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} + +```yaml +--- +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: demo + namespace: kuberocks +spec: + endpoints: + - targetPort: 80 + selector: + matchLabels: + app: demo +``` + +{{< /highlight >}} + +After some time, You can finally use the Prometheus dashboard to query your app metrics. Use `{namespace="kuberocks",job="demo"}` PromQL query to list all available metrics: + +[![Prometheus metrics](prometheus-graph.png)](prometheus-graph.png) + +### Application tracing + +A more useful case for OpenTelemetry is to integrate it to a tracing backend. [Tempo](https://grafana.com/oss/tempo/) is a good candidate, which is a free open-source alternative to Jaeger, simpler to install by requiring a simple s3 as storage, and compatible to many protocols as Jaeger, OTLP, Zipkin. + +#### Installing Tempo + +It's another Helm Chart to install as well as the related grafana datasource: + +{{< highlight host="demo-kube-k3s" file="tracing.tf" >}} + +```tf +resource "kubernetes_namespace_v1" "tracing" { + metadata { + name = "tracing" + } +} + +resource "helm_release" "tempo" { + chart = "tempo" + version = "1.5.1" + repository = "https://grafana.github.io/helm-charts" + + name = "tempo" + namespace = kubernetes_namespace_v1.tracing.metadata[0].name + + set { + name = "tempo.storage.trace.backend" + value = "s3" + } + + set { + name = "tempo.storage.trace.s3.bucket" + value = var.s3_bucket + } + + set { + name = "tempo.storage.trace.s3.endpoint" + value = var.s3_endpoint + } + + set { + name = "tempo.storage.trace.s3.region" + value = var.s3_region + } + + set { + name = "tempo.storage.trace.s3.access_key" + value = var.s3_access_key + } + + set { + name = "tempo.storage.trace.s3.secret_key" + value = var.s3_secret_key + } + + set { + name = "serviceMonitor.enabled" + value = "true" + } +} + +resource "kubernetes_config_map_v1" "tempo_grafana_datasource" { + metadata { + name = "tempo-grafana-datasource" + namespace = kubernetes_namespace_v1.monitoring.metadata[0].name + labels = { + grafana_datasource = "1" + } + } + + data = { + "datasource.yaml" = <}} + +#### OpenTelemetry + +Let's firstly add another instrumentation package specialized for Npgsql driver used by EF Core to translate queries to PostgreSQL: + +```sh +dotnet add src/KubeRocks.WebApi package Npgsql.OpenTelemetry +``` + +Then bridge all needed instrumentation as well as the OTLP exporter: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} + +```cs +//... + +builder.Services.AddOpenTelemetry() + //... + .WithTracing(b => + { + b + .SetResourceBuilder(ResourceBuilder + .CreateDefault() + .AddService("KubeRocks.Demo") + .AddTelemetrySdk() + ) + .AddAspNetCoreInstrumentation(b => + { + b.Filter = ctx => + { + return ctx.Request.Path.StartsWithSegments("/api"); + }; + }) + .AddEntityFrameworkCoreInstrumentation() + .AddNpgsql() + .AddOtlpExporter(); + }); + +//... +``` + +{{< /highlight >}} + +Then add the exporter endpoint config in order to push traces to Tempo: + +{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} + +```yaml +#... +spec: + #... + template: + #... + spec: + #... + containers: + - name: api + #... + env: + #... 
+ - name: OTEL_EXPORTER_OTLP_ENDPOINT + value: http://tempo.tracing:4317 +``` + +{{< /highlight >}} + +Call some API URLs and get back to Grafana / Explore, select Tempo data source and search for query traces. You should see something like this: + +[![Tempo search](tempo-search.png)](tempo-search.png) + +Click on one specific trace to get details. You can go through HTTP requests, EF Core time response, and even underline SQL queries thanks to Npgsql instrumentation: + +[![Tempo traces](tempo-trace.png)](tempo-trace.png) + +#### Correlation with Loki + +It would be nice to have directly access to trace from logs through Loki search, as it's clearly a more seamless way than searching inside Tempo. + +For that we need to do 2 things : + +* Add the `TraceId` to logs in order to correlate trace with log. In ASP.NET Core, a `TraceId` correspond to a unique request, allowing isolation analyze for each request. +* Create a link in Grafana from the generated `TraceId` inside log and the detail Tempo view trace. + +So firstly, let's take care of the app part by attaching the OpenTelemetry TraceId to Serilog: + +```sh +dotnet add src/KubeRocks.WebApi package Serilog.Enrichers.Span +``` + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} + +```cs +//... builder.Host.UseSerilog((ctx, cfg) => cfg .ReadFrom.Configuration(ctx.Configuration) .Enrich.WithSpan() .WriteTo.Console( - outputTemplate: "[{Timestamp:HH:mm:ss} {Level:u3}] |{TraceId}| {Message:lj}{NewLine}{Exception}", - // Enforce culture - formatProvider: CultureInfo.InvariantCulture + outputTemplate: "[{Timestamp:HH:mm:ss} {Level:u3}] |{TraceId}| {Message:lj}{NewLine}{Exception}" ) ); -#... +//... ``` {{< /highlight >}} -Delete `WeatherForecastController.cs`. +It should now generate that kind of logs: -{{< highlight host="kuberocks-demo" file="tests/KubeRocks.FeatureTests.csproj" >}} +```txt +[23:22:57 INF] |aa51c7254aaa10a3f679a511444a5da5| HTTP GET /api/Articles responded 200 in 301.7052 ms +``` -```xml - +Now Let's adapt the Loki datasource by creating a derived field inside `jsonData` property: - - +{{< highlight host="demo-kube-k3s" file="logging.tf" >}} - CA1707 - +```tf +resource "kubernetes_config_map_v1" "loki_grafana_datasource" { + #... - - + data = { + "datasource.yaml" = <}} +This where the magic happens. The `\|(\w+)\|` regex will match and extract the `TraceId` inside the log, which is inside pipes, and create a link to Tempo trace detail view. + +[![Derived fields](loki-derived-fields.png)](loki-derived-fields.png) + +This will give us the nice link button as soon as you you click a log detail: + +[![Derived fields](loki-tempo-link.png)](loki-tempo-link.png) + ## 8th check ✅ -We have done for the basic functional telemetry ! There are infinite things to cover in this subject, but it's enough for this endless guide. Go [next part]({{< ref "/posts/18-build-your-own-kubernetes-cluster-part-9" >}}) for the final part with load testing, and some frontend ! +We have done for the basic functional telemetry ! There are infinite things to cover in this subject, but it's enough for this endless guide. Go [next part]({{< ref "/posts/19-build-your-own-kubernetes-cluster-part-10" >}}), we'll talk about feature testing, code metrics and code coverage. 
diff --git a/content/posts/17-build-your-own-kubernetes-cluster-part-8/loki-derived-fields.png b/content/posts/18-build-your-own-kubernetes-cluster-part-9/loki-derived-fields.png similarity index 100% rename from content/posts/17-build-your-own-kubernetes-cluster-part-8/loki-derived-fields.png rename to content/posts/18-build-your-own-kubernetes-cluster-part-9/loki-derived-fields.png diff --git a/content/posts/17-build-your-own-kubernetes-cluster-part-8/loki-tempo-link.png b/content/posts/18-build-your-own-kubernetes-cluster-part-9/loki-tempo-link.png similarity index 100% rename from content/posts/17-build-your-own-kubernetes-cluster-part-8/loki-tempo-link.png rename to content/posts/18-build-your-own-kubernetes-cluster-part-9/loki-tempo-link.png diff --git a/content/posts/17-build-your-own-kubernetes-cluster-part-8/prometheus-graph.png b/content/posts/18-build-your-own-kubernetes-cluster-part-9/prometheus-graph.png similarity index 100% rename from content/posts/17-build-your-own-kubernetes-cluster-part-8/prometheus-graph.png rename to content/posts/18-build-your-own-kubernetes-cluster-part-9/prometheus-graph.png diff --git a/content/posts/17-build-your-own-kubernetes-cluster-part-8/tempo-search.png b/content/posts/18-build-your-own-kubernetes-cluster-part-9/tempo-search.png similarity index 100% rename from content/posts/17-build-your-own-kubernetes-cluster-part-8/tempo-search.png rename to content/posts/18-build-your-own-kubernetes-cluster-part-9/tempo-search.png diff --git a/content/posts/17-build-your-own-kubernetes-cluster-part-8/tempo-trace.png b/content/posts/18-build-your-own-kubernetes-cluster-part-9/tempo-trace.png similarity index 100% rename from content/posts/17-build-your-own-kubernetes-cluster-part-8/tempo-trace.png rename to content/posts/18-build-your-own-kubernetes-cluster-part-9/tempo-trace.png diff --git a/content/posts/19-build-your-own-kubernetes-cluster-part-10/index.md b/content/posts/19-build-your-own-kubernetes-cluster-part-10/index.md index 336c424..6d40357 100644 --- a/content/posts/19-build-your-own-kubernetes-cluster-part-10/index.md +++ b/content/posts/19-build-your-own-kubernetes-cluster-part-10/index.md @@ -1,8 +1,8 @@ --- -title: "Setup a HA Kubernetes cluster Part X - Load testing & Frontend" -date: 2023-10-10 +title: "Setup a HA Kubernetes cluster Part X - QA with testing & code metrics" +date: 2023-10-09 description: "Follow this opinionated guide as starter-kit for your own Kubernetes platform..." -tags: ["kubernetes", "testing", "sonarqube", "load-testing", "k6"] +tags: ["kubernetes", "testing", "sonarqube", "xunit"] draft: true --- @@ -10,370 +10,107 @@ draft: true Be free from AWS/Azure/GCP by building a production grade On-Premise Kubernetes cluster on cheap VPS provider, fully GitOps managed, and with complete CI/CD tools 🎉 {{< /lead >}} -This is the **Part X** of more global topic tutorial. [Back to first part]({{< ref "/posts/10-build-your-own-kubernetes-cluster" >}}) for intro. +This is the **Part IX** of more global topic tutorial. [Back to first part]({{< ref "/posts/10-build-your-own-kubernetes-cluster" >}}) for intro. -## Load testing +## Code Metrics -When it comes load testing, k6 is a perfect tool for this job and integrate with many real time series database integration like Prometheus or InfluxDB. As we already have Prometheus, let's use it and avoid us a separate InfluxDB installation. First be sure to allow remote write by enable `enableRemoteWriteReceiver` in the Prometheus Helm chart. 
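If you used the kube-prometheus-stack chart, it's a single value on the Helm release. A sketch of what it looks like on the Terraform side (the exact key path may vary with your chart version):

```tf
# Assumed value path for kube-prometheus-stack, check against your chart version
set {
  name  = "prometheus.prometheusSpec.enableRemoteWriteReceiver"
  value = "true"
}
```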
It should be already done if you follow this tutorial. +SonarQube is leading the code metrics industry for a long time, embracing full Open Core model, and the community edition it's completely free of charge even for commercial use. It covers advanced code analysis, code coverage, code duplication, code smells, security vulnerabilities, etc. It ensures high quality code and help to keep it that way. -### K6 +### SonarQube installation -We'll reuse our flux repo and add some manifests for defining the load testing scenario. Firstly describe the scenario inside `ConfigMap` that scrape all articles and then each article: +SonarQube as its dedicated Helm chart which perfect for us. However, it's the most resource hungry component of our development stack so far (because Java project ? End of troll), so be sure to deploy it on almost empty free node, maybe a dedicated one. In fact, it's the last Helm chart for this tutorial, I promise! -{{< highlight host="demo-kube-flux" file="jobs/demo-k6.yaml" >}} +Create dedicated database for SonarQube same as usual. -```yml -apiVersion: v1 -kind: ConfigMap -metadata: - name: scenario - namespace: kuberocks -data: - script.js: | - import http from "k6/http"; - import { check } from "k6"; +{{< highlight host="demo-kube-k3s" file="main.tf" >}} - export default function () { - const size = 10; - let page = 1; - - let articles = [] - - do { - const res = http.get(`${__ENV.API_URL}/Articles?page=${page}&size=${size}`); - check(res, { - "status is 200": (r) => r.status == 200, - }); - - articles = res.json().articles; - page++; - - articles.forEach((article) => { - const res = http.get(`${__ENV.API_URL}/Articles/${article.slug}`); - check(res, { - "status is 200": (r) => r.status == 200, - }); - }); - } - while (articles.length > 0); - } -``` - -{{< /highlight >}} - -And add the k6 `Job` in the same file and configure it for Prometheus usage and mounting above scenario: - -{{< highlight host="demo-kube-flux" file="jobs/demo-k6.yaml" >}} - -```yml -#... ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: k6 - namespace: kuberocks -spec: - ttlSecondsAfterFinished: 0 - template: - spec: - restartPolicy: Never - containers: - - name: run - image: grafana/k6 - env: - - name: API_URL - value: https://demo.kube.rocks/api - - name: K6_VUS - value: "30" - - name: K6_DURATION - value: 1m - - name: K6_PROMETHEUS_RW_SERVER_URL - value: http://prometheus-operated.monitoring:9090/api/v1/write - command: - ["k6", "run", "-o", "experimental-prometheus-rw", "script.js"] - volumeMounts: - - name: scenario - mountPath: /home/k6 - tolerations: - - key: node-role.kubernetes.io/runner - operator: Exists - effect: NoSchedule - nodeSelector: - node-role.kubernetes.io/runner: "true" - volumes: - - name: scenario - configMap: - name: scenario -``` - -{{< /highlight >}} - -Use appropriate `tolerations` and `nodeSelector` for running the load testing in a node which have free CPU resource. You can play with `K6_VUS` and `K6_DURATION` environment variables in order to change the level of load testing. - -Then you can launch the job with `ka jobs/demo-k6.yaml`. Check quickly that the job is running via `klo -n kuberocks job/k6`: - -```txt - - /\ |‾‾| /‾‾/ /‾‾/ - /\ / \ | |/ / / / - / \/ \ | ( / ‾‾\ - / \ | |\ \ | (‾) | -/ __________ \ |__| \__\ \_____/ .io - -execution: local - script: script.js - output: Prometheus remote write (http://prometheus-operated.monitoring:9090/api/v1/write) - -scenarios: (100.00%) 1 scenario, 30 max VUs, 1m30s max duration (incl. 
graceful stop): - * default: 30 looping VUs for 1m0s (gracefulStop: 30s) -``` - -After 1 minute of run, job should finish and show some raw result: - -```txt -✓ status is 200 - -checks.........................: 100.00% ✓ 17748 ✗ 0 -data_received..................: 404 MB 6.3 MB/s -data_sent......................: 1.7 MB 26 kB/s -http_req_blocked...............: avg=242.43µs min=223ns med=728ns max=191.27ms p(90)=1.39µs p(95)=1.62µs -http_req_connecting............: avg=13.13µs min=0s med=0s max=9.48ms p(90)=0s p(95)=0s -http_req_duration..............: avg=104.22ms min=28.9ms med=93.45ms max=609.86ms p(90)=162.04ms p(95)=198.93ms - { expected_response:true }...: avg=104.22ms min=28.9ms med=93.45ms max=609.86ms p(90)=162.04ms p(95)=198.93ms -http_req_failed................: 0.00% ✓ 0 ✗ 17748 -http_req_receiving.............: avg=13.76ms min=32.71µs med=6.49ms max=353.13ms p(90)=36.04ms p(95)=51.36ms -http_req_sending...............: avg=230.04µs min=29.79µs med=93.16µs max=25.75ms p(90)=201.92µs p(95)=353.61µs -http_req_tls_handshaking.......: avg=200.57µs min=0s med=0s max=166.91ms p(90)=0s p(95)=0s -http_req_waiting...............: avg=90.22ms min=14.91ms med=80.76ms max=609.39ms p(90)=138.3ms p(95)=169.24ms -http_reqs......................: 17748 276.81409/s -iteration_duration.............: avg=5.39s min=3.97s med=5.35s max=7.44s p(90)=5.94s p(95)=6.84s -iterations.....................: 348 5.427727/s -vus............................: 7 min=7 max=30 -vus_max........................: 30 min=30 max=30 -``` - -As we use Prometheus for outputting the result, we can visualize it easily with Grafana. You just have to import [this dashboard](https://grafana.com/grafana/dashboards/18030-official-k6-test-result/): - -[![Grafana](grafana-k6.png)](grafana-k6.png) - -As we use Kubernetes, increase the loading performance horizontally is dead easy. Go to the deployment configuration of demo app for increasing replicas count, as well as Traefik, and compare the results. - -### Load balancing database - -So far, we only load balanced the stateless API, but what about the database part ? We have set up a replicated PostgreSQL cluster, however we have no use of the replica that stay sadly idle. But for that we have to distinguish write queries from scalable read queries. - -We can make use of the Bitnami [PostgreSQL HA](https://artifacthub.io/packages/helm/bitnami/postgresql-ha) instead of simple one. It adds the new component [Pgpool-II](https://pgpool.net/mediawiki/index.php/Main_Page) as main load balancer and detect failover. It's able to separate in real time write queries from read queries and send them to the master or the replica. The advantage: works natively for all apps without any changes. The cons: it consumes far more resources and add a new component to maintain. - -A 2nd solution is to separate query typologies from where it counts: the application. It requires some code changes, but it's clearly a far more efficient solution. Let's do this way. - -As Npgsql support load balancing [natively](https://www.npgsql.org/doc/failover-and-load-balancing.html), we don't need to add any Kubernetes service. We just have to create a clear distinction between read and write queries. One simple way is to create a separate RO `DbContext`. 
- -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Contexts/AppRoDbContext.cs" >}} - -```cs -namespace KubeRocks.Application.Contexts; - -using KubeRocks.Application.Entities; - -using Microsoft.EntityFrameworkCore; - -public class AppRoDbContext : DbContext -{ - public DbSet Users => Set(); - public DbSet
<Article> Articles => Set<Article>
(); - public DbSet Comments => Set(); - - public AppRoDbContext(DbContextOptions options) : base(options) - { - } +```tf +variable "sonarqube_db_password" { + type = string + sensitive = true } ``` {{< /highlight >}} -Register it in DI: +{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}} -{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Extensions/ServiceExtensions.cs" >}} +```tf +sonarqube_db_password = "xxx" +``` -```cs -public static class ServiceExtensions -{ - public static IServiceCollection AddKubeRocksServices(this IServiceCollection services, IConfiguration configuration) - { - return services - //... - .AddDbContext((options) => +{{< /highlight >}} + +{{< highlight host="demo-kube-k3s" file="sonarqube.tf" >}} + +```tf +resource "kubernetes_namespace_v1" "sonarqube" { + metadata { + name = "sonarqube" + } +} + +resource "helm_release" "sonarqube" { + chart = "sonarqube" + version = "10.1.0+628" + repository = "https://SonarSource.github.io/helm-chart-sonarqube" + + name = "sonarqube" + namespace = kubernetes_namespace_v1.sonarqube.metadata[0].name + + set { + name = "prometheusMonitoring.podMonitor.enabled" + value = "true" + } + + set { + name = "postgresql.enabled" + value = "false" + } + + set { + name = "jdbcOverwrite.enabled" + value = "true" + } + + set { + name = "jdbcOverwrite.jdbcUrl" + value = "jdbc:postgresql://postgresql-primary.postgres/sonarqube" + } + + set { + name = "jdbcOverwrite.jdbcUsername" + value = "sonarqube" + } + + set { + name = "jdbcOverwrite.jdbcPassword" + value = var.sonarqube_db_password + } +} + +resource "kubernetes_manifest" "sonarqube_ingress" { + manifest = { + apiVersion = "traefik.io/v1alpha1" + kind = "IngressRoute" + metadata = { + name = "sonarqube" + namespace = kubernetes_namespace_v1.sonarqube.metadata[0].name + } + spec = { + entryPoints = ["websecure"] + routes = [ + { + match = "Host(`sonarqube.${var.domain}`)" + kind = "Rule" + services = [ { - options.UseNpgsql( - configuration.GetConnectionString("DefaultRoConnection") - ?? - configuration.GetConnectionString("DefaultConnection") - ); - }); - } -} -``` - -{{< /highlight >}} - -We fall back to the RW connection string if the RO one is not defined. Then use it in the `ArticlesController` which as only read endpoints: - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Controllers/ArticlesController.cs" >}} - -```cs -//... - -public class ArticlesController -{ - private readonly AppRoDbContext _context; - - //... - - public ArticlesController(AppRoDbContext context) - { - _context = context; - } - - //... -} -``` - -{{< /highlight >}} - -Push and let it pass the CI. In the meantime, add the new RO connection: - -{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} - -```yaml -# ... -spec: - # ... - template: - # ... - spec: - # ... - containers: - - name: api - # ... - env: - - name: DB_PASSWORD - valueFrom: - secretKeyRef: - name: demo-db - key: password - - name: ConnectionStrings__DefaultConnection - value: Host=postgresql-primary.postgres;Username=demo;Password='$(DB_PASSWORD)';Database=demo; - - name: ConnectionStrings__DefaultRoConnection - value: Host=postgresql-primary.postgres,postgresql-read.postgres;Username=demo;Password='$(DB_PASSWORD)';Database=demo;Load Balance Hosts=true; -#... -``` - -{{< /highlight >}} - -We simply have to add multiple host like `postgresql-primary.postgres,postgresql-read.postgres` for the RO connection string and enable LB mode with `Load Balance Hosts=true`. 
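If you want reads to actively prefer the standby node instead of a simple round-robin between both hosts, Npgsql also understands target session attributes. A possible variant of the RO connection string (assuming Npgsql 6+, see its failover & load balancing documentation):

```yaml
# Hypothetical variant: prefer standby servers for the read-only context
- name: ConnectionStrings__DefaultRoConnection
  value: Host=postgresql-primary.postgres,postgresql-read.postgres;Username=demo;Password='$(DB_PASSWORD)';Database=demo;Load Balance Hosts=true;Target Session Attributes=prefer-standby;
```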
- -Once deployed, relaunch a load test with K6 and admire the DB load balancing in action on both storage servers with `htop` or directly compute pods by namespace in Grafana. - -[![Gafana DB load balancing](grafana-db-lb.png)](grafana-db-lb.png) - -## Frontend - -Let's finish this guide by a quick view of SPA frontend development as a separate project from backend. - -### Vue TS - -Create a new Vue.js project from [vitesse starter kit](https://github.com/antfu/vitesse-lite) (be sure to have pnpm, just a matter of `scoop/brew install pnpm`): - -```sh -npx degit antfu/vitesse-lite kuberocks-demo-ui -cd kuberocks-demo-ui -git init -git add . -git commit -m "Initial commit" -pnpm i -pnpm dev -``` - -Should launch app in `http://localhost:3333/`. Create a new `kuberocks-demo-ui` Gitea repo and push this code into it. Now lets quick and done for API calls. - -### Get around CORS and HTTPS with YARP - -As always when frontend is separated from backend, we have to deal with CORS. But I prefer to have one single URL for frontend + backend and get rid of CORS problem by simply call under `/api` path. Moreover, it'll be production ready without need to manage any `Vite` variable for API URL and we'll get HTTPS provided by dotnet. Back to API project. - -```sh -dotnet add src/KubeRocks.WebApi package Yarp.ReverseProxy -``` - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} - -```cs -//... - -var builder = WebApplication.CreateBuilder(args); - -builder.Services.AddReverseProxy() - .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy")); - -//... - -var app = builder.Build(); - -app.MapReverseProxy(); - -//... - -app.UseRouting(); - -//... -``` - -{{< /highlight >}} - -Note as we must add `app.UseRouting();` too in order to get Swagger UI working. - -The proxy configuration (only for development): - -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/appsettings.Development.json" >}} - -```json -{ - //... - "ReverseProxy": { - "Routes": { - "ServerRouteApi": { - "ClusterId": "Server", - "Match": { - "Path": "/api/{**catch-all}" - }, - "Transforms": [ - { - "PathRemovePrefix": "/api" - } - ] - }, - "ClientRoute": { - "ClusterId": "Client", - "Match": { - "Path": "{**catch-all}" + name = "sonarqube-sonarqube" + port = "http" + } + ] } - } - }, - "Clusters": { - "Client": { - "Destinations": { - "Client1": { - "Address": "http://localhost:3333" - } - } - }, - "Server": { - "Destinations": { - "Server1": { - "Address": "https://localhost:7159" - } - } - } + ] } } } @@ -381,227 +118,165 @@ The proxy configuration (only for development): {{< /highlight >}} -Now your frontend app should appear under `https://localhost:7159`, and API calls under `https://localhost:7159/api`. We now benefit from HTTPS for all app. Push API code. +Be sure to disable the PostgreSQL sub chart and use our self-hosted cluster with both `postgresql.enabled` and `jdbcOverwrite.enabled`. If needed, set proper `tolerations` and `nodeSelector` for deploying on a dedicated node. -### Typescript API generator +The installation take many minutes, be patient. Once done, you can access SonarQube on `https://sonarqube.kube.rocks` and login with `admin` / `admin`. -As we use OpenAPI, it's possible to generate typescript client for API calls. Add this package: +### Project configuration -```sh -pnpm add openapi-typescript -D -pnpm add openapi-typescript-fetch -``` +Firstly create a new project and retain the project key which is his identifier. 
Then create a **global analysis token** named `Concourse CI` that will be used for CI integration from your user account under `/account/security`. -Before generate the client model, go back to backend for fixing default nullable reference from `Swashbuckle.AspNetCore`: +Now we need to create a Kubernetes secret which contains this token value for Concourse CI, for usage inside the pipeline. The token is the one generated above. -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Filters/RequiredNotNullableSchemaFilter.cs" >}} +Add a new concourse terraform variable for the token: -```cs -using Microsoft.OpenApi.Models; +{{< highlight host="demo-kube-k3s" file="main.tf" >}} -using Swashbuckle.AspNetCore.SwaggerGen; - -namespace KubeRocks.WebApi.Filters; - -public class RequiredNotNullableSchemaFilter : ISchemaFilter -{ - public void Apply(OpenApiSchema schema, SchemaFilterContext context) - { - if (schema.Properties is null) - { - return; - } - - var notNullableProperties = schema - .Properties - .Where(x => !x.Value.Nullable && !schema.Required.Contains(x.Key)) - .ToList(); - - foreach (var property in notNullableProperties) - { - schema.Required.Add(property.Key); - } - } +```tf +variable "concourse_analysis_token" { + type = string + sensitive = true } ``` {{< /highlight >}} -{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} +{{< highlight host="demo-kube-k3s" file="terraform.tfvars" >}} -```cs -//... - -builder.Services.AddSwaggerGen(o => -{ - o.SupportNonNullableReferenceTypes(); - o.SchemaFilter(); -}); - -//... +```tf +concourse_analysis_token = "xxx" ``` {{< /highlight >}} -Sadly, without this boring step, many attributes will be nullable in the generated model, which must not be the case. Now generate the model: +The secret: -{{< highlight host="kuberocks-demo-ui" file="package.json" >}} +{{< highlight host="demo-kube-k3s" file="concourse.tf" >}} -```json -{ - //... - "scripts": { - //... - "openapi": "openapi-typescript http://localhost:5123/api/v1/swagger.json --output src/api/openapi.ts" - }, - //... +```tf +resource "kubernetes_secret_v1" "concourse_sonarqube" { + metadata { + name = "sonarqube" + namespace = "concourse-main" + } + + data = { + url = "https://sonarqube.${var.domain}" + analysis-token = var.concourse_analysis_token + } + + depends_on = [ + helm_release.concourse + ] } ``` {{< /highlight >}} -Use the HTTP version of swagger as you'll get a self certificate error. The use `pnpm openapi` to generate full TS model. Finally, describe API fetchers like so: +We are ready to tackle the pipeline for integration. -{{< highlight host="kuberocks-demo-ui" file="src/api/index.ts" >}} +### SonarScanner for .NET -```ts -import { Fetcher } from 'openapi-typescript-fetch' +As we use a dotnet project, we will use the official SonarQube scanner for .net. But sadly, as it's only a .NET CLI wrapper, it requires a java runtime to run and there is no official SonarQube docker image which contains both .NET SDK and Java runtime. But we have a CI now, so we can build our own QA image on our own private registry. 
-import type { components, paths } from './openapi' +Create a new Gitea repo dedicated for any custom docker images with this one single Dockerfile: -const fetcher = Fetcher.for() +{{< highlight host="demo-kube-images" file="dotnet-qa.dockerfile" >}} -type ArticleList = components['schemas']['ArticleListDto'] -type Article = components['schemas']['ArticleDto'] +```Dockerfile +FROM mcr.microsoft.com/dotnet/sdk:7.0 -const getArticles = fetcher.path('/api/Articles').method('get').create() -const getArticleBySlug = fetcher.path('/api/Articles/{slug}').method('get').create() +RUN apt-get update && apt-get install -y ca-certificates-java && apt-get install -y \ + openjdk-17-jre-headless \ + unzip \ + && rm -rf /var/lib/apt/lists/* -export type { Article, ArticleList } -export { - getArticles, - getArticleBySlug, -} +RUN dotnet tool install --global dotnet-sonarscanner +RUN dotnet tool install --global dotnet-coverage + +ENV PATH="${PATH}:/root/.dotnet/tools" ``` {{< /highlight >}} -We are now fully typed compliant with the API. +Note as we add the `dotnet-sonarscanner` tool to the path, we can use it directly in the pipeline without any extra step. I'll also add `dotnet-coverage` global tool for code coverage generation that we'll use later. -### Call the API +Then the pipeline: -Let's create a pretty basic list + detail vue pages: - -{{< highlight host="kuberocks-demo-ui" file="src/pages/articles/index.vue" >}} - -```vue - - - -``` - -{{< /highlight >}} - -{{< highlight host="kuberocks-demo-ui" file="src/pages/articles/[slug].vue" >}} - -```vue - - - -``` - -{{< /highlight >}} - -It should work flawlessly. - -### Frontend CI/CD - -The CI frontend is far simpler than backend. Create a new `demo-ui` pipeline: - -{{< highlight host="demo-kube-flux" file="pipelines/demo-ui.yaml" >}} +{{< highlight host="demo-kube-flux" file="pipelines/images.yaml" >}} ```yml resources: - - name: version - type: semver - source: - driver: git - uri: ((git.url))/kuberocks/demo-ui - branch: main - file: version - username: ((git.username)) - password: ((git.password)) - git_user: ((git.git-user)) - commit_message: ((git.commit-message)) - - name: source-code + - name: docker-images-git type: git icon: coffee source: - uri: ((git.url))/kuberocks/demo-ui + uri: https://gitea.kube.rocks/kuberocks/docker-images branch: main - username: ((git.username)) - password: ((git.password)) - - name: docker-image + - name: dotnet-qa-image type: registry-image icon: docker source: - repository: ((registry.name))/kuberocks/demo-ui - tag: latest + repository: ((registry.name))/kuberocks/dotnet-qa + tag: "7.0" username: ((registry.username)) password: ((registry.password)) +jobs: + - name: dotnet-qa + plan: + - get: docker-images-git + - task: build-image + privileged: true + config: + platform: linux + image_resource: + type: registry-image + source: + repository: concourse/oci-build-task + inputs: + - name: docker-images-git + outputs: + - name: image + params: + DOCKERFILE: docker-images-git/dotnet-qa.dockerfile + run: + path: build + - put: dotnet-qa-image + params: + image: image/image.tar +``` + +{{< /highlight >}} + +Update the `main.yaml` pipeline to add the new job, then trigger it manually from Concourse UI to add the new above pipeline: + +{{< highlight host="demo-kube-flux" file="pipelines/main.yaml" >}} + +```tf +#... + +jobs: + - name: configure-pipelines + plan: + #... 
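      # register the new images pipeline alongside the existing ones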
+ - set_pipeline: images + file: ci/pipelines/images.yaml +``` + +{{< /highlight >}} + +The pipeline should now start and build the image, trigger it manually if needed on Concourse UI. Once done, you can check it on your Gitea container packages that the new image `gitea.kube.rocks/kuberocks/dotnet-qa` is here. + +### Concourse pipeline integration + +It's finally time to reuse this QA image in our Concourse demo project pipeline. Update it accordingly: + +{{< highlight host="demo-kube-flux" file="pipelines/demo.yaml" >}} + +```yml +#... + jobs: - name: build plan: @@ -614,187 +289,463 @@ jobs: image_resource: type: registry-image source: - repository: node - tag: 18-buster - inputs: - - name: source-code - path: . - outputs: - - name: dist - path: dist - caches: - - path: .pnpm-store + repository: ((registry.name))/kuberocks/dotnet-qa + tag: "7.0" + username: ((registry.username)) + password: ((registry.password)) + #... run: path: /bin/sh args: - -ec - | - corepack enable - corepack prepare pnpm@latest-8 --activate - pnpm config set store-dir .pnpm-store - pnpm i - pnpm lint - pnpm build + dotnet format --verify-no-changes - - task: build-image - privileged: true - config: - platform: linux - image_resource: - type: registry-image - source: - repository: concourse/oci-build-task - inputs: - - name: source-code - path: . - - name: dist - path: dist - outputs: - - name: image - run: - path: build - - put: version - params: { bump: patch } - - put: docker-image - params: - additional_tags: version/number - image: image/image.tar + dotnet sonarscanner begin /k:"KubeRocks-Demo" /d:sonar.host.url="((sonarqube.url))" /d:sonar.token="((sonarqube.analysis-token))" + dotnet build -c Release + dotnet sonarscanner end /d:sonar.token="((sonarqube.analysis-token))" + + dotnet publish src/KubeRocks.WebApi -c Release -o publish --no-restore --no-build + + #... ``` {{< /highlight >}} -{{< highlight host="demo-kube-flux" file="pipelines/demo-ui.yaml" >}} +Note as we now use the `dotnet-qa` image and surround the build step by `dotnet sonarscanner begin` and `dotnet sonarscanner end` commands with appropriate credentials allowing Sonar CLI to send report to our SonarQube instance. Trigger the pipeline manually, all should pass, and the result will be pushed to SonarQube. -```tf +[![SonarQube](sonarqube-dashboard.png)](sonarqube-dashboard.png) + +## Feature testing + +Let's cover the feature testing by calling the API against a real database. This is the opportunity to cover the code coverage as well. + +### xUnit + +First add a dedicated database for test in the docker compose file as we won't interfere with the development database: + +{{< highlight host="kuberocks-demo" file="docker-compose.yml" >}} + +```yaml +version: "3" + +services: + #... + + db_test: + image: postgres:15 + environment: + POSTGRES_USER: main + POSTGRES_PASSWORD: main + POSTGRES_DB: main + ports: + - 54320:5432 +``` + +{{< /highlight >}} + +Expose the startup service of minimal API: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} + +```cs +//... 
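// With top-level statements the generated Program class is internal by default;
// this empty public partial class makes it referenceable from the test project via WebApplicationFactory<Program>.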
+ +public partial class Program +{ + protected Program() { } +} +``` + +{{< /highlight >}} + +Then add a testing JSON environment file for accessing our database `db_test` from the docker-compose.yml: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/appsettings.Testing.json" >}} + +```json +{ + "ConnectionStrings": { + "DefaultConnection": "Host=localhost:54320;Username=main;Password=main;Database=main;" + } +} +``` + +{{< /highlight >}} + +Now the test project: + +```sh +dotnet new xunit -o tests/KubeRocks.FeatureTests +dotnet sln add tests/KubeRocks.FeatureTests +dotnet add tests/KubeRocks.FeatureTests reference src/KubeRocks.WebApi +dotnet add tests/KubeRocks.FeatureTests package Microsoft.AspNetCore.Mvc.Testing +dotnet add tests/KubeRocks.FeatureTests package Respawn +dotnet add tests/KubeRocks.FeatureTests package FluentAssertions +``` + +The `WebApplicationFactory` that will use our testing environment: + +{{< highlight host="kuberocks-demo" file="tests/KubeRocks.FeatureTests/KubeRocksApiFactory.cs" >}} + +```cs +using Microsoft.AspNetCore.Mvc.Testing; +using Microsoft.Extensions.Hosting; + +namespace KubeRocks.FeatureTests; + +public class KubeRocksApiFactory : WebApplicationFactory +{ + protected override IHost CreateHost(IHostBuilder builder) + { + builder.UseEnvironment("Testing"); + + return base.CreateHost(builder); + } +} +``` + +{{< /highlight >}} + +The base test class for all test classes that manages database cleanup thanks to `Respawn`: + +{{< highlight host="kuberocks-demo" file="tests/KubeRocks.FeatureTests/TestBase.cs" >}} + +```cs +using KubeRocks.Application.Contexts; + +using Microsoft.EntityFrameworkCore; +using Microsoft.Extensions.DependencyInjection; + +using Npgsql; + +using Respawn; +using Respawn.Graph; + +namespace KubeRocks.FeatureTests; + +[Collection("Sequencial")] +public class TestBase : IClassFixture, IAsyncLifetime +{ + protected KubeRocksApiFactory Factory { get; private set; } + + protected TestBase(KubeRocksApiFactory factory) + { + Factory = factory; + } + + public async Task RefreshDatabase() + { + using var scope = Factory.Services.CreateScope(); + + using var conn = new NpgsqlConnection( + scope.ServiceProvider.GetRequiredService().Database.GetConnectionString() + ); + + await conn.OpenAsync(); + + var respawner = await Respawner.CreateAsync(conn, new RespawnerOptions + { + TablesToIgnore = new Table[] { "__EFMigrationsHistory" }, + DbAdapter = DbAdapter.Postgres + }); + + await respawner.ResetAsync(conn); + } + + public Task InitializeAsync() + { + return RefreshDatabase(); + } + + public Task DisposeAsync() + { + return Task.CompletedTask; + } +} +``` + +{{< /highlight >}} + +Note the `Collection` attribute that will force the test classes to run sequentially, required as we will use the same database for all tests. 
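If you prefer not to tag every test class, xUnit can also be told globally to stop running test collections in parallel. Here is an optional sketch (file name and location are assumed, and the file must be copied to the output directory, for example with `CopyToOutputDirectory` in the test csproj):

{{< highlight host="kuberocks-demo" file="tests/KubeRocks.FeatureTests/xunit.runner.json" >}}

```json
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "parallelizeTestCollections": false
}
```

{{< /highlight >}}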
+ +Finally, the tests for the 2 endpoints of our articles controller: + +{{< highlight host="kuberocks-demo" file="tests/KubeRocks.FeatureTests/Articles/ArticlesListTests.cs" >}} + +```cs +using System.Net.Http.Json; + +using FluentAssertions; + +using KubeRocks.Application.Contexts; +using KubeRocks.Application.Entities; +using KubeRocks.WebApi.Models; + +using Microsoft.Extensions.DependencyInjection; + +using static KubeRocks.WebApi.Controllers.ArticlesController; + +namespace KubeRocks.FeatureTests.Articles; + +public class ArticlesListTests : TestBase +{ + public ArticlesListTests(KubeRocksApiFactory factory) : base(factory) { } + + [Fact] + public async Task Can_Paginate_Articles() + { + using (var scope = Factory.Services.CreateScope()) + { + var db = scope.ServiceProvider.GetRequiredService(); + + var user = db.Users.Add(new User + { + Name = "John Doe", + Email = "john.doe@email.com" + }); + + db.Articles.AddRange(Enumerable.Range(1, 50).Select(i => new Article + { + Title = $"Test Title {i}", + Slug = $"test-title-{i}", + Description = "Test Description", + Body = "Test Body", + Author = user.Entity, + })); + + await db.SaveChangesAsync(); + } + + var response = await Factory.CreateClient().GetAsync("/api/Articles?page=1&size=20"); + + response.EnsureSuccessStatusCode(); + + var body = (await response.Content.ReadFromJsonAsync())!; + + body.Articles.Count().Should().Be(20); + body.ArticlesCount.Should().Be(50); + + body.Articles.First().Should().BeEquivalentTo(new + { + Title = "Test Title 50", + Description = "Test Description", + Body = "Test Body", + Author = new + { + Name = "John Doe" + }, + }); + } + + [Fact] + public async Task Can_Get_Article() + { + using (var scope = Factory.Services.CreateScope()) + { + var db = scope.ServiceProvider.GetRequiredService(); + + db.Articles.Add(new Article + { + Title = $"Test Title", + Slug = $"test-title", + Description = "Test Description", + Body = "Test Body", + Author = new User + { + Name = "John Doe", + Email = "john.doe@email.com" + } + }); + + await db.SaveChangesAsync(); + } + + var response = await Factory.CreateClient().GetAsync($"/api/Articles/test-title"); + + response.EnsureSuccessStatusCode(); + + var body = (await response.Content.ReadFromJsonAsync())!; + + body.Should().BeEquivalentTo(new + { + Title = "Test Title", + Description = "Test Description", + Body = "Test Body", + Author = new + { + Name = "John Doe" + }, + }); + } +} +``` + +{{< /highlight >}} + +Ensure all tests passes with `dotnet test`. + +### CI tests & code coverage + +Now we need to integrate the tests in our CI pipeline. As we testing with a real database, create a new `demo_test` database through pgAdmin with basic `test` / `test` credentials. + +{{< alert >}} +In real world scenario, you should use a dedicated database for testing, and not the same as production. +{{< /alert >}} + +Let's edit the pipeline accordingly for tests: + +{{< highlight host="demo-kube-flux" file="pipelines/demo.yaml" >}} + +```yml #... jobs: - - name: configure-pipelines + - name: build plan: #... - - set_pipeline: demo-ui - file: ci/pipelines/demo-ui.yaml -``` -{{< /highlight >}} + - task: build-source + config: + #... 
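      # overrides appsettings so `dotnet test` targets the dedicated demo_test database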
+ params: + ConnectionStrings__DefaultConnection: "Host=postgres-primary.postgres;Username=test;Password=test;Database=demo_test" + run: + path: /bin/sh + args: + - -ec + - | + dotnet format --verify-no-changes -Apply it and put this nginx `Dockerfile` on frontend root project: + dotnet sonarscanner begin /k:"KubeRocks-Demo" /d:sonar.host.url="((sonarqube.url))" /d:sonar.token="((sonarqube.analysis-token))" /d:sonar.cs.vscoveragexml.reportsPaths=coverage.xml + dotnet build -c Release + dotnet-coverage collect 'dotnet test -c Release --no-restore --no-build --verbosity=normal' -f xml -o 'coverage.xml' + dotnet sonarscanner end /d:sonar.token="((sonarqube.analysis-token))" -{{< highlight host="kuberocks-demo-ui" file="Dockerfile" >}} + dotnet publish src/KubeRocks.WebApi -c Release -o publish --no-restore --no-build -```Dockerfile -FROM nginx:alpine - -COPY docker/nginx.conf /etc/nginx/conf.d/default.conf -COPY dist /usr/share/nginx/html -``` - -{{< /highlight >}} - -After push all CI should build correctly. Then the image policy for auto update: - -{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/images-demo-ui.yaml" >}} - -```yml -apiVersion: image.toolkit.fluxcd.io/v1beta1 -kind: ImageRepository -metadata: - name: demo-ui - namespace: flux-system -spec: - image: gitea.kube.rocks/kuberocks/demo-ui - interval: 1m0s - secretRef: - name: dockerconfigjson ---- -apiVersion: image.toolkit.fluxcd.io/v1beta1 -kind: ImagePolicy -metadata: - name: demo-ui - namespace: flux-system -spec: - imageRepositoryRef: - name: demo-ui - namespace: flux-system - policy: - semver: - range: 0.0.x -``` - -{{< /highlight >}} - -The deployment: - -{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo-ui.yaml" >}} - -```yml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: demo-ui - namespace: kuberocks -spec: - replicas: 2 - selector: - matchLabels: - app: demo-ui - template: - metadata: - labels: - app: demo-ui - spec: - imagePullSecrets: - - name: dockerconfigjson - containers: - - name: front - image: gitea.okami101.io/kuberocks/demo-ui:latest # {"$imagepolicy": "flux-system:image-demo-ui"} - ports: - - containerPort: 80 ---- -apiVersion: v1 -kind: Service -metadata: - name: demo-ui - namespace: kuberocks -spec: - selector: - app: demo-ui - ports: - - name: http - port: 80 -``` - -{{< /highlight >}} - -After push, the demo UI container should be deployed. The very last step is to add a new route to existing `IngressRoute` for frontend: - -{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} - -```yaml #... -apiVersion: traefik.io/v1alpha1 -kind: IngressRoute -#... -spec: - #... - routes: - - match: Host(`demo.kube.rocks`) - kind: Rule - services: - - name: demo-ui - port: http - - match: Host(`demo.kube.rocks`) && PathPrefix(`/api`) - #... ``` {{< /highlight >}} -Go to `https://demo.kube.rocks` to confirm if both app front & back are correctly connected ! +Note as we already include code coverage by using `dotnet-coverage` tool. Don't forget to precise the path of `coverage.xml` to `sonarscanner` CLI too. It's time to push our code with tests or trigger the pipeline manually to test our integration tests. -[![Frontend](frontend.png)](frontend.png) +If all goes well, you should see the tests results on SonarQube with some coverage done: -## Final check 🎊🏁🎊 +[![SonarQube](sonarqube-tests.png)](sonarqube-tests.png) -Congratulation if you're getting that far !!! 
+Coverage detail: -We have made an enough complete tour of Kubernetes cluster building on full GitOps mode. +[![SonarQube](sonarqube-cc.png)](sonarqube-cc.png) + +You may exclude some files from analysis by adding some project properties: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/KubeRocks.Application.csproj" >}} + +```xml + + + + + + appsettings.Testing.json + + + +``` + +{{< /highlight >}} + +Same for coverage: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/KubeRocks.Application.csproj" >}} + +```xml + + + + + + Migrations/**/* + + + +``` + +{{< /highlight >}} + +### Sonar Analyzer + +You can enforce many default sonar rules by using [Sonar Analyzer](https://github.com/SonarSource/sonar-dotnet) directly locally before any code push. + +Create this file at the root of your solution for enabling Sonar Analyzer globally: + +{{< highlight host="kuberocks-demo" file="Directory.Build.props" >}} + +```xml + + + latest-Recommended + true + true + + + + + +``` + +{{< /highlight >}} + +Any rule violation is treated as error at project building, which block the CI before execution of tests. Use `latest-All` as `AnalysisLevel` for psychopath mode. + +At this stage as soon this file is added, you should see some errors at building. If you use VSCode with correct C# extension, these errors will be highlighted directly in the editor. Here are some fixes: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} + +```cs +#... + +builder.Host.UseSerilog((ctx, cfg) => cfg + .ReadFrom.Configuration(ctx.Configuration) + .Enrich.WithSpan() + .WriteTo.Console( + outputTemplate: "[{Timestamp:HH:mm:ss} {Level:u3}] |{TraceId}| {Message:lj}{NewLine}{Exception}", + // Enforce culture + formatProvider: CultureInfo.InvariantCulture + ) +); + +#... +``` + +{{< /highlight >}} + +Delete `WeatherForecastController.cs`. + +{{< highlight host="kuberocks-demo" file="tests/KubeRocks.FeatureTests.csproj" >}} + +```xml + + + + + + CA1707 + + + + +``` + +{{< /highlight >}} + +## 9th check ✅ + +We have done for code quality process. Go to the [final part]({{< ref "/posts/20-build-your-own-kubernetes-cluster-part-11" >}}) with load testing, and some frontend ! 
diff --git a/content/posts/18-build-your-own-kubernetes-cluster-part-9/sonarqube-cc.png b/content/posts/19-build-your-own-kubernetes-cluster-part-10/sonarqube-cc.png similarity index 100% rename from content/posts/18-build-your-own-kubernetes-cluster-part-9/sonarqube-cc.png rename to content/posts/19-build-your-own-kubernetes-cluster-part-10/sonarqube-cc.png diff --git a/content/posts/18-build-your-own-kubernetes-cluster-part-9/sonarqube-dashboard.png b/content/posts/19-build-your-own-kubernetes-cluster-part-10/sonarqube-dashboard.png similarity index 100% rename from content/posts/18-build-your-own-kubernetes-cluster-part-9/sonarqube-dashboard.png rename to content/posts/19-build-your-own-kubernetes-cluster-part-10/sonarqube-dashboard.png diff --git a/content/posts/18-build-your-own-kubernetes-cluster-part-9/sonarqube-tests.png b/content/posts/19-build-your-own-kubernetes-cluster-part-10/sonarqube-tests.png similarity index 100% rename from content/posts/18-build-your-own-kubernetes-cluster-part-9/sonarqube-tests.png rename to content/posts/19-build-your-own-kubernetes-cluster-part-10/sonarqube-tests.png diff --git a/content/posts/19-build-your-own-kubernetes-cluster-part-10/frontend.png b/content/posts/20-build-your-own-kubernetes-cluster-part-11/frontend.png similarity index 100% rename from content/posts/19-build-your-own-kubernetes-cluster-part-10/frontend.png rename to content/posts/20-build-your-own-kubernetes-cluster-part-11/frontend.png diff --git a/content/posts/19-build-your-own-kubernetes-cluster-part-10/grafana-db-lb.png b/content/posts/20-build-your-own-kubernetes-cluster-part-11/grafana-db-lb.png similarity index 100% rename from content/posts/19-build-your-own-kubernetes-cluster-part-10/grafana-db-lb.png rename to content/posts/20-build-your-own-kubernetes-cluster-part-11/grafana-db-lb.png diff --git a/content/posts/19-build-your-own-kubernetes-cluster-part-10/grafana-k6.png b/content/posts/20-build-your-own-kubernetes-cluster-part-11/grafana-k6.png similarity index 100% rename from content/posts/19-build-your-own-kubernetes-cluster-part-10/grafana-k6.png rename to content/posts/20-build-your-own-kubernetes-cluster-part-11/grafana-k6.png diff --git a/content/posts/20-build-your-own-kubernetes-cluster-part-11/index.md b/content/posts/20-build-your-own-kubernetes-cluster-part-11/index.md new file mode 100644 index 0000000..055e852 --- /dev/null +++ b/content/posts/20-build-your-own-kubernetes-cluster-part-11/index.md @@ -0,0 +1,808 @@ +--- +title: "Setup a HA Kubernetes cluster Part XI - Load testing & Frontend" +date: 2023-10-10 +description: "Follow this opinionated guide as starter-kit for your own Kubernetes platform..." +tags: ["kubernetes", "testing", "sonarqube", "load-testing", "k6"] +draft: true +--- + +{{< lead >}} +Be free from AWS/Azure/GCP by building a production grade On-Premise Kubernetes cluster on cheap VPS provider, fully GitOps managed, and with complete CI/CD tools 🎉 +{{< /lead >}} + +This is the **Part X** of more global topic tutorial. [Back to first part]({{< ref "/posts/10-build-your-own-kubernetes-cluster" >}}) for intro. + +## Load testing + +When it comes load testing, k6 is a perfect tool for this job and integrate with many real time series database integration like Prometheus or InfluxDB. As we already have Prometheus, let's use it and avoid us a separate InfluxDB installation. First be sure to allow remote write by enable `enableRemoteWriteReceiver` in the Prometheus Helm chart. It should be already done if you follow this tutorial. 
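If it's not, the receiver can be enabled through the chart values. Here is a minimal sketch, assuming the kube-prometheus-stack chart managed by Terraform as in the monitoring part (resource and file names may differ in your setup):

```tf
# add to the existing kube-prometheus-stack helm_release
set {
  name  = "prometheus.prometheusSpec.enableRemoteWriteReceiver"
  value = "true"
}
```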
+ +### K6 + +We'll reuse our flux repo and add some manifests for defining the load testing scenario. Firstly describe the scenario inside `ConfigMap` that scrape all articles and then each article: + +{{< highlight host="demo-kube-flux" file="jobs/demo-k6.yaml" >}} + +```yml +apiVersion: v1 +kind: ConfigMap +metadata: + name: scenario + namespace: kuberocks +data: + script.js: | + import http from "k6/http"; + import { check } from "k6"; + + export default function () { + const size = 10; + let page = 1; + + let articles = [] + + do { + const res = http.get(`${__ENV.API_URL}/Articles?page=${page}&size=${size}`); + check(res, { + "status is 200": (r) => r.status == 200, + }); + + articles = res.json().articles; + page++; + + articles.forEach((article) => { + const res = http.get(`${__ENV.API_URL}/Articles/${article.slug}`); + check(res, { + "status is 200": (r) => r.status == 200, + }); + }); + } + while (articles.length > 0); + } +``` + +{{< /highlight >}} + +And add the k6 `Job` in the same file and configure it for Prometheus usage and mounting above scenario: + +{{< highlight host="demo-kube-flux" file="jobs/demo-k6.yaml" >}} + +```yml +#... +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: k6 + namespace: kuberocks +spec: + ttlSecondsAfterFinished: 0 + template: + spec: + restartPolicy: Never + containers: + - name: run + image: grafana/k6 + env: + - name: API_URL + value: https://demo.kube.rocks/api + - name: K6_VUS + value: "30" + - name: K6_DURATION + value: 1m + - name: K6_PROMETHEUS_RW_SERVER_URL + value: http://prometheus-operated.monitoring:9090/api/v1/write + command: + ["k6", "run", "-o", "experimental-prometheus-rw", "script.js"] + volumeMounts: + - name: scenario + mountPath: /home/k6 + tolerations: + - key: node-role.kubernetes.io/runner + operator: Exists + effect: NoSchedule + nodeSelector: + node-role.kubernetes.io/runner: "true" + volumes: + - name: scenario + configMap: + name: scenario +``` + +{{< /highlight >}} + +Use appropriate `tolerations` and `nodeSelector` for running the load testing in a node which have free CPU resource. You can play with `K6_VUS` and `K6_DURATION` environment variables in order to change the level of load testing. + +Then you can launch the job with `ka jobs/demo-k6.yaml`. Check quickly that the job is running via `klo -n kuberocks job/k6`: + +```txt + + /\ |‾‾| /‾‾/ /‾‾/ + /\ / \ | |/ / / / + / \/ \ | ( / ‾‾\ + / \ | |\ \ | (‾) | +/ __________ \ |__| \__\ \_____/ .io + +execution: local + script: script.js + output: Prometheus remote write (http://prometheus-operated.monitoring:9090/api/v1/write) + +scenarios: (100.00%) 1 scenario, 30 max VUs, 1m30s max duration (incl. 
graceful stop): + * default: 30 looping VUs for 1m0s (gracefulStop: 30s) +``` + +After 1 minute of run, job should finish and show some raw result: + +```txt +✓ status is 200 + +checks.........................: 100.00% ✓ 17748 ✗ 0 +data_received..................: 404 MB 6.3 MB/s +data_sent......................: 1.7 MB 26 kB/s +http_req_blocked...............: avg=242.43µs min=223ns med=728ns max=191.27ms p(90)=1.39µs p(95)=1.62µs +http_req_connecting............: avg=13.13µs min=0s med=0s max=9.48ms p(90)=0s p(95)=0s +http_req_duration..............: avg=104.22ms min=28.9ms med=93.45ms max=609.86ms p(90)=162.04ms p(95)=198.93ms + { expected_response:true }...: avg=104.22ms min=28.9ms med=93.45ms max=609.86ms p(90)=162.04ms p(95)=198.93ms +http_req_failed................: 0.00% ✓ 0 ✗ 17748 +http_req_receiving.............: avg=13.76ms min=32.71µs med=6.49ms max=353.13ms p(90)=36.04ms p(95)=51.36ms +http_req_sending...............: avg=230.04µs min=29.79µs med=93.16µs max=25.75ms p(90)=201.92µs p(95)=353.61µs +http_req_tls_handshaking.......: avg=200.57µs min=0s med=0s max=166.91ms p(90)=0s p(95)=0s +http_req_waiting...............: avg=90.22ms min=14.91ms med=80.76ms max=609.39ms p(90)=138.3ms p(95)=169.24ms +http_reqs......................: 17748 276.81409/s +iteration_duration.............: avg=5.39s min=3.97s med=5.35s max=7.44s p(90)=5.94s p(95)=6.84s +iterations.....................: 348 5.427727/s +vus............................: 7 min=7 max=30 +vus_max........................: 30 min=30 max=30 +``` + +As we use Prometheus for outputting the result, we can visualize it easily with Grafana. You just have to import [this dashboard](https://grafana.com/grafana/dashboards/18030-official-k6-test-result/): + +[![Grafana](grafana-k6.png)](grafana-k6.png) + +As we use Kubernetes, increase the loading performance horizontally is dead easy. Go to the deployment configuration of demo app for increasing replicas count, as well as Traefik, and compare the results. + +### Load balancing database + +So far, we only load balanced the stateless API, but what about the database part ? We have set up a replicated PostgreSQL cluster, however we have no use of the replica that stay sadly idle. But for that we have to distinguish write queries from scalable read queries. + +We can make use of the Bitnami [PostgreSQL HA](https://artifacthub.io/packages/helm/bitnami/postgresql-ha) instead of simple one. It adds the new component [Pgpool-II](https://pgpool.net/mediawiki/index.php/Main_Page) as main load balancer and detect failover. It's able to separate in real time write queries from read queries and send them to the master or the replica. The advantage: works natively for all apps without any changes. The cons: it consumes far more resources and add a new component to maintain. + +A 2nd solution is to separate query typologies from where it counts: the application. It requires some code changes, but it's clearly a far more efficient solution. Let's do this way. + +As Npgsql support load balancing [natively](https://www.npgsql.org/doc/failover-and-load-balancing.html), we don't need to add any Kubernetes service. We just have to create a clear distinction between read and write queries. One simple way is to create a separate RO `DbContext`. 
+
+{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Contexts/AppRoDbContext.cs" >}}
+
+```cs
+namespace KubeRocks.Application.Contexts;
+
+using KubeRocks.Application.Entities;
+
+using Microsoft.EntityFrameworkCore;
+
+public class AppRoDbContext : DbContext
+{
+    public DbSet<User> Users => Set<User>();
+    public DbSet<Article> Articles => Set<Article>
(); + public DbSet Comments => Set(); + + public AppRoDbContext(DbContextOptions options) : base(options) + { + } +} +``` + +{{< /highlight >}} + +Register it in DI: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Extensions/ServiceExtensions.cs" >}} + +```cs +public static class ServiceExtensions +{ + public static IServiceCollection AddKubeRocksServices(this IServiceCollection services, IConfiguration configuration) + { + return services + //... + .AddDbContext((options) => + { + options.UseNpgsql( + configuration.GetConnectionString("DefaultRoConnection") + ?? + configuration.GetConnectionString("DefaultConnection") + ); + }); + } +} +``` + +{{< /highlight >}} + +We fall back to the RW connection string if the RO one is not defined. Then use it in the `ArticlesController` which as only read endpoints: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Controllers/ArticlesController.cs" >}} + +```cs +//... + +public class ArticlesController +{ + private readonly AppRoDbContext _context; + + //... + + public ArticlesController(AppRoDbContext context) + { + _context = context; + } + + //... +} +``` + +{{< /highlight >}} + +Push and let it pass the CI. In the meantime, add the new RO connection: + +{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} + +```yaml +# ... +spec: + # ... + template: + # ... + spec: + # ... + containers: + - name: api + # ... + env: + - name: DB_PASSWORD + valueFrom: + secretKeyRef: + name: demo-db + key: password + - name: ConnectionStrings__DefaultConnection + value: Host=postgresql-primary.postgres;Username=demo;Password='$(DB_PASSWORD)';Database=demo; + - name: ConnectionStrings__DefaultRoConnection + value: Host=postgresql-primary.postgres,postgresql-read.postgres;Username=demo;Password='$(DB_PASSWORD)';Database=demo;Load Balance Hosts=true; +#... +``` + +{{< /highlight >}} + +We simply have to add multiple host like `postgresql-primary.postgres,postgresql-read.postgres` for the RO connection string and enable LB mode with `Load Balance Hosts=true`. + +Once deployed, relaunch a load test with K6 and admire the DB load balancing in action on both storage servers with `htop` or directly compute pods by namespace in Grafana. + +[![Gafana DB load balancing](grafana-db-lb.png)](grafana-db-lb.png) + +## Frontend + +Let's finish this guide by a quick view of SPA frontend development as a separate project from backend. + +### Vue TS + +Create a new Vue.js project from [vitesse starter kit](https://github.com/antfu/vitesse-lite) (be sure to have pnpm, just a matter of `scoop/brew install pnpm`): + +```sh +npx degit antfu/vitesse-lite kuberocks-demo-ui +cd kuberocks-demo-ui +git init +git add . +git commit -m "Initial commit" +pnpm i +pnpm dev +``` + +Should launch app in `http://localhost:3333/`. Create a new `kuberocks-demo-ui` Gitea repo and push this code into it. Now lets quick and done for API calls. + +### Get around CORS and HTTPS with YARP + +As always when frontend is separated from backend, we have to deal with CORS. But I prefer to have one single URL for frontend + backend and get rid of CORS problem by simply call under `/api` path. Moreover, it'll be production ready without need to manage any `Vite` variable for API URL and we'll get HTTPS provided by dotnet. Back to API project. + +```sh +dotnet add src/KubeRocks.WebApi package Yarp.ReverseProxy +``` + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} + +```cs +//... 
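// The additions below register YARP from the "ReverseProxy" configuration section and map it,
// so the SPA dev server and the API are both served behind this single HTTPS endpoint.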
+ +var builder = WebApplication.CreateBuilder(args); + +builder.Services.AddReverseProxy() + .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy")); + +//... + +var app = builder.Build(); + +app.MapReverseProxy(); + +//... + +app.UseRouting(); + +//... +``` + +{{< /highlight >}} + +Note as we must add `app.UseRouting();` too in order to get Swagger UI working. + +The proxy configuration (only for development): + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/appsettings.Development.json" >}} + +```json +{ + //... + "ReverseProxy": { + "Routes": { + "ServerRouteApi": { + "ClusterId": "Server", + "Match": { + "Path": "/api/{**catch-all}" + }, + "Transforms": [ + { + "PathRemovePrefix": "/api" + } + ] + }, + "ClientRoute": { + "ClusterId": "Client", + "Match": { + "Path": "{**catch-all}" + } + } + }, + "Clusters": { + "Client": { + "Destinations": { + "Client1": { + "Address": "http://localhost:3333" + } + } + }, + "Server": { + "Destinations": { + "Server1": { + "Address": "https://localhost:7159" + } + } + } + } + } +} +``` + +{{< /highlight >}} + +Now your frontend app should appear under `https://localhost:7159`, and API calls under `https://localhost:7159/api`. We now benefit from HTTPS for all app. Push API code. + +### Typescript API generator + +As we use OpenAPI, it's possible to generate typescript client for API calls. Add this package: + +```sh +pnpm add openapi-typescript -D +pnpm add openapi-typescript-fetch +``` + +Before generate the client model, go back to backend for forcing required by default for attributes when not nullable when using `Swashbuckle.AspNetCore`: + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Filters/RequiredNotNullableSchemaFilter.cs" >}} + +```cs +using Microsoft.OpenApi.Models; + +using Swashbuckle.AspNetCore.SwaggerGen; + +namespace KubeRocks.WebApi.Filters; + +public class RequiredNotNullableSchemaFilter : ISchemaFilter +{ + public void Apply(OpenApiSchema schema, SchemaFilterContext context) + { + if (schema.Properties is null) + { + return; + } + + var notNullableProperties = schema + .Properties + .Where(x => !x.Value.Nullable && !schema.Required.Contains(x.Key)) + .ToList(); + + foreach (var property in notNullableProperties) + { + schema.Required.Add(property.Key); + } + } +} +``` + +{{< /highlight >}} + +{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}} + +```cs +//... + +builder.Services.AddSwaggerGen(o => +{ + o.SupportNonNullableReferenceTypes(); + o.SchemaFilter(); +}); + +//... +``` + +{{< /highlight >}} + +You should now have proper required attributes for models in swagger UI: + +[![Frontend](swagger-ui-nullable.png)](swagger-ui-nullable.png) + +{{< alert >}} +Sadly, without this boring step, many attributes will be nullable when generating TypeScript models, and leads to headaches from client side by forcing us to manage nullable everywhere. +{{< /alert >}} + +Now generate the models: + +{{< highlight host="kuberocks-demo-ui" file="package.json" >}} + +```json +{ + //... + "scripts": { + //... + "openapi": "openapi-typescript http://localhost:5123/api/v1/swagger.json --output src/api/openapi.ts" + }, + //... +} +``` + +{{< /highlight >}} + +Use the HTTP version of swagger as you'll get a self certificate error. The use `pnpm openapi` to generate full TS model. 
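The generator needs the API running locally in order to download the swagger document, so use two terminals (ports assumed from the configuration above):

```sh
# from the kuberocks-demo backend repo: serve swagger.json
dotnet run --project src/KubeRocks.WebApi

# from the kuberocks-demo-ui repo: regenerate the typed client
pnpm openapi
```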
Finally, describe API fetchers like so: + +{{< highlight host="kuberocks-demo-ui" file="src/api/index.ts" >}} + +```ts +import { Fetcher } from 'openapi-typescript-fetch' + +import type { components, paths } from './openapi' + +const fetcher = Fetcher.for() + +type ArticleList = components['schemas']['ArticleListDto'] +type Article = components['schemas']['ArticleDto'] + +const getArticles = fetcher.path('/api/Articles').method('get').create() +const getArticleBySlug = fetcher.path('/api/Articles/{slug}').method('get').create() + +export type { Article, ArticleList } +export { + getArticles, + getArticleBySlug, +} +``` + +{{< /highlight >}} + +We are now fully typed compliant with the API. + +### Call the API + +Let's create a pretty basic list + detail vue pages: + +{{< highlight host="kuberocks-demo-ui" file="src/pages/articles/index.vue" >}} + +```vue + + + +``` + +{{< /highlight >}} + +{{< highlight host="kuberocks-demo-ui" file="src/pages/articles/[slug].vue" >}} + +```vue + + + +``` + +{{< /highlight >}} + +It should work flawlessly. + +### Frontend CI/CD + +The CI frontend is far simpler than backend. Create a new `demo-ui` pipeline: + +{{< highlight host="demo-kube-flux" file="pipelines/demo-ui.yaml" >}} + +```yml +resources: + - name: version + type: semver + source: + driver: git + uri: ((git.url))/kuberocks/demo-ui + branch: main + file: version + username: ((git.username)) + password: ((git.password)) + git_user: ((git.git-user)) + commit_message: ((git.commit-message)) + - name: source-code + type: git + icon: coffee + source: + uri: ((git.url))/kuberocks/demo-ui + branch: main + username: ((git.username)) + password: ((git.password)) + - name: docker-image + type: registry-image + icon: docker + source: + repository: ((registry.name))/kuberocks/demo-ui + tag: latest + username: ((registry.username)) + password: ((registry.password)) + +jobs: + - name: build + plan: + - get: source-code + trigger: true + + - task: build-source + config: + platform: linux + image_resource: + type: registry-image + source: + repository: node + tag: 18-buster + inputs: + - name: source-code + path: . + outputs: + - name: dist + path: dist + caches: + - path: .pnpm-store + run: + path: /bin/sh + args: + - -ec + - | + corepack enable + corepack prepare pnpm@latest-8 --activate + pnpm config set store-dir .pnpm-store + pnpm i + pnpm lint + pnpm build + + - task: build-image + privileged: true + config: + platform: linux + image_resource: + type: registry-image + source: + repository: concourse/oci-build-task + inputs: + - name: source-code + path: . + - name: dist + path: dist + outputs: + - name: image + run: + path: build + - put: version + params: { bump: patch } + - put: docker-image + params: + additional_tags: version/number + image: image/image.tar +``` + +{{< /highlight >}} + +{{< highlight host="demo-kube-flux" file="pipelines/demo-ui.yaml" >}} + +```tf +#... + +jobs: + - name: configure-pipelines + plan: + #... + - set_pipeline: demo-ui + file: ci/pipelines/demo-ui.yaml +``` + +{{< /highlight >}} + +Apply it and put this nginx `Dockerfile` on frontend root project: + +{{< highlight host="kuberocks-demo-ui" file="Dockerfile" >}} + +```Dockerfile +FROM nginx:alpine + +COPY docker/nginx.conf /etc/nginx/conf.d/default.conf +COPY dist /usr/share/nginx/html +``` + +{{< /highlight >}} + +After push all CI should build correctly. 
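Note that the Dockerfile above expects a `docker/nginx.conf`. If it's not already in your repo, here is a minimal sketch for serving the SPA, with every unknown route falling back to `index.html` (adapt it to your needs):

{{< highlight host="kuberocks-demo-ui" file="docker/nginx.conf" >}}

```nginx
server {
    listen 80;
    server_name _;

    root /usr/share/nginx/html;
    index index.html;

    location / {
        # SPA routing: serve the file if it exists, otherwise fall back to index.html
        try_files $uri $uri/ /index.html;
    }
}
```

{{< /highlight >}}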
Then the image policy for auto update: + +{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/images-demo-ui.yaml" >}} + +```yml +apiVersion: image.toolkit.fluxcd.io/v1beta1 +kind: ImageRepository +metadata: + name: demo-ui + namespace: flux-system +spec: + image: gitea.kube.rocks/kuberocks/demo-ui + interval: 1m0s + secretRef: + name: dockerconfigjson +--- +apiVersion: image.toolkit.fluxcd.io/v1beta1 +kind: ImagePolicy +metadata: + name: demo-ui + namespace: flux-system +spec: + imageRepositoryRef: + name: demo-ui + namespace: flux-system + policy: + semver: + range: 0.0.x +``` + +{{< /highlight >}} + +The deployment: + +{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo-ui.yaml" >}} + +```yml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: demo-ui + namespace: kuberocks +spec: + replicas: 2 + selector: + matchLabels: + app: demo-ui + template: + metadata: + labels: + app: demo-ui + spec: + imagePullSecrets: + - name: dockerconfigjson + containers: + - name: front + image: gitea.okami101.io/kuberocks/demo-ui:latest # {"$imagepolicy": "flux-system:image-demo-ui"} + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: demo-ui + namespace: kuberocks +spec: + selector: + app: demo-ui + ports: + - name: http + port: 80 +``` + +{{< /highlight >}} + +After push, the demo UI container should be deployed. The very last step is to add a new route to existing `IngressRoute` for frontend: + +{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}} + +```yaml +#... +apiVersion: traefik.io/v1alpha1 +kind: IngressRoute +#... +spec: + #... + routes: + - match: Host(`demo.kube.rocks`) + kind: Rule + services: + - name: demo-ui + port: http + - match: Host(`demo.kube.rocks`) && PathPrefix(`/api`) + #... +``` + +{{< /highlight >}} + +Go to `https://demo.kube.rocks` to confirm if both app front & back are correctly connected ! + +[![Frontend](frontend.png)](frontend.png) + +## Final check 🎊🏁🎊 + +Congratulation if you're getting that far !!! + +We have made an enough complete tour of Kubernetes cluster building on full GitOps mode. diff --git a/content/posts/20-build-your-own-kubernetes-cluster-part-11/swagger-ui-nullable.png b/content/posts/20-build-your-own-kubernetes-cluster-part-11/swagger-ui-nullable.png new file mode 100644 index 0000000..81d5904 Binary files /dev/null and b/content/posts/20-build-your-own-kubernetes-cluster-part-11/swagger-ui-nullable.png differ