rearrange

2023-08-29 14:43:36 +02:00
parent 8ee960c022
commit 54928ab4ef
17 changed files with 1872 additions and 1855 deletions

@@ -568,4 +568,4 @@ Wait the pod to be updated, then check the new endpoint `https://demo.kube.rocks
## 7th check ✅
We're done with the setup of our automated CI/CD workflow! Head to the [next part]({{< ref "/posts/18-build-your-own-kubernetes-cluster-part-9" >}}) to go further with a real DB app that handles automatic migrations & monitoring integration with OpenTelemetry!
We're done with the setup of our automated CI/CD workflow! Head to the [next part]({{< ref "/posts/18-build-your-own-kubernetes-cluster-part-9" >}}) to go further with a real DB app that handles automatic migrations.

@@ -1,8 +1,8 @@
---
title: "Setup a HA Kubernetes cluster Part IX - DB usage & Tracing with OpenTelemetry"
date: 2023-10-08
title: "Setup a HA Kubernetes cluster Part IX - Further deployment with DB"
date: 2023-10-09
description: "Follow this opinionated guide as starter-kit for your own Kubernetes platform..."
tags: ["kubernetes", "efcore", "serilog", "metrics", "opentelemetry", "tracing", "tempo"]
tags: ["kubernetes", "postgresql", "efcore"]
draft: true
---
@@ -10,11 +10,11 @@ draft: true
Be free from AWS/Azure/GCP by building a production-grade on-premise Kubernetes cluster on a cheap VPS provider, fully GitOps managed, and with complete CI/CD tools 🎉
{{< /lead >}}
This is **Part VIII** of a more global tutorial. [Back to first part]({{< ref "/posts/10-build-your-own-kubernetes-cluster" >}}) for the intro.
This is **Part IX** of a more global tutorial. [Back to first part]({{< ref "/posts/10-build-your-own-kubernetes-cluster" >}}) for the intro.
## Real DB App sample
Before going any further, let's add some DB usage to our sample app. We'll use the classical `Articles<->Authors<->Comments` relationships. First create a `docker-compose.yml` file at the root of the demo project:
Let's add some DB usage to our sample app. We'll use the classical `Articles<->Authors<->Comments` relationships. First create a `docker-compose.yml` file at the root of the demo project:
{{< highlight host="kuberocks-demo" file="docker-compose.yml" >}}
@@ -531,7 +531,7 @@ public class ArticlesController
Launch the app and check that `/Articles` and `/Articles/{slug}` endpoints are working as expected.
## Production grade deployment
## Deployment with database
### Database connection
@@ -649,480 +649,6 @@ We may obviously never do this on real production database, but as it's only for
Return to `https://demo.kube.rocks/Articles` to confirm articles are correctly returned.
### Better logging with Serilog
Default ASP.NET logging is not very structured; let's add Serilog for proper request logging with duration and status code:
```sh
dotnet add src/KubeRocks.WebApi package Serilog.AspNetCore
```
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}}
```cs
// ...
builder.Host.UseSerilog((ctx, cfg) => cfg
.ReadFrom.Configuration(ctx.Configuration)
.WriteTo.Console()
);
var app = builder.Build();
app.UseSerilogRequestLogging();
// ...
```
{{< /highlight >}}
Filtering through the Loki stack should now be far better.
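Note that `ReadFrom.Configuration(ctx.Configuration)` reads its settings from the `Serilog` section of app settings. As a minimal sketch, a section to tame the default ASP.NET noise might look like this (values are illustrative):

```json
{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft.AspNetCore": "Warning"
      }
    }
  }
}
```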
### Liveness & readiness
All real production apps should have liveness & readiness probes. They generally consist of a dedicated URL that returns the current app health status. We'll also include DB access in the health check. Let's add the standard `/healthz` endpoint, which is dead simple in ASP.NET Core:
```sh
dotnet add src/KubeRocks.WebApi package Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore
```
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}}
```cs
// ...
builder.Services
.AddHealthChecks()
.AddDbContextCheck<AppDbContext>();
var app = builder.Build();
// ...
app.MapControllers();
app.MapHealthChecks("/healthz");
app.Run();
```
{{< /highlight >}}
And you're done! Go to `https://demo.kube.rocks/healthz` to confirm it's working. Try stopping the database with `docker compose stop` and check the healthz endpoint again; it should now return a `503` status code.
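A quick way to verify from a terminal (assuming the app runs locally on port 5000; adapt to your port — the responses shown as comments are the ASP.NET health checks defaults):

```sh
# with the database up
curl -i http://localhost:5000/healthz
# HTTP/1.1 200 OK
# Healthy

# after `docker compose stop`
curl -i http://localhost:5000/healthz
# HTTP/1.1 503 Service Unavailable
# Unhealthy
```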
{{< alert >}}
The `Microsoft.Extensions.Diagnostics.HealthChecks` package is very extensible and you can add any custom check to enrich the reported app health status.
{{< /alert >}}
And finally the probes:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}}
```yaml
# ...
spec:
# ...
template:
# ...
spec:
# ...
containers:
- name: api
# ...
livenessProbe:
httpGet:
path: /healthz
port: 80
initialDelaySeconds: 10
periodSeconds: 10
readinessProbe:
httpGet:
path: /healthz
port: 80
initialDelaySeconds: 10
periodSeconds: 10
```
{{< /highlight >}}
{{< alert >}}
Be aware of the difference between `liveness` and `readiness` probes. The first is used to restart the pod if it's not responding, while the second tells Kubernetes the pod is not ready to receive traffic, which is vital for preventing any downtime.
When the **Rolling Update** strategy is used (the default), the old pod is not killed until the new one is ready (aka healthy).
{{< /alert >}}
## Telemetry
The last missing step for complete integration with our monitored Kubernetes cluster is to add some telemetry to our app. We'll use `OpenTelemetry` for that, which has become the standard for metrics and tracing, with good integrations for many languages.
### Application metrics
Installing minimal ASP.NET Core metrics is really a no-brainer:
```sh
dotnet add src/KubeRocks.WebApi package OpenTelemetry.AutoInstrumentation --prerelease
dotnet add src/KubeRocks.WebApi package OpenTelemetry.Extensions.Hosting --prerelease
dotnet add src/KubeRocks.WebApi package OpenTelemetry.Exporter.Prometheus.AspNetCore --prerelease
```
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}}
```cs
//...
builder.Services.AddOpenTelemetry()
.WithMetrics(b =>
{
b
.AddAspNetCoreInstrumentation()
.AddPrometheusExporter();
});
var app = builder.Build();
app.UseOpenTelemetryPrometheusScrapingEndpoint();
//...
```
{{< /highlight >}}
Relaunch the app and go to `https://demo.kube.rocks/metrics` to confirm it's working. It should show metrics after each endpoint call; simply try `https://demo.kube.rocks/Articles`.
{{< alert >}}
.NET metrics are currently pretty basic, but the upcoming .NET 8 will provide far better metrics from internal components, allowing some [useful dashboards](https://github.com/JamesNK/aspnetcore-grafana).
{{< /alert >}}
#### Hide internal endpoints
After pushing, you should see `/metrics` live. Let's step back and exclude this internal path from external public access. We have 2 options:
* Force the app to serve the `/metrics` and `/healthz` endpoints only on the private network
* Move all the app logic under the `/api` path and let Traefik expose only this path
Let's go with option 2. Add the `api/` prefix to the exposed controllers:
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Controllers/ArticlesController.cs" >}}
```cs
//...
[ApiController]
[Route("api/[controller]")]
public class ArticlesController {
//...
}
```
{{< /highlight >}}
Let's move the Swagger UI under the `/api` path too:
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}}
```cs
//...
if (app.Environment.IsDevelopment())
{
app.UseSwagger(c =>
{
c.RouteTemplate = "/api/{documentName}/swagger.json";
});
app.UseSwaggerUI(c =>
{
c.SwaggerEndpoint("v1/swagger.json", "KubeRocks v1");
c.RoutePrefix = "api";
});
}
//...
```
{{< /highlight >}}
{{< alert >}}
You may use ASP.NET API versioning, which works the same way with [versioning via the URL path](https://github.com/dotnet/aspnet-api-versioning/wiki/Versioning-via-the-URL-Path).
{{< /alert >}}
All that's left is to expose only the endpoints under the `/api` prefix in the Traefik IngressRoute:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}}
```yaml
#...
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
#...
spec:
#...
routes:
- match: Host(`demo.kube.rocks`) && PathPrefix(`/api`)
#...
```
{{< /highlight >}}
Now the new URL is `https://demo.kube.rocks/api/Articles`. Any path outside `/api` will return the Traefik 404 page, and internal paths such as `https://demo.kube.rocks/metrics` are no longer accessible. Another advantage of this config: it's easy to put a separate frontend project under the `/` path, which can consume the underlying API natively without any CORS issue.
#### Prometheus integration
It's only a matter of adding a new `ServiceMonitor` config:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}}
```yaml
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: demo
namespace: kuberocks
spec:
endpoints:
- targetPort: 80
selector:
matchLabels:
app: demo
```
{{< /highlight >}}
After some time, you can finally use the Prometheus dashboard to query your app metrics. Use the `{namespace="kuberocks",job="demo"}` PromQL query to list all available metrics:
[![Prometheus metrics](prometheus-graph.png)](prometheus-graph.png)
### Application tracing
A more interesting use case for OpenTelemetry is integrating it with a tracing backend. [Tempo](https://grafana.com/oss/tempo/) is a good candidate: a free open-source alternative to Jaeger, simpler to install as it only requires S3 for storage, and compatible with many protocols such as Jaeger, OTLP, and Zipkin.
#### Installing Tempo
It's another Helm chart to install, along with the related Grafana datasource:
{{< highlight host="demo-kube-k3s" file="tracing.tf" >}}
```tf
resource "kubernetes_namespace_v1" "tracing" {
metadata {
name = "tracing"
}
}
resource "helm_release" "tempo" {
chart = "tempo"
version = "1.5.1"
repository = "https://grafana.github.io/helm-charts"
name = "tempo"
namespace = kubernetes_namespace_v1.tracing.metadata[0].name
set {
name = "tempo.storage.trace.backend"
value = "s3"
}
set {
name = "tempo.storage.trace.s3.bucket"
value = var.s3_bucket
}
set {
name = "tempo.storage.trace.s3.endpoint"
value = var.s3_endpoint
}
set {
name = "tempo.storage.trace.s3.region"
value = var.s3_region
}
set {
name = "tempo.storage.trace.s3.access_key"
value = var.s3_access_key
}
set {
name = "tempo.storage.trace.s3.secret_key"
value = var.s3_secret_key
}
set {
name = "serviceMonitor.enabled"
value = "true"
}
}
resource "kubernetes_config_map_v1" "tempo_grafana_datasource" {
metadata {
name = "tempo-grafana-datasource"
namespace = kubernetes_namespace_v1.monitoring.metadata[0].name
labels = {
grafana_datasource = "1"
}
}
data = {
"datasource.yaml" = <<EOF
apiVersion: 1
datasources:
- name: Tempo
type: tempo
uid: tempo
url: http://tempo.tracing:3100/
access: proxy
EOF
}
}
```
{{< /highlight >}}
#### OpenTelemetry
First, let's add another instrumentation package specialized for the Npgsql driver, which EF Core uses to translate queries to PostgreSQL:
```sh
dotnet add src/KubeRocks.WebApi package Npgsql.OpenTelemetry
```
Then wire up all needed instrumentation as well as the OTLP exporter:
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}}
```cs
//...
builder.Services.AddOpenTelemetry()
//...
.WithTracing(b =>
{
b
.SetResourceBuilder(ResourceBuilder
.CreateDefault()
.AddService("KubeRocks.Demo")
.AddTelemetrySdk()
)
.AddAspNetCoreInstrumentation(b =>
{
b.Filter = ctx =>
{
return ctx.Request.Path.StartsWithSegments("/api");
};
})
.AddEntityFrameworkCoreInstrumentation()
.AddNpgsql()
.AddOtlpExporter();
});
//...
```
{{< /highlight >}}
Then add the exporter endpoint config in order to push traces to Tempo:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}}
```yaml
#...
spec:
#...
template:
#...
spec:
#...
containers:
- name: api
#...
env:
#...
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://tempo.tracing:4317
```
{{< /highlight >}}
Call some API URLs, then go back to Grafana / Explore, select the Tempo data source and search for traces. You should see something like this:
[![Tempo search](tempo-search.png)](tempo-search.png)
Click on a specific trace to get details. You can drill into HTTP requests, EF Core response times, and even the underlying SQL queries thanks to Npgsql instrumentation:
[![Tempo traces](tempo-trace.png)](tempo-trace.png)
#### Correlation with Loki
It would be nice to have direct access to the trace from the logs through Loki search, as it's clearly more seamless than searching inside Tempo.
For that we need to do 2 things:
* Add the `TraceId` to logs in order to correlate traces with logs. In ASP.NET Core, a `TraceId` corresponds to a unique request, allowing isolated analysis of each request.
* Create a link in Grafana from the `TraceId` extracted from the log to the Tempo trace detail view.
So first, let's take care of the app part by attaching the OpenTelemetry `TraceId` to Serilog:
```sh
dotnet add src/KubeRocks.WebApi package Serilog.Enrichers.Span
```
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}}
```cs
//...
builder.Host.UseSerilog((ctx, cfg) => cfg
.ReadFrom.Configuration(ctx.Configuration)
.Enrich.WithSpan()
.WriteTo.Console(
outputTemplate: "[{Timestamp:HH:mm:ss} {Level:u3}] |{TraceId}| {Message:lj}{NewLine}{Exception}"
)
);
//...
```
{{< /highlight >}}
It should now generate this kind of log:
```txt
[23:22:57 INF] |aa51c7254aaa10a3f679a511444a5da5| HTTP GET /api/Articles responded 200 in 301.7052 ms
```
Now let's adapt the Loki datasource by creating a derived field inside the `jsonData` property:
{{< highlight host="demo-kube-k3s" file="logging.tf" >}}
```tf
resource "kubernetes_config_map_v1" "loki_grafana_datasource" {
#...
data = {
"datasource.yaml" = <<EOF
apiVersion: 1
datasources:
- name: Loki
#...
jsonData:
derivedFields:
- datasourceName: Tempo
matcherRegex: "\\|(\\w+)\\|"
name: TraceID
url: "$$${__value.raw}"
datasourceUid: tempo
EOF
}
}
```
{{< /highlight >}}
This is where the magic happens. The `\|(\w+)\|` regex matches and extracts the `TraceId` between the pipes in the log line, and creates a link to the Tempo trace detail view.
[![Derived fields](loki-derived-fields.png)](loki-derived-fields.png)
This gives us a nice link button as soon as you open a log detail:
[![Derived fields](loki-tempo-link.png)](loki-tempo-link.png)
## 8th check ✅
We're done with the basic functional telemetry! There are endless things to cover on this subject, but it's enough for this endless guide. Head to the [next part]({{< ref "/posts/19-build-your-own-kubernetes-cluster-part-10" >}}), where we'll talk about feature testing, code metrics and code coverage.
We now have a slightly more realistic app. Head to the [next part]({{< ref "/posts/19-build-your-own-kubernetes-cluster-part-10" >}}), where we'll talk about further monitoring integration and tracing with OpenTelemetry.

@@ -0,0 +1,833 @@
---
title: "Setup a HA Kubernetes cluster Part XII - Load testing & Frontend"
date: 2023-10-12
description: "Follow this opinionated guide as starter-kit for your own Kubernetes platform..."
tags: ["kubernetes", "load-testing", "k6", "frontend", "vue", "typescript", "openapi"]
draft: true
---
{{< lead >}}
Be free from AWS/Azure/GCP by building a production-grade on-premise Kubernetes cluster on a cheap VPS provider, fully GitOps managed, and with complete CI/CD tools 🎉
{{< /lead >}}
This is **Part XII** of a more global tutorial. [Back to first part]({{< ref "/posts/10-build-your-own-kubernetes-cluster" >}}) for the intro.
## Load testing
When it comes to load testing, k6 is a perfect tool for the job and integrates with many time series databases like Prometheus or InfluxDB. As we already have Prometheus, let's use it and spare ourselves a separate InfluxDB installation. First be sure to allow remote write by enabling `enableRemoteWriteReceiver` in the Prometheus Helm chart. It should already be done if you followed this tutorial.
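If not, here is a sketch of the corresponding chart value, assuming the `kube-prometheus-stack` Helm release from the monitoring part (the resource name is illustrative):

```tf
resource "helm_release" "kube_prometheus_stack" {
  # ...

  # expose the Prometheus /api/v1/write endpoint for k6 remote write
  set {
    name  = "prometheus.prometheusSpec.enableRemoteWriteReceiver"
    value = "true"
  }
}
```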
### K6
We'll reuse our Flux repo and add some manifests to define the load testing scenario. First, describe the scenario inside a `ConfigMap`: it fetches the paginated list of all articles, then each article one by one:
{{< highlight host="demo-kube-flux" file="jobs/demo-k6.yaml" >}}
```yml
apiVersion: v1
kind: ConfigMap
metadata:
name: scenario
namespace: kuberocks
data:
script.js: |
import http from "k6/http";
import { check } from "k6";
export default function () {
const size = 10;
let page = 1;
let articles = []
do {
const res = http.get(`${__ENV.API_URL}/Articles?page=${page}&size=${size}`);
check(res, {
"status is 200": (r) => r.status == 200,
});
articles = res.json().articles;
page++;
articles.forEach((article) => {
const res = http.get(`${__ENV.API_URL}/Articles/${article.slug}`);
check(res, {
"status is 200": (r) => r.status == 200,
});
});
}
while (articles.length > 0);
}
```
{{< /highlight >}}
Then add the k6 `Job` in the same file, configured for Prometheus output and mounting the above scenario:
{{< highlight host="demo-kube-flux" file="jobs/demo-k6.yaml" >}}
```yml
#...
---
apiVersion: batch/v1
kind: Job
metadata:
name: k6
namespace: kuberocks
spec:
ttlSecondsAfterFinished: 0
template:
spec:
restartPolicy: Never
containers:
- name: run
image: grafana/k6
env:
- name: API_URL
value: https://demo.kube.rocks/api
- name: K6_VUS
value: "30"
- name: K6_DURATION
value: 1m
- name: K6_PROMETHEUS_RW_SERVER_URL
value: http://prometheus-operated.monitoring:9090/api/v1/write
command:
["k6", "run", "-o", "experimental-prometheus-rw", "script.js"]
volumeMounts:
- name: scenario
mountPath: /home/k6
tolerations:
- key: node-role.kubernetes.io/runner
operator: Exists
effect: NoSchedule
nodeSelector:
node-role.kubernetes.io/runner: "true"
volumes:
- name: scenario
configMap:
name: scenario
```
{{< /highlight >}}
Use appropriate `tolerations` and `nodeSelector` to run the load test on a node with free CPU resources. You can play with the `K6_VUS` and `K6_DURATION` environment variables to change the load level.
Then you can launch the job with `ka jobs/demo-k6.yaml`. Check quickly that the job is running via `klo -n kuberocks job/k6`:
```txt
/\ |‾‾| /‾‾/ /‾‾/
/\ / \ | |/ / / /
/ \/ \ | ( / ‾‾\
/ \ | |\ \ | (‾) |
/ __________ \ |__| \__\ \_____/ .io
execution: local
script: script.js
output: Prometheus remote write (http://prometheus-operated.monitoring:9090/api/v1/write)
scenarios: (100.00%) 1 scenario, 30 max VUs, 1m30s max duration (incl. graceful stop):
* default: 30 looping VUs for 1m0s (gracefulStop: 30s)
```
After 1 minute of running, the job should finish and show some raw results:
```txt
✓ status is 200
checks.........................: 100.00% ✓ 17748 ✗ 0
data_received..................: 404 MB 6.3 MB/s
data_sent......................: 1.7 MB 26 kB/s
http_req_blocked...............: avg=242.43µs min=223ns med=728ns max=191.27ms p(90)=1.39µs p(95)=1.62µs
http_req_connecting............: avg=13.13µs min=0s med=0s max=9.48ms p(90)=0s p(95)=0s
http_req_duration..............: avg=104.22ms min=28.9ms med=93.45ms max=609.86ms p(90)=162.04ms p(95)=198.93ms
{ expected_response:true }...: avg=104.22ms min=28.9ms med=93.45ms max=609.86ms p(90)=162.04ms p(95)=198.93ms
http_req_failed................: 0.00% ✓ 0 ✗ 17748
http_req_receiving.............: avg=13.76ms min=32.71µs med=6.49ms max=353.13ms p(90)=36.04ms p(95)=51.36ms
http_req_sending...............: avg=230.04µs min=29.79µs med=93.16µs max=25.75ms p(90)=201.92µs p(95)=353.61µs
http_req_tls_handshaking.......: avg=200.57µs min=0s med=0s max=166.91ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=90.22ms min=14.91ms med=80.76ms max=609.39ms p(90)=138.3ms p(95)=169.24ms
http_reqs......................: 17748 276.81409/s
iteration_duration.............: avg=5.39s min=3.97s med=5.35s max=7.44s p(90)=5.94s p(95)=6.84s
iterations.....................: 348 5.427727/s
vus............................: 7 min=7 max=30
vus_max........................: 30 min=30 max=30
```
As we output the results to Prometheus, we can easily visualize them with Grafana. You just have to import [this dashboard](https://grafana.com/grafana/dashboards/18030-official-k6-test-result/):
[![Grafana](grafana-k6.png)](grafana-k6.png)
As we use Kubernetes, scaling the serving capacity horizontally is dead easy. Increase the replicas count in the deployment configuration of the demo app, as well as Traefik's, and compare the results, as sketched below.
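Assuming the demo deployment manifest from the previous parts, it's a one-line change that Flux will reconcile:

```yaml
# excerpt of clusters/demo/kuberocks/deploy-demo.yaml
spec:
  # spread the API load over more pods
  replicas: 3
  # ...
```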
### Load balancing database
So far, we've only load balanced the stateless API, but what about the database part? We have set up a replicated PostgreSQL cluster, yet the replica sadly stays idle. To put it to use, we have to distinguish write queries from scalable read queries.
We could use the Bitnami [PostgreSQL HA](https://artifacthub.io/packages/helm/bitnami/postgresql-ha) chart instead of the simple one. It adds [Pgpool-II](https://pgpool.net/mediawiki/index.php/Main_Page) as the main load balancer and failover detector, able to separate write queries from read queries in real time and route them to the primary or the replica. The advantage: it works natively for all apps without any changes. The cons: it consumes far more resources and adds a new component to maintain.
A second solution is to separate query types where it counts: in the application. It requires some code changes, but it's clearly a far more efficient solution. Let's go this way.
As Npgsql supports load balancing [natively](https://www.npgsql.org/doc/failover-and-load-balancing.html), we don't need to add any Kubernetes service. We just have to create a clear distinction between read and write queries. One simple way is to create a separate read-only (RO) `DbContext`.
{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Contexts/AppRoDbContext.cs" >}}
```cs
namespace KubeRocks.Application.Contexts;
using KubeRocks.Application.Entities;
using Microsoft.EntityFrameworkCore;
public class AppRoDbContext : DbContext
{
public DbSet<User> Users => Set<User>();
public DbSet<Article> Articles => Set<Article>();
public DbSet<Comment> Comments => Set<Comment>();
public AppRoDbContext(DbContextOptions<AppRoDbContext> options) : base(options)
{
}
}
```
{{< /highlight >}}
Register it in DI:
{{< highlight host="kuberocks-demo" file="src/KubeRocks.Application/Extensions/ServiceExtensions.cs" >}}
```cs
public static class ServiceExtensions
{
public static IServiceCollection AddKubeRocksServices(this IServiceCollection services, IConfiguration configuration)
{
return services
//...
.AddDbContext<AppRoDbContext>((options) =>
{
options.UseNpgsql(
configuration.GetConnectionString("DefaultRoConnection")
??
configuration.GetConnectionString("DefaultConnection")
);
});
}
}
```
{{< /highlight >}}
We fall back to the RW connection string if the RO one is not defined. Then use it in `ArticlesController`, which has only read endpoints:
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Controllers/ArticlesController.cs" >}}
```cs
//...
public class ArticlesController
{
private readonly AppRoDbContext _context;
//...
public ArticlesController(AppRoDbContext context)
{
_context = context;
}
//...
}
```
{{< /highlight >}}
Push and let the CI pass. In the meantime, add the new RO connection:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}}
```yaml
# ...
spec:
# ...
template:
# ...
spec:
# ...
containers:
- name: api
# ...
env:
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: demo-db
key: password
- name: ConnectionStrings__DefaultConnection
value: Host=postgresql-primary.postgres;Username=demo;Password='$(DB_PASSWORD)';Database=demo;
- name: ConnectionStrings__DefaultRoConnection
value: Host=postgresql-primary.postgres,postgresql-read.postgres;Username=demo;Password='$(DB_PASSWORD)';Database=demo;Load Balance Hosts=true;
#...
```
{{< /highlight >}}
We simply have to list multiple hosts, as in `postgresql-primary.postgres,postgresql-read.postgres`, for the RO connection string and enable load balancing with `Load Balance Hosts=true`.
Once deployed, relaunch a load test with k6 and admire the DB load balancing in action on both storage servers with `htop`, or directly via compute metrics of pods by namespace in Grafana.
[![Gafana DB load balancing](grafana-db-lb.png)](grafana-db-lb.png)
## Frontend
Let's finish this guide with a quick look at SPA frontend development as a separate project from the backend.
### Vue TS
Create a new Vue.js project from the [vitesse starter kit](https://github.com/antfu/vitesse-lite) (be sure to have pnpm installed, just a matter of `scoop/brew install pnpm`):
```sh
npx degit antfu/vitesse-lite kuberocks-demo-ui
cd kuberocks-demo-ui
git init
git add .
git commit -m "Initial commit"
pnpm i
pnpm dev
```
It should launch the app on `http://localhost:3333/`. Create a new `kuberocks-demo-ui` Gitea repo and push this code into it. Now let's wire up API calls, quick and easy.
### Get around CORS and HTTPS with YARP
As always when the frontend is separated from the backend, we have to deal with CORS. But I prefer to have one single URL for frontend + backend and get rid of the CORS problem by simply calling the API under the `/api` path. Moreover, it'll be production ready without needing to manage any `Vite` variable for the API URL, and we'll get HTTPS provided by dotnet. Back to the API project.
For convenience, let's replace the randomly generated ASP.NET ports with predefined ones:
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Properties/launchSettings.json" >}}
```json
{
//...
"profiles": {
"http": {
//...
"applicationUrl": "http://localhost:5000",
//...
},
"https": {
//...
"applicationUrl": "https://localhost:5001;http://localhost:5000",
//...
},
//...
}
}
```
{{< /highlight >}}
```sh
dotnet add src/KubeRocks.WebApi package Yarp.ReverseProxy
```
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}}
```cs
//...
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddReverseProxy()
.LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));
//...
var app = builder.Build();
app.MapReverseProxy();
//...
app.UseRouting();
//...
```
{{< /highlight >}}
Note that we must also add `app.UseRouting();` in order to keep the Swagger UI working.
The proxy configuration (only for development):
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/appsettings.Development.json" >}}
```json
{
//...
"ReverseProxy": {
"Routes": {
"ServerRouteApi": {
"ClusterId": "Server",
"Match": {
"Path": "/api/{**catch-all}"
},
"Transforms": [
{
"PathRemovePrefix": "/api"
}
]
},
"ClientRoute": {
"ClusterId": "Client",
"Match": {
"Path": "{**catch-all}"
}
}
},
"Clusters": {
"Client": {
"Destinations": {
"Client1": {
"Address": "http://localhost:3333"
}
}
},
"Server": {
"Destinations": {
"Server1": {
"Address": "https://localhost:5001"
}
}
}
}
}
}
```
{{< /highlight >}}
Now your frontend app should appear under `https://localhost:5001`, and API calls under `https://localhost:5001/api`. We now benefit from HTTPS for the whole app. Push the API code.
### Typescript API generator
As we use OpenAPI, it's possible to generate a TypeScript client for API calls. Add these packages:
```sh
pnpm add openapi-typescript -D
pnpm add openapi-typescript-fetch
```
Before generating the client model, go back to the backend to force non-nullable attributes to be marked as required when using `Swashbuckle.AspNetCore`:
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Filters/RequiredNotNullableSchemaFilter.cs" >}}
```cs
using Microsoft.OpenApi.Models;
using Swashbuckle.AspNetCore.SwaggerGen;
namespace KubeRocks.WebApi.Filters;
public class RequiredNotNullableSchemaFilter : ISchemaFilter
{
public void Apply(OpenApiSchema schema, SchemaFilterContext context)
{
if (schema.Properties is null)
{
return;
}
var notNullableProperties = schema
.Properties
.Where(x => !x.Value.Nullable && !schema.Required.Contains(x.Key))
.ToList();
foreach (var property in notNullableProperties)
{
schema.Required.Add(property.Key);
}
}
}
```
{{< /highlight >}}
{{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Program.cs" >}}
```cs
//...
builder.Services.AddSwaggerGen(o =>
{
o.SupportNonNullableReferenceTypes();
o.SchemaFilter<RequiredNotNullableSchemaFilter>();
});
//...
```
{{< /highlight >}}
You should now have proper required attributes for models in the Swagger UI:
[![Frontend](swagger-ui-nullable.png)](swagger-ui-nullable.png)
{{< alert >}}
Sadly, without this boring step, many attributes would be nullable in the generated TypeScript models, leading to headaches on the client side by forcing us to handle nullability everywhere.
{{< /alert >}}
Now generate the models:
{{< highlight host="kuberocks-demo-ui" file="package.json" >}}
```json
{
//...
"scripts": {
//...
"openapi": "openapi-typescript http://localhost:5000/api/v1/swagger.json --output src/api/openapi.ts"
},
//...
}
```
{{< /highlight >}}
Use the HTTP version of the swagger endpoint, as you'd otherwise get a self-signed certificate error. Then use `pnpm openapi` to generate the full TS model. Finally, describe the API fetchers like so:
{{< highlight host="kuberocks-demo-ui" file="src/api/index.ts" >}}
```ts
import { Fetcher } from 'openapi-typescript-fetch'
import type { components, paths } from './openapi'
const fetcher = Fetcher.for<paths>()
type ArticleList = components['schemas']['ArticleListDto']
type Article = components['schemas']['ArticleDto']
const getArticles = fetcher.path('/api/Articles').method('get').create()
const getArticleBySlug = fetcher.path('/api/Articles/{slug}').method('get').create()
export type { Article, ArticleList }
export {
getArticles,
getArticleBySlug,
}
```
{{< /highlight >}}
We are now fully type-compliant with the API.
### Call the API
Let's create pretty basic list + detail Vue pages:
{{< highlight host="kuberocks-demo-ui" file="src/pages/articles/index.vue" >}}
```vue
<script lang="ts" setup>
import { getArticles } from '~/api'
import type { ArticleList } from '~/api'
const articles = ref<ArticleList[]>([])
async function loadArticles() {
const { data } = await getArticles({
page: 1,
size: 10,
})
articles.value = data.articles
}
loadArticles()
</script>
<template>
<RouterLink
v-for="(article, i) in articles"
:key="i"
:to="`/articles/${article.slug}`"
>
<h3>{{ article.title }}</h3>
</RouterLink>
</template>
```
{{< /highlight >}}
{{< highlight host="kuberocks-demo-ui" file="src/pages/articles/[slug].vue" >}}
```vue
<script lang="ts" setup>
import { getArticleBySlug } from '~/api'
import type { Article } from '~/api'
const props = defineProps<{ slug: string }>()
const article = ref<Article>()
const router = useRouter()
async function getArticle() {
const { data } = await getArticleBySlug({ slug: props.slug })
article.value = data
}
getArticle()
</script>
<template>
<div v-if="article">
<h1>{{ article.title }}</h1>
<p>{{ article.description }}</p>
<div>{{ article.body }}</div>
<div>
<button m-3 mt-8 text-sm btn @click="router.back()">
Back
</button>
</div>
</div>
</template>
```
{{< /highlight >}}
It should work flawlessly.
### Frontend CI/CD
The frontend CI is far simpler than the backend one. Create a new `demo-ui` pipeline:
{{< highlight host="demo-kube-flux" file="pipelines/demo-ui.yaml" >}}
```yml
resources:
- name: version
type: semver
source:
driver: git
uri: ((git.url))/kuberocks/demo-ui
branch: main
file: version
username: ((git.username))
password: ((git.password))
git_user: ((git.git-user))
commit_message: ((git.commit-message))
- name: source-code
type: git
icon: coffee
source:
uri: ((git.url))/kuberocks/demo-ui
branch: main
username: ((git.username))
password: ((git.password))
- name: docker-image
type: registry-image
icon: docker
source:
repository: ((registry.name))/kuberocks/demo-ui
tag: latest
username: ((registry.username))
password: ((registry.password))
jobs:
- name: build
plan:
- get: source-code
trigger: true
- task: build-source
config:
platform: linux
image_resource:
type: registry-image
source:
repository: node
tag: 18-buster
inputs:
- name: source-code
path: .
outputs:
- name: dist
path: dist
caches:
- path: .pnpm-store
run:
path: /bin/sh
args:
- -ec
- |
corepack enable
corepack prepare pnpm@latest-8 --activate
pnpm config set store-dir .pnpm-store
pnpm i
pnpm lint
pnpm build
- task: build-image
privileged: true
config:
platform: linux
image_resource:
type: registry-image
source:
repository: concourse/oci-build-task
inputs:
- name: source-code
path: .
- name: dist
path: dist
outputs:
- name: image
run:
path: build
- put: version
params: { bump: patch }
- put: docker-image
params:
additional_tags: version/number
image: image/image.tar
```
{{< /highlight >}}
{{< highlight host="demo-kube-flux" file="pipelines/demo-ui.yaml" >}}
```yml
#...
jobs:
- name: configure-pipelines
plan:
#...
- set_pipeline: demo-ui
file: ci/pipelines/demo-ui.yaml
```
{{< /highlight >}}
Apply it and put this nginx `Dockerfile` at the frontend project root (the copied `nginx.conf` is sketched below):
{{< highlight host="kuberocks-demo-ui" file="Dockerfile" >}}
```Dockerfile
FROM nginx:alpine
COPY docker/nginx.conf /etc/nginx/conf.d/default.conf
COPY dist /usr/share/nginx/html
```
{{< /highlight >}}
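The `Dockerfile` above copies a `docker/nginx.conf` that we haven't written yet. Here is a minimal sketch, assuming client-side routing so unknown paths fall back to `index.html`:

```nginx
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        # SPA fallback: let the Vue router handle unknown paths
        try_files $uri $uri/ /index.html;
    }
}
```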
After pushing, the whole CI should build correctly. Then add the image policy for auto-update:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/images-demo-ui.yaml" >}}
```yml
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
name: demo-ui
namespace: flux-system
spec:
image: gitea.kube.rocks/kuberocks/demo-ui
interval: 1m0s
secretRef:
name: dockerconfigjson
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
name: demo-ui
namespace: flux-system
spec:
imageRepositoryRef:
name: demo-ui
namespace: flux-system
policy:
semver:
range: 0.0.x
```
{{< /highlight >}}
The deployment:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo-ui.yaml" >}}
```yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo-ui
namespace: kuberocks
spec:
replicas: 2
selector:
matchLabels:
app: demo-ui
template:
metadata:
labels:
app: demo-ui
spec:
imagePullSecrets:
- name: dockerconfigjson
containers:
- name: front
          image: gitea.kube.rocks/kuberocks/demo-ui:latest # {"$imagepolicy": "flux-system:demo-ui"}
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: demo-ui
namespace: kuberocks
spec:
selector:
app: demo-ui
ports:
- name: http
port: 80
```
{{< /highlight >}}
After pushing, the demo UI container should be deployed. The very last step is to add a new route for the frontend to the existing `IngressRoute`:
{{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/deploy-demo.yaml" >}}
```yaml
#...
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
#...
spec:
#...
routes:
- match: Host(`demo.kube.rocks`)
kind: Rule
services:
- name: demo-ui
port: http
- match: Host(`demo.kube.rocks`) && PathPrefix(`/api`)
#...
```
{{< /highlight >}}
Go to `https://demo.kube.rocks` to confirm that both front & back apps are correctly connected!
[![Frontend](frontend.png)](frontend.png)
## Final check 🎊🏁🎊
Congratulations if you made it this far!!!
We have taken a reasonably complete tour of Kubernetes cluster building in full GitOps mode.