diff --git a/content/posts/17-a-beautiful-gitops-day-7/index.md b/content/posts/17-a-beautiful-gitops-day-7/index.md
index f6ae968..59937b1 100644
--- a/content/posts/17-a-beautiful-gitops-day-7/index.md
+++ b/content/posts/17-a-beautiful-gitops-day-7/index.md
@@ -16,12 +16,13 @@ This is the **Part VII** of more global topic tutorial. [Back to guide summary](

 It's now time to step back and think about how we'll use our CI. Our goal is to build our above dotnet Web API with Concourse CI as a container image, ready to deploy to our cluster through Flux. So we finish the complete CI/CD pipeline. To resume the scenario:

-1. Concourse CI check the repo periodically (pull model) for new code pushed and trigger a build if applicable
-2. When container image build passed, Concourse CI push the new image to our private registry, which is already take care by Gitea
-3. Flux, which can perfectly be in a different cluster, check the registry periodically (pull model), if new image tag detected, it will deploy it automatically to our cluster
+1. Concourse CI checks the Gitea repo periodically (pull model) for any new code and triggers a build if applicable
+2. When the container image build passes, Concourse CI pushes the new image to our private registry, which is already included in Gitea
+3. Image Automation, a component of Flux, checks the registry periodically (pull model); if a new image tag is detected, it writes the latest tag into the Flux repository
+4. Flux checks the Flux GitHub repository periodically (pull model); if any new or updated manifest is detected, it deploys it automatically to our cluster

 {{< alert >}}
-Although it's the most secured way and configuration less, instead of default pull model, which is generally a check every minute, it's possible secured WebHook instead in order to reduce time between code push and deployment.
+Although the default pull model, which generally checks every minute, is the most secure and configuration-free option, it's possible to use a webhook instead in order to reduce the time between code push and deployment.
 {{< /alert >}}

 The flow pipeline is pretty straightforward:
@@ -53,7 +54,7 @@ graph RL

 We need to:

-1. Give read/write access to our Gitea and registry for Concourse. Note as we need write access in code repository for concourse because we need to store the new image tag. We'll using [semver resource](https://github.com/concourse/semver-resource) for that.
+1. Give read/write access to our Gitea repo and container registry for Concourse. Note that we need write access to the code repository for Concourse because we need to store the new image tag. We'll use the [semver resource](https://github.com/concourse/semver-resource) for that.
 2. Give read registry credentials to Flux for regular image tag checking as well as Kubernetes in order to allow image pulling from the private registry.

 Let's create 2 new user `concourse` with admin acces and `container` as standard user on Gitea. Store these credentials on new variables:
@@ -140,7 +141,7 @@ resource "kubernetes_secret_v1" "concourse_git" {
 Note as we use `concourse-main` namespace, already created by Concourse Helm installer, which is a dedicated namespace for the default team `main`. Because of that, we should keep `depends_on` to ensure the namespace is created before the secrets.

 {{< alert >}}
-Don't forget the `[ci skip]` in commit message, which is the commit for version bumping, otherwise you'll have an infinite loop of builds !
+Don't forget the `[ci skip]` in the commit message of the version bump commit, otherwise you'll have an infinite build loop !
 {{< /alert >}}
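+
+To illustrate where this commit message comes from, here is a minimal sketch of what the `version` semver resource could look like in the pipeline we'll define later in this post (the repository URI, branch and credential variable names below are placeholders, not the final values):
+
+```yaml
+resources:
+  - name: version
+    type: semver
+    source:
+      driver: git
+      # placeholder repository URI and credentials
+      uri: https://gitea.kube.rocks/kuberocks/demo.git
+      branch: main
+      file: version
+      username: ((git.username))
+      password: ((git.password))
+      # the commit message used for the version bump, hence the [ci skip]
+      commit_message: "bump to %version% [ci skip]"
+```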

 Then same for Flux and the namespace that will receive the app:
@@ -199,7 +200,7 @@ WORKDIR /publish
 COPY /publish .

 EXPOSE 80

-ENTRYPOINT ["dotnet", "KubeRocksDemo.dll"]
+ENTRYPOINT ["dotnet", "KubeRocks.WebApi.dll"]
 ```

 {{< /highlight >}}
@@ -314,7 +315,7 @@
 A build will be trigger immediately. You can follow it on Concourse UI.

 [![Concourse pipeline](concourse-pipeline.png)](concourse-pipeline.png)

-If everything is ok, check in `https://gitea.kube.rocks/admin/packages`, you should see a new image tag on your registry ! A new file `version` is automatically pushed in code repo in order to keep tracking of the image tag version.
+If everything is ok, check `https://gitea.kube.rocks/admin/packages`, you should see a new image appear in the list ! A new `version` file is automatically pushed to the code repo in order to keep track of the image tag version.

 [![Concourse build](concourse-build.png)](concourse-build.png)
@@ -425,7 +426,7 @@
 However, one last thing is missing: the automatic deployment.

 If you checked the above flowchart, you'll note that Image automation is a separate process from Flux that only scan the registry for new image tags and push any new tag to Flux repository. Then Flux will detect the new commit in Git repository, including the new tag, and automatically deploy it to K8s.

-By default, if not any strategy is set, K8s will do a **rolling deployment**, i.e. creating new replica firstly be terminating the old one. This will prevent any downtime on the condition of you set as well **readiness probe** in your pod spec, which is a later topic.
+By default, if no strategy is set, K8s will do a **rolling deployment**, i.e. creating the new replica first before terminating the old one. This will prevent any downtime, on the condition that you also set a **readiness probe** in your pod spec, which is a later topic.

 Let's define the image update automation task for main Flux repository:
@@ -460,7 +461,7 @@ spec:

 {{< /highlight >}}

-Now we need to Image Reflector how to scan the repository, as well as the attached policy for tag update:
+Now we need to tell Image Reflector how to scan the repository, as well as the attached policy for tag updates:

 {{< highlight host="demo-kube-flux" file="clusters/demo/kuberocks/images-demo.yaml" >}}
diff --git a/content/posts/19-a-beautiful-gitops-day-9/index.md b/content/posts/19-a-beautiful-gitops-day-9/index.md
index c924756..c1a5dbb 100644
--- a/content/posts/19-a-beautiful-gitops-day-9/index.md
+++ b/content/posts/19-a-beautiful-gitops-day-9/index.md
@@ -70,10 +70,10 @@ app.Run();

 {{< /highlight >}}

-And you're done ! Go to `https://demo.kube.rocks/healthz` to confirm it's working. Try to stop the database with `docker compose stop` and check the healthz endpoint again, it should return `503` status code.
+And you're done ! Go to `https://localhost:xxxx/healthz` to confirm it's working. Try to stop the database with `docker compose stop` and check the healthz endpoint again, it should return a `503` status code. Then push the code.

 {{< alert >}}
-The `Microsoft.Extensions.Diagnostics.HealthChecks` package is very extensible and you can add any custom check to enrich the health app status.
+The `Microsoft.Extensions.Diagnostics.HealthChecks` package is very extensible; you can add any custom check to enrich the health app status.
 {{< /alert >}}
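+
+For example, a custom check is just a class implementing `IHealthCheck`, registered alongside the existing `AddHealthChecks()` call. Here is a purely illustrative sketch (the `DiskSpaceHealthCheck` name and its threshold are made up, not part of the demo app):
+
+```cs
+using Microsoft.Extensions.Diagnostics.HealthChecks;
+
+public class DiskSpaceHealthCheck : IHealthCheck
+{
+    public Task<HealthCheckResult> CheckHealthAsync(
+        HealthCheckContext context,
+        CancellationToken cancellationToken = default)
+    {
+        // Illustrative rule: report degraded when less than 1 GB of disk space is left
+        var freeBytes = new DriveInfo(Path.GetPathRoot(AppContext.BaseDirectory)!).AvailableFreeSpace;
+
+        return Task.FromResult(freeBytes < 1_000_000_000
+            ? HealthCheckResult.Degraded("Less than 1 GB of free disk space")
+            : HealthCheckResult.Healthy());
+    }
+}
+
+// Registration, e.g. in Program.cs next to the existing health checks setup:
+// builder.Services.AddHealthChecks().AddCheck<DiskSpaceHealthCheck>("disk");
+```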

 And finally the probes:
@@ -114,7 +114,7 @@ When **Rolling Update** strategy is used (the default), the old pod is not kille

 ## Telemetry

-The last step but not least missing for a total integration with our monitored Kubernetes cluster is to add some telemetry to our app. We'll use `OpenTelemetry` for that, which becomes the standard library for metrics and tracing, by providing good integration to many languages.
+The last but not least step for a full integration with our monitored Kubernetes cluster is to add some telemetry to our app. We'll use `OpenTelemetry` for that, which has become the standard library for metrics and tracing, providing good integration for many languages.

 ### Application metrics
@@ -223,7 +223,7 @@ spec:

 {{< /highlight >}}

-Now the new URL is `https://demo.kube.rocks/api/Articles`. Any path different from `api` will return the Traefik 404 page, and internal paths as `https://demo.kube.rocks/metrics` is not accessible anymore. An other additional advantage of this config, it's simple to put a separated frontend project under `/` path, which can use the under API without any CORS problem natively.
+Now the new URL is `https://demo.kube.rocks/api/Articles`. Any path different from `api` will return the Traefik 404 page, and internal paths such as `https://demo.kube.rocks/metrics` are not accessible anymore. Another advantage of this config: it's simple to put a separate frontend project under the `/` path (covered later), which can use the underlying API natively without any CORS problem.

 #### Prometheus integration
@@ -338,6 +338,8 @@ EOF

 {{< /highlight >}}

+Use the *Test* button on `https://grafana.kube.rocks/connections/datasources/edit/tempo` to confirm it's working.
+
 #### OpenTelemetry

 Let's firstly add another instrumentation package specialized for Npgsql driver used by EF Core to translate queries to PostgreSQL:
diff --git a/content/posts/20-a-beautiful-gitops-day-10/index.md b/content/posts/20-a-beautiful-gitops-day-10/index.md
index 1e4eb05..74444d9 100644
--- a/content/posts/20-a-beautiful-gitops-day-10/index.md
+++ b/content/posts/20-a-beautiful-gitops-day-10/index.md
@@ -18,7 +18,7 @@ SonarQube is leading the code metrics industry for a long time, embracing full O

 ### SonarQube installation

-SonarQube as its dedicated Helm chart which perfect for us. However, it's the most resource hungry component of our development stack so far (because Java project ? End of troll), so be sure to deploy it on almost empty free node, maybe a dedicated one. In fact, it's the last Helm chart for this tutorial, I promise!
+SonarQube has its dedicated Helm chart, which is perfect for us. However, it's the most resource-hungry component of our development stack so far (because built with Java ? End of troll), so be sure to deploy it on an almost empty free node (which should be ok with 3 workers), maybe a dedicated one. In fact, it's the last Helm chart for this tutorial, I promise!

 Create dedicated database for SonarQube same as usual.
@@ -124,7 +124,7 @@ The installation take many minutes, be patient. Once done, you can access SonarQ

 ### Project configuration

-Firstly create a new project and retain the project key which is his identifier. Then create a **global analysis token** named `Concourse CI` that will be used for CI integration from your user account under `/account/security`.
+Firstly create a new project through the SonarQube UI and retain the project key, which is its identifier. Then create a **global analysis token** named `Concourse CI` that will be used for CI integration from your user account under `/account/security`.

 Now we need to create a Kubernetes secret which contains this token value for Concourse CI, for usage inside the pipeline. The token is the one generated above.
@@ -318,7 +318,7 @@ Note as we now use the `dotnet-qa` image and surround the build step by `dotnet

 ## Feature testing

-Let's cover the feature testing by calling the API against a real database. This is the opportunity to cover the code coverage as well.
+Let's cover the feature testing by calling the API against a real database. This is the opportunity to tackle the code coverage as well.

 ### xUnit
diff --git a/content/posts/21-a-beautiful-gitops-day-11/index.md b/content/posts/21-a-beautiful-gitops-day-11/index.md
index 78cb602..21da02b 100644
--- a/content/posts/21-a-beautiful-gitops-day-11/index.md
+++ b/content/posts/21-a-beautiful-gitops-day-11/index.md
@@ -61,7 +61,7 @@ data:

 {{< /highlight >}}

-And add the k6 `Job` in the same file and configure it for Prometheus usage and mounting above scenario:
+Finally, add the k6 `Job` in the same file, configure it for Prometheus usage and mount the above scenario:

 {{< highlight host="demo-kube-flux" file="jobs/demo-k6.yaml" >}}
@@ -163,7 +163,7 @@ As we use Kubernetes, increase the loading performance horizontally is dead easy

 So far, we only load balanced the stateless API, but what about the database part ? We have set up a replicated PostgreSQL cluster, however we have no use of the replica that stay sadly idle. But for that we have to distinguish write queries from scalable read queries.

-We can make use of the Bitnami [PostgreSQL HA](https://artifacthub.io/packages/helm/bitnami/postgresql-ha) instead of simple one. It adds the new component [Pgpool-II](https://pgpool.net/mediawiki/index.php/Main_Page) as main load balancer and detect failover. It's able to separate in real time write queries from read queries and send them to the master or the replica. The advantage: works natively for all apps without any changes. The cons: it consumes far more resources and add a new component to maintain.
+We can make use of the Bitnami [PostgreSQL HA](https://artifacthub.io/packages/helm/bitnami/postgresql-ha) chart instead of the simple one. It adds the new component [Pgpool-II](https://pgpool.net/mediawiki/index.php/Main_Page) as the main load balancer and detects failover. It's able to separate write queries from read queries in real time and send them to the master or the replica. The pros: it works natively for all apps without any changes. The cons: it consumes far more resources and adds a new component to maintain.

 A 2nd solution is to separate query typologies from where it counts: the application. It requires some code changes, but it's clearly a far more efficient solution. Let's do this way.
@@ -217,7 +217,7 @@ public static class ServiceExtensions

 {{< /highlight >}}

-We fall back to the RW connection string if the RO one is not defined. Then use it in the `ArticlesController` which as only read endpoints:
+We fall back to the RW connection string if the RO one is not defined. Then use it in the `ArticlesController`, which has only read endpoints:

 {{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Controllers/ArticlesController.cs" >}}
@@ -271,9 +271,9 @@ spec:

 {{< /highlight >}}

-We simply have to add multiple host like `postgresql-primary.postgres,postgresql-read.postgres` for the RO connection string and enable LB mode with `Load Balance Hosts=true`.
+We simply have to add multiple hosts like `postgresql-primary.postgres,postgresql-read.postgres` for the RO connection string and enable LB mode with `Load Balance Hosts=true`.

-Once deployed, relaunch a load test with K6 and admire the DB load balancing in action on both storage servers with `htop` or directly compute pods by namespace in Grafana.
+Once deployed, relaunch a load test with K6 and admire the DB load balancing in action on both storage servers with `htop`, or directly in the compute pods by namespace dashboard in Grafana.

 [![Gafana DB load balancing](grafana-db-lb.png)](grafana-db-lb.png)
@@ -410,14 +410,7 @@ Now your frontend app should appear under `https://localhost:5001`, and API call

 ### Typescript API generator

-As we use OpenAPI, it's possible to generate typescript client for API calls. Add this package:
-
-```sh
-pnpm add openapi-typescript -D
-pnpm add openapi-typescript-fetch
-```
-
-Before generate the client model, go back to backend for forcing required by default for attributes when not nullable when using `Swashbuckle.AspNetCore`:
+As we use OpenAPI, it's possible to generate a TypeScript client for API calls. Before tackling the generation of client models, go back to the backend to force attributes to be required by default when not nullable when using `Swashbuckle.AspNetCore`:

 {{< highlight host="kuberocks-demo" file="src/KubeRocks.WebApi/Filters/RequiredNotNullableSchemaFilter.cs" >}}
@@ -476,7 +469,14 @@ You should now have proper required attributes for models in swagger UI:

 Sadly, without this boring step, many attributes will be nullable when generating TypeScript models, and leads to headaches from client side by forcing us to manage nullable everywhere.
 {{< /alert >}}

-Now generate the models:
+Now go back to the `kuberocks-demo-ui` project and add the following dependencies:
+
+```sh
+pnpm add openapi-typescript -D
+pnpm add openapi-typescript-fetch
+```
+
+Now generate the models by adding this script:

 {{< highlight host="kuberocks-demo-ui" file="package.json" >}}
@@ -493,7 +493,7 @@ Now generate the models:

 {{< /highlight >}}

-Use the HTTP version of swagger as you'll get a self certificate error. The use `pnpm openapi` to generate full TS model. Finally, describe API fetchers like so:
+Use the HTTP version of swagger as you'll get a self-signed certificate error. Then use `pnpm openapi` to generate the full TS model. Finally, describe API fetchers like so:

 {{< highlight host="kuberocks-demo-ui" file="src/api/index.ts" >}}
@@ -523,7 +523,7 @@ We are now fully typed compliant with the API.

 ### Call the API

-Let's create a pretty basic list + detail vue pages:
+Let's create pretty basic paginated list and detail Vue pages:

 {{< highlight host="kuberocks-demo-ui" file="src/pages/articles/index.vue" >}}
@@ -684,6 +684,8 @@ const classes

 {{< /highlight >}}

+The detail view:
+
 {{< highlight host="kuberocks-demo-ui" file="src/pages/articles/[slug].vue" >}}

 ```vue
@@ -726,8 +728,6 @@ getArticle()

 {{< /highlight >}}

-It should work flawlessly.
-
 ### Frontend CI/CD

 The CI frontend is far simpler than backend. Create a new `demo-ui` pipeline:
@@ -825,7 +825,9 @@ jobs:

 {{< /highlight >}}

-{{< highlight host="demo-kube-flux" file="pipelines/demo-ui.yaml" >}}
+`pnpm build` takes care of TypeScript type-checking and asset building.
+
+{{< highlight host="demo-kube-flux" file="pipelines/main.yaml" >}}

 ```tf
 #...