proofreading
@@ -27,6 +27,8 @@ dotnet add package OpenTelemetry.Exporter.Jaeger --prerelease

Then add automatic ASP.NET instrumentation and configure the Jaeger exporter in `Program.cs` by adding the following lines before `builder.Build()`:
{{< highlight file="Program.cs" >}}
```cs
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;
@@ -48,6 +50,8 @@ builder.Services.AddOpenTelemetryTracing(b => b
//...
```
{{< /highlight >}}
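For reference, the completed registration could look something like the sketch below. It assumes the `OpenTelemetry.Extensions.Hosting` and `OpenTelemetry.Instrumentation.AspNetCore` prerelease packages are installed alongside the Jaeger exporter, and the `Weather` service name is an assumption:

```cs
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

// Hedged sketch of the full tracing registration (not the exact code of this commit).
builder.Services.AddOpenTelemetryTracing(b => b
    // identify this app in Jaeger
    .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("Weather"))
    // collect incoming HTTP request spans automatically
    .AddAspNetCoreInstrumentation()
    // send spans to the Jaeger agent (host/port configurable via environment variables)
    .AddJaegerExporter());
```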
Push the code and ensure the whole CI/CD workflow passes.
Then edit the `weather` docker stack and configure the Jaeger connection.
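The concrete stack layout is not shown in this diff, but a hypothetical excerpt of that change could look like the following; the `app` service, `agent` host and `jaeger_private` network names are assumptions, and the .NET Jaeger exporter reads the standard `OTEL_EXPORTER_JAEGER_*` variables:

```yml
# Hypothetical excerpt of the weather stack; names are assumptions.
services:
  app:
    environment:
      - OTEL_EXPORTER_JAEGER_AGENT_HOST=agent # Jaeger agent service name
      - OTEL_EXPORTER_JAEGER_AGENT_PORT=6831  # default compact thrift UDP port
    networks:
      - jaeger_private

networks:
  jaeger_private:
    external: true
```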
@@ -91,6 +95,8 @@ Let's get some automatic quality code metrics.

On `manager-01`:
{{< highlight host="manager-01" >}}
```sh
sudo mkdir -p /mnt/storage-pool/sonar/data
sudo mkdir -p /mnt/storage-pool/sonar/logs
@@ -101,6 +107,8 @@ echo "vm.max_map_count=262144" | tee /etc/sysctl.d/local.conf
sudo service procps restart
```
{{< /highlight >}}
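You can check that the new kernel setting is effective (SonarQube's embedded Elasticsearch requires it):

```sh
# should print: vm.max_map_count = 262144
sysctl vm.max_map_count
```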
Create a `sonar` PostgreSQL database, and create a `sonar` stack:
{{< highlight host="stack" file="sonar" >}}
@@ -174,6 +182,8 @@ Because running scanner manually is boring, let's integrate it in our favorite C

Change the `build` step in the `.drone.yml` file:
{{< highlight file=".drone.yml" >}}
```yml
#...
- name: build
@@ -190,6 +200,8 @@ Change the `build` step on `.drone.yml` file :
#...
```
{{< /highlight >}}
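The exact step is not reproduced in this excerpt, but one possible shape for it is sketched below; the project key, SonarQube URL, secret name and image are assumptions, and SonarScanner for .NET additionally needs a Java runtime available in the build image:

```yml
# Hypothetical sketch only; adapt keys, image and secrets to your setup.
- name: build
  image: mcr.microsoft.com/dotnet/sdk:6.0
  environment:
    SONAR_TOKEN:
      from_secret: sonar_token
  commands:
    - dotnet tool install --global dotnet-sonarscanner
    - export PATH="$PATH:$HOME/.dotnet/tools"
    - dotnet sonarscanner begin /k:"weather-api" /d:sonar.host.url=https://sonar.example.com /d:sonar.login=$SONAR_TOKEN
    - dotnet build
    - dotnet sonarscanner end /d:sonar.login=$SONAR_TOKEN
```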
And voilà! You should have automatic code analysis on every code push.
{{< alert >}}
@@ -270,6 +282,8 @@ networks:

First create a simple JS script as a docker swarm *Config* named `k6_weather_test_01` through the Portainer UI:
{{< highlight host="config" file="k6_weather_test_01" >}}
```js
import http from "k6/http";
import { check } from "k6";
@@ -279,6 +293,8 @@ export default function () {
}
```
{{< /highlight >}}
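The body of the script is elided in this diff; a minimal sketch of what it may look like (the weather API URL is a placeholder):

```js
import http from "k6/http";
import { check } from "k6";

export default function () {
  // call the public weather endpoint (placeholder URL, adapt to your domain)
  const result = http.get("https://weather.example.com/WeatherForecast");
  check(result, {
    "status is 200": (r) => r.status === 200,
  });
}
```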
![](portainer-configs-k6.png)
{{< highlight host="stack" file="k6" >}}
@@ -371,6 +387,8 @@ data: {

You can go even further with a more progressive scenario. Create a new `k6_weather_test_02` docker config script:
{{< highlight host="config" file="k6_weather_test_02" >}}
```js
import http from "k6/http";
import { check } from "k6";
@@ -386,6 +404,8 @@ export default function () {
}
```
{{< /highlight >}}
This is a progressive 5-minute load testing scenario, ramping from 1 to 200 concurrent users.
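The ramp itself would be defined through k6 `options.stages` inside the script; the elided part likely resembles something like this sketch (the exact step durations are assumptions):

```js
// Hypothetical ramping profile totaling 5 minutes, up to 200 virtual users.
export const options = {
  stages: [
    { duration: "1m", target: 50 },  // warm up
    { duration: "2m", target: 100 }, // steady increase
    { duration: "2m", target: 200 }, // peak load
  ],
};
```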
Then use this script in the above `k6` stack, and be sure to comment out the `K6_VUS` and `K6_DURATION` environment variables so that the scenario is driven by the script options.
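A hypothetical excerpt of the adjusted `k6` service, with assumed service, image and config names:

```yml
# Hypothetical excerpt; adapt names to your actual k6 stack.
services:
  load:
    image: grafana/k6
    command: run /scripts/k6_weather_test_02.js
    configs:
      - source: k6_weather_test_02
        target: /scripts/k6_weather_test_02.js
    # K6_VUS and K6_DURATION are intentionally not set, so the stages
    # declared inside the script drive the scenario.
    deploy:
      restart_policy:
        condition: none

configs:
  k6_weather_test_02:
    external: true
```

Check the logs to ensure that the correct scenario is running: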