change domain example

This commit is contained in:
2022-02-26 18:16:30 +01:00
parent c7e3c68a9c
commit d4759befa3
8 changed files with 62 additions and 62 deletions

So here are the prerequisites before continuing:

* Have some knowledge of docker-compose setups
* Be comfortable with the SSH terminal
* Be registered for a [Hetzner Cloud account](https://accounts.hetzner.com/signUp)
* A custom domain; I'll use `mydomain.cool` here as an example
* An account with a transactional mail provider such as Mailgun, Sendgrid, Sendinblue, etc.
{{< alert >}}

IPs are only shown here as samples; use `hcloud server describe xxxxxx-01` in order to get the real ones.
## Setup DNS and SSH config 🌍
Now use `hcloud server ip manager-01` to get the unique frontal IP address of the cluster, which will be used for every entry point, including SSH. Then edit the DNS of your domain and point a particular subdomain, as well as a wildcard subdomain, to this IP. You will see later what this wildcard domain is for. I will use `sw.mydomain.cool` as a sample. It should look like the following:
```txt
sw 3600 IN A 123.123.123.123
*.sw 43200 IN CNAME sw
```
As soon as the above DNS records have propagated, you should be able to ping `sw.mydomain.cool` or any `xyz.sw.mydomain.cool` domain.
It's now time to finalize your local SSH config for optimal access. Go to `~/.ssh/config` and add the following hosts (adapt them to your own setup):
```ssh
Host sw
    User swarm
    Port 2222
    HostName sw.mydomain.cool

Host sw-data-01
    User swarm
    # …

Host sw-worker-01
    # …
```
And that's it! You can now SSH into these servers quickly with `ssh sw`, `ssh sw-worker-01`, `ssh sw-runner-01`, or `ssh sw-data-01`, which is far more practical.
{{< alert >}}
Note that I use `sw.mydomain.cool` as the single endpoint for SSH access to all internal servers, with no need for external SSH access to any server other than `manager-01`. This is known as an SSH proxy, which gives a single access point for a better security posture by simply jumping through the main SSH access.
{{< /alert >}}
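The jump-host pattern described above can also be written explicitly with OpenSSH's `ProxyJump` option. Here is a minimal sketch; the host aliases are assumptions matching the config above:

```ssh
# External access goes only to the manager entry point
Host sw
    User swarm
    Port 2222
    HostName sw.mydomain.cool

# Internal node reached by jumping through `sw` first
Host sw-worker-01
    User swarm
    ProxyJump sw
```

With this in place, `ssh sw-worker-01` transparently tunnels through the manager without exposing the worker's SSH port publicly.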
## The firewall 🧱

```yml
entryPoints:
  # …

certificatesResolvers:
  le:
    acme:
      email: admin@sw.mydomain.cool
      storage: /certificates/acme.json
      tlsChallenge: {}

providers:
  docker:
    defaultRule: Host(`{{ index .Labels "com.docker.stack.namespace" }}.sw.mydomain.cool`)
    exposedByDefault: false
    swarmMode: true
    network: traefik_public
```
This tells Traefik to read through the Docker API in order to discover any new services.

| name          | description                                           |
| ------------- | ----------------------------------------------------- |
| `network`     | Default network connection for all exposed containers |
| `defaultRule` | Default rule that will be applied to HTTP routers, in order to redirect a particular URL to the right service. Each service container can override this default value with the `traefik.http.routers.my-container.rule` label. |
As the default route rule, I set a value suited to automatic subdomain discovery. `{{ index .Labels "com.docker.stack.namespace" }}.sw.mydomain.cool` is a dynamic Go template string that uses the `com.docker.stack.namespace` label, which Docker Swarm applies by default to each deployed service. So if I deploy a swarm stack called `myapp`, Traefik will automatically set `myapp.sw.mydomain.cool` as the default domain URL for my service, with an automatic TLS challenge!
All I have to do is add the label `traefik.enable=true` to the Docker service configuration and make sure it's attached to the `traefik_public` network.
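As an illustration, here is a minimal hypothetical stack that would be picked up by this discovery; the service and image are placeholders, and deploying it as a stack named `myapp` is what drives the generated subdomain:

```yml
version: '3.2'

services:
  app:
    image: nginx:alpine
    networks:
      - traefik_public
    deploy:
      labels:
        # opt in to Traefik discovery; the default rule then maps
        # the stack namespace to myapp.sw.mydomain.cool
        - traefik.enable=true

networks:
  traefik_public:
    external: true
```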
```yml
version: '3.2'

services:
  traefik:
    image: traefik:v2.6
    ports:
      - target: 22
        published: 22
```
This is the Traefik dynamic configuration part. I declare here the following elements:

| name                 | type       | description |
| -------------------- | ---------- | ----------- |
| `gzip`               | middleware | provides [basic gzip compression](https://doc.traefik.io/traefik/middlewares/http/compress/). Note that Traefik doesn't support brotli yet, which is pretty disappointing given that nearly all other reverse proxies support it... |
| `admin-auth`         | middleware | provides basic HTTP authentication. `basicauth.users` uses the standard `htpasswd` format. I use `HASHED_PASSWORD` as a dynamic environment variable. |
| `admin-ip`           | middleware | provides IP whitelist protection, given a source range. |
| `traefik-public-api` | router     | configured for proper redirection to the internal Traefik dashboard API from `traefik.sw.mydomain.cool`, which is covered by the default rule. It's protected with the above `admin-auth` and `admin-ip` middlewares. |
| `traefik-public`     | service    | allows proper redirection to the default exposed port 8080 of the Traefik container. This is sadly mandatory when using [Docker Swarm](https://doc.traefik.io/traefik/providers/docker/#port-detection_1) |
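To produce the `HASHED_PASSWORD` value in `htpasswd` format, one option is `openssl passwd` (assuming it is installed; `admin` / `changeme` are placeholder credentials):

```sh
# Generate an apr1 (htpasswd-compatible) hash for the basic-auth middleware.
# 'changeme' is a placeholder password - replace it with your own.
HASHED_PASSWORD=$(openssl passwd -apr1 changeme)
echo "admin:${HASHED_PASSWORD}"
```

Keep in mind that if you ever inline such a hash directly in a compose file instead of passing it as an environment variable, every `$` must be escaped as `$$`.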
{{< alert >}}
```sh
docker service ls
docker service logs traefik_traefik
```
After a few seconds, Traefik should launch and generate a proper SSL certificate for its own domain. You can finally go to <https://traefik.sw.mydomain.cool>. `http://` should work as well thanks to permanent redirection.
If properly configured, you will be prompted for access. After entering admin as the user and your own chosen password, you should finally reach the Traefik dashboard, similar to the one below!
As soon as the main Portainer service has successfully started, Traefik will detect it:
[![Traefik routers](traefik-routers.png)](traefik-routers.png)
It's time to create your admin account through <https://portainer.sw.mydomain.cool>. If all goes well, i.e. the Portainer agents are reachable from the Portainer portal, you should have access to your cluster home environment with 2 active stacks.
[![Portainer home](portainer-home.png)](portainer-home.png)

The important part is the `/etc/hosts` mount, which allows proper DNS resolution of `data-01`, configured in the `PMA_HOST` environment variable. This saves us from dragging the real IP of the data server everywhere...
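A minimal sketch of that trick (the service name and remaining keys are assumptions; only the relevant parts are shown):

```yml
services:
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    volumes:
      # reuse the host's /etc/hosts so data-01 resolves inside the container
      - /etc/hosts:/etc/hosts
    environment:
      PMA_HOST: data-01
```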
Deploy it, and you should have access to <https://phpmyadmin.sw.mydomain.cool> after a few seconds, with full admin access to your MySQL DB!
[![phpMyAdmin](phpmyadmin.png)](phpmyadmin.png)
You'll need both the `PGADMIN_DEFAULT_EMAIL` and `PGADMIN_DEFAULT_PASSWORD` environment variables for proper initialization.
Deploy it, and after a few seconds you should have access to <https://pgadmin.sw.mydomain.cool> with the default logins just above.
Once logged in, you need to add the previously configured PostgreSQL server address via *Add new server*. Just fill in the relevant host information in the *Connection* tab. The host must stay `data-01`, with `swarm` as the superuser.
Configure `REDMINE_DB_*` with the proper DB credentials created above and set a random secret key.
I use a dynamic `ROOT_PATH` here, so you must add this variable with the value `/mnt/storage-pool/redmine` in the *Environment variables* section of Portainer below.
{{< /alert >}}
After a few seconds, <https://redmine.sw.mydomain.cool> should be accessible and ready to use; use admin / admin for the admin connection!
[![Redmine](redmine.png)](redmine.png)
```yml
networks:
  traefik_public:
    external: true
```
And voilà, it's done! n8n will automatically migrate the database and <https://n8n.sw.mydomain.cool> should soon be accessible. Note that we use the `admin-auth` middleware because n8n doesn't offer authentication. Use the same Traefik credentials.
[![n8n](n8n.png)](n8n.png)

The `private` network will serve us later for exporters. The next config options are useful:

| name                          | description                     |
| ----------------------------- | ------------------------------- |
| `storage.tsdb.retention.size` | The max DB size                 |
| `storage.tsdb.retention.time` | The max data retention duration |
Deploy it and <https://prometheus.sw.mydomain.cool> should be available after a few seconds. Use the same Traefik credentials to log in.
You should now have access to some metrics !
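As a quick sanity check in the Prometheus UI, the self-scrape metrics are always queryable. For instance (PromQL; metric names are from stock Prometheus):

```txt
up

sum by (job) (rate(prometheus_http_requests_total[5m]))
```

`up` returns 1 for every healthy scrape target, and the second query shows the HTTP request rate of Prometheus itself, broken down by job.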
```yml
services:
  grafana:
    image: grafana/grafana:8.4.1
    environment:
      GF_SERVER_DOMAIN: grafana.sw.mydomain.cool
      GF_SERVER_ROOT_URL: https://grafana.sw.mydomain.cool
      GF_DATABASE_TYPE: postgres
      GF_DATABASE_HOST: data-01:5432
      GF_DATABASE_NAME: grafana
```
```yml
networks:
  traefik_public:
    external: true
```
Set a proper `GF_DATABASE_PASSWORD` and deploy. The database migration should be automatic (don't hesitate to check inside pgAdmin). Go to <https://grafana.sw.mydomain.cool> and log in as admin / admin.
[![Grafana home](grafana-home.png)](grafana-home.png)

| name    | description |
| ------- | ----------- |
| `agent` | a simple REST endpoint for receiving traces, which are forwarded to the collector. An agent should run on each machine host, similarly to the Portainer agent. |
| `query` | a simple UI that connects to the span storage and allows simple visualization. |
After a few seconds, go to <https://jaeger.sw.mydomain.cool> and enter the Traefik credentials. You will land on the Jaeger Query UI with empty data.
It's time to inject some trace data. Make sure all the above Jaeger services are started in Portainer before continuing.

We added a specific TCP router in order to allow SSH cloning through the Traefik SSH entry point.
Note that we need to specify entry points in order to avoid bad redirections from other HTTPS-based services.
{{< /alert >}}
Now go to <https://gitea.sw.mydomain.cool> and go through the installation procedure. Change the default SQLite provider to a database more suited to production.
Create a new `gitea` PostgreSQL database as usual from pgAdmin (or `psql` for CLI pros), and give the corresponding DB access info to the Gitea installer. The host should be `data-01`.
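For the `psql` route, a minimal sketch (the password is a placeholder; run against `data-01` as a superuser):

```sql
-- Create a dedicated role and database for Gitea
CREATE USER gitea WITH PASSWORD 'changeme';
CREATE DATABASE gitea OWNER gitea;
```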
Don't forget to change every domain-related field to the proper current domain URL, which is `gitea.sw.mydomain.cool` in my case. You should also set proper SMTP settings for notifications.
[![Gitea admin dashboard](gitea-install.png)](gitea-install.png)
```yml
services:
  registry:
    # …
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.registry.rule=Host(`registry.sw.mydomain.cool`) && PathPrefix(`/v2`)
        - traefik.http.routers.registry.middlewares=admin-auth
        - traefik.http.services.registry.loadbalancer.server.port=5000
      placement:
        # …
```
Note that both services must be exposed to Traefik. In order to keep the same subdomain, the `PathPrefix` gives us an additional condition to redirect to the correct service. It's OK in our case because the official Docker registry only uses `/v2` as its endpoint.
{{< /alert >}}
Go to <https://registry.sw.mydomain.cool> and use the Traefik credentials. We have no images yet, so let's create one.
### Test our private registry
Log into the `manager-01` server, run `docker login registry.sw.mydomain.cool` and enter the proper credentials. You should see *Login Succeeded*. Don't worry about the warning. Create the next Dockerfile somewhere:
```Dockerfile
FROM alpine:latest
# …
```

Then build and push the image:
```sh
docker build -t alpinegit .
docker tag alpinegit registry.sw.mydomain.cool/alpinegit
docker push registry.sw.mydomain.cool/alpinegit
```
Go back to <https://registry.sw.mydomain.cool>. You should see 1 new image!
[![Docker registry](docker-registry.png)](docker-registry.png)
Delete the test image through the UI and from the local Docker with `docker image rm registry.sw.mydomain.cool/alpinegit`.
{{< alert >}}
Note that the image blobs physically stay on disk, even when "deleted". You must manually launch the registry garbage collector in order to clean up unused images.
drone-runner-- push built docker image -->registry
registry-- pull image when deploy stack -->my-app
{{< /mermaid >}}
Let's follow [the official docs](https://docs.drone.io/server/provider/gitea/) for generating an OAuth2 application on Gitea, which is necessary for the Drone integration. Set `https://drone.sw.mydomain.cool` as the redirect URI after successful authentication.
[![Gitea drone application](gitea-drone-application.png)](gitea-drone-application.png)
```yml
services:
  drone:
    environment:
      DRONE_DATABASE_DATASOURCE: postgres://drone:${DRONE_DATABASE_PASSWORD}@data-01:5432/drone?sslmode=disable
      DRONE_GITEA_CLIENT_ID:
      DRONE_GITEA_CLIENT_SECRET:
      DRONE_GITEA_SERVER: https://gitea.sw.mydomain.cool
      DRONE_RPC_SECRET:
      DRONE_SERVER_HOST:
      DRONE_SERVER_PROTO:
```
@ -259,7 +259,7 @@ Don't forget to have proper docker labels on nodes, as explain [here]({{< ref "0
| variable | description |
| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| `DRONE_SERVER_HOST`         | The host of the main Drone server. I'll use `drone.sw.mydomain.cool` here.                                                        |
| `DRONE_SERVER_PROTO` | The scheme protocol, which is `https`. |
| `DRONE_GITEA_CLIENT_ID` | Use the above client ID token. |
| `DRONE_GITEA_CLIENT_SECRET` | Use the above client secret token. |
| `DRONE_RPC_SECRET` | Necessary for proper secured authentication between Drone and runners. Use `openssl rand -hex 16` for generating a valid token. |
| `DRONE_USER_CREATE`         | The initial user to create at launch. Put your Gitea username here to automatically set the Gitea user as Drone administrator.    |
It's time to go to <https://drone.sw.mydomain.cool/> and create your first Drone account through OAuth2 from Gitea. You should be properly redirected to Gitea, where you'll just have to authorize the Drone application.
[![Gitea oauth2](gitea-oauth2.png)](gitea-oauth2.png)
```sh
dotnet new gitignore
git init
git add .
git commit -m "first commit"
git remote add origin git@gitea.sw.mydomain.cool:adr1enbe4udou1n/my-weather-api.git # if you use ssh
git push -u origin main
```
It will create a webhook inside the repository settings, triggered on every code push.
Now generate a new SSH key on `manager-01` :
```sh
ssh-keygen -t ed25519 -C "admin@sw.mydomain.cool"
cat .ssh/id_ed25519 # the private key to set in swarm_ssh_key
cat .ssh/id_ed25519.pub # the public key to add just below
echo "ssh-ed25519 AAAA... admin@sw.mydomain.cool" | tee -a .ssh/authorized_keys
```
Then configure the repository settings on Drone. Go to *Organization > Secrets* section and add some global secrets.
```yml
steps:
  - name: image
    image: plugins/docker
    settings:
      registry: registry.sw.mydomain.cool
      repo: registry.sw.mydomain.cool/adr1enbe4udou1n/my-weather-api
      tags: latest
      username:
        from_secret: registry_username
```
Commit both files above and push to the remote repo. Drone should be automatically triggered.
[![Drone build](drone-build.png)](drone-build.png)
If all goes well, the final image should be pushed to our Docker registry. You can check by navigating to <https://registry.sw.mydomain.cool>.
### Deployment (the CD part) 🚀
```yml
version: "3"

services:
  app:
    image: registry.sw.mydomain.cool/adr1enbe4udou1n/my-weather-api
    environment:
      ASPNETCORE_ENVIRONMENT: Development
    networks:
      # …
```
I use `Development` in order to have the Swagger UI.
Be sure to have registered the private registry in Portainer before deploying as [explained here](#register-registry-in-portainer).
{{< /alert >}}
Finally, deploy and see the result at <https://weather.sw.mydomain.cool/swagger>. You should reach the Swagger UI, and the API endpoints should respond correctly.
#### Continuous deployment
Now, it's clear that we don't want to deploy manually every time the code is pushed.
First make sure the following command works well on `manager-01`: `docker service update --image registry.sw.mydomain.cool/adr1enbe4udou1n/my-weather-api:latest weather_app --with-registry-auth`. It simply updates the current `weather_app` service with the latest available image version from the private registry.
Now we must make sure that the `runner-01` host can reach the `manager-01` server from outside. If you applied the firewall at the beginning of this tutorial, only your own IP is authorized. Let's add the public IP of `runner-01` to the `firewall-external` in the Hetzner console.
Now let's add a new `deploy` step inside `.drone.yml` to our pipeline for automatic deployment:

```yml
  - name: deploy
    image: appleboy/drone-ssh
    settings:
      host: sw.mydomain.cool
      port: 2222
      username: swarm
      key:
        from_secret: swarm_ssh_key
      script:
        - docker service update --image registry.sw.mydomain.cool/adr1enbe4udou1n/my-weather-api:latest weather_app --with-registry-auth
#...
```

```yml
version: "3"

services:
  app:
    image: registry.sw.mydomain.cool/adr1enbe4udou1n/my-weather-api
    environment:
      ASPNETCORE_ENVIRONMENT: Development
      Jaeger__Host: tasks.jaeger_agent
```
Let's get some automatic code quality metrics.
On `manager-01` :
```sh
sudo mkdir -p /mnt/storage-pool/sonar/data
sudo mkdir -p /mnt/storage-pool/sonar/logs
sudo mkdir -p /mnt/storage-pool/sonar/extensions
# specific tweak for the embedded Elasticsearch of SonarQube
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/local.conf
```
```yml
services:
  sonarqube:
    image: sonarqube:9-community
    volumes:
      - /etc/hosts:/etc/hosts
      - ${ROOT_PATH}/data:/opt/sonarqube/data
      - ${ROOT_PATH}/logs:/opt/sonarqube/logs
      - ${ROOT_PATH}/extensions:/opt/sonarqube/extensions
    environment:
      SONAR_JDBC_URL: jdbc:postgresql://data-01:5432/sonar
      SONAR_JDBC_USERNAME: sonar
```
```yml
networks:
  traefik_public:
    external: true
```
Set a proper `ROOT_PATH` with `/mnt/storage-pool/sonar` and `SONAR_JDBC_PASSWORD` with the DB password above.
Go to <https://sonar.sw.mydomain.cool>, use admin / admin credentials and update the password.
### Project analysis
You must have at least Java 11 installed locally.
```sh
dotnet tool install --global dotnet-sonarscanner
dotnet sonarscanner begin /k:"My-Weather-API" /d:sonar.host.url="https://sonar.sw.mydomain.cool" /d:sonar.login="above-generated-token"
dotnet build
# …
```
Wait a few minutes and the final report analysis should automatically appear.
Because running the scanner manually is boring, let's integrate it into our favorite CI. Create the following secrets through the Drone UI:
| name             | level        | description                                             |
| ---------------- | ------------ | ------------------------------------------------------- |
| `sonar_host_url` | organization | Set the sonar host URL `https://sonar.sw.mydomain.cool` |
| `sonar_token`    | repository   | Set the above token                                     |
Change the `build` step in the `.drone.yml` file:
```js
import http from "k6/http";
import { check } from "k6";

export default function () {
    http.get('https://weather.sw.mydomain.cool/WeatherForecast');
}
```
```js
export const options = {
    // …
};

export default function () {
    http.get('https://weather.sw.mydomain.cool/WeatherForecast');
}
```