proofreading

2022-02-27 21:34:35 +01:00
parent ac96fac378
commit 2aff89e590
6 changed files with 97 additions and 21 deletions


@ -189,7 +189,7 @@ In order to deploy Traefik on our shiny new Docker Swarm, we must write a Docker
{{< highlight host="manager-01" file="~/traefik-stack.yml" >}}
```yml
-version: '3.2'
+version: '3'
services:
traefik:
@ -318,7 +318,7 @@ Create `portainer-agent-stack.yml` swarm stack file with following content:
{{< highlight host="manager-01" file="~/portainer-agent-stack.yml" >}}
```yml
-version: '3.2'
+version: '3'
services:
agent:
@ -398,10 +398,10 @@ It's finally time to test our new cluster environment by deploying some stacks thr
Create a new `diun` stack through Portainer and set the following content:
{{< highlight host="stack" file="diun">}}
{{< highlight host="stack" file="diun" >}}
```yml
version: "3.2"
version: '3'
services:
diun:


@ -74,8 +74,10 @@ We are now ready to install phpMyAdmin as a GUI DB manager. Thanks to ou
Create a new `phpmyadmin` stack with the following:
{{< highlight host="stack" file="phpmyadmin" >}}
```yml
-version: '3.8'
+version: '3'
services:
app:
@ -102,6 +104,8 @@ networks:
external: true
```
{{< /highlight >}}
The important part is the `/etc/hosts` entry, which allows proper DNS resolution of `data-01`, the host configured in the `PMA_HOST` environment variable. This saves us from dragging the real IP of the data server everywhere...
Deploy it, and after a few seconds you should be able to access <https://phpmyadmin.sw.dockerswarm.rocks>, with full admin access to your MySQL DB!
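For illustration, here is a minimal sketch of how such a hosts mapping can be wired, assuming the host's own `/etc/hosts` file is mounted into the container (the image name and an alternative `extra_hosts` approach are assumptions, not the stack's confirmed layout):

```yml
services:
  app:
    image: phpmyadmin/phpmyadmin
    environment:
      # phpMyAdmin connects to this hostname...
      PMA_HOST: data-01
    volumes:
      # ...which resolves via the host's hosts file; an
      # extra_hosts: ["data-01:<real-ip>"] entry would work too
      - /etc/hosts:/etc/hosts:ro
```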
@ -197,8 +201,10 @@ sudo chown -R 5050:5050 /mnt/storage-pool/pgadmin/
Finally, create a new `pgadmin` stack with the following:
{{< highlight host="stack" file="pgadmin" >}}
```yml
-version: '3.8'
+version: '3'
services:
app:
@ -225,6 +231,8 @@ networks:
external: true
```
{{< /highlight >}}
You'll need both the `PGADMIN_DEFAULT_EMAIL` and `PGADMIN_DEFAULT_PASSWORD` environment variables for proper initialization.
Deploy it, and after a few seconds you should be able to access <https://pgadmin.sw.dockerswarm.rocks> with the credentials defined just above.
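As a hedged sketch, the initialization part of the service could look like this (the email and password values are placeholders to replace with your own):

```yml
services:
  app:
    image: dpage/pgadmin4
    environment:
      # both variables are required on first start to create the admin account
      PGADMIN_DEFAULT_EMAIL: admin@sw.dockerswarm.rocks  # placeholder
      PGADMIN_DEFAULT_PASSWORD: changeme                 # placeholder
```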
@ -243,6 +251,8 @@ Let's now test our cluster with 3 app samples. We'll deploy them to the worker n
Be free from Google Analytics with Matomo. It's incredibly simple to install on our cluster. Note that Matomo only supports MySQL or MariaDB as database. Let's create a dedicated storage folder with `sudo mkdir /mnt/storage-pool/matomo` and create the following `matomo` stack:
{{< highlight host="stack" file="matomo" >}}
```yml
version: '3'
@ -267,6 +277,8 @@ networks:
external: true
```
{{< /highlight >}}
Now we'll create the `matomo` DB with a dedicated user through the above *phpMyAdmin*. Simply create a new `matomo` account and specify `10.0.0.0/8` in the host field. Don't forget to check *Create database with same name and grant all privileges*.
Then go to <https://matomo.sw.dockerswarm.rocks> and walk through the installation wizard. At the DB setup step, use the above credentials and the hostname of your data server, which is `data-01` in our case.
@ -314,8 +326,10 @@ cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 40 | head -n 1
Next create the following new `redmine` stack:
{{< highlight host="stack" file="redmine" >}}
```yml
-version: '3.8'
+version: '3'
services:
app:
@ -347,6 +361,8 @@ networks:
external: true
```
{{< /highlight >}}
Configure `REDMINE_DB_*` with the DB credentials created above and set the random key as `REDMINE_SECRET_KEY_BASE`.
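A hedged sketch of that environment section, assuming a MySQL database as for Matomo (the official image also accepts `REDMINE_DB_POSTGRES` for a PostgreSQL host); all values are placeholders:

```yml
environment:
  REDMINE_DB_MYSQL: data-01     # DB host (assumed)
  REDMINE_DB_DATABASE: redmine  # placeholder
  REDMINE_DB_USERNAME: redmine  # placeholder
  REDMINE_DB_PASSWORD: <your-db-password>
  REDMINE_SECRET_KEY_BASE: <the-40-char-random-key-generated-above>
```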
{{< alert >}}
@ -365,8 +381,10 @@ First connect to pgAdmin and create a new n8n user and database. Don't forget *Can
Create the storage folder with `sudo mkdir /mnt/storage-pool/n8n` and create the following new stack:
{{< highlight host="stack" file="n8n" >}}
```yml
version: "3"
version: '3'
services:
app:
@ -396,6 +414,8 @@ networks:
external: true
```
{{< /highlight >}}
And voilà, it's done: n8n will automatically migrate the database, and <https://n8n.sw.dockerswarm.rocks> should soon be accessible. Note that we use the `admin-auth` middleware because n8n doesn't offer built-in authentication. Use the same Traefik credentials.
[![n8n](n8n.png)](n8n.png)
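For reference, attaching such a middleware in a swarm stack is done through Traefik router labels; a hedged sketch (the router name is assumed, and depending on where `admin-auth` is declared you may need an `@file` suffix):

```yml
deploy:
  labels:
    # reuse the basic-auth middleware already declared for Traefik
    - traefik.http.routers.n8n.middlewares=admin-auth
```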


@ -63,8 +63,10 @@ It consists of 2 scrape jobs; use `targets` to indicate to Prometheus t
Finally create a `prometheus` stack in Portainer:
{{< highlight host="stack" file="prometheus" >}}
```yml
-version: '3.7'
+version: '3'
services:
@ -99,6 +101,8 @@ volumes:
data:
```
{{< /highlight >}}
The `private` network will serve us later for the exporters. The next settings are useful for controlling DB usage, as metrics can grow very quickly:
| argument | description |
@ -150,6 +154,8 @@ It will take the node hostname and create an exploitable metric for prometh
Next we'll edit our `prometheus` stack by expanding the YML config with the 2 following additional services:
{{< highlight host="stack" file="prometheus" >}}
```yml
#...
cadvisor:
@ -195,6 +201,8 @@ configs:
external: true
```
{{< /highlight >}}
Finally, add the next 2 jobs to the previous Prometheus config file:
{{< highlight host="manager-01" file="/etc/prometheus/prometheus.yml" >}}
@ -268,8 +276,10 @@ sudo chown -R 472:472 /mnt/storage-pool/grafana
Next create the following new `grafana` stack:
{{< highlight host="stack" file="grafana" >}}
```yml
-version: '3.7'
+version: '3'
services:
grafana:
@ -303,6 +313,8 @@ networks:
external: true
```
{{< /highlight >}}
Set a proper `GF_DATABASE_PASSWORD` and deploy. The database migration should be automatic (don't hesitate to check inside pgAdmin). Go to <https://grafana.sw.dockerswarm.rocks> and log in as admin / admin.
[![Grafana home](grafana-home.png)](grafana-home.png)
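As a hedged sketch, the DB part of the Grafana environment could look like this; the host, database, and user names are assumptions based on the conventions used so far in this series:

```yml
environment:
  GF_DATABASE_TYPE: postgres
  GF_DATABASE_HOST: data-01:5432  # assumed data server host
  GF_DATABASE_NAME: grafana       # placeholder
  GF_DATABASE_USER: grafana       # placeholder
  GF_DATABASE_PASSWORD: <your-db-password>
```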
@ -382,6 +394,8 @@ GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'10.0.0.0/8';
Then we just have to expand the above `prometheus` stack description with 2 new exporter services, one for MySQL and the other for PostgreSQL:
{{< highlight host="stack" file="prometheus" >}}
```yml
#...
mysql-exporter:
@ -414,6 +428,8 @@ Then we just have to expand the above `prometheus` stack description with 2 new
#...
```
{{< /highlight >}}
Set the proper `MYSQL_PASSWORD` and `POSTGRES_PASSWORD` environment variables and deploy the stack. Make sure the 2 new services have started.
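A hedged sketch of what those 2 services can look like, using the exporters' standard `DATA_SOURCE_NAME` connection strings; the actual stack may wire `MYSQL_PASSWORD` / `POSTGRES_PASSWORD` differently, and the `exporter` user matches the SQL grant shown above:

```yml
#...
  mysql-exporter:
    image: prom/mysqld-exporter
    environment:
      # format: user:password@(host:port)/ — interpolated from the stack variable
      DATA_SOURCE_NAME: exporter:${MYSQL_PASSWORD}@(data-01:3306)/
    networks:
      - private

  postgres-exporter:
    image: prometheuscommunity/postgres-exporter
    environment:
      DATA_SOURCE_NAME: postgresql://exporter:${POSTGRES_PASSWORD}@data-01:5432/postgres?sslmode=disable
    networks:
      - private
```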
### Configure Prometheus


@ -341,8 +341,10 @@ Restart Promtail with `sudo service promtail restart`.
It's just a new `jaeger` Docker stack to deploy:
{{< highlight host="stack" file="jaeger" >}}
```yml
-version: '3.8'
+version: '3'
services:
collector:
@ -391,6 +393,8 @@ networks:
external: true
```
{{< /highlight >}}
| name | description |
| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `collector` | acts as a simple gRPC endpoint for saving all traces to a particular span storage, such as Elasticsearch. |
@ -423,7 +427,7 @@ Then edit the original Traefik stack file and add the `traefik` service into the `jaeger` ne
{{< highlight host="manager-01" file="~/traefik-stack.yml" >}}
```yml
-version: '3.2'
+version: '3'
services:
traefik:


@ -30,8 +30,10 @@ Let's do `sudo mkdir /mnt/storage-pool/gitea`
Then create a new `gitea` stack:
{{< highlight host="stack" file="gitea" >}}
```yml
-version: '3.8'
+version: '3'
services:
gitea:
@ -57,6 +59,8 @@ networks:
external: true
```
{{< /highlight >}}
{{< alert >}}
We added a specific TCP router in order to allow SSH cloning. The SSH Traefik entry point will redirect to the first available service with a TCP router.
Note that we need to explicitly indicate the entry points in order to avoid bad redirections from other HTTPS-based services.
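A hedged sketch of what such TCP router labels look like (the router name is assumed; the `ssh` entry point must exist in the Traefik static config set up earlier in this series):

```yml
deploy:
  labels:
    # non-TLS TCP routers must match on HostSNI(`*`)
    - traefik.tcp.routers.gitea-ssh.rule=HostSNI(`*`)
    - traefik.tcp.routers.gitea-ssh.entrypoints=ssh
    - traefik.tcp.services.gitea-ssh.loadbalancer.server.port=22
```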
@ -86,8 +90,10 @@ Before attacking the CI/CD part, we should take care of where we put our main docke
We'll use the official Docker registry with the addition of a nice simple UI for image navigation. It's always the same: do `sudo mkdir /mnt/storage-pool/registry` and create the `registry` stack:
{{< highlight host="stack" file="registry" >}}
```yml
-version: '3.3'
+version: '3'
services:
app:
@ -129,6 +135,8 @@ networks:
external: true
```
{{< /highlight >}}
{{< alert >}}
Note that both services must be exposed to Traefik. In order to keep the same subdomain, we make use of the `PathPrefix` feature provided by Traefik with `/v2`.
It gives us an additional condition for redirecting to the correct service. That's fine in our case because the official Docker registry only uses `/v2` as endpoint.
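A hedged sketch of the 2 router rules (the subdomain and router names are assumptions), each label sitting on its respective service:

```yml
# on the UI service
- traefik.http.routers.registry-ui.rule=Host(`registry.sw.dockerswarm.rocks`)
# on the registry service — /v2 takes precedence because Traefik
# prioritizes longer rules by default
- traefik.http.routers.registry-app.rule=Host(`registry.sw.dockerswarm.rocks`) && PathPrefix(`/v2`)
```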
@ -207,8 +215,10 @@ Let's follow [the official docs](https://docs.drone.io/server/provider/gitea/) f
Save and keep the client and secret tokens. Then create a new `drone` PostgreSQL database and create a new `drone` stack:
{{< highlight host="stack" file="drone" >}}
```yml
-version: '3.8'
+version: '3'
services:
drone:
@ -253,6 +263,8 @@ networks:
external: true
```
{{< /highlight >}}
{{< alert >}}
Don't forget to set the proper Docker labels on the nodes, as explained [here]({{< ref "04-build-your-own-docker-swarm-cluster-part-3#add-environment-labels" >}}), otherwise the Docker runner will not start because of `node.labels.environment == build`.
{{< /alert >}}
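For reference, that constraint is a standard swarm placement block on the runner service:

```yml
deploy:
  placement:
    constraints:
      # only schedule the runner on nodes labeled for builds
      - node.labels.environment == build
```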
@ -401,8 +413,10 @@ If all goes well, the final image should be pushed to our Docker registry. Yo
Our application is now ready for production deployment! Let's create our shiny new `weather` stack:
{{< highlight host="stack" file="weather" >}}
```yml
version: "3"
version: '3'
services:
app:
@ -424,6 +438,8 @@ networks:
external: true
```
{{< /highlight >}}
{{< alert >}}
I use `Development` in order to get the Swagger UI.
Be sure to have registered the private registry in Portainer before deploying, as [explained here](#register-registry-in-portainer).
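Assuming the weather API is the ASP.NET Core sample used throughout this series (an assumption, not confirmed by this diff), the switch is a single environment variable:

```yml
environment:
  # the Swagger UI is only mapped in the Development environment by default
  ASPNETCORE_ENVIRONMENT: Development
```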


@ -52,8 +52,10 @@ Push the code and ensure the whole CI/CD workflow passes.
Then edit the `weather` Docker stack and configure the Jaeger connection.
{{< highlight host="stack" file="weather" >}}
```yml
version: "3"
version: '3'
services:
app:
@ -74,6 +76,8 @@ networks:
external: true
```
{{< /highlight >}}
Ensure the weather API is deployed and make some API calls. Finally, go back to the Jaeger UI: a second service, `Weather API`, should appear; select it and click on *Find Traces*. You should get the full detail of all API call traces!
Feel free to add any other traces. There are 2 types of traces:
@ -99,8 +103,10 @@ sudo service procps restart
Create a `sonar` PostgreSQL database, and create a `sonar` stack:
{{< highlight host="stack" file="sonar" >}}
```yml
-version: '3.8'
+version: '3'
services:
server:
@ -129,6 +135,8 @@ networks:
external: true
```
{{< /highlight >}}
Set the proper `ROOT_PATH` to `/mnt/storage-pool/sonar` and `SONAR_JDBC_PASSWORD` to the above DB password.
Go to <https://sonar.sw.dockerswarm.rocks>, log in with the admin / admin credentials, and update the password.
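A hedged sketch of the DB-related environment for the SonarQube service (the host and database name are assumptions following this series' conventions):

```yml
environment:
  SONAR_JDBC_URL: jdbc:postgresql://data-01:5432/sonar  # assumed host/db
  SONAR_JDBC_USERNAME: sonar                            # placeholder
  SONAR_JDBC_PASSWORD: <your-db-password>
```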
@ -204,8 +212,10 @@ Here I'll cover the usage of k6. Note that it can be integrated with a time series dat
Create a new influxdb stack:
{{< highlight host="stack" file="influxdb" >}}
```yml
-version: '3.8'
+version: '3'
services:
db:
@ -226,14 +236,18 @@ volumes:
data:
```
{{< /highlight >}}
{{< alert >}}
Add the proper `influxdb.data=true` Docker label on the node where you want to store the InfluxDB data. Here I chose the `runner-01` node, by typing this command: `docker node update --label-add influxdb.data=true runner-01`.
{{< /alert >}}
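The label then drives a standard placement constraint in the InfluxDB service:

```yml
deploy:
  placement:
    constraints:
      # pin the DB to the node that holds its data
      - node.labels.influxdb.data == true
```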
Add the InfluxDB private network to the Grafana stack:
{{< highlight host="stack" file="grafana" >}}
```yml
-version: '3.7'
+version: '3'
services:
grafana:
@ -250,6 +264,8 @@ networks:
external: true
```
{{< /highlight >}}
### Test loading with k6
First create a simple JS script as a Docker swarm *Config* named `k6_weather_test_01` through the Portainer UI:
@ -265,8 +281,10 @@ export default function () {
[![Portainer config k6](portainer-configs-k6.png)](portainer-configs-k6.png)
{{< highlight host="stack" file="k6" >}}
```yml
-version: '3.8'
+version: '3'
services:
load:
@ -297,6 +315,8 @@ configs:
external: true
```
{{< /highlight >}}
| variable | description |
| ------------- | ----------------------------------------------- |
| `K6_VUS` | The number of active user connections. |