proofreading

@@ -25,8 +25,8 @@ So the prerequisites before continue :

* Have some knowledge of docker-compose setups
* Be comfortable with the SSH terminal
* Be registered for a [Hetzner Cloud account](https://accounts.hetzner.com/signUp)
* A custom domain, I'll use `mydomain.rocks` here as an example
* An account with a transactional mail provider such as Mailgun, SendGrid, Sendinblue, etc.

{{< alert >}}
You can of course apply this guide to any other cloud provider, but I doubt you can achieve a lower price.

@@ -1,5 +1,5 @@

---
title: "Setup a Docker Swarm cluster Part II - Hetzner Cloud & NFS"
date: 2022-02-15
description: "Build an opinionated containerized platform for developer..."
tags: ["docker", "swarm"]

@@ -65,6 +65,10 @@ hcloud server create --name data-01 --ssh-key swarm --image ubuntu-20.04 --type

hcloud volume create --name volume-01 --size 60 --server data-01 --automount --format ext4
```

{{< alert >}}
Location is important! Choose wisely between Germany, Finland and the US. Here I go for `nbg1`, aka Nuremberg.
{{< /alert >}}
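
If you hesitate on the location, the `hcloud` CLI can list what's available (names and availability may evolve over time):

```sh
# list available locations (e.g. nbg1, fsn1, hel1, ash)
hcloud location list

# list available server types before picking one
hcloud server-type list
```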

## Prepare the servers 🛠️

It's time to do the classic minimal boring yet viable security setup for each server. Use `hcloud server ssh xxxxxx-01` to connect via SSH, and repeat for each server.
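
As a rough idea of what I mean by that (the `swarm` user and SSH port `2222` are assumptions taken from the SSH config used later in this guide), it typically boils down to something like:

```sh
# run as root (hcloud server ssh connects as root by default)
adduser swarm
usermod -aG sudo swarm
rsync --archive --chown=swarm:swarm ~/.ssh /home/swarm

# harden the SSH daemon: custom port, no root login, no password auth
sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh
```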

@@ -135,14 +139,14 @@ IPs are only showed here as samples, use `hcloud server describe xxxxxx-01` in o

## Setup DNS and SSH config 🌍

Now use `hcloud server ip manager-01` to get the unique frontal IP address of the cluster, which will be used for every entry point, including SSH. Then edit the DNS of your domain and point a specific subdomain, as well as a wildcard subdomain, to this IP. You will see later what this wildcard domain is for. I will use `sw.mydomain.rocks` as an example. It should look like this:

```txt
sw 3600 IN A 123.123.123.123
*.sw 43200 IN CNAME sw
```

As soon as the above DNS records are applied, you should be able to ping `sw.mydomain.rocks` or any `xyz.sw.mydomain.rocks` domain.
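
You can verify the records quickly from your local machine; thanks to the wildcard CNAME, any subdomain should resolve to the same address:

```sh
dig +short sw.mydomain.rocks
dig +short anything.sw.mydomain.rocks

ping -c 3 sw.mydomain.rocks
```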

It's now time to finalize your local SSH config for optimal access. Go to `~/.ssh/config` and add the following hosts (adapt them to your own setup):

@@ -150,7 +154,7 @@ It's now time to finalize your local SSH config for optimal access. Go to `~/.ss

Host sw
    User swarm
    Port 2222
    HostName sw.mydomain.rocks

Host sw-data-01
    User swarm

@@ -171,7 +175,7 @@ Host sw-worker-01

And that's it! You can now quickly SSH into these servers with `ssh sw`, `ssh sw-worker-01`, `ssh sw-runner-01`, `ssh sw-data-01`, which is far more practical.

{{< alert >}}
Note that I only use `sw.mydomain.rocks` as the unique endpoint for SSH access to all internal servers, without any external SSH access to servers other than `manager-01`. This is known as an SSH jump proxy, which keeps a single access point for a better security posture: you simply jump through the main SSH host.
{{< /alert >}}
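
For reference, here is a minimal sketch of what such a jump setup can look like in `~/.ssh/config`; the exact entries of the original post may differ, and the internal host name, port and `ProxyJump` directive are assumptions (the latter needs a reasonably recent OpenSSH client):

```txt
Host sw
    User swarm
    Port 2222
    HostName sw.mydomain.rocks

Host sw-data-01
    User swarm
    Port 2222              # assuming the same hardened SSH port on internal nodes
    HostName data-01       # private hostname or IP as resolvable from manager-01
    ProxyJump sw           # tunnel through the single public SSH entry point
```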

## The firewall 🧱

@@ -266,8 +270,107 @@ You should have now good protection against any unintended external access with

| **80** | the HTTP port for Traefik, only required for proper HTTPS redirection |
| **22** | the standard SSH port for Traefik, required for proper usage through your main Git provider container such as GitLab / Gitea |

## Network file system 📄

Before going further, we need a proper unique shared storage location for all managers and workers. It's mandatory in order to keep the same state when your app containers are automatically rearranged by the Swarm manager across multiple workers for convergence purposes.

We'll use `GlusterFS` for that. You can of course use a simple NFS bind mount instead, but GlusterFS makes more sense as it allows easy replication for HA. You will not regret it when you need a `data-02`. We'll not cover GlusterFS replication here, just a single master replica.

{{< mermaid >}}
flowchart TD
  subgraph manager-01
    traefik((Traefik))
  end
  subgraph worker-01
    my-app-01-01((My App 01))
    my-app-02-01((My App 02))
  end
  subgraph worker-02
    my-app-01-02((My App 01))
    my-app-02-02((My App 02))
  end
  subgraph data-01
    storage[/GlusterFS/]
    db1[(MySQL)]
    db2[(PostgreSQL)]
  end
  traefik-->my-app-01-01
  traefik-->my-app-02-01
  traefik-->my-app-01-02
  traefik-->my-app-02-02
  worker-01-- glusterfs bind mount -->storage
  worker-02-- glusterfs bind mount -->storage
  my-app-02-01-->db2
  my-app-02-02-->db2
{{< /mermaid >}}

{{< alert >}}
Note that a manager node can be used as a worker as well. However, in my opinion it's not well suited for production apps.
{{< /alert >}}

### Install GlusterFS 🐜

It's done in 2 steps:

* Install the file system server on the dedicated volume mounted on `data-01`
* Mount the above volume on all clients where Docker is installed

{{< tabs >}}
{{< tab tabName="1. master (data-01)" >}}

```sh
sudo add-apt-repository -y ppa:gluster/glusterfs-10

sudo apt install -y glusterfs-server
sudo systemctl enable glusterd.service
sudo systemctl start glusterd.service

# get the path of your mounted disk from part 1 of this tutorial
df -h # it should be something like /mnt/HC_Volume_xxxxxxxx

# create the volume
sudo gluster volume create volume-01 data-01:/mnt/HC_Volume_xxxxxxxx/gluster-storage
sudo gluster volume start volume-01

# ensure the volume is present with this command
sudo gluster volume status

# next line is for testing purposes
sudo touch /mnt/HC_Volume_xxxxxxxx/gluster-storage/test.txt
```

{{< /tab >}}
{{< tab tabName="2. clients (docker hosts)" >}}

```sh
# run the following commands on every docker client host
sudo add-apt-repository -y ppa:gluster/glusterfs-10

sudo apt install -y glusterfs-client

# I will choose this path as the main bind mount
sudo mkdir /mnt/storage-pool

# edit /etc/fstab with the following line for a persistent mount
data-01:/volume-01 /mnt/storage-pool glusterfs defaults,_netdev,x-systemd.automount 0 0

# test fstab with the next command
sudo mount -a

# you should see test.txt
ls /mnt/storage-pool/
```

{{< /tab >}}
{{< /tabs >}}

{{< alert >}}
You may ask why we use bind mounts directly on the host instead of the more featured Docker volumes (Kubernetes works in a similar way). Moreover, it's not really the first recommendation in the [official docs](https://docs.docker.com/storage/bind-mounts/), which state to prefer volumes directly.
It's simply because I didn't find a reliable GlusterFS driver that works with Docker. Kubernetes is sadly far more mature in this domain. Please let me know if you know a production grade solution for that!
{{< /alert >}}
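
To make the purpose concrete, here is a minimal sketch of how a stack service could persist its files on this shared mount; the `my-app` service name, image and sub-folder are hypothetical:

```yml
version: "3"

services:
  my-app:
    image: nginx:alpine
    volumes:
      # bind mount a sub-folder of the shared GlusterFS mount;
      # the same files are visible on whatever node Swarm schedules the task
      - /mnt/storage-pool/my-app:/usr/share/nginx/html
```

Because `/mnt/storage-pool` is mounted on every Docker host, the container keeps the same state wherever it gets rescheduled.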

## 1st check ✅

We've done all the boring yet essential stuff of this tutorial by preparing the physical layer + OS part + cloud native NFS.

Go to the [next part]({{< ref "/posts/04-build-your-own-docker-swarm-cluster-part-3" >}}) for the serious work!

@@ -59,105 +59,6 @@ docker node update --label-add environment=production worker-01

docker node update --label-add environment=build runner-01
```

## Installing the Traefik - Portainer combo 💞

It's finally time to start our first container services. The minimal setup will be:

@@ -201,12 +102,12 @@ entryPoints:

certificatesResolvers:
  le:
    acme:
      email: admin@sw.mydomain.rocks
      storage: /certificates/acme.json
      tlsChallenge: {}
providers:
  docker:
    defaultRule: Host(`{{ index .Labels "com.docker.stack.namespace" }}.sw.mydomain.rocks`)
    exposedByDefault: false
    swarmMode: true
    network: traefik_public

@@ -248,7 +149,7 @@ It indicates Traefik to read through Docker API in order to discover any new ser

| `network` | Default network connection for all exposed containers |
| `defaultRule` | Default rule that will be applied to HTTP routes, in order to redirect a particular URL to the right service. Each service container can override this default value with the `traefik.http.routers.my-container.rule` label. |

As a default route rule, I set here a value adapted for automatic subdomain discovery. `{{ index .Labels "com.docker.stack.namespace" }}.sw.mydomain.rocks` is a dynamic Go template string that tells Traefik to use the `com.docker.stack.namespace` label that Docker Swarm applies by default to each deployed service. So if I deploy a swarm stack called `myapp`, Traefik will automatically set `myapp.sw.mydomain.rocks` as the default domain URL of my service, with automatic TLS challenge!

All I have to do is add the specific label `traefik.enable=true` inside the Docker service configuration and make sure it's on the `traefik_public` network.
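
As a minimal sketch of what that looks like in a stack file (the `whoami` stack name and image are just examples, and the extra port label reflects the Swarm port-detection note further below):

```yml
version: "3"

services:
  whoami:
    image: traefik/whoami
    networks:
      - traefik_public
    deploy:
      labels:
        - traefik.enable=true
        # Swarm mode needs an explicit target port for the service
        - traefik.http.services.whoami.loadbalancer.server.port=80

networks:
  traefik_public:
    external: true
```

Deployed with `docker stack deploy -c whoami.yml whoami`, it would be reachable at `whoami.sw.mydomain.rocks` thanks to the default rule.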

@@ -344,7 +245,7 @@ This is the Traefik dynamic configuration part. I declare here many service that

| `gzip` | middleware | provides [basic gzip compression](https://doc.traefik.io/traefik/middlewares/http/compress/). Note that Traefik doesn't support brotli yet, which is pretty disappointing, whereas absolutely all other reverse proxies support it... |
| `admin-auth` | middleware | provides basic HTTP authorization. `basicauth.users` will use the standard `htpasswd` format. I use `HASHED_PASSWORD` as a dynamic environment variable. |
| `admin-ip` | middleware | provides IP whitelist protection, given a source range. |
| `traefik-public-api` | router | configured for proper redirection to the internal Traefik dashboard API from `traefik.sw.mydomain.rocks`, which is defined by the default rule. It's configured with the above `admin-auth` and `admin-ip` for proper protection. |
| `traefik-public` | service | allows proper redirection to the default exposed 8080 port of the Traefik container. This is sadly mandatory when using [Docker Swarm](https://doc.traefik.io/traefik/providers/docker/#port-detection_1) |
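
As an aside, a value for `HASHED_PASSWORD` in `htpasswd` format can be generated locally, for example with the `apache2-utils` package or plain `openssl` (user name and password below are placeholders):

```sh
# with apache2-utils installed
htpasswd -nb admin "my-strong-password"

# openssl alternative (apr1 hash, also accepted by Traefik basic auth)
echo "admin:$(openssl passwd -apr1 'my-strong-password')"
```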

{{< alert >}}

@@ -375,7 +276,7 @@ docker service ls

docker service logs traefik_traefik
```

After a few seconds, Traefik should launch and generate a proper SSL certificate for its own domain. You can finally go to <https://traefik.sw.mydomain.rocks>. `http://` should work as well thanks to permanent redirection.
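
A quick way to check both the certificate and the HTTP to HTTPS redirection from your terminal (the exact status codes depend on the redirection and basic auth configured above):

```sh
curl -I http://traefik.sw.mydomain.rocks
# expect a 3xx redirect with a Location: https://... header

curl -I https://traefik.sw.mydomain.rocks
# expect a 401 until valid basic auth credentials are supplied
```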

If properly configured, you will be prompted for access. After entering admin as the user and your own chosen password, you should finally access the Traefik dashboard, similar to below!

@@ -451,7 +352,7 @@ As soon as the main portainer service has successfully started, Traefik will det

[](traefik-routers.png)

It's time to create your admin account through <https://portainer.sw.mydomain.rocks>. If all goes well, i.e. the Portainer agents are accessible from the Portainer portal, you should have access to your cluster home environment with 2 active stacks.

[](portainer-home.png)

@@ -545,6 +446,6 @@ You can check the service logs which consist of all tasks logs aggregate.

## 2nd check ✅

We've done the minimal viable Swarm setup with a nice cloud native reverse proxy and a containers GUI manager.

It's time to test more advanced cases with self-hosted managed databases in the [next part]({{< ref "/posts/04-build-your-own-docker-swarm-cluster-part-3" >}}).

@@ -1,5 +1,5 @@

---
title: "Setup a Docker Swarm cluster Part IV - DB & Backups"
date: 2022-02-18
description: "Build an opinionated containerized platform for developer..."
tags: ["docker", "swarm"]

@@ -96,7 +96,7 @@ networks:

The important part is `/etc/hosts`, in order to allow proper DNS resolution for `data-01`, configured in the `PMA_HOST` environment variable. This will avoid us having to drag the real IP of the data server everywhere...
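
Whether this is done by bind mounting the host's `/etc/hosts` or by declaring the entry inline in the stack file, the effect is the same; here is a minimal sketch of the inline variant, with a placeholder private IP for `data-01`:

```yml
services:
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_HOST: data-01
    extra_hosts:
      # maps data-01 inside the container's /etc/hosts (IP is a placeholder)
      - "data-01:10.0.0.2"
```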

Deploy it, and you should have access to <https://phpmyadmin.sw.mydomain.rocks> after a few seconds, with full admin access to your MySQL DB!

[](phpmyadmin.png)

@@ -198,7 +198,7 @@ networks:

You'll need both the `PGADMIN_DEFAULT_EMAIL` and `PGADMIN_DEFAULT_PASSWORD` environment variables for proper initialization.
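
A minimal sketch of those two variables in the stack file (both values are placeholders to replace):

```yml
services:
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@sw.mydomain.rocks
      PGADMIN_DEFAULT_PASSWORD: change-me
```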

Deploy it, and after a few seconds you should have access to <https://pgadmin.sw.mydomain.rocks> with the default logins just above.

Once logged in, you need to add the previously configured PostgreSQL server address via *Add new server*. Just add the relevant host information in the *Connection* tab. The host must stay `data-01`, with `swarm` as superuser access.

@@ -286,7 +286,7 @@ Configure `REDMINE_DB_*` with proper above created DB credential and set the ran

I use a dynamic `ROOT_PATH` here, so you must add this variable with the value `/mnt/storage-pool/redmine` in the *Environment variables* section of Portainer below.
{{< /alert >}}

After a few seconds, <https://redmine.sw.mydomain.rocks> should be accessible and ready to use; use admin / admin for the admin connection!

[](redmine.png)

@@ -329,7 +329,7 @@ networks:

    external: true
```

And voilà, it's done: n8n will automatically migrate the database and <https://n8n.sw.mydomain.rocks> should soon be accessible. Note that we use the `admin-auth` middleware because n8n doesn't offer authentication. Use the same Traefik credentials.

[](n8n.png)

@@ -18,7 +18,7 @@ This part is totally optional, as it's mainly focused on monitoring. Feel free t

## Metrics with Prometheus 🔦

Prometheus has become the de facto standard for self-hosted monitoring, in part thanks to its architecture. It's a TSDB (Time Series Database) that polls (aka scrapes) standard metrics REST endpoints provided by the tools to monitor. That's the case for Traefik, as we have seen in [part III]({{< ref "04-build-your-own-docker-swarm-cluster-part-3#traefik-" >}}). For tools that don't support it natively, like databases, you'll find many exporters that will do the job for you.
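
For illustration only, a Prometheus scrape job boils down to a list of targets to poll in `prometheus.yml`; the job name and target below are assumptions, the real configuration comes later in this part:

```yml
scrape_configs:
  - job_name: traefik
    static_configs:
      # assuming Traefik's internal metrics endpoint is reachable at this address
      - targets:
          - traefik:8080
```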

### Prometheus install 💽

@@ -101,7 +101,7 @@ The `private` network will serve us later for exporters. Next config are useful

| storage.tsdb.retention.size | The max DB size                |
| storage.tsdb.retention.time | The max data retention duration |
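
Both options map directly to Prometheus command-line flags; a sketch of how they could appear in the service `command` (the size and duration values are just examples):

```yml
services:
  prometheus:
    image: prom/prometheus
    command:
      - --config.file=/etc/prometheus/prometheus.yml
      - --storage.tsdb.retention.size=5GB
      - --storage.tsdb.retention.time=15d
```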

Deploy it and <https://prometheus.sw.mydomain.rocks> should be available after a few seconds. Use the same Traefik credentials for login.

You should now have access to some metrics!

@@ -254,8 +254,8 @@ services:

  grafana:
    image: grafana/grafana:8.4.1
    environment:
      GF_SERVER_DOMAIN: grafana.sw.mydomain.rocks
      GF_SERVER_ROOT_URL: https://grafana.sw.mydomain.rocks
      GF_DATABASE_TYPE: postgres
      GF_DATABASE_HOST: data-01:5432
      GF_DATABASE_NAME: grafana

@@ -282,7 +282,7 @@ networks:

    external: true
```

Set a proper `GF_DATABASE_PASSWORD` and deploy. Database migration should be automatic (don't hesitate to check inside pgAdmin). Go to <https://grafana.sw.mydomain.rocks> and log in as admin / admin.

[](grafana-home.png)

@@ -1,5 +1,5 @@

---
title: "Setup a Docker Swarm cluster Part VI - Logging & Tracing"
date: 2022-02-20
description: "Build an opinionated containerized platform for developer..."
tags: ["docker", "swarm"]

@@ -340,7 +340,7 @@ networks:

| `agent` | a simple REST endpoint for receiving traces, which are then forwarded to the collector. An agent should be dedicated to each machine host, similar to the Portainer agent. |
| `query` | a simple UI that connects to the span storage and allows simple visualization. |

After a few seconds, go to <https://jaeger.sw.mydomain.rocks> and enter the Traefik credentials. You will land on the Jaeger Query UI with empty data.

It's time to inject some trace data. Be sure all the above Jaeger services are started through Portainer before continuing.

@@ -62,11 +62,11 @@ We added a specific TCP router in order to allow SSH cloning. The SSH Traefik en

Note that we need to indicate the entry points in order to avoid bad redirection from other HTTPS based services.
{{< /alert >}}

Now go to <https://gitea.sw.mydomain.rocks> and go through the installation procedure. Change the default SQLite provider to a more production purpose database.

Create a new `gitea` PostgreSQL database as usual from pgAdmin, or `psql` for pro-CLI users, and give the according DB access info to the Gitea installer. The host should be `data-01`.
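
For the `psql` route, a sketch of the equivalent commands, assuming the `swarm` superuser from the previous part and a password of your own choosing:

```sh
psql -h data-01 -U swarm -c "CREATE USER gitea WITH PASSWORD 'a-strong-password';"
psql -h data-01 -U swarm -c "CREATE DATABASE gitea OWNER gitea;"
```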

Don't forget to change all domain related fields to the proper current domain URL, which is `gitea.sw.mydomain.rocks` in my case. You should also set proper SMTP settings for notifications.

[](gitea-install.png)

@@ -101,7 +101,7 @@ services:

    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.registry.rule=Host(`registry.sw.mydomain.rocks`) && PathPrefix(`/v2`)
        - traefik.http.routers.registry.middlewares=admin-auth
        - traefik.http.services.registry.loadbalancer.server.port=5000
      placement:

@@ -134,11 +134,11 @@ Note as both service must be exposed to Traefik. In order to keep the same subdo

It gives us an additional condition for redirecting to the correct service. It's OK in our case because the official Docker registry uses only `/v2` as endpoint.
{{< /alert >}}

Go to <https://registry.sw.mydomain.rocks> and use the Traefik credentials. We have no images yet, so let's create one.

### Test our private registry

Log into the `manager-01` server, do `docker login registry.sw.mydomain.rocks` and enter the proper credentials. You should see *Login Succeeded*. Don't worry about the warning. Create the next Dockerfile somewhere:

```Dockerfile
FROM alpine:latest

@@ -149,15 +149,15 @@ Then build and push the image :

```sh
docker build -t alpinegit .
docker tag alpinegit registry.sw.mydomain.rocks/alpinegit
docker push registry.sw.mydomain.rocks/alpinegit
```

Go back to <https://registry.sw.mydomain.rocks>. You should see 1 new image!

[](docker-registry.png)

Delete the test image through the UI and from the local Docker host with `docker image rm registry.sw.mydomain.rocks/alpinegit`.

{{< alert >}}
Note that the blobs of the image always stay physically on the disk, even when "deleted". You must manually launch the docker registry GC in order to clean up unused images.
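
With the official registry image, this garbage collection can be run from inside the container, along these lines (the container name depends on your stack naming, and a dry run first is safer):

```sh
# find the running registry container on the node hosting it
docker ps --filter name=registry

# dry run, then the real cleanup
docker exec <registry-container> registry garbage-collect --dry-run /etc/docker/registry/config.yml
docker exec <registry-container> registry garbage-collect /etc/docker/registry/config.yml
```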

@@ -201,7 +201,7 @@ drone-runner-- push built docker image -->registry

registry-- pull image when deploy stack -->my-app
{{< /mermaid >}}

Let's follow [the official docs](https://docs.drone.io/server/provider/gitea/) for generating an OAuth2 application on Gitea, which is necessary for the Drone integration. Set `https://drone.sw.mydomain.rocks` as the redirect URI after successful authentication.

[](gitea-drone-application.png)

@@ -220,7 +220,7 @@ services:

      DRONE_DATABASE_DATASOURCE: postgres://drone:${DRONE_DATABASE_PASSWORD}@data-01:5432/drone?sslmode=disable
      DRONE_GITEA_CLIENT_ID:
      DRONE_GITEA_CLIENT_SECRET:
      DRONE_GITEA_SERVER: https://gitea.sw.mydomain.rocks
      DRONE_RPC_SECRET:
      DRONE_SERVER_HOST:
      DRONE_SERVER_PROTO:

@@ -259,7 +259,7 @@ Don't forget to have proper docker labels on nodes, as explain [here]({{< ref "0

| variable                    | description                                                                                                                      |
| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| `DRONE_SERVER_HOST`         | The host of the main Drone server. I'll use `drone.sw.mydomain.rocks` here.                                                        |
| `DRONE_SERVER_PROTO`        | The scheme protocol, which is `https`.                                                                                             |
| `DRONE_GITEA_CLIENT_ID`     | Use the above client ID token.                                                                                                      |
| `DRONE_GITEA_CLIENT_SECRET` | Use the above client secret token.                                                                                                  |

@@ -267,7 +267,7 @@ Don't forget to have proper docker labels on nodes, as explain [here]({{< ref "0

| `DRONE_RPC_SECRET`          | Necessary for proper secured authentication between Drone and runners. Use `openssl rand -hex 16` for generating a valid token.    |
| `DRONE_USER_CREATE`         | The initial user to create at launch. Put your Gitea username here to automatically set your Gitea user as Drone administrator.    |

It's time to go to <https://drone.sw.mydomain.rocks/> and generate your first Drone account through OAuth2 from Gitea. You should be properly redirected to Gitea, where you'll just have to authorize the Drone application.

[](gitea-oauth2.png)

@@ -301,7 +301,7 @@ dotnet new gitignore

git init
git add .
git commit -m "first commit"
git remote add origin git@gitea.sw.mydomain.rocks:adr1enbe4udou1n/my-weather-api.git # if you use ssh
git push -u origin main
```

@@ -324,10 +324,10 @@ It will create a webhook inside repository settings, triggered on every code pus

Now generate a new SSH key on `manager-01`:

```sh
ssh-keygen -t ed25519 -C "admin@sw.mydomain.rocks"
cat .ssh/id_ed25519 # the private key to set in swarm_ssh_key
cat .ssh/id_ed25519.pub # the public key to add just below
echo "ssh-ed25519 AAAA... admin@sw.mydomain.rocks" | tee -a .ssh/authorized_keys
```

Then configure the repository settings on Drone. Go to the *Organization > Secrets* section and add some global secrets.

@@ -358,8 +358,8 @@ steps:

- name: image
  image: plugins/docker
  settings:
    registry: registry.sw.mydomain.rocks
    repo: registry.sw.mydomain.rocks/adr1enbe4udou1n/my-weather-api
    tags: latest
    username:
      from_secret: registry_username

@@ -395,7 +395,7 @@ Commit both above files and push to remote repo. Drone should be automatically t

[](drone-build.png)

If all goes well, the final image should be pushed to our Docker registry. You can verify it by navigating to <https://registry.sw.mydomain.rocks>.

### Deployment (the CD part) 🚀

@@ -406,7 +406,7 @@ version: "3"

services:
  app:
    image: registry.sw.mydomain.rocks/adr1enbe4udou1n/my-weather-api
    environment:
      ASPNETCORE_ENVIRONMENT: Development
    networks:

@@ -429,13 +429,13 @@ I use `Development` in order to have the swagger UI.

Be sure to have registered the private registry in Portainer before deploying, as [explained here](#register-registry-in-portainer).
{{< /alert >}}

Finally, deploy and see the result at <https://weather.sw.mydomain.rocks/swagger>. You should have access to the Swagger UI, and the API endpoints should respond correctly.

#### Continuous deployment

Now it's clear that we don't want to deploy manually every time the code is pushed.

First be sure that the following `docker service update --image registry.sw.mydomain.rocks/adr1enbe4udou1n/my-weather-api:latest weather_app --with-registry-auth` command works well on `manager-01`. It simply updates the current `weather_app` service with the latest available image version from the private registry.

Now we must be sure that the `runner-01` host can reach the `manager-01` server from outside. If you have applied the firewall at the beginning of this tutorial, only our own IP is authorized. Let's add the public IP of `runner-01` to your `firewall-external` inside the Hetzner console.
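
You can grab that public IP with the same `hcloud` command pattern used earlier, before adding it to the firewall rule in the console:

```sh
hcloud server ip runner-01
```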

@@ -446,13 +446,13 @@ Now let's add a new `deploy` step inside `.drone.yml` into our pipeline for auto

- name: deploy
  image: appleboy/drone-ssh
  settings:
    host: sw.mydomain.rocks
    port: 2222
    username: swarm
    key:
      from_secret: swarm_ssh_key
    script:
      - docker service update --image registry.sw.mydomain.rocks/adr1enbe4udou1n/my-weather-api:latest weather_app --with-registry-auth
#...
```

@@ -57,7 +57,7 @@ version: "3"

services:
  app:
    image: registry.sw.mydomain.rocks/adr1enbe4udou1n/my-weather-api
    environment:
      ASPNETCORE_ENVIRONMENT: Development
      Jaeger__Host: tasks.jaeger_agent

@@ -131,7 +131,7 @@ networks:

Set a proper `ROOT_PATH` with `/mnt/storage-pool/sonar` and `SONAR_JDBC_PASSWORD` with the above DB password.

Go to <https://sonar.sw.mydomain.rocks>, use admin / admin credentials and update the password.

### Project analysis

@@ -144,7 +144,7 @@ You must have at least Java 11 installed locally.

```sh
dotnet tool install --global dotnet-sonarscanner

dotnet sonarscanner begin /k:"My-Weather-API" /d:sonar.host.url="https://sonar.sw.mydomain.rocks" /d:sonar.login="above-generated-token"

dotnet build

@@ -159,10 +159,10 @@ Wait few minutes and the final rapport analysis should automatically appear. Add

Because running the scanner manually is boring, let's integrate it into our favorite CI. Create the following secrets through the Drone UI:

| name             | level        | description                                               |
| ---------------- | ------------ | --------------------------------------------------------- |
| `sonar_host_url` | organization | Set the sonar host URL `https://sonar.sw.mydomain.rocks`  |
| `sonar_token`    | repository   | Set the above token                                       |

Change the `build` step in the `.drone.yml` file:

@@ -259,7 +259,7 @@ import http from "k6/http";

import { check } from "k6";

export default function () {
  http.get('https://weather.sw.mydomain.rocks/WeatherForecast');
}
```

@@ -362,7 +362,7 @@ export const options = {

};

export default function () {
  http.get('https://weather.sw.mydomain.rocks/WeatherForecast');
}
```