write swarm post part III

This commit is contained in:
2022-02-20 12:54:27 +01:00
parent 6d860f29da
commit 8ac2551474
5 changed files with 143 additions and 42 deletions


@ -66,10 +66,10 @@ Because backup should be taken care of from the beginning, I'll show you how to use
### 3. Testing the cluster ✅
We will use the main Portainer GUI to install the following tools:
* [`Diun`](https://crazymax.dev/diun/) (optional), very useful for being notified of updates to any image used inside your Swarm cluster
* `pgAdmin` and `phpMyAdmin` as web database managers (optional)
* Some demo containerized samples, such as `redmine` and `n8n`, that will show you how simple it is to install self-hosted web apps thanks to your shiny new cluster
### 4. Monitoring 📈
@ -100,10 +100,12 @@ Finally, we'll finish this guide by a simple mini-app development with above CI/
Note that this cluster is intended for developer use with a complete self-hosted CI/CD solution. So as a good starting point for the cluster architecture, we can imagine the following nodes:
| server | description |
| ------------ | --------------------------------------------------------------------------------- |
| `manager-01` | The front-facing manager node, with proper reverse proxy and some management tools |
| `worker-01` | A worker for your production/staging apps |
| `runner-01` | An additional worker dedicated to CI/CD pipelines execution |
| `data-01` | The critical data node, with attached and resizable volume for better flexibility |
{{< mermaid >}}
flowchart TD
@ -135,10 +137,12 @@ my-app-02 -.-> files
Note that each hostname corresponds to a particular type of server, dedicated to one specific task. Each type of node can be scaled as you wish:
| replica | description |
| ------------ | -------------------------------------------------------------------------------------------------------- |
| `manager-0x` | For advanced resilient Swarm quorum |
| `worker-0x` | For better scaling production apps, the easiest to set up |
| `runner-0x` | More power for pipeline execution |
| `data-0x` | The hard part for data **HA**, with GlusterFS replications, DB clustering for PostgreSQL and MySQL, etc. |
{{< alert >}}
For a simple production cluster, you can start with only `manager-01` and `data-01` as the absolute minimal start.
@ -193,6 +197,8 @@ Initiate the project by following these simple steps:
2. Navigate to Security > API tokens
3. Generate a new API key with Read & Write permissions and copy the generated token
![Hetzner API Token](hetzner-api-token.png)
Then go to the terminal and prepare the new context:
```sh
@ -238,6 +244,10 @@ It's time to do the classic minimal boring viable security setup for each server
# ensure latest upgrades
apt update && apt upgrade -y && reboot
# configure your locales and timezone
dpkg-reconfigure locales
dpkg-reconfigure tzdata
# create your default non root and sudoer user (swarm in this sample)
adduser swarm # enter any strong password at prompt
@ -420,10 +430,12 @@ Adapt the 4th rule of `firewall-rules.json` according to your own chosen SSH p
You should now have good protection against any unintended external access, with only a few required ports open on your `manager-01` server:
| port     | description                                                                                                                      |
| -------- | -------------------------------------------------------------------------------------------------------------------------------- |
| **2222** | The main SSH port, with IP whitelist                                                                                              |
| **443**  | The HTTPS port for Traefik, our main access for all of your web apps                                                              |
| **80**   | The HTTP port for Traefik, only required for proper HTTPS redirection                                                             |
| **22**   | The standard SSH port for Traefik, required for proper usage through your main Git provider container such as GitLab / Gitea      |
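As a hedged sketch, the corresponding `firewall-rules.json` could look like the following (field names follow Hetzner Cloud's firewall rules format; the whitelisted source IP is a placeholder to replace with your own):

```json
[
  { "direction": "in", "protocol": "tcp", "port": "2222", "source_ips": ["1.2.3.4/32"] },
  { "direction": "in", "protocol": "tcp", "port": "443",  "source_ips": ["0.0.0.0/0", "::/0"] },
  { "direction": "in", "protocol": "tcp", "port": "80",   "source_ips": ["0.0.0.0/0", "::/0"] },
  { "direction": "in", "protocol": "tcp", "port": "22",   "source_ips": ["0.0.0.0/0", "::/0"] }
]
```

Such a file can then be applied through the `hcloud` CLI or the Hetzner console.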
## 1st conclusion 🏁


@ -11,7 +11,7 @@ draft: true
Build your own cheap yet powerful complete self-hosted CI/CD solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part II** of a more global topic. [Back to first part]({{< ref "/posts/2022-02-13-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
## Installation of Docker Swarm
@ -207,9 +207,11 @@ metrics:
{{< tabs >}}
{{< tab tabName="entryPoints" >}}
| name            | description                                                                                                                                                                                              |
| --------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **HTTPS (443)** | Main Web access; I added a global middleware called `gzip`, configured in the next dynamic configuration for proper compression, as well as `le`, aka *Let's Encrypt*, as the main certificate resolver    |
| **HTTP (80)**   | Automatic permanent HTTPS redirection, so every web service is guaranteed to be accessed through HTTPS only (and it should be)                                                                             |
| **SSH (22)**    | For specific advanced cases, such as allowing SSH clone through your main self-hosted Git provider                                                                                                        |
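A minimal sketch of the matching `entryPoints` static configuration (entry point names and exact layout are assumptions, not the author's exact file):

```yml
entryPoints:
  https:
    address: :443
    http:
      middlewares:
        - gzip
      tls:
        certResolver: le
  http:
    address: :80
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
          permanent: true
  ssh:
    address: :22
```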
{{< alert >}}
It's important to have your main SSH for terminal operations on a different port than 22, as explained in the 1st part of this tutorial, because port 22 will be taken by Traefik.
@ -227,10 +229,12 @@ This is the famous source of Traefik dynamic configuration. We only need the Docker
It tells Traefik to read through the Docker API in order to discover any new services and apply automatic configuration, as well as SSL certificates, without any restart. [Docker labels](https://docs.docker.com/config/labels-custom-metadata/) will be used for dynamic configuration.
| name               | description                                                                                                                                                                                                               |
| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `swarmMode`        | Tells Traefik to use labels found on services instead of individual containers (the case of Docker standalone mode)                                                                                                        |
| `exposedByDefault` | When false, forces us to use `traefik.enable=true` as an explicit label for automatic Docker service discovery                                                                                                             |
| `network`          | Default network connection for all exposed containers                                                                                                                                                                      |
| `defaultRule`      | Default rule applied to HTTP routes, in order to route a particular URL to the right service. Each service container can override this default value with the `traefik.http.routers.my-container.rule` label.              |
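A hedged sketch of the corresponding `providers` section (the network name is an assumption):

```yml
providers:
  docker:
    swarmMode: true
    exposedByDefault: false
    network: public
    defaultRule: Host(`{{ index .Labels "com.docker.stack.namespace" }}.sw.okami101.io`)
```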
As a default route rule, I set here a value adapted for automatic subdomain discovery. `{{ index .Labels "com.docker.stack.namespace" }}.sw.okami101.io` is a dynamic Go template string that uses the `com.docker.stack.namespace` label, which Docker Swarm applies by default on each deployed service. So if I deploy a swarm stack called `myapp`, Traefik will automatically set `myapp.sw.okami101.io` as the default domain URL of my service, with automatic TLS challenge!
@ -239,9 +243,11 @@ All I have to do is to add a specific label `traefik.enable=true` inside the Doc
{{< /tab >}}
{{< tab tabName="others" >}}
| name        | description                                                                                                                                                                                                      |
| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `api`       | Enables a nice Traefik dashboard (with dark theme support!) that is exposed on the local 8080 port by default                                                                                                       |
| `accessLog` | Shows all incoming requests through Docker STDOUT                                                                                                                                                                  |
| `metrics`   | Defines all metrics to expose or export to a supported service. I will use Prometheus as default here; it configures Traefik to expose a new `/metrics` endpoint that will be consumed later by Prometheus          |
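In Traefik's static YAML these three options can be as terse as the following sketch (empty mappings enable the defaults; your exact file may set more options):

```yml
api: {}
accessLog: {}
metrics:
  prometheus: {}
```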
{{< /tab >}}
{{< /tabs >}}
@ -304,9 +310,11 @@ Then we create a `public` network that will be created with [`overlay driver`](h
We'll declare 3 volumes:
| name                   | description                                                                                                            |
| ---------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| `/etc/traefik`         | Location where we put the above static configuration file                                                               |
| `/var/run/docker.sock` | Required to allow Traefik to access the Docker API so that automatic dynamic Docker configuration works                 |
| `certificates`         | Named Docker volume used to store our `acme.json` file, generated from all TLS challenges by Let's Encrypt              |
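As a sketch, the volumes part of the Traefik deploy file could look like this (container-side mount paths are assumptions):

```yml
services:
  traefik:
    # ...
    volumes:
      - /etc/traefik:/etc/traefik
      - /var/run/docker.sock:/var/run/docker.sock
      - certificates:/certificates

volumes:
  certificates:
```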
{{< alert >}}
Note that we add `node.labels.traefik-public.certificates` inside `deploy.constraints` in order to ensure Traefik always runs on the server where the certificates are located, every time Docker Swarm does service convergence.
@ -319,19 +327,13 @@ This is the Traefik dynamic configuration part. I declare here many services that
`traefik.enable=true` : tells Traefik to expose itself through the network
| name                 | type       | description                                                                                                                                                                                                                     |
| -------------------- | ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `gzip`               | middleware | Provides [basic gzip compression](https://doc.traefik.io/traefik/middlewares/http/compress/). Note that Traefik doesn't support brotli yet, which is pretty disappointing since absolutely all other reverse proxies support it... |
| `admin-auth`         | middleware | Provides basic HTTP authorization. `basicauth.users` uses the standard `htpasswd` format. I use `HASHED_PASSWORD` as a dynamic environment variable.                                                                               |
| `admin-ip`           | middleware | Provides IP whitelist protection, given a source range.                                                                                                                                                                            |
| `traefik-public-api` | router     | Configured for proper redirection to the internal Traefik dashboard API from `traefik.sw.okami101.io`, as defined by the default rule. It's protected by the above `admin-auth` and `admin-ip` middlewares.                         |
| `traefik-public`     | service    | Allows proper redirection to the default exposed 8080 port of the Traefik container. This is sadly mandatory when using [Docker Swarm](https://doc.traefik.io/traefik/providers/docker/#port-detection_1)                           |
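A hedged sketch of these declarations as labels inside the Traefik service's `deploy` section (the IP range is a placeholder; label names follow Traefik v2's Docker provider conventions, not necessarily the author's exact file):

```yml
deploy:
  labels:
    - traefik.enable=true
    # middlewares, declared as available for any service
    - traefik.http.middlewares.gzip.compress=true
    - traefik.http.middlewares.admin-auth.basicauth.users=admin:${HASHED_PASSWORD}
    - traefik.http.middlewares.admin-ip.ipwhitelist.sourcerange=1.2.3.4/32
    # router protecting the internal dashboard API
    - traefik.http.routers.traefik-public-api.service=api@internal
    - traefik.http.routers.traefik-public-api.middlewares=admin-ip,admin-auth
    # explicit target port, mandatory under Swarm
    - traefik.http.services.traefik-public.loadbalancer.server.port=8080
```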
{{< alert >}}
Keep in mind that the middlewares here are just declared as available for further usage in our services, not applied globally, except for `gzip`, which has been applied globally to the HTTPS entry point above in the static configuration.
@ -443,5 +445,11 @@ It's time to create your admin account through <https://portainer.sw.okami101.io
![Portainer home](portainer-home.png)
{{< alert >}}
If you go to the stacks menu, you will note that both `traefik` and `portainer` are *Limited* control, because these stacks were done outside Portainer. We will create and deploy next stacks directly from Portainer GUI.
{{< /alert >}}
## 2nd conclusion 🏁
We've done the minimal viable Swarm setup with a nice cloud-native reverse proxy, a container GUI manager, and cloud-native NFS-like storage.
It's time to test all of this in [Part III]({{< ref "/posts/2022-02-20-build-your-own-docker-swarm-cluster-part-3" >}}).


@ -0,0 +1,81 @@
---
title: "Setup a Docker Swarm cluster - Part III"
date: 2022-02-20
description: "Build an opinionated containerized platform for developer..."
tags: ["docker", "swarm"]
slug: build-your-own-docker-swarm-cluster-part-3
draft: true
---
{{< lead >}}
Build your own cheap yet powerful complete self-hosted CI/CD solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part III** of a more global topic. [Back to first part]({{< ref "/posts/2022-02-13-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
## Keep the containers image up-to-date
It's finally time to test our new cluster environment by deploying some images through the Portainer GUI. We'll start by installing [`Diun`](https://crazymax.dev/diun/), a nice tool for keeping our images up-to-date.
Create a new `diun` stack through Portainer and set the following content:
```yml
version: "3.2"

services:
  diun:
    image: crazymax/diun:latest
    command: serve
    volumes:
      - /mnt/storage-pool/diun:/data
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      TZ: Europe/Paris
      DIUN_WATCH_SCHEDULE: 0 */6 * * *
      DIUN_PROVIDERS_SWARM: 'true'
      DIUN_PROVIDERS_SWARM_WATCHBYDEFAULT: 'true'
      DIUN_NOTIF_MAIL_HOST:
      DIUN_NOTIF_MAIL_PORT:
      DIUN_NOTIF_MAIL_USERNAME:
      DIUN_NOTIF_MAIL_PASSWORD:
      DIUN_NOTIF_MAIL_FROM:
      DIUN_NOTIF_MAIL_TO:
    deploy:
      placement:
        constraints:
          - node.role == manager
```
{{< tabs >}}
{{< tab tabName="volumes" >}}
| name | description |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `/mnt/storage-pool/diun` | Used as the Diun db location; Diun needs it to store detected new image versions and avoid notification spam. **Don't forget** to create a new dedicated folder in the GlusterFS volume with `sudo mkdir /mnt/storage-pool/diun`. |
| `/var/run/docker.sock`   | For proper detection of currently used Docker images through the Docker API                                                                                                                                                       |
{{< /tab >}}
{{< tab tabName="environment" >}}
| name | description |
| ------------------------------------- | ------------------------------------------------------------------------------------- |
| `TZ`                                  | Required for the schedule to run in the proper timezone                               |
| `DIUN_WATCH_SCHEDULE`                 | The standard Linux cron schedule                                                      |
| `DIUN_PROVIDERS_SWARM`                | Required for detecting all containers on all nodes                                    |
| `DIUN_PROVIDERS_SWARM_WATCHBYDEFAULT` | If `true`, no need for an explicit Docker label on every service                      |
| `DIUN_NOTIF_MAIL_*` | Set all according to your own mail provider, or use any other supported notification. |
{{< alert >}}
Use the dedicated environment variables section of Portainer to set all personal values. In any case, every environment variable used must be declared inside the YAML.
{{< /alert >}}
{{< /tab >}}
{{< /tabs >}}
![Diun Stack](diun-stack.png)
## Installation of databases
### MySQL
### PostgreSQL