rearrange posts + backup

This commit is contained in:
2022-02-21 23:16:27 +01:00
parent a225981c3c
commit 8378dcb9e4
22 changed files with 497 additions and 330 deletions

View File

@ -3,7 +3,6 @@ title: "History of yet another blog"
date: 2021-12-23
description: "Now I can say I finally have a blog..."
tags: ["hugo", "docker", "drone"]
slug: history-of-yet-another-blog
---
{{< lead >}}

View File

@ -0,0 +1,184 @@
---
title: "Setup a Docker Swarm cluster for less than $30 / month"
date: 2022-02-13
description: "Build an opinionated containerized platform for developer..."
tags: ["docker", "swarm"]
draft: true
---
{{< lead >}}
Build your own cheap yet powerful self-hosted complete CI/CD solution by following this opinionated guide 🎉
{{< /lead >}}
## Why Docker Swarm 🧐 ?
Because [Docker Swarm Rocks](https://dockerswarm.rocks/) !
Even if Docker Swarm has lost the enterprise-grade container orchestration war, you don't have to throw yourself into all the fuzzy complexity of Kubernetes for a simple homelab, unless it's for training purposes of course.
If you know how to use docker-compose, you're already ready for Docker Swarm, which uses almost the same API with the addition of a specific *deploy* config.
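As a quick illustration of that *deploy* key, here is a minimal hypothetical stack file sketch (service name and image are illustrative, not from this guide) that is accepted by both docker-compose and Swarm :

```yml
version: "3.2"
services:
  whoami:
    image: traefik/whoami:latest
    deploy:
      # Swarm-specific section, simply ignored by plain docker-compose
      replicas: 2
      placement:
        constraints:
          - node.role == worker
```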
I'll try to show you step by step how to install your own cheap containerized cluster for less than $30 by using [Hetzner](https://www.hetzner.com/), one of the best cloud providers on the European market, with cheap but powerful VPS.
So the prerequisites before continuing :
* Have some knowledge of docker-compose setups
* Be comfortable with SSH terminal
* Be registered for a [Hetzner Cloud account](https://accounts.hetzner.com/signUp)
* A custom domain, I'll use `okami101.io` here
* An account with a transactional mail provider such as Mailgun, SendGrid, Sendinblue, etc.
{{< alert >}}
You can of course apply this guide on any other cloud provider, but I doubt you can achieve a lower price.
{{< /alert >}}
## Final goal 🎯
At the end of this multi-step guide, you will have a complete working production-grade secured cluster, backup included, with optional monitoring and a complete development CI/CD workflow.
### 1. Cluster initialization 🌍
* Initial VPS setup for docker under Ubuntu 20.04 with proper Hetzner firewall configuration
* `Docker Swarm` installation, **1 manager and 2 workers**
* `Traefik`, a cloud native reverse proxy with automatic service discovery and SSL configuration
* `Portainer` as simple GUI for containers management
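To give a rough idea of what the Swarm installation itself will boil down to (the real commands come later in this guide; the IP and join token below are placeholders) :

```sh
# on manager-01, initialize the cluster on the private network IP (placeholder)
docker swarm init --advertise-addr 10.0.0.2
# the command above prints a join token; run it on worker-01 and runner-01
docker swarm join --token SWMTKN-1-xxxx 10.0.0.2:2377
# back on manager-01, verify that the 3 nodes are listed
docker node ls
```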
### 2. The stateful part 💾
For all the data-critical parts, I chose to use **1 dedicated VPS**. We will install :
* `GlusterFS` as network filesystem, configured for cluster nodes
* `PostgreSQL` as main production database
* `MySQL` as additional secondary database (optional)
* S3 Backup with `Restic`
Note that I will not set this up for **HA** (High Availability) here, as it's a completely different topic. So this data node will be our **SPOF** (Single Point of Failure), with only one file system and DB.
{{< alert >}}
There are many debates about running databases as docker containers, but I personally prefer using a managed server for better control, local on-disk performance, central backup management and easier database clustering.
Note that in the Kubernetes world, running containerized databases has become a reality thanks to [powerful operators](https://github.com/zalando/postgres-operator) that provide easy clustering. There is obviously no such thing on Docker Swarm 🙈
{{< /alert >}}
### 3. Testing the cluster ✅
We will use the main Portainer GUI in order to install the following tools :
* [`Diun`](https://crazymax.dev/diun/) (optional), very useful in order to be notified of updates for all images used inside your Swarm cluster
* `pgAdmin` and `phpMyAdmin` as web database managers (optional)
* Some containerized demo samples such as `redmine` and `n8n` that will show you how simple it is to install self-hosted web apps thanks to your shiny new cluster
### 4. Monitoring 📈
This is an optional part, feel free to skip. We'll set up production grade monitoring and tracing with complete dashboards.
* `Prometheus` as time series DB for monitoring
  * We will configure many metrics exporters for each critical part (data node, PostgreSQL, MySQL, container details thanks to `cAdvisor`)
  * Basic usage of *PromQL*
* `Loki` with `Promtail` for centralized logs, fetched from the data node and docker containers
* `Jaeger` as *tracing* tool
  * We will use `Elasticsearch` as main data storage
* `Traefik` configuration for metrics and traces as a perfect sample
* `Grafana` as GUI dashboard builder with many batteries-included dashboards
  * Monitoring of the whole cluster
  * Node, PostgreSQL and MySQL metrics
  * Navigating through the log history of all containers and the data server node thanks to `Loki`, *ELK*-like, with *LogQL*
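As a small taste of what *PromQL* looks like, here is a classic hypothetical query (assuming the standard node exporter metrics are scraped) :

```
# CPU usage percentage per node, averaged over the last 5 minutes
100 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100
```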
### 5. CI/CD setup 💻
* `Gitea` as lightweight centralized version control, in case you want to get out of GitHub / GitLab Cloud
* A private docker registry with a minimal UI for all your custom app images, which will be built during your development process and used as base images for your production containers on the cluster
* `Drone CI` as self-hosted CI/CD solution
* `SonarQube` as self-hosted code quality control
Finally, we'll finish this guide with a simple mini-app development with the above CI/CD integration !
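To preview where this CI/CD part is heading, a minimal `.drone.yml` pipeline sketch could look like this (step name, image and commands are purely illustrative) :

```yml
kind: pipeline
type: docker
name: default
steps:
  - name: build
    image: node:lts
    commands:
      - npm ci
      - npm run build
```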
## Cluster Architecture 🏘️
Note that this cluster is intended for developer use with a complete self-hosted CI/CD solution. So as a good starting point for the cluster architecture, we can imagine the following nodes :
| server | description |
| ------------ | --------------------------------------------------------------------------------- |
| `manager-01` | The frontal manager node, with proper reverse proxy and some management tools |
| `worker-01` | A worker for your production/staging apps |
| `runner-01` | An additional worker dedicated to CI/CD pipelines execution |
| `data-01` | The critical data node, with attached and resizable volume for better flexibility |
{{< mermaid >}}
flowchart TD
subgraph manager-01
traefik((Traefik))<-- Container Discovery -->docker[Docker API]
end
subgraph worker-01
my-app-01((My App 01))
my-app-02((My App 02))
end
subgraph runner-01
runner((Drone CI runner))
end
subgraph data-01
logs[Loki]
postgresql[(PostgreSQL)]
files[/GlusterFS/]
mysql[(MySQL)]
end
manager-01 == As Worker Node ==> worker-01
manager-01 == As Worker Node ==> runner-01
traefik -. reverse proxy .-> my-app-01
traefik -. reverse proxy .-> my-app-02
my-app-01 -.-> postgresql
my-app-02 -.-> mysql
my-app-01 -.-> files
my-app-02 -.-> files
{{< /mermaid >}}
Note that each hostname corresponds to a particular type of server, dedicated to one specific task. Each type of node can be scaled as you wish :
| replica | description |
| ------------ | -------------------------------------------------------------------------------------------------------- |
| `manager-0x` | For advanced resilient Swarm quorum |
| `worker-0x` | For better scaling production apps, the easiest to set up |
| `runner-0x` | More power for pipeline execution |
| `data-0x` | The hard part for data **HA**, with GlusterFS replications, DB clustering for PostgreSQL and MySQL, etc. |
{{< alert >}}
For a simple production cluster, you can start with only `manager-01` and `data-01` as the absolute minimal start.
From a development perspective, you can skip `worker-01` and use `manager-01` for running production.
You have plenty of choices here according to your budget !
{{< /alert >}}
## Cheap solution with Hetzner VPS 🖥️
Here are some of the cheapest VPS options we have :
| Server Type | Spec | Price |
| ---------------- | ---------- | --------- |
| **CPX11 (AMD)**  | 2C/2G/40GB | **€4.79** |
| **CX21 (Intel)** | 3C/4G/80GB | **€5.88** |
| **CPX21 (AMD)**  | 3C/4G/80GB | **€8.28** |
My personal choices for a good balance between cost and power :
| Server Name | Type | Why |
| ------------ | --------------------- | -------------------------------------------- |
| `manager-01` | **CX21**              | I'll prioritize RAM                              |
| `runner-01`  | **CPX11**             | 2 powerful cores are better for building         |
| `worker-01`  | **CX21** or **CPX21** | Just a matter of power choice for your apps      |
| `data-01`    | **CX21** or **CPX21** | Just a matter of power choice for your databases |
We'll take an additional volume of **60 GB** for **€2.88**.
We finally arrive at the following respectable budget range : **€25.31** - **€31.31**.
The only difference is the choice between **Xeon vs EPYC** as CPU for the `worker` and `data` nodes, which will be our main production application nodes. A quick [sysbench](https://github.com/akopytov/sysbench) indicates around **70-80%** more power for AMD (tested in 2022-02).
Choose wisely according to your needs.
If you don't need the `worker` and `runner` nodes, with only one simple standalone docker host without Swarm mode, you can even go down to **€14.64** with only **2 CX21** in addition to the volume.
{{< alert >}}
If you intend to have your own self-hosted GitLab for an enterprise-grade CI/CD workflow, you should run it on a node with **8 GB** of RAM.
**4 GB** is doable if you run just one single GitLab container with Prometheus mode disabled and an external PostgreSQL.
{{< /alert >}}
## Let's party 🎉
All the presentation is done, go to the [next part]({{< ref "/posts/03-build-your-own-docker-swarm-cluster-part-2" >}}) to get started !

View File

@ -1,9 +1,8 @@
---
title: "Setup a Docker Swarm cluster for less than $30 / month"
date: 2022-02-13
title: "Setup a Docker Swarm cluster Part II - Hetzner Cloud"
date: 2022-02-15
description: "Build an opinionated containerized platform for developer..."
tags: ["docker", "swarm"]
slug: build-your-own-docker-swarm-cluster
draft: true
---
@ -11,179 +10,9 @@ draft: true
Build your own cheap yet powerful self-hosted complete CI/CD solution by following this opinionated guide 🎉
{{< /lead >}}
## Why Docker Swarm 🧐 ?
This is **Part II** of a more global tutorial. [Back to first part]({{< ref "/posts/02-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
Because [Docker Swarm Rocks](https://dockerswarm.rocks/) !
Even if Docker Swarm has lost the enterprise-grade container orchestration war, you don't have to throw yourself into all the fuzzy complexity of Kubernetes for a simple homelab, unless it's for training purposes of course.
If you know how to use docker-compose, you're already ready for Docker Swarm, which uses almost the same API with the addition of a specific *deploy* config.
I'll try to show you step by step how to install your own cheap containerized cluster for less than $30 by using [Hetzner](https://www.hetzner.com/), one of the best cloud providers on the European market, with cheap but powerful VPS.
So the prerequisites before continuing :
* Have some knowledge of docker-compose setups
* Be comfortable with SSH terminal
* Be registered for a [Hetzner Cloud account](https://accounts.hetzner.com/signUp)
* A custom domain, I'll use `okami101.io` here
* An account with a transactional mail provider such as Mailgun, SendGrid, Sendinblue, etc.
{{< alert >}}
You can of course apply this guide on any other cloud provider, but I doubt you can achieve a lower price.
{{< /alert >}}
## Final goal 🎯
At the end of this multi-step guide, you will have a complete working production-grade secured cluster, backup included, with optional monitoring and a complete development CI/CD workflow.
### 1. Cluster initialization 🌍
* Initial VPS setup for docker under Ubuntu 20.04 with proper Hetzner firewall configuration
* `Docker Swarm` installation, **1 manager and 2 workers**
* `Traefik`, a cloud native reverse proxy with automatic service discovery and SSL configuration
* `Portainer` as simple GUI for containers management
### 2. The stateful part 💾
For all the data-critical parts, I chose to use **1 dedicated VPS**. We will install :
* `GlusterFS` as network filesystem, configured for cluster nodes
* `Loki` with `Promtail` for centralized logs, fetched from data node and docker containers
* `PostgreSQL` as main production database
* `MySQL` as additional secondary database (optional)
Note that I will not set this up for **HA** (High Availability) here, as it's a completely different topic. So this data node will be our **SPOF** (Single Point of Failure), with only one file system and DB.
{{< alert >}}
There are many debates about running databases as docker containers, but I personally prefer using a managed server for better control, local on-disk performance, central backup management and easier database clustering.
Note that in the Kubernetes world, running containerized databases has become a reality thanks to [powerful operators](https://github.com/zalando/postgres-operator) that provide easy clustering. There is obviously no such thing on Docker Swarm 🙈
{{< /alert >}}
#### Data Backup (optional)
Because backup should be taken care of from the beginning, I'll show you how to use `Restic` for simple backups to an external S3-compatible bucket.
### 3. Testing the cluster ✅
We will use the main Portainer GUI in order to install the following tools :
* [`Diun`](https://crazymax.dev/diun/) (optional), very useful in order to be notified of updates for all images used inside your Swarm cluster
* `pgAdmin` and `phpMyAdmin` as web database managers (optional)
* Some containerized demo samples such as `redmine` and `n8n` that will show you how simple it is to install self-hosted web apps thanks to your shiny new cluster
### 4. Monitoring 📈
This is an optional part, feel free to skip. We'll set up production grade monitoring and tracing with complete dashboards.
* `Prometheus` as time series DB for monitoring
  * We will configure many metrics exporters for each critical part (data node, PostgreSQL, MySQL, container details thanks to `cAdvisor`)
  * Basic usage of *PromQL*
* `Jaeger` as *tracing* tool
  * We will use `Elasticsearch` as main data storage
* `Traefik` configuration for metrics and traces as a perfect sample
* `Grafana` as GUI dashboard builder with many batteries-included dashboards
  * Monitoring of the whole cluster
  * Node, PostgreSQL and MySQL metrics
  * Navigating through the log history of all containers and the data server node thanks to `Loki`, *ELK*-like, with *LogQL*
### 5. CI/CD setup 💻
* `Gitea` as lightweight centralized version control, in case you want to get out of GitHub / GitLab Cloud
* A private docker registry with a minimal UI for all your custom app images, which will be built during your development process and used as base images for your production containers on the cluster
* `Drone CI` as self-hosted CI/CD solution
* `SonarQube` as self-hosted code quality control
Finally, we'll finish this guide with a simple mini-app development with the above CI/CD integration !
## Cluster Architecture 🏘️
Note that this cluster is intended for developer use with a complete self-hosted CI/CD solution. So as a good starting point for the cluster architecture, we can imagine the following nodes :
| server | description |
| ------------ | --------------------------------------------------------------------------------- |
| `manager-01` | The frontal manager node, with proper reverse proxy and some management tools |
| `worker-01` | A worker for your production/staging apps |
| `runner-01` | An additional worker dedicated to CI/CD pipelines execution |
| `data-01` | The critical data node, with attached and resizable volume for better flexibility |
{{< mermaid >}}
flowchart TD
subgraph manager-01
traefik((Traefik))<-- Container Discovery -->docker[Docker API]
end
subgraph worker-01
my-app-01((My App 01))
my-app-02((My App 02))
end
subgraph runner-01
runner((Drone CI runner))
end
subgraph data-01
logs[Loki]
postgresql[(PostgreSQL)]
files[/GlusterFS/]
mysql[(MySQL)]
end
manager-01 == As Worker Node ==> worker-01
manager-01 == As Worker Node ==> runner-01
traefik -. reverse proxy .-> my-app-01
traefik -. reverse proxy .-> my-app-02
my-app-01 -.-> postgresql
my-app-02 -.-> mysql
my-app-01 -.-> files
my-app-02 -.-> files
{{< /mermaid >}}
Note that each hostname corresponds to a particular type of server, dedicated to one specific task. Each type of node can be scaled as you wish :
| replica | description |
| ------------ | -------------------------------------------------------------------------------------------------------- |
| `manager-0x` | For advanced resilient Swarm quorum |
| `worker-0x` | For better scaling production apps, the easiest to set up |
| `runner-0x` | More power for pipeline execution |
| `data-0x` | The hard part for data **HA**, with GlusterFS replications, DB clustering for PostgreSQL and MySQL, etc. |
{{< alert >}}
For a simple production cluster, you can start with only `manager-01` and `data-01` as the absolute minimal start.
From a development perspective, you can skip `worker-01` and use `manager-01` for running production.
You have plenty of choices here according to your budget !
{{< /alert >}}
### Hetzner VPS 🖥️
Here are some of the cheapest VPS options we have :
| Server Type | Spec | Price |
| ---------------- | ---------- | --------- |
| **CPX11 (AMD)**  | 2C/2G/40GB | **€4.79** |
| **CX21 (Intel)** | 3C/4G/80GB | **€5.88** |
| **CPX21 (AMD)**  | 3C/4G/80GB | **€8.28** |
My personal choices for a good balance between cost and power :
| Server Name | Type | Why |
| ------------ | --------------------- | -------------------------------------------- |
| `manager-01` | **CX21**              | I'll prioritize RAM                              |
| `runner-01`  | **CPX11**             | 2 powerful cores are better for building         |
| `worker-01`  | **CX21** or **CPX21** | Just a matter of power choice for your apps      |
| `data-01`    | **CX21** or **CPX21** | Just a matter of power choice for your databases |
We'll take an additional volume of **60 GB** for **€2.88**.
We finally arrive at the following respectable budget range : **€25.31** - **€31.31**.
The only difference is the choice between **Xeon vs EPYC** as CPU for the `worker` and `data` nodes, which will be our main production application nodes. A quick [sysbench](https://github.com/akopytov/sysbench) indicates around **70-80%** more power for AMD (tested in 2022-02).
Choose wisely according to your needs.
If you don't need the `worker` and `runner` nodes, with only one simple standalone docker host without Swarm mode, you can even go down to **€14.64** with only **2 CX21** in addition to the volume.
{{< alert >}}
If you intend to have your own self-hosted GitLab for an enterprise-grade CI/CD workflow, you should run it on a node with **8 GB** of RAM.
**4 GB** is doable if you run just one single GitLab container with Prometheus mode disabled and an external PostgreSQL.
{{< /alert >}}
## Let's party 🎉
## Requirements 🛑
Before continuing, I presume you have :
@ -211,7 +40,7 @@ hcloud ssh-key create --name swarm --public-key-from-file .ssh/id_ed25519.pub
Now we are ready to set up the above architecture !
### Create the cloud servers and networks ☁️
## Create the cloud servers and networks ☁️
```sh
# create private network
@ -236,7 +65,7 @@ hcloud server create --name data-01 --ssh-key swarm --image ubuntu-20.04 --type
hcloud volume create --name volume-01 --size 60 --server data-01 --automount --format ext4
```
### Prepare the servers 🛠️
## Prepare the servers 🛠️
It's time to do the boring but essential minimal security setup for each server. Use `hcloud server ssh xxxxxx-01` for the SSH connection and do the same for each.
@ -270,7 +99,7 @@ The change of SSH port is not only for better security, but also for allowing mo
Finally, test your new `swarm` user by using `hcloud server ssh --user swarm --port 2222 xxxxxx-01` for each server, and be sure that the user can run commands as sudo before continuing.
Then edit `/etc/hosts` file for each server accordingly in order to add private ips :
Then edit `/etc/hosts` file for each server accordingly in order to add private IPs :
{{< tabs >}}
{{< tab tabName="manager-01" >}}
@ -304,7 +133,7 @@ Then edit `/etc/hosts` file for each server accordingly in order to add private
IPs are only shown here as samples, use `hcloud server describe xxxxxx-01` in order to get the right private IP under `Private Net`.
{{< /alert >}}
### Setup DNS and SSH config 🌍
## Setup DNS and SSH config 🌍
Now use `hcloud server ip manager-01` to get the unique frontal IP address of the cluster, which will be used for any entry point, including SSH. Then edit the DNS of your domain and point a particular subdomain, as well as a wildcard subdomain, to this IP. You will see later what this wildcard domain is for. I will use `sw.okami101.io` as a sample. It should look like this :
@ -345,7 +174,7 @@ And that's it ! You should now quickly ssh to these servers easily by `ssh sw`,
Note that I only use `sw.okami101.io` as the unique endpoint for SSH access to all internal servers, with no need for external SSH access to servers other than `manager-01`. This is known as an SSH proxy, which allows a single access point for a better security posture by simply jumping from the main SSH access.
{{< /alert >}}
### The firewall 🧱
## The firewall 🧱
Now it's time to finish this preparation section by adding some security.
You should never leave any cluster without a properly configured firewall. It's generally preferable to use the cloud provider firewall instead of the standard `ufw`, because it's easier to manage, there's no risk of being stupidly locked out, and it's settled once and for all.
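As a sketch of what such a cloud firewall setup looks like with the `hcloud` CLI (the firewall name and this minimal rule set are illustrative only) :

```sh
# create a firewall and allow only the strictly needed inbound ports
hcloud firewall create --name firewall-swarm
hcloud firewall add-rule firewall-swarm --direction in --protocol tcp --port 443 --source-ips 0.0.0.0/0 --source-ips ::/0
hcloud firewall add-rule firewall-swarm --direction in --protocol tcp --port 80 --source-ips 0.0.0.0/0 --source-ips ::/0
# attach it to the frontal node
hcloud firewall apply-to-resource firewall-swarm --type server --server manager-01
```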
@ -437,10 +266,8 @@ You should have now good protection against any unintended external access with
| **80** | the HTTP port for Traefik, only required for proper HTTPS redirection |
| **22** | the SSH standard port for Traefik, required for proper usage through your main Git provider container such as GitLab / Gitea |
## 1st conclusion 🏁
And that's finally it !
## 1st check ✅
We've done all the boring yet essential stuff of this tutorial by preparing the physical layer + OS part.
Go to the [Part II]({{< ref "/posts/2022-02-18-build-your-own-docker-swarm-cluster-part-2" >}}) for the serious work !
Go to the [next part]({{< ref "/posts/04-build-your-own-docker-swarm-cluster-part-3" >}}) for the serious work !

View File

@ -1,9 +1,8 @@
---
title: "Setup a Docker Swarm cluster - Install - Part II"
date: 2022-02-18
title: "Setup a Docker Swarm cluster Part III - Cluster Initialization"
date: 2022-02-16
description: "Build an opinionated containerized platform for developer..."
tags: ["docker", "swarm"]
slug: build-your-own-docker-swarm-cluster-part-2
draft: true
---
@ -11,11 +10,9 @@ draft: true
Build your own cheap while powerful self-hosted complete CI/CD solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part II** of a more global tutorial. [Back to first part]({{< ref "/posts/2022-02-13-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
This is **Part III** of a more global tutorial. [Back to first part]({{< ref "/posts/02-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
## Installation of Docker Swarm
### Docker engine
## Docker 🐳
Now we must do the classic Docker installation on each stateless server. Repeat the following commands on `manager-01`, `worker-01` and `runner-01`.
@ -51,7 +48,7 @@ When done use `docker node ls` on manager node in order to confirm the presence
Yeah, the cluster is already properly configured. Far less overwhelming than Kubernetes, I must say.
## Network file system
## Network file system 📄
Before going further, we'll quickly need a proper unique shared storage location for all managers and workers. It's mandatory in order to keep the same state when your app containers are automatically rearranged by the Swarm manager across multiple workers for convergence purposes.
@ -89,7 +86,7 @@ my-app-02-02-->db2
Note that the manager node can be used as a worker as well. However, it's not well suited for production apps in my opinion.
{{< /alert >}}
### Install GlusterFS
### Install GlusterFS 🐜
It's done in 2 steps :
@ -159,7 +156,7 @@ It's finally time to start our first container services. The minimal setup will
These 2 services will be deployed as docker services on `manager-01`.
### Traefik
### Traefik 🛣️
The main task of Traefik will be to redirect the correct URL path to the corresponding app service, according to regex rules (which domain or subdomain, which URL path prefix, etc.).
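Concretely, those routing rules are declared as labels on each Swarm service; a minimal hypothetical example (service name and subdomain are assumptions) looks like this :

```yml
services:
  whoami:
    image: traefik/whoami:latest
    deploy:
      labels:
        # tell Traefik to expose this service on a dedicated subdomain
        - traefik.enable=true
        - traefik.http.routers.whoami.rule=Host(`whoami.sw.okami101.io`)
        - traefik.http.services.whoami.loadbalancer.server.port=80
```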
@ -374,7 +371,7 @@ If properly configured, you will be prompted for access. After entering admin as
![Traefik Dashboard](traefik-dashboard.png)
### Portainer
### Portainer
The hard part is done, we'll finish this 2nd part by installing Portainer. Portainer consists of
@ -452,8 +449,81 @@ It's time to create your admin account through <https://portainer.sw.okami101.io
If you go to the stacks menu, you will note that both `traefik` and `portainer` are in *Limited* control, because these stacks were done outside Portainer. We will create and deploy the next stacks directly from the Portainer GUI.
{{< /alert >}}
## 2nd conclusion 🏁
## Keep the container images up-to-date ⬆️
It's finally time to test our new cluster environment by deploying some images through the Portainer GUI. We'll start by installing [`Diun`](https://crazymax.dev/diun/), a very useful tool which notifies us when a docker image we use has an update available in its registry.
Create a new `diun` stack through Portainer and set the following content :
```yml
version: "3.2"
services:
  diun:
    image: crazymax/diun:latest
    command: serve
    volumes:
      - /mnt/storage-pool/diun:/data
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      TZ: Europe/Paris
      DIUN_WATCH_SCHEDULE: 0 */6 * * *
      DIUN_PROVIDERS_SWARM: 'true'
      DIUN_PROVIDERS_SWARM_WATCHBYDEFAULT: 'true'
      DIUN_NOTIF_MAIL_HOST:
      DIUN_NOTIF_MAIL_PORT:
      DIUN_NOTIF_MAIL_USERNAME:
      DIUN_NOTIF_MAIL_PASSWORD:
      DIUN_NOTIF_MAIL_FROM:
      DIUN_NOTIF_MAIL_TO:
    deploy:
      placement:
        constraints:
          - node.role == manager
```
{{< tabs >}}
{{< tab tabName="volumes" >}}
| name | description |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `/mnt/storage-pool/diun` | Used as the Diun db location; Diun needs it to store the detected new image versions and avoid notification spam. **Don't forget** to create a new dedicated folder in the GlusterFS volume with `sudo mkdir /mnt/storage-pool/diun`. |
| `/var/run/docker.sock`   | For proper detection of the docker images currently in use, through the Docker API |
{{< /tab >}}
{{< tab tabName="environment" >}}
| name | description |
| ------------------------------------- | ------------------------------------------------------------------------------------- |
| `TZ` | Required for proper timezone schedule |
| `DIUN_WATCH_SCHEDULE` | The standard linux cron schedule |
| `DIUN_PROVIDERS_SWARM` | Required for detecting all containers on all nodes |
| `DIUN_PROVIDERS_SWARM_WATCHBYDEFAULT` | If `true`, no need of explicit docker label everywhere |
| `DIUN_NOTIF_MAIL_*` | Set all according to your own mail provider, or use any other supported notification. |
{{< alert >}}
Use the below section of Portainer for setting all personal environment variables. In all cases, every environment variable used must be declared inside the YML.
{{< /alert >}}
{{< /tab >}}
{{< /tabs >}}
![Diun Stack](diun-stack.png)
Finally click on **Deploy the stack**, the equivalent of the previous `docker stack deploy`, nothing magic here. The difference is that Portainer will store the YML inside its volume, allowing full control, contrary to the limited Traefik and Portainer cases.
Diun should now be deployed on the manager host and ready to scan images for any updates !
You can check the full service page, which allows manual scaling, on-the-fly volume mounting and environment variable modification, and shows the currently running tasks (aka containers).
![Diun Service](diun-service.png)
You can check the service logs, which consist of the aggregated logs of all tasks.
![Diun Logs](diun-logs.png)
## 2nd check ✅
We've done the minimal viable Swarm setup with a nice cloud native reverse proxy, a containers GUI manager, and a cloud native NFS.
It's time to test all of this in [Part III]({{< ref "/posts/2022-02-19-build-your-own-docker-swarm-cluster-part-3" >}}).
It's time to test more advanced use cases with self-hosted managed databases in the [next part]({{< ref "/posts/04-build-your-own-docker-swarm-cluster-part-3" >}}).

View File

@ -1,9 +1,8 @@
---
title: "Setup a Docker Swarm cluster - Databases - Part III"
date: 2022-02-19
title: "Setup a Docker Swarm cluster Part IV - DB & backups"
date: 2022-02-18
description: "Build an opinionated containerized platform for developer..."
tags: ["docker", "swarm"]
slug: build-your-own-docker-swarm-cluster-part-3
draft: true
---
@ -11,88 +10,15 @@ draft: true
Build your own cheap while powerful self-hosted complete CI/CD solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part III** of a more global tutorial. [Back to first part]({{< ref "/posts/2022-02-13-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
## Keep the container images up-to-date
It's finally time to test our new cluster environment by deploying some images through the Portainer GUI. We'll start by installing [`Diun`](https://crazymax.dev/diun/), a nice tool to keep our images up-to-date.
Create a new `diun` stack through Portainer and set the following content :
```yml
version: "3.2"
services:
  diun:
    image: crazymax/diun:latest
    command: serve
    volumes:
      - /mnt/storage-pool/diun:/data
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      TZ: Europe/Paris
      DIUN_WATCH_SCHEDULE: 0 */6 * * *
      DIUN_PROVIDERS_SWARM: 'true'
      DIUN_PROVIDERS_SWARM_WATCHBYDEFAULT: 'true'
      DIUN_NOTIF_MAIL_HOST:
      DIUN_NOTIF_MAIL_PORT:
      DIUN_NOTIF_MAIL_USERNAME:
      DIUN_NOTIF_MAIL_PASSWORD:
      DIUN_NOTIF_MAIL_FROM:
      DIUN_NOTIF_MAIL_TO:
    deploy:
      placement:
        constraints:
          - node.role == manager
```
{{< tabs >}}
{{< tab tabName="volumes" >}}
| name | description |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `/mnt/storage-pool/diun` | Used as the Diun db location; Diun needs it to store the detected new image versions and avoid notification spam. **Don't forget** to create a new dedicated folder in the GlusterFS volume with `sudo mkdir /mnt/storage-pool/diun`. |
| `/var/run/docker.sock`   | For proper detection of the docker images currently in use, through the Docker API |
{{< /tab >}}
{{< tab tabName="environment" >}}
| name | description |
| ------------------------------------- | ------------------------------------------------------------------------------------- |
| `TZ` | Required for proper timezone schedule |
| `DIUN_WATCH_SCHEDULE` | The standard linux cron schedule |
| `DIUN_PROVIDERS_SWARM` | Required for detecting all containers on all nodes |
| `DIUN_PROVIDERS_SWARM_WATCHBYDEFAULT` | If `true`, no need of explicit docker label everywhere |
| `DIUN_NOTIF_MAIL_*` | Set all according to your own mail provider, or use any other supported notification. |
{{< alert >}}
Use the below section of Portainer for setting all personal environment variables. In all cases, every environment variable used must be declared inside the YML.
{{< /alert >}}
{{< /tab >}}
{{< /tabs >}}
![Diun Stack](diun-stack.png)
Finally click on **Deploy the stack**, the equivalent of the previous `docker stack deploy`, nothing magic here. The difference is that Portainer will store the YML inside its volume, allowing full control, contrary to the limited Traefik and Portainer cases.
Diun should now be deployed on the manager host and ready to scan images for any updates !
You can check the full service page, which allows manual scaling, on-the-fly volume mounting and environment variable modification, and shows the currently running tasks (aka containers).
![Diun Service](diun-service.png)
You can check the service logs, which consist of the aggregated logs of all tasks.
![Diun Logs](diun-logs.png)
This is **Part IV** of a more global tutorial. [Back to the first part]({{< ref "/posts/02-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
## Installation of databases
It's finally time to install some RDBMS. The most common are *MySQL* and *PostgreSQL*. I advise the latter nowadays, but I'll show you how to install both, web GUI managers included. Choose the DB best suited to your own needs.
We'll install this DB obviously on `data-01` as shown in [previous part II schema]({{< ref "/posts/03-build-your-own-docker-swarm-cluster-part-2#network-file-system" >}}).
### MySQL 8 🐬
```sh
# on ubuntu 20.04, it's just as simple as next
sudo apt install -y mysql-server
```

Deploy it, and you should access <https://phpmyadmin.sw.okami101.io> after a few seconds.
![phpMyAdmin](phpmyadmin.png)
### PostgreSQL 14 🐘
```sh
# install PostgreSQL 14 from the official PGDG repository
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt update
sudo apt install -y postgresql-14
```

And voilà, it's done, n8n will automatically migrate the database and the instance should be available after a few seconds.
![n8n](n8n.png)
## Data backup 💾
Because backups should be taken care of from the beginning, I'll show you how to use `Restic` for simple backups to an external S3-compatible bucket. We must first take care of database dumps.
### Database dumps
The provided scripts will dump a dedicated file for each database. Feel free to adapt them to your own needs.
{{< tabs >}}
{{< tab tabName="MySQL" >}}
Create an executable script at `/usr/local/bin/backup-mysql` :
```sh
#!/bin/bash

target=/var/backups/mysql

mkdir -p $target
rm -f $target/*.sql.gz

# list all databases, skipping the system schemas
databases=$(mysql -Be 'show databases' | grep -Ev 'Database|information_schema|performance_schema|sys')

# dump each database into its own gzipped file
for db in $databases; do
    mysqldump --force $db | gzip > $target/$db.sql.gz
done
```
Then add `0 * * * * root /usr/local/bin/backup-mysql` to the system cron `/etc/crontab` (which requires the user field) for dumping every hour.
{{< /tab >}}
{{< tab tabName="PostgreSQL" >}}
Create an executable script at `/usr/local/bin/backup-postgresql` :
```sh
#!/bin/bash

target=/var/lib/postgresql/backups

mkdir -p $target
rm -f $target/*.gz

# list all databases, skipping the templates
databases=$(psql -q -A -t -c 'SELECT datname FROM pg_database' | grep -Ev 'template0|template1')

# dump each database into its own gzipped file
for db in $databases; do
    pg_dump $db | gzip > $target/$db.gz
done

# roles are cluster-wide, dump them separately
pg_dumpall --roles-only | gzip > $target/roles.gz
```
Then add `0 * * * * /usr/local/bin/backup-postgresql` to the postgres user's cron for dumping every hour. To access the postgres cron, do `sudo su postgres` then `crontab -e`.
{{< /tab >}}
{{< /tabs >}}
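To restore a single database from one of these dumps, the reverse pipeline is enough (hypothetical database name `mydb`; the target database must already exist):

```sh
# MySQL : pipe the decompressed dump back into the server
zcat /var/backups/mysql/mydb.sql.gz | mysql mydb

# PostgreSQL : same idea with psql, run as the postgres user
zcat /var/lib/postgresql/backups/mydb.gz | psql mydb
```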
{{< alert >}}
These scripts don't provide rotation of dumps, as the next incremental backup will be sufficient.
{{< /alert >}}
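If you nevertheless want some local retention, a minimal rotation sketch could keep only the most recent dumps (hypothetical limit of 24 files, path taken from the MySQL script):

```sh
# Keep only the 24 most recent dumps and delete the rest (no-op when fewer exist)
target=/var/backups/mysql
ls -t "$target"/*.sql.gz 2>/dev/null | tail -n +25 | xargs -r rm -f
```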
### Incremental backup with Restic
```sh
wget https://github.com/restic/restic/releases/download/v0.12.1/restic_0.12.1_linux_amd64.bz2
bzip2 -d restic_0.12.1_linux_amd64.bz2
chmod +x restic_0.12.1_linux_amd64
sudo mv restic_0.12.1_linux_amd64 /usr/local/bin/restic
sudo restic self-update
sudo restic generate --bash-completion /etc/bash_completion.d/restic
```
Some config files:
{{< tabs >}}
{{< tab tabName="~/.restic-env" >}}
Replace the following environment variables with your own S3 configuration.
```sh
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export RESTIC_REPOSITORY="s3:server-url/bucket-name/backup"
export RESTIC_PASSWORD="a-strong-password"
```
{{< /tab >}}
{{< tab tabName="/etc/restic/excludes.txt" >}}
Here are some typical folders to exclude from backup.
```txt
.glusterfs
node_modules
```
{{< /tab >}}
{{< /tabs >}}
1. Add `. ~/.restic-env` to `.profile`
2. Reload profile with `source ~/.profile`
3. Create the repository with `restic init` (if you use rclone instead of the above S3 keys, adapt `RESTIC_REPOSITORY` accordingly)
4. Add the following cron line for a backup every hour at minute 42 :
```txt
42 * * * * . ~/.restic-env; /usr/local/bin/restic backup -q /mnt/HC_Volume_xxxxxxxx/gluster-storage /var/backups/mysql /var/lib/postgresql/backups --exclude-file=/etc/restic/excludes.txt; /usr/local/bin/restic forget -q --prune --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 3
```
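Once backups are running, a minimal restore check could look like this (the restore target path is hypothetical; `latest` targets the most recent snapshot):

```sh
# Load the repository credentials, list snapshots, then restore the latest one
. ~/.restic-env
restic snapshots
restic restore latest --target /tmp/restic-restore
```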
You now have full and incremental backups of the GlusterFS volume and database dumps !
{{< alert >}}
Always test the restoration !
{{< /alert >}}
## 3rd check ✅
We've covered the databases part along with some more real-world app container samples.
In the real world, we should have a full monitoring suite; this will be the [next part]({{< ref "/posts/05-build-your-own-docker-swarm-cluster-part-4" >}}).
---
title: "Setup a Docker Swarm cluster Part V - Monitoring"
date: 2022-02-19
description: "Build an opinionated containerized platform for developer..."
tags: ["docker", "swarm"]
draft: true
---
{{< lead >}}
Build your own cheap yet powerful self-hosted complete CI/CD solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part V** of a more global tutorial. [Back to the first part]({{< ref "/posts/02-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
## Metrics with Prometheus 🔦
### Prometheus install 💽
### Nodes & Containers metrics with cAdvisor & Node exporter
## Visualization with Grafana 📈
### Grafana install 💽
### Docker Swarm dashboard
## External node, MySQL and PostgreSQL exports
### Grafana dashboards for data
## 4th check ✅
We've covered all the monitoring part, with the installation of a time series DB, exporters and UI visualization.
We have all the metrics part. What about logging and tracing, which are other essential aspects for proper production analysis and debugging? We'll see that in the [next part]({{< ref "/posts/07-build-your-own-docker-swarm-cluster-part-6" >}}).
---
title: "Setup a Docker Swarm cluster Part VI - Logging & tracing"
date: 2022-02-20
description: "Build an opinionated containerized platform for developer..."
tags: ["docker", "swarm"]
draft: true
---
{{< lead >}}
Build your own cheap yet powerful self-hosted complete CI/CD solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part VI** of a more global tutorial. [Back to the first part]({{< ref "/posts/02-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
## Logs with Loki 📄
### Docker hosts
### Data logs with Promtail
### Grafana explore and dashboard
## Tracing with Jaeger 🔍
### Traefik integration
## 5th check ✅
We've covered all the logging part, with complete centralized logging for the cluster and data, as well as tracing.
Now it's time to test a real case scenario from a developer's perspective. We'll see that in the [last part]({{< ref "/posts/08-build-your-own-docker-swarm-cluster-part-7" >}}).
---
title: "Setup a Docker Swarm cluster Part VII - CI/CD workflow"
date: 2022-02-21
description: "Build an opinionated containerized platform for developer..."
tags: ["docker", "swarm"]
draft: true
---
{{< lead >}}
Build your own cheap yet powerful self-hosted complete CI/CD solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part VII** of a more global tutorial. [Back to the first part]({{< ref "/posts/02-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
## Self-hosted VCS with Gitea 🍵
## Private docker registry
## CI/CD with Drone 🪁
## SonarQube 📈
## Tracing with Jaeger with OpenTelemetry 🕰️
## Final check 🎊🏁🎊
We've covered all the basics of installing, using, and testing a professional-grade Docker Swarm cluster.