proofreading



---
{{< lead >}}
Build your own cheap but powerful self-hosted cluster and be free from any SaaS solution by following this opinionated guide 🎉
{{< /lead >}}
## Why Docker Swarm? 🧐

Because [Docker Swarm Rocks](https://dockerswarm.rocks/)!

Yeah, to some people it seems a little outdated now in 2022, a period where Kubernetes is everywhere, but I'm personally convinced that [it's really underrated](https://www.reddit.com/r/docker/comments/oufvd8/why_docker_swarm_is_not_popular_as_kubernetes/). Except for training purposes, you really don't have to throw yourself into all the fuzzy, complicated Kubernetes things, at least from a personal homelab perspective.

If you know how to use docker-compose, you're already ready for Docker Swarm, which uses almost the same API with the addition of a specific *deploy* section.

Of course, with Docker Swarm you'll be completely limited to what the Docker API has to offer, without any abstraction, contrary to K8S, which built its community around new abstracted orchestration concepts like *StatefulSets*, *operators*, *Helm*, etc. But that's the intended purpose of Swarm! There are not many new things to learn once you master Docker.
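To make this concrete, here is a minimal sketch of what that Swarm-specific *deploy* section looks like, on a hypothetical `whoami` demo service that is not part of this guide:

```yaml
version: "3"

services:
  whoami:
    image: traefik/whoami # hypothetical demo image
    deploy:
      replicas: 2 # Swarm-specific: run 2 replicas across the cluster
      placement:
        constraints:
          - node.role == worker # only schedule tasks on worker nodes
```

Everything outside `deploy` is plain docker-compose; everything inside it is what Swarm adds.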
### The 2022 Docker Swarm guide 🚀

I'll try to show you step by step how to install your own serious containerized cluster for less than $30 by using [Hetzner](https://www.hetzner.com/), one of the best cloud providers on the European market, with cheap yet really powerful VPS. Besides, they just recently opened new data centers in America!

This tutorial is a sort of massive 2022 update of the well-known *dockerswarm.rocks*, with a deeper comprehension of what happens under the hood. It's **NOT** a quick-and-done tutorial, as we'll go very deep, but at least you will understand everything that's going on. It's divided into 8 parts, so be prepared! The prerequisites before continuing:
* Have some Docker fundamentals
* Be comfortable with SSH terminal
* Registered for a [Hetzner Cloud account](https://accounts.hetzner.com/signUp), at least for part 2, or feel free to adapt to any other VPS provider
* A custom domain; I'll use `dockerswarm.rocks` here as an example
* An account with a transactional mail provider such as Mailgun, SendGrid or Sendinblue, as a bonus

{{< alert >}}
You can of course apply this guide to any other cloud provider, but I doubt that you can achieve a lower price.
{{< /alert >}}
## Final goal 🎯
At the very end of this multi-step guide, you will have a complete working production-grade secured cluster, backups included, with optional monitoring and a complete development CI/CD workflow.
### 1. Cluster initialization 🌍
* **Hetzner** VPS setup under *Ubuntu 20.04* with proper firewall configuration
* **Docker Swarm** installation, with **1 manager and 2 workers**
* **Traefik**, a cloud-native reverse proxy with automatic service discovery and SSL configuration
* **Portainer** as a simple GUI for container management
### 2. The stateful part 💾
Because Docker Swarm is not really suited for managing stateful containers (an area where K8S can shine thanks to operators), I chose to use **1 dedicated VPS** for all the data-critical parts. We will install:

* **GlusterFS** as the network filesystem, configured for the cluster nodes
* **PostgreSQL** as the main production database
* **MySQL** as an additional secondary database (optional)
* **Redis** as a fast database cache (optional)
* **Elasticsearch** as the database for indexes
* **Restic** as the S3 backup solution

Note that I will not set up this data server for **HA** (High Availability) here, as it's a completely different topic. But note that every tool chosen here can be clustered.
{{< alert >}}
There are many debates about running databases as Docker containers, but I personally prefer to use a managed server for better control, local on-disk performance, central backup management and easier database clustering.
Note that in the Kubernetes world, running containerized **AND** clustered databases has become a reality thanks to [powerful operators](https://github.com/zalando/postgres-operator) that provide easy clustering. There is obviously no such thing on Docker Swarm 🙈.
{{< /alert >}}
### 3. Testing the cluster ✅
We will use the main Portainer GUI in order to install the following tools:

* [**Diun**](https://crazymax.dev/diun/) (optional), very useful in order to be notified of updates for all images used inside your Swarm cluster
* **pgAdmin** and **phpMyAdmin** as web database managers (optional)
* Some containerized app samples such as **matomo**, **redmine** and **n8n**, which will show you how simple it is to install self-hosted web apps thanks to your shiny new cluster!
### 4. Monitoring 📈
This is an optional part, feel free to skip it. We'll set up production-grade monitoring and tracing with complete dashboards.

* **Prometheus** as the time series DB for monitoring
  * We will configure many metrics exporters for each critical part (data node, PostgreSQL, MySQL, container details thanks to **cAdvisor**)
  * Basic usage of *PromQL*
* **Loki** with **Promtail** for centralized logs, fetched from the data node and docker containers
* **Jaeger** as the main *tracing* tool, with Elasticsearch as the main data storage
* Traefik configuration for metrics, logs and tracing as a perfect sample
* **Grafana** as a GUI dashboard builder with many batteries-included dashboards
  * Monitoring of the whole cluster
  * Node, PostgreSQL and MySQL metrics
  * Navigating through the log history of all containers and the data server node thanks to Loki, ELK-like, with *LogQL*
### 5. CI/CD setup 💻
* **Gitea** as a lightweight centralized version control system, in case you want to get out of GitHub / GitLab Cloud
* A private **docker registry** with a minimal UI for all your custom app images, which will be built by your development process and used as base images for your production containers on the cluster
* **Drone CI** as a self-hosted CI/CD solution
* **SonarQube** as self-hosted code quality control
* A perfect load testing environment with the **k6** + **InfluxDB** + **Grafana** combo

Finally, we'll finish this guide with a simple mini-app development using the above CI/CD integration! We'll test the entire configuration with a basic .NET weather API.
## Cluster Architecture 🏘️


---
{{< lead >}}
Build your own cheap but powerful self-hosted cluster and be free from any SaaS solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part II** of a more global topic tutorial. [Back to first part]({{< ref "/posts/02-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
## Setup DNS and SSH config 🌍
Now use `hcloud server ip manager-01` to get the unique frontal IP address of the cluster, which will be used for any entry point, including SSH. Then edit the DNS of your domain and point a particular subdomain, as well as a wildcard subdomain, to this IP. You will see later what this wildcard domain is for. I will use `sw.dockerswarm.rocks` as a sample. It should look like the following:
```txt
sw 3600 IN A 123.123.123.123
*.sw 43200 IN CNAME sw
```
As soon as the above DNS is applied, you should be able to ping `sw.dockerswarm.rocks` or any `xyz.sw.dockerswarm.rocks` domain.
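A quick sanity check from your local machine, assuming your DNS resolver has already picked up the change (the IP is the sample one from above):

```sh
ping -c 1 sw.dockerswarm.rocks       # should reach 123.123.123.123
dig +short xyz.sw.dockerswarm.rocks  # should resolve through the wildcard CNAME to the same IP
```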
It's now time to finalize your local SSH config for optimal access. Go to `~/.ssh/config` and add the following hosts (change them according to your own setup):
```txt
Host sw
    User swarm
    Port 2222
    HostName sw.dockerswarm.rocks

Host sw-data-01
    User swarm
    # ...
```
And that's it! You should now be able to quickly ssh into these servers with `ssh sw`, `ssh sw-worker-01`, `ssh sw-runner-01`, `ssh sw-data-01`, which is far more practical.
{{< alert >}}
Note that I only use `sw.dockerswarm.rocks` as the unique endpoint for SSH access to all internal servers, without any need for external SSH access to servers other than `manager-01`. It's known as an SSH proxy, which allows a single access point for a better security perspective by simply jumping from the main SSH access.
{{< /alert >}}
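If you ever need to reach an internal node directly from your machine, the same jump can be written explicitly with `ProxyJump` — a sketch, with an assumed private IP:

```txt
# hypothetical example: jump through the manager to reach an internal node
Host sw-data-01-direct
    User swarm
    HostName 10.0.0.3 # assumed private IP, adapt to your own setup
    ProxyJump sw
```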
## The firewall 🧱


---
{{< lead >}}
Build your own cheap but powerful self-hosted cluster and be free from any SaaS solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part III** of a more global topic tutorial. [Back to first part]({{< ref "/posts/02-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
```yaml
certificatesResolvers:
  le:
    acme:
      email: admin@sw.dockerswarm.rocks
      storage: /certificates/acme.json
      tlsChallenge: {}

providers:
  docker:
    defaultRule: Host(`{{ index .Labels "com.docker.stack.namespace" }}.sw.dockerswarm.rocks`)
    exposedByDefault: false
    swarmMode: true
    network: traefik_public
```
It tells Traefik to read through the Docker API in order to discover any new services.

| variable | description |
| -------- | ----------- |
| `network` | Default network connection for all exposed containers |
| `defaultRule` | Default rule that will be applied to HTTP routes, in order to redirect a particular URL to the right service. Each service container can override this default value with the `traefik.http.routers.my-container.rule` label. |
As a default route rule, I set here a value adapted for automatic subdomain discovery. `{{ index .Labels "com.docker.stack.namespace" }}.sw.dockerswarm.rocks` is a dynamic Go template string which means: use the `com.docker.stack.namespace` label that is applied by default by Docker Swarm on each deployed service. So if I deploy a swarm stack called `myapp`, Traefik will automatically set `myapp.sw.dockerswarm.rocks` as the default domain URL for my service, with automatic TLS challenge!
All I have to do is add the specific label `traefik.enable=true` inside the Docker service configuration and make sure that it's on the `traefik_public` network.
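For illustration, a minimal sketch of a hypothetical stack exposed through Traefik (the `whoami` service and image are examples, not part of this guide):

```yaml
version: "3"

services:
  whoami:
    image: traefik/whoami # hypothetical demo image
    networks:
      - traefik_public
    deploy:
      labels:
        # no explicit rule needed: if deployed as the `whoami` stack, the
        # defaultRule exposes it as whoami.sw.dockerswarm.rocks
        - traefik.enable=true

networks:
  traefik_public:
    external: true
```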
This is the Traefik dynamic configuration part. I declare here many services that we'll use later.

| name | type | description |
| ---- | ---- | ----------- |
| `gzip` | middleware | provides [basic gzip compression](https://doc.traefik.io/traefik/middlewares/http/compress/). Note that Traefik doesn't support brotli yet, which is pretty disappointing since absolutely all other reverse proxies support it... |
| `admin-auth` | middleware | provides basic HTTP authorization. `basicauth.users` will use the standard `htpasswd` format. I use `HASHED_PASSWORD` as a dynamic environment variable. |
| `admin-ip` | middleware | provides IP whitelist protection, given a source range. |
| `traefik-public-api` | router | configured for proper redirection to the internal Traefik dashboard API from `traefik.sw.dockerswarm.rocks`, which is defined by the default rule. It's configured with the above `admin-auth` and `admin-ip` for proper protection. |
| `traefik-public` | service | allows proper redirection to the default exposed 8080 port of the Traefik container. This is sadly mandatory when using [Docker Swarm](https://doc.traefik.io/traefik/providers/docker/#port-detection_1). |
```sh
docker service ls
docker service logs traefik_traefik
```
After a few seconds, Traefik should launch and generate a proper SSL certificate for its own domain. You can finally go to <https://traefik.sw.dockerswarm.rocks>. `http://` should work as well thanks to permanent redirection.
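You can also check the generated certificate from your terminal, for example:

```sh
# should print the Let's Encrypt issuer and the validity dates
openssl s_client -connect traefik.sw.dockerswarm.rocks:443 \
  -servername traefik.sw.dockerswarm.rocks </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```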
If properly configured, you will be prompted for access. After entering admin as the user and your own chosen password, you should finally have access to the Traefik dashboard, similar to below!
This is an adapted file from the official [Portainer Agent Stack](https://downloads.portainer.io/portainer-agent-stack.yml).
We use `agent_network` as an overlay network for communication between the agents and the manager. No need for the `admin-auth` middleware here, as Portainer has its own authentication.
{{< alert >}}
Note that `traefik_public` must be set to **external** in order to reuse the original Traefik network.
As soon as the main portainer service has successfully started, Traefik will detect it:
[![Traefik routers](traefik-routers.png)](traefik-routers.png)
It's time to create your admin account through <https://portainer.sw.dockerswarm.rocks>. If all goes well, i.e. the Portainer agents are accessible from the Portainer portal, you should have access to your cluster home environment with 2 active stacks.
[![Portainer home](portainer-home.png)](portainer-home.png)
## Keep the containers image up-to-date ⬆️
It's finally time to test our new cluster environment by deploying some images through the Portainer GUI. We'll start by installing [`Diun`](https://crazymax.dev/diun/), a very useful tool which notifies us when a used docker image has an update available in its Docker registry.

Create a new `diun` stack through Portainer and set the following content:
Use the below section of Portainer to set all personal environment variables.
[![Diun Stack](diun-stack.png)](diun-stack.png)
Finally, click on **Deploy the stack**; it's the equivalent of the previous `docker stack deploy`, nothing magic here. The difference is that Portainer will store the YML inside its volume, allowing full control, contrary to the limited Traefik and Portainer cases.

Diun should now be deployed on the manager host and ready to scan images for any updates!


---
{{< lead >}}
Build your own cheap but powerful self-hosted cluster and be free from any SaaS solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part IV** of a more global topic tutorial. [Back to first part]({{< ref "/posts/02-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
The important part is `/etc/hosts`, in order to allow proper DNS resolution of `data-01`, configured in the `PMA_HOST` environment variable. This will avoid dragging the real IP of the data server everywhere...
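For illustration, this kind of mapping is typically done with `extra_hosts` in the compose file — a sketch, with an assumed private IP for the data server:

```yaml
services:
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_HOST: data-01
    extra_hosts:
      - "data-01:10.0.0.3" # assumed private IP, adapt to your own setup
```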
Deploy it, and you should be able to access <https://phpmyadmin.sw.dockerswarm.rocks> after a few seconds, with full admin access to your MySQL DB!
[![phpMyAdmin](phpmyadmin.png)](phpmyadmin.png)
You'll need both the `PGADMIN_DEFAULT_EMAIL` and `PGADMIN_DEFAULT_PASSWORD` environment variables for proper initialization.

Deploy it, and after a few seconds you should be able to access <https://pgadmin.sw.dockerswarm.rocks> with the default logins just above.

Once logged in, you need to add the previously configured PostgreSQL server address via *Add new server*. Just add the relevant host information in the *Connection* tab. The host must stay `data-01` with swarm as superuser access.
Now we'll create the `matomo` DB with a dedicated user through the above *phpMyAdmin*. For that, simply create a new `matomo` account and always specify `10.0.0.0/8` inside the host field. Don't forget to check *Create database with same name and grant all privileges*.
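If you prefer the CLI over phpMyAdmin, the equivalent would look like this (hypothetical password; run against the MySQL server on `data-01`):

```sh
mysql -h data-01 -u root -p <<'SQL'
CREATE DATABASE matomo;
CREATE USER 'matomo'@'10.0.0.0/255.0.0.0' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON matomo.* TO 'matomo'@'10.0.0.0/255.0.0.0';
SQL
```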
Then go to <https://matomo.sw.dockerswarm.rocks> and go through the whole installation. At the DB install step, use the above credentials and the hostname of your data server, which is `data-01` in our case.

[![Matomo](matomo.png)](matomo.png)
Configure `REDMINE_DB_*` with the proper above-created DB credentials.

{{< alert >}}
I use a dynamic `ROOT_PATH` here, so you must add this variable with the `/mnt/storage-pool/redmine` value in the below *Environment variables* section of Portainer.
{{< /alert >}}

After a few seconds, <https://redmine.sw.dockerswarm.rocks> should be accessible and ready to use; use admin / admin for the admin connection!
[![Redmine](redmine.png)](redmine.png)
```yaml
networks:
  traefik_public: # network name assumed, the external Traefik network
    external: true
```
And voilà, it's done! n8n will automatically migrate the database, and <https://n8n.sw.dockerswarm.rocks> should soon be accessible. Note that we use the `admin-auth` middleware because n8n doesn't offer authentication. Use the same Traefik credentials.
[![n8n](n8n.png)](n8n.png)


---
{{< lead >}}
Build your own cheap but powerful self-hosted cluster and be free from any SaaS solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part V** of a more global topic tutorial. [Back to first part]({{< ref "/posts/02-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.

This part is totally optional, as it's mainly focused on monitoring. Feel free to skip it.
## Metrics with Prometheus 🔦
Prometheus has become the de facto standard for self-hosted monitoring, in part thanks to its architecture. It's a TSDB (Time Series Database) that polls (aka scrapes) standard metrics REST endpoints, provided by the tools to monitor. It's the case for Traefik, as we have seen in [part III]({{< ref "04-build-your-own-docker-swarm-cluster-part-3#traefik-" >}}). For tools that don't support it natively, like databases, you'll find many exporters that will do the job for you.
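For reference, a scrape job in `prometheus.yml` is only a few lines — a minimal sketch for the Traefik metrics endpoint (the target name and port are assumptions, adapt them to your stack):

```yaml
scrape_configs:
  - job_name: traefik
    static_configs:
      - targets: ["traefik_traefik:8080"] # assumed Swarm service name and metrics port
```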
### Prometheus install 💽
The `private` network will serve us later for the exporters. The next config options are useful to control the size of the database:

| option | description |
| ------ | ----------- |
| `storage.tsdb.retention.size` | The max DB size |
| `storage.tsdb.retention.time` | The max data retention time |
Deploy it, and <https://prometheus.sw.dockerswarm.rocks> should be available after a few seconds. Use the same Traefik credentials for login.

You should now have access to some metrics!
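For example, once Traefik is scraped, a standard Traefik metric combined with `rate()` gives you the per-second HTTP request rate over the last 5 minutes:

```txt
rate(traefik_entrypoint_requests_total[5m])
```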
```yaml
services:
  grafana:
    image: grafana/grafana:8.4.1
    environment:
      GF_SERVER_DOMAIN: grafana.sw.dockerswarm.rocks
      GF_SERVER_ROOT_URL: https://grafana.sw.dockerswarm.rocks
      GF_DATABASE_TYPE: postgres
      GF_DATABASE_HOST: data-01:5432
      GF_DATABASE_NAME: grafana
      # ...

networks:
  traefik_public: # network name assumed, same external Traefik network as the other stacks
    external: true
```
Set a proper `GF_DATABASE_PASSWORD` and deploy. The database migration should be automatic (don't hesitate to check inside pgAdmin). Go to <https://grafana.sw.dockerswarm.rocks> and log in as admin / admin.
[![Grafana home](grafana-home.png)](grafana-home.png)


---
{{< lead >}}
Build your own cheap but powerful self-hosted cluster and be free from any SaaS solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part VI** of a more global topic tutorial. [Back to first part]({{< ref "/posts/02-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
| name | description |
| ---- | ----------- |
| `agent` | a simple REST endpoint for receiving traces, which are then forwarded to the collector. An agent should be specific to each machine host, similarly to the Portainer agent. |
| `query` | a simple UI that connects to the span storage and allows simple visualization. |
After a few seconds, go to <https://jaeger.sw.dockerswarm.rocks> and enter the Traefik credentials. You will land on the Jaeger Query UI with empty data.

It's time to inject some trace data. Be sure all the above Jaeger services are started through Portainer before continuing.


---
{{< lead >}}
Build your own cheap but powerful self-hosted cluster and be free from any SaaS solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part VII** of a more global topic tutorial. [Back to first part]({{< ref "/posts/02-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
## Self-hosted VCS 🍵
This specific VCS part is optional and only for developers who want to be completely independent of any cloud VCS provider, by self-hosting their own system.
{{< alert >}}
A backup is highly critical! Don't underestimate that part and be sure to have a proper solution. **Restic**, described in [this previous section]({{< ref "05-build-your-own-docker-swarm-cluster-part-4#data-backup-" >}}), is a perfect choice.
{{< /alert >}}

We added a specific TCP router in order to allow SSH cloning.

{{< alert >}}
Note that we need to indicate the entry points in order to avoid bad redirections from other HTTPS-based services.
{{< /alert >}}
Now go to <https://gitea.sw.dockerswarm.rocks> and go through the installation procedure. Change the default SQLite provider to a more production-suited database.

Create a new `gitea` PostgreSQL database as usual, from pgAdmin or `psql` for pro-CLI users, and set the corresponding DB access info in the Gitea installer. The host should be `data-01`.
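For the `psql` way, a sketch of the equivalent commands (hypothetical password; run on `data-01`):

```sh
sudo -u postgres psql <<'SQL'
CREATE USER gitea WITH PASSWORD 'a-strong-password';
CREATE DATABASE gitea OWNER gitea;
SQL
```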
Don't forget to change all domain-related fields to the proper current domain URL, which is `gitea.sw.dockerswarm.rocks` in my case. You should set proper SMTP settings for notifications.
[![Gitea admin dashboard](gitea-install.png)](gitea-install.png)
```yaml
deploy:
  labels:
    - traefik.enable=true
    - traefik.http.routers.registry.rule=Host(`registry.sw.dockerswarm.rocks`) && PathPrefix(`/v2`)
    - traefik.http.routers.registry.middlewares=admin-auth
    - traefik.http.services.registry.loadbalancer.server.port=5000
  placement:
    # ...
```
{{< alert >}}
Note that both services must be exposed to Traefik. In order to keep the same subdomain, we add an additional path condition to redirect to the correct service. It's OK in our case because the official docker registry only uses `/v2` as its endpoint.
{{< /alert >}}
Go to <https://registry.sw.dockerswarm.rocks> and use the Traefik credentials. We have no images yet, so let's create one.
### Test our private registry
Log into the `manager-01` server, do `docker login registry.sw.dockerswarm.rocks` and enter the proper credentials. You should see *Login Succeeded*. Don't worry about the warning. Create the next Dockerfile somewhere:
```Dockerfile
FROM alpine:latest
# ...
```

Then build and push the image:
```sh
docker build -t alpinegit .
docker tag alpinegit registry.sw.dockerswarm.rocks/alpinegit
docker push registry.sw.dockerswarm.rocks/alpinegit
```
Go back to the above <https://registry.sw.dockerswarm.rocks>. You should see 1 new image!
[![Docker registry](docker-registry.png)](docker-registry.png)
Delete the test image through the UI and from the local docker with `docker image rm registry.sw.dockerswarm.rocks/alpinegit`.
{{< alert >}}
Note that the image blobs always physically stay on disk, even when "deleted". You must manually launch the docker registry GC in order to clean up unused images.
{{< /alert >}}
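For reference, the official registry image ships with a garbage collector; a sketch of how to invoke it (the container name filter is an assumption, adapt it to your stack):

```sh
# run the GC inside the running registry container
docker exec $(docker ps -qf name=registry) \
  registry garbage-collect /etc/docker/registry/config.yml
```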
{{< mermaid >}}
flowchart TD
%% fragment of the full pipeline diagram; diagram type assumed
drone-runner-- push built docker image -->registry
registry-- pull image when deploy stack -->my-app
{{< /mermaid >}}
Let's follow [the official docs](https://docs.drone.io/server/provider/gitea/) for generating an OAuth2 application on Gitea, which is necessary for the Drone integration. Set `https://drone.sw.dockerswarm.rocks` as the redirect URI after successful authentication.
[![Gitea drone application](gitea-drone-application.png)](gitea-drone-application.png)
```yaml
environment:
  DRONE_DATABASE_DATASOURCE: postgres://drone:${DRONE_DATABASE_PASSWORD}@data-01:5432/drone?sslmode=disable
  DRONE_GITEA_CLIENT_ID:
  DRONE_GITEA_CLIENT_SECRET:
  DRONE_GITEA_SERVER: https://gitea.sw.dockerswarm.rocks
  DRONE_RPC_SECRET:
  DRONE_SERVER_HOST:
  DRONE_SERVER_PROTO:
```
| variable | description |
| -------- | ----------- |
| `DRONE_SERVER_HOST` | The host of the main Drone server. I'll use `drone.sw.dockerswarm.rocks` here. |
| `DRONE_SERVER_PROTO` | The scheme protocol, which is `https`. |
| `DRONE_GITEA_CLIENT_ID` | Use the above client ID token. |
| `DRONE_GITEA_CLIENT_SECRET` | Use the above client secret token. |
| `DRONE_RPC_SECRET` | Necessary for properly secured authentication between Drone and the runners. Use `openssl rand -hex 16` to generate a valid token. |
| `DRONE_USER_CREATE` | The initial user to create at launch. Put your Gitea username here to automatically set the Gitea user as Drone administrator. |
It's time to go to <https://drone.sw.dockerswarm.rocks/> and generate your first Drone account through OAuth2 from Gitea. You should be properly redirected to Gitea, where you'll just have to authorize the Drone application.
[![Gitea oauth2](gitea-oauth2.png)](gitea-oauth2.png)
```sh
dotnet new gitignore
git init
git add .
git commit -m "first commit"
git remote add origin git@gitea.sw.dockerswarm.rocks:adr1enbe4udou1n/my-weather-api.git # if you use ssh
git push -u origin main
```
It will create a webhook inside the repository settings, triggered on every code push.

Now generate a new SSH key on `manager-01`:
```sh
ssh-keygen -t ed25519 -C "admin@sw.mydomain.rocks"
ssh-keygen -t ed25519 -C "admin@sw.dockerswarm.rocks"
cat .ssh/id_ed25519 # the private key to set in swarm_ssh_key
cat .ssh/id_ed25519.pub # the public key to add just below
echo "ssh-ed25519 AAAA... admin@sw.mydomain.rocks" | tee -a .ssh/authorized_keys
echo "ssh-ed25519 AAAA... admin@sw.dockerswarm.rocks" | tee -a .ssh/authorized_keys
```
Then configure the repository settings on Drone. Go to the *Organization > Secrets* section and add some global secrets.
```yaml
steps:
  # ...
  - name: image
    image: plugins/docker
    settings:
      registry: registry.sw.dockerswarm.rocks
      repo: registry.sw.dockerswarm.rocks/adr1enbe4udou1n/my-weather-api
      tags: latest
      username:
        from_secret: registry_username
      # ...
```
Commit both above files and push to the remote repo. Drone should be automatically triggered.
[![Drone build](drone-build.png)](drone-build.png)
If all goes well, the final image should be pushed to our docker registry. You can verify it by navigating to <https://registry.sw.dockerswarm.rocks>.
### Deployment (the CD part) 🚀
```yaml
version: "3"

services:
  app:
    image: registry.sw.dockerswarm.rocks/adr1enbe4udou1n/my-weather-api
    environment:
      ASPNETCORE_ENVIRONMENT: Development
    networks:
      # ...
```
I use `Development` in order to have the swagger UI.

{{< alert >}}
Be sure to have registered the private registry in Portainer before deploying, as [explained here](#register-registry-in-portainer).
{{< /alert >}}

Finally, deploy and see the result at <https://weather.sw.dockerswarm.rocks/swagger>. You should have access to the swagger UI, and the API endpoints should respond correctly.
#### Continuous deployment
Now, it's clear that we don't want to deploy manually every time the code is pushed.

First, be sure that the following command works well on `manager-01`: `docker service update --image registry.sw.dockerswarm.rocks/adr1enbe4udou1n/my-weather-api:latest weather_app --with-registry-auth`. It simply updates the current `weather_app` service with the latest available image version from the private registry.

Now we must be sure that the `runner-01` host can reach the `manager-01` server from outside. If you have applied the firewall at the beginning of this tutorial, only our own IP is authorized. Let's add the public IP of `runner-01` to your `firewall-external` inside the Hetzner console.
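If you prefer the CLI over the console, the `hcloud` tool can do the same — a sketch, assuming your firewall is really named `firewall-external` and SSH is on port 2222 as configured earlier:

```sh
RUNNER_IP=$(hcloud server ip runner-01)
hcloud firewall add-rule firewall-external \
  --direction in --protocol tcp --port 2222 \
  --source-ips "${RUNNER_IP}/32"
```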
Now let's add a new `deploy` step inside `.drone.yml` into our pipeline for automatic deployment:

```yaml
  - name: deploy
    image: appleboy/drone-ssh
    settings:
      host: sw.dockerswarm.rocks
      port: 2222
      username: swarm
      key:
        from_secret: swarm_ssh_key
      script:
        - docker service update --image registry.sw.dockerswarm.rocks/adr1enbe4udou1n/my-weather-api:latest weather_app --with-registry-auth
  #...
```


---
{{< lead >}}
Build your own cheap but powerful self-hosted cluster and be free from any SaaS solution by following this opinionated guide 🎉
{{< /lead >}}
This is **Part VIII** of a more global topic tutorial. [Back to first part]({{< ref "/posts/02-build-your-own-docker-swarm-cluster" >}}) to start from the beginning.
```yaml
version: "3"

services:
  app:
    image: registry.sw.dockerswarm.rocks/adr1enbe4udou1n/my-weather-api
    environment:
      ASPNETCORE_ENVIRONMENT: Development
      Jaeger__Host: tasks.jaeger_agent
```
Set the proper `ROOT_PATH` with `/mnt/storage-pool/sonar` and `SONAR_JDBC_PASSWORD` with the above DB password.

Go to <https://sonar.sw.dockerswarm.rocks>, use admin / admin credentials and update the password.
### Project analysis
You must have at least Java 11 installed locally.

```sh
dotnet tool install --global dotnet-sonarscanner
dotnet sonarscanner begin /k:"My-Weather-API" /d:sonar.host.url="https://sonar.sw.mydomain.rocks" /d:sonar.login="above-generated-token"
dotnet sonarscanner begin /k:"My-Weather-API" /d:sonar.host.url="https://sonar.sw.dockerswarm.rocks" /d:sonar.login="above-generated-token"
dotnet build
# ...
```
Wait a few minutes and the final report analysis should automatically appear.

Because running the scanner manually is boring, let's integrate it into our favorite CI. Create the following secrets through the Drone UI:
| name | level | description |
| ---------------- | ------------ | ----------------------------------------------------------- |
| `sonar_host_url` | organization | Set the sonar host URL `https://sonar.sw.dockerswarm.rocks` |
| `sonar_token` | repository | Set the above token |
Change the `build` step in the `.drone.yml` file:
```js
import http from "k6/http";
import { check } from "k6";

export default function () {
  http.get('https://weather.sw.dockerswarm.rocks/WeatherForecast');
}
```
```js
export const options = {
  // ...
};

export default function () {
  http.get('https://weather.sw.dockerswarm.rocks/WeatherForecast');
}
```