ci cd mermaid

This commit is contained in:
2022-02-25 14:06:33 +01:00
parent fb4b2bc96a
commit 1e6ccbf45d
4 changed files with 54 additions and 22 deletions

View File

@ -48,6 +48,17 @@ When done use `docker node ls` on manager node in order to confirm the presence
Yeah, the cluster is already properly configured. Far less overwhelming than Kubernetes, I should say.
### Add environment labels
Before continuing, let's add some labels to nodes in order to properly differentiate *production* nodes from *build* nodes:
```sh
# worker-01 is intended for running production app containers
docker node update --label-add environment=production worker-01
# runner-01 is intended to build docker images through the CI/CD pipeline
docker node update --label-add environment=build runner-01
```
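To double-check that the labels are properly applied, here is a quick verification sketch, run from a manager node:

```sh
# show the labels currently set on each node
docker node inspect worker-01 --format '{{ .Spec.Labels }}'
docker node inspect runner-01 --format '{{ .Spec.Labels }}'
```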
## Network file system 📄
Before going further, we'll quickly need a proper unique shared storage location for all managers and workers. It's mandatory in order to keep the same state when your app containers are automatically rearranged by the Swarm manager across multiple workers for convergence purposes.
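As a hedged sketch of what such shared storage looks like once in place, assuming a GlusterFS volume (the host and volume names below are placeholders, adapt them to your setup):

```sh
# hypothetical example: mount a GlusterFS shared volume on every node
# (gluster-01 and volume-01 are placeholder names)
sudo mount -t glusterfs gluster-01:/volume-01 /mnt/storage-pool
```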
@ -349,8 +360,7 @@ Go to the `manager-01`, be sure to have above /etc/traefik/traefik.yml file, and
```sh
# declare the current manager node as the main certificates host, required in order to respect the above deploy constraint
export NODE_ID=$(docker info -f '{{.Swarm.NodeID}}')
docker node update --label-add traefik-public.certificates=true $NODE_ID
docker node update --label-add traefik-public.certificates=true manager-01
# generate your main admin password hash, used for any admin HTTP basic auth access, into a specific environment variable
export HASHED_PASSWORD=$(openssl passwd -apr1 aNyR4nd0mP@ssw0rd)
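# optional sanity check (sketch): verify the label is set and a hash was generated
docker node inspect manager-01 --format '{{ .Spec.Labels }}'
echo "$HASHED_PASSWORD"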

View File

@ -273,7 +273,7 @@ services:
- traefik.http.services.redmine.loadbalancer.server.port=3000
placement:
constraints:
- node.role == worker
- node.labels.environment == production
networks:
traefik_public:
@ -322,7 +322,7 @@ services:
- traefik.http.routers.n8n.middlewares=admin-auth
placement:
constraints:
- node.role == worker
- node.labels.environment == production
networks:
traefik_public:
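With this change, scheduling is driven by the `environment` label rather than the node role. A minimal sketch of the pattern, with a hypothetical service name:

```yml
services:
  my-app: # hypothetical service
    deploy:
      placement:
        constraints:
          # only nodes labeled environment=production are eligible
          - node.labels.environment == production
```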

View File

I'll not use a GlusterFS volume for storing Prometheus data, because:
* No critical data, it's just metrics
* No need for backup, and it can get pretty huge
First go to the `master-01` node settings in Portainer inside *Swarm Cluster overview*, and apply a new label that indicates that this node is the host of Prometheus data.
First go to the `manager-01` node settings in Portainer inside *Swarm Cluster overview*, and apply a new label that indicates that this node is the host of Prometheus data.
![Prometheus host overview](portainer-host-overview.png)
It's equivalent to doing:
```sh
export NODE_ID=$(docker info -f '{{.Swarm.NodeID}}')
docker node update --label-add prometheus.data=true $NODE_ID
docker node update --label-add prometheus.data=true manager-01
```
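This label can then be consumed by a placement constraint in the Prometheus stack file; a minimal sketch:

```yml
# sketch: pin Prometheus to the node that carries its local data
deploy:
  placement:
    constraints:
      - node.labels.prometheus.data == true
```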
Then create a config file at `/etc/prometheus/prometheus.yml` in `master-01` node :
Then create a config file at `/etc/prometheus/prometheus.yml` on the `manager-01` node:
```yml
global:
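  # (file truncated in this diff; purely illustrative sketch of how a minimal
  # config typically continues, with Prometheus scraping itself)
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']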

View File

@ -166,13 +166,30 @@ For that execute `registry garbage-collect /etc/docker/registry/config.yml` insi
## CI/CD with Drone 🪁
```sh
# get the docker node ID of the runner
docker node ls
# update environment label
docker node update --label-add environment=runner xxxxxx
```
{{< mermaid >}}
flowchart TD
subgraph manager-01
traefik((Traefik))
drone((Drone))
gitea((Gitea))
registry((Registry))
end
subgraph worker-01
my-app((My App 01))
end
subgraph runner-01
drone-runner((Drone runner))
end
traefik-->drone
traefik-->gitea
traefik-->registry
traefik-->my-app
gitea-- webhook on pushed code -->drone
drone-- start pipeline in runner -->drone-runner
gitea-- repo clone -->drone-runner
drone-runner-- push built docker image -->registry
registry-- pull image when deploy stack -->my-app
{{< /mermaid >}}
Let's follow [the official docs](https://docs.drone.io/server/provider/gitea/) in order to generate an OAuth2 application, which is necessary for Drone integration.
@ -197,6 +214,7 @@ services:
DRONE_RPC_SECRET:
DRONE_SERVER_HOST: drone.sw.okami101.io
DRONE_SERVER_PROTO: https
DRONE_USER_CREATE: username:adr1enbe4udou1n,admin:true
networks:
- traefik_public
deploy:
@ -218,19 +236,24 @@ services:
deploy:
placement:
constraints:
- node.labels.environment == runner
- node.labels.environment == build
networks:
traefik_public:
external: true
```
| variable | description |
| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| `DRONE_GITEA_CLIENT_ID` | Use the above client ID token |
| `DRONE_GITEA_CLIENT_SECRET` | Use the above client secret token |
| `DRONE_DATABASE_PASSWORD` | Use the database password |
| `DRONE_RPC_SECRET` | Necessary for proper secured authentication between Drone and runners. Use `openssl rand -hex 16` for generating a valid token |
{{< alert >}}
Don't forget to have the proper docker labels on nodes, as explained [here]({{< ref "04-build-your-own-docker-swarm-cluster-part-3#add-environment-labels" >}}), otherwise the Drone runner will not be scheduled because of the `node.labels.environment == build` constraint.
{{< /alert >}}
| variable | description |
| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| `DRONE_GITEA_CLIENT_ID` | Use the above client ID token |
| `DRONE_GITEA_CLIENT_SECRET` | Use the above client secret token |
| `DRONE_DATABASE_PASSWORD` | Use the database password |
| `DRONE_RPC_SECRET`          | Necessary for secure authentication between Drone and runners. Use `openssl rand -hex 16` to generate a valid token.             |
| `DRONE_USER_CREATE`         | The initial user to create at launch. Put your Gitea username here to automatically set your Gitea user as Drone administrator.  |
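If you deploy the stack from the CLI instead of Portainer, one way to provide these values is to export them beforehand. A sketch, with placeholder values:

```sh
# placeholder values, replace with the real tokens from the table above
export DRONE_GITEA_CLIENT_ID=xxxxxx
export DRONE_GITEA_CLIENT_SECRET=xxxxxx
export DRONE_DATABASE_PASSWORD=xxxxxx
export DRONE_RPC_SECRET=$(openssl rand -hex 16)
```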
It's time to go to <https://drone.sw.okami101.io/> and create your first Drone account through OAuth2 from Gitea. You should be properly redirected to Gitea, where you'll just have to authorize the Drone application.