set lightbox everywhere

@@ -26,7 +26,7 @@ Initiate the project by following these simple steps:
2. Navigate to *Security > API tokens*
3. Generate a new API key with Read & Write permissions and copy the generated token

-![](hetzner-api-token.png)
+[![](hetzner-api-token.png)](hetzner-api-token.png)

Then go to the terminal and prepare the new context:
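
Assuming the Hetzner `hcloud` CLI (its *context* concept matches the wording here; the context name is a placeholder), a minimal sketch:

```sh
# Create a named context for this project; the CLI prompts for the API token
hcloud context create swarm-project
# Verify the new context is now active
hcloud context list
```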

@@ -379,7 +379,7 @@ After a few seconds, Traefik should launch and generate a proper SSL certificate for

If properly configured, you will be prompted for credentials. After entering `admin` as the user and your chosen password, you should finally have access to the Traefik dashboard, similar to below!

-![](traefik-dashboard.png)
+[![](traefik-dashboard.png)](traefik-dashboard.png)
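
As an aside, the `admin` basic-auth credential used by Traefik middlewares is typically generated with `htpasswd` (from `apache2-utils`); a sketch, since the actual middleware definition sits outside this hunk:

```sh
# Generate an htpasswd entry for user "admin" (replace "changeme");
# double the dollar signs if you paste the hash into a docker-compose YML
htpasswd -nb admin changeme
```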

### Portainer ⛵

@@ -449,11 +449,11 @@ docker service ls

As soon as the main Portainer service has successfully started, Traefik will detect it and configure it with SSL. The dedicated router for Portainer should appear in the Traefik dashboard's HTTP section, as below.

-![](traefik-routers.png)
+[![](traefik-routers.png)](traefik-routers.png)

It's time to create your admin account through <https://portainer.sw.okami101.io>. If all goes well, i.e. the Portainer agents are accessible from the Portainer portal, you should have access to your cluster home environment with 2 active stacks.

-![](portainer-home.png)
+[![](portainer-home.png)](portainer-home.png)

{{< alert >}}
If you go to the stacks menu, you will note that both `traefik` and `portainer` are under *Limited* control, because these stacks were deployed outside Portainer. We will create and deploy the next stacks directly from the Portainer GUI.

@@ -529,7 +529,7 @@ Use the below section of Portainer for setting all personal environment variables. In

{{< /tab >}}
{{< /tabs >}}

-![](diun-stack.png)
+[![](diun-stack.png)](diun-stack.png)

Finally click on **Deploy the stack**; it's the equivalent of the previous `docker stack deploy`, nothing magic here. The difference is that Portainer will store the YML inside its volume, allowing full control, contrary to the limited Traefik and Portainer cases.
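
For reference, the CLI equivalent would be something like this sketch (the compose file and stack name are assumptions):

```sh
# Deploy (or update) the diun stack from its compose file
docker stack deploy -c diun.yml diun
```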

@@ -537,11 +537,11 @@ Diun should now be deployed on the manager host and ready to scan images for any up

You can check the full service page, which allows manual scaling, on-the-fly volume mounting, environment variable modification, and shows the current running tasks (aka containers).

-![](diun-service.png)
+[![](diun-service.png)](diun-service.png)
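
The same views exist on the CLI; a sketch, assuming Swarm's default `<stack>_<service>` naming:

```sh
# List the running tasks of the diun service
docker service ps diun_diun
# Set the replica count explicitly (illustrative; diun normally runs a single replica)
docker service scale diun_diun=1
```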

You can check the service logs, which consist of an aggregate of all task logs.

-![](diun-logs.png)
+[![](diun-logs.png)](diun-logs.png)
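
The CLI counterpart, under the same naming assumption:

```sh
# Follow the aggregated logs of all diun tasks
docker service logs -f diun_diun
```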

## 2nd check ✅

@@ -98,7 +98,7 @@ The important part is `/etc/hosts` in order to allow proper DNS resolution for `d

Deploy it, and you should have access to <https://phpmyadmin.sw.okami101.io> after a few seconds, with full admin access to your MySQL DB!

-![](phpmyadmin.png)
+[![](phpmyadmin.png)](phpmyadmin.png)

### PostgreSQL 14 🐘

@@ -204,7 +204,7 @@ Once logged in, you need to add the previously configured PostgreSQL server address

Save it, and you now have full access to your PostgreSQL DB!

-![](pgadmin.png)
+[![](pgadmin.png)](pgadmin.png)
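
To double-check outside pgAdmin, a quick connectivity sketch (user and host follow the earlier setup; adjust to your own):

```sh
# List databases on the data node to confirm the server is reachable
psql -h data-01 -U postgres -c '\l'
```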

## Further cluster app testing

@@ -288,7 +288,7 @@ I use a dynamic `ROOT_PATH` here, so you must add this variable with `/mnt/stora

After a few seconds, <https://redmine.sw.okami101.io> should be accessible and ready to use; use admin / admin for the admin login!

-![](redmine.png)
+[![](redmine.png)](redmine.png)

### N8N over PostgreSQL

@@ -331,7 +331,7 @@ networks:

And voilà, it's done; n8n will automatically migrate the database, and <https://n8n.sw.okami101.io> should soon be accessible. Note that we use the `admin-auth` middleware because n8n doesn't offer authentication. Use the same Traefik credentials.

-![](n8n.png)
+[![](n8n.png)](n8n.png)

## Data backup 💾

@@ -30,7 +30,7 @@ I won't use a GlusterFS volume for storing Prometheus data, because:

First go to the `manager-01` node settings in Portainer inside the *Swarm Cluster overview*, and apply a new label indicating that this node is the host of Prometheus data.

-![](portainer-host-overview.png)
+[![](portainer-host-overview.png)](portainer-host-overview.png)

It's the equivalent of doing:
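
A hedged sketch of that CLI equivalent (the exact label key isn't visible in this hunk, so `prometheus.data` is an assumption):

```sh
# Tag manager-01 as the Prometheus data host
docker node update --label-add prometheus.data=true manager-01
```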

@@ -105,11 +105,11 @@ Deploy it and <https://prometheus.sw.okami101.io> should be available after a few

You should now have access to some metrics!

-![](prometheus-graph.png)
+[![](prometheus-graph.png)](prometheus-graph.png)

In *Status > Targets*, you should have 2 endpoints enabled, which correspond to the above scrape config.

-![](prometheus-targets.png)
+[![](prometheus-targets.png)](prometheus-targets.png)
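
The same check can be scripted against the Prometheus HTTP API; a sketch assuming the internal `http://prometheus:9090` URL used later for Grafana, with `jq` installed:

```sh
# List the currently active scrape targets and their health
curl -s http://prometheus:9090/api/v1/targets | jq '.data.activeTargets[] | {job: .labels.job, health: .health}'
```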

### Get cluster metrics

@@ -135,7 +135,7 @@ set -- /bin/node_exporter "$@"
exec "$@"
```

-![](portainer-configs.png)
+[![](portainer-configs.png)](portainer-configs.png)

It will take the node hostname and create an exploitable data metric for Prometheus.
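
For context, the full entrypoint this hunk belongs to typically looks like the following sketch; the mount paths and metric name follow the usual node_exporter textfile-collector pattern and are assumptions:

```sh
#!/bin/sh -e
# Expose the Swarm node hostname as a static node_meta metric,
# written where node_exporter's textfile collector will pick it up
NODE_NAME=$(cat /etc/nodename)
echo "node_meta{node_name=\"$NODE_NAME\"} 1" > /etc/node-exporter/node_meta.prom

# Hand over to node_exporter with any passed arguments
set -- /bin/node_exporter "$@"
exec "$@"
```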

@@ -212,11 +212,11 @@ You need to restart the Prometheus service in order to apply the above config.
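
A quick way to force that restart from the CLI (the service name is an assumption):

```sh
# Force a rolling restart of the Prometheus service
docker service update --force prometheus_prometheus
```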

Go back to the Prometheus targets UI in order to confirm the appearance of 2 new targets.

-![](prometheus-targets-all.png)
+[![](prometheus-targets-all.png)](prometheus-targets-all.png)

Confirm you can fetch the `node_meta` metric with proper hostnames:

-![](prometheus-node-meta.png)
+[![](prometheus-node-meta.png)](prometheus-node-meta.png)
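
Or from the CLI, through the Prometheus query API (same internal URL assumption as before):

```sh
# Fetch the node_meta series; each result should carry a node_name label
curl -G -s http://prometheus:9090/api/v1/query --data-urlencode 'query=node_meta' | jq '.data.result[].metric'
```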

## Visualization with Grafana 📈

@@ -284,7 +284,7 @@ networks:

Set a proper `GF_DATABASE_PASSWORD` and deploy. Database migration should be automatic (don't hesitate to check inside pgAdmin). Go to <https://grafana.sw.okami101.io> and log in as admin / admin.

-![](grafana-home.png)
+[![](grafana-home.png)](grafana-home.png)

### Docker Swarm dashboard

@@ -292,11 +292,11 @@ For the best showcase scenario of Grafana, let's import an [existing dashboard](htt

First we need to add Prometheus as the main metrics data source. Go to the *Configuration > Data sources* menu and click on *Add data source*. Select Prometheus and set the internal Docker Prometheus URL, which should be `http://prometheus:9090`.

-![](grafana-prometheus-datasource.png)
+[![](grafana-prometheus-datasource.png)](grafana-prometheus-datasource.png)
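
The same data source can be registered through Grafana's HTTP API; a sketch, assuming the admin / admin credentials from above:

```sh
# Create the Prometheus data source programmatically
curl -s -X POST https://grafana.sw.okami101.io/api/datasources \
  -u admin:admin -H "Content-Type: application/json" \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy"}'
```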

Then go to *Create > Import*, load `11939` as the dashboard ID, select the Prometheus source, and whoa!

-![](grafana-docker-swarm-dashboard.png)
+[![](grafana-docker-swarm-dashboard.png)](grafana-docker-swarm-dashboard.png)

The *Available Disk Space* metrics card should indicate N/A because it's not properly configured for Hetzner disks. Just edit the card and change the PromQL inside the *Metrics browser* field, replacing `device="rootfs", mountpoint="/"` with `device="/dev/sda1", mountpoint="/host"`.

@@ -403,7 +403,7 @@ Expand the Prometheus config with 3 new jobs:

Then restart the Prometheus service and go back to the targets to check that you have all the new `data-01` endpoints.

-![](prometheus-targets-data.png)
+[![](prometheus-targets-data.png)](prometheus-targets-data.png)

### Grafana dashboards for data

@@ -419,15 +419,15 @@ Nothing more to do!

#### Node Dashboard

-![](grafana-node-exporter.png)
+[![](grafana-node-exporter.png)](grafana-node-exporter.png)

#### MySQL Dashboard

-![](grafana-mysql-exporter.png)
+[![](grafana-mysql-exporter.png)](grafana-mysql-exporter.png)

#### PostgreSQL Dashboard

-![](grafana-postgres-exporter.png)
+[![](grafana-postgres-exporter.png)](grafana-postgres-exporter.png)

## 4th check ✅

@@ -189,7 +189,7 @@ And voilà, Loki is the default log driver for all containers. Note that you can s

Now it's time to set up our central logs dashboard. First add *Loki* as a new data source inside Grafana, similarly to the previous Prometheus one. Set `http://data-01:3100` in the URL field and save it.

-![](grafana-loki-datasource.png)
+[![](grafana-loki-datasource.png)](grafana-loki-datasource.png)
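
Before saving, you can verify that Loki answers on that address; `/ready` is a standard Loki endpoint:

```sh
# Should return "ready" once Loki has fully started
curl -s http://data-01:3100/ready
```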

Then create a new dashboard. No need to import this time:

@@ -198,7 +198,7 @@ Then create a new dashboard. No need to import this time:
3. Select Loki as the data source
4. Test some basic LogQL in the log browser in order to confirm all is working. Simply type `{`; it should offer full autocompletion. You should get plenty of access logs when using `{swarm_stack="traefik"}` (see the CLI sketch after the screenshot below)

-![](grafana-panel-editor.png)
+[![](grafana-panel-editor.png)](grafana-panel-editor.png)
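
The same LogQL can be exercised against Loki's HTTP API, which is handy for scripting; a sketch with the internal URL from above:

```sh
# Query the recent log lines of the traefik stack and count the result streams
curl -G -s http://data-01:3100/loki/api/v1/query_range \
  --data-urlencode 'query={swarm_stack="traefik"}' | jq '.data.result | length'
```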

After this initial testing, let's use the power of Grafana with variables:

@@ -207,13 +207,13 @@ After this initial testing, let's use the power of Grafana with variables:
3. Create a `stack` variable, select Prometheus as the *Data source*, and insert the following value in the *Query* field: `label_values(container_last_seen, container_label_com_docker_stack_namespace)`
4. It's a PromQL query which fetches all detected Docker stacks; click on *Update* to confirm the validity of the *Preview of values* that shows up (you can also verify these values from the CLI, as sketched below)

-![](grafana-variables.png)
+[![](grafana-variables.png)](grafana-variables.png)
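
You can preview the exact same values outside Grafana with the Prometheus label-values API:

```sh
# List all Docker stack namespaces Prometheus has seen
curl -s http://prometheus:9090/api/v1/label/container_label_com_docker_stack_namespace/values | jq
```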

1. Return to your panel editor. A new *stack* selector will appear at the top, allowing you to select which stack's logs to show!
2. Apply to save the panel and test the selector. The panel should react to the *stack* selector.
3. Save the dashboard.

-![](grafana-logs-dashboard.png)
+[![](grafana-logs-dashboard.png)](grafana-logs-dashboard.png)

## Tracing with Jaeger 🔍

@@ -382,11 +382,11 @@ Go back to the Traefik dashboard and ensure Jaeger is enabled in the *Features* section.

Now go back to the Jaeger UI. You should have a new `traefik` service available. Click on *Find Traces* in order to get a simple graph of all traces, aka requests with durations!

-![](jaeger-ui-traefik.png)
+[![](jaeger-ui-traefik.png)](jaeger-ui-traefik.png)

Detailed view of a request with the duration of each operation, aka Traefik middlewares, Docker container request processing time, etc.

-![](jaeger-ui-request.png)
+[![](jaeger-ui-request.png)](jaeger-ui-request.png)

## 5th check ✅

@@ -68,13 +68,13 @@ Create a new `gitea` PostgreSQL database as usual from pgAdmin or `psql` for pro

Don't forget to change all domain-related fields to the proper current domain URL, which is `gitea.sw.okami101.io` in my case. You should set proper SMTP settings for notifications.

-![](gitea-install.png)
+[![](gitea-install.png)](gitea-install.png)

For information, all these settings are saved in the `/mnt/storage-pool/gitea/gitea/conf/app.ini` file. You can change them at any time. You may want to disable registration by changing `DISABLE_REGISTRATION`.

Next, just create your first account. The 1st account will automatically be granted administrator rights.

-![](gitea-admin-dashboard.png)
+[![](gitea-admin-dashboard.png)](gitea-admin-dashboard.png)

You should now test creating some repos and make sure that git cloning works over both the HTTPS and SSH protocols. For SSH, be sure to add your own SSH public key in your profile.
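
A quick smoke test for both protocols (the repository path is hypothetical):

```sh
# HTTPS clone, goes through Traefik
git clone https://gitea.sw.okami101.io/myuser/my-repo.git
# SSH clone, requires your public key to be registered in your Gitea profile
git clone git@gitea.sw.okami101.io:myuser/my-repo.git
```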

@@ -155,7 +155,7 @@ docker push registry.sw.okami101.io/alpinegit

Go back to the above <https://registry.sw.okami101.io>. You should see 1 new image!

-![](docker-registry.png)
+[![](docker-registry.png)](docker-registry.png)

Delete the test image through the UI and from local Docker with `docker image rm registry.sw.okami101.io/alpinegit`.

@@ -195,7 +195,7 @@ registry-- pull image when deploy stack -->my-app

Let's follow [the official docs](https://docs.drone.io/server/provider/gitea/) for generating an OAuth2 application on Gitea, which is necessary for Drone integration. Set `https://drone.sw.okami101.io` as the redirect URI after successful authentication.

-![](gitea-drone-application.png)
+[![](gitea-drone-application.png)](gitea-drone-application.png)

Save and keep the client and secret tokens. Then create a new `drone` PostgreSQL database and create a new `drone` stack:

@@ -259,11 +259,11 @@ Don't forget to have proper docker labels on nodes, as explained [here]({{< ref "0

It's time to go to <https://drone.sw.okami101.io/> and generate your first Drone account through OAuth2 from Gitea. You should be properly redirected to Gitea, where you'll just have to authorize the Drone application.

-![](gitea-oauth2.png)
+[![](gitea-oauth2.png)](gitea-oauth2.png)

Finalize the registration, and you should finally arrive at the main Drone dashboard. If you have already created some repositories, they should appear in the list.

-![](drone-dashboard.png)
+[![](drone-dashboard.png)](drone-dashboard.png)

## SonarQube 📈