set lightbox everywhere

2022-02-25 15:06:30 +01:00
parent cc1242572b
commit a02c9a5279
7 changed files with 37 additions and 37 deletions

View File

@@ -26,7 +26,7 @@ Initiate the project by following these simple steps:
2. Navigate to security > API tokens
3. Generate a new API key with Read & Write permissions and copy the generated token
-![Hetzner API Token](hetzner-api-token.png)
+[![Hetzner API Token](hetzner-api-token.png)](hetzner-api-token.png)
Then go to the terminal and prepare the new context
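
As a hint, assuming the hcloud CLI is the tool in play here (the context name below is just an example), preparing the context might look like:

```bash
# create a dedicated hcloud context; paste the token generated above when prompted
hcloud context create swarm-context

# verify which context is active
hcloud context list
```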

View File

@@ -379,7 +379,7 @@ After a few seconds, Traefik should launch and generate a proper SSL certificate for
If properly configured, you will be prompted for access. After entering admin as the user and your own chosen password, you should finally reach the Traefik dashboard, similar to the one below!
-![Traefik Dashboard](traefik-dashboard.png)
+[![Traefik Dashboard](traefik-dashboard.png)](traefik-dashboard.png)
### Portainer ⛵
@@ -449,11 +449,11 @@ docker service ls
As soon as the main portainer service has successfully started, Traefik will detect it and configure it with SSL. The specific router for Portainer should appear in the Traefik dashboard's HTTP section, as below.
-![Traefik routers](traefik-routers.png)
+[![Traefik routers](traefik-routers.png)](traefik-routers.png)
It's time to create your admin account through <https://portainer.sw.okami101.io>. If all goes well, i.e. the Portainer agents are accessible from the Portainer portal, you should have access to your cluster home environment with 2 active stacks.
-![Portainer home](portainer-home.png)
+[![Portainer home](portainer-home.png)](portainer-home.png)
{{< alert >}}
If you go to the stacks menu, you will note that both `traefik` and `portainer` are under *Limited* control, because these stacks were created outside Portainer. We will create and deploy the next stacks directly from the Portainer GUI.
@@ -529,7 +529,7 @@ Use the below section of Portainer to set all personal environment variables. In
{{< /tab >}}
{{< /tabs >}}
-![Diun Stack](diun-stack.png)
+[![Diun Stack](diun-stack.png)](diun-stack.png)
Finally click on **Deploy the stack**; it's the equivalent of the previous `docker stack deploy`, nothing magic here. The difference is that Portainer stores the YML inside its own volume, allowing full control, contrary to the limited Traefik and Portainer cases.
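
For reference, a minimal sketch of that CLI equivalent, assuming the compose file and the stack are both named `diun`:

```bash
# roughly what Portainer does under the hood (file and stack names assumed)
docker stack deploy -c diun.yml diun
```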
@@ -537,11 +537,11 @@ Diun should now be deployed on the manager host and ready to scan images for any up
You can check the full service page, which allows manual scaling, on-the-fly volume mounting, and environment variable modification, and shows the currently running tasks (aka containers).
-![Diun Service](diun-service.png)
+[![Diun Service](diun-service.png)](diun-service.png)
You can check the service logs, which consist of an aggregate of all task logs.
-![Diun Logs](diun-logs.png)
+[![Diun Logs](diun-logs.png)](diun-logs.png)
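
A hedged CLI equivalent of this log aggregation, assuming the service ended up named `diun_diun`:

```bash
# stream the aggregated logs of all tasks of the service (service name assumed)
docker service logs --follow diun_diun
```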
## 2nd check ✅

View File

@@ -98,7 +98,7 @@ The important part is `/etc/hosts`, which allows proper DNS resolution for `d
Deploy it, and you should be able to access <https://phpmyadmin.sw.okami101.io> after a few seconds, with full admin access to your MySQL DB!
-![phpMyAdmin](phpmyadmin.png)
+[![phpMyAdmin](phpmyadmin.png)](phpmyadmin.png)
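
As an aside on the `/etc/hosts` point in this hunk, a swarm service can also receive an extra host entry at runtime; the IP and service name below are pure assumptions for illustration:

```bash
# add a data-01 host mapping to a running service (IP and service name are hypothetical)
docker service update --host-add data-01:10.0.0.3 phpmyadmin_phpmyadmin
```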
### PostgreSQL 14 🐘
@@ -204,7 +204,7 @@ Once logged in, you need to add the previously configured PostgreSQL server address
Save it, and you now have full access to your PostgreSQL DB!
-![pgAdmin](pgadmin.png)
+[![pgAdmin](pgadmin.png)](pgadmin.png)
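
An optional sanity check from a shell, assuming `psql` is installed and `data-01` resolves (the user is a placeholder):

```bash
# list databases on the remote PostgreSQL server; the password will be prompted
psql -h data-01 -U postgres -c '\l'
```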
## Further cluster app testing
@@ -288,7 +288,7 @@ I use a dynamic `ROOT_PATH` here. So you must add this variable with `/mnt/stora
After a few seconds, <https://redmine.sw.okami101.io> should be accessible and ready to use; use admin / admin for the admin login!
-![Redmine](redmine.png)
+[![Redmine](redmine.png)](redmine.png)
### N8N over PostgreSQL
@@ -331,7 +331,7 @@ networks:
And voilà, it's done; n8n will automatically migrate the database, and <https://n8n.sw.okami101.io> should soon be accessible. Note that we use the `admin-auth` middleware because n8n doesn't offer authentication. Use the same Traefik credentials.
-![n8n](n8n.png)
+[![n8n](n8n.png)](n8n.png)
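
For context, `admin-auth` here is a Traefik basic-auth middleware; one hedged way to generate such credentials, assuming `htpasswd` from apache2-utils is available:

```bash
# print a user:hash pair suitable for a Traefik basicauth middleware
htpasswd -nb admin yourpassword
```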
## Data backup 💾

View File

@@ -30,7 +30,7 @@ I'll not use a GlusterFS volume for storing Prometheus data, because:
First go to the `manager-01` node settings in Portainer inside *Swarm Cluster overview*, and apply a new label that indicates that this node is the host of Prometheus data.
-![Prometheus host overview](portainer-host-overview.png)
+[![Prometheus host overview](portainer-host-overview.png)](portainer-host-overview.png)
It's the equivalent of doing:
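
The command elided after this colon isn't shown in the diff; a hypothetical equivalent could be the following, where the label key and value are assumptions:

```bash
# apply a node label marking manager-01 as the Prometheus data host (label key assumed)
docker node update --label-add prometheus.data=true manager-01
```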
@@ -105,11 +105,11 @@ Deploy it and <https://prometheus.sw.okami101.io> should be available after a few
You should now have access to some metrics!
-![Prometheus graph](prometheus-graph.png)
+[![Prometheus graph](prometheus-graph.png)](prometheus-graph.png)
In *Status > Targets*, you should have 2 endpoints enabled, which correspond to the above scrape config.
-![Prometheus targets](prometheus-targets.png)
+[![Prometheus targets](prometheus-targets.png)](prometheus-targets.png)
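
A hedged way to double-check the same targets from a terminal, assuming Prometheus is reachable on its internal address:

```bash
# query the Prometheus HTTP API for scrape target health
curl -s http://prometheus:9090/api/v1/targets | grep -o '"health":"[a-z]*"'
```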
### Get cluster metrics
@@ -135,7 +135,7 @@ set -- /bin/node_exporter "$@"
exec "$@"
```
-![Portainer configs](portainer-configs.png)
+[![Portainer configs](portainer-configs.png)](portainer-configs.png)
It takes the node hostname and creates a data metric exploitable by Prometheus.
@@ -212,11 +212,11 @@ You need to restart the Prometheus service in order to apply the above config.
Go back to the Prometheus targets UI in order to confirm the appearance of 2 new targets.
-![Prometheus targets all](prometheus-targets-all.png)
+[![Prometheus targets all](prometheus-targets-all.png)](prometheus-targets-all.png)
Confirm you can fetch the `node_meta` metric with proper hostnames:
-![Prometheus targets all](prometheus-node-meta.png)
+[![Prometheus targets all](prometheus-node-meta.png)](prometheus-node-meta.png)
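
For a terminal-based verification under the same assumption of internal reachability:

```bash
# fetch the node_meta series and their labels via the Prometheus query API
curl -s 'http://prometheus:9090/api/v1/query' --data-urlencode 'query=node_meta'
```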
## Visualization with Grafana 📈
@@ -284,7 +284,7 @@ networks:
Set a proper `GF_DATABASE_PASSWORD` and deploy. Database migration should be automatic (don't hesitate to check inside pgAdmin). Go to <https://grafana.sw.okami101.io> and log in as admin / admin.
-![Grafana home](grafana-home.png)
+[![Grafana home](grafana-home.png)](grafana-home.png)
### Docker Swarm dashboard
@@ -292,11 +292,11 @@ For the best show-case scenario of Grafana, let's import an [existing dashboard](htt
First we need to add Prometheus as the main metrics data source. Go to the *Configuration > Data source* menu and click on *Add data source*. Select Prometheus and set the internal Docker Prometheus URL, which should be `http://prometheus:9090`.
-![Grafana prometheus datasource](grafana-prometheus-datasource.png)
+[![Grafana prometheus datasource](grafana-prometheus-datasource.png)](grafana-prometheus-datasource.png)
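
Alternatively, the same data source can be provisioned through Grafana's HTTP API; a minimal sketch assuming default admin credentials and the internal Grafana address:

```bash
# create the Prometheus data source via the Grafana API (credentials and URL assumed)
curl -s -X POST -u admin:admin -H 'Content-Type: application/json' \
  http://grafana:3000/api/datasources \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy"}'
```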
Then go to *Create > Import*, load `11939` as the dashboard ID, select the Prometheus source, and whoa!
-![Grafana home](grafana-docker-swarm-dashboard.png)
+[![Grafana home](grafana-docker-swarm-dashboard.png)](grafana-docker-swarm-dashboard.png)
The *Available Disk Space* metrics card should indicate N/A because it's not properly configured for Hetzner disks. Just edit the card and change the PromQL inside the *Metrics browser* field by replacing `device="rootfs", mountpoint="/"` with `device="/dev/sda1", mountpoint="/host"`.
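
For illustration, the adjusted filter can be tested directly against Prometheus; the metric name below is an assumption about what the card queries:

```bash
# hypothetical query using the replaced device and mountpoint labels
curl -s 'http://prometheus:9090/api/v1/query' \
  --data-urlencode 'query=node_filesystem_free_bytes{device="/dev/sda1", mountpoint="/host"}'
```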
@@ -403,7 +403,7 @@ Expand the Prometheus config with 3 new jobs:
Then restart the Prometheus service and go back to targets to check that you have all the new `data-01` endpoints.
-![Prometheus targets data](prometheus-targets-data.png)
+[![Prometheus targets data](prometheus-targets-data.png)](prometheus-targets-data.png)
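
A hedged way to do that restart from a shell, assuming the stack and service are both named `prometheus`:

```bash
# force a rolling restart of the service so it reloads its config (service name assumed)
docker service update --force prometheus_prometheus
```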
### Grafana dashboards for data
@@ -419,15 +419,15 @@ Nothing more to do!
#### Node Dashboard
-![Prometheus targets data](grafana-node-exporter.png)
+[![Prometheus targets data](grafana-node-exporter.png)](grafana-node-exporter.png)
#### MySQL Dashboard
-![Prometheus targets data](grafana-mysql-exporter.png)
+[![Prometheus targets data](grafana-mysql-exporter.png)](grafana-mysql-exporter.png)
#### PostgreSQL Dashboard
-![Prometheus targets data](grafana-postgres-exporter.png)
+[![Prometheus targets data](grafana-postgres-exporter.png)](grafana-postgres-exporter.png)
## 4th check ✅

View File

@@ -189,7 +189,7 @@ And voilà, Loki is the default log driver for all containers. Note that you can s
Now it's time to set up our central logs dashboard. First add *Loki* as a new data source inside Grafana, similarly to Prometheus previously. Set `http://data-01:3100` inside the URL field and save it.
-![Grafana loki datasource](grafana-loki-datasource.png)
+[![Grafana loki datasource](grafana-loki-datasource.png)](grafana-loki-datasource.png)
Then create a new Dashboard. No need to import this time:
@@ -198,7 +198,7 @@ Then create a new Dashboard. No need to import this time:
3. Select Loki as the *Data source*
4. Test some basic LogQL in the Log browser in order to confirm all is working. Simply type `{`; it should offer full autocomplete. You should get plenty of access logs when using `{swarm_stack="traefik"}` (a terminal equivalent is sketched below)
-![Grafana loki datasource](grafana-panel-editor.png)
+[![Grafana loki datasource](grafana-panel-editor.png)](grafana-panel-editor.png)
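
The terminal equivalent mentioned above, assuming Grafana's `logcli` tool is installed locally:

```bash
# run the same LogQL query against Loki from the command line
logcli --addr=http://data-01:3100 query '{swarm_stack="traefik"}'
```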
After this initial testing, let's use the power of Grafana with variables:
@@ -207,13 +207,13 @@ After this initial testing, let's use the power of Grafana with variables:
3. Create a `stack` variable, select Prometheus as the *Data source*, and insert the following value inside the *Query* field: `label_values(container_last_seen, container_label_com_docker_stack_namespace)`
4. It's a PromQL query which fetches all detected Docker stacks; click on *Update* to confirm the validity of the *Preview of values* that will show up (see the API sketch after the screenshot below)
-![Grafana loki datasource](grafana-variables.png)
+[![Grafana loki datasource](grafana-variables.png)](grafana-variables.png)
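
For the curious, the values behind that templating query can also be fetched straight from Prometheus's label-values API; a hedged one-liner:

```bash
# hypothetical equivalent of label_values() for the stack namespace label
curl -s http://prometheus:9090/api/v1/label/container_label_com_docker_stack_namespace/values
```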
1. Return to your panel editor. A new *stack* selector will appear at the top, allowing you to select which stack's logs to show!
2. Apply in order to save the panel and test the selector. The panel should react to the *stack* selector.
3. Save the dashboard.
-![Grafana loki datasource](grafana-logs-dashboard.png)
+[![Grafana loki datasource](grafana-logs-dashboard.png)](grafana-logs-dashboard.png)
## Tracing with Jaeger 🔍
@@ -382,11 +382,11 @@ Go back to the Traefik dashboard and ensure Jaeger is enabled in the *Features* section.
Now go back to the Jaeger UI. You should have a new `traefik` service available. Click on *Find Traces* in order to get a simple graph of all traces, aka requests with durations!
-![Jaeger UI Traefik](jaeger-ui-traefik.png)
+[![Jaeger UI Traefik](jaeger-ui-traefik.png)](jaeger-ui-traefik.png)
Detail view of a request, with the duration of each operation, aka Traefik middlewares, Docker container request processing duration, etc.
-![Jaeger UI Request](jaeger-ui-request.png)
+[![Jaeger UI Request](jaeger-ui-request.png)](jaeger-ui-request.png)
## 5th check ✅

View File

@@ -68,13 +68,13 @@ Create a new `gitea` PostgreSQL database as usual from pgAdmin or `psql` for pro
Don't forget to change all domain-related fields to the proper current domain URL, which is `gitea.sw.okami101.io` in my case. You should set proper SMTP settings for notifications.
-![Gitea admin dashboard](gitea-install.png)
+[![Gitea admin dashboard](gitea-install.png)](gitea-install.png)
For information, all these settings are saved in the `/mnt/storage-pool/gitea/gitea/conf/app.ini` file. You can change them at any time. You may want to disable registration by changing `DISABLE_REGISTRATION`.
Next, just create your first account. The 1st account will automatically be granted administrator rights.
-![Gitea admin dashboard](gitea-admin-dashboard.png)
+[![Gitea admin dashboard](gitea-admin-dashboard.png)](gitea-admin-dashboard.png)
You should now test creating some repos and make sure that git cloning works over both the HTTPS and SSH protocols. For SSH, be sure to add your own SSH public key in your profile.
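
A hedged pair of clone checks, where the user and repository names are placeholders:

```bash
# HTTPS clone (repo path is hypothetical)
git clone https://gitea.sw.okami101.io/youruser/test.git

# SSH clone, relying on the public key added to your profile
git clone git@gitea.sw.okami101.io:youruser/test.git
```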
@@ -155,7 +155,7 @@ docker push registry.sw.okami101.io/alpinegit
Go back to the above <https://registry.sw.okami101.io>. You should see 1 new image!
-![Docker registry](docker-registry.png)
+[![Docker registry](docker-registry.png)](docker-registry.png)
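
The same check can be scripted against the Docker Registry HTTP API; a sketch assuming basic-auth credentials:

```bash
# list repositories known to the registry (credentials are placeholders)
curl -s -u admin:yourpassword https://registry.sw.okami101.io/v2/_catalog
```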
Delete the test image through the UI and from local Docker with `docker image rm registry.sw.okami101.io/alpinegit`.
@@ -195,7 +195,7 @@ registry-- pull image when deploy stack -->my-app
Let's follow [the official docs](https://docs.drone.io/server/provider/gitea/) for generating an OAuth2 application on Gitea, which is necessary for Drone integration. Set `https://drone.sw.okami101.io` as the redirect URI after successful authentication.
-![Gitea drone application](gitea-drone-application.png)
+[![Gitea drone application](gitea-drone-application.png)](gitea-drone-application.png)
Save and keep the client and secret tokens. Then create a new `drone` PostgreSQL database and create a new `drone` stack:
@@ -259,11 +259,11 @@ Don't forget to have proper docker labels on nodes, as explained [here]({{< ref "0
It's time to go to <https://drone.sw.okami101.io/> and generate your first Drone account through OAuth2 from Gitea. You should be properly redirected to Gitea, where you'll just have to authorize the Drone application.
-![Gitea oauth2](gitea-oauth2.png)
+[![Gitea oauth2](gitea-oauth2.png)](gitea-oauth2.png)
Finalize the registration, and you should finally arrive at the main Drone dashboard. If you have already created some repositories, they should appear in the list.
-![Drone dashboard](drone-dashboard.png)
+[![Drone dashboard](drone-dashboard.png)](drone-dashboard.png)
## SonarQube 📈