remove fictive links

2023-07-15 19:57:16 +02:00
parent c841c3f931
commit 6880ea1acc
6 changed files with 18 additions and 18 deletions

@@ -326,7 +326,7 @@ docker service logs traefik_traefik
{{< /highlight >}}
-After a few seconds, Traefik should launch and generate a proper SSL certificate for its own domain. You can finally go to <https://traefik.sw.dockerswarm.rocks>. `http://` should work as well thanks to the permanent redirection.
+After a few seconds, Traefik should launch and generate a proper SSL certificate for its own domain. You can finally go to `https://traefik.sw.dockerswarm.rocks`. `http://` should work as well thanks to the permanent redirection.
If properly configured, you will be prompted for credentials. After entering `admin` as the user and your own chosen password, you should finally access the Traefik dashboard!
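If you prefer the CLI, here is a quick way to verify both behaviors (a sketch; replace `yourpassword` with the password you configured, `-I` fetches headers only):

{{< highlight sh >}}
# The plain HTTP entry point should answer with a permanent redirect
curl -I http://traefik.sw.dockerswarm.rocks

# The dashboard itself sits behind basic auth
curl -I -u admin:yourpassword https://traefik.sw.dockerswarm.rocks
{{< /highlight >}}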
@@ -421,7 +421,7 @@ Go to the router detail for checking currently applied middlewares :
[![Traefik portainer](traefik-portainer.png)](traefik-portainer.png)
-It's time to create your admin account through <https://portainer.sw.dockerswarm.rocks>. If all goes well, a primary environment should appear, and you should have access to your cluster home environment with 2 active stacks.
+It's time to create your admin account through `https://portainer.sw.dockerswarm.rocks`. If all goes well, a primary environment should appear, and you should have access to your cluster home environment with 2 active stacks.
[![Portainer home](portainer-home.png)](portainer-home.png)
@@ -609,7 +609,7 @@ Note as we use `node.labels.environment == production` in order to force the con
The particularity of Minio is that it exposes 2 web endpoints: one as the web UI admin manager, and the other as the S3 API endpoint. So we need 2 Traefik routes in this case. Create an environment variable for `MINIO_ROOT_PASSWORD` and set your own admin password.
-When deployed, wait a few seconds for the SSL auto-generation (you can check it in the Traefik dashboard) and go to <https://minio.sw.dockerswarm.rocks> in order to access the web administration by entering the above credentials.
+When deployed, wait a few seconds for the SSL auto-generation (you can check it in the Traefik dashboard) and go to `https://minio.sw.dockerswarm.rocks` in order to access the web administration by entering the above credentials.
And yup, it's done, create your 1st bucket through the admin UI and you are ready to test the S3 API locally with <https://s3.dockerswarm.rocks/mybucket>.
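As a quick sanity check of the S3 endpoint, here is a sketch using the official `mc` client (the `swarm` alias and `mybucket` are illustrative; credentials are the Minio root user and the password set above):

{{< highlight sh >}}
# Register the S3 endpoint under a local alias
mc alias set swarm https://s3.dockerswarm.rocks minio yourpassword

# Push a test file and list the bucket
echo "it works" > test.txt
mc cp test.txt swarm/mybucket/
mc ls swarm/mybucket
{{< /highlight >}}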

@@ -108,7 +108,7 @@ networks:
The important part is `/etc/hosts`, which allows proper DNS resolution for `data-01`, configured in the `PMA_HOST` environment variable. This avoids dragging the real IP of the data server everywhere...
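You can verify the mapping from inside the running container (a sketch; the `name=phpmyadmin` filter is illustrative and depends on your stack name):

{{< highlight sh >}}
# data-01 should appear in the container's /etc/hosts
docker exec $(docker ps -q -f name=phpmyadmin) cat /etc/hosts
{{< /highlight >}}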
-Deploy it, and after a few seconds you should have access to <https://phpmyadmin.sw.dockerswarm.rocks>, with full admin access to your MySQL DB!
+Deploy it, and after a few seconds you should have access to `https://phpmyadmin.sw.dockerswarm.rocks`, with full admin access to your MySQL DB!
[![phpMyAdmin](phpmyadmin.png)](phpmyadmin.png)
@@ -236,7 +236,7 @@ networks:
You'll need both the `PGADMIN_DEFAULT_EMAIL` and `PGADMIN_DEFAULT_PASSWORD` environment variables for proper initialization.
-Deploy it, and after a few seconds you should have access to <https://pgadmin.sw.dockerswarm.rocks> with the default logins just above.
+Deploy it, and after a few seconds you should have access to `https://pgadmin.sw.dockerswarm.rocks` with the default logins just above.
Once logged in, you need to add the previously configured PostgreSQL server address via *Add new server*. Just fill in the relevant host information in the *Connection* tab. The host must stay `data-01`, with `swarm` as superuser access.
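Before adding the server, you can verify connectivity with the same credentials from any host that reaches `data-01` (a sketch):

{{< highlight sh >}}
# List databases on data-01 with the swarm superuser
psql -h data-01 -U swarm -l
{{< /highlight >}}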
@@ -319,7 +319,7 @@ local_infile = 1
Don't forget to restart with `sudo service mysql restart`.
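For reference, the whole sequence on the data server looks like this (a sketch; the config file path is the Debian/Ubuntu default and may differ on your distribution, and the option must land under the `[mysqld]` section):

{{< highlight sh >}}
# Enable local_infile for Matomo, then restart MySQL
echo "local_infile = 1" | sudo tee -a /etc/mysql/mysql.conf.d/mysqld.cnf
sudo service mysql restart
{{< /highlight >}}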
-Then go to <https://matomo.sw.dockerswarm.rocks> and go through the installation. At the DB install step, use the above credentials and the hostname of your data server, which is `data-01` in our case. At the end of the installation, the Matomo config files will be stored in the `config` folder for a persisted installation.
+Then go to `https://matomo.sw.dockerswarm.rocks` and go through the installation. At the DB install step, use the above credentials and the hostname of your data server, which is `data-01` in our case. At the end of the installation, the Matomo config files will be stored in the `config` folder for a persisted installation.
[![Matomo](matomo.png)](matomo.png)
@@ -447,7 +447,7 @@ Configure `REDMINE_DB_*` with proper above created DB credential and set the ran
As above for `matomo`, use the `/mnt/storage-pool/redmine` value for `ROOT` as an *Environment variable*.
{{< /alert >}}
-After a few seconds, <https://redmine.sw.dockerswarm.rocks> should be accessible and ready to use; use admin / admin for the admin connection!
+After a few seconds, `https://redmine.sw.dockerswarm.rocks` should be accessible and ready to use; use admin / admin for the admin connection!
[![Redmine](redmine.png)](redmine.png)
@@ -499,7 +499,7 @@ networks:
{{< /highlight >}}
-And voilà, it's done, n8n will automatically migrate the database and <https://n8n.sw.dockerswarm.rocks> should soon be accessible. Note that we use the `admin-auth` middleware because n8n doesn't offer authentication. Use the same Traefik credentials.
+And voilà, it's done, n8n will automatically migrate the database and `https://n8n.sw.dockerswarm.rocks` should soon be accessible. Note that we use the `admin-auth` middleware because n8n doesn't offer authentication. Use the same Traefik credentials.
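If you ever need to attach the middleware after the fact, it can also be done from the CLI, since Traefik in swarm mode reads service labels (a sketch; the `n8n_n8n` service name and the `n8n` router name are illustrative):

{{< highlight sh >}}
# Attach the existing admin-auth basic auth middleware to the n8n router
docker service update --label-add \
  "traefik.http.routers.n8n.middlewares=admin-auth" n8n_n8n
{{< /highlight >}}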
[![n8n](n8n.png)](n8n.png)

@@ -110,7 +110,7 @@ The `private` network will serve us later for exporters. Next config are useful
| `storage.tsdb.retention.size` | The max DB size |
| `storage.tsdb.retention.time` | The max data retention date |
-Deploy it, and <https://prometheus.sw.dockerswarm.rocks> should be available after a few seconds. Use the same Traefik credentials for login.
+Deploy it, and `https://prometheus.sw.dockerswarm.rocks` should be available after a few seconds. Use the same Traefik credentials for login.
You should now have access to some metrics!
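You can also hit the HTTP API directly (a sketch; `up` is the simplest possible query, and the credentials are the Traefik ones):

{{< highlight sh >}}
# Every scraped target should report up == 1
curl -s -u admin:yourpassword \
  "https://prometheus.sw.dockerswarm.rocks/api/v1/query?query=up"
{{< /highlight >}}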
@@ -339,7 +339,7 @@ networks:
{{< /highlight >}}
-Set a proper `GF_DATABASE_PASSWORD` and deploy. The database migration should be automatic (don't hesitate to check inside pgAdmin). Go to <https://grafana.sw.dockerswarm.rocks> and log in as admin / admin.
+Set a proper `GF_DATABASE_PASSWORD` and deploy. The database migration should be automatic (don't hesitate to check inside pgAdmin). Go to `https://grafana.sw.dockerswarm.rocks` and log in as admin / admin.
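If you prefer the CLI to pgAdmin for checking the migration, a quick sketch (assuming the Grafana database is named `grafana`):

{{< highlight sh >}}
# List the tables Grafana created at first startup
psql -h data-01 -U swarm -d grafana -c "\dt"
{{< /highlight >}}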
[![Grafana home](grafana-home.png)](grafana-home.png)

@@ -392,7 +392,7 @@ networks:
| `agent` | A simple REST endpoint for receiving traces, which are then forwarded to the collector. An agent should be local to a machine host, similar to the Portainer agent. |
| `query` | A simple UI that connects to the span storage and allows simple visualization. |
-After a few seconds, go to <https://jaeger.sw.dockerswarm.rocks> and enter the Traefik credentials. You will land on the Jaeger Query UI with empty data.
+After a few seconds, go to `https://jaeger.sw.dockerswarm.rocks` and enter the Traefik credentials. You will land on the Jaeger Query UI with empty data.
It's time to inject some trace data. Be sure all the above Jaeger services are started through Portainer before continuing.
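A quick way to check them from the manager (a sketch; assuming the stack is named `jaeger`):

{{< highlight sh >}}
# All Jaeger services should show full replicas
docker service ls --filter name=jaeger
{{< /highlight >}}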

@@ -63,7 +63,7 @@ networks:
We added a specific TCP router in order to allow SSH cloning. The SSH Traefik entry point will redirect to the first available service with a TCP router.
{{< /alert >}}
-Now go to <https://gitea.sw.dockerswarm.rocks> and go through the installation procedure. Change the default SQLite provider to a database more suited for production.
+Now go to `https://gitea.sw.dockerswarm.rocks` and go through the installation procedure. Change the default SQLite provider to a database more suited for production.
Create a new `gitea` PostgreSQL database as usual from pgAdmin (or `psql` for the pro-CLI user), and give the according DB access info to the Gitea installer. The host should be `data-01`.
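For the pro-CLI user, the `psql` variant boils down to this (a sketch, run from any host that reaches `data-01`):

{{< highlight sh >}}
# Create the gitea database with the swarm superuser
psql -h data-01 -U swarm -c "CREATE DATABASE gitea;"
{{< /highlight >}}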
@@ -141,7 +141,7 @@ Note as both service must be exposed to Traefik. In order to keep the same subdo
This gives us an additional condition for redirecting to the correct service. It's OK in our case because the official Docker registry uses only `/v2` as its endpoint.
{{< /alert >}}
-Go to <https://registry.sw.dockerswarm.rocks> and use the Traefik credentials. We have no images yet, so let's create one.
+Go to `https://registry.sw.dockerswarm.rocks` and use the Traefik credentials. We have no images yet, so let's create one.
### Test our private registry
@@ -172,7 +172,7 @@ docker push registry.sw.dockerswarm.rocks/alpinegit
{{< /highlight >}}
-Go back to the above <https://registry.sw.dockerswarm.rocks>. You should see 1 new image!
+Go back to the above `https://registry.sw.dockerswarm.rocks`. You should see 1 new image!
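The registry HTTP API tells the same story (a sketch; the credentials are the Traefik ones):

{{< highlight sh >}}
# List all repositories stored in the registry
curl -s -u admin:yourpassword https://registry.sw.dockerswarm.rocks/v2/_catalog
{{< /highlight >}}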
[![Docker registry](docker-registry.png)](docker-registry.png)
@@ -291,7 +291,7 @@ Don't forget to have proper docker labels on nodes, as explain [here]({{< ref "0
| `DRONE_RPC_SECRET` | Necessary for properly secured authentication between Drone and runners. Use `openssl rand -hex 16` to generate a valid token. |
| `DRONE_USER_CREATE` | The initial user to create at launch. Put your Gitea username here to automatically set your Gitea user as the Drone administrator. |
-It's time to go to <https://drone.sw.dockerswarm.rocks/> and generate your first Drone account through OAuth2 from Gitea. You should be properly redirected to Gitea, where you'll just have to authorize the Drone application.
+It's time to go to `https://drone.sw.dockerswarm.rocks` and generate your first Drone account through OAuth2 from Gitea. You should be properly redirected to Gitea, where you'll just have to authorize the Drone application.
[![Gitea oauth2](gitea-oauth2.png)](gitea-oauth2.png)
@@ -431,7 +431,7 @@ Commit both above files and push to remote repo. Drone should be automatically t
[![Drone build](drone-build.png)](drone-build.png)
-If all goes well, the final image should be pushed to our Docker registry. You can verify it by navigating to <https://registry.sw.dockerswarm.rocks>.
+If all goes well, the final image should be pushed to our Docker registry. You can verify it by navigating to `https://registry.sw.dockerswarm.rocks`.
### Deployment (the CD part) 🚀
@@ -470,7 +470,7 @@ I use `Development` in order to have the swagger UI.
Be sure to have registered the private registry in Portainer before deploying, as [explained here](#register-registry-in-portainer).
{{< /alert >}}
-Finally, deploy and see the result at <https://weather.sw.dockerswarm.rocks/swagger>. You should have access to the Swagger UI, and the API endpoints should respond correctly.
+Finally, deploy and see the result at `https://weather.sw.dockerswarm.rocks/swagger`. You should have access to the Swagger UI, and the API endpoints should respond correctly.
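A direct call from the CLI works too (a sketch; `/WeatherForecast` is the endpoint of the default ASP.NET Core template, adapt it if your controller differs):

{{< highlight sh >}}
# The API should answer with a JSON forecast sample
curl -s https://weather.sw.dockerswarm.rocks/WeatherForecast
{{< /highlight >}}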
#### Continuous deployment

@@ -147,7 +147,7 @@ networks:
Set a proper `ROOT_PATH` with `/mnt/storage-pool/sonar` and `SONAR_JDBC_PASSWORD` with the above DB password.
-Go to <https://sonar.sw.dockerswarm.rocks>, use the admin / admin credentials and update your password.
+Go to `https://sonar.sw.dockerswarm.rocks`, use the admin / admin credentials and update your password.
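Once the password is updated, a quick health check from the CLI (a sketch; use your new admin password):

{{< highlight sh >}}
# GREEN means SonarQube is fully operational
curl -s -u admin:yournewpassword https://sonar.sw.dockerswarm.rocks/api/system/health
{{< /highlight >}}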
### Project analysis