A 2024 benchmark of main Web API frameworks

2023-12-26
{{< lead >}} We'll be comparing the read performance of 6 Web API frameworks, sharing the same OpenAPI contract from the RealWorld app, a Medium-like clone, implemented in multiple languages (PHP, Python, JavaScript, Java and C#). {{< /lead >}}
This is not a basic synthetic benchmark, but a real-world benchmark with database-backed tests and multiple scenarios. This post may be updated when new framework versions are released, or following any performance-related suggestions in the comment section below.
A state-of-the-art, real-world benchmark comparison of Web APIs is difficult to achieve and very time-consuming, as it requires mastering each framework. Performance can also depend heavily on:
- Code implementation, all written by myself
- Fine-tuning of each runtime, so I mostly keep the default configuration
Now that's said, let's fight!
The contenders
We'll be using the latest stable version of each framework, together with the latest stable version of its runtime.
Framework & Source code | Runtime | ORM | Tested Database |
---|---|---|---|
Laravel 10 (api) | PHP 8.3 | Eloquent | MySQL & PostgreSQL |
Symfony 7 (api) | PHP 8.3 | Doctrine | MySQL & PostgreSQL |
FastAPI (api) | Python 3.12 | SQLAlchemy 2.0 | PostgreSQL |
NestJS 10 (api) | Node 20 | Prisma 5 | PostgreSQL |
Spring Boot 3.2 (api) | Java 21 | Hibernate 6 | PostgreSQL |
ASP.NET Core 8 (api) | .NET 8.0 | EF Core 8 | PostgreSQL |
Each project:
- Uses the same OpenAPI contract
- Is fully tested and functional against the same Postman collection
- Is highly tooled with code quality in mind (static analyzers, formatters, linters, good code coverage, etc.)
- Shares roughly the same amount of DB data: 50 users, 500 articles and 5000 comments, generated by a faker-like library for each language (see the seeding sketch below)
- Avoids N+1 queries with eager loading (normally)
- Is containerized with Docker, and deployed on a monitored Docker Swarm cluster
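The seeding code is specific to each stack, but to give an idea, here is a minimal sketch of what the Node/Prisma seeder could look like with @faker-js/faker. The model and field names are assumptions for illustration, not the actual schema of the repositories.

```js
// Hypothetical seeding sketch for the NestJS/Prisma variant.
// Assumes a Prisma schema with `user`, `article` and `comment` models;
// field names are illustrative, not the exact RealWorld schema.
import { PrismaClient } from '@prisma/client';
import { faker } from '@faker-js/faker';

const prisma = new PrismaClient();

async function seed() {
  // 50 users
  const users = [];
  for (let i = 0; i < 50; i++) {
    users.push(
      await prisma.user.create({
        data: {
          username: faker.person.fullName(),
          email: faker.internet.email(),
          bio: faker.lorem.sentence(),
        },
      }),
    );
  }

  // 500 articles, each linked to a random author
  const articles = [];
  for (let i = 0; i < 500; i++) {
    const author = users[Math.floor(Math.random() * users.length)];
    const title = faker.lorem.sentence();
    articles.push(
      await prisma.article.create({
        data: {
          title,
          slug: faker.helpers.slugify(title).toLowerCase(),
          description: faker.lorem.sentence(),
          body: faker.lorem.paragraphs(),
          authorId: author.id,
        },
      }),
    );
  }

  // 5000 comments spread over random articles and authors
  for (let i = 0; i < 5000; i++) {
    const article = articles[Math.floor(Math.random() * articles.length)];
    const author = users[Math.floor(Math.random() * users.length)];
    await prisma.comment.create({
      data: {
        body: faker.lorem.sentence(),
        articleId: article.id,
        authorId: author.id,
      },
    });
  }
}

seed().finally(() => prisma.$disconnect());
```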
Side note on PHP configuration
Note that I tested all frameworks against PostgreSQL as the main database, but I also added MySQL for Laravel and Symfony, out of curiosity, and because PHP makes it trivial to switch databases without touching the code base, as both DB drivers are integrated into the base PHP Docker image. It also gives us an interesting Eloquent vs Doctrine ORM comparison for each database.
{{< alert >}}
I enabled OPcache and used the simple Apache PHP Docker image, as it's the simplest configuration for PHP app containers. The memory_limit is set to 1G. I tested FrankenPHP, which seems promising at first glance, but the performance results were just far lower than with Apache, even in worker mode (tried with Symfony Runtime and Laravel Octane)...
{{< /alert >}}
The target hardware
We'll be running all Web API projects on a Docker Swarm cluster, where each node has 2 dedicated CPUs for stable performance and 8 GB of RAM.
Traefik will be used as a reverse proxy, load balancing the requests across the app replicas running on the worker nodes.
{{< mermaid >}}
flowchart TD
client((k6))
client -- Port 80 443 --> traefik-01
subgraph manager-01
traefik-01{Traefik SSL}
end
subgraph worker-01
app-01([Conduit replica 1])
traefik-01 --> app-01
end
subgraph worker-02
app-02([Conduit replica 2])
traefik-01 --> app-02
end
subgraph storage-01
DB[(MySQL or PostgreSQL)]
app-01 --> DB
app-02 --> DB
end
{{< /mermaid >}}
The Swarm cluster is fully monitored with Prometheus and Grafana, allowing us to collect relevant performance metrics.
The scenarios
We'll be using k6 to run the tests, with the constant-arrival-rate executor for progressive load testing, following 2 different scenarios:
- Scenario 1: fetch all articles, following the pagination
- Scenario 2: fetch all articles, calling each single article by its slug, fetching the associated comments of each article, and fetching the profile of each related author
Each scenario runs for 1 minute, with a 30-second graceful stop to let the last started iterations finish. Results with a single test failure, i.e. any response status other than 200 or any JSON parsing error, are rejected.
The iteration creation rate (rate / timeUnit) will be chosen in order to obtain the highest possible request rate without any test failures, as sketched below.
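As an illustration of how these rules can map to k6 configuration, here is a minimal sketch of the options block, not the exact scripts used below; gracefulStop and the http_req_failed / checks thresholds are standard k6 settings, and thresholds are just one possible way to encode the zero-failure rule.

```js
// Minimal sketch of the k6 options used for each run (illustrative values).
// "1/2/s" in the results below means rate: 1 with timeUnit: '2s',
// i.e. one new iteration every 2 seconds.
export const options = {
  scenarios: {
    articles: {
      executor: 'constant-arrival-rate',
      rate: 10,            // tuned per framework to the highest failure-free value
      timeUnit: '1s',
      duration: '1m',
      gracefulStop: '30s', // let already-started iterations finish
      preAllocatedVUs: 50,
    },
  },
  thresholds: {
    http_req_failed: ['rate==0'], // a single failed request invalidates the run
    checks: ['rate==1'],          // every "status is 200" check must pass
  },
};
```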
Scenario 1
The interest of this scenario is to be very database intensive: it fetches all articles, authors and favorites, following the pagination, with a couple of SQL queries. Note that each implementation normally uses eager loading to avoid N+1 queries, which can have a high impact on this test.
import http from "k6/http";
import { check } from "k6";
export const options = {
scenarios: {
articles: {
env: { CONDUIT_URL: '<framework_url>' },
duration: '1m',
executor: 'constant-arrival-rate',
rate: '<rate>',
timeUnit: '1s',
preAllocatedVUs: 50,
},
},
};
export default function () {
const apiUrl = `https://${__ENV.CONDUIT_URL}/api`;
const limit = 10;
let offset = 0;
let articles = []
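// Page through the articles endpoint until a page comes back with fewer items than the limit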
do {
const articlesResponse = http.get(`${apiUrl}/articles?limit=${limit}&offset=${offset}`);
check(articlesResponse, {
"status is 200": (r) => r.status == 200,
});
articles = articlesResponse.json().articles;
offset += limit;
}
while (articles && articles.length >= limit);
}
Here is the expected JSON response format:
{
"articles": [
{
"title": "Laboriosam aliquid dolore sed dolore",
"slug": "laboriosam-aliquid-dolore-sed-dolore",
"description": "Rerum beatae est enim cum similique.",
"body": "Voluptas maxime incidunt...",
"createdAt": "2023-12-23T16:02:03.000000Z",
"updatedAt": "2023-12-23T16:02:03.000000Z",
"author": {
"username": "Devin Swift III",
"bio": "Nihil impedit totam....",
"image": "https:\/\/randomuser.me\/api\/portraits\/men\/47.jpg",
"following": false
},
"tagList": [
"aut",
"cumque"
],
"favorited": false,
"favoritesCount": 5
}
],
//...
"articlesCount": 500
}
The expected SQL queries (in pseudocode) to build this response:
SELECT * FROM articles LIMIT 10 OFFSET 0;
SELECT count(*) FROM articles;
SELECT * FROM users WHERE id IN (<articles.author_id...>);
SELECT * FROM article_tag WHERE article_id IN (<articles.id...>);
SELECT * FROM favorites WHERE article_id IN (<articles.id...>);
{{< alert >}} The actual queries can differ significantly from one ORM to another, as some prefer to reduce the number of queries by using subselects, but it's a good approximation. {{< /alert >}}
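To illustrate what such eager loading looks like in one of the stacks, here is a hedged sketch using Prisma (the ORM of the NestJS variant). The model and relation names are assumptions, but the include + skip/take pattern is standard Prisma and typically translates into the batched WHERE ... IN (...) queries shown above.

```js
// Hypothetical Prisma query for the paginated articles endpoint.
// Relations loaded with `include` are fetched eagerly, avoiding N+1 queries:
// Prisma batches them into `WHERE ... IN (...)` lookups similar to the
// pseudocode above. Model and relation names are illustrative.
const limit = 10;
const offset = 0;

const [articles, articlesCount] = await prisma.$transaction([
  prisma.article.findMany({
    skip: offset,
    take: limit,
    include: {
      author: true,      // SELECT * FROM users WHERE id IN (...)
      tags: true,        // SELECT * FROM article_tag WHERE article_id IN (...)
      favoritedBy: true, // SELECT * FROM favorites WHERE article_id IN (...)
    },
    orderBy: { createdAt: 'desc' },
  }),
  prisma.article.count(), // SELECT count(*) FROM articles
]);
```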
Scenario 2
The interest of this scenario is to be mainly runtime intensive, as it calls every endpoint of the API.
import http from "k6/http";
import { check } from "k6";
export const options = {
scenarios: {
articles: {
env: { CONDUIT_URL: '<framework_url>' },
duration: '1m',
executor: 'constant-arrival-rate',
rate: '<rate>',
timeUnit: '1s',
preAllocatedVUs: 50,
},
},
};
export default function () {
const apiUrl = `https://${__ENV.CONDUIT_URL}.sw.okami101.io/api`;
const limit = 10;
let offset = 0;
const tagsResponse = http.get(`${apiUrl}/tags`);
check(tagsResponse, {
"status is 200": (r) => r.status == 200,
});
let articles = []
do {
const articlesResponse = http.get(`${apiUrl}/articles?limit=${limit}&offset=${offset}`);
check(articlesResponse, {
"status is 200": (r) => r.status == 200,
});
articles = articlesResponse.json().articles;
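// For each article of the current page, fetch its detail, its comments and its author's profile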
for (let i = 0; i < articles.length; i++) {
const article = articles[i];
const articleResponse = http.get(`${apiUrl}/articles/${article.slug}`);
check(articleResponse, {
"status is 200": (r) => r.status == 200,
});
const commentsResponse = http.get(`${apiUrl}/articles/${article.slug}/comments`);
check(commentsResponse, {
"status is 200": (r) => r.status == 200,
});
const authorsResponse = http.get(`${apiUrl}/profiles/${article.author.username}`);
check(authorsResponse, {
"status is 200": (r) => r.status == 200,
});
}
offset += limit;
}
while (articles && articles.length >= limit);
}
The results
Laravel
Laravel MySQL scenario 1
Iteration creation rate = 5/s
checks.........................: 100.00% ✓ 8313 ✗ 0
data_received..................: 90 MB 1.3 MB/s
data_sent......................: 781 kB 11 kB/s
dropped_iterations.............: 138 1.937589/s
http_req_blocked...............: avg=180.91µs min=276ns med=1.08µs max=45.16ms p(90)=1.59µs p(95)=1.81µs
http_req_connecting............: avg=6.64µs min=0s med=0s max=6.29ms p(90)=0s p(95)=0s
http_req_duration..............: avg=374.29ms min=11.37ms med=370.41ms max=956.84ms p(90)=559.47ms p(95)=610.32ms
{ expected_response:true }...: avg=374.29ms min=11.37ms med=370.41ms max=956.84ms p(90)=559.47ms p(95)=610.32ms
http_req_failed................: 0.00% ✓ 0 ✗ 8313
http_req_receiving.............: avg=809.54µs min=39.87µs med=404.11µs max=53.92ms p(90)=1ms p(95)=2.3ms
http_req_sending...............: avg=147.9µs min=31.92µs med=121.93µs max=11.96ms p(90)=192.85µs p(95)=235.85µs
http_req_tls_handshaking.......: avg=168.89µs min=0s med=0s max=43.99ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=373.33ms min=11.11ms med=369.54ms max=956.55ms p(90)=558.62ms p(95)=609.32ms
http_reqs......................: 8313 116.718703/s
iteration_duration.............: avg=19.13s min=7.37s med=20.12s max=23.62s p(90)=21.53s p(95)=21.97s
iterations.....................: 163 2.288602/s
vus............................: 3 min=3 max=50
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 39, 87, 91, 91, 108, 105, 117, 111, 113, 121, 125, 125, 113, 130, 125, 111, 129, 120, 120, 122, 113, 128, 115, 117, 122, 122, 119, 114, 128, 131, 119, 129, 112, 113, 127, 129, 111, 127, 94, 127, 133, 128, 110, 112, 123, 132, 96, 125, 99, 126, 130, 137, 116, 122, 132, 145, 98, 128, 112, 124, 131, 123, 108, 127, 118, 121, 126, 106, 114, 95, 102, 14 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 5, 10, 15, 20, 25, 30, 35, 39, 43, 47, 50, 50, 50, 49, 50, 50, 49, 50, 50, 50, 49, 50, 48, 50, 50, 48, 49, 50, 46, 47, 47, 50, 50, 50, 50, 48, 50, 49, 50, 50, 50, 50, 49, 49, 50, 48, 50, 48, 49, 49, 48, 49, 50, 50, 50, 50, 50, 50, 50, 50, 48, 46, 46, 41, 40, 39, 36, 31, 24, 14, 3 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 73, 85, 143, 179, 199, 255, 285, 327, 343, 377, 386, 387, 463, 401, 390, 420, 398, 415, 403, 410, 427, 385, 434, 432, 413, 397, 413, 452, 388, 354, 386, 380, 445, 415, 386, 396, 444, 406, 470, 426, 380, 396, 443, 447, 391, 384, 409, 498, 456, 391, 376, 366, 423, 388, 387, 352, 463, 421, 410, 408, 374, 405, 401, 372, 343, 333, 304, 308, 260, 218, 90, 24 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.02, 0.03, 0.29, 0.34, 0.37, 0.36, 0.36, 0.35, 0.37, 0.37, 0.36, 0.36, 0.36, 0.36, 0.35, 0.34, 0.15, 0.03, 0.02 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.01, 0.07, 0.08, 0.08, 0.07, 0.08, 0.07, 0.08, 0.07, 0.08, 0.08, 0.09, 0.08, 0.08, 0.08, 0.04, 0.01, 0.01 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.53, 0.91, 0.92, 0.92, 0.93, 0.92, 0.91, 0.91, 0.91, 0.91, 0.91, 0.91, 0.92, 0.92, 0.39, 0.04, 0.03, 0.03 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.05, 0.08, 0.08, 0.08, 0.07, 0.08, 0.09, 0.08, 0.09, 0.08, 0.09, 0.08, 0.08, 0.07, 0.04, 0.01, 0.02, 0.02 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
As expected here, the database is the bottleneck. We get slow response times at full load (> 500 ms).
Laravel MySQL scenario 2
Iteration creation rate = 1/2/s
checks.........................: 100.00% ✓ 28729 ✗ 0
data_received..................: 68 MB 759 kB/s
data_sent......................: 2.4 MB 26 kB/s
http_req_blocked...............: avg=32.4µs min=250ns med=1.05µs max=61.63ms p(90)=1.5µs p(95)=1.68µs
http_req_connecting............: avg=983ns min=0s med=0s max=3.03ms p(90)=0s p(95)=0s
http_req_duration..............: avg=60.22ms min=8.2ms med=48.41ms max=371.38ms p(90)=118.99ms p(95)=147.13ms
{ expected_response:true }...: avg=60.22ms min=8.2ms med=48.41ms max=371.38ms p(90)=118.99ms p(95)=147.13ms
http_req_failed................: 0.00% ✓ 0 ✗ 28729
http_req_receiving.............: avg=1.07ms min=23.04µs med=190.23µs max=88.63ms p(90)=924.45µs p(95)=4.8ms
http_req_sending...............: avg=125.43µs min=28.12µs med=114.07µs max=6.79ms p(90)=175.41µs p(95)=204.5µs
http_req_tls_handshaking.......: avg=28.68µs min=0s med=0s max=41.99ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=59.02ms min=7.97ms med=47.55ms max=347.73ms p(90)=116.52ms p(95)=143.75ms
http_reqs......................: 28729 319.193743/s
iteration_duration.............: avg=1m1s min=48.73s med=58.88s max=1m18s p(90)=1m15s p(95)=1m16s
iterations.....................: 5 0.055553/s
vus............................: 26 min=1 max=29
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 34, 39, 124, 158, 206, 214, 245, 248, 271, 280, 291, 287, 289, 307, 318, 324, 307, 304, 318, 317, 329, 315, 309, 340, 338, 339, 325, 323, 341, 344, 345, 326, 330, 350, 340, 348, 334, 336, 347, 343, 354, 328, 324, 339, 357, 347, 342, 328, 337, 348, 357, 340, 329, 352, 344, 347, 336, 345, 341, 356, 353, 340, 344, 352, 339, 353, 340, 340, 347, 344, 344, 338, 324, 352, 349, 348, 337, 333, 336, 345, 355, 338, 336, 348, 345, 346, 341, 339, 342, 347 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13, 14, 14, 15, 15, 16, 16, 17, 17, 18, 18, 19, 19, 20, 20, 21, 21, 22, 22, 23, 23, 24, 24, 24, 24, 25, 25, 26, 26, 27, 26, 27, 27, 28, 28, 29, 29, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 26, 26, 26, 26 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 27, 25, 16, 12, 14, 14, 16, 16, 18, 17, 20, 21, 24, 22, 25, 25, 29, 30, 31, 31, 33, 35, 38, 35, 38, 38, 43, 43, 44, 43, 46, 49, 51, 49, 52, 51, 56, 56, 58, 58, 59, 63, 66, 65, 66, 65, 70, 72, 73, 69, 70, 73, 75, 77, 78, 76, 79, 78, 81, 80, 82, 85, 85, 79, 81, 78, 83, 83, 81, 82, 79, 83, 83, 83, 79, 80, 81, 80, 81, 80, 76, 80, 77, 80, 76, 80, 78, 73, 79, 75 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.02, 0.03, 0.19, 0.47, 0.6, 0.67, 0.7, 0.73, 0.73, 0.75, 0.75, 0.75, 0.79, 0.77, 0.79, 0.76, 0.75, 0.77, 0.78 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.01, 0.06, 0.12, 0.17, 0.17, 0.2, 0.2, 0.21, 0.22, 0.2, 0.22, 0.18, 0.21, 0.2, 0.2, 0.2, 0.19, 0.19 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.06, 0.13, 0.17, 0.17, 0.2, 0.2, 0.2, 0.2, 0.21, 0.2, 0.21, 0.21, 0.2, 0.2, 0.19, 0.2, 0.21, 0.2 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.04, 0.11, 0.12, 0.15, 0.14, 0.15, 0.15, 0.15, 0.15, 0.15, 0.16, 0.14, 0.15, 0.15, 0.15, 0.15, 0.15, 0.16 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
Now we have a very runtime-intensive scenario: the workers are the bottleneck, the database is not under heavy load, and the API keeps up with a low response time (~100 ms).
Laravel PgSQL scenario 1
Iteration creation rate = 5/s
checks.........................: 100.00% ✓ 12087 ✗ 0
data_received..................: 128 MB 1.9 MB/s
data_sent......................: 1.1 MB 16 kB/s
dropped_iterations.............: 63 0.928413/s
http_req_blocked...............: avg=124.62µs min=261ns med=1.05µs max=58.76ms p(90)=1.55µs p(95)=1.76µs
http_req_connecting............: avg=4.69µs min=0s med=0s max=6.28ms p(90)=0s p(95)=0s
http_req_duration..............: avg=236.41ms min=17.43ms med=233.87ms max=580.59ms p(90)=356.84ms p(95)=392.18ms
{ expected_response:true }...: avg=236.41ms min=17.43ms med=233.87ms max=580.59ms p(90)=356.84ms p(95)=392.18ms
http_req_failed................: 0.00% ✓ 0 ✗ 12087
http_req_receiving.............: avg=4.85ms min=48.19µs med=363.24µs max=136.3ms p(90)=16.74ms p(95)=34.43ms
http_req_sending...............: avg=139.94µs min=26.72µs med=119.27µs max=6.81ms p(90)=185.96µs p(95)=224.9µs
http_req_tls_handshaking.......: avg=115.52µs min=0s med=0s max=42ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=231.41ms min=17.2ms med=228.56ms max=580.39ms p(90)=348.64ms p(95)=385.75ms
http_reqs......................: 12087 178.122642/s
iteration_duration.............: avg=12.09s min=2.76s med=13.14s max=15.38s p(90)=14.29s p(95)=14.55s
iterations.....................: 237 3.492601/s
vus............................: 18 min=5 max=50
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 13, 136, 155, 164, 176, 174, 180, 174, 168, 184, 181, 184, 177, 172, 179, 188, 180, 181, 175, 182, 188, 178, 180, 178, 184, 178, 186, 182, 173, 185, 186, 183, 176, 179, 181, 185, 181, 185, 168, 187, 181, 189, 183, 174, 184, 182, 185, 180, 169, 192, 178, 190, 176, 179, 184, 177, 189, 176, 178, 184, 185, 185, 183, 169, 192, 182, 190, 181, 44 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 5, 10, 15, 16, 19, 22, 25, 27, 29, 32, 34, 38, 41, 42, 47, 48, 50, 50, 50, 50, 49, 48, 49, 50, 49, 50, 47, 46, 48, 48, 49, 48, 50, 50, 49, 50, 48, 48, 47, 50, 46, 48, 47, 49, 49, 50, 50, 50, 50, 50, 48, 49, 50, 49, 49, 47, 49, 48, 48, 50, 47, 44, 42, 41, 33, 26, 18 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 69, 40, 66, 85, 93, 106, 123, 143, 162, 161, 176, 192, 211, 231, 238, 251, 261, 277, 278, 277, 266, 272, 270, 266, 266, 279, 262, 258, 270, 267, 262, 272, 273, 281, 277, 270, 268, 269, 274, 264, 273, 254, 257, 261, 268, 268, 271, 272, 278, 276, 269, 264, 268, 276, 269, 277, 261, 266, 286, 264, 272, 253, 251, 231, 230, 181, 139, 99, 46 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.02, 0.6, 0.81, 0.83, 0.84, 0.84, 0.82, 0.84, 0.83, 0.84, 0.85, 0.83, 0.84, 0.83, 0.5, 0.03, 0.02, 0.03 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.01, 0.12, 0.16, 0.15, 0.14, 0.15, 0.17, 0.14, 0.16, 0.15, 0.14, 0.16, 0.15, 0.16, 0.1, 0.02, 0.02, 0.03 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.21, 0.28, 0.31, 0.3, 0.32, 0.32, 0.31, 0.3, 0.31, 0.32, 0.32, 0.32, 0.3, 0.22, 0.03, 0.03, 0.03, 0.04 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.28, 0.42, 0.41, 0.44, 0.42, 0.41, 0.42, 0.43, 0.42, 0.4, 0.42, 0.43, 0.43, 0.28, 0.01, 0.02, 0.02, 0.02 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
Laravel performs slightly better with PostgreSQL than with MySQL in this scenario, and we are not limited by the database this time.
Laravel PgSQL scenario 2
Iteration creation rate = 1/2/s
checks.........................: 100.00% ✓ 17658 ✗ 0
data_received..................: 40 MB 448 kB/s
data_sent......................: 1.5 MB 16 kB/s
http_req_blocked...............: avg=52.73µs min=250ns med=1.08µs max=70.71ms p(90)=1.56µs p(95)=1.77µs
http_req_connecting............: avg=2.51µs min=0s med=0s max=9.95ms p(90)=0s p(95)=0s
http_req_duration..............: avg=104.85ms min=15.23ms med=99.81ms max=367.04ms p(90)=178.92ms p(95)=204.39ms
{ expected_response:true }...: avg=104.85ms min=15.23ms med=99.81ms max=367.04ms p(90)=178.92ms p(95)=204.39ms
http_req_failed................: 0.00% ✓ 0 ✗ 17658
http_req_receiving.............: avg=796.32µs min=22.71µs med=188.69µs max=77.06ms p(90)=721.81µs p(95)=3.02ms
http_req_sending...............: avg=134.62µs min=30.26µs med=119.13µs max=15.1ms p(90)=184.55µs p(95)=218.8µs
http_req_tls_handshaking.......: avg=46.04µs min=0s med=0s max=52.27ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=103.92ms min=15.01ms med=98.88ms max=366.59ms p(90)=177.44ms p(95)=202.74ms
http_reqs......................: 17658 196.17549/s
vus............................: 31 min=1 max=31
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 7, 26, 47, 95, 108, 126, 131, 153, 164, 168, 165, 163, 180, 185, 188, 178, 177, 192, 194, 194, 188, 187, 200, 202, 202, 202, 185, 207, 197, 202, 202, 196, 210, 210, 212, 213, 193, 213, 218, 214, 206, 203, 213, 212, 215, 206, 195, 218, 220, 214, 211, 203, 209, 220, 220, 212, 205, 215, 209, 218, 212, 206, 217, 220, 219, 211, 205, 205, 222, 218, 216, 203, 211, 221, 223, 212, 207, 215, 216, 221, 211, 209, 216, 219, 225, 209, 206, 217, 216, 217, 115 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13, 14, 14, 15, 15, 16, 16, 17, 17, 18, 18, 19, 19, 20, 20, 21, 21, 22, 22, 23, 23, 24, 24, 25, 25, 26, 26, 27, 27, 28, 28, 29, 29, 30, 30, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 49, 38, 30, 21, 22, 23, 26, 26, 26, 29, 32, 37, 35, 38, 39, 44, 47, 47, 48, 51, 55, 59, 57, 59, 60, 65, 71, 68, 72, 73, 77, 81, 78, 80, 81, 86, 93, 90, 90, 92, 98, 103, 100, 103, 104, 111, 116, 114, 111, 114, 121, 124, 126, 124, 123, 132, 140, 136, 137, 139, 140, 152, 142, 140, 142, 144, 153, 150, 142, 138, 146, 150, 147, 141, 141, 142, 152, 144, 142, 141, 143, 150, 144, 140, 139, 145, 153, 141, 146, 141, 149 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.02, 0.11, 0.38, 0.55, 0.62, 0.67, 0.71, 0.73, 0.78, 0.78, 0.77, 0.79, 0.81, 0.81, 0.8, 0.8, 0.8, 0.8 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.01, 0.04, 0.06, 0.1, 0.11, 0.13, 0.14, 0.14, 0.13, 0.14, 0.14, 0.15, 0.14, 0.14, 0.14, 0.14, 0.15, 0.15 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.04, 0.16, 0.24, 0.27, 0.29, 0.29, 0.3, 0.32, 0.32, 0.32, 0.33, 0.32, 0.33, 0.33, 0.34, 0.33, 0.33, 0.33 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.05, 0.21, 0.31, 0.36, 0.39, 0.41, 0.43, 0.43, 0.44, 0.44, 0.44, 0.47, 0.46, 0.45, 0.46, 0.47, 0.45, 0.47 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
Laravel performs slower with PostgreSQL than with MySQL in this context. Workers and database are both heavily loaded, and we didn't complete a single scenario iteration.
Symfony
Symfony MySQL scenario 1
Iteration creation rate = 5/s
checks.........................: 100.00% ✓ 10353 ✗ 0
data_received..................: 94 MB 1.4 MB/s
data_sent......................: 949 kB 14 kB/s
dropped_iterations.............: 98 1.43851/s
http_req_blocked...............: avg=148.33µs min=267ns med=1.08µs max=54.56ms p(90)=1.61µs p(95)=1.84µs
http_req_connecting............: avg=4.51µs min=0s med=0s max=3.56ms p(90)=0s p(95)=0s
http_req_duration..............: avg=279.44ms min=21.05ms med=293.58ms max=662.16ms p(90)=395.94ms p(95)=427.75ms
{ expected_response:true }...: avg=279.44ms min=21.05ms med=293.58ms max=662.16ms p(90)=395.94ms p(95)=427.75ms
http_req_failed................: 0.00% ✓ 0 ✗ 10353
http_req_receiving.............: avg=419.67µs min=38.92µs med=255.98µs max=27.16ms p(90)=532.61µs p(95)=959.6µs
http_req_sending...............: avg=138.44µs min=31.52µs med=117.04µs max=10.05ms p(90)=186.82µs p(95)=225.7µs
http_req_tls_handshaking.......: avg=138.79µs min=0s med=0s max=44.36ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=278.88ms min=20.66ms med=293.04ms max=661.64ms p(90)=395.43ms p(95)=427.22ms
http_reqs......................: 10353 151.96829/s
iteration_duration.............: avg=14.29s min=2.62s med=15.93s max=17.73s p(90)=16.93s p(95)=17.08s
iterations.....................: 203 2.97977/s
vus............................: 3 min=3 max=50
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 21, 134, 154, 146, 158, 153, 159, 155, 146, 156, 157, 152, 155, 138, 159, 157, 154, 160, 147, 149, 156, 157, 159, 138, 155, 151, 162, 155, 138, 160, 159, 150, 148, 153, 156, 147, 154, 161, 144, 152, 157, 149, 155, 145, 155, 158, 148, 162, 141, 148, 160, 149, 167, 135, 154, 163, 151, 154, 144, 152, 158, 158, 160, 151, 156, 168, 155, 164, 71 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 5, 10, 13, 17, 19, 23, 27, 30, 34, 36, 40, 42, 46, 50, 50, 49, 50, 50, 50, 50, 50, 50, 49, 50, 50, 50, 46, 50, 48, 48, 49, 50, 49, 50, 49, 50, 50, 50, 50, 50, 47, 50, 50, 49, 49, 46, 49, 49, 50, 49, 50, 50, 49, 49, 48, 47, 49, 48, 49, 48, 47, 41, 35, 31, 26, 24, 18, 3 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 45, 40, 66, 90, 107, 119, 147, 176, 203, 218, 232, 261, 269, 314, 325, 316, 312, 317, 341, 325, 320, 315, 318, 339, 326, 318, 308, 307, 337, 331, 306, 322, 325, 336, 326, 324, 319, 317, 343, 326, 314, 329, 320, 336, 328, 313, 309, 315, 344, 331, 315, 313, 319, 352, 316, 330, 307, 313, 344, 320, 316, 300, 254, 255, 201, 171, 148, 122, 57 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.02, 0.02, 0.28, 0.33, 0.33, 0.33, 0.32, 0.32, 0.31, 0.32, 0.34, 0.32, 0.33, 0.32, 0.33, 0.23, 0.03, 0.02, 0.02 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.02, 0.13, 0.15, 0.14, 0.15, 0.15, 0.14, 0.14, 0.15, 0.13, 0.14, 0.14, 0.14, 0.13, 0.11, 0.02, 0.02, 0.02 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.71, 0.94, 0.94, 0.96, 0.94, 0.95, 0.94, 0.94, 0.93, 0.94, 0.94, 0.93, 0.94, 0.69, 0.04, 0.03, 0.03, 0.03 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.05, 0.05, 0.06, 0.04, 0.06, 0.05, 0.06, 0.06, 0.07, 0.05, 0.06, 0.07, 0.06, 0.05, 0.02, 0.02, 0.01, 0.02 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
It's very similar to Laravel.
Symfony MySQL scenario 2
Iteration creation rate = 1/2/s
checks.........................: 100.00% ✓ 34637 ✗ 0
data_received..................: 63 MB 703 kB/s
data_sent......................: 2.8 MB 31 kB/s
http_req_blocked...............: avg=25µs min=229ns med=1.01µs max=53.85ms p(90)=1.5µs p(95)=1.68µs
http_req_connecting............: avg=913ns min=0s med=0s max=2.79ms p(90)=0s p(95)=0s
http_req_duration..............: avg=45.26ms min=7.15ms med=33.2ms max=741.21ms p(90)=97.42ms p(95)=117.33ms
{ expected_response:true }...: avg=45.26ms min=7.15ms med=33.2ms max=741.21ms p(90)=97.42ms p(95)=117.33ms
http_req_failed................: 0.00% ✓ 0 ✗ 34637
http_req_receiving.............: avg=768.97µs min=19.34µs med=130.89µs max=68.86ms p(90)=530.92µs p(95)=3.15ms
http_req_sending...............: avg=126.37µs min=26.62µs med=111.05µs max=19.03ms p(90)=177.14µs p(95)=209.12µs
http_req_tls_handshaking.......: avg=22.07µs min=0s med=0s max=34.55ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=44.36ms min=6.97ms med=32.51ms max=740.19ms p(90)=95.8ms p(95)=115.05ms
http_reqs......................: 34637 384.829353/s
iteration_duration.............: avg=50.67s min=32.73s med=50.6s max=1m8s p(90)=1m7s p(95)=1m7s
iterations.....................: 9 0.099993/s
vus............................: 22 min=1 max=26
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 1, 57, 181, 219, 275, 284, 319, 335, 336, 376, 376, 367, 367, 366, 406, 403, 408, 389, 378, 412, 411, 416, 394, 381, 416, 413, 419, 391, 393, 413, 418, 420, 404, 400, 424, 413, 415, 400, 399, 427, 420, 400, 399, 404, 417, 421, 426, 397, 389, 408, 418, 418, 399, 405, 414, 419, 416, 405, 394, 415, 405, 407, 399, 409, 395, 408, 416, 405, 392, 421, 397, 421, 399, 407, 405, 414, 401, 402, 401, 405, 422, 411, 402, 405, 415, 416, 417, 398, 396, 240 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13, 14, 14, 15, 15, 16, 16, 16, 16, 17, 16, 17, 17, 18, 18, 18, 18, 19, 19, 20, 20, 21, 21, 22, 22, 23, 22, 23, 23, 24, 24, 25, 25, 25, 25, 26, 26, 26, 26, 26, 26, 26, 26, 25, 25, 25, 25, 25, 25, 25, 24, 24, 24, 24, 24, 24, 23, 23, 23, 22, 22, 22, 22, 22, 22 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 701, 36, 11, 11, 11, 12, 12, 13, 15, 14, 16, 17, 19, 20, 19, 21, 22, 24, 26, 25, 27, 27, 30, 32, 31, 32, 33, 36, 38, 37, 38, 39, 40, 40, 39, 39, 41, 43, 45, 43, 43, 46, 48, 46, 49, 48, 49, 53, 56, 55, 55, 54, 58, 57, 58, 58, 59, 62, 63, 62, 63, 64, 66, 63, 65, 65, 60, 65, 62, 60, 62, 60, 62, 61, 61, 57, 61, 60, 58, 60, 56, 56, 58, 55, 54, 53, 53, 55, 55, 53 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.02, 0.3, 0.44, 0.55, 0.57, 0.6, 0.57, 0.61, 0.62, 0.59, 0.6, 0.59, 0.61, 0.58, 0.61, 0.6, 0.57, 0.61 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.01, 0.12, 0.27, 0.3, 0.35, 0.33, 0.35, 0.34, 0.33, 0.35, 0.34, 0.37, 0.34, 0.35, 0.35, 0.35, 0.35, 0.35 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.07, 0.13, 0.14, 0.15, 0.16, 0.14, 0.16, 0.17, 0.17, 0.17, 0.15, 0.16, 0.14, 0.15, 0.16, 0.17, 0.16, 0.15 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.04, 0.06, 0.07, 0.08, 0.08, 0.09, 0.08, 0.08, 0.07, 0.08, 0.08, 0.09, 0.09, 0.09, 0.09, 0.07, 0.08, 0.08 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
Similar to Laravel too, just slightly better in the same context. Let's see if it can keep up the same performance with PostgreSQL.
Symfony PgSQL scenario 1
Iteration creation rate = 5/s
checks.........................: 100.00% ✓ 10608 ✗ 0
data_received..................: 96 MB 1.4 MB/s
data_sent......................: 971 kB 14 kB/s
dropped_iterations.............: 93 1.362698/s
http_req_blocked...............: avg=154.31µs min=248ns med=1.07µs max=60.23ms p(90)=1.61µs p(95)=1.86µs
http_req_connecting............: avg=5.19µs min=0s med=0s max=3.83ms p(90)=0s p(95)=0s
http_req_duration..............: avg=273.06ms min=19.44ms med=291.33ms max=566.8ms p(90)=362.38ms p(95)=388.91ms
{ expected_response:true }...: avg=273.06ms min=19.44ms med=291.33ms max=566.8ms p(90)=362.38ms p(95)=388.91ms
http_req_failed................: 0.00% ✓ 0 ✗ 10608
http_req_receiving.............: avg=743.59µs min=31.48µs med=265.43µs max=72ms p(90)=757.9µs p(95)=2.34ms
http_req_sending...............: avg=140.55µs min=33.65µs med=118.47µs max=5.54ms p(90)=188.99µs p(95)=232.62µs
http_req_tls_handshaking.......: avg=144.54µs min=0s med=0s max=50.24ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=272.18ms min=19.24ms med=290.43ms max=566.36ms p(90)=361.47ms p(95)=387.77ms
http_reqs......................: 10608 155.435505/s
iteration_duration.............: avg=13.96s min=2.62s med=15.67s max=16.83s p(90)=16.36s p(95)=16.48s
iterations.....................: 208 3.047755/s
vus............................: 9 min=5 max=50
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 50, 137, 158, 159, 158, 144, 155, 160, 159, 155, 149, 158, 157, 165, 150, 155, 158, 159, 161, 159, 143, 163, 161, 162, 162, 144, 158, 165, 160, 159, 154, 153, 156, 165, 159, 146, 156, 158, 159, 158, 148, 154, 161, 161, 163, 142, 155, 163, 163, 155, 150, 158, 159, 160, 159, 149, 157, 163, 158, 159, 154, 154, 163, 163, 157, 152, 160, 159, 62 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 5, 10, 13, 16, 20, 23, 27, 30, 32, 36, 40, 43, 46, 49, 50, 50, 49, 50, 50, 49, 50, 50, 50, 50, 49, 49, 48, 49, 48, 49, 50, 50, 48, 50, 50, 49, 50, 49, 49, 50, 49, 48, 49, 48, 47, 49, 48, 50, 49, 50, 49, 50, 49, 50, 49, 48, 50, 48, 47, 49, 45, 39, 35, 32, 30, 25, 19, 9 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 36, 48, 72, 90, 109, 137, 156, 170, 196, 213, 240, 260, 272, 286, 309, 340, 310, 313, 307, 306, 329, 323, 310, 307, 307, 333, 313, 303, 302, 306, 325, 322, 310, 309, 307, 339, 312, 317, 310, 319, 332, 322, 306, 306, 296, 332, 316, 308, 306, 309, 331, 321, 315, 309, 306, 337, 316, 307, 309, 301, 324, 286, 245, 217, 197, 196, 151, 104, 57 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.08, 0.02, 0.17, 0.53, 0.54, 0.55, 0.55, 0.54, 0.54, 0.55, 0.54, 0.54, 0.54, 0.56, 0.55, 0.54, 0.15, 0.03, 0.02 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.03, 0.02, 0.05, 0.17, 0.17, 0.17, 0.15, 0.17, 0.16, 0.16, 0.16, 0.17, 0.17, 0.16, 0.16, 0.17, 0.05, 0.01, 0.02 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.14, 0.61, 0.63, 0.64, 0.63, 0.64, 0.64, 0.64, 0.62, 0.65, 0.63, 0.64, 0.64, 0.64, 0.24, 0.03, 0.03, 0.03 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.08, 0.33, 0.36, 0.36, 0.36, 0.36, 0.36, 0.36, 0.38, 0.35, 0.37, 0.36, 0.36, 0.35, 0.12, 0.02, 0.02, 0.02 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
Symfony performs about the same with PostgreSQL as with MySQL, but contrary to the Laravel case we are limited by the database here, and it performs a little worse.
Symfony PgSQL scenario 2
Iteration creation rate = 1/2/s
checks.........................: 100.00% ✓ 20817 ✗ 0
data_received..................: 38 MB 426 kB/s
data_sent......................: 1.7 MB 19 kB/s
http_req_blocked...............: avg=48.76µs min=232ns med=1.07µs max=52.14ms p(90)=1.56µs p(95)=1.78µs
http_req_connecting............: avg=2.93µs min=0s med=0s max=23.78ms p(90)=0s p(95)=0s
http_req_duration..............: avg=88.89ms min=13.36ms med=81.73ms max=1.74s p(90)=157.48ms p(95)=179.93ms
{ expected_response:true }...: avg=88.89ms min=13.36ms med=81.73ms max=1.74s p(90)=157.48ms p(95)=179.93ms
http_req_failed................: 0.00% ✓ 0 ✗ 20817
http_req_receiving.............: avg=753.58µs min=22.88µs med=147.77µs max=69.37ms p(90)=574.84µs p(95)=2.86ms
http_req_sending...............: avg=139.52µs min=30.26µs med=120.37µs max=13.1ms p(90)=191.03µs p(95)=225.35µs
http_req_tls_handshaking.......: avg=43.19µs min=0s med=0s max=44.53ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=88ms min=13.11ms med=80.82ms max=1.74s p(90)=156.15ms p(95)=178.32ms
http_reqs......................: 20817 231.261996/s
vus............................: 31 min=1 max=31
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 14, 37, 66, 114, 147, 159, 161, 176, 202, 203, 207, 213, 202, 222, 226, 229, 226, 217, 236, 243, 242, 225, 226, 242, 248, 253, 235, 230, 246, 254, 249, 243, 225, 251, 251, 252, 221, 229, 247, 255, 250, 241, 238, 252, 255, 259, 242, 234, 252, 253, 253, 240, 238, 255, 250, 253, 246, 240, 250, 255, 254, 245, 240, 252, 258, 254, 234, 234, 252, 251, 254, 250, 236, 248, 256, 260, 244, 234, 248, 254, 253, 244, 246, 253, 252, 254, 244, 239, 252, 258, 84 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13, 14, 14, 15, 15, 16, 16, 17, 17, 18, 18, 19, 19, 20, 20, 21, 21, 22, 22, 23, 23, 24, 24, 25, 25, 26, 26, 27, 27, 28, 28, 29, 29, 30, 30, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 41, 27, 24, 17, 18, 19, 22, 23, 22, 24, 27, 28, 32, 31, 33, 35, 38, 41, 40, 41, 43, 49, 51, 49, 51, 51, 57, 61, 59, 58, 62, 65, 71, 70, 69, 70, 83, 82, 80, 78, 82, 86, 90, 87, 88, 90, 95, 102, 98, 99, 101, 106, 107, 109, 110, 111, 116, 119, 118, 119, 118, 127, 127, 125, 121, 121, 127, 128, 132, 122, 121, 124, 132, 125, 120, 120, 126, 127, 130, 121, 122, 126, 127, 123, 124, 122, 125, 130, 121, 122, 122 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.02, 0.02, 0.16, 0.44, 0.58, 0.63, 0.69, 0.71, 0.72, 0.74, 0.74, 0.73, 0.75, 0.73, 0.75, 0.75, 0.76, 0.76, 0.76 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.02, 0.07, 0.13, 0.18, 0.21, 0.21, 0.21, 0.22, 0.21, 0.22, 0.24, 0.22, 0.23, 0.23, 0.23, 0.22, 0.23, 0.23 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.07, 0.19, 0.23, 0.26, 0.27, 0.27, 0.29, 0.28, 0.31, 0.32, 0.31, 0.3, 0.31, 0.32, 0.31, 0.31, 0.3, 0.32 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.07, 0.26, 0.34, 0.38, 0.4, 0.42, 0.41, 0.43, 0.42, 0.41, 0.42, 0.44, 0.43, 0.41, 0.43, 0.43, 0.44, 0.42 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
Now it performs clearly slower than with MySQL in the same scenario, though still slightly better than Laravel in the same context. To summarize, the 2nd scenario gives MySQL a clear advantage over PostgreSQL with PHP.
FastAPI
As a side note here, Uvicorn is limited to 1 CPU core, so I use 2 replicas on each worker node in order to use all CPU cores.
FastAPI PgSQL scenario 1
Iteration creation rate = 15/s
checks.........................: 100.00% ✓ 33048 ✗ 0
data_received..................: 272 MB 4.3 MB/s
data_sent......................: 2.9 MB 46 kB/s
dropped_iterations.............: 253 4.042284/s
http_req_blocked...............: avg=44.03µs min=197ns med=873ns max=51.66ms p(90)=1.31µs p(95)=1.48µs
http_req_connecting............: avg=1.59µs min=0s med=0s max=3.33ms p(90)=0s p(95)=0s
http_req_duration..............: avg=87.55ms min=5.74ms med=79.16ms max=449.45ms p(90)=160.7ms p(95)=187.45ms
{ expected_response:true }...: avg=87.55ms min=5.74ms med=79.16ms max=449.45ms p(90)=160.7ms p(95)=187.45ms
http_req_failed................: 0.00% ✓ 0 ✗ 33048
http_req_receiving.............: avg=809.01µs min=18.38µs med=273.27µs max=53.15ms p(90)=1.95ms p(95)=3.12ms
http_req_sending...............: avg=156.85µs min=23.21µs med=95.41µs max=45.6ms p(90)=181.16µs p(95)=248.5µs
http_req_tls_handshaking.......: avg=40.32µs min=0s med=0s max=44.77ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=86.59ms min=0s med=78.14ms max=448.54ms p(90)=159.53ms p(95)=186.05ms
http_reqs......................: 33048 528.02138/s
iteration_duration.............: avg=4.49s min=1.14s med=4.65s max=6.25s p(90)=5.17s p(95)=5.3s
iterations.....................: 648 10.35336/s
vus............................: 22 min=15 max=50
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 270, 514, 530, 535, 552, 541, 524, 493, 545, 555, 560, 539, 519, 545, 540, 531, 525, 514, 547, 540, 537, 533, 485, 511, 534, 525, 508, 500, 550, 527, 538, 516, 500, 542, 532, 530, 504, 508, 540, 538, 553, 537, 497, 560, 517, 578, 559, 487, 551, 546, 538, 531, 517, 518, 578, 559, 521, 516, 556, 567, 517, 517, 351 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 15, 23, 31, 41, 46, 50, 50, 50, 48, 49, 50, 48, 48, 45, 49, 50, 49, 48, 50, 49, 50, 49, 49, 50, 50, 50, 50, 48, 50, 47, 49, 48, 48, 49, 49, 50, 48, 48, 50, 49, 49, 50, 48, 48, 49, 48, 45, 50, 48, 49, 49, 48, 47, 50, 49, 50, 48, 47, 50, 48, 39, 22 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 20, 33, 45, 65, 74, 88, 93, 96, 90, 89, 90, 92, 94, 91, 89, 91, 91, 95, 90, 93, 92, 93, 101, 93, 90, 94, 96, 101, 87, 92, 89, 96, 98, 91, 91, 93, 94, 97, 92, 90, 88, 91, 95, 91, 93, 86, 88, 94, 89, 91, 90, 92, 93, 92, 87, 86, 94, 90, 91, 86, 86, 63, 36 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.02, 0.52, 0.71, 0.74, 0.73, 0.71, 0.69, 0.71, 0.73, 0.73, 0.75, 0.71, 0.74, 0.5, 0.04, 0.03, 0.03, 0.02 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.01, 0.12, 0.15, 0.15, 0.15, 0.15, 0.17, 0.16, 0.15, 0.15, 0.15, 0.15, 0.15, 0.11, 0.02, 0.01, 0.01, 0.01 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.3, 0.46, 0.47, 0.47, 0.48, 0.47, 0.48, 0.49, 0.49, 0.49, 0.46, 0.49, 0.35, 0.03, 0.03, 0.03, 0.03, 0.03 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.11, 0.25, 0.25, 0.3, 0.27, 0.27, 0.25, 0.29, 0.28, 0.28, 0.26, 0.29, 0.19, 0.02, 0.02, 0.02, 0.01, 0.01 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
Now we're talking: FastAPI outperforms the above PHP frameworks, and the database isn't the bottleneck anymore.
FastAPI PgSQL scenario 2
Iteration creation rate = 2/s
checks.........................: 100.00% ✓ 72075 ✗ 0
data_received..................: 147 MB 1.6 MB/s
data_sent......................: 5.4 MB 60 kB/s
dropped_iterations.............: 68 0.755514/s
http_req_blocked...............: avg=20.35µs min=213ns med=903ns max=52.94ms p(90)=1.31µs p(95)=1.48µs
http_req_connecting............: avg=909ns min=0s med=0s max=10.34ms p(90)=0s p(95)=0s
http_req_duration..............: avg=51.06ms min=3.06ms med=32.91ms max=1.07s p(90)=117.46ms p(95)=138.49ms
{ expected_response:true }...: avg=51.06ms min=3.06ms med=32.91ms max=1.07s p(90)=117.46ms p(95)=138.49ms
http_req_failed................: 0.00% ✓ 0 ✗ 72075
http_req_receiving.............: avg=301.26µs min=17.32µs med=125.14µs max=26.38ms p(90)=678.97µs p(95)=1.14ms
http_req_sending...............: avg=118.25µs min=21.81µs med=94.18µs max=20.22ms p(90)=163.96µs p(95)=204.06µs
http_req_tls_handshaking.......: avg=17.89µs min=0s med=0s max=37.55ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=50.64ms min=911.97µs med=32.54ms max=1.07s p(90)=116.83ms p(95)=137.77ms
http_reqs......................: 72075 800.78886/s
iteration_duration.............: avg=1m10s min=51.94s med=1m14s max=1m21s p(90)=1m20s p(95)=1m21s
iterations.....................: 20 0.22221/s
vus............................: 33 min=2 max=50
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 19, 168, 465, 631, 720, 792, 752, 758, 757, 763, 839, 819, 777, 770, 821, 828, 760, 792, 741, 869, 862, 831, 846, 811, 820, 878, 792, 811, 815, 829, 804, 807, 842, 819, 791, 804, 744, 839, 810, 828, 841, 890, 841, 834, 804, 829, 821, 837, 852, 853, 853, 884, 871, 773, 774, 825, 794, 832, 825, 787, 807, 872, 837, 815, 826, 778, 811, 810, 823, 807, 786, 872, 886, 810, 808, 831, 824, 853, 770, 818, 793, 827, 795, 813, 795, 858, 869, 805, 846, 823, 563 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 49, 50, 50, 50, 50, 49, 50, 50, 50, 49, 49, 49, 48, 48, 48, 47, 46, 46, 46, 46, 46, 46, 44, 44, 44, 44, 43, 43, 43, 42, 42, 40, 39, 39, 39, 37, 37, 36, 33 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 13, 12, 8, 9, 11, 12, 16, 18, 21, 23, 24, 27, 30, 34, 34, 36, 42, 42, 48, 44, 46, 51, 51, 56, 59, 56, 63, 59, 64, 59, 63, 62, 58, 61, 62, 62, 68, 60, 61, 60, 58, 57, 59, 59, 60, 61, 61, 58, 60, 58, 59, 56, 56, 65, 64, 58, 66, 61, 60, 62, 61, 57, 58, 61, 55, 62, 60, 59, 57, 57, 58, 53, 51, 56, 56, 53, 52, 52, 56, 51, 56, 51, 51, 52, 46, 48, 44, 46, 44, 43, 44 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.02, 0.02, 0.03, 0.54, 0.65, 0.68, 0.69, 0.72, 0.71, 0.7, 0.71, 0.71, 0.72, 0.69, 0.68, 0.72, 0.69, 0.71, 0.7 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.01, 0.01, 0.11, 0.18, 0.18, 0.16, 0.17, 0.17, 0.16, 0.18, 0.18, 0.18, 0.18, 0.18, 0.15, 0.18, 0.17, 0.17 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.03, 0.22, 0.22, 0.25, 0.28, 0.31, 0.29, 0.31, 0.3, 0.3, 0.32, 0.33, 0.3, 0.28, 0.3, 0.31, 0.3, 0.28 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.01, 0.11, 0.14, 0.18, 0.2, 0.23, 0.23, 0.22, 0.24, 0.26, 0.24, 0.25, 0.21, 0.2, 0.23, 0.24, 0.22, 0.19 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
FastAPI performs at least around twice as well as the main PHP frameworks in every situation. I'm not sure that testing it against MySQL would change anything.
NestJS
NestJS PgSQL scenario 1
Iteration creation rate = 15/s
checks.........................: 100.00% ✓ 37434 ✗ 0
data_received..................: 648 MB 11 MB/s
data_sent......................: 3.5 MB 57 kB/s
dropped_iterations.............: 166 2.680206/s
http_req_blocked...............: avg=35.23µs min=216ns med=702ns max=49.57ms p(90)=1.19µs p(95)=1.33µs
http_req_connecting............: avg=1.44µs min=0s med=0s max=5.62ms p(90)=0s p(95)=0s
http_req_duration..............: avg=75.64ms min=3.2ms med=70.44ms max=346.41ms p(90)=134.43ms p(95)=146.32ms
{ expected_response:true }...: avg=75.64ms min=3.2ms med=70.44ms max=346.41ms p(90)=134.43ms p(95)=146.32ms
http_req_failed................: 0.00% ✓ 0 ✗ 37434
http_req_receiving.............: avg=408.39µs min=19.38µs med=219.61µs max=42.89ms p(90)=653.52µs p(95)=1.28ms
http_req_sending...............: avg=134.99µs min=18.84µs med=83.94µs max=26.64ms p(90)=156.84µs p(95)=222.84µs
http_req_tls_handshaking.......: avg=31.9µs min=0s med=0s max=40.37ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=75.09ms min=2.92ms med=69.92ms max=345.47ms p(90)=133.87ms p(95)=145.68ms
http_reqs......................: 37434 604.402491/s
iteration_duration.............: avg=3.89s min=1.2s med=4s max=5.16s p(90)=4.41s p(95)=4.54s
iterations.....................: 734 11.851029/s
vus............................: 31 min=15 max=50
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 273, 496, 577, 588, 599, 624, 621, 610, 641, 638, 641, 614, 585, 580, 600, 604, 601, 571, 606, 663, 643, 572, 596, 585, 616, 650, 680, 615, 612, 611, 617, 572, 587, 593, 605, 633, 633, 573, 646, 645, 650, 570, 629, 653, 691, 650, 580, 555, 590, 646, 565, 585, 638, 594, 567, 555, 602, 570, 641, 648, 601, 630, 8 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 15, 23, 32, 39, 44, 50, 49, 50, 46, 47, 49, 49, 48, 49, 49, 49, 49, 49, 50, 50, 49, 50, 48, 50, 49, 50, 49, 50, 46, 48, 46, 49, 50, 46, 47, 47, 45, 49, 47, 45, 49, 49, 49, 48, 45, 48, 50, 50, 49, 46, 45, 48, 50, 50, 49, 45, 48, 50, 49, 50, 31 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 21, 38, 45, 58, 65, 74, 79, 79, 75, 74, 75, 78, 85, 80, 80, 82, 81, 86, 78, 74, 76, 85, 81, 83, 77, 76, 72, 75, 78, 77, 81, 82, 83, 83, 79, 76, 76, 84, 75, 74, 73, 81, 81, 74, 68, 72, 81, 88, 82, 77, 75, 81, 76, 83, 86, 85, 74, 84, 80, 73, 74, 33, 6 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.02, 0.03, 0.27, 0.49, 0.46, 0.42, 0.45, 0.47, 0.42, 0.42, 0.46, 0.45, 0.42, 0.43, 0.35, 0.02, 0.02, 0.02, 0.02 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.01, 0.29, 0.48, 0.51, 0.55, 0.52, 0.49, 0.54, 0.54, 0.49, 0.51, 0.55, 0.53, 0.4, 0.01, 0.01, 0.01, 0.02 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.04, 0.12, 0.21, 0.24, 0.21, 0.22, 0.22, 0.21, 0.21, 0.22, 0.23, 0.21, 0.21, 0.18, 0.03, 0.04, 0.03, 0.03, 0.04 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.08, 0.17, 0.16, 0.16, 0.15, 0.18, 0.17, 0.17, 0.17, 0.17, 0.17, 0.16, 0.16, 0.02, 0.02, 0.02, 0.01, 0.02 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
It's slightly better than FastAPI, and the database is strangely far less loaded than with FastAPI. Let's see what happens in scenario 2.
NestJS PgSQL scenario 2
Iteration creation rate = 3/s
checks.........................: 99.93% ✓ 111672 ✗ 72
data_received..................: 530 MB 6.1 MB/s
data_sent......................: 8.9 MB 103 kB/s
dropped_iterations.............: 109 1.255685/s
http_req_blocked...............: avg=13.34µs min=209ns med=701ns max=48.9ms p(90)=1.15µs p(95)=1.3µs
http_req_connecting............: avg=721ns min=0s med=0s max=18.51ms p(90)=0s p(95)=0s
http_req_duration..............: avg=29.19ms min=1.38ms med=26.19ms max=247.1ms p(90)=53.2ms p(95)=61.63ms
{ expected_response:true }...: avg=29.2ms min=2.01ms med=26.21ms max=247.1ms p(90)=53.21ms p(95)=61.65ms
http_req_failed................: 0.06% ✓ 72 ✗ 111672
http_req_receiving.............: avg=501.09µs min=16.93µs med=189.85µs max=46.9ms p(90)=1.11ms p(95)=1.88ms
http_req_sending...............: avg=107.93µs min=13.6µs med=76.9µs max=27.43ms p(90)=147.26µs p(95)=188.72µs
http_req_tls_handshaking.......: avg=11.39µs min=0s med=0s max=39.98ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=28.58ms min=0s med=25.61ms max=246.9ms p(90)=52.33ms p(95)=60.73ms
http_reqs......................: 111744 1287.295935/s
iteration_duration.............: avg=45.63s min=26.8s med=50.21s max=55.45s p(90)=54.19s p(95)=54.62s
iterations.....................: 72 0.829443/s
vus............................: 8 min=3 max=50
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 121, 508, 755, 899, 1012, 1007, 1156, 1177, 1096, 1180, 1224, 1237, 1208, 1244, 1295, 1437, 1445, 1345, 1338, 1405, 1378, 1380, 1411, 1293, 1420, 1441, 1451, 1365, 1264, 1439, 1384, 1584, 1241, 1361, 1319, 1427, 1398, 1362, 1320, 1448, 1482, 1458, 1311, 1256, 1399, 1363, 1345, 1259, 1346, 1443, 1499, 1445, 1438, 1451, 1425, 1472, 1479, 1367, 1322, 1450, 1414, 1360, 1355, 1457, 1326, 1411, 1363, 1350, 1277, 1279, 1168, 1216, 1198, 1256, 1314, 1248, 1236, 1192, 1183, 1227, 1263, 1357, 1148, 1141, 1168, 1127, 900, 25 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 49, 50, 49, 50, 50, 49, 49, 50, 50, 50, 50, 50, 50, 49, 50, 49, 50, 49, 49, 47, 45, 42, 40, 38, 32, 28, 25, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 20, 20, 18, 16, 15, 11, 8 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 8, 8, 9, 11, 13, 16, 16, 18, 23, 23, 25, 27, 30, 32, 33, 32, 34, 37, 37, 35, 36, 36, 35, 38, 35, 35, 34, 36, 39, 34, 36, 31, 40, 37, 38, 35, 36, 36, 36, 35, 33, 34, 38, 39, 36, 36, 37, 40, 37, 34, 33, 34, 35, 34, 35, 34, 33, 37, 37, 35, 35, 35, 34, 30, 32, 28, 27, 23, 21, 19, 18, 18, 18, 17, 17, 17, 18, 18, 19, 18, 16, 14, 15, 14, 12, 9, 7, 6 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.02, 0.03, 0.34, 0.45, 0.47, 0.48, 0.46, 0.49, 0.47, 0.48, 0.46, 0.48, 0.51, 0.49, 0.46, 0.46, 0.43, 0.42, 0.41 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.01, 0.5, 0.53, 0.51, 0.5, 0.53, 0.49, 0.51, 0.51, 0.52, 0.51, 0.47, 0.49, 0.52, 0.52, 0.54, 0.57, 0.56 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.16, 0.23, 0.26, 0.29, 0.29, 0.29, 0.29, 0.28, 0.29, 0.27, 0.3, 0.3, 0.3, 0.27, 0.24, 0.24, 0.25, 0.1 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.08, 0.12, 0.13, 0.16, 0.14, 0.15, 0.16, 0.16, 0.15, 0.15, 0.16, 0.15, 0.15, 0.15, 0.14, 0.14, 0.13, 0.06 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
Huge gap now: NestJS is the clear winner so far. The native event loop seems to work miracles. It's time to test it against compiled languages.
Spring Boot
Spring Boot PgSQL scenario 1
Iteration creation rate = 30/s
checks.........................: 100.00% ✓ 91851 ✗ 0
data_received..................: 1.6 GB 26 MB/s
data_sent......................: 7.8 MB 129 kB/s
http_req_blocked...............: avg=16.33µs min=191ns med=419ns max=71.26ms p(90)=723ns p(95)=925ns
http_req_connecting............: avg=978ns min=0s med=0s max=19.89ms p(90)=0s p(95)=0s
http_req_duration..............: avg=14.04ms min=2.37ms med=12.32ms max=223.11ms p(90)=24.19ms p(95)=28.67ms
{ expected_response:true }...: avg=14.04ms min=2.37ms med=12.32ms max=223.11ms p(90)=24.19ms p(95)=28.67ms
http_req_failed................: 0.00% ✓ 0 ✗ 91851
http_req_receiving.............: avg=1.76ms min=19.18µs med=758.76µs max=63.89ms p(90)=4.48ms p(95)=6.73ms
http_req_sending...............: avg=147.52µs min=21.29µs med=51.49µs max=43.07ms p(90)=130.77µs p(95)=286.91µs
http_req_tls_handshaking.......: avg=14.4µs min=0s med=0s max=47.91ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=12.12ms min=0s med=10.43ms max=220.97ms p(90)=21.23ms p(95)=24.99ms
http_reqs......................: 91851 1518.447027/s
iteration_duration.............: avg=741.16ms min=485.58ms med=732.29ms max=1.18s p(90)=865.14ms p(95)=909.61ms
iterations.....................: 1801 29.773471/s
vus............................: 25 min=17 max=29
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 353, 1407, 1575, 1522, 1483, 1562, 1587, 1578, 1491, 1521, 1545, 1561, 1523, 1493, 1392, 1604, 1609, 1526, 1554, 1493, 1547, 1558, 1531, 1484, 1511, 1530, 1606, 1548, 1479, 1459, 1574, 1582, 1575, 1481, 1439, 1615, 1304, 1567, 1571, 1530, 1610, 1604, 1516, 1523, 1433, 1630, 1503, 1532, 1557, 1492, 1559, 1577, 1521, 1497, 1446, 1583, 1566, 1509, 1424, 1514, 1385 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 20, 23, 24, 25, 25, 24, 20, 20, 22, 21, 19, 18, 21, 22, 26, 20, 21, 22, 20, 21, 20, 18, 20, 21, 21, 21, 18, 21, 21, 23, 22, 19, 19, 21, 22, 19, 29, 26, 27, 25, 22, 18, 21, 24, 24, 21, 21, 23, 21, 23, 20, 17, 19, 23, 22, 20, 21, 22, 25, 25 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 11, 14, 15, 15, 16, 16, 14, 13, 14, 15, 13, 12, 12, 14, 16, 15, 14, 13, 14, 14, 13, 13, 12, 14, 15, 14, 13, 12, 14, 15, 14, 13, 12, 13, 15, 14, 16, 18, 17, 17, 15, 13, 12, 14, 15, 14, 13, 13, 14, 14, 13, 12, 12, 13, 15, 14, 13, 13, 16, 17, 14 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.02, 0.02, 0.06, 0.29, 0.29, 0.29, 0.29, 0.28, 0.3, 0.27, 0.28, 0.28, 0.29, 0.29, 0.26, 0.03, 0.02, 0.02, 0.02 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.01, 0.04, 0.22, 0.22, 0.21, 0.21, 0.21, 0.21, 0.22, 0.22, 0.21, 0.21, 0.21, 0.21, 0.01, 0.01, 0.02, 0.01 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.05, 0.56, 0.6, 0.58, 0.59, 0.58, 0.58, 0.56, 0.59, 0.59, 0.59, 0.57, 0.57, 0.04, 0.03, 0.03, 0.04, 0.03 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.03, 0.28, 0.29, 0.28, 0.28, 0.28, 0.3, 0.3, 0.27, 0.28, 0.27, 0.29, 0.3, 0.02, 0.02, 0.02, 0.01, 0.02 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
End of debate, Spring Boot destroys the competition in the 1st scenario. Moreover, the database is the bottleneck, and the Java runtime is practically idling here. JPA Hibernate was difficult to tune for optimal performance, though: the magic `@BatchSize` annotation was the key, merging the N+1 queries into 1+1 queries. Without it, Spring Boot performed 3 times slower!
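For reference, here is a minimal sketch of how `@BatchSize` is typically applied to a lazy Hibernate collection. The `Article`/`Comment` entities and the batch size of 50 are illustrative assumptions, not copied from the actual benchmark code.

```java
// Minimal sketch of Hibernate's @BatchSize on a lazy collection.
// Entity and field names are illustrative, not taken from the benchmark code.
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.ManyToOne;
import jakarta.persistence.OneToMany;
import org.hibernate.annotations.BatchSize;

import java.util.ArrayList;
import java.util.List;

@Entity
public class Article {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // Without @BatchSize, iterating over N articles and touching their comments
    // fires one SELECT per article (the classic N+1 problem). With @BatchSize,
    // Hibernate loads the comments of up to 50 articles at once using a single
    // "WHERE article_id IN (?, ?, ...)" query.
    @OneToMany(mappedBy = "article", fetch = FetchType.LAZY)
    @BatchSize(size = 50)
    private List<Comment> comments = new ArrayList<>();
}

@Entity
class Comment {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    private Article article;
}
```

With a batch size at least as large as the page size returned by the API, the lazy loads collapse into the 1+1 query pattern mentioned above.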
Spring Boot PgSQL scenario 2
Iteration creation rate = 10/s
checks.........................: 100.00% ✓ 225040 ✗ 0
data_received..................: 921 MB 11 MB/s
data_sent......................: 19 MB 235 kB/s
dropped_iterations.............: 456 5.549905/s
http_req_blocked...............: avg=6.76µs min=202ns med=389ns max=52.58ms p(90)=662ns p(95)=859ns
http_req_connecting............: avg=275ns min=0s med=0s max=8.36ms p(90)=0s p(95)=0s
http_req_duration..............: avg=16.7ms min=1.6ms med=13.62ms max=237.91ms p(90)=32.02ms p(95)=39.52ms
{ expected_response:true }...: avg=16.7ms min=1.6ms med=13.62ms max=237.91ms p(90)=32.02ms p(95)=39.52ms
http_req_failed................: 0.00% ✓ 0 ✗ 225040
http_req_receiving.............: avg=1.91ms min=16.76µs med=886.69µs max=211.62ms p(90)=4.52ms p(95)=7.22ms
http_req_sending...............: avg=85.38µs min=17.86µs med=45.75µs max=77.73ms p(90)=86.49µs p(95)=120.1µs
http_req_tls_handshaking.......: avg=5.83µs min=0s med=0s max=51.92ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=14.71ms min=0s med=11.67ms max=222.78ms p(90)=28.88ms p(95)=36.06ms
http_reqs......................: 225040 2738.92674/s
iteration_duration.............: avg=26.14s min=22.07s med=26.88s max=28.52s p(90)=28.02s p(95)=28.16s
iterations.....................: 145 1.764772/s
vus............................: 7 min=7 max=50
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 33, 1444, 2295, 2566, 2449, 2563, 2918, 2891, 2886, 2676, 2674, 2943, 2975, 2971, 2757, 2645, 2966, 2967, 2925, 2622, 2678, 2888, 2984, 2972, 2667, 2639, 2896, 2974, 2853, 2760, 2747, 2897, 2952, 3026, 2633, 2569, 2890, 3020, 2701, 2521, 2680, 2922, 3013, 2983, 2674, 2622, 2790, 2463, 2693, 2352, 2853, 2762, 3000, 2960, 2653, 2615, 2987, 2870, 2875, 2536, 2593, 2903, 2990, 2712, 2550, 2532, 2903, 2955, 2683, 2485, 2626, 2930, 3004, 2858, 2664, 2596, 2937, 2942, 2899, 2517, 2436, 2577, 2012 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 10, 20, 30, 40, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 49, 50, 50, 48, 50, 50, 50, 50, 48, 49, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 49, 49, 49, 50, 50, 50, 50, 47, 48, 49, 46, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 43, 38, 37, 30, 23, 7 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 7, 5, 7, 10, 15, 18, 17, 17, 17, 19, 19, 17, 17, 17, 18, 19, 17, 17, 17, 19, 19, 17, 17, 17, 19, 19, 17, 17, 17, 18, 18, 17, 16, 16, 19, 19, 17, 16, 19, 20, 19, 17, 16, 17, 19, 19, 18, 20, 19, 21, 17, 18, 16, 17, 19, 19, 17, 17, 17, 19, 19, 16, 15, 16, 18, 18, 15, 15, 17, 18, 17, 16, 15, 15, 17, 17, 15, 15, 13, 15, 13, 10, 6 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.03, 0.08, 0.37, 0.41, 0.42, 0.41, 0.42, 0.43, 0.41, 0.41, 0.4, 0.38, 0.41, 0.4, 0.42, 0.39, 0.41, 0.41 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.01, 0.05, 0.26, 0.26, 0.27, 0.25, 0.27, 0.26, 0.26, 0.26, 0.26, 0.26, 0.26, 0.27, 0.27, 0.27, 0.26, 0.24 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.1, 0.52, 0.56, 0.57, 0.58, 0.57, 0.57, 0.55, 0.56, 0.56, 0.52, 0.57, 0.55, 0.52, 0.55, 0.56, 0.54, 0.12 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.04, 0.26, 0.3, 0.31, 0.28, 0.3, 0.3, 0.31, 0.3, 0.31, 0.28, 0.3, 0.3, 0.31, 0.3, 0.31, 0.29, 0.07 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
Java may not offer the best DX for me, but it's a beast in terms of raw performance. Besides, the database is again the bottleneck, the only such case seen in this scenario across every framework tested! It was impossible to reach 100% Java runtime CPU usage, even with 4 CPU cores; it stayed at only 60-70% overall...
ASP.NET Core
ASP.NET Core PgSQL scenario 1
Iteration creation rate = 20/s
checks.........................: 100.00% ✓ 59109 ✗ 0
data_received..................: 1.3 GB 22 MB/s
data_sent......................: 5.2 MB 86 kB/s
dropped_iterations.............: 42 0.685501/s
http_req_blocked...............: avg=25.19µs min=212ns med=505ns max=57.73ms p(90)=940ns p(95)=1.13µs
http_req_connecting............: avg=1.4µs min=0s med=0s max=19.45ms p(90)=0s p(95)=0s
http_req_duration..............: avg=43.01ms min=2.75ms med=36.68ms max=278.96ms p(90)=83.54ms p(95)=99.8ms
{ expected_response:true }...: avg=43.01ms min=2.75ms med=36.68ms max=278.96ms p(90)=83.54ms p(95)=99.8ms
http_req_failed................: 0.00% ✓ 0 ✗ 59109
http_req_receiving.............: avg=1.6ms min=16.85µs med=545.28µs max=53.91ms p(90)=4.05ms p(95)=6.85ms
http_req_sending...............: avg=164.54µs min=13.27µs med=62.93µs max=35.42ms p(90)=166.95µs p(95)=376.28µs
http_req_tls_handshaking.......: avg=22.43µs min=0s med=0s max=52.29ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=41.24ms min=0s med=34.73ms max=275.91ms p(90)=81.52ms p(95)=97.35ms
http_reqs......................: 59109 964.745322/s
iteration_duration.............: avg=2.22s min=759.35ms med=2.33s max=3.24s p(90)=2.74s p(95)=2.85s
iterations.....................: 1159 18.916575/s
vus............................: 21 min=17 max=50
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 242, 879, 870, 1010, 1007, 1026, 960, 941, 994, 1037, 1020, 990, 884, 997, 1001, 1022, 1005, 921, 941, 979, 984, 969, 891, 966, 978, 1018, 942, 902, 955, 998, 994, 1009, 907, 969, 991, 975, 999, 925, 960, 975, 999, 1001, 917, 967, 977, 1004, 1000, 919, 954, 1001, 992, 996, 936, 963, 999, 994, 944, 922, 958, 999, 1002, 632 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 17, 22, 27, 26, 27, 26, 30, 33, 32, 32, 31, 34, 36, 43, 42, 39, 42, 43, 46, 50, 49, 48, 49, 50, 46, 44, 47, 49, 49, 48, 50, 43, 50, 45, 43, 44, 47, 49, 47, 42, 48, 48, 49, 47, 49, 45, 45, 47, 48, 48, 49, 49, 49, 48, 49, 47, 46, 48, 45, 44, 21 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 12, 18, 23, 25, 25, 26, 28, 31, 31, 31, 31, 32, 38, 37, 39, 39, 39, 45, 45, 48, 49, 49, 53, 51, 49, 43, 47, 51, 50, 48, 46, 46, 50, 50, 44, 46, 45, 50, 49, 50, 44, 45, 50, 49, 48, 47, 47, 48, 50, 47, 46, 48, 51, 51, 48, 49, 49, 51, 49, 47, 41, 24 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.02, 0.03, 0.34, 0.47, 0.51, 0.5, 0.49, 0.48, 0.48, 0.49, 0.49, 0.48, 0.48, 0.49, 0.26, 0.03, 0.03, 0.02, 0.02 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.01, 0.02, 0.13, 0.21, 0.18, 0.19, 0.19, 0.19, 0.2, 0.19, 0.19, 0.2, 0.2, 0.18, 0.1, 0.01, 0.01, 0.02, 0.01 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.04, 0.43, 0.76, 0.75, 0.77, 0.78, 0.79, 0.79, 0.78, 0.78, 0.77, 0.78, 0.77, 0.47, 0.03, 0.03, 0.03, 0.03, 0.03 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.15, 0.2, 0.22, 0.23, 0.22, 0.2, 0.21, 0.21, 0.21, 0.22, 0.22, 0.23, 0.13, 0.01, 0.02, 0.01, 0.01, 0.01 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
ASP.NET Core performs well here. EF Core is impressively efficient by default, without any tuning headaches.
ASP.NET Core PgSQL scenario 2
Iteration creation rate = 10/s
checks.........................: 100.00% ✓ 155200 ✗ 0
data_received..................: 939 MB 14 MB/s
data_sent......................: 14 MB 202 kB/s
dropped_iterations.............: 500 7.323443/s
http_req_blocked...............: avg=10.06µs min=181ns med=418ns max=74.82ms p(90)=812ns p(95)=997ns
http_req_connecting............: avg=437ns min=0s med=0s max=9.65ms p(90)=0s p(95)=0s
http_req_duration..............: avg=20.44ms min=1.5ms med=15.71ms max=278.6ms p(90)=41.79ms p(95)=52.23ms
{ expected_response:true }...: avg=20.44ms min=1.5ms med=15.71ms max=278.6ms p(90)=41.79ms p(95)=52.23ms
http_req_failed................: 0.00% ✓ 0 ✗ 155200
http_req_receiving.............: avg=1.52ms min=14.25µs med=653.02µs max=205.85ms p(90)=3.58ms p(95)=5.77ms
http_req_sending...............: avg=89.63µs min=12.82µs med=50.62µs max=36.57ms p(90)=102.45µs p(95)=140.6µs
http_req_tls_handshaking.......: avg=8.79µs min=0s med=0s max=73.62ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=18.82ms min=0s med=14.01ms max=278.29ms p(90)=39.41ms p(95)=49.76ms
http_reqs......................: 155200 2273.196786/s
iteration_duration.............: avg=31.96s min=28.57s med=32.03s max=34.2s p(90)=33.6s p(95)=33.75s
iterations.....................: 100 1.464689/s
vus............................: 8 min=8 max=50
vus_max........................: 50 min=50 max=50
{{< tabs >}} {{< tab tabName="Req/s" >}}
{{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [ 143, 1375, 1715, 1720, 1816, 1913, 2037, 2110, 2211, 2142, 2381, 2448, 2417, 2323, 2194, 2387, 2478, 2279, 1817, 2181, 2463, 2412, 2454, 2260, 2247, 2405, 2495, 2530, 2173, 2184, 2480, 2476, 2423, 2146, 2209, 2382, 2513, 2344, 2336, 2266, 2513, 2446, 2496, 2382, 2214, 2371, 2477, 2444, 2269, 2210, 2439, 2498, 2450, 2349, 2195, 2448, 2508, 2406, 2316, 2219, 2528, 2503, 2470, 2285, 2179, 2424, 2374, 2224, 1278 ] } ] {{< /chart >}}
{{< /tab >}}
{{< tab tabName="Req duration" >}}
{{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [ 10, 20, 30, 40, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 49, 50, 50, 49, 50, 47, 49, 48, 48, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 49, 48, 46, 45, 41, 36, 30, 8 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [ 5, 6, 11, 17, 21, 25, 24, 24, 22, 23, 21, 20, 21, 21, 23, 21, 20, 22, 27, 23, 20, 21, 20, 23, 22, 21, 20, 20, 22, 23, 20, 20, 20, 23, 22, 20, 19, 21, 20, 22, 20, 21, 20, 21, 23, 21, 20, 20, 22, 23, 20, 20, 20, 21, 23, 20, 20, 21, 21, 22, 20, 19, 19, 20, 20, 17, 15, 13, 9 ] } ] {{< /chart >}}
{{< /tab >}} {{< tab tabName="CPU load" >}}
{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.02, 0.04, 0.42, 0.54, 0.6, 0.6, 0.63, 0.63, 0.62, 0.63, 0.63, 0.64, 0.64, 0.62, 0.62, 0.31, 0.02, 0.03 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.01, 0.03, 0.19, 0.25, 0.3, 0.3, 0.31, 0.31, 0.3, 0.31, 0.3, 0.31, 0.31, 0.31, 0.31, 0.16, 0.01, 0.01 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="5" >}} [ { label: 'User', data: [ 0.03, 0.03, 0.4, 0.49, 0.56, 0.55, 0.56, 0.56, 0.54, 0.55, 0.56, 0.55, 0.54, 0.55, 0.55, 0.31, 0.03, 0.04, 0.03 ], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [ 0.02, 0.02, 0.23, 0.3, 0.32, 0.29, 0.32, 0.32, 0.32, 0.34, 0.32, 0.34, 0.34, 0.33, 0.32, 0.18, 0.01, 0.02, 0.02 ], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}}
{{< /tab >}} {{< /tabs >}}
Not that far from the Java variant, just a bit behind. But as the .NET workers are fully loaded here, contrary to Spring Boot which is limited by the database, Java stays by far the clear winner for raw performance (at the cost of some memory, obviously).
Conclusion
Here are the final req/s results for each framework. I chose to take the MySQL results for PHP.
{{< chart type="timeseries" title="Scenario 1" >}} [ { label: 'Laravel', borderColor: '#c2410c', backgroundColor: '#c2410c', data: [ 39, 87, 91, 91, 108, 105, 117, 111, 113, 121, 125, 125, 113, 130, 125, 111, 129, 120, 120, 122, 113, 128, 115, 117, 122, 122, 119, 114, 128, 131, 119, 129, 112, 113, 127, 129, 111, 127, 94, 127, 133, 128, 110, 112, 123, 132, 96, 125, 99, 126, 130, 137, 116, 122, 132, 145, 98, 128, 112, 124, 131, 123, 108, 127, 118, 121, 126, 106, 114, 95, 102, 14 ] }, { label: 'Symfony', borderColor: '#ffffff', backgroundColor: '#ffffff', data: [ 21, 134, 154, 146, 158, 153, 159, 155, 146, 156, 157, 152, 155, 138, 159, 157, 154, 160, 147, 149, 156, 157, 159, 138, 155, 151, 162, 155, 138, 160, 159, 150, 148, 153, 156, 147, 154, 161, 144, 152, 157, 149, 155, 145, 155, 158, 148, 162, 141, 148, 160, 149, 167, 135, 154, 163, 151, 154, 144, 152, 158, 158, 160, 151, 156, 168, 155, 164, 71 ] }, { label: 'FastAPI', borderColor: '#0f766e', backgroundColor: '#0f766e', data: [ 270, 514, 530, 535, 552, 541, 524, 493, 545, 555, 560, 539, 519, 545, 540, 531, 525, 514, 547, 540, 537, 533, 485, 511, 534, 525, 508, 500, 550, 527, 538, 516, 500, 542, 532, 530, 504, 508, 540, 538, 553, 537, 497, 560, 517, 578, 559, 487, 551, 546, 538, 531, 517, 518, 578, 559, 521, 516, 556, 567, 517, 517, 351 ] }, { label: 'NestJS', borderColor: '#b91c1c', backgroundColor: '#b91c1c', data: [ 273, 496, 577, 588, 599, 624, 621, 610, 641, 638, 641, 614, 585, 580, 600, 604, 601, 571, 606, 663, 643, 572, 596, 585, 616, 650, 680, 615, 612, 611, 617, 572, 587, 593, 605, 633, 633, 573, 646, 645, 650, 570, 629, 653, 691, 650, 580, 555, 590, 646, 565, 585, 638, 594, 567, 555, 602, 570, 641, 648, 601, 630, 8 ] }, { label: 'Spring Boot', borderColor: '#15803d', backgroundColor: '#15803d', data: [ 353, 1407, 1575, 1522, 1483, 1562, 1587, 1578, 1491, 1521, 1545, 1561, 1523, 1493, 1392, 1604, 1609, 1526, 1554, 1493, 1547, 1558, 1531, 1484, 1511, 1530, 1606, 1548, 1479, 1459, 1574, 1582, 1575, 1481, 1439, 1615, 1304, 1567, 1571, 1530, 1610, 1604, 1516, 1523, 1433, 1630, 1503, 1532, 1557, 1492, 1559, 1577, 1521, 1497, 1446, 1583, 1566, 1509, 1424, 1514, 1385 ] }, { label: 'ASP.NET Core', borderColor: '#6d28d9', backgroundColor: '#6d28d9', data: [ 242, 879, 870, 1010, 1007, 1026, 960, 941, 994, 1037, 1020, 990, 884, 997, 1001, 1022, 1005, 921, 941, 979, 984, 969, 891, 966, 978, 1018, 942, 902, 955, 998, 994, 1009, 907, 969, 991, 975, 999, 925, 960, 975, 999, 1001, 917, 967, 977, 1004, 1000, 919, 954, 1001, 992, 996, 936, 963, 999, 994, 944, 922, 958, 999, 1002, 632 ] } ] {{< /chart >}}
{{< chart type="timeseries" title="Scenario 2" >}} [ { label: 'Laravel', borderColor: '#c2410c', backgroundColor: '#c2410c', data: [ 34, 39, 124, 158, 206, 214, 245, 248, 271, 280, 291, 287, 289, 307, 318, 324, 307, 304, 318, 317, 329, 315, 309, 340, 338, 339, 325, 323, 341, 344, 345, 326, 330, 350, 340, 348, 334, 336, 347, 343, 354, 328, 324, 339, 357, 347, 342, 328, 337, 348, 357, 340, 329, 352, 344, 347, 336, 345, 341, 356, 353, 340, 344, 352, 339, 353, 340, 340, 347, 344, 344, 338, 324, 352, 349, 348, 337, 333, 336, 345, 355, 338, 336, 348, 345, 346, 341, 339, 342, 347 ] }, { label: 'Symfony', borderColor: '#ffffff', backgroundColor: '#ffffff', data: [ 1, 57, 181, 219, 275, 284, 319, 335, 336, 376, 376, 367, 367, 366, 406, 403, 408, 389, 378, 412, 411, 416, 394, 381, 416, 413, 419, 391, 393, 413, 418, 420, 404, 400, 424, 413, 415, 400, 399, 427, 420, 400, 399, 404, 417, 421, 426, 397, 389, 408, 418, 418, 399, 405, 414, 419, 416, 405, 394, 415, 405, 407, 399, 409, 395, 408, 416, 405, 392, 421, 397, 421, 399, 407, 405, 414, 401, 402, 401, 405, 422, 411, 402, 405, 415, 416, 417, 398, 396, 240 ] }, { label: 'FastAPI', borderColor: '#0f766e', backgroundColor: '#0f766e', data: [ 19, 168, 465, 631, 720, 792, 752, 758, 757, 763, 839, 819, 777, 770, 821, 828, 760, 792, 741, 869, 862, 831, 846, 811, 820, 878, 792, 811, 815, 829, 804, 807, 842, 819, 791, 804, 744, 839, 810, 828, 841, 890, 841, 834, 804, 829, 821, 837, 852, 853, 853, 884, 871, 773, 774, 825, 794, 832, 825, 787, 807, 872, 837, 815, 826, 778, 811, 810, 823, 807, 786, 872, 886, 810, 808, 831, 824, 853, 770, 818, 793, 827, 795, 813, 795, 858, 869, 805, 846, 823, 563 ] }, { label: 'NestJS', borderColor: '#b91c1c', backgroundColor: '#b91c1c', data: [ 121, 508, 755, 899, 1012, 1007, 1156, 1177, 1096, 1180, 1224, 1237, 1208, 1244, 1295, 1437, 1445, 1345, 1338, 1405, 1378, 1380, 1411, 1293, 1420, 1441, 1451, 1365, 1264, 1439, 1384, 1584, 1241, 1361, 1319, 1427, 1398, 1362, 1320, 1448, 1482, 1458, 1311, 1256, 1399, 1363, 1345, 1259, 1346, 1443, 1499, 1445, 1438, 1451, 1425, 1472, 1479, 1367, 1322, 1450, 1414, 1360, 1355, 1457, 1326, 1411, 1363, 1350, 1277, 1279, 1168, 1216, 1198, 1256, 1314, 1248, 1236, 1192, 1183, 1227, 1263, 1357, 1148, 1141, 1168, 1127, 900, 25 ] }, { label: 'Spring Boot', borderColor: '#15803d', backgroundColor: '#15803d', data: [ 33, 1444, 2295, 2566, 2449, 2563, 2918, 2891, 2886, 2676, 2674, 2943, 2975, 2971, 2757, 2645, 2966, 2967, 2925, 2622, 2678, 2888, 2984, 2972, 2667, 2639, 2896, 2974, 2853, 2760, 2747, 2897, 2952, 3026, 2633, 2569, 2890, 3020, 2701, 2521, 2680, 2922, 3013, 2983, 2674, 2622, 2790, 2463, 2693, 2352, 2853, 2762, 3000, 2960, 2653, 2615, 2987, 2870, 2875, 2536, 2593, 2903, 2990, 2712, 2550, 2532, 2903, 2955, 2683, 2485, 2626, 2930, 3004, 2858, 2664, 2596, 2937, 2942, 2899, 2517, 2436, 2577, 2012 ] }, { label: 'ASP.NET Core', borderColor: '#6d28d9', backgroundColor: '#6d28d9', data: [ 143, 1375, 1715, 1720, 1816, 1913, 2037, 2110, 2211, 2142, 2381, 2448, 2417, 2323, 2194, 2387, 2478, 2279, 1817, 2181, 2463, 2412, 2454, 2260, 2247, 2405, 2495, 2530, 2173, 2184, 2480, 2476, 2423, 2146, 2209, 2382, 2513, 2344, 2336, 2266, 2513, 2446, 2496, 2382, 2214, 2371, 2477, 2444, 2269, 2210, 2439, 2498, 2450, 2349, 2195, 2448, 2508, 2406, 2316, 2219, 2528, 2503, 2470, 2285, 2179, 2424, 2374, 2224, 1278 ] } ] {{< /chart >}}
To sum up, compiled languages always have a clear advantage when it comes to raw performance. But do you really need it?
Keep in mind that it shouldn't be the only criterion when choosing a web framework. DX also matters a lot; for example, Laravel remains a very nice candidate in this regard.
When it comes to compiled languages, I still personally prefer ASP.NET Core over Spring Boot because of the DX. The performance gap is negligible, it avoids that Java warmup feeling, and it keeps a reasonable memory footprint.
I stay open to any suggestions to improve my tests, especially on the PHP side. If you have any tips to improve the performance of some framework, or low-level PHP tuning, leave me a comment below!