--- title: "A 2024 benchmark of main Web APIs frameworks" date: 2023-12-30 tags: ["kubernetes", "docker", "load-testing", "k6", "webapi"] --- {{< lead >}} We'll be comparing the read performance of 6 Web APIs frameworks, sharing the same OpenAPI contract from [realworld app](https://github.com/gothinkster/realworld), a medium-like clone, implemented under multiple languages (PHP, Python, Javascript, Java and C#). {{< /lead >}} This is not a basic synthetic benchmark, but a real world benchmark with DB data tests, and multiple scenarios. This post may be updated when new versions of frameworks will be released or any suggestions for performance related improvement in below commentary section. A state of the art of real world benchmarks comparison of Web APIs is difficult to achieve and very time-consuming as it forces to master each framework. As performance can highly dependent of: - Code implementation, all made by my own - Fine-tuning for each runtime, so I mostly take the default configuration Now that's said, let's fight ! ## The contenders We'll be using the very last up-to-date stable versions of the frameworks, and the latest stable version of the runtime. | Framework & Source code | Runtime | ORM | Tested Database | | --------------------------------------------------------------------------------------------------------------------------------------------- | ----------- | -------------- | ------------------ | | [Laravel 10](https://github.com/adr1enbe4udou1n/laravel-realworld-example-app) ([api](https://laravelrealworld.okami101.io/api/)) | PHP 8.3 | Eloquent | MySQL & PostgreSQL | | [Symfony 7](https://github.com/adr1enbe4udou1n/symfony-realworld-example-app) ([api](https://symfonyrealworld.okami101.io/api/)) | PHP 8.3 | Doctrine | MySQL & PostgreSQL | | [FastAPI](https://github.com/adr1enbe4udou1n/fastapi-realworld-example-app) ([api](https://fastapirealworld.okami101.io/api/)) | Python 3.12 | SQLAlchemy 2.0 | PostgreSQL | | [NestJS 10](https://github.com/adr1enbe4udou1n/nestjs-realworld-example-app) ([api](https://nestjsrealworld.okami101.io/api/)) | Node 20 | Prisma 5 | PostgreSQL | | [Spring Boot 3.2](https://github.com/adr1enbe4udou1n/spring-boot-realworld-example-app) ([api](https://springbootrealworld.okami101.io/api/)) | Java 21 | Hibernate 6 | PostgreSQL | | [ASP.NET Core 8](https://github.com/adr1enbe4udou1n/aspnetcore-realworld-example-app) ([api](https://aspnetcorerealworld.okami101.io/api/)) | .NET 8.0 | EF Core 8 | PostgreSQL | Each project are: - Using the same OpenAPI contract - Fully tested and fonctional against same [Postman collection](https://github.com/gothinkster/realworld/blob/main/api/Conduit.postman_collection.json) - Highly tooled with high code quality in mind (static analyzers, formatter, linters, good code coverage, etc.) - Share roughly the same amount of DB datasets, 50 users, 500 articles, 5000 comments, generated by faker-like library for each language - Avoiding N+1 queries with eager loading (normally) - Containerized with Docker, and deployed on a monitored Docker Swarm cluster ### Side note on PHP configuration Note as I tested against PostgreSQL for all frameworks as main Database, but I added MySQL for Laravel and Symfony too, just by curiosity, and because of simplicity of PHP for switching database without changing code base, as both DB drivers integrated into base PHP Docker image. It allows to have an interesting Eloquent VS Doctrine ORM comparison for each database. 
{{< alert >}}
I enabled OPcache and used the simple Apache PHP Docker image, as it's the most straightforward configuration for containerized PHP apps. I tested [FrankenPHP](https://frankenphp.dev/), which seems promising at first glance, but its performance results were just far lower than Apache's, even with worker mode (tried with Symfony Runtime and Laravel Octane)...
{{< /alert >}}

## The target hardware

We'll be running all Web API projects on a Docker Swarm cluster, where each node has 2 dedicated CPU cores (for stable performance) and 8 GB of RAM. Traefik is used as a reverse proxy, load balancing the requests across the replicas of each node.

{{< mermaid >}}
flowchart TD
client((k6))
client -- Port 80 443 --> traefik-01
subgraph manager-01
traefik-01{Traefik SSL}
end
subgraph worker-01
app-01([Conduit replica 1])
traefik-01 --> app-01
end
subgraph worker-02
app-02([Conduit replica 2])
traefik-01 --> app-02
end
subgraph storage-01
DB[(MySQL or PostgreSQL)]
app-01 --> DB
app-02 --> DB
end
{{< /mermaid >}}

The Swarm cluster is fully monitored with [Prometheus](https://prometheus.io/) and [Grafana](https://grafana.com/), allowing relevant performance results to be collected.

## The scenarios

We'll be using [k6](https://k6.io/) to run the tests, with the [constant-arrival-rate executor](https://k6.io/docs/using-k6/scenarios/executors/constant-arrival-rate/) for progressive load testing, following 2 different scenarios:

- **Scenario 1**: fetch all articles, following the pagination
- **Scenario 2**: fetch all articles, call each single article by its slug, fetch the associated comments of each article, and fetch the profile of each related author

Each scenario runs for 1 minute, with a 30-second graceful stop to let the last started iterations finish. A run with even a single test failure, i.e. any response status other than 200 or any JSON parsing error, is not accepted. The **Iteration creation rate** (rate / timeUnit) is chosen to obtain the highest possible request rate without any test failure, as sketched just below.
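This acceptance criterion can be encoded directly in the k6 options, so that a run aborts as soon as a single check or request fails and the iteration creation rate stays the only knob to tune. Below is a minimal sketch of such a setup (not necessarily the exact options used for these runs; the real scenario bodies follow in the next sections):

```js
import http from "k6/http";
import { check } from "k6";

export const options = {
  scenarios: {
    articles: {
      executor: 'constant-arrival-rate',
      duration: '1m',
      gracefulStop: '30s',
      rate: 5, // example value: raise it until the run is no longer failure-free
      timeUnit: '1s',
      preAllocatedVUs: 50,
    },
  },
  thresholds: {
    // Fail (and abort) the run on the first non-passing check or failed request.
    checks: [{ threshold: 'rate==1.00', abortOnFail: true }],
    http_req_failed: [{ threshold: 'rate==0.00', abortOnFail: true }],
  },
};

export default function () {
  // Placeholder iteration; the real scenarios are shown below.
  const res = http.get(`https://${__ENV.CONDUIT_URL}/api/tags`);
  check(res, { "status is 200": (r) => r.status == 200 });
}
```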
### Scenario 1

The interest of this scenario is to be very database intensive: it fetches all articles, authors, and favorites, following the pagination, with a couple of SQL queries. Note that each implementation normally uses eager loading to avoid N+1 queries, which can have a high influence on this test.

```js
import http from "k6/http";
import { check } from "k6";

export const options = {
  scenarios: {
    articles: {
      env: { CONDUIT_URL: '' },
      duration: '1m',
      executor: 'constant-arrival-rate',
      rate: '',
      timeUnit: '1s',
      preAllocatedVUs: 50,
    },
  },
};

export default function () {
  const apiUrl = `https://${__ENV.CONDUIT_URL}/api`;

  const limit = 10;
  let offset = 0;
  let articles = [];

  // Walk through the whole articles collection, page by page.
  do {
    const articlesResponse = http.get(`${apiUrl}/articles?limit=${limit}&offset=${offset}`);
    check(articlesResponse, {
      "status is 200": (r) => r.status == 200,
    });

    articles = articlesResponse.json().articles;
    offset += limit;
  } while (articles && articles.length >= limit);
}
```

Here is the expected JSON response format:

```json
{
  "articles": [
    {
      "title": "Laboriosam aliquid dolore sed dolore",
      "slug": "laboriosam-aliquid-dolore-sed-dolore",
      "description": "Rerum beatae est enim cum similique.",
      "body": "Voluptas maxime incidunt...",
      "createdAt": "2023-12-23T16:02:03.000000Z",
      "updatedAt": "2023-12-23T16:02:03.000000Z",
      "author": {
        "username": "Devin Swift III",
        "bio": "Nihil impedit totam....",
        "image": "https:\/\/randomuser.me\/api\/portraits\/men\/47.jpg",
        "following": false
      },
      "tagList": [
        "aut",
        "cumque"
      ],
      "favorited": false,
      "favoritesCount": 5
    }
  ],
  //...
  "articlesCount": 500
}
```

The expected SQL queries (in pseudocode) needed to build this response:

```sql
SELECT * FROM articles LIMIT 10 OFFSET 0;
SELECT count(*) FROM articles;
SELECT * FROM users WHERE id IN ();
SELECT * FROM article_tag WHERE article_id IN ();
SELECT * FROM favorites WHERE article_id IN ();
```

{{< alert >}}
It can differ notably from one ORM to another, as some prefer to reduce the number of queries by using subselects, but it's a good approximation.
{{< /alert >}}
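To give a concrete idea of how an ORM produces this kind of query plan, here is a rough eager-loading sketch using Prisma (the ORM of the NestJS implementation). The model and relation names (`article`, `author`, `tags`, `favoritedBy`) are illustrative and not taken from the actual repository:

```js
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function listArticles(limit = 10, offset = 0) {
  // One paginated query with its relations eagerly included, plus one count query,
  // instead of one extra query per article (the N+1 pattern).
  const [articles, articlesCount] = await Promise.all([
    prisma.article.findMany({
      skip: offset,
      take: limit,
      include: { author: true, tags: true, favoritedBy: true },
    }),
    prisma.article.count(),
  ]);
  return { articles, articlesCount };
}
```

Under the hood, `include` typically translates into one `WHERE ... IN (...)` query per relation on top of the paginated select, which is close to the pseudocode above.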
### Scenario 2

The interest of this scenario is to be mainly runtime intensive, by calling each endpoint of the API.

```js
import http from "k6/http";
import { check } from "k6";

export const options = {
  scenarios: {
    articles: {
      env: { CONDUIT_URL: '' },
      duration: '1m',
      executor: 'constant-arrival-rate',
      rate: '',
      timeUnit: '1s',
      preAllocatedVUs: 50,
    },
  },
};

export default function () {
  const apiUrl = `https://${__ENV.CONDUIT_URL}.sw.okami101.io/api`;

  const limit = 10;
  let offset = 0;

  const tagsResponse = http.get(`${apiUrl}/tags`);
  check(tagsResponse, {
    "status is 200": (r) => r.status == 200,
  });

  let articles = [];

  // Walk through every article, its comments and its author's profile, page by page.
  do {
    const articlesResponse = http.get(`${apiUrl}/articles?limit=${limit}&offset=${offset}`);
    check(articlesResponse, {
      "status is 200": (r) => r.status == 200,
    });

    articles = articlesResponse.json().articles;

    for (let i = 0; i < articles.length; i++) {
      const article = articles[i];

      const articleResponse = http.get(`${apiUrl}/articles/${article.slug}`);
      check(articleResponse, {
        "status is 200": (r) => r.status == 200,
      });

      const commentsResponse = http.get(`${apiUrl}/articles/${article.slug}/comments`);
      check(commentsResponse, {
        "status is 200": (r) => r.status == 200,
      });

      const authorsResponse = http.get(`${apiUrl}/profiles/${article.author.username}`);
      check(authorsResponse, {
        "status is 200": (r) => r.status == 200,
      });
    }

    offset += limit;
  } while (articles && articles.length >= limit);
}
```

## The results

### Laravel

#### Laravel MySQL scenario 1

{{< tabs >}}
{{< tab tabName="Counters & Req/s" >}}

| Metric                  | Value     |
| ----------------------- | --------- |
| Iteration creation rate | **5/s**   |
| Total requests          | **8160**  |
| Total iterations        | **160**   |
| Average max req/s       | **130**   |
| p(90) req duration      | **584ms** |

{{< chart type="timeseries" title="Req/s count" >}}
[ { label: 'Req/s', data: [54,93,105,93,96,106,111,130,123,116,115,122,124,122,129,115,111,119,121,108,110,109,135,97,109,120,107,105,103,125,115,125,126,126,113,117,114,131,134,84,115,116,112,89,116,120,121,125,120,119,112,112,124,115,138,89,113,137,98,123,111,125,120,126,123,102,124,111,99,107,89,91] } ]
{{< /chart >}}

{{< /tab >}}
{{< tab tabName="Req duration" >}}

{{< chart type="timeseries" title="VUs count" >}}
[ { label: 'VUs', data: [5,10,15,20,25,28,32,36,41,45,50,50,50,50,50,49,49,49,50,48,49,49,49,49,50,49,49,49,48,50,48,49,50,49,50,49,50,50,50,50,50,50,49,50,50,50,49,50,49,48,47,50,48,48,50,50,50,50,50,50,48,47,46,44,41,39,34,31,29,22,12] } ]
{{< /chart >}}

{{< chart type="timeseries" title="Request duration in ms" >}}
[ { label: 'Duration (ms)', data: [41,75,121,168,217,240,263,273,308,341,396,394,405,419,388,443,450,387,443,458,457,409,419,436,438,464,455,425,455,472,415,398,406,391,422,434,417,398,387,410,567,434,446,425,476,436,407,401,404,416,449,406,415,399,411,520,420,402,437,424,439,405,378,367,368,364,330,280,301,257,218,103] } ]
{{< /chart >}}

{{< /tab >}}
{{< tab tabName="CPU load" >}}

{{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}}
[ { label: 'User', data: [0.03,0.14,0.36,0.37,0.36,0.35,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.04,0.09,0.08,0.09,0.07,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ]
{{< /chart >}}

{{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}}
[ { label: 'User', data: [0.04,0.88,0.92,0.93,0.92,0.92,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.07,0.08,0.07,0.08,0.08,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ]
{{< /chart >}}
{{< /tab >}} {{< /tabs >}} As expected here, database is the bottleneck. We'll get slow response time at full load (> 500ms). #### Laravel MySQL scenario 2 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | --------- | | Iteration creation rate | **1/2/s** | | Total requests | **29015** | | Total iterations | **5** | | Average max req/s | **360** | | p(90) req duration | **117ms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [1,38,40,137,150,211,216,255,247,269,285,299,294,291,295,322,322,327,308,314,329,329,341,324,318,336,341,344,328,329,349,347,353,329,333,352,360,351,339,330,355,359,353,328,340,355,348,355,340,334,356,347,356,346,337,347,358,353,336,341,347,347,350,328,345,355,351,351,349,341,354,351,353,340,343,343,353,362,336,333,353,344,362,338,335,353,353,355,339,320,304] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13,14,14,15,15,16,16,17,17,18,18,19,19,20,20,21,21,22,22,23,23,24,24,25,24,25,25,26,25,26,26,27,27,28,28,29,29,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,27,27,27,26,26,26,26,26,26,26,26,26] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [30,26,26,14,13,13,13,15,16,18,17,19,20,24,23,24,24,27,29,31,30,32,32,36,37,38,37,40,42,45,42,45,45,51,50,50,50,53,56,57,58,58,60,65,64,65,65,66,71,73,70,71,70,75,71,76,72,76,80,78,82,83,82,83,82,78,79,79,79,81,78,80,78,81,78,85,79,77,83,81,75,77,76,75,76,74,73,73,76,80,74] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.27,0.69,0.76,0.77,0.77,0.77,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.08,0.16,0.2,0.2,0.19,0.21,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.14,0.2,0.2,0.2,0.21,0.22,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.11,0.14,0.15,0.17,0.14,0.14,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} Now we have a very runtime intensive scenario, with workers as bottleneck, API is keeping up with a low response time (~100ms). 
#### Laravel PgSQL scenario 1 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | --------- | | Iteration creation rate | **5/s** | | Total requests | **11934** | | Total iterations | **234** | | Average max req/s | **180** | | p(90) req duration | **371ms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [64,138,161,163,174,170,164,172,183,176,174,161,181,179,181,177,174,182,178,184,183,171,183,177,193,168,161,188,179,187,172,173,187,175,188,174,170,183,181,185,172,175,179,185,183,180,167,186,180,183,173,169,189,181,181,176,174,177,182,188,174,168,181,172,190,176,169,180,30] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [5,10,14,17,20,22,26,29,31,34,37,40,43,45,48,50,50,49,50,49,50,49,50,50,48,47,49,49,49,49,50,50,50,50,49,50,50,48,48,48,47,45,50,46,48,49,49,50,50,48,50,49,47,50,49,48,49,50,48,50,47,44,41,40,31,27,22,7] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [39,53,74,94,100,119,137,155,168,175,199,216,237,246,252,277,289,272,274,271,277,279,273,274,267,276,265,280,277,262,283,284,275,278,271,276,286,279,268,270,276,274,266,263,256,274,287,272,276,271,277,291,270,266,271,275,273,283,272,264,278,267,239,238,197,172,150,92,37] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.03,0.35,0.83,0.81,0.83,0.82,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.07,0.15,0.15,0.16,0.15,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.03,0.3,0.33,0.33,0.33,0.3,0.04], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.39,0.44,0.44,0.43,0.43,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} Laravel performs slightly better than MySQL in this scenario, and we are not limited by database, contrary with MySQL. 
#### Laravel PgSQL scenario 2 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | --------- | | Iteration creation rate | **1/3/s** | | Total requests | **16219** | | Total iterations | **0** | | Average max req/s | **220** | | p(90) req duration | **128ms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [16,26,27,64,93,94,119,128,120,134,149,150,157,155,152,169,168,169,167,166,175,178,185,175,176,187,181,190,185,179,190,196,194,187,178,193,202,195,195,183,195,201,195,196,190,203,195,205,195,191,203,205,205,197,188,200,208,207,197,190,208,215,212,205,185,204,203,211,194,189,208,211,201,198,199,197,207,206,203,194,203,207,203,198,195,202,206,207,203,191,41] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [1,1,1,2,2,2,3,3,3,4,4,4,5,5,5,6,6,6,7,7,7,8,8,8,9,9,9,10,10,10,11,11,11,12,12,12,13,13,13,14,14,14,15,15,15,16,16,16,17,17,17,18,18,18,19,19,19,20,20,20,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21,21] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [44,38,37,27,21,21,22,23,24,28,27,26,30,32,32,34,36,35,40,42,40,43,43,46,50,48,49,51,54,55,56,56,56,63,67,62,62,67,66,74,71,70,75,76,78,77,82,77,85,89,84,85,88,91,96,97,92,94,103,104,99,99,97,104,110,107,102,100,105,109,105,99,104,105,107,106,100,102,103,107,104,102,103,104,106,105,103,101,103,108,102] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.05,0.49,0.64,0.7,0.74,0.72,0.6,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.1,0.12,0.14,0.14,0.14,0.12,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.09,0.28,0.34,0.35,0.37,0.39,0.03,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.13,0.36,0.41,0.44,0.46,0.47,0.02,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} Laravel performing slower than MySQL in this context. Workers and databases are both heavy loaded, and we didn't complete a single scenario iteration. 
### Symfony #### Symfony MySQL scenario 1 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | --------- | | Iteration creation rate | **5/s** | | Total requests | **10302** | | Total iterations | **202** | | Average max req/s | **160** | | p(90) req duration | **399s** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [4,105,150,161,153,142,157,151,155,153,138,151,154,154,154,146,153,153,156,153,141,154,155,161,153,136,147,160,159,156,140,156,156,155,160,142,149,158,156,156,134,149,165,147,156,146,153,153,160,149,148,149,159,150,157,143,155,154,162,159,137,157,155,160,158,143,160,157,134] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [5,10,14,16,20,22,26,29,33,37,40,43,47,49,50,50,48,50,50,50,49,49,50,47,50,48,47,49,48,50,50,50,50,49,50,50,50,50,50,50,48,49,48,49,48,48,50,50,50,50,48,49,50,50,50,49,49,48,47,47,46,41,37,31,27,22,16,5] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [44,38,58,82,98,133,139,162,182,202,257,268,274,287,317,340,325,332,303,315,365,328,308,332,302,351,324,308,311,315,340,321,317,328,313,335,324,327,307,336,350,332,319,312,313,341,314,313,323,315,350,326,313,313,325,352,321,318,317,307,331,305,280,246,211,191,161,119,75] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.02,0.03,0.32,0.32,0.32,0.31,0.12], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.01,0.14,0.16,0.15,0.16,0.06], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.03,0.13,0.93,0.95,0.95,0.94,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.02,0.07,0.05,0.05,0.06,0.03], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} It's slightly better than Laravel in this case, with a lower response time. 
#### Symfony MySQL scenario 2 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | --------- | | Iteration creation rate | **1/2/s** | | Total requests | **32086** | | Total iterations | **18** | | Average max req/s | **410** | | p(90) req duration | **41ms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [17,44,40,87,174,168,194,228,229,256,302,289,308,335,345,346,343,328,374,381,359,362,368,393,389,403,380,371,390,387,388,366,379,400,389,397,382,373,390,401,393,387,387,392,413,411,379,390,413,414,414,380,394,417,406,413,388,393,414,417,417,391,395,417,413,410,390,396,409,413,408,378,381,394,412,405,381,393,397,395,396,364,375,363,378,371,336,324,312,292,110] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [1,1,1,2,2,2,3,3,3,4,4,4,5,5,5,6,6,6,7,7,7,8,8,8,9,9,8,8,8,8,9,9,9,9,9,9,10,10,9,10,10,10,11,11,11,11,11,11,12,12,12,12,12,12,13,13,12,13,13,13,14,13,13,13,12,12,12,12,12,12,11,11,11,10,10,10,10,9,9,9,8,8,7,7,6,6,6,5,4,3] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [28,22,24,17,11,12,12,13,13,14,13,14,14,15,14,16,17,18,17,18,19,20,21,20,22,22,22,23,21,21,21,24,24,23,23,22,25,26,25,24,25,26,27,28,26,27,29,28,28,29,28,33,31,29,30,31,33,32,31,31,32,36,33,31,30,29,31,30,29,29,28,29,29,27,24,24,26,25,23,23,21,22,21,20,18,1] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.03,0.42,0.56,0.59,0.59,0.58,0.48,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.24,0.32,0.31,0.36,0.34,0.27,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.09,0.32,0.37,0.38,0.4,0.38,0.12,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.04,0.08,0.09,0.1,0.1,0.08,0.04,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} Symfony is able to handle the load, still better than Laravel in the same context. Let's see if it's able to keep up with the same performance with PostgreSQL. 
#### Symfony PgSQL scenario 1 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | --------- | | Iteration creation rate | **5/s** | | Total requests | **10302** | | Total iterations | **160** | | Average max req/s | **160** | | p(90) req duration | **379ms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [21,132,134,149,159,147,160,141,149,159,159,146,155,143,164,150,158,142,155,157,152,160,145,150,159,152,153,147,146,166,151,157,146,144,152,161,153,146,148,157,157,154,152,149,155,158,156,145,151,152,159,160,149,144,158,157,152,144,148,157,153,159,143,160,156,154,153,142,100] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [5,10,15,17,21,24,28,32,35,38,41,45,49,48,50,49,50,49,50,49,49,50,49,49,49,48,50,46,49,48,48,50,50,50,50,49,50,47,49,50,49,49,49,49,48,48,50,50,50,50,49,50,49,49,48,49,50,48,49,48,46,40,35,32,28,24,20,12] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [57,45,78,103,110,142,153,197,212,224,239,264,307,322,315,318,323,345,321,322,312,320,341,327,314,324,321,342,318,299,312,315,345,326,320,329,323,343,311,324,319,316,338,324,311,309,310,344,323,316,316,317,348,326,327,310,329,350,319,318,314,298,279,229,202,182,149,128,72] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.03,0.35,0.55,0.54,0.53,0.53,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.12,0.16,0.15,0.16,0.16,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.04,0.61,0.63,0.65,0.63,0.6,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.03,0.37,0.37,0.35,0.37,0.35,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} Symfony performs same with PostgreSQL and MySQL, and performing a little less than Laravel when using PostgreSQL. 
#### Symfony PgSQL scenario 2 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | --------- | | Iteration creation rate | **1/3/s** | | Total requests | **19633** | | Total iterations | **4** | | Average max req/s | **250** | | p(90) req duration | **95ms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [29,30,29,107,108,110,149,152,153,186,178,171,200,203,197,206,199,208,217,215,213,211,225,219,232,221,209,230,239,228,223,217,240,235,246,235,223,233,248,247,233,216,245,246,253,235,229,241,246,243,238,219,242,239,251,238,227,247,251,249,241,235,246,246,248,241,231,240,252,244,231,229,242,246,250,237,227,245,250,249,232,231,245,243,247,237,230,245,251,233] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [1,1,1,2,2,2,3,3,3,4,4,4,5,5,5,6,6,6,7,7,7,8,8,8,9,9,9,10,10,10,11,11,11,12,12,12,13,13,13,14,14,14,15,15,15,16,16,16,17,17,17,18,18,18,19,19,19,20,20,20,20,20,19,19,19,18,18,18,18,18,18,18,18,18,18,18,18,18,17,17,17,17,17,17,17,17,17,17,17,16] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [34,34,34,19,18,18,20,19,19,21,22,23,25,24,25,29,30,29,32,32,33,38,35,37,39,41,43,43,42,43,49,50,46,51,48,51,57,56,53,56,60,64,62,60,60,68,70,66,68,70,72,80,75,74,76,79,83,82,79,80,82,82,81,76,76,79,77,74,72,74,77,77,75,73,72,76,78,74,72,68,73,74,69,70,68,72,72,70,67,68] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.12,0.55,0.67,0.71,0.73,0.73,0.29,0.03,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.05,0.17,0.2,0.21,0.22,0.2,0.09,0.01,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.15,0.28,0.32,0.32,0.33,0.32,0.03,0.03,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.2,0.37,0.4,0.43,0.43,0.43,0.02,0.02,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} Now it performs clearly slower than with MySQL in same scenario. Slightly better than Laravel in same context. To summary the 2nd scenario give MySQL a good advantage against PostgreSQL **with PHP**. ### FastAPI As a side note here, uvicorn is limited to 1 CPU core, so I use 2 replicas on each worker to use all CPU cores. 
#### FastAPI PgSQL scenario 1 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | --------- | | Iteration creation rate | **10/s** | | Total requests | **30651** | | Total iterations | **601** | | Average max req/s | **550** | | p(90) req duration | **49ms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [2,385,495,476,462,502,534,518,496,480,513,520,520,509,473,539,491,483,516,463,526,522,520,512,503,545,478,541,468,521,519,489,530,469,479,513,515,495,513,491,508,523,548,483,500,526,505,527,519,496,506,541,504,507,478,508,535,521,488,480,543,379] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [7,8,9,11,11,11,11,12,12,12,11,12,11,12,12,12,13,14,15,15,14,13,15,14,12,13,12,13,15,12,14,15,16,16,16,16,16,18,17,17,16,16,15,18,14,16,15,15,17,16,16,16,17,16,16,15,15,16,16,17] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [25,14,15,18,21,22,21,21,22,24,23,22,23,23,25,24,24,27,26,29,28,27,27,27,27,25,28,24,27,27,25,27,28,30,33,33,32,31,31,34,33,32,29,31,33,30,33,31,31,32,31,29,30,31,31,31,31,30,32,34,32,22] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.03,0.02,0.63,0.66,0.64,0.64,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.01,0.01,0.13,0.13,0.15,0.14,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.03,0.03,0.39,0.37,0.38,0.38,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.02,0.13,0.13,0.16,0.15,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} Now we are talking, FastAPI outperforms above PHP frameworks, and database isn't the bottleneck anymore. 
#### FastAPI PgSQL scenario 2 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | --------- | | Iteration creation rate | **2/s** | | Total requests | **71394** | | Total iterations | **16** | | Average max req/s | **870** | | p(90) req duration | **113ms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [18,187,561,712,691,710,760,736,773,728,812,853,818,874,808,762,828,797,783,779,779,786,828,795,771,804,877,803,852,828,771,877,837,862,773,813,794,834,770,804,768,803,811,839,780,827,821,824,846,807,808,797,837,859,810,788,803,847,839,783,761,835,800,869,787,775,811,828,840,826,837,873,840,857,819,816,817,763,861,769,789,850,832,801,790,771,784,760,773,756,559] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,49,50,50,50,50,49,49,50,50,50,50,50,50,50,48,48,48,48,47,47,47,47,47,46,45,45,45,44,44,43,43,42,41,40,40,38,38,38] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [13,11,7,8,11,14,16,19,20,25,24,25,30,29,35,39,38,42,45,49,51,53,52,58,63,60,58,63,58,60,66,56,60,57,65,60,63,60,66,61,66,60,64,59,64,60,61,61,59,61,61,64,60,57,61,64,63,59,59,63,61,64,63,57,63,63,63,58,56,59,56,56,55,55,57,57,57,59,52,59,56,51,52,53,53,53,53,51,50,50,46] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.11,0.69,0.72,0.7,0.7,0.72,0.49,0.02], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.03,0.19,0.19,0.18,0.17,0.17,0.14,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.19,0.29,0.31,0.33,0.32,0.32,0.03,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.09,0.19,0.24,0.26,0.24,0.24,0.02,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} FastAPI performs around at least twice better than PHP main frameworks in every situation. I'm not sure that testing it on MySQL change anything. 
### NestJS #### NestJS PgSQL scenario 1 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | --------- | | Iteration creation rate | **15/s** | | Total requests | **37281** | | Total iterations | **731** | | Average max req/s | **700** | | p(90) req duration | **Xms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [111,508,571,566,569,558,544,672,589,628,607,610,555,527,586,596,568,598,581,601,630,595,625,615,623,601,620,685,621,569,579,600,672,643,577,663,695,715,581,576,584,605,605,659,638,594,627,583,603,622,642,606,589,618,584,635,642,592,548,568,653,617,237] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [15,22,28,37,43,49,50,50,50,46,50,50,49,46,47,50,50,49,49,49,49,49,49,48,49,49,50,50,47,49,50,46,48,50,48,49,48,50,49,50,48,49,49,48,49,48,50,47,47,46,48,49,48,46,47,48,50,50,48,43,27] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [14,25,36,48,62,75,86,73,83,76,78,79,88,93,82,78,86,83,85,81,74,84,79,77,76,82,79,70,78,83,84,82,72,74,86,74,68,64,84,83,84,78,82,74,71,85,77,83,81,78,73,78,83,78,81,79,73,81,89,89,66,45,24] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.06,0.05,0.42,0.43,0.44,0.42,0.04], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.08,0.56,0.53,0.51,0.55,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.03,0.2,0.22,0.24,0.22,0.1,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.15,0.17,0.17,0.18,0.07,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} It's slightly better than FastAPI, let's keep up on scenario 2. 
#### NestJS PgSQL scenario 2 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | ---------- | | Iteration creation rate | **3/s** | | Total requests | **105536** | | Total iterations | **68** | | Average max req/s | **1400** | | p(90) req duration | **53ms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [17,369,682,787,878,1048,1104,1102,1083,1147,1171,1246,1276,1182,1200,1281,1233,1302,1247,1249,1320,1382,1386,1362,1382,1357,1379,1423,1259,1296,1340,1341,1394,1264,1328,1446,1365,1356,1258,1326,1324,1466,1372,1206,1287,1352,1449,1322,1248,1367,1332,1341,1305,1264,1284,1362,1343,1428,1274,1319,1393,1440,1434,1228,1223,1349,1356,1421,1278,1269,1158,1215,1239,1068,1151,1192,1152,1210,1083,1132,1165,1154,1193,1035,984,765,36] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [3,6,9,12,15,18,21,24,27,30,33,36,39,42,45,48,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,49,50,50,49,50,50,50,50,50,50,49,49,50,50,50,50,49,49,49,50,49,46,44,43,40,40,36,32,29,24,18,18,18,18,18,18,18,18,18,18,17,15,12,9,4] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [20,8,8,12,13,14,16,19,22,23,25,26,28,33,35,35,39,38,40,40,37,36,36,37,36,37,36,35,40,39,37,37,36,39,37,35,36,37,40,37,38,34,36,41,39,36,34,37,40,36,38,37,37,40,39,36,37,35,38,39,36,34,32,35,35,30,29,25,26,23,20,15,14,17,16,15,15,15,16,16,15,14,12,12,10,7,5] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.03,0.43,0.47,0.45,0.45,0.45,0.35,0.02], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.54,0.52,0.52,0.52,0.52,0.57,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.08,0.37,0.39,0.37,0.38,0.34,0.17,0.04], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.01,0.27,0.31,0.32,0.31,0.28,0.11,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} Huge gap now, NestJS is the clear winner so far. The native even loop system seems to make miracles. It's time to test it against compiled language. 
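Before moving on to the compiled contenders, here is a minimal illustration (not code from the benchmarked app) of what the event loop buys NestJS here: every `await` on the database hands control back to the loop, so a single Node process keeps serving other requests instead of pinning a worker to each one, as a PHP-FPM or Apache worker would.

```js
import { createServer } from "node:http";

// Simulate a ~20ms database round-trip.
const fakeDbQuery = () =>
  new Promise((resolve) => setTimeout(() => resolve({ rows: [] }), 20));

createServer(async (req, res) => {
  const result = await fakeDbQuery(); // control returns to the event loop while waiting
  res.setHeader("content-type", "application/json");
  res.end(JSON.stringify(result)); // meanwhile, other requests are being accepted and served
}).listen(3000);
```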
### Spring Boot #### Spring Boot PgSQL scenario 1 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | --------- | | Iteration creation rate | **30/s** | | Total requests | **91851** | | Total iterations | **1801** | | Average max req/s | **1600** | | p(90) req duration | **33ms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [867,1438,1481,1409,1556,1589,1466,1434,1412,1510,1570,1553,1461,1540,1599,1614,1570,1522,1423,1640,1578,1615,1535,1476,1472,1525,1509,1367,1462,1513,1661,1541,1506,1452,1619,1589,1609,1510,1419,1534,1554,1622,1570,1515,1516,1550,1535,1492,1500,1578,1601,1577,1524,1398,1566,1568,1532,1517,1506,1579,905] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [17,23,26,29,28,26,28,33,37,36,35,35,36,39,35,32,30,29,33,32,28,26,24,25,29,27,30,36,38,38,34,35,35,37,34,31,28,30,34,35,33,30,28,26,30,29,26,29,31,28,26,24,24,30,29,28,26,25,28,24] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [10,12,16,18,18,17,18,21,23,24,23,23,24,24,23,20,19,20,22,19,18,16,16,17,18,19,19,23,25,25,22,22,23,25,22,21,19,19,22,22,21,19,18,19,18,18,18,19,20,19,17,16,16,19,18,17,17,17,18,17,14] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.01,0.03,0.28,0.28,0.28,0.28,0.05], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.01,0.02,0.2,0.22,0.22,0.21,0.03], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.01,0.03,0.58,0.59,0.62,0.61,0.04], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.01,0.02,0.3,0.31,0.3,0.3,0.03], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} End of debate, Spring Boot destroys competition for 1st scenario. Moreover, database is the bottleneck, and java runtime is clearly sleeping here. But JPA Hibernate was difficult to tune for optimal performance, and finally the magic [`@BatchSize`](https://docs.jboss.org/hibernate/orm/current/javadocs/org/hibernate/annotations/BatchSize.html) annotation was the key, allowing to merge n+1 queries into 1+1 queries. Without it, Spring Boot was performing 3 times slower ! 
#### Spring Boot PgSQL scenario 2 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | ---------- | | Iteration creation rate | **10/s** | | Total requests | **197104** | | Total iterations | **127** | | Average max req/s | **2900** | | p(90) req duration | **33ms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [142,1671,2230,2167,2456,2562,2715,2845,2609,2513,2777,2909,2835,2591,2503,2552,2921,2804,2567,2480,2722,2738,2767,2021,2521,2843,2937,2883,2521,2450,2743,2818,2784,2539,2487,2774,2797,2748,2558,2548,2796,2850,2820,2538,2507,2664,2893,2923,2657,2493,2894,2856,2801,2575,2505,2700,2859,2905,2573,2667,2703,2797,2684,2176,2328,2364,2638,2513,2413,2379,2614,2594,2623,2435,2385,2197,737] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [10,20,30,40,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,49,50,49,50,50,47,48,48,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,41,29,27,27,27,27,27,27,27,27,27,27,25,21,14 ] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [5,5,8,13,16,19,18,17,19,20,18,17,17,19,20,19,17,18,19,20,18,18,18,25,20,17,17,17,20,20,18,17,17,19,20,18,18,18,19,20,18,17,18,20,20,19,17,17,19,20,17,17,17,19,20,18,17,17,19,19,18,15,11,12,11,11,10,11,11,11,10,10,10,10,9,7,4] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.01,0.13,0.39,0.37,0.4,0.4,0.38,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.01,0.08,0.26,0.25,0.25,0.24,0.22,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.01,0.04,0.58,0.58,0.57,0.58,0.52,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.01,0.02,0.3,0.29,0.3,0.31,0.29,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} Java is maybe not the best DX experience for me, but it's a beast in terms of raw performance. Besides, we'll again have database bottleneck, which is the only case seen in this scenario on every framework tested ! Impossible to reach 100% java runtime CPU usage, even with 4 CPU cores, staying only at 60-70% overall... 
### ASP.NET Core #### ASP.NET Core PgSQL scenario 1 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | --------- | | Iteration creation rate | **20** | | Total requests | **57936** | | Total iterations | **1136** | | Average max req/s | **980** | | p(90) req duration | **87ms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [18,742,920,880,882,977,984,976,947,927,962,967,979,955,911,954,965,1005,957,918,904,986,973,974,892,969,973,988,917,900,973,975,972,953,928,963,997,975,971,884,954,977,950,965,923,942,976,968,972,885,959,960,974,948,890,952,973,986,953,914,973,947,102] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [13,20,24,28,34,36,37,38,43,43,47,50,47,48,49,49,46,48,48,47,48,47,47,45,50,48,44,49,49,48,47,48,44,47,48,47,46,48,48,48,45,46,45,47,48,47,47,49,49,45,49,46,46,43,50,42,44,44,50,48,20] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [12,14,18,26,30,33,34,37,39,44,45,48,48,48,52,50,49,45,49,52,52,48,50,45,53,50,49,46,50,55,50,47,48,49,50,49,46,47,49,53,51,47,47,48,50,51,49,48,49,53,49,49,48,48,50,51,44,43,47,51,50,33,9] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.03,0.03,0.43,0.46,0.46,0.46,0.11], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.01,0.01,0.18,0.2,0.21,0.2,0.05], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.03,0.04,0.78,0.76,0.76,0.76,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.02,0.02,0.21,0.24,0.23,0.22,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} ASP.NET Core is performing well here. EF Core is incredibly efficient by default without any tuning headaches. 
#### ASP.NET Core PgSQL scenario 2 {{< tabs >}} {{< tab tabName="Counters & Req/s" >}} | Metric | Value | | ----------------------- | ---------- | | Iteration creation rate | **5** | | Total requests | **167616** | | Total iterations | **108** | | Average max req/s | **2500** | | p(90) req duration | **38ms** | {{< chart type="timeseries" title="Req/s count" >}} [ { label: 'Req/s', data: [205,1130,1622,1790,2011,2135,2024,2093,2463,2465,2428,2385,2144,2460,2503,2551,2337,2200,2404,2379,2452,2322,2252,2462,2449,2469,2306,2230,2488,2554,2466,2253,2180,2426,2445,2502,2349,2196,2476,2343,2538,2341,2166,2499,2412,2452,2259,2137,2439,2474,2461,2302,2113,2479,2374,2421,2369,2221,2462,2409,2332,2382,2216,2394,2478,2341,1644,1934,2134,2266,2070,1598,1417,1505,1518,710] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="Req duration" >}} {{< chart type="timeseries" title="VUs count" >}} [ { label: 'VUs', data: [5,10,15,21,25,30,35,40,45,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,49,49,50,50,50,50,48,49,50,50,49,50,48,50,47,50,50,50,50,50,50,50,50,50,50,50,50,50,50,50,49,50,50,48,49,47,46,44,42,39,37,34,31,21,13,8,8,8,4] } ] {{< /chart >}} {{< chart type="timeseries" title="Request duration in ms" >}} [ { label: 'Duration (ms)', data: [6,5,7,9,10,12,15,17,17,18,20,21,23,20,20,19,21,23,21,21,20,21,22,20,20,20,21,22,20,19,20,22,23,20,20,20,21,22,20,21,19,21,23,20,21,20,22,23,21,20,20,22,23,20,21,21,21,22,20,21,21,20,21,19,18,18,24,19,16,13,10,8,5,5,5,4] } ] {{< /chart >}} {{< /tab >}} {{< tab tabName="CPU load" >}} {{< chart type="timeseries" title="CPU runtime load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.03,0.6,0.6,0.61,0.64,0.47,0.02], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.01,0.3,0.29,0.3,0.29,0.29,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< chart type="timeseries" title="CPU database load" stacked="true" max="1" step="15" >}} [ { label: 'User', data: [0.2,0.55,0.54,0.53,0.53,0.15,0.03], borderColor: '#4bc0c0', backgroundColor: '#4bc0c0', fill: true }, { label: 'System', data: [0.14,0.34,0.34,0.34,0.34,0.09,0.02], borderColor: '#ff6384', backgroundColor: '#ff6384', fill: true } ] {{< /chart >}} {{< /tab >}} {{< /tabs >}} Not that far to Java variant, just a bit behind. But as workers are fully loaded here, contrary to Spring Boot which is limited by database, Java stays by far the clear winner for raw performance (in sacrifice of some memory obviously). ### Conclusion Here are the final req/s results for each framework. I choose to take MySQL results for PHP. 
{{< chart type="timeseries" title="Scenario 1" >}} [ { label: 'Laravel', data: [54,93,105,93,96,106,111,130,123,116,115,122,124,122,129,115,111,119,121,108,110,109,135,97,109,120,107,105,103,125,115,125,126,126,113,117,114,131,134,84,115,116,112,89,116,120,121,125,120,119,112,112,124,115,138,89,113,137,98,123,111,125,120,126,123,102,124,111,99,107,89,91], borderColor: '#c2410c', backgroundColor: '#c2410c' }, { label: 'Symfony', data: [4,105,150,161,153,142,157,151,155,153,138,151,154,154,154,146,153,153,156,153,141,154,155,161,153,136,147,160,159,156,140,156,156,155,160,142,149,158,156,156,134,149,165,147,156,146,153,153,160,149,148,149,159,150,157,143,155,154,162,159,137,157,155,160,158,143,160,157,134], borderColor: '#ffffff', backgroundColor: '#ffffff' }, { label: 'FastAPI', data: [2,385,495,476,462,502,534,518,496,480,513,520,520,509,473,539,491,483,516,463,526,522,520,512,503,545,478,541,468,521,519,489,530,469,479,513,515,495,513,491,508,523,548,483,500,526,505,527,519,496,506,541,504,507,478,508,535,521,488,480,543,379], borderColor: '#0f766e', backgroundColor: '#0f766e' }, { label: 'NestJS', data: [111,508,571,566,569,558,544,672,589,628,607,610,555,527,586,596,568,598,581,601,630,595,625,615,623,601,620,685,621,569,579,600,672,643,577,663,695,715,581,576,584,605,605,659,638,594,627,583,603,622,642,606,589,618,584,635,642,592,548,568,653,617,237], borderColor: '#b91c1c', backgroundColor: '#b91c1c' }, { label: 'Spring Boot', data: [867,1438,1481,1409,1556,1589,1466,1434,1412,1510,1570,1553,1461,1540,1599,1614,1570,1522,1423,1640,1578,1615,1535,1476,1472,1525,1509,1367,1462,1513,1661,1541,1506,1452,1619,1589,1609,1510,1419,1534,1554,1622,1570,1515,1516,1550,1535,1492,1500,1578,1601,1577,1524,1398,1566,1568,1532,1517,1506,1579,905], borderColor: '#15803d', backgroundColor: '#15803d' }, { label: 'ASP.NET Core', data: [18,742,920,880,882,977,984,976,947,927,962,967,979,955,911,954,965,1005,957,918,904,986,973,974,892,969,973,988,917,900,973,975,972,953,928,963,997,975,971,884,954,977,950,965,923,942,976,968,972,885,959,960,974,948,890,952,973,986,953,914,973,947,102], borderColor: '#6d28d9', backgroundColor: '#6d28d9' } ] {{< /chart >}} {{< chart type="timeseries" title="Scenario 2" >}} [ { label: 'Laravel', data: [1,38,40,137,150,211,216,255,247,269,285,299,294,291,295,322,322,327,308,314,329,329,341,324,318,336,341,344,328,329,349,347,353,329,333,352,360,351,339,330,355,359,353,328,340,355,348,355,340,334,356,347,356,346,337,347,358,353,336,341,347,347,350,328,345,355,351,351,349,341,354,351,353,340,343,343,353,362,336,333,353,344,362,338,335,353,353,355,339,320,304], borderColor: '#c2410c', backgroundColor: '#c2410c' }, { label: 'Symfony', data: [17,44,40,87,174,168,194,228,229,256,302,289,308,335,345,346,343,328,374,381,359,362,368,393,389,403,380,371,390,387,388,366,379,400,389,397,382,373,390,401,393,387,387,392,413,411,379,390,413,414,414,380,394,417,406,413,388,393,414,417,417,391,395,417,413,410,390,396,409,413,408,378,381,394,412,405,381,393,397,395,396,364,375,363,378,371,336,324,312,292,110], borderColor: '#ffffff', backgroundColor: '#ffffff' }, { label: 'FastAPI', data: [18,187,561,712,691,710,760,736,773,728,812,853,818,874,808,762,828,797,783,779,779,786,828,795,771,804,877,803,852,828,771,877,837,862,773,813,794,834,770,804,768,803,811,839,780,827,821,824,846,807,808,797,837,859,810,788,803,847,839,783,761,835,800,869,787,775,811,828,840,826,837,873,840,857,819,816,817,763,861,769,789,850,832,801,790,771,784,760,773,756,559], borderColor: '#0f766e', backgroundColor: 
'#0f766e' }, { label: 'NestJS', data: [17,369,682,787,878,1048,1104,1102,1083,1147,1171,1246,1276,1182,1200,1281,1233,1302,1247,1249,1320,1382,1386,1362,1382,1357,1379,1423,1259,1296,1340,1341,1394,1264,1328,1446,1365,1356,1258,1326,1324,1466,1372,1206,1287,1352,1449,1322,1248,1367,1332,1341,1305,1264,1284,1362,1343,1428,1274,1319,1393,1440,1434,1228,1223,1349,1356,1421,1278,1269,1158,1215,1239,1068,1151,1192,1152,1210,1083,1132,1165,1154,1193,1035,984,765,36], borderColor: '#b91c1c', backgroundColor: '#b91c1c' }, { label: 'Spring Boot', data: [142,1671,2230,2167,2456,2562,2715,2845,2609,2513,2777,2909,2835,2591,2503,2552,2921,2804,2567,2480,2722,2738,2767,2021,2521,2843,2937,2883,2521,2450,2743,2818,2784,2539,2487,2774,2797,2748,2558,2548,2796,2850,2820,2538,2507,2664,2893,2923,2657,2493,2894,2856,2801,2575,2505,2700,2859,2905,2573,2667,2703,2797,2684,2176,2328,2364,2638,2513,2413,2379,2614,2594,2623,2435,2385,2197,737], borderColor: '#15803d', backgroundColor: '#15803d' }, { label: 'ASP.NET Core', data: [205,1130,1622,1790,2011,2135,2024,2093,2463,2465,2428,2385,2144,2460,2503,2551,2337,2200,2404,2379,2452,2322,2252,2462,2449,2469,2306,2230,2488,2554,2466,2253,2180,2426,2445,2502,2349,2196,2476,2343,2538,2341,2166,2499,2412,2452,2259,2137,2439,2474,2461,2302,2113,2479,2374,2421,2369,2221,2462,2409,2332,2382,2216,2394,2478,2341,1644,1934,2134,2266,2070,1598,1417,1505,1518,710], borderColor: '#6d28d9', backgroundColor: '#6d28d9' } ]
{{< /chart >}}

To sum up, compiled languages always have a clear advantage when it comes to raw performance. But do you really need it? Keep in mind that it shouldn't be the only criterion for choosing a web framework. The DX is also very important; for example, Laravel stays unbeatable in this regard when it comes to building an MVP.

When it comes to compiled languages, I still personally prefer ASP.NET Core over Spring Boot because of the DX: the performance gap is negligible, it doesn't have that Java warmup feeling, and it keeps a reasonable memory footprint.

I stay open to any suggestions to improve my tests, especially on the PHP side (I already tested FrankenPHP, which gives worse results, and `memory_limit` is set to **1G**). If you have any tips to improve performance through framework-specific or low-level PHP tuning, leave me a comment below!