Microsoft has released .NET Core 3, which includes System.Text.Json, a new JSON serializer meant to replace Newtonsoft.Json. One of the claims is that this new serializer is faster, and there are already multiple benchmarks backing that up. However, I wanted to see what the difference actually is in real life against one of my existing APIs.
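Before the benchmarks, a minimal side-by-side sketch of the two serializer APIs (this is illustrative code, not from the benchmarked API; the `Item` type is made up):

```csharp
using System;
using Newtonsoft.Json;

public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        var item = new Item { Id = 1, Name = "example" };

        // Newtonsoft.Json
        string withNewtonsoft = JsonConvert.SerializeObject(item);
        Item back1 = JsonConvert.DeserializeObject<Item>(withNewtonsoft);

        // System.Text.Json (fully qualified here to avoid clashing with
        // Newtonsoft.Json.JsonSerializer)
        string withStj = System.Text.Json.JsonSerializer.Serialize(item);
        Item back2 = System.Text.Json.JsonSerializer.Deserialize<Item>(withStj);

        // By default System.Text.Json keeps property names as-is
        Console.WriteLine(withStj); // {"Id":1,"Name":"example"}
    }
}
```

Note that System.Text.Json does not camel-case property names by default, while many ASP.NET Core setups configure it to do so for responses.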
End-to-end testing
I will do E2E testing using ab (Apache Benchmark) to see if there is a difference in response time and in how many simultaneous requests the API can handle. I will run every test 10 times and average the values.
To test the response time I will do a single request with ab and see how long it takes.
ab -n 1 -c 1 "https://localhost:5001/v3/api/"
To test how many requests per second the API can handle, I will use ab to send 100 simultaneous requests to the server 5 times (500 requests in total).
ab -n 500 -c 100 "https://localhost:5001/v3/"
The first test fetches a single item from the API. The payload is 1.7 KB and the data comes from IDistributedCache, so most of the work is deserializing the data from IDistributedCache and then serializing it again for the response.
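The core of what this test exercises can be sketched like this (the controller and cache wiring are omitted, and the `Item` type and method names are my own illustration): a cached byte[] comes out of IDistributedCache, gets deserialized, and the result is serialized again for the HTTP response.

```csharp
using System.Text.Json;

public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CacheRoundTrip
{
    // Stands in for IDistributedCache.GetAsync(key) returning the cached payload.
    public static byte[] FakeCachedPayload() =>
        JsonSerializer.SerializeToUtf8Bytes(new Item { Id = 1, Name = "cached" });

    public static string Handle(byte[] cached)
    {
        // Deserialize what came out of the cache...
        Item item = JsonSerializer.Deserialize<Item>(cached);

        // ...and serialize it again for the response body.
        return JsonSerializer.Serialize(item);
    }
}
```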
| | Response time 1/req | Requests per second |
|---|---|---|
| Newtonsoft | 14.8 ms | 98.7 req/s |
| System.Text.Json | 18.4 ms | 100.2 req/s |
The second test fetches 48 items from the API; the payload is 36.6 KB. Most of the time is spent fetching data from the database.
| | Response time 1/req | Requests per second |
|---|---|---|
| Newtonsoft | 129.8 ms | 26.11 req/s |
| System.Text.Json | 123.5 ms | 31.3 req/s |
I found the result of test 2 interesting and decided to redo it, increasing the total number of requests from 500 to 5000.
| | Requests per second |
|---|---|
I am a bit amazed at how big the difference became for test 2. Looking into some other tests, there actually might be something behind it. Quoting from the comments at The Battle of C# to JSON Serializers in .NET Core 3:
> Up to the initial buffer size (8-16 KB depending on the lib)? Nothing, they all pretty much behave the same, the buffer is filled and after the serialization is done, the buffer is flushed to the output pipe/stream.
>
> After that size it gets interesting. System.Text.Json is capable of flushing the data and reusing the old small buffer, Utf8Json/Spanjson will rent a new buffer from the pool and copy the data and continue.
>
> TORNHOOF (@TORN_HOOF)
As the payload in test 2 is quite big (36.6 KB), this might explain the increase in throughput compared to test 1.
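The behaviour described in the quote can be illustrated by serializing to a stream, where System.Text.Json flushes and reuses its internal buffer instead of renting ever-larger ones (a sketch under my own assumptions; the array size is just chosen to exceed the quoted 8-16 KB initial buffer):

```csharp
using System.IO;
using System.Text.Json;
using System.Threading.Tasks;

public static class StreamingSerialize
{
    public static async Task<long> WriteLargeAsync()
    {
        var items = new int[20_000]; // serializes to well past 16 KB of JSON
        using var stream = new MemoryStream();

        // SerializeAsync writes incrementally, flushing to the stream as
        // its internal buffer fills, so the buffer can be reused.
        await JsonSerializer.SerializeAsync(stream, items);
        return stream.Length;
    }
}
```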