Test Unix socket

This is a small test/benchmark to see if using a Unix domain socket in Nginx would affect performance in any significant way for an API.

The API is hosted by .NET Core and containerized. The test is done with Apache Benchmark (ab), sending 100 concurrent requests 50 times (5000 requests in total). Each request returns a payload of 1,7 KB.

ab -n 5000 -c 100 "http://localhost/v3/api/"

Result

This is the average number of requests per second after 10 tests against each configuration.

Configuration                       Requests per second
HTTP                                100
HTTP (keepalive enabled)            118
Unix socket                         98
Unix socket (keepalive enabled)     120

Keepalive

The Nginx keepalive option has an impact on both HTTP and Unix sockets, and it looks to be far more important than the actual connection type.

upstream backend {
        keepalive 100;
        server 10.265.44.3:8080;
}
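
For reference, the Unix socket variant was configured the same way, just pointing the upstream at the socket instead of an IP address. A minimal sketch (the socket path is only an example and has to match the path the container actually listens on):

upstream backend {
        keepalive 100;
        server unix:/var/run/api.sock;
}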

Newtonsoft/System.Text.Json Testing

Microsoft has released .NET Core 3 and with it System.Text.Json, a new JSON serializer meant to replace Newtonsoft.Json. One of the claims is that this new serializer is faster, and there are already multiple benchmarks supporting that claim. However, I wanted to see what the difference actually becomes in real life against one of my existing APIs.
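
For context, switching between the two serializers in ASP.NET Core 3 is a small change in Startup.ConfigureServices. A rough sketch of the two configurations under test (assuming a standard MVC setup, nothing project specific):

public void ConfigureServices(IServiceCollection services)
{
    // System.Text.Json is the default serializer in ASP.NET Core 3
    services.AddControllers();

    // To keep using Newtonsoft.Json instead, reference the
    // Microsoft.AspNetCore.Mvc.NewtonsoftJson package and opt in:
    // services.AddControllers().AddNewtonsoftJson();
}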

End to end testing

I will do E2E testing using ab (Apache Benchmark) to see if there is a difference in response time and in how many simultaneous requests the API can handle. I will run every test 10 times and then average the values.

To test the response time I will do one single request with ab and see how long it takes.

ab -n 1 -c 1 "https://localhost:5001/v3/api/"

To test how many requests per second the API can handle I will use ab to send 100 simultaneous requests to the server 5 times.

ab -n 500 -c 100 "https://localhost:5001/v3/"

Test 1

The first test fetches a single item from the API. The payload is 1,7 KB and the request fetches its data from IDistributedCache, so most of the work is deserializing the data from IDistributedCache and then serializing it again for the response.
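
A minimal sketch of what the endpoint roughly does (type and key names are made up, and the serializer calls are swapped depending on which library is being tested):

public async Task<Item> GetItemAsync(string id)
{
    // fetch the cached bytes from IDistributedCache
    var bytes = await _cache.GetAsync(id);

    // deserialize the cached data (JsonConvert.DeserializeObject with Newtonsoft)
    var item = JsonSerializer.Deserialize<Item>(bytes);

    // the item is then serialized again by the framework for the response
    return item;
}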

                    Response time (1 req)    Requests per second
Newtonsoft          14,8 ms                  98,7 req/s
System.Text.Json    18,4 ms                  100,2 req/s

Test 2

The second test fetches 48 items from the API and the payload is 36,6 KB. Most of the time is spent on fetching data from the database.


                    Response time (1 req)    Requests per second
Newtonsoft          129,8 ms                 26,11 req/s
System.Text.Json    123,5 ms                 31,3 req/s

Test 2b

I found the result of test 2 interesting and decided to redo the test but increase the total number of requests from 500 to 5000.

                    Requests per second
Newtonsoft          19,4 req/s
System.Text.Json    30,53 req/s

Conclusion

I am a bit amazed at how big the difference became for test 2. Looking into some other tests, there actually might be something behind it. Quoting from the comments at The Battle of C# to JSON Serializers in .NET Core 3:

Up to the initial buffer size (8-16kb depending on the lib)? Nothing, they all pretty much behave the same, the buffer is filled and after the serialization is done, the buffer is flushed to the output pipe/stream.
After that size it gets interesting. System.Text.Json is capable of flushing the data and reusing the old small buffer, Utf8Json/Spanjson will rent a new buffer from the pool and copy the data and continue.

TORNHOOF (@TORN_HOOF)

As the payload in Test 2 is quite big (36,6 KB), it might explain the increase in throughput compared to test 1.

Entity Framework Core 3

I recently upgraded a .NET Core 2 project to .NET Core 3 and with that I also updated other packages, including EF Core. EF Core 3 has quite a list of breaking changes; you can see them at the link below.

https://docs.microsoft.com/en-us/ef/core/what-is-new/ef-core-3.0/breaking-changes

The worst change for my application was one that is quite a bit down the list, and one that I did not notice until I found the actual issue in my code.

Before 3.0, eagerly loading collection navigations via Include operators caused multiple queries to be generated on relational database, one for each related entity type.

Eager loading of related entities now happens in a single query

This does not sound too bad; nothing will stop working, but it will affect performance. How it affects performance depends on your query: some will go faster and some will go slower.

In my case I had a query that did not like this change at all. The query created a list of items, and these items then had a bunch of data from other tables, creating a three-dimensional object. I mocked up an example below of what the LINQ query looked like.

return context.Item
	.Select(i => new ResultItem()
	{
		ItemId   = i.ItemId,
		Siblings = i.ParentNavigation.Items.Count(),
		Position = i.ParentNavigation.Items.Count(p => p.Order < i.Order),
		Tags     = i.Tag.Select(t => new ResultTag {
			Tag = t.Tag
		})
	}).ToList();

In EF Core 2 this worked quite well. EF Core divided the code into two parts: first it did one query to fetch the items and count the siblings and positions, and then it did multiple queries to fetch the tags. So if I fetched 12 items, it would be 1 query to fetch the items and then 12 queries to fetch all the tags.

EF Core 3 however does one query, ONE BIG QUERY. In this case it resulted in quite a performance drop. In EF Core 2 it had a stable 250ms execution time, independent of how many tags each item had. In EF Core 3 it hovered around 650ms but could be slower if one item had more tags than usual.

Improvement 1

var items = context.Item
	.Select(i => new ResultItem()
	{
		ItemId   = i.ItemId,
		ParentId = i.ParentId,
		Order    = i.Order
	}).ToList();

foreach (var item in items)
{
	item.Siblings = context.Item.Count(i => i.ParentId == item.ParentId);
	item.Position = context.Item.Count(i => i.ParentId == item.ParentId && i.Order < item.Order);
	item.Tags     = context.Tag.Where(t => t.ItemId == item.ItemId).Select(t => new ResultTag {
			Tag = t.Tag
		}).ToList();
}

The first step was to separate the different queries. We simply fetched the items first and then looped over them to fetch the extra data needed. This resulted in quite an improvement: the time decreased from around 650ms to 350ms. But it was still not fast enough.

The reason it is still not as fast as before is that this code results in a lot of queries. If we fetch 12 items, there is 1 query to fetch the items, and then the loop does 3 separate queries for each item, for a total of 1 + 36 queries against the database.

Each query means another roundtrip to the database, and that is partly why the EF Core team made this change in EF Core 3: to avoid as many roundtrips as possible.

Improvement 2

var items = context.Item
	.Select(i => new ResultItem()
	{
		ItemId   = i.ItemId,
		ParentId = i.ParentId,
		Order    = i.Order
	}).ToList();

var itemIds = items.Select(c => c.ItemId);
var tags = context.Tag.Where(t => itemIds.Contains(t.ItemId)).Select(t => new ResultTag {
	ItemId = t.ItemId,
	Tag = t.Tag
}).ToList();

var parentIds = items.Select(i => i.ParentId);
var siblings = context.Item.Where(i => parentIds.Contains(i.ParentId)).Select(i => new SiblingItem {
	ParentId = i.ParentId,
	Order = i.Order
}).ToList();

foreach (var item in items)
{
	item.Siblings = siblings.Count(i => i.ParentId == item.ParentId);
	item.Position = siblings.Count(i => i.ParentId == item.ParentId && i.Order < item.Order);
	item.Tags     = tags.Where(t => t.ItemId == item.ItemId).ToList();
}

Now we have complicated our code somewhat, but this was the fastest way of doing it. As in the previous example we first fetch all the items. Next, for all the items we fetched, we select their siblings in one query and all their tags in another query. We can then loop over the items in memory and filter out the results for each item. This results in a total of 3 queries being sent to the database, and the execution time dropped to under 100ms.

Podman 1.8

A guide to installing a newer version of Podman on CentOS can be found on Podman's website: https://podman.io/getting-started/installation.html. This will get you a newer version than the 1.4 that is included in the default yum repository.

CentOS 7 uses an older kernel that does not support all of Podman's features. One thing that needs to be removed is the metacopy mount option. You can do this by removing metacopy=on from the following line in
/etc/containers/storage.conf

mountopt = "nodev,metacopy=on"
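
After removing metacopy=on, the line should simply read:

mountopt = "nodev"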

If you do not do this you will get an error when trying to use Podman, like the one below.

[root@dalesjo.com podman]# podman ps -a
Error: error creating libpod runtime: failed to mount overlay for metacopy check: invalid argument

Source: https://github.com/containers/libpod/issues/3560

Installing nginx in CentOS 8

Copy-paste code to install the newest nginx from nginx.org on CentOS 8. Read more at http://nginx.org/en/linux_packages.html#RHEL-CentOS

sudo yum install yum-utils

cat <<EOF > /etc/yum.repos.d/nginx.repo

[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/\$releasever/\$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/\$releasever/\$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

EOF

sudo yum install nginx
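
After installation you will probably also want to start nginx and have it start on boot:

sudo systemctl enable --now nginx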

Podman/Docker can't reach the internet

I had a problem on CentOS 7/8 with both Podman and Docker: no container could reach the internet. I needed to enable the following.

cat <<EOF > /etc/sysctl.d/podman.conf
# Enable containers to access the outer world
net.ipv4.ip_forward=1
EOF
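
To apply the setting without rebooting, reload the sysctl configuration:

sudo sysctl --system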

I also needed to enable masquerading on the external (public) zone.

firewall-cmd --zone=public --add-masquerade
firewall-cmd --permanent --zone=public --add-masquerade

One question left: who is allowed to masquerade? I see no filtering being done, meaning any other network connected to this machine can masquerade through the public zone. That does not sound good.

Icinga/Nagios test for an Icecast stream

This is how to monitor an Icecast stream to make sure it has not gone quiet, and to do it during a specific time window. First we need a tool to monitor the audio level; this can actually be done with ffmpeg. Below is a command you can use.

ffmpeg -t 10 -i http://example.com/live.mp3 -af "volumedetect" -f null /dev/null 2>&1 | grep Parsed_volumedetect

We do several things here.

  • -t 10 lets us play the stream we have chosen for 10 seconds before closing ffmpeg.
  • -i http://example.com/live.mp3 is the Icecast stream we want to monitor.
  • -af "volumedetect" is the audio filter we want to apply to the stream.
  • -f null /dev/null tells ffmpeg to throw the result away.
  • 2>&1 is very important: normally the output you see from ffmpeg comes from stderr, so by redirecting stderr to stdout at the end we get the possibility to pipe the output to our grep command.
  • Lastly, grep Parsed_volumedetect shows us only the output from the volumedetect filter.

Below we can see the data you can get from volumedetect. The value we want to use is mean_volume, which gives us an approximation of the current audio level in the stream. The max value is zero, and anything below -40 dB is considered quite a low volume.
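
The grep output looks roughly like this (the numbers are of course just an example from a test run):

[Parsed_volumedetect_0 @ 0x5560a1c2b980] n_samples: 882000
[Parsed_volumedetect_0 @ 0x5560a1c2b980] mean_volume: -23.4 dB
[Parsed_volumedetect_0 @ 0x5560a1c2b980] max_volume: -4.1 dB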

Now comes the part where we convert the command above into an Icinga/Nagios test. I have uploaded a working example using the above code at https://github.com/Dalesjo/dalesjo-nagios/blob/master/media/check_audio_level Below you can see it in action, giving the correct exit code for Icinga/Nagios of course.


Icinga/Nagios test using Zonemaster

Zonemaster is a great tool to verify that you have set up your domain's name servers correctly. You can test it out at https://zonemaster.iis.se/en/

I want my Icinga server to do this automatically so I get a warning as soon as something changes, so let's do that. The first thing you need to know is that Zonemaster is freely available on GitHub; you can download it and run it on your own machine.