Entity Framework Core 3

I recently upgraded a .NET Core 2 project to .NET Core 3, and with that I also updated other packages, including EF Core. EF Core 3 has quite a list of breaking changes; you can see them in the list below.

https://docs.microsoft.com/en-us/ef/core/what-is-new/ef-core-3.0/breaking-changes

The worst change for my application was one that is quite a bit down on the list, and one that I did not notice until I found the actual issue in my code.

Before 3.0, eagerly loading collection navigations via Include operators caused multiple queries to be generated on relational database, one for each related entity type.

Eager loading of related entities now happens in a single query

This does not sound too bad; nothing will stop working, but it will affect performance. Whether it affects performance depends on your query: some will go faster and some will go slower.

In my case I had a query that did not like this change at all. The query created a list of items, and each item then pulled in a bunch of data from other tables, creating a three-dimensional object. I mocked up an example below of what the LINQ query looked like.

return context.Item
	.Select(i => new ResultItem()
	{
		ItemId   = i.ItemId,
		Siblings = i.ParentNavigation.Items.Count(),
		Position = i.ParentNavigation.Items.Count(p => p.Order < i.Order),
		Tags = i.Tag.Select(t => new ResultTag {
			Tag = t.Tag
		})
	}).ToList();

In EF Core 2, this worked quite well. EF Core divided the work into two parts: first it did one query to fetch the items, counting the siblings and position, and then it did multiple queries to fetch the tags. So if I fetched 12 items, it would be 1 query to fetch the items and then 12 queries to fetch all the tags.

EF Core 3, however, does one query, ONE BIG QUERY. In my case this resulted in a noticeable performance drop. In EF Core 2 the query had a stable 250ms execution time, independent of how many tags each item had. In EF Core 3 it hovered around 650ms, but could be slower if one item had more tags than usual.

Improvement 1

var items = context.Item
	.Select(i => new ResultItem()
	{
		ItemId   = i.ItemId,
		ParentId = i.ParentId,
		Order    = i.Order
	}).ToList();

foreach (var item in items)
{
	item.Siblings = context.Item.Count(i => i.ParentId == item.ParentId);
	item.Position = context.Item.Count(i => i.ParentId == item.ParentId && i.Order < item.Order);
	item.Tags = context.Tag.Where(t => t.ItemId == item.ItemId).Select(t => new ResultTag {
			Tag = t.Tag
		}).ToList();
}

The first step was to separate the different queries. We simply fetched the items first and then looped over them to fetch the extra data needed. This resulted in quite an improvement: we decreased the time from around 650ms to 350ms. But it was still not fast enough.

The reason it is still not as fast as before is that this code results in a lot of queries. If we fetch 12 items, it results in 1 query to fetch the items, and then the loop does 3 separate queries for each item, for a total of 1+36 = 37 queries against the database.

Each query means another roundtrip to the database, and this is partly why the EF Core team made the change in EF Core 3: to avoid as many roundtrips as possible.

Improvement 2

var items = context.Item
	.Select(i => new ResultItem()
	{
		ItemId   = i.ItemId,
		ParentId = i.ParentId,
		Order    = i.Order
	}).ToList();

var itemIds = items.Select(i => i.ItemId);
var tags = context.Tag.Where(t => itemIds.Contains(t.ItemId)).Select(t => new ResultTag {
	ItemId = t.ItemId,
	Tag = t.Tag
}).ToList();

var parentIds = items.Select(i => i.ParentId);
var siblings = context.Item.Where(i => parentIds.Contains(i.ParentId)).Select(i => new SiblingItem {
	ParentId = i.ParentId,
	Order = i.Order
}).ToList();

foreach (var item in items)
{
	item.Siblings = siblings.Count(s => s.ParentId == item.ParentId);
	item.Position = siblings.Count(s => s.ParentId == item.ParentId && s.Order < item.Order);
	item.Tags = tags.Where(t => t.ItemId == item.ItemId).ToList();
}

Now we have complicated our code somewhat, but this was the fastest way of doing it. As in the previous example, we first fetch all the items. Next, for all the items we fetched, we select their siblings in one query and all their tags in another. We can then loop over the items and filter out the result for each item. This results in a total of 3 queries being sent to the database, and the execution time dropped to under 100ms.

Podman 1.8

A guide to installing a newer version of Podman on CentOS can be found on Podman's website: https://podman.io/getting-started/installation.html This will get you a newer version than the 1.4 included in the default yum repository.

CentOS 7 uses an older kernel that does not support all of Podman's features. One thing that needs to be removed is the metacopy mount option. You can change this by removing metacopy=on from the following line in
/etc/containers/storage.conf

mountopt = "nodev,metacopy=on"
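If you want to script the change, the same edit can be done with sed. The example below runs the expression on a sample line so the effect is visible; the in-place variant (with a backup) is shown in a comment. The exact mountopt contents may differ on your system.

```shell
# Remove the metacopy=on option from a mountopt line.
# To edit the real file in place (keeping a .bak backup):
#   sed -i.bak 's/,metacopy=on//' /etc/containers/storage.conf
echo 'mountopt = "nodev,metacopy=on"' | sed 's/,metacopy=on//'
# → mountopt = "nodev"
```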

If you do not do this, you will get an error like the one below when trying to use podman.

[root@dalesjo.com podman]# podman ps -a
Error: error creating libpod runtime: failed to mount overlay for metacopy check: invalid argument

Source: https://github.com/containers/libpod/issues/3560

Installing nginx in CentOS 8

Copy-paste code to install the newest nginx from nginx.org on CentOS 8. Read more at http://nginx.org/en/linux_packages.html#RHEL-CentOS

sudo yum install yum-utils

cat <<EOF > /etc/yum.repos.d/nginx.repo

[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/\$releasever/\$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/\$releasever/\$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

EOF

sudo yum install nginx

Podman/Docker can't reach the internet

I had a problem on CentOS 7/8 with both Podman and Docker: no container could reach the internet. The following needed to be enabled (run sysctl --system afterwards to apply it).

cat <<EOF > /etc/sysctl.d/podman.conf
# Enable containers to access the outer world
net.ipv4.ip_forward=1
EOF

I also needed to enable masquerading on the public zone (you can check the current state with firewall-cmd --zone=public --query-masquerade).

firewall-cmd --zone=public --add-masquerade
firewall-cmd --permanent --zone=public --add-masquerade

One question remains: who is allowed to masquerade? I see no filtering being done, meaning any other network connected to this machine can masquerade through the public zone. That does not sound good.

Icinga/Nagios: test an Icecast stream

This is how to monitor an Icecast stream, checking that it has not gone quiet, and to do so during a specific time window. First we need a tool to measure the audio level. This can actually be done with ffmpeg. Below is a command you can use.

ffmpeg -t 10 -i http://example.com/live.mp3 -af "volumedetect" -f null /dev/null 2>&1 | grep Parsed_volumedetect

We do several things here.

  • -t 10 lets us play the stream we have chosen for just 10 seconds before closing ffmpeg
  • -i http://example.com/live.mp3 is the icecast stream we want to monitor
  • -af "volumedetect" is the audio filter we want to apply to the stream.
  • -f null /dev/null tells ffmpeg to throw the result away.
  • 2>&1 is very important: normally the ffmpeg output you see on your screen comes from stderr, and by adding this at the end we redirect it to stdout, giving us the possibility to pipe the output to our grep command.
  • Lastly, grep Parsed_volumedetect shows us only the output from the volumedetect filter.

The value we want to use from volumedetect is mean_volume, which gives an approximation of the current audio level in the stream. The maximum value is zero, and anything below -40 dB can be considered quite a low volume.
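The essential part of turning this into a check is extracting mean_volume from the grep output and comparing it against a threshold. Below is a minimal sketch of that logic; the sample line and the -40 dB threshold are illustrative assumptions, and in a real check the $output variable would be filled by the ffmpeg command above.

```shell
# A sample line as left by the grep above (the value here is illustrative)
output="[Parsed_volumedetect_0 @ 0x564] mean_volume: -23.4 dB"

# Extract the mean_volume value (-23.4 in this sample)
mean=$(printf '%s\n' "$output" | grep -o 'mean_volume: [0-9.-]*' | awk '{print $2}')

# Compare against a -40 dB threshold; awk handles the floating point math
quiet=$(awk -v m="$mean" 'BEGIN { print ((m < -40) ? 1 : 0) }')

# Nagios exit codes: 0 = OK, 2 = CRITICAL
if [ "$quiet" -eq 1 ]; then
	echo "CRITICAL - stream is quiet (mean_volume ${mean} dB)"
	exit 2
else
	echo "OK - mean_volume ${mean} dB"
fi
```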

Now comes the part where we convert the code above into an icinga/nagios test. I have uploaded a working example using the above code at https://github.com/Dalesjo/dalesjo-nagios/blob/master/media/check_audio_level which gives the correct exit codes for icinga/nagios, of course.


Icinga/Nagios test using zonemaster.

Zonemaster is a great tool for verifying that you have set up your domain's name servers correctly. You can test it out at https://zonemaster.iis.se/en/

I want my Icinga server to do this automatically, so I will get a warning as soon as something changes, so let's do that. The first thing you need to know is that Zonemaster is a tool freely available on GitHub; you can download it and run it on your own machine.

Start NRPE after openvpn tunnel is connected

NRPE will not start if its server address is an OpenVPN IP and the tunnel is not yet established when NRPE tries to start. To solve this, create a new systemd unit file:

systemctl -all | grep ovpn
cp /usr/lib/systemd/system/nrpe.service /etc/systemd/system/nrpe.service

Add your tun device to Requires and After. Note that you need the systemd name of your tun device; in this case OpenVPN was configured to use the tun device ovpn-gwSamuel. Check systemctl for its correct name. The result should look something like this; notice the escaped dash sign (\x2d) in the name.

[Unit]
Description=Nagios Remote Program Executor
Documentation=http://www.nagios.org/documentation
Conflicts=nrpe.socket
Requires=network.target sys-devices-virtual-net-ovpn\x2dgwSamuel.device
After=network-online.target sys-devices-virtual-net-ovpn\x2dgwSamuel.device

[Install]
WantedBy=multi-user.target

[Service]
Type=forking
User=nrpe
Group=nrpe
EnvironmentFile=/etc/sysconfig/nrpe
ExecStart=/usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -d $NRPE_SSL_OPT

Autostart a Virtual Machine in Xenserver

1. Enable poweron on the pool

xe pool-list
xe pool-param-set uuid=89d4a986-2b3e-e771-b6d3-9a6cad4b7e52 other-config:auto_poweron=true
xe pool-param-list uuid=89d4a986-2b3e-e771-b6d3-9a6cad4b7e52

2. Enable poweron on the Virtual Machine

xe vm-list
xe vm-param-set uuid=de89d51b-e581-553f-992d-f0a5044dccd2 other-config:auto_poweron=true
xe vm-param-list uuid=de89d51b-e581-553f-992d-f0a5044dccd2