
How to set up NFSv4

NFSv4 server setup

Below is how OpenMediaVault is configured. The interesting part is fsid=0: instead of connecting to /export/nextcloud as we do with NFSv3, we are going to connect directly to /nextcloud.
[code language="bash"]
# File: /etc/exports
# This configuration file is auto-generated.
/export/nextcloud 192.168.64.201/32(fsid=1,rw,subtree_check,secure,crossmnt,anonuid=1002,anongid=1002)
/export/UniFi-Video 192.168.64.147/32(fsid=2,rw,subtree_check,secure,crossmnt)
# NFSv4 – pseudo filesystem root
/export 192.168.64.201/32(ro,fsid=0,root_squash,no_subtree_check,hide)
/export 192.168.64.147/32(ro,fsid=0,root_squash,no_subtree_check,hide)
[/code]


fsid=0:
The NFS server needs to be able to identify each filesystem that it exports. For an NFSv4 server there is one distinguished filesystem which is the root of all exported filesystems. This is specified with fsid=root or fsid=0, both of which mean exactly the same thing.
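
After editing /etc/exports the export table can be reloaded without restarting the server; a quick sketch using the standard exportfs tool:
[code language="bash"]
# Re-export everything listed in /etc/exports
exportfs -ra
# List the active exports to verify
exportfs -v
[/code]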

Further reading: Debian / Ubuntu Linux: Setup NFSv4 File Server

Client setup

To connect with NFSv4 instead of NFSv3 we need to use nfs4 instead of nfs as the filesystem type. As stated above, we omit the /export/ part we usually use with NFSv3.
[code language="bash"]
# File: /etc/fstab

192.168.64.200:/nextcloud /host/nfs/nextcloud nfs4 rsize=8192,wsize=8192,timeo=14,intr 0 0

[/code]
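
Before relying on the fstab entry, the mount can be tested by hand; a quick check, using the server address from the example above:
[code language="bash"]
# Mount once manually to confirm the export works
mount -t nfs4 192.168.64.200:/nextcloud /host/nfs/nextcloud
# Or, once the fstab entry is in place:
mount /host/nfs/nextcloud
[/code]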


NFS and iptables

If you are using NFSv4 (which is likely) you only need to open one port in your firewall: 2049/TCP. The examples below are done on an OpenMediaVault/Debian server to allow NFS access, but nothing else, from the networks 192.168.64.144/28 and 192.168.64.192/26.
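
The original rule set is not reproduced in this excerpt; a minimal sketch of what such rules could look like, assuming a default-accept INPUT chain:
[code language="bash"]
# Allow NFSv4 (2049/TCP) from the two client networks ...
iptables -A INPUT -s 192.168.64.144/28 -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -s 192.168.64.192/26 -p tcp --dport 2049 -j ACCEPT
# ... and drop everything else coming from those networks
iptables -A INPUT -s 192.168.64.144/28 -j DROP
iptables -A INPUT -s 192.168.64.192/26 -j DROP
[/code]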


ZFS write performance

Quick test to measure the write performance of two ZFS pools using raidz2 on Linux.
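
The exact benchmark commands are not shown in this excerpt; a common quick test is dd with fdatasync, assuming the pools are mounted at /zfs-pool-1 and /zfs-pool-2 (note that ZFS compression can make /dev/zero numbers look unrealistically fast):
[code language="bash"]
# Write 10 GiB and force it to disk before timing stops
dd if=/dev/zero of=/zfs-pool-1/ddtest bs=1M count=10240 conv=fdatasync
dd if=/dev/zero of=/zfs-pool-2/ddtest bs=1M count=10240 conv=fdatasync
# Clean up the test files
rm /zfs-pool-1/ddtest /zfs-pool-2/ddtest
[/code]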

Hardware for testbench


OpenMediaVault / Debian network configuration for bonding/LACP and VLANs

This is an example of a network configuration on my OpenMediaVault server. It takes two network interfaces (eth3 and rename3) and bonds them together using LACP. On top of this bond I have created three bridges: br1, which is for untagged traffic, and br641 and br642 for VLAN-tagged traffic on VLANs 641 and 642 respectively. br1/br641/br642 are all attached to the host and configured for DHCP; they can also be attached to virtual machines.
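
A minimal sketch of what the corresponding /etc/network/interfaces could look like, assuming the ifenslave, vlan and bridge-utils packages are installed (option names follow Debian's interfaces(5) conventions):
[code language="bash"]
# Bond eth3 and rename3 with LACP (802.3ad)
auto bond0
iface bond0 inet manual
    bond-slaves eth3 rename3
    bond-mode 802.3ad
    bond-miimon 100

# br1: untagged traffic, DHCP on the host
auto br1
iface br1 inet dhcp
    bridge-ports bond0
    bridge-stp off

# br641: tagged traffic on VLAN 641
auto br641
iface br641 inet dhcp
    bridge-ports bond0.641
    bridge-stp off

# br642: tagged traffic on VLAN 642
auto br642
iface br642 inet dhcp
    bridge-ports bond0.642
    bridge-stp off
[/code]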


ZFS: the beginning

How I created my first pool

Raidz2 has two redundant drives (aka RAID6). One spare drive is added, and with autoreplace set to on the spare is used automatically in case of a drive failure.
[code language="bash"]
zpool create zfs-pool-1 raidz2 /dev/disk/by-id/ata-TOSHIBA_HDWN180_67PQK0NNFP9E /dev/disk/by-id/ata-TOSHIBA_HDWN180_67PQK0NGFP9E /dev/disk/by-id/ata-TOSHIBA_HDWN180_67PSK0YAFP9E /dev/disk/by-id/ata-TOSHIBA_HDWN180_67PUK0KWFP9E /dev/disk/by-id/ata-TOSHIBA_HDWN180_67PUK0KUFP9E /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA16NRDD /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA16NR57 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA16KL2F /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA16PGEQ /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA16PH49
zpool add zfs-pool-1 spare /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA15N257
zpool set autoreplace=on zfs-pool-1
[/code]
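
The resulting layout, the spare and the autoreplace setting can be verified afterwards:
[code language="bash"]
# Shows the raidz2 vdev, its member disks and the spare
zpool status zfs-pool-1
# Confirms the pool property set above
zpool get autoreplace zfs-pool-1
[/code]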