How I created my first pool
RAID-Z2 has two drives' worth of redundancy (comparable to RAID 6). I also added one spare drive and set autoreplace=on so the spare is used automatically in case of a drive failure.
[code language=”bash”]
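# create a raidz2 pool out of ten disks, addressed by stable /dev/disk/by-id paths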
zpool create zfs-pool-1 raidz2 \
    /dev/disk/by-id/ata-TOSHIBA_HDWN180_67PQK0NNFP9E \
    /dev/disk/by-id/ata-TOSHIBA_HDWN180_67PQK0NGFP9E \
    /dev/disk/by-id/ata-TOSHIBA_HDWN180_67PSK0YAFP9E \
    /dev/disk/by-id/ata-TOSHIBA_HDWN180_67PUK0KWFP9E \
    /dev/disk/by-id/ata-TOSHIBA_HDWN180_67PUK0KUFP9E \
    /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA16NRDD \
    /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA16NR57 \
    /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA16KL2F \
    /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA16PGEQ \
    /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA16PH49
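# add one disk as a hot spare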
zpool add zfs-pool-1 spare /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA15N257
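# let ZFS start using the spare automatically when a drive fails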
zpool set autoreplace=on zfs-pool-1
[/code]
Scrub
Make sure there is a cron script running a scrub once a month on your data. You can initiate a scrub from the command line and check its progress with the status command.
[code language=”bash”]
zpool scrub zfs-pool-1
zpool status zfs-pool-1
[/code]
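A minimal crontab entry for root could look like the sketch below. The path to the zpool binary differs between distributions (assuming /usr/sbin here), so verify it with which zpool first.
[code language=”bash”]
# run a scrub at 03:00 on the first day of every month
0 3 1 * * /usr/sbin/zpool scrub zfs-pool-1
[/code]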
Other commands
This is how you see all parameters for your pool, set a single parameter, and view the history of all zfs/zpool commands run against your pool.
[code language=”bash”]
zpool get all zfs-pool-1
zpool set autoreplace=on zfs-pool-1
zpool history zfs-pool-1
[/code]
Add Cache
The cache (L2ARC) is a read cache only, so a cheap SSD can be used. Ideally you would pick a read-optimized SSD, and there is no real need for mirroring, because all data and metadata on the cache can be discarded at any moment without any risk to the rest of the pool.
[code language=”bash”]zpool add mypool cache /dev/sdc[/code]
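Since the cache only holds copies of data that already live on the pool, the device can also be dropped again at any time without risk:
[code language=”bash”]
# remove the SSD read cache from the pool
zpool remove mypool /dev/sdc
[/code]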
Designating Hot Spares
Devices can be designated as hot spares in the following ways:
- When the pool is created with the zpool create command
[code language=”bash”]zpool create trinity mirror c1t1d0 c2t1d0 spare c1t2d0 c2t2d0[/code]
- After the pool is created with the zpool add command
[code language=”bash”]zpool add mypool spare /dev/sdc /dev/sdd[/code]
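If a hot spare has kicked in after a failure, zpool detach lets you resolve it in either direction: detach the failed disk to make the spare a permanent member, or detach the spare to put it back on standby. The failed disk /dev/sdb below is a hypothetical stand-in.
[code language=”bash”]
# the failed disk goes away and the activated spare stays for good
zpool detach mypool /dev/sdb
# ...or put the spare itself back on standby once the original disk is healthy
zpool detach mypool /dev/sdc
[/code]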
Replacing Devices in a Storage Pool
After you have determined that a device can be replaced, use the zpool replace command to replace the device. If you are replacing the damaged device with a different device, use syntax similar to the following:
[code language=”bash”]zpool replace mypool /dev/sda /dev/sdb[/code]
This command migrates data to the new device from the damaged device or from other devices in the pool if it is in a redundant configuration. When the command is finished, it detaches the damaged device from the configuration, at which point the device can be removed from the system. If you have already removed the device and replaced it with a new device in the same location, use the single device form of the command. For example:
[code language=”bash”]zpool replace mypool /dev/sda[/code]
This command takes an unformatted disk, formats it appropriately, and then resilvers data from the rest of the configuration.
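Resilvering runs in the background and can take many hours on large disks; you can follow it with the status command:
[code language=”bash”]
# shows the state of the resilver while it runs
zpool status mypool
[/code]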
My first dataset
A dataset cannot be created on top of an already existing directory.
[code language=”bash”]zfs create zfs-pool-1/samuel[/code]
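By default the dataset is mounted under the pool's mountpoint, /zfs-pool-1/samuel in this case. If you want it somewhere else, the mountpoint property can be set at creation time; the dataset name and path below are hypothetical examples:
[code language=”bash”]
# create a dataset mounted outside the pool's default tree
zfs create -o mountpoint=/export/media zfs-pool-1/media
[/code]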
Make the snapshot directory visible
There is a hidden .zfs directory in your dataset folder containing your snapshots.
[code language=”bash”]
zfs set snapdir=visible zfs-pool-1/samuel
zfs set snapdir=hidden zfs-pool-1
[/code]
To list datasets (and zpools)
[code language=”bash”]
zfs list
[/code]
Take a snapshot
The snapshot name (mysnapshot here) can be anything you want.
[code language=”bash”]zfs snapshot zfs-pool-1/samuel@mysnapshot[/code]
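Once you have more datasets, the -r flag snapshots a dataset and all of its children atomically; nightly is just an example name:
[code language=”bash”]
# one consistent snapshot across the whole hierarchy
zfs snapshot -r zfs-pool-1@nightly
[/code]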
View all your snapshots
From the dataset's mountpoint:
[code language=”bash”]cd .zfs/snapshot[/code]
Renaming
[code language=”bash”]zfs rename zfs-pool-1/samuel@mysnapshot zfs-pool-1/samuel@testing[/code]
List all snapshots
[code language=”bash”]zfs list -t snapshot -r zfs-pool-1/samuel[/code]
Restoring a snapshot
Rollback reverts the dataset to the state of the snapshot; changes made after it are discarded.
[code language=”bash”]zfs rollback zfs-pool-1/samuel@testing[/code]
Destroy snapshot
[code language=”bash”]zfs destroy zfs-pool-1/samuel@testing[/code]
Disable atime
Recording the access time causes a write for every read, so atime should normally be disabled.
[code language=”bash”]
zfs get all zfs-pool-1
zfs set atime=off zfs-pool-1
[/code]
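If some application actually needs access times, newer OpenZFS releases offer relatime as a compromise, where atime is updated at most about once per day; this assumes your ZFS version has the relatime property:
[code language=”bash”]
# only relevant when atime has to stay enabled
zfs set atime=on zfs-pool-1
zfs set relatime=on zfs-pool-1
[/code]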