== Summary ==

ZFS filesystems are built on top of virtual storage pools called zpools. A zpool is constructed of virtual devices (vdevs), which are themselves constructed of block devices: files, hard drive partitions, or entire drives, with the last being the recommended usage.[6] Block devices within a vdev may be configured in different ways, depending on needs and space available: non-redundantly (similar to RAID 0), as a mirror (RAID 1) of two or more devices, as a RAID-Z group of three or more devices, or as a RAID-Z2 group of four or more devices.[7] Besides standard storage, devices can be designated as volatile read cache (L2ARC), nonvolatile write cache, or as a spare disk for use only in the case of a failure. Finally, when mirroring, block devices can be grouped according to physical chassis, so that the filesystem can continue in the face of the failure of an entire chassis.

== ZPool Types ==

'''ZPool stripe group:'''
{{{
zpool create vol0 /dev/sda /dev/sdb /dev/sdc
}}}

'''ZPool mirror group:'''
{{{
zpool create vol0 mirror /dev/sda /dev/sdb
}}}

'''ZPool raidz group:''' Similar to RAID 5 (single parity).
{{{
zpool create vol0 raidz /dev/sda /dev/sdb /dev/sdc
}}}

'''ZPool raidz2 group:''' Similar to RAID 6 (dual parity).
{{{
zpool create vol0 raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
}}}

== Status of the ZPool Storage ==

'''Checking the size and usage of zpools:''' zpool list

This will display something like:
{{{
root@ubuntu:~# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
vol0   29.8G   230K  29.7G     0%  ONLINE  -
}}}

'''Checking the health status of zpools:''' zpool status

This will display something like:
{{{
root@ubuntu:~# zpool status
  pool: vol0
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        vol0        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        spares
          sde       AVAIL

errors: No known data errors
}}}

'''Checking I/O statistics for zpools:''' zpool iostat

This will display something like:
{{{
root@ubuntu:~# zpool iostat
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
vol0         230K  29.7G      0      0     10    431
}}}

A more detailed view can be obtained by adding -v to the '''zpool iostat''' command:
{{{
root@ubuntu:~# zpool iostat -v
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
vol0         230K  29.7G      0      0     12    518
  raidz2     230K  29.7G      0      0     12    518
    sdb         -      -      0      1    274  3.41K
    sdc         -      -      0      1    174  3.41K
    sdd         -      -      0      0    274  3.41K
----------  -----  -----  -----  -----  -----  -----
}}}

== Adding Additional Storage to a ZPool ==

'''Adding a spare:'''
{{{
zpool add vol0 spare /dev/sde
}}}
This adds a drive to be used as a hot spare in the event that one of the drives in the zpool fails.
{{{
root@ubuntu:~# zpool status -v
  pool: vol0
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        vol0        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        spares
          sde       AVAIL
}}}

'''Adding additional storage to a MIRROR, RAIDZ or RAIDZ2 zpool:'''

Additional storage must be added to a mirror, raidz or raidz2 zpool in like raid groups, that is, by adding another vdev of the same type. Example: if I have a 3-drive raidz2 zpool...
{{{
root@ubuntu:~# zpool status
  pool: vol0
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        vol0        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
}}}
...additional storage must be added by introducing another 3-drive raidz2 raid group to the zpool.
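Before committing the change, the resulting layout can be previewed with a dry run. This is a minimal sketch, assuming the three new drives are /dev/sde, /dev/sdf and /dev/sdg; the -n flag asks zpool add to print the configuration it would create without actually modifying the pool:
{{{
# Preview the layout without changing the pool (dry run)
zpool add -n vol0 raidz2 /dev/sde /dev/sdf /dev/sdg
}}}
Once the preview looks right, the same command without -n performs the addition: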
{{{
zpool add vol0 raidz2 /dev/sde /dev/sdf /dev/sdg
}}}
So my vol0 zpool now looks like...
{{{
root@ubuntu:~# zpool status
  pool: vol0
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        vol0        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0

errors: No known data errors
}}}

''The same is true for zpool mirror and raidz sets.''

== Deleting a ZPool ==

'''Deleting a zpool and all of the data within it:''' zpool destroy

Usage is...
{{{
zpool destroy nameofzpool
}}}
Example:
{{{
zpool destroy vol0
}}}

== Removing a Zpool ==

If you have a zpool on, for example, a USB drive, this command will allow you to safely remove it:
{{{
zpool export nameofzpool
}}}
If you want to import the zpool again:
{{{
zpool import nameofzpool
}}}
Running zpool import with no arguments will list all zpools available for import.

== Creating a Zpool on a disk image ==

Though not recommended for normal use, it is possible to create a zpool on top of a file.
{{{
dd if=/dev/zero of=filename.img bs=1M count=1000
zpool create nameofzpool /absolute/path/to/filename.img
}}}
This will create an image of 1 GB and a zpool on top of it. It is also possible to use a sparse image; to create an image file that can hold 100 GB:
{{{
dd if=/dev/zero of=filename.img bs=1k count=1 seek=100M
}}}
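File-backed pools like this are a convenient way to experiment with the raidz layouts described above without spare hardware. The following is a rough sketch, assuming hypothetical image files under /tmp/zfs-test and a pool named testpool; note that an exported file-backed pool has to be re-imported with -d pointing at the directory holding the images, since zpool import only scans device nodes by default:
{{{
# Create three 1 GB backing files (paths and names are examples only)
mkdir -p /tmp/zfs-test
dd if=/dev/zero of=/tmp/zfs-test/disk1.img bs=1M count=1000
dd if=/dev/zero of=/tmp/zfs-test/disk2.img bs=1M count=1000
dd if=/dev/zero of=/tmp/zfs-test/disk3.img bs=1M count=1000

# Build a raidz pool on top of the files (absolute paths are required)
zpool create testpool raidz /tmp/zfs-test/disk1.img /tmp/zfs-test/disk2.img /tmp/zfs-test/disk3.img

# File-backed pools are not found automatically after an export;
# point zpool import at the directory containing the images
zpool export testpool
zpool import -d /tmp/zfs-test testpool

# Clean up when finished
zpool destroy testpool
}}}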