Very terse notes…
- ZFS linux implementation: http://zfsonlinux.org/
- Installation CentOS : http://zfsonlinux.org/epel.html
- ZFS docs : http://docs.oracle.com/cd/E19253-01/819-5461/index.html
#1 ZFS Install
CentOS Linux release 7.1.1503 (Core) (on VMWare Version 7.1.0)
Before installing you should run
$ yum upgrade
Then follow steps from http://zfsonlinux.org/epel.html
$ sudo yum localinstall --nogpgcheck https://download.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
$ sudo yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
$ sudo yum install kernel-devel zfs
After installation try with :
$ zpool list
If you get the following error:
Failed to load ZFS module stack. Load the module manually by running 'insmod /zfs.ko' as root.
check whether the zfs module is actually available and loaded:
$ cat /proc/filesystems
...
nodev hugetlbfs
nodev autofs
nodev pstore
nodev mqueue
nodev selinuxfs
      xfs
nodev zfs
$ lsmod | grep zfs
zfs 2236101 4
zunicode 331170 1 zfs
zavl 14933 1 zfs
zcommon 51315 1 zfs
znvpair 89086 2 zfs,zcommon
spl 91752 3 zfs,zcommon,znvpair
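If zfs does not show up in lsmod, loading the module by hand and making it load at boot usually resolves the error above (a minimal sketch, not from the original notes; the modules-load.d path assumes systemd, as on CentOS 7):

```shell
# Load the ZFS kernel module now
sudo modprobe zfs

# Make it load automatically on every boot (systemd reads this directory)
echo zfs | sudo tee /etc/modules-load.d/zfs.conf

# Verify the module stack is present
lsmod | grep zfs
```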
#2 Test (create pool)

Using VMWare, create 3 disks for test purposes (1G each is enough).
Use the shell to determine all available/attached hard disks on the Linux box:
$ lsblk -io KNAME,TYPE,SIZE,MODEL
KNAME TYPE SIZE MODEL
sda disk 20G VMware Virtual S
sda1 part 500M
sda2 part 19.5G
dm-0 lvm 17.5G
dm-1 lvm 2G
sdb disk 1G VMware Virtual S
sdc disk 1G VMware Virtual S
sdd disk 1G VMware Virtual S
sr0 rom 66.6M VMware IDE CDR10
Now create a new pool :
$ zpool create tank mirror sdb sdc
invalid vdev specification
use '-f' to override the following errors:
/dev/sdb does not contain an EFI label but it may contain partition
information in the MBR.
You have to create a GPT label on sdb and sdc:
$ fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xe3225926.
Command (m for help): g
Building a new GPT disklabel (GUID: 4993FDA1-5E3D-4A4D-91CB-77BD0C85BD98)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
(please do the same on /dev/sdc)
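If you prefer not to walk through fdisk interactively, the same GPT label can be written with a one-liner (a sketch under the assumption that parted and partprobe are installed, which they are on a default CentOS 7 box):

```shell
# Write an empty GPT label on both test disks
# (this destroys any existing partition data on them!)
sudo parted -s /dev/sdb mklabel gpt
sudo parted -s /dev/sdc mklabel gpt

# Ask the kernel to re-read the partition tables
sudo partprobe /dev/sdb /dev/sdc
```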
Now create the pool again:
$ zpool create tank mirror sdb sdc
$ zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
tank 1008M 56.5K 1008M - 0% 0% 1.00x ONLINE -
$ mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
...
tank on /tank type zfs (rw,relatime,seclabel,xattr,noacl)
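With the pool mounted you can already create datasets and put some data on it (a quick sanity check, not part of the original notes; the dataset name tank/test is arbitrary):

```shell
# Create a child dataset; it is mounted automatically under /tank/test
sudo zfs create tank/test

# Write ~10 MB of random test data
sudo dd if=/dev/urandom of=/tank/test/blob bs=1M count=10

# Verify the dataset and the pool see the allocation
sudo zfs list -r tank
sudo zpool list tank
```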
#3 Simulate disk crash (sdb)
Using VMWare, remove a disk (for example /dev/sdb) which was part of the pool created above (we are simulating that sdb is broken).

After you have removed sdb via the VMWare panel (note that VMWare shifts the
remaining disks: the old sdc now appears as the new sdb), execute:
$ zpool status tank
cannot open 'tank': no such pool
$ zpool import
pool: tank
id: 17334258515364360988
state: DEGRADED
status: One or more devices contains corrupted data.
action: The pool can be imported despite missing or damaged devices. The
fault tolerance of the pool may be compromised if imported.
see: http://zfsonlinux.org/msg/ZFS-8000-4J
config:
tank DEGRADED
mirror-0 DEGRADED
sdb UNAVAIL
sdc ONLINE
tank can be imported, but only as degraded (sdb is unavailable)… so import tank even though the pool is in state DEGRADED:
$ zpool import tank
$ zpool status
  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: none requested
config:
        NAME                               STATE     READ WRITE CKSUM
        tank                               DEGRADED     0     0     0
          mirror-0                         DEGRADED     0     0     0
            12026337132713439253           FAULTED      0     0     0  was /dev/sdb1
            pci-0000:00:10.0-scsi-0:0:2:0  ONLINE       0     0     0
errors: No known data errors
Replace the broken disk with the spare disk using zpool replace:
$ zpool replace tank sdb sdc
Note: since removing the old sdb made VMWare shift the old sdc to sdb, the
disk that used to be sdd now appears as sdc; it is this disk that replaces
the faulted mirror member (formerly /dev/sdb).
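After the replace, ZFS resilvers the mirror onto the new disk. You can watch the progress and then run a scrub to verify the data (standard zpool commands; the exact output wording varies by ZFS version):

```shell
# Watch the resilver; the 'scan:' line shows its progress
sudo zpool status tank

# Once resilvered, verify all checksums on the pool
sudo zpool scrub tank
sudo zpool status tank   # scrub result appears in the 'scan:' line
```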