FreeNAS
Properly init and add disks to your pool via command line
This is a great start for ZFS newbies: https://forums.freenas.org/index.php?threads/slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs.7775/
In this quest of mine I got great help from this article: https://forums.freenas.org/index.php?threads/building-pools-from-the-cli.17540/
I have a FreeNAS server with 1x2TB disk. I want to replace it with 2x4TB disks in a mirror configuration. This is how I did it!
# zpool status
  pool: BigData
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        BigData                                       ONLINE       0     0     0
          gptid/a37b301e-eb46-11e7-b559-6cf049956cad  ONLINE       0     0     0

errors: No known data errors
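It is also worth noting the pool's current capacity before starting, so you have a baseline to compare against once the upgrade is done:

# Record the current pool size for later comparison.
zpool list BigData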
1) Find out the dev nodes for your new disks
ls -l /dev/ada*
zpool status
glabel list
The ls command shows the dev nodes of all your disks. The zpool command shows the gptid of every disk currently included in your pools. "glabel list" gives you the mapping between gptids and dev nodes. With this information you know which /dev/adaX devices are your new/unused disks.
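If you also want to check which physical drive sits behind each dev node, camcontrol lists every attached drive with its model string; this is just a convenience, not part of the original steps:

# List attached drives with their model strings, so you can match
# the physical disks to their /dev/adaX nodes.
camcontrol devlist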
2) If any partitions currently exist on the disks, clear them out (ada0 and ada1 in my case, but they may be different for you)
gpart destroy -F /dev/ada0
gpart destroy -F /dev/ada1
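To verify, run gpart show; disks without a partition table will simply be absent from the listing:

# ada0 and ada1 should no longer appear among the partitioned disks.
gpart show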
3) Initialize each disk with a GPT partition table, then create one swap partition and one ZFS partition on it
gpart create -s gpt /dev/ada0
gpart add -a 4096 -i 1 -s 2g -t freebsd-swap /dev/ada0
gpart add -a 4096 -i 2 -t freebsd-zfs /dev/ada0

gpart create -s gpt /dev/ada1
gpart add -a 4096 -i 1 -s 2g -t freebsd-swap /dev/ada1
gpart add -a 4096 -i 2 -t freebsd-zfs /dev/ada1
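It is worth sanity-checking the new layout before building anything on top of it:

# Each disk should show a 2 GB freebsd-swap partition (index 1)
# followed by a freebsd-zfs partition (index 2) filling the rest.
gpart show ada0 ada1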
4) The disks are now ready to be used in a pool! Run glabel status and note down the new gptids of your disks
glabel status
                                      Name  Status  Components
gptid/fcb8a69a-eb13-11e7-8eaf-6cf049956cad     N/A  ada2p1
gptid/a37b301e-eb46-11e7-b559-6cf049956cad     N/A  ada3p2
gptid/a35ca845-eb46-11e7-b559-6cf049956cad     N/A  ada3p1
gptid/8d0dbf30-f6ce-11e7-a931-6cf049956cad     N/A  ada0p1
gptid/96286393-f6ce-11e7-a931-6cf049956cad     N/A  ada0p2
gptid/b791e4cb-f6ce-11e7-a931-6cf049956cad     N/A  ada1p1
gptid/bbb96c09-f6ce-11e7-a931-6cf049956cad     N/A  ada1p2
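Since only the ZFS partitions (p2) on the new disks matter for the pool, a quick grep narrows the output; adjust the device names to match your system:

# Show only the gptids of the new ZFS partitions (ada0p2 and ada1p2 here).
glabel status | grep -E 'ada[01]p2'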
5) Now you can either create a new pool with zpool create, or attach the disks to an existing pool with zpool attach (a create example is sketched after the command below). I will start by attaching one of them as a mirror to my existing BigData pool:
zpool attach BigData gptid/a37b301e-eb46-11e7-b559-6cf049956cad gptid/bbb96c09-f6ce-11e7-a931-6cf049956cad
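For reference, if you were building a brand-new two-way mirror from these disks instead of attaching to an existing pool, the create variant would look roughly like this; the pool name NewPool is made up:

# Hypothetical alternative: create a fresh mirrored pool from both
# new ZFS partitions. "NewPool" is a placeholder name.
zpool create NewPool mirror \
    gptid/96286393-f6ce-11e7-a931-6cf049956cad \
    gptid/bbb96c09-f6ce-11e7-a931-6cf049956cad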
6) Now wait for the resilvering to complete
zpool status
  pool: BigData
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Jan 11 05:03:53 2018
        220G scanned at 423M/s, 118M issued at 228K/s, 1.67T total
        109M resilvered, 0.01% done, no estimated completion time
config:

        NAME                                            STATE     READ WRITE CKSUM
        BigData                                         ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/a37b301e-eb46-11e7-b559-6cf049956cad  ONLINE       0     0     0
            gptid/bbb96c09-f6ce-11e7-a931-6cf049956cad  ONLINE       0     0     0  (resilvering)

errors: No known data errors
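If you do not want to keep re-running the command by hand, a simple shell loop polls the progress lines; the one-minute interval is arbitrary:

# Print the resilver progress once a minute; stop with Ctrl-C.
while sleep 60; do zpool status BigData | grep -E 'scanned|resilvered'; done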
7) Now we are ready to replace the old drive
zpool replace BigData gptid/a37b301e-eb46-11e7-b559-6cf049956cad gptid/96286393-f6ce-11e7-a931-6cf049956cad
Wait again for the resilvering to complete. When it finishes, ZFS automatically detaches the old drive, and the pool is up and running entirely on the new disks:
zpool status
  pool: BigData
 state: ONLINE
  scan: resilvered 1.67T in 0 days 09:50:37 with 0 errors on Fri Jan 12 09:23:46 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        BigData                                         ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/96286393-f6ce-11e7-a931-6cf049956cad  ONLINE       0     0     0
            gptid/bbb96c09-f6ce-11e7-a931-6cf049956cad  ONLINE       0     0     0

errors: No known data errors
8) Note, however, that the size of the pool is still 1.67T when it should be close to 4T, since we replaced the original 2TB disk with 2x4TB drives. So tell ZFS to expand each device to the full size of its new partition, then reboot the system.
zpool online -e BigData gptid/96286393-f6ce-11e7-a931-6cf049956cad
zpool online -e BigData gptid/bbb96c09-f6ce-11e7-a931-6cf049956cad
reboot
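After the reboot, confirm the new capacity; the exact figures depend on your disks:

# SIZE should now report roughly the capacity of one 4TB disk.
zpool list BigData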