Lovely home NAS people, question for you:
I need around 12TB of capacity. Should I do a mirror vdev with 2x12TB drives, or RAIDZ with 3x6TB? Performance doesn't matter; reliability and power consumption do.
After expanding my vdev, I ran a script to rewrite all the files on disk so they'd use the new data-to-parity ratio. My math was right, and this returned 4TB to me.
The math for a simple one vdev pool:
parity = data * (raidz / (disks - raidz))
3 disks, raidz1: 24TB * (1 / (3 - 1)) = 12TB parity
4 disks, raidz1: 24TB * (1 / (4 - 1)) = 8TB parity
This is about as advanced with math as I get.
This script worked well. I appreciated the `--skip-hardlinks` option.
https://github.com/markusressel/zfs-inplace-rebalancing
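The parity formula above can be sanity-checked with a quick sketch (a hypothetical helper of my own, not part of the linked script):

```python
def raidz_parity(data_tb: float, disks: int, raidz: int) -> float:
    """Parity overhead for a single-vdev RAIDZ pool:
    parity = data * (raidz / (disks - raidz))."""
    return data_tb * raidz / (disks - raidz)

# 24TB of data on a 3-disk raidz1: 12TB of parity
print(raidz_parity(24, 3, 1))  # 12.0
# After expanding to 4 disks: 8TB of parity -- the 4TB that came back
print(raidz_parity(24, 4, 1))  # 8.0
```

The 4TB difference between the two results is exactly what the rebalancing pass returned.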
@andreasgoebel in FreeBSD and Linux discussions, ZFS typically means the open-source OpenZFS:
― OpenZFS abbreviated to ZFS
#ZFS and #OpenZFS tagged in my previous toot.
(Oracle Solaris ZFS nowadays <https://docs.oracle.com/cd/E23824_01/html/821-1448/index.html> is not open source.)
Ah, I just have the Data VDEVs but nothing for the other stuff... not sure how important those are.
I am somewhat stuck on "VDEVs not assigned" in TrueNAS after I did a system reinstall...
@ahoyboyhoy @andreasgoebel in addition to ZFSBootMenu …
I'm looking at zectl, <https://ramsdenj.com/posts/2020-03-18-zectl-zfs-boot-environment-manager-for-linux/>
― zectl ZFS Boot Environment Manager for Linux · John Ramsden
Back to Manjaro. Reading <https://github.com/calamares/calamares/issues/533#issuecomment-971746992> (2021) alongside <https://en.wikipedia.org/wiki/Calamares_(software)>, I wonder why ZFS on root is not an option.
<https://www.theregister.com/2024/08/01/linux_rollback_options/> @lproven mentions licensing …
― Linux updates with an undo function? Some distros have that • The Register
Maybe I'll never need to undo :-)
<https://en.opensuse.org/OpenZFS#ZFS_on_Root> openSUSE does not yet support ZFS on root.
It's not an essential feature, but I'm accustomed to it, so Kubuntu might be an option.
I didn't include these two distros in the poll because my experience with them is so outdated (I toyed with OpenSUSE, maybe on PowerPC, around two decades ago …).
#SelfHosting #zfs
On Alpine Linux.
Had several 'segmentation fault' errors when running 'incus copy' to a remote server.
Copy uses zfs send/receive behind the scenes.
smartctl is reporting errors on the disk; that cannot be a coincidence.
A replacement disk will be on its way soon.
I need an AED hooked up and charged next time I manually do zpool import/export messing around. The first part of this output nearly gave me a heart attack.
Done! Pretty close to 1 TB/hour for the expansion.
expand: expanded raidz1-0 copied 34.5T in 1 days 09:17:46, on Wed Apr 2 03:44:38 2025
Interestingly, the read and write speed declined at a steady pace as the expansion went on. I wonder why. My first thought is the old effect where the outer tracks of a disk perform better than the inner ones.
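The headline rate checks out with a little arithmetic on the status line above (a rough sketch, treating the reported units loosely):

```python
# Average throughput from the final status line: 34.5T copied in 1 day 09:17:46.
hours = 24 + 9 + 17 / 60 + 46 / 3600  # ~33.30 hours total
rate = 34.5 / hours                   # TB per hour
print(f"{rate:.2f} TB/hour")          # ~1.04, "pretty close to 1 TB/hour"
```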
Rebooting Petabyte Control Node
am rebooting one of the control nodes for a petabyte+ storage array, after 504 days of system uptime..
watching kernel log_level 6 debug info scroll by on the SoL terminal via iDRAC..
logs scrolling, the array of SAS3 DE3-24C double-redundant SFF linked Oracle/Sun drive enclosures spin-up and begin talking to multipathd...
waiting for Zpool cache file import..
waiting.. 131 / 132 drives online across all enclosures.. hmm.. what's this now...
> transport_port_remove: removed: sas_addr(0x500c04f2cfe10620)
well ffs
> 12:0:10:0: SATA: handle(0x0017), sas_addr(0x500c04f2cfe10620), phy(32),
oh, that's a SATA drive on the system's local enclosure bay for scratch data, it's not part of the ZFS pool..
next step, not today, move control nodes to a higher performance + lower wattage pair of FreeBSD servers
Finally back home to my computers. First order of business: vdev expansion! One `zpool attach` command and we're off to the races.
expand: expansion of raidz1-0 in progress since Mon Mar 31 18:26:52 2025
45.6G / 34.5T copied at 354M/s, 0.13% done, 1 days 04:21:48 to go
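Those status-line numbers are self-consistent, as a rough check shows (assuming zpool's usual binary units: T = TiB, G = GiB, M = MiB/s):

```python
# Rough ETA check for the status line above.
remaining_mib = 34.5 * 1024**2 - 45.6 * 1024  # left to copy, in MiB
eta_hours = remaining_mib / 354 / 3600        # at 354 MiB/s
print(f"{eta_hours:.2f} hours")               # ~28.35, i.e. about "1 days 04:21:48"
```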
Latest 𝗩𝗮𝗹𝘂𝗮𝗯𝗹𝗲 𝗡𝗲𝘄𝘀 - 𝟮𝟬𝟮𝟱/𝟬𝟯/𝟯𝟭 (Valuable News - 2025/03/31) available.
https://vermaden.wordpress.com/2025/03/31/valuable-news-2025-03-31/
Past releases: https://vermaden.wordpress.com/news/
Being on a dive boat in the Andaman Sea last week, I missed that OpenZFS 2.3.1 is now in Debian stable backports!
Perfect timing to get home to my server and do some vdev expansion! #zfs
Being a #ZFS administrator today, successfully :
- Replaced an old small disk in the mirror with a new one
- Expanded the pool to the new size
#ZFS is actually more fun than it seemed, now that I understand the difference between a pool and a dataset and how to mirror vdevs. I guess I will have to copy all my data twice now to move onto a mirror setup. I am rethinking how to do this, as my primary need for #RAID is to ensure integrity for my backups from the cloud services I host / manage.
I do love being able to expand a ZFS pool by just replacing each side of a mirrored pair with a larger disk in turn, then expanding the pool to the new size once both are resilvered. And all with trust that my data still resides on the new disks as it did on the old ones. (The old disks will be kept on the shelf for a while, naturally.) #homelab #zfs #smartos