#zfs

After expanding my vdev, I ran a script to re-write all the files on disk so they'd use the new parity level. My math was right and this returned 4TB to me.

The math for a simple one vdev pool:
parity = data * (raidz / (disks - raidz))

24TB * (1 / (3 - 1)) = 12TB of parity
24TB * (1 / (4 - 1)) = 8TB of parity

Dropping from 12TB to 8TB of parity is where the 4TB came back. This is about as advanced as my math gets.
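
Here's the same formula as a tiny shell sketch, in case anyone wants to plug in their own numbers (24TB of data and raidz1 are just the values from above):

data=24; raidz=1                            # TB of data, raidz parity level
echo "$data * $raidz / (3 - $raidz)" | bc   # 3 disks -> 12 (TB of parity)
echo "$data * $raidz / (4 - $raidz)" | bc   # 4 disks -> 8 (TB of parity)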

This script worked well. I appreciated the `--skip-hardlinks` option.
github.com/markusressel/zfs-in

Replied in thread

@ahoyboyhoy @andreasgoebel in addition to ZFSBootMenu …

I'm looking at zectl, <ramsdenj.com/posts/2020-03-18->

― zectl ZFS Boot Environment Manager for Linux · John Ramsden

Back to Manjaro. Reading <github.com/calamares/calamares> (2021) alongside <en.wikipedia.org/wiki/Calamare>, I wonder why ZFS on root is not an option.

<theregister.com/2024/08/01/lin> @lproven mentions licensing …

― Linux updates with an undo function? Some distros have that • The Register

Maybe I'll never need to undo :-)

― John Ramsden: I'm happy to announce a new ZFS boot environment manager written completely from scratch in C - zectl. In 2018 I wrote zedenv, a ZFS Boot Environment manager; I've taken what I learned from zedenv and added improvements in workflow, performance and reliability. For a summary on what a boot environment manager is, and how it can be used, see my previous post.

Why the Rewrite: I had been having misgivings about writing my original implementation in Python. At the time of writing there was no libzfs library interface for Python and I wrote my own "wrapper library" - pyzfscmds - that simply called out to the zfs binary. While the wrapper has worked, it meant a lot of extra work was done parsing string output from zfs subcommands. Directly using the libzfs library allows for more robust code, significantly better performance, and error handling. I was considering porting the Python tool to use py-libzfs, or writing it in C, when the tool bectl came out for FreeBSD. Seeing bectl's impressive implementation I was inspired to do the rewrite in C.
Replied in thread

<en.opensuse.org/OpenZFS#ZFS_on> openSUSE does not yet support ZFS on root.

It's not an essential feature, but I'm accustomed to it, so Kubuntu might be an option.

I didn't include these two distros in the poll because my experience with them is so outdated (I toyed with OpenSUSE, maybe on PowerPC, around two decades ago …).

#Kubuntu #OpenSUSE #ZFS #OpenZFS #Linux

@andreasgoebel

― OpenZFS - openSUSE Wiki · en.opensuse.org

I need an AED hooked up and charged the next time I mess around manually with zpool import/export. The first part of this output nearly gave me a heart attack.
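
(For reference, the manual dance is just export and import; "tank" is a hypothetical pool name:)

zpool export tank    # cleanly detach the pool
zpool import         # scan and list pools available for import
zpool import tank    # bring it back online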

Continued thread

Done! Pretty close to 1 TB/hour for the expansion.

expand: expanded raidz1-0 copied 34.5T in 1 days 09:17:46, on Wed Apr 2 03:44:38 2025

Interestingly, the read and write speed declined at a steady pace as the expansion went on. I wonder why. My first thought goes back to the old days, when the inner and outer tracks of a disk had different performance.
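
Sanity-checking that figure against the status line (just bc doing the arithmetic):

echo "scale=2; 34.5 / (24 + 9 + 17/60 + 46/3600)" | bc   # 34.5T over 1d 09:17:46 ≈ 1.03 TB/hour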

💾 Rebooting Petabyte Control Node 💾

am rebooting one of the control nodes for a petabyte+ storage array, after 504 days of system uptime..

watching kernel log_level 6 debug info scroll by on the SoL terminal via iDrac..

logs scrolling, the array of SAS3 DE3-24C double-redundant SFF linked Oracle/Sun drive enclosures spins up and begins talking to multipathd...

waiting for Zpool cache file import..

waiting.. 131 / 132 drives online across all enclosures.. hmm.. what's this now...

> transport_port_remove: removed: sas_addr(0x500c04f2cfe10620)

well ffs 😒

> 12:0:10:0: SATA: handle(0x0017), sas_addr(0x500c04f2cfe10620), phy(32),

oh, that's a SATA drive on the system's local enclosure bay for scratch data, it's not part of the ZFS pool.. 😌

next step, not today, move control nodes to a higher performance + lower wattage pair of FreeBSD servers 💗

Finally back home to my computers. First order of business: vdev expansion! One `zpool attach` command and we're off to the races.

expand: expansion of raidz1-0 in progress since Mon Mar 31 18:26:52 2025
45.6G / 34.5T copied at 354M/s, 0.13% done, 1 days 04:21:48 to go
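
For anyone curious what kicks that off: raidz expansion (OpenZFS 2.3+) is a single attach against the existing raidz vdev; the pool, vdev and device names below are only illustrative:

zpool attach tank raidz1-0 /dev/disk/by-id/ata-NEWDISK
zpool status tank    # progress then shows up under the "expand:" line, as above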

#ZFS is actually more fun than it seemed now that I understand the difference between a pool and a dataset and how to mirror vdevs. I guess I will have to copy all my data twice now to move onto a mirror setup. I am now rethinking how to do this, as my primary need for #RAID is to ensure integrity for my backups from cloud services I host / manage.
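
A minimal sketch of the pool/dataset split and a mirrored vdev, with made-up names and devices:

zpool create tank mirror /dev/sdb /dev/sdc   # pool = one mirrored vdev
zfs create tank/backups                      # dataset = a filesystem inside the pool
zfs set compression=lz4 tank/backups         # properties are set per dataset

Depending on the current layout, a single-disk vdev can also be converted into a mirror in place with zpool attach <pool> <existing-disk> <new-disk>, which might save one of those copies.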

I do love being able to expand a ZFS pool by replacing each side of a mirrored pair with a larger disk in turn, then expanding the pool to the new size once both are resilvered. And all with trust that my data still resides on the new disks just as it did on the old ones. (The old disks will be kept on the shelf for a while, naturally.)
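
That workflow, roughly, with illustrative names (autoexpand takes care of the final grow step):

zpool set autoexpand=on tank
zpool replace tank old-disk-1 bigger-disk-1   # wait for the resilver to finish
zpool replace tank old-disk-2 bigger-disk-2   # then swap the other side
zpool online -e tank bigger-disk-1            # only needed if autoexpand is off
zpool online -e tank bigger-disk-2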