@zwol @b0rk Oh, your point about tracing on macOS made me think about how much more exciting reading a file can be via #ZFS with L2ARC. I really don’t know how to debug all the places the kernel might have to read from to acquire the contents of a file. Probably @dexter does. But just reading an inode’s pointer to a block address is not deep enough.
@limepot @tomlawrence seriously, #ZFS is awesome!
https://www.youtube.com/watch?v=3oG-1U5AI9A
People frequently say they don’t want #ZFS on their workstation because it’s a server FS. But what’s the best workstation FS then? #UFS, #Ext4, #HAMMER, #Btrfs, #XFS? What does the going benchmark workload look like (rough sketch below)? What subjective aspects matter too? (And yes, I know about the damned lies of https://fsbench.filesystems.org/)
#Unix #illumos #BSD #Linux #xfs #btrfs #hammer #ext4 #ufs #ZFS
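For what it’s worth, the “going workload” in most write-ups boils down to one big sequential stream plus a pile of small random I/O. Here is a toy Python sketch of that shape, with made-up sizes and no cache control (real tools like fio handle that properly), just to make the question concrete:

```python
# Toy workstation-FS workload: one sequential write, then small random reads.
# File path, sizes, and iteration counts are illustrative assumptions only.
import os, random, time

PATH = "fsbench.tmp"   # hypothetical scratch file on the FS under test
SIZE_MB = 256          # assumed working-set size
BLOCK = 4096           # page-sized I/O

def seq_write():
    t0 = time.perf_counter()
    with open(PATH, "wb") as f:
        for _ in range(SIZE_MB * 1024 * 1024 // BLOCK):
            f.write(os.urandom(BLOCK))
        f.flush()
        os.fsync(f.fileno())  # force data out so we time the FS, not the page cache
    return time.perf_counter() - t0

def rand_read(iops=10000):
    size = os.path.getsize(PATH)
    t0 = time.perf_counter()
    with open(PATH, "rb") as f:
        for _ in range(iops):
            f.seek(random.randrange(0, size - BLOCK))
            f.read(BLOCK)
    return time.perf_counter() - t0

if __name__ == "__main__":
    print(f"sequential write: {SIZE_MB / seq_write():.1f} MB/s")
    print(f"4k random read:   {10000 / rand_read():.0f} ops/s")
    os.remove(PATH)
```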
@CobaltVelvet I have a hot take:
#RAID controllers are trash and should not exist.
Consider buying an #HBA [Host Bus Adapter] instead and use a real #Filesystem [#btrfs, #ext4] with actual RAID [#dmraid], or even better use #ZFS with its #RAIDZ option (sketch below)...
#raidz #ZFS #dmraid #ext4 #btrfs #filesystem #HBA #raid
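For anyone curious what that actually looks like: a minimal sketch of creating a RAIDZ2 pool on HBA-attached disks, wrapped in Python purely for illustration. The disk paths and the pool name "tank" are assumptions, and this wipes the disks, so treat it as a sketch rather than a recipe:

```python
# Minimal sketch: build a RAIDZ2 pool from a set of HBA-attached disks.
# Device names and pool name are assumptions; requires ZFS utilities and root.
import subprocess

DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # hypothetical disks

def create_raidz2_pool(name="tank"):
    # raidz2 tolerates the loss of any two member disks
    subprocess.run(["zpool", "create", name, "raidz2", *DISKS], check=True)
    # sanity check: show the pool layout and health
    subprocess.run(["zpool", "status", name], check=True)

if __name__ == "__main__":
    create_raidz2_pool()
```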
@MagicLike #NTFS is trash.
Even #ext2 runs circles around it, and don't even get me started on the superiority of #btrfs & #ZFS...
@lamp using #RAID without #dmraid only works well for things like #RAID1 on a boot partition, since that's the only way to keep it in sync and bootable...
I usually create LVM partitions for dmraid and keep at least 25% of each drive vacant so #WearLeveling, #GarbageCollection and #TRIM can do their work, rounding the size down so I can mix & match drives (rough arithmetic sketched below)
But then again going #ZFS is the smarter route...
#ZFS #trim #GarbageCollection #wearleveling #raid1 #dmraid #raid
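A minimal Python sketch of that sizing arithmetic, assuming two mismatched SSDs and the 25% reserve mentioned above; the drive sizes are just example numbers:

```python
# Sizing rule from the post above: take the smallest drive in the mix, leave
# ~25% unpartitioned as wear-leveling/GC/TRIM headroom, round down to whole GiB
# so mismatched drives end up with identical partition sizes.
GIB = 1024 ** 3

def partition_size_bytes(drive_sizes_bytes, reserve=0.25):
    usable = min(drive_sizes_bytes) * (1 - reserve)
    return int(usable // GIB) * GIB  # round down to a whole GiB

# Example: mixing a 500 GB and a 512 GB SSD (decimal marketing sizes)
drives = [500_000_000_000, 512_000_000_000]
print(partition_size_bytes(drives) / GIB, "GiB per partition")
```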
@marcel @ActionRetro This issue is OFC far less of a problem on modern systems with #Journaling #Filesystems and #Snapshots as well as #Checksumming...
So on #APFS, #ZFS & #btrfs, and even #ext4 in #journalless mode, the likelihood of losing data, unless one were to bake a device, is negligible (the checksumming idea is sketched below).
#journalless #ext4 #btrfs #ZFS #APFS #checksumming #snapshots #filesystems #journaling
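For illustration, a conceptual Python sketch of what per-block checksumming buys you: store a digest per block and re-verify it on every read, which is roughly the idea behind ZFS/btrfs catching silent corruption. The block size and use of SHA-256 are assumptions, not the real on-disk format:

```python
# Conceptual sketch of block-level checksumming: keep one digest per block and
# compare on re-read to detect bit rot. Not the actual ZFS/btrfs implementation.
import hashlib

BLOCK = 128 * 1024  # illustrative record size

def checksum_blocks(path):
    """Return a list of per-block digests for a file."""
    sums = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK):
            sums.append(hashlib.sha256(block).hexdigest())
    return sums

def verify(path, expected_sums):
    """Re-read the file and report the indices of blocks that no longer match."""
    current = checksum_blocks(path)
    return [i for i, (a, b) in enumerate(zip(current, expected_sums)) if a != b]
```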
@datacolada This is also feasible on "classical" Unix filesystems using hard links, and can be implemented at scale with #rsync (with a bit more effort than doing it on #ZFS, which handles this stuff automatically).
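A minimal Python sketch of the hard-link trick, with made-up file names: two directory entries end up pointing at one inode, so the duplicate name costs no extra data blocks (rsync's --link-dest applies the same idea across backup runs):

```python
# Two names, one inode: the "classical Unix" way to store a duplicate for free.
# File names are hypothetical.
import os

with open("package_1.0.tar.gz", "wb") as f:   # pretend mirrored file
    f.write(b"pretend this is a tarball")

os.link("package_1.0.tar.gz", "snapshot-2024-01-01.tar.gz")  # hard link, not a copy

a = os.stat("package_1.0.tar.gz")
b = os.stat("snapshot-2024-01-01.tar.gz")
print(a.st_ino == b.st_ino)   # True: same inode, same blocks on disk
print(a.st_nlink)             # 2: two names referencing that inode
```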
@datacolada I was going to reach out (but there are only 24h in a day). Yes, two files with identical contents, when stored on #ZFS, can point to the same blocks on disk (via snapshots/clones, or with deduplication enabled) and take up no extra space.
@datacolada I do believe the statement about storage size is inaccurate. MRAN (ex RevolutionAnalytics) used #ZFS for mirroring, which handles the storage part elegantly and transparently. This is implicit in https://github.com/RevolutionAnalytics/checkpoint-server
@Natanox in almost all cases, you'll be better off just backing up stuff regularly with deja-dup / duplicity [included in @ubuntu LTS Desktop by default] or #rsync (minimal sketch below), and using #btrfs, since unlike #ext4 and #ZFS it'll be less straining in terms of write operations [which are the limiting factor for SSDs], and you don't want to use "journal-less ext4" outside of embedded devices!...
#ZFS is great for NASes and servers, but there is a reason why not even #Canonical defaults to it on the #Desktop.
#Desktop #canonical #ZFS #ext4 #btrfs #rsync
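A minimal sketch of the rsync variant, assuming made-up source and destination paths: each run gets its own dated directory, and --link-dest hard-links anything unchanged since the previous run, so only modified files consume new space:

```python
# Incremental rsync backups with hard-linked snapshots. SRC/DEST and the
# previous snapshot name are assumptions; requires rsync on the PATH.
import subprocess
from datetime import date

SRC = "/home/user/"      # hypothetical source (trailing slash: copy contents)
DEST = "/mnt/backup"     # hypothetical backup disk

def backup(previous_run):
    today = f"{DEST}/{date.today().isoformat()}"
    subprocess.run([
        "rsync", "-a", "--delete",
        f"--link-dest={DEST}/{previous_run}",  # hard-link unchanged files
        SRC, today,
    ], check=True)

if __name__ == "__main__":
    backup("2024-01-01")  # illustrative name of the previous snapshot directory
```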
@alina You could use #vmware #ESXi & provide #Storage via #iSCSI using #ixSystems' #TrueNAS Core, which then uses #ZFS under the hood.
Or you could try out @ubuntu if you don't need a simple dashboard and are fine with virsh and kvm/qemu being run directly...
#ZFS #truenas #ixsystems #iscsi #storage #ESXi #vmware
@frainfostudent #git, cuz #svn is cringe and people who use folder copies don't know how to version.
Also just because one can use #ZFS and/or #btrfs snapshots for that doesn't mean one should..
@atarifrosch AUE!
Personally, I plan to only deploy #ZFS #RAIDZ-2 or RAIDZ-3, or #Linux #dmraid10, going forward, because the latter can grow dynamically like a RAID 5...