
Need to deal with expanding my storage on the server and I've been putting it off for a while. I haven't looked in depth, but at a glance (prompted by Torvalds' whining about it a while back) ZFS looked like it might be pretty useful for a non-RAID multi-disk enclosure.

Up till now I've just been using LVM to make it easy to grow and span disks, but with zero redundancy involved.

Hardware RAID has always made me nervous. Probably too many horror stories from many years ago when cheap motherboards first started including it and then randomly forgetting the RAID configuration. I have an enclosure on order, but it doesn't do RAID. If I can afford the drives to fill a second one, I'll probably get a second enclosure as well.

- Is ZFS a decent solution to surviving a drive failure?
- Am I just a big paranoid wimp? Should I instead spend the extra dollars for a (no-name) enclosure that supports RAID?
- Should I instead be using some combination of software RAID and a filesystem that's fancier than EXT3 (or whatever the Mint/Debian default is these days) to feel better about potential drive failures?

At the moment my solution is "mirror the disk to an offsite Raspi" but I'd like to turn that into more of a targeted backup solution for higher value stuff, and use some sort of redundancy locally for the less critical things. The Pi disk is the same size as the current disk(s), which are nearly full and I really don't want to throw more disk at the Pi.

A question I forgot about when I wrote the prior toot:

- If I'm just going with an array of spinning disks, 5400 RPM is sufficient these days, right? What with all the platters they jam in there and the bit density and the fact that I'll be spreading the load over multiple disks.

My gut says 7200 RPM was more important when the bits were farther apart and the platters fewer. But I also could be way off base on the platters thing. I haven't had my finger on the pulse of tech in a while.


@SetecAstronomy I can hopefully answer the first question. My ZFS setup has survived two drive failures so far. I run FreeNAS so it’s in ez-mode. I take the bad drive out, put the new one in, click a couple of buttons in the webUI, and a few hours later the new disk is resilvered and the alerts are cleared.

@ryen I'll peek at FreeNAS, cause easier sounds better. Although this box is already doing a lot more than just serving disk, so dunno if that's an issue.

@SetecAstronomy @ryen To be fair, swapping out bad drives in a zfs array is that easy even if it's on the cli. The hardest part is identifying which drive is dead, if you didn't keep track initially.
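
Roughly what that looks like, assuming a pool named tank and placeholder device names:

    # see which disk zfs thinks is dead
    $ zpool status tank
    # after physically swapping the hardware, resilver onto the new disk
    $ zpool replace tank /dev/disk/by-id/ata-OLD_DEAD /dev/disk/by-id/ata-NEW_DISK
    # watch the resilver run until the pool goes back to ONLINE
    $ zpool status -v tank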

@sungo @ryen Ah. I like cli most of the time, provided I can remember the commands, or the keywords to look them up. I've had a few fun moments with LVM because of that.

@SetecAstronomy @ryen loads of zfs docs out there and the man pages are really good on ubuntu, solaris, illumos, and freebsd.

@SetecAstronomy FreeNAS can do VMs and jails as well. I used to have an Ubuntu VM on it running a few Docker containers, along with a syncthing jail, but I offloaded those once I got another server.

@ryen VMs? Docker? Too many layers! Confusion reigns! SETEC ANGRY! SMASH!

Uh, I don't think that's for me.

@SetecAstronomy Ya, it’s a bit much. I was migrating another setup over and already had everything set up in a YAML file, so copying it over and running one command to get back up was ideal.

@SetecAstronomy @ryen if all you want is zfs storage, and are comfortable with a cli, freenas, in my experience, is way overkill. it's just wrapping some cli commands in php and putting them on a web page.

iX is also pivoting into zfs on linux and while they've said that freebsd based freenas isn't going anywhere, I have doubts as to its future.

@SetecAstronomy Regardless of the rest, I strongly caution against buying some no-name box that claims to support RAID. If nothing else, you can't see what they're doing, so who knows what's happening. I've never had good luck with them and have lost data several times.

With ZFS, surviving a drive failure is its most boring awesomeness :) But it is difficult to expand the storage capacity of a zfs array later. That is its biggest problem, in my opinion.

@sungo I doubt I'd ever trust hardware RAID unless it was enterprise grade stuff from a serious vendor. It's hard to pin down all the reasons why, but it makes me nervous as all get out. My Dad has been doing data extraction and conversion for a long time, and their experiments with RAID have never ended well.

The enclosure is also 2/3 the price without that option, funnily enough.

LVM and EXT are easy to expand, I take it ZFS is a wee bit too complicated to do something similar?

@SetecAstronomy LVM and ZFS are not easy to compare because they have different use cases and evolved in different environments. ZFS is designed for large, important storage arrays, like those you find in servers. They don't tend to get expanded much. This snackoverflow post covers, at a high level, what your options are for expanding an array. It's, sadly, not pretty.

https://serverfault.com/questions/190207/how-can-i-add-one-disk-to-an-existing-raidz-zpool#190237
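
The short version of the options in that post, sketched against a hypothetical pool named tank with made-up device names:

    # option 1: add a whole new vdev (e.g. another mirror pair); total capacity
    # grows, but you can't add a single disk to an existing raidz vdev
    $ zpool add tank mirror /dev/disk/by-id/ata-NEW1 /dev/disk/by-id/ata-NEW2

    # option 2: swap every disk in the existing vdev for a bigger one, one at a
    # time, resilvering between swaps; once they're all bigger the pool grows
    $ zpool set autoexpand=on tank
    $ zpool replace tank /dev/disk/by-id/ata-OLD_SMALL1 /dev/disk/by-id/ata-NEW_BIG1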

@sungo I suspect I just need to start thinking more in terms of separating things by directory structure and less in terms of "this volume is for this and that one is for that"

@SetecAstronomy it’ll make your life easier with zfs, for sure. I have fast and slow pools in my file server (ssd vs spinny) so I do think about use case there. But otherwise, data gets divvied up essentially by directory. docker stuff lives in the slow/docker dataset which is mounted at /var/lib/docker. /home is in fast/home. I can then snapshot home more than docker or whatever.
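
To sketch that layout (pool names match mine above, dataset and snapshot names are just examples):

    # one dataset per job, mounted wherever it needs to live
    $ zfs create -o mountpoint=/var/lib/docker slow/docker
    $ zfs create -o mountpoint=/home fast/home
    # then snapshot home more aggressively than docker
    $ zfs snapshot fast/home@nightly-1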

@sungo I don't presently have any SSD in the server, at least I don't think so. I hadn't even considered that. Most of what I want is:

- A place to archive things that don't get accessed too often
- A place to host a few game servers on occasion (Minecraft eats up a LOT of disk and then things get pretty messy when you run out of space mid-game)
- Somewhere to put media for streaming on the LAN, at the moment it's just me but possibly one more session
- The Etherpad for WAN
- NextCloud (two users peak)

I don't think any of that needs anything too fast. I haven't felt like I'm suffering for using the spinning disks I have now.

But I also haven't messed with any real snapshots other than some weird encrypted rsync stuff to an offsite Pi, which I'm pretty sure is slow for a lot of other reasons.

I also want to do some security camera archival, but I'll just grab some WD Purples and probably won't even bother with redundancy for those.

@SetecAstronomy Yeah I think you’re fine on spinny. zfs will let you divvy that up just nicely. snapshots in zfs are brilliant and I love them. They’ve saved my ass so many times and make backing up the drives so much easier.

For your stuff, I’d probably mimic my server which runs zfs on ubuntu with docker. Docker has specific filesystem handlers to make use of zfs features. And with qemu, you can give it a zvolume as a “drive”. If you go that route, though, hit me up before you do. Current ubuntu needs some extra care up front to keep it from installing a bunch of weird custom zfs tooling.
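
A rough sketch of both of those, with made-up pool and dataset names:

    # docker: put /var/lib/docker on a zfs dataset, then point the daemon at the
    # zfs storage driver via /etc/docker/daemon.json:
    #   { "storage-driver": "zfs" }

    # qemu: carve out a zvol and hand it to the vm as a raw block device
    $ zfs create -V 40G tank/vm-disk0
    $ qemu-system-x86_64 -m 2048 -drive file=/dev/zvol/tank/vm-disk0,format=raw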

@sungo I already have a server running a relatively recent version of Mint (Debian based) that has most of these things set up on it. What do I need Docker for? My experiences with it in the past have all ended in disaster or tears.

@sungo And by "most of these things" I meant the things I intended to use it for. It's been running for a few years. The only addition would be the camera software which wasn't anything too complex from what I remember.

@SetecAstronomy If you're running a recent version of linux, you should be able to load the zfs modules and utils. You could then put a usb stick or somesuch in the box, create a single drive "mirror", and then play with it and see what you think. Apart from the whole "replacing a drive" situation, the tooling works the same on a single drive "mirror" as it does with a huge 20 drive raidz.
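
Something like this, assuming the stick shows up as /dev/sdX (placeholder name, and this will wipe it):

    $ sudo apt install zfsutils-linux    # userland tools; plain Debian may also want zfs-dkms
    $ sudo zpool create -f testpool /dev/sdX
    $ zpool status testpool
    $ zfs list
    # tear it down when you're done playing
    $ sudo zpool destroy testpool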

@sungo Thank you for reminding me of that. That's exactly what I did with LVM. I grabbed some older USB drives that weren't being used and had myself a little lab with em.

@SetecAstronomy if you zfs, you can cache in ram automatically or put a usb SSD in for read cache. Then who gives a shit about drive RPM? :)

@sungo Excellent. I bet there's actually a SATA SSD doing _something_ in that box, I'm missing the one I originally bought for this desktop. Just not sure if it's hosting the OS or something even easier to move.

@SetecAstronomy zfs has options that can use drives as read and write caches, basically. Having a write cache on slow drives is great, particularly since you can use a mirror as the write cache and thus not worry too much about losing bits due to a failure.

@SetecAstronomy it's a great use for any small drives laying around.
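
For the record, that looks roughly like this (pool named tank, made-up device names):

    # "cache" adds an l2arc read cache; any spare ssd will do
    $ zpool add tank cache /dev/disk/by-id/usb-SPARE_SSD
    # "log" adds a slog for sync writes; mirror it so a dead log device can't
    # eat in-flight writes
    $ zpool add tank log mirror /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2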