She liiiives! Installing TrueNAS now, featuring @belee. Filled up the DIMM slots with 192GB :blobcat_thinkingcool: don’t think I need a swap partition, no…
Also she’s actually super quiet! Which is fantastic. No worries about running her in the living room anymore.
[image eye contact]


Now please give suggestions on how to set up my storage! I have 10 drive slots (well, 12, but only 10 sleds for now).
I have 5x 2TB SSDs and 10x 6TB HDDs.
How do you suggest I set up the storage volumes to have one large slower storage for media and backups, and one smaller high-performance SSD-only storage for virtualised servers? Both with some redundancy and whatever caching is useful.

I’m thinking:
6x 6TB drives in a ZFS RAID-Z1 pool (1-disk failure tolerance)
2x 2TB SSDs as read cache for the big pool
2x 2TB SSDs in RAID 1 for virtualisation
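As zpool commands, that plan would come out roughly like this (a sketch only — pool names and /dev/sdX device names are made up, and TrueNAS would do all of this through its UI anyway):

```shell
# Big pool: 6x 6TB HDDs in RAID-Z1 (survives one disk failure)
zpool create tank raidz1 sda sdb sdc sdd sde sdf
# ...with 2x 2TB SSDs added as L2ARC read cache
zpool add tank cache sdg sdh

# Fast pool: 2x 2TB SSDs mirrored, for the virtualised servers
zpool create vms mirror sdi sdj
```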

What do you think?

Update: I updated all the firmware for the storage controller and stuff, but this model (HP Smart Array P420i controller) doesn’t support HBA mode (except that apparently some people have forced it into that mode?? Weird but worth investigating).
My next problem is that the 6TB SAS HDDs just aren’t showing up on it. The backplane is definitely designed for SAS, but they might be too big or something? Going to borrow a smaller SAS drive today to confirm that gets detected.
That still wouldn’t solve the problem though, so I think I’ll be bypassing the built-in controller for an LSI PCIe card and connecting that directly to the backplanes. I only need one 8i card for the 12 slots because one of the backplanes is an active SAS expander (that’s how HP fits them). Will probably need longer mini-SAS cables so might end up hunting for one of those to ‘borrow’ as well.

In better news, I picked up a pair of E5-2630v2 CPUs to replace the single E5-2609v1 that’s installed. Hyperthreading + faster cores.

Based on PassMark, that’s 3x the performance per CPU with same TDP. Not bad for $15

Problem with putting the second one in is that I don’t have the right heatsink. The official one is ~$85 on eBay. The alternative would be to take out the airflow shroud and put ordinary CPU coolers on both instead (tonnes of clearance in a 5U case). But that would probably end up more expensive anyway

She is running! Set up my ZFS pools: 21TB of storage on spinning rust with 2-disk failure tolerance and a 2TB read cache, plus 3TB on SSD with 1-disk failure tolerance.
Now to set up all the necessary services and copy files over…
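For anyone checking the maths: the 21TB figure is consistent with a 6-wide RAID-Z2 vdev once you account for drives being sold in decimal TB but reported in TiB. A quick sanity check (assuming a 6-wide vdev; real usable space will be slightly less after metadata overhead):

```python
# Sanity-check the "21TB" usable figure for 6x 6TB drives in RAID-Z2.
# Drive vendors use decimal TB; ZFS tools report binary TiB.
TB = 1000**4
TIB = 1024**4

data_drives = 6 - 2                 # RAID-Z2 spends two drives' worth on parity
raw_bytes = data_drives * 6 * TB    # 4 data drives x 6 TB each
print(f"{raw_bytes / TIB:.1f} TiB") # -> 21.8 TiB, matching the reported ~21TB
```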

@s0 you don't need 4tb of ssd read cache, 100-500gb is sufficient, remember it also gets invalidated on reboots

@sneak I was going for 2TB of read/write cache (because it’s in RAID1 itself).

@s0 oh i assumed you were doing zfs which doesn't have a write cache (unless you are doing synchronous writes which nobody is unless you are running a database)
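To unpack that point: in ZFS, asynchronous writes are staged in RAM and flushed to disk in transaction groups, so there’s no general “write cache” device to add. The only write-side device is a SLOG (separate intent log), and it only speeds up synchronous writes. A hypothetical sketch (pool and device names are placeholders):

```shell
# ZFS has no generic write cache. The closest thing is a SLOG, which only
# accelerates *synchronous* writes (databases, NFS sync exports, etc.):
zpool add tank log mirror sdg sdh
# Mirroring the SLOG protects in-flight sync writes if one SSD dies.
```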

@s0 isn't 1 disk failure readiness too small? I thought that with ZFS the entire pool has to be rebuilt when one disk fails, and that this is a very taxing and long process, meaning that you're likely to lose another disk while it's going on (especially if they're from the same batch), and losing more than its disk failure readiness means losing _all_ data?

@s0 (I'm not an expert, but I thought this was a general recommendation for almost all kinds of raids/pools?)

@IngaLovinde yeah but like, enterprise recommendations aren’t the same as hobbyist ones

How critical is fault tolerance vs. storage for you? If it were me, I'd go with a Z2 or Z3 pool (2- or 3-disk failure margin) or straight to mirrors (slightly less fault tolerance but also less load due to no parity calcs).

Cathode Church

A place for trans makers, coders, tinkerers and dreamers.