343478b01056eda347fd64b4cc1e69f23df0c92d
Network/RaidUpgrade2024.md
<!--
vim: filetype=markdown
-->
|
Requirements
============
* 10TB storage
* current usage is 8.4TB
|
Options
=======
* (2+2)x 6TB, raid 10 = 12TB
* (2+4)x 6TB, raid 6 = 24TB
* (2+4)x 6TB, raid 10 = 18TB
* (2+2)x 6TB + 3x 3TB, raid 6
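
A quick sanity check of the capacities above (assuming equal-size drives per array: raid 10 keeps half the raw capacity, raid 6 loses two drives' worth to parity):

```python
# Rough usable-capacity check for the options above (sizes in TB).
# Assumes all drives in an array are the same size.
def usable_tb(drives: int, size_tb: int, level: str) -> int:
    if level == "raid10":
        return drives // 2 * size_tb   # half the drives hold mirror copies
    if level == "raid6":
        return (drives - 2) * size_tb  # two drives' worth of parity
    raise ValueError(f"unknown level: {level}")

print(usable_tb(4, 6, "raid10"))  # (2+2)x 6TB raid 10 -> 12
print(usable_tb(6, 6, "raid6"))   # (2+4)x 6TB raid 6  -> 24
print(usable_tb(6, 6, "raid10"))  # (2+4)x 6TB raid 10 -> 18
```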
|
Resources
=========
* [ZFS encryption vs ZFS+LUKS](https://raw.githubusercontent.com/jkool702/zfsEncryption_SpeedTest/main/ALL_RESULTS_SUMMARY)
|
Configurations
==============
* H = Hot spare
* P = Parity
|
Rebuild RAID6
-------------
* (1H + 2P + 6) x 3TB = 18TB usable out of 27TB raw
|
RAID10
------
* 50% storage efficiency
* vs raid6:
    * much faster resilver
    * less load
    * less risk of further failures
* not guaranteed to survive 2 disk failures
    * 4 disks: 33% chance of array loss
    * 6 disks: 20% chance of array loss
    * n disks: P(loss) = 1/(n-1), since the second failure must hit the first failed disk's mirror partner
* each pair should be 1 old and 1 new 6TB drive
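
The loss figures above can be sanity-checked: after one disk dies in an n-disk RAID10, only that disk's mirror partner is fatal, i.e. 1 candidate out of the n-1 surviving disks. A minimal sketch:

```python
# Chance that a second (random, independent) disk failure loses an
# n-disk RAID10: only the dead disk's mirror partner is fatal,
# so 1 candidate out of the n - 1 surviving disks.
def raid10_loss_on_second_failure(n: int) -> float:
    if n < 4 or n % 2:
        raise ValueError("RAID10 needs an even number of disks, >= 4")
    return 1 / (n - 1)

print(f"{raid10_loss_on_second_failure(4):.0%}")  # 4 disks -> 33%
print(f"{raid10_loss_on_second_failure(6):.0%}")  # 6 disks -> 20%
```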
|
ZFS mirror vdev
---------------
* LUKS on top requires a guest FS (so, ext4 again...)
* LUKS beneath requires encrypting each disk individually (blegh...)
* can stripe across mirror pairs of different sizes (e.g. 4x 6TB + 2x 3TB = 15TB usable)
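
A sketch of what that layout could look like with native ZFS encryption instead of either LUKS arrangement — pool name "ia" is taken from the migration notes below; the device names are placeholders, not the real disks:

```shell
# Sketch only: striped mirror vdevs of different sizes, sidestepping
# both LUKS layerings above via native per-dataset ZFS encryption.
# Device names are placeholders.
zpool create -o ashift=12 ia \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/sde /dev/sdf   # e.g. 2x 6TB-pairs + 1x 3TB-pair = 15TB

# encrypted dataset: no LUKS, no guest FS
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase ia/media
```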
|
Migration Strategy
==================
* duplicate mount of md6-media at /mnt/media
* migrate metadata and config
* unmount md6-media and mount ia/media in its place
* symlink /mnt/media as /mnt/md6-media
* symlink /mnt/home as /mnt/md6-home
* symlink /mnt/systems as /mnt/md6-media/systems
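
The steps above as a rough shell sketch — it assumes the "duplicate mount" is a bind mount and that the new pool is the "ia" pool mentioned above:

```shell
# Sketch of the cut-over; assumes a bind mount for the duplicate-mount
# step and that ia/media is a ZFS dataset on the new pool.
mount --bind /mnt/md6-media /mnt/media     # serve old data at the new path
# ... migrate metadata and config ...
umount /mnt/media
umount /mnt/md6-media
zfs set mountpoint=/mnt/media ia/media     # new pool takes over /mnt/media
ln -s /mnt/media /mnt/md6-media            # keep legacy paths resolving
ln -s /mnt/home /mnt/md6-home
ln -s /mnt/systems /mnt/md6-media/systems
```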