256 GB root NVMe, 1 TB games HDD, 3× 256 GB SSDs as RAID 0 for local backups, 256 GB HDD for data, 256 GB SSD for VM images.
Why would you put local backups on RAID 0?
Because that’s what RAID 0 is for: basically adding storage space together, with faster reads and writes. The local backups are just there to keep earlier versions of (system) files, incrementally every hour, for reference or restoring. In case something goes wrong with the main root NVMe and a backup SSD at the same time (e.g. a trojan wiping everything), I still have exactly the same backups on my “workstation” (a beefier server), also on a RAID 0, of three 1 TB HDDs. And in case the house burns down or something, there are still daily full backups on Google Cloud and Hetzner.
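For reference, striping the three SSDs into one volume with mdadm looks roughly like this - a minimal sketch with placeholder device names, not my exact setup:

```
# Stripe three 256 GB SSDs into one ~768 GB RAID 0 array (device names are placeholders).
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mount /dev/md0 /backups   # local backups are mounted in /backups
```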
RAID 0 offers no redundancy though. If any of those three disks fails, you lose the entire volume.
For the sake of backups, switching to RAID 5 would be more robust.
If it fails, I will just throw in a new SSD and redo the backup. I sometimes delete everything and redo it anyway, for various reasons. In any case, I usually have copies of all files on the original drive, as a local backup on the device, and as a backup on the workstation. And even if those three should fail - which I would immediately know about, since I monitor the systemd job - I still have daily backups on two different, global hosters as well as the separate NAS.

The only case in which all full backups would be affected is a global destruction of all electronics by a solar storm, or a general destruction of Earth, in which case that’s the least of my problems. And if the house burns down and I only have the daily backups, potentially losing 24 hours of data, that’s also the least of my problems.

Yes, generally RAID 5 is better for backups, but in my case I have multiple copies of the same data at all times, surpassing the 3-2-1 rule (by far - 6-2-2, and soon 6-2-3). As all of my devices are connected via Gigabit Ethernet, getting backups from e.g. the workstation after the PC (with its backups) died is just as fast as getting them from the local backup RAID itself. And RAID 0 is better (in speed) than just chaining the drives together linearly (JBOD).
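For the curious: the “I will immediately know” part can be done with a plain OnFailure= hook on the backup unit. A sketch, with hypothetical unit and script names:

```
# /etc/systemd/system/backup.service (hypothetical names)
[Unit]
Description=Hourly incremental backup
OnFailure=backup-alert.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# backup-alert.service would then send a mail/notification, e.g. via
# ExecStart=/usr/local/bin/notify.sh "backup failed on %H"
```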
Umm … no. RAID 0 is a holdover from an earlier era, when some programs needed multiple drives to act as one. It has no redundancy. There are very limited uses for RAID 0 with modern drive sizes, and most of them are in research, where gigabytes of data are generated in a 10-second experiment. If a single drive goes down in RAID 0, the whole volume is lost.
The RAID setup you are describing is RAID 1 (full mirror) or RAID 5 (distributed parity). RAID 0 for gaming is worse than JBOD, though the faster read times might matter for HDDs.
Edit: On further reading, your system setup looks custom and amounts to a manual RAID 1 over RAID 0 + JBOD. It’s also extremely high maintenance. If it works, good on you, but you could offload your data temporarily and configure the majority of your drives into RAID 6 to drastically reduce your maintenance burden and improve stability/parity.
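Something along these lines once the data is offloaded - a sketch with placeholder device names (RAID 6 needs at least four drives and survives any two failing):

```
# Rebuild the pooled drives as one RAID 6 array (placeholder device names).
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0
```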
Well, it’s for faster speeds. So I don’t get why you would do a backup on more fragile but faster storage. You described in another comment that you have many other backups, which is awesome. So good on you for taking care of everything. But yeah, using the opposite of what would be better for backups seems a bit counterintuitive to me. And to presume that it doesn’t matter to use the more secure option because you have many other backups anyway is also slightly weird, since why bother in the first place then?
I don’t mean any hate, you’re doing way better than me. Can I ask how fast the RAID 0 gets? And how fast the individual drives would be on their own? And how much data you have to back up daily?
Much respect for your setup, you’ve taken redundancy seriously and I doubt you’ll ever lose anything.
The local backups are done hourly and incrementally. They hold 2+ weeks of backups, which means I can easily roll back versions of packages, as the normal package cache is cleaned regularly. They also protect against losing individual files accidentally, through weird behaviour of apps, or me.
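For anyone wanting to replicate this: the classic pattern for hourly incrementals is rsync with --link-dest, so unchanged files are hard-linked into the new snapshot instead of copied. A rough sketch, with illustrative paths:

```
#!/bin/sh
# Hourly incremental snapshot via rsync hard links (paths are illustrative).
NOW=$(date +%Y-%m-%d_%H)
rsync -a --delete \
  --exclude=/backups --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
  --link-dest=/backups/latest \
  / "/backups/$NOW/"
ln -sfn "/backups/$NOW" /backups/latest
# Keep roughly 2 weeks of snapshots.
find /backups -maxdepth 1 -name '20*' -mtime +14 -exec rm -rf {} +
```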
The backups to my workstation are also done hourly, shifted by 15 minutes for every device, and also incrementally. They protect against the device itself breaking, ransomware, or some rogue program rm -rf’ing /, which would affect the local backups too (as they’re mounted in /backups) - but those are mainly for providing a file history, as I said.
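The per-device offset is just a different OnCalendar= minute in each machine’s timer. A sketch, with a hypothetical unit name:

```
# /etc/systemd/system/backup.timer (hypothetical; the next device uses *:30:00, etc.)
[Unit]
Description=Hourly backup to the workstation, shifted by 15 minutes

[Timer]
OnCalendar=*-*-* *:15:00
Persistent=true

[Install]
WantedBy=timers.target
```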
As most drives are slower than the 1 Gbps Ethernet, the local backups are just more convenient to access and use than the ones on my workstation, but otherwise exactly the same.
The .tar.xz’d backups are the actual backups, considering they are not easily accessible, need to be unpacked, and are stored externally.
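For illustration, creating and shipping such an archive can be as simple as this - paths are placeholders, and rclone is just one possible upload tool, not necessarily what my setup uses:

```
# Daily full backup as a compressed archive (placeholder paths).
tar -cJf "/tmp/full-$(date +%F).tar.xz" --one-file-system /
# Push to an off-site target; assumes a configured rclone remote.
rclone copy "/tmp/full-$(date +%F).tar.xz" remote:backups/
```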
I didn’t measure the speeds of a normal SSD vs. the RAID - but it feels faster. Not a valid argument, of course. But either way, I want to use it as RAID 0/un-RAIDed for more storage space, so I can keep 2 weeks of backups instead of 5 days (considering it always keeps space for 2 backups, I would otherwise have under 200 GB of usable space instead of 700+).
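If anyone wants an actual number instead of a feeling, a quick read benchmark on the array vs. a single member drive settles it; device and mount names are placeholders:

```
# Sequential read test on the array vs. one member SSD (read-only, placeholder names).
hdparm -t /dev/md0
hdparm -t /dev/sda
# Or against the mounted filesystem with fio:
fio --name=seqread --rw=read --bs=1M --size=2G --directory=/backups \
    --runtime=30 --time_based
```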
The latest hourly backup is 1.3 GB in size, but if an application with a single big DB is in use, that can quickly shoot up to dozens of GB - relatively big for a home server hosting primarily my own stuff plus a few things for my father. Synapse’s DB alone is 20 GB, for example. On an uneventful day, that works out to about 31 GB (24 × 1.3 GB). With several updates done, which means dozens of new packages in the cache, that could grow to 70+ GB.