RAID works fine[1], but it isn't a backup solution. Indeed, it introduces a new layer of potential failure modes[2] in order to make dealing with a subset of common hardware failures more convenient.
Backups need to be automated so they actually happen, and they need to be designed to handle the most common forms of data loss. In my world, the most common use case is "Oh fuckit, I've made a complete mess of this piece of work, I'm pulling Thursday's version out of the backups and starting again." Anything that involves restoring entire images, VMs or otherwise, makes that sort of thing a complete pain in the arse.
Real disasters, like visits from the disk fairy, malware, OS updates run amok, being pwned by 1337 h4xx0rz, or your house burning down are much rarer, but a good backup strategy should cope with those too.
My strategy is a product of paranoia[3], combined with the convenience of working with relatively small datasets. Code, email and documents don't take up much space - if you're regularly creating high-resolution images or videos, it's going to get expensive.
I have a server (running RAID1 to hedge against disk failures), where all the important data lives as a matter of course[4]. Desktops make daily borg backups to the server, mainly to aid recovery of their system-specific configuration if they break. The server has a disk on which nightly rsync snapshots of all but the junk (temp files, cache directories, that sort of thing) are made. This is exposed to the users via a read-only mount, so if you want to restore something, you can just go and fish it out of /backups/$date/path/to/file and you only have to wait for the drive to spin up.
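The snapshot side of this is nothing clever: one rsync a night into a dated directory, with hardlinks against the previous night so each snapshot only costs the changed files. A minimal sketch of the idea, using --link-dest - the paths and excludes here are illustrative rather than my actual script:

    #!/bin/sh
    # Nightly snapshot: rsync into a dated directory under /backups,
    # hardlinking unchanged files against the previous snapshot so each
    # day only costs the delta. Paths and exclude list are examples.
    set -eu

    SRC="/home"
    DEST="/backups"
    TODAY="$(date +%F)"
    LATEST="$DEST/latest"

    rsync -a --delete \
        --exclude='*/.cache/' \
        --exclude='*/tmp/' \
        --link-dest="$LATEST" \
        "$SRC/" "$DEST/$TODAY/"

    # Repoint "latest" at the snapshot we just made, ready for tomorrow.
    rm -f "$LATEST"
    ln -s "$DEST/$TODAY" "$LATEST"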
Additionally, I have an external drive unplugged on a shelf which holds a single rsync snapshot, which a script nags me to connect and update after 28 days. That should protect against more serious system failures, although recovery isn't automated. (There seems little point - over the years the only catastrophic server failures I've had were one worm infection when I was young and naive, and a couple of motherboard failures during the capacitor plague years. The way I see it, if I have to rebuild the hardware, the overhead of doing a clean OS install and re-populating /home, /etc, the important parts of /var and so on from backups is negligible, and probably worthwhile for de-crufting. I try to keep one of the desktop machines of the same generation of hardware as the server, so in an emergency I can use it as a source of donor parts.)
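The nagging script amounts to a daily cron job along these lines - the mount point, stamp file and mail recipient are made up for the sake of the example:

    #!/bin/sh
    # If the external drive happens to be mounted, refresh the snapshot
    # and record when we did so; otherwise complain once the last
    # refresh is more than 28 days old. All paths are illustrative.
    set -eu

    MOUNT="/mnt/offline-backup"
    STAMP="/var/lib/backup/offline-stamp"

    if mountpoint -q "$MOUNT"; then
        rsync -a --delete /home /etc "$MOUNT/snapshot/"
        touch "$STAMP"
        exit 0
    fi

    # find -mtime +27 matches the stamp once it is 28 or more days old.
    if [ ! -e "$STAMP" ] || [ -n "$(find "$STAMP" -mtime +27)" ]; then
        echo "Offline backup is over 28 days old - plug the drive in." |
            mail -s "backup nag" root
    fi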
I have a smaller disk in a friend's server in another city, to which I make nightly borg backups of the more important parts of the server's filesystem. This is compressed and encrypted, and generally requires a bit of sysadmin-fu to recover files from, but it's there for worst-case scenarios. A commercial cloud storage solution would work just as well for this; the main advantage here was that I could sneakernet the original image[5] and save myself three months.
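For the curious, the nightly off-site run looks roughly like this - the repository URL, passphrase handling and list of paths are illustrative, and it assumes the repo was created once with "borg init --encryption=repokey-blake2":

    #!/bin/sh
    # Nightly borg run to the remote repo: compressed, encrypted, and
    # pruned so old archives don't slowly fill the remote disk.
    set -eu

    export BORG_REPO="ssh://friend.example.org/srv/borg/myserver"
    export BORG_PASSCOMMAND="cat /root/.borg-passphrase"

    borg create --compression zstd \
        ::'{hostname}-{now:%Y-%m-%d}' \
        /etc /home /var/mail /var/www

    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6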
[1] At a domestic level, I wouldn't touch anything but Linux software RAID with a barge pole. Hardware RAID is strictly for the big boys who can afford to have duplicate hardware on the shelf doing nothing; otherwise you're liable to find yourself up the creek without a working RAID controller. RAID0 has no redundancy btw - it's just a way to multiply your rate of disk failures in exchange for slightly more performance.
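For reference, setting up a two-disk mirror with mdadm is about this much work - device names are illustrative and the create command destroys whatever is on those partitions:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ext4 /dev/md0
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # so it assembles at boot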
[2] RAID is excellent at faithfully reproducing corruption across multiple drives.
[3] Ignoring the sheer number of person-hours that go into even a reasonably modest computer program, as I get older I find that email and IRC logs are more reliable than my own memory of events. I don't think it's paranoid to put a bit of effort into maintaining those.
[4] Anything portable, or running Microsoft Windows, is considered fundamentally untrustworthy, and not for storing data on.
[5] Insert Tanenbaum quote about a hard drive hurtling up the M6.