No, we are not in Game of Thrones, nor am I a fan of medieval fiction. Although I kind of like Vikings. Anyway, this RAID is not that raid… RAID stands for an even more advanced concept: using many disks together, as an array.
As has been mentioned already, naming things in Linux (and in IT in general) is random at best. You know, someone comes up with a name, starts using it, and then the others just accept it as canon. This happened with Linux itself: if you remember, the guy who first made the kernel (Linus Torvalds) just named it after himself… Others took it as canon, some even named themselves Canonical. Go figure… (These people are literal-minded.)
So RAID is an acronym (yes, I know you’ve already figured that one out). It stands for Redundant Array of Independent Disks. But it used to mean Redundant Array of Inexpensive Disks. What happened was the advancement of technology: first most, then all disks became relatively, and later absolutely, inexpensive (even later they became dirt cheap, but we have yet to reach the point where they pay us to buy them). Naturally, the idea of an expensive disk became ridiculous (until SSDs came along, which put the price tag back on hardware for a while, but let us not even try to go there). And then they just changed the name. Just like that. Few even noticed.
Being an array of disks, RAID only makes sense if you have more than one disk. It always did, but for hardware RAID this is strictly true, because there is such a thing as a hardware RAID. That can mean either the police raiding The Pirate Bay‘s server rooms and seizing their hardware (as they did; the word raid probably still brings up memories in Scandinavia), or a redundant array of independent physical hard drives, attached together by a special controller that makes them work as one. RAID can do a lot of things: joining the disks up so they look like virtually one disk, providing a fail-safe solution where two or more disks store the same information (so if one fails, the data is recoverable from the other), and a number of derivatives of these.
Like the above, only it is a software-based implementation that does not require specific hardware, so anyone can do it. You should mostly have more than one disk for it to make sense, but there are some use cases where you could do it across partitions on the same disk. Software RAID is relatively painless to implement in Linux, so there is a chance that a later article might just cover it in detail. But then again, there is also a chance that it will not. We shall see.
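To give you a taste of how painless it is, here is a minimal sketch of creating a two-disk mirror with mdadm, the standard Linux software RAID tool. The device names (`/dev/sdb1`, `/dev/sdc1`) and the config file path are placeholders/assumptions (the path varies by distribution); substitute your own, and be aware that creating an array destroys whatever is on those partitions.

```shell
# Create a RAID 1 (mirror) array from two partitions. Requires root.
# WARNING: this wipes the member partitions.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Watch the initial sync progress:
cat /proc/mdstat

# Then use /dev/md0 like any ordinary block device:
sudo mkfs.ext4 /dev/md0

# Record the array so it can be assembled at boot
# (/etc/mdadm/mdadm.conf on Debian/Ubuntu; /etc/mdadm.conf elsewhere):
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```

That is more or less the whole ceremony; the kernel's md driver does the actual mirroring behind the scenes.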
Setting up a hardware RAID might be as simple as buying a RAID controller (which, of course, still does not mean a person who coordinates the pillaging of enemy villages), putting your disks in, and adding all this to your PC set-up, which of course (of course…) supports this sort of hardware.
Newer Linux kernels should have no problem identifying these disk arrays; any further configuration depends on the manufacturer.
There is a way to use RAID without extra hardware though, as we saw above. There are many types of RAID, numbered 0 through 10, and they often provide quite different uses and benefits. You can stripe volumes across multiple disks; use two or more disks to hold the same data, so that if one fails, the other is still there (with all your data); and some levels are similar to LVM in that they provide volumes extending beyond the physical boundaries of a single disk. Just like with LVM, the full explanation is long and boring. If you are really interested, you can find a quite thorough explanation right here. Be warned, however, that page is long. As in looooong… and boring. So make sure you are properly caffeinated before even attempting to read through it.
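The two basic ideas behind most of those levels, striping (RAID 0) and mirroring (RAID 1), can be sketched with a toy simulation. To be clear: this is not real RAID, just an illustration of the data layout; the chunk size and function names below are made up for the example.

```python
from itertools import zip_longest

CHUNK = 4  # bytes per stripe chunk; real arrays use e.g. 64 KiB

def stripe(data: bytes, disks: int) -> list[bytes]:
    """RAID 0 idea: deal out fixed-size chunks round-robin across disks."""
    out = [bytearray() for _ in range(disks)]
    for n, i in enumerate(range(0, len(data), CHUNK)):
        out[n % disks] += data[i:i + CHUNK]
    return [bytes(b) for b in out]

def unstripe(parts: list[bytes]) -> bytes:
    """Reassemble striped data by reading chunks back in round-robin order."""
    chunk_lists = [[p[i:i + CHUNK] for i in range(0, len(p), CHUNK)]
                   for p in parts]
    return b"".join(c for group in zip_longest(*chunk_lists, fillvalue=b"")
                    for c in group)

def mirror(data: bytes, disks: int = 2) -> list[bytes]:
    """RAID 1 idea: every disk holds a full copy of the data."""
    return [data] * disks

payload = b"pillage the enemy villages"
striped = stripe(payload, 2)        # fast, but lose one disk, lose it all
assert unstripe(striped) == payload
mirrored = mirror(payload)          # half the capacity, survives a failure
assert mirrored[1] == payload       # any surviving copy is enough
```

Striping gives you speed and capacity with zero redundancy; mirroring gives you redundancy at the cost of capacity. The higher-numbered levels are, roughly, combinations and parity-based refinements of these two tricks.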