He’s got a point about the case where the hardware dies and then you’re really, really in a world of pain.
But what’s missing is any mention of the RAID5 write hole and what happens when your software RAID server dies mid-update. Now you have an invisible inconsistency which you’ll only discover when bad data shows up during a rebuild. This is why people use hardware RAID: that tiny bit of battery-backed persistent memory that makes sure the entire stripe makes it to every disk every time.
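To make the write hole concrete, here’s a toy sketch (not any real RAID implementation — the blocks and “crash” are made up for illustration): parity is the XOR of the data blocks, and a crash between writing the data and writing the parity leaves the stripe silently inconsistent until a rebuild reads it.

```python
def xor_parity(blocks):
    """Parity block = byte-wise XOR of all data blocks."""
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return parity

# A healthy 3-disk stripe: two data blocks plus parity.
data = [b"AAAA", b"BBBB"]
parity = xor_parity(data)

# Power fails mid-update: the new block for disk 0 hits the platter,
# but the matching parity write never happens. Parity is now stale.
data[0] = b"CCCC"

# Later, disk 1 dies. The rebuild reconstructs its block by XORing
# the surviving data with the (stale) parity...
rebuilt = bytes(x ^ y for x, y in zip(data[0], parity))

print(rebuilt == b"BBBB")  # False -- silent corruption, noticed only now
```

Nothing errored at write time; the damage only surfaces when the rebuild hands back garbage for a block that was never touched by the interrupted write.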
It’s not entirely accurate to say that the big black boxes like EMC and NetApp are running commodity hardware. They’re running commodity hardware PLUS special sauce hardware. (Well, special sauce filesystems at least. It’s not really fair to compare plain RAID to NetApp, because I think WAFL is specifically designed to overcome the problems with RAID.) Unless you’re using ZFS (in which case, why are you using software RAID?), your filesystem probably won’t cope with a half-written stripe.
If you use RAID5, you are likely going to have a bad time (at some point) regardless. There is even a movement against RAID5: BAARF (funny backronym?).
I love the acronym!