For the past month I've been running three 200 GB drives in RAID 5 under vinum on FreeBSD 4.8. This morning I was reading and writing to the array via Samba when the box crashed due to disk I/O errors. It rebooted on its own, and when everything came back up vinum reported one of the drives as down, the plex as degraded, and one of the subdisks as R 0% (recovering). The recovery failed. All of the physical connections to and from the hard drives are good, so it appears that one of my new Maxtors has died.

I tried running fsck on the degraded volume, which should be safe from what I know about RAID 5, and I get a couple hundred bad or dup block errors. After that it asks whether I want to start deleting a hundred or so files it considers bad (files that definitely were not being written to at the time). I said no to deleting those files. When it finishes, it tells me I need to run fsck again. I've run it a total of three times so far with the same results each time.

I should also mention that something similar happened a few days ago. I had a few people reading from the array (no writing) when I got a kernel panic and had to reboot. When the box came back up I fscked the volume with no errors, and that was that. This leads me to believe that the one drive has been dying for the past few days and finally bit the dust this morning.

What's next? Did RAID 5 completely fail at what it's supposed to do (provide redundancy)?
Here's what vinum currently reports, if it's of any use to anyone who can help:

Code:
2 drives:
D a                     State: up       Device /dev/ad4s1e      Avail: 0/194474 MB (0%)
D c                     State: up       Device /dev/ad7s1e      Avail: 0/194474 MB (0%)
D b                     State: referenced       Device          Avail: 0/0 MB

1 volumes:
V storage               State: up       Plexes:       1 Size:        379 GB

1 plexes:
P storage.p0         R5 State: degraded Subdisks:     3 Size:        379 GB

3 subdisks:
S storage.p0.s0         State: up       PO:        0  B Size:        189 GB
S storage.p0.s1         State: obsolete PO:      512 kB Size:        189 GB
S storage.p0.s2         State: up       PO:     1024 kB Size:        189 GB

I'm not sure why Avail shows 0 MB on the good drives, but it has always said that, so I assume it's either a quirk of vinum or just how it's designed to report under RAID 5.
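For what it's worth, the recovery procedure I'm expecting to attempt once a replacement disk is in, based on my reading of vinum(8), looks roughly like this. This is only a sketch I haven't run yet; the device name /dev/ad6s1e and the file name newdrive.conf are placeholders for whatever the replacement actually shows up as:

```
# newdrive.conf -- re-associate the referenced drive "b" with the
# replacement disk (placeholder device; partition type must be vinum)
drive b device /dev/ad6s1e

# Tell vinum about the replacement drive:
vinum create -f newdrive.conf

# Revive the obsolete subdisk from parity on the two good drives:
vinum start storage.p0.s1

# Watch the revive progress:
vinum list
```

If someone can confirm (or correct) this before I buy the replacement, I'd appreciate it.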