OK, the history information is this:
Code:
/dev/sda3:
Creation Time : Tue Nov 24 23:18:19 2015
Update Time : Thu Nov 14 16:31:43 2019
Device Role : Active device 3
/dev/sdb3:
Creation Time : Tue Nov 24 23:18:19 2015
Update Time : Mon Nov 11 18:02:11 2019
Device Role : Active device 2
/dev/sdd3:
Creation Time : Tue Nov 24 23:18:19 2015
Update Time : Sun Nov 17 23:41:48 2019
Device Role : Active device 0
/dev/sde3:
Creation Time : Tue Nov 24 23:18:19 2015
Update Time : Sun Nov 17 23:41:48 2019
Device Role : Active device 1
You have an array created on Nov 24 2015, with active devices 0..3. So far so good. Device 2 (sdb3) was dropped on Nov 11 2019; that's when you got the warning. Device 3 (sda3) was dropped on Nov 14 2019, and since a RAID 5 only survives the loss of one member, the array went down.
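For reference, those fields can be pulled from all members in one pass (a sketch; it assumes the superblocks are still readable and the device names haven't changed):
Code:
# Show the header, Update Time and Device Role of each member
mdadm --examine /dev/sd[abde]3 | grep -E '/dev/|Update Time|Device Role'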
Yet there is something strange:
Code:
/dev/sda3:
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde3:
Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)
sdd3 and sde3 agree that they are the last members left, as expected. But sda3 still records all four members as active (AAAA), even though sdb3 was dropped three days before sda3 itself failed; it should have known that only three members were left. I have no explanation for this.
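One thing you could still check is the Events counter in each superblock; members with matching counts are in sync, and sdb3 should lag far behind (a sketch, same device-name assumption as above):
Code:
# Compare event counts to see how far each member drifted
mdadm --examine /dev/sd[abde]3 | grep -E '/dev/|Events'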
You can re-create the (degraded) array from the 3 reliable partitions using the same settings as in 2015:
Code:
mdadm --create --assume-clean --level=5 --raid-devices=4 --metadata=1.2 --chunk=64K --layout=left-symmetric /dev/md2 /dev/sdd3 /dev/sde3 missing /dev/sda3
--assume-clean tells mdadm not to touch the content of the array (no initial resync), as it already contains valid data.
Here the order of the partition nodes matches their 'Device Role'. The third slot is 'missing' because sdb3 (Active device 2) is not reliable (and too far out of sync). Verify that the order hasn't changed before you run the command; I don't know whether the disks will always be detected in the same order.
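To be on the safe side, re-check the roles immediately before creating, and inspect the result read-only before mounting (a sketch; the fsck line assumes an ext filesystem sits directly on /dev/md2):
Code:
# Confirm the device names still map to the same roles
mdadm --examine /dev/sd[abde]3 | grep -E '/dev/|Device Role'
# After creating: confirm the geometry and the degraded state
mdadm --detail /dev/md2
# Check the filesystem without modifying anything
fsck -n /dev/md2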