NAS540: RAID-1 degraded after removing drive - how to repair? [solved]

Sprocki
Posts: 18
Registered: Sun 3 May 2020, 09:57

NAS540: RAID-1 degraded after removing drive - how to repair? [solved]

Post by Sprocki »

Hello everyone,

I just created an account in this forum. I bought two NAS540 in 2015 and have used them mainly as storage devices since then. I was registered on the old zyxelforum.de, which is no longer available. Since then a few questions have come up about making more use of the two NAS, and just before accepting that I might have to register in the official forum, I did another search and luckily found this link: http://www.hifi-forum.de/viewthread-258-3463.html, which is how I found your forum. Now I am here and have a bunch of questions :-)

First one:
I first used a 0.5TB and a 1TB drive in the first NAS and mirrored them in a RAID-1. Then I added a 4TB drive and mirrored all data to it. I then removed the two smaller drives from the device. When I reboot the NAS it claims that the RAID-1 has been degraded. I can get rid of the annoying beep sound by logging in to the web interface and opening the status window, but on the next reboot it beeps and warns again. The web interface says "go to the volume manager and repair the drive", but there is no repair option. I am using firmware 521AATB5C0.
Can I clear this status without plugging in a second drive again? I would like to keep it as RAID-1 (which should be the case anyway) or downgrade it to a JBOD with a single disk, because 4TB is more than enough for me and I have a second NAS for mirroring.
Last edited by Sprocki on Fri 8 May 2020, 21:03, edited 1 time in total.

Mijzelf
Posts: 108
Registered: Wed 14 Nov 2018, 19:50

Re: NAS540: RAID-1 degraded after removing drive - how to repair?

Post by Mijzelf »

Sprocki wrote:
I first used a 0.5TB and a 1TB drive in the first NAS and mirrored them in a RAID-1. Then I added a 4TB drive and mirrored all data to it. I then removed the two smaller drives from the device. When I reboot the NAS it claims that the RAID-1 has been degraded.
So you had a 2-disk 0.5TB raid1 array, with 0.5TB unused on the 1TB disk, and added a 4TB disk to the array, creating a 3-disk 0.5TB array with 4TB unused in total?
If you had added the 4TB disk as a separate volume, there would be no degraded volume after removing the other 2 disks.

Sprocki
Posts: 18
Registered: Sun 3 May 2020, 09:57

Re: NAS540: RAID-1 degraded after removing drive - how to repair?

Post by Sprocki »

Mijzelf wrote:
Sun 3 May 2020, 14:16
So you had a 2-disk 0.5TB raid1 array, with 0.5TB unused on the 1TB disk, and added a 4TB disk to the array, creating a 3-disk 0.5TB array with 4TB unused in total?
Yes, I did that some years ago, in order to use the RAID's mirroring facility to clone the content to the new disk.
If you had added the 4TB disk as a separate volume, there would be no degraded volume after removing the other 2 disks.
I did not do it like that at the time. Meanwhile the 4TB disk is more than 1.5TB full, so the data won't fit on the two old drives even if I put them back in the NAS. Therefore the question is whether I can repair the volume without adding another disk, because for quite a long time I will not need another one. I remember seeing a DOCX file from the maintainer of the old zyxelforum.de which described switching between RAID levels and JBOD, but I don't have the file and maybe my case was not covered in it. If a repair is possible, that would be variant #1.
Or do I have to plug in another 2TB or bigger drive, create a separate volume, copy all files and then remove the first drive? (variant #2)
Or should I erase the disk and sync from the backup NAS? (variant #3)
Variant #1 would be the most comfortable one, if possible. How long might the others take?

Mijzelf
Posts: 108
Registered: Wed 14 Nov 2018, 19:50

Re: NAS540: RAID-1 degraded after removing drive - how to repair?

Post by Mijzelf »

Variant #1 should be possible. Can you log in over SSH, as root, and post the output of

Code: Select all

cat /proc/mdstat
mdadm --examine /dev/sd[abcd]3
Variant #2 will take the time it takes to synchronize the full volume size at 50~80MB/sec, and it will only work if the 2nd disk is >= the current volume size.
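As a rough estimate (my own numbers, assuming roughly 4TB of volume size and a middle rate of 60MB/sec; the actual rate depends on the disks):

Code: Select all

4 TB ≈ 4,000,000 MB
4,000,000 MB / 60 MB/sec ≈ 66,700 sec ≈ 18.5 hours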

Variant #3. If you copy from one volume to the other within the NAS, on different disks, I think you can get >100MB/sec. If you do it over network, copying from one share to another, your network will be limiting. If you are on wifi, maybe 10MB/sec. On gigabit, maybe 80MB/sec. But you'll only have to copy the data, and not also the empty space, as in #2.

shv
Posts: 66
Registered: Sat 10 Nov 2018, 17:36

Re: NAS540: RAID-1 degraded after removing drive - how to repair?

Post by shv »


Mijzelf wrote: Variant #3. If you copy from one volume to the other within the NAS, on different disks, I think you can get >100MB/sec.
Sounds interesting. I have a NAS542 with 4 independent JBOD volumes. If I try to copy between 2 volumes with mc I only reach about 30 MByte/s. Therefore, last time I removed the disks and mounted them in an Ubuntu virtual machine on Oracle VirtualBox to get fast copy speeds using USB 3.0-to-SATA adapters.

Mijzelf
Posts: 108
Registered: Wed 14 Nov 2018, 19:50

Re: NAS540: RAID-1 degraded after removing drive - how to repair?

Post by Mijzelf »

Well, maybe I'm too optimistic, I don't know. Midnight Commander is not the fastest way to copy files, especially not when you are copying a lot of small files. On the other hand, when copying a lot of small files 100MB/sec is impossible anyway, due to random access times. 100MB/sec is only feasible with big files, and I don't know whether in that case the overhead of mc is significant. The real test would be to use 'time cp'. Unfortunately I have no NAS5xx with more than one disk at the moment to test it.
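A minimal sketch of such a test (the paths are placeholders, the real volume mount points on the NAS differ, and you need one big test file):

Code: Select all

# time copying one large file from one volume to another
time cp /path/to/volume1/bigfile.bin /path/to/volume2/
# throughput in MB/sec ≈ file size in MB divided by the 'real' time in seconds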

Sprocki
Posts: 18
Registered: Sun 3 May 2020, 09:57

Re: NAS540: RAID-1 degraded after removing drive - how to repair?

Post by Sprocki »

This is my shell output:

Code: Select all

~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid1 sda3[1]
      3902886912 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sda2[4]
      1998784 blocks super 1.2 [4/1] [_U__]

md0 : active raid1 sda1[4]
      1997760 blocks super 1.2 [4/1] [_U__]

unused devices: <none>

...
~ # mdadm --examine /dev/sd[abcd]3
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 83584448:40880c4e:ffdaef94:9cad5c20
           Name : NAS540:2
  Creation Time : Fri Aug 28 23:28:57 2015
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 3902886912 (3722.08 GiB 3996.56 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : bb42ec4b:d284c96f:b2c4c5bf:48d060a9

    Update Time : Wed May  6 21:08:52 2020
       Checksum : d8162876 - correct
         Events : 226274


   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing)
mdadm: cannot open /dev/sdb3: No such device or address
mdadm: cannot open /dev/sdc3: No such device or address
mdadm: cannot open /dev/sdd3: No such device or address

Sprocki
Posts: 18
Registered: Sun 3 May 2020, 09:57

Re: NAS540: RAID-1 degraded after removing drive - how to repair?

Post by Sprocki »

I mentioned that the 'Repair' option was not available for me. I just found this article: https://www.smallnetbuilder.com/nas/nas ... ed?start=3 . In the paragraph 'Disk Pull' there is a gallery showing the 'Repair' link. Is this only available under certain circumstances, was it removed intentionally, or is it a bug that it is missing in my latest firmware?

Mijzelf
Posts: 108
Registered: Wed 14 Nov 2018, 19:50

Re: NAS540: RAID-1 degraded after removing drive - how to repair?

Post by Mijzelf »

Sprocki wrote:
Wed 6 May 2020, 21:46
This is my shell output:
Ah. In that case the command is:

Code: Select all

mdadm --grow /dev/md2 --raid-devices=1 --force
This will turn the degraded 2-disk raid1 array into a single-disk raid1 array.
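If you want to verify the result afterwards (my expectation of what it should look like, I can't test it here): md2 should then show up as a single-disk array.

Code: Select all

cat /proc/mdstat          # md2 should now show [1/1] [U] instead of [2/1] [_U]
mdadm --detail /dev/md2   # should report 'Raid Devices : 1' and a clean state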

Sprocki
Posts: 18
Registered: Sun 3 May 2020, 09:57

Re: NAS540: RAID-1 degraded after removing drive - how to repair?

Post by Sprocki »

Thank you! That solved my issue and it was very quick.
