Subject: Volume down after attempting repair
Posted: Mon 18 Nov 2019, 01:15
Joined: Mon 18 Nov 2019, 01:05
Posts: 12
Hello,

I am running a NAS540 with 4x 4 TB hard drives. A few days ago the NAS started beeping because one of the hard drives had failed (a lot of reallocated sectors). I shut the NAS down, ordered a new hard drive and replaced the faulty one. Afterwards I started the repair process in the hard drive manager. I went away for the weekend and hoped the process would be finished by the time I got back today. It was at 2% when I left and said 40h remaining.

Just now I tried to log on to the web interface, but it froze every time after entering my credentials. I tried getting the state via SSH and mdadm; it said 3 drives are in state clean and one is in state spare. So just as it would be before a successful repair? Because there was no access and also no noticeable disk activity, I power-cycled the NAS.

Now the web interface works again, but it says "Volume down". There is no way to repair it in the menu. Also, when I click on disks, it says that disks 1, 2 and 3 are "Hot Spare" and disk 4 has no state. The disk that I replaced was disk 3.

I am a bit worried now. Is there something I can do?
I also have no SSH access anymore.


Posted: Tue 19 Nov 2019, 11:20
Joined: Wed 14 Nov 2018, 19:50
Posts: 63
I think you should be able to re-enable the SSH server via the web interface. If that fails, try to enable the Telnet server.

Can you then log in as root and post the output of
Code:
cat /proc/mdstat
mdadm --examine /dev/sd[abcd]3


Posted: Tue 19 Nov 2019, 20:23
Joined: Mon 18 Nov 2019, 01:05
Posts: 12
I have put the drives in my computer now and can access them in Ubuntu.

The output of cat /proc/mdstat is
Code:
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>


This is the output of: sudo mdadm --examine /dev/sd[abcdef]3
Code:
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 73e88019:b7cf694c:8584cbaa:47f57992
           Name : NAS540:2
  Creation Time : Tue Nov 24 23:18:19 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660160 (11166.25 GiB 11989.67 GB)
  Used Dev Size : 7805773440 (3722.08 GiB 3996.56 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=384 sectors
          State : clean
    Device UUID : ac8c7cd6:a8f3d86e:cb210c2b:bcdfc2eb

    Update Time : Thu Nov 14 16:31:43 2019
       Checksum : 667f486f - correct
         Events : 1210

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 73e88019:b7cf694c:8584cbaa:47f57992
           Name : NAS540:2
  Creation Time : Tue Nov 24 23:18:19 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660160 (11166.25 GiB 11989.67 GB)
  Used Dev Size : 7805773440 (3722.08 GiB 3996.56 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=384 sectors
          State : active
    Device UUID : 1bbec5f9:dec5a68a:d07cfdbe:e05d0cb4

    Update Time : Mon Nov 11 18:02:11 2019
       Checksum : 1cd3509 - correct
         Events : 74

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 73e88019:b7cf694c:8584cbaa:47f57992
           Name : NAS540:2
  Creation Time : Tue Nov 24 23:18:19 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=0 sectors
          State : clean
    Device UUID : 78f30bc0:b68074ee:9a3a223c:93decfd4

    Update Time : Sun Nov 17 23:41:48 2019
       Checksum : c9cda273 - correct
         Events : 1230

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 73e88019:b7cf694c:8584cbaa:47f57992
           Name : NAS540:2
  Creation Time : Tue Nov 24 23:18:19 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=0 sectors
          State : clean
    Device UUID : 85b74994:874b016e:609081d6:4cfcd0ee

    Update Time : Sun Nov 17 23:41:48 2019
       Checksum : d1f8a2d1 - correct
         Events : 1230

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)


And maybe this is of interest?
Code:
ubuntu@ubuntu:~$ sudo mdadm --examine --brief --scan  --config=partitions
ARRAY /dev/md/2  metadata=1.2 UUID=73e88019:b7cf694c:8584cbaa:47f57992 name=NAS540:2
ARRAY /dev/md/0  metadata=1.2 UUID=b705c51b:2360cd8e:6b81c03f:2072f947 name=NAS540:0
ARRAY /dev/md/1  metadata=1.2 UUID=186ed461:615007c3:ab9e4576:7b5f7084 name=NAS540:1
ARRAY /dev/md/2  metadata=1.2 UUID=73e88019:b7cf694c:8584cbaa:47f57992 name=NAS540:2

ubuntu@ubuntu:~$ sudo mdadm --assemble --scan
mdadm: Devices UUID-73e88019:b7cf694c:8584cbaa:47f57992 and UUID-73e88019:b7cf694c:8584cbaa:47f57992 have the same name: /dev/md/2
mdadm: Duplicate MD device names in conf file were found.


Posted: Wed 20 Nov 2019, 10:29
Joined: Wed 14 Nov 2018, 19:50
Posts: 63
OK, the history information is this:
Code:
/dev/sda3:
  Creation Time : Tue Nov 24 23:18:19 2015
    Update Time : Thu Nov 14 16:31:43 2019
   Device Role : Active device 3
/dev/sdb3:
  Creation Time : Tue Nov 24 23:18:19 2015
    Update Time : Mon Nov 11 18:02:11 2019
   Device Role : Active device 2
/dev/sdd3:
  Creation Time : Tue Nov 24 23:18:19 2015
    Update Time : Sun Nov 17 23:41:48 2019
   Device Role : Active device 0
/dev/sde3:
  Creation Time : Tue Nov 24 23:18:19 2015
    Update Time : Sun Nov 17 23:41:48 2019
   Device Role : Active device 1

You have an array created on Nov 24 2015, with active devices 0..3. So far so good. Device 2 (sdb3) was dropped on Nov 11 2019, which is when you got the warning. Device 3 (sda3) was dropped on Nov 14 2019, and then the array was down.
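
By the way, that summary can be pulled straight out of the superblocks, something like this (device letters as in your output):
Code:
# show only the history fields from each member's superblock
sudo mdadm --examine /dev/sd[abde]3 | grep -E '^/dev/|Creation Time|Update Time|Device Role'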

Yet there is something strange:
Code:
/dev/sda3:
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
   Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde3:
   Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)

sdd3 and sde3 agree that they are the last members left, as expected. But sda3 should have known that only 3 members were left, since sdb3 had been dropped 3 days earlier. I have no explanation for this.

You can re-create the (degraded) array from the 3 reliable partitions using the same settings as in 2015:
Code:
mdadm --create --assume-clean --level=5  --raid-devices=4 --metadata=1.2 --chunk=64K  --layout=left-symmetric /dev/md2 /dev/sdd3 /dev/sde3 missing /dev/sda3

--assume-clean tells the RAID manager not to touch the content of the array, as it already contains valid data.
Here the sequence of the partition nodes matches their 'Device Role'. The third one is 'missing' because sdb3 (Active device 2) is not reliable (and too far out of sync). Check that the sequence hasn't changed before you run the command; I don't know whether the disks will always be detected in the same order.
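
To double-check the mapping right before running the command, something like this should do (current device letters assumed):
Code:
# print the role each superblock remembers, so the order given to --create can be verified
for d in /dev/sda3 /dev/sdb3 /dev/sdd3 /dev/sde3; do
    echo "== $d"
    sudo mdadm --examine "$d" | grep -E 'Device Role|Device UUID'
done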


Posted: Wed 20 Nov 2019, 11:33
Joined: Mon 18 Nov 2019, 01:05
Posts: 12
Thank you so much! That worked!

The output now shows the 3 drives as "clean", with the excluded one as "active".
All 3 partitions also show "Array State : AA.A", as expected.

I now copied the partition table to my new drive using
Code:
sudo sfdisk -d /dev/sda > partition
sudo sfdisk /dev/sdb < partition


and then tried to add it to the RAID, but I get an error message. Any ideas how to proceed?
Code:
sudo mdadm --manage /dev/md2 --add /dev/sdb1
mdadm: /dev/sdb1 not large enough to join array
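
If it really is a size problem, I guess I should compare the partition sizes first; something like this should show it (assuming the new disk is still /dev/sdb):
Code:
# raw size in bytes of the old and the new third partition
sudo blockdev --getsize64 /dev/sda3 /dev/sdb3
# overview of both disks and their partitions
lsblk -b -o NAME,SIZE /dev/sda /dev/sdb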


For completeness, here is the cat /proc/mdstat and mdadm --examine output I got:
Code:
sudo cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md2 : active raid5 sda3[3] sde3[1] sdd3[0]
      11708657664 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [UU_U]
      bitmap: 0/30 pages [0KB], 65536KB chunk

unused devices: <none>


Code:
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 2ea21d52:ebe0c237:be1ae38f:ac70f57d
           Name : ubuntu:2  (local to host ubuntu)
  Creation Time : Wed Nov 20 10:27:14 2019
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805771776 (3722.08 GiB 3996.56 GB)
     Array Size : 11708657664 (11166.25 GiB 11989.67 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 4e0f5da9:6ea157e6:a3122c35:9c55acb4

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Nov 20 10:27:14 2019
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : b961c1a7 - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 73e88019:b7cf694c:8584cbaa:47f57992
           Name : NAS540:2
  Creation Time : Tue Nov 24 23:18:19 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660160 (11166.25 GiB 11989.67 GB)
  Used Dev Size : 7805773440 (3722.08 GiB 3996.56 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=384 sectors
          State : active
    Device UUID : 1bbec5f9:dec5a68a:d07cfdbe:e05d0cb4

    Update Time : Mon Nov 11 18:02:11 2019
       Checksum : 1cd3509 - correct
         Events : 74

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 2ea21d52:ebe0c237:be1ae38f:ac70f57d
           Name : ubuntu:2  (local to host ubuntu)
  Creation Time : Wed Nov 20 10:27:14 2019
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805771776 (3722.08 GiB 3996.56 GB)
     Array Size : 11708657664 (11166.25 GiB 11989.67 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 423bd77a:3884df39:1d859a0b:44224dcb

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Nov 20 10:27:14 2019
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : cb730f83 - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 2ea21d52:ebe0c237:be1ae38f:ac70f57d
           Name : ubuntu:2  (local to host ubuntu)
  Creation Time : Wed Nov 20 10:27:14 2019
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805771776 (3722.08 GiB 3996.56 GB)
     Array Size : 11708657664 (11166.25 GiB 11989.67 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 2039c6dc:5b874083:255ae4f7:f2b2d618

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Nov 20 10:27:14 2019
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : b096763c - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)


Posted: Wed 20 Nov 2019, 15:26
Joined: Wed 14 Nov 2018, 19:50
Posts: 63
stainless wrote:
and then tried to add it to the RAID, but I get an error message. Any ideas how to proceed?
Code:
sudo mdadm --manage /dev/md2 --add /dev/sdb1
mdadm: /dev/sdb1 not large enough to join array

That should be /dev/sdb3, I hope.


Posted: Wed 20 Nov 2019, 16:15
Joined: Mon 18 Nov 2019, 01:05
Posts: 12
Yeah the letter of the swapped device changed, no worries.

I read a few things about the partition table possibly having been created as msdos instead of GPT, and about deleting the existing superblock on the fresh device.
I will try that later and report back.
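
What I have in mind is roughly this (assuming the fresh disk is still /dev/sdb, please correct me if that is a bad idea):
Code:
# check whether the new disk got a GPT or an msdos (MBR) partition table
sudo parted /dev/sdb print
# wipe any leftover md superblock from the new RAID partition before re-adding it
sudo mdadm --zero-superblock /dev/sdb3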

EDIT: You are right, I misread. Adding sdb3 worked.
It is recovering now!


Posted: Thu 21 Nov 2019, 13:19
Joined: Mon 18 Nov 2019, 01:05
Posts: 12
The recovery went through overnight.
mdadm --examine now shows every device as "clean", with an array state of "AAAA".
It still shows a bad block log though, the same as in my last post here. Is that inherently bad?
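
If I read the man page correctly, the recorded entries can be listed per member with:
Code:
# list the bad blocks recorded in the md metadata of one member (repeat per disk)
sudo mdadm --examine-badblocks /dev/sda3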

mdadm --assemble --scan now found the 3 arrays: md0 (file system), md1 (swap) and md2.

When I try to mount md2 I get the following error:
Code:
sudo mount /dev/md/2 /media/raid
mount: /media/raid: wrong fs type, bad option, bad superblock on /dev/md2, missing codepage or helper program, or other error.


When I run lsblk -f, it seems like md2 has no file system. How can I fix this?
Code:
lsblk -f
NAME        FSTYPE            LABEL             UUID                                 MOUNTPOINT
...
sda                                                                                 
|-sda1      linux_raid_member NAS540:0          b705c51b-2360-cd8e-6b81-c03f2072f947
| `-md0     ext4                                08151163-5bca-4952-96ce-be17b423cb96
|-sda2      linux_raid_member NAS540:1          186ed461-6150-07c3-ab9e-45767b5f7084
| `-md1     swap                                b2540084-ffe6-4e3f-974b-f61910a4afe8
`-sda3      linux_raid_member ubuntu:2          dfc61736-15fb-352a-ae16-648c25ce4817
  `-md2                                                                             
sdb                                                                                 
|-sdb1      linux_raid_member NAS540:0          b705c51b-2360-cd8e-6b81-c03f2072f947
| `-md0     ext4                                08151163-5bca-4952-96ce-be17b423cb96
|-sdb2      linux_raid_member NAS540:1          186ed461-6150-07c3-ab9e-45767b5f7084
| `-md1     swap                                b2540084-ffe6-4e3f-974b-f61910a4afe8
`-sdb3      linux_raid_member ubuntu:2          dfc61736-15fb-352a-ae16-648c25ce4817
  `-md2                                                                             
sdc                                                                                 
`-sdc1      vfat              UBUNTU 18_0       323A-6C50                            /cdrom
sdd                                                                                 
|-sdd1      linux_raid_member NAS540:0          b705c51b-2360-cd8e-6b81-c03f2072f947
| `-md0     ext4                                08151163-5bca-4952-96ce-be17b423cb96
|-sdd2      linux_raid_member NAS540:1          186ed461-6150-07c3-ab9e-45767b5f7084
| `-md1     swap                                b2540084-ffe6-4e3f-974b-f61910a4afe8
`-sdd3      linux_raid_member ubuntu:2          dfc61736-15fb-352a-ae16-648c25ce4817
  `-md2                                                                             
sde                                                                                 
|-sde1      linux_raid_member NAS540:0          b705c51b-2360-cd8e-6b81-c03f2072f947
| `-md0     ext4                                08151163-5bca-4952-96ce-be17b423cb96
|-sde2      linux_raid_member NAS540:1          186ed461-6150-07c3-ab9e-45767b5f7084
| `-md1     swap                                b2540084-ffe6-4e3f-974b-f61910a4afe8
`-sde3      linux_raid_member ubuntu:2          dfc61736-15fb-352a-ae16-648c25ce4817
  `-md2


I also noticed that the name of the third partition is now ubuntu:2 instead of NAS540:2. Do I need to fix that to get the whole thing working in the NAS again?


Posted: Thu 21 Nov 2019, 14:11
Joined: Wed 14 Nov 2018, 19:50
Posts: 63
Did you use the 'mdadm --create' command as specified? Your new array has a bitmap in the header, while your old one didn't. That is a problem, because it shifted the data offset:
Code:
    Data Offset : 262144 sectors

Code:
    Data Offset : 264192 sectors

So now the data area starts 2048 sectors (= 1 MiB) later; no wonder it can't be mounted. AFAIK no bitmap is the default, so did you specify one?
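
You can see it directly in the superblocks; all re-created members should now show the larger offset and an internal bitmap (same device letters assumed):
Code:
sudo mdadm --examine /dev/sd[abde]3 | grep -E '^/dev/|Data Offset|Internal Bitmap'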


Posted: Thu 21 Nov 2019, 14:29
Joined: Mon 18 Nov 2019, 01:05
Posts: 12
I used the 'mdadm --create' command exactly as you posted it, so I did not specify a bitmap.

I just read the man page for mdadm regarding bitmaps and it says:
Quote:
When creating an array on devices which are 100G or larger, mdadm automatically adds an internal bitmap as it will usually be beneficial. This can be suppressed with --bitmap=none or by selecting a different consistency policy with --consistency-policy.

So it was probably created by default, and I should have used --bitmap=none.

It also says:
Quote:
If the word none is given with --grow mode, then any bitmap that is present is removed.

So can I just run this without breaking anything?

