Raid 1 change to 2 separate Volumes

NSA210, NSA221, NSA310, NSA310a, NSA310S, NSA320, NSA320S, NSA325, NSA325-v2
tomtom38
Posts: 8
Joined: Sun 2 Feb 2020, 19:13

Raid 1 change to 2 separate Volumes

Post by tomtom38 »

Hello,

Can I change the RAID 1 to two separate volumes?

Mijzelf
Posts: 108
Joined: Wed 14 Nov 2018, 19:50

Re: Raid 1 change to 2 separate Volumes

Post by Mijzelf »

Yes, but only from the command line.

tomtom38
Posts: 8
Joined: Sun 2 Feb 2020, 19:13

Re: Raid 1 change to 2 separate Volumes

Post by tomtom38 »

OK, can you tell me which command I have to use, or where I can find this information?
Thanks.

Mijzelf
Posts: 108
Joined: Wed 14 Nov 2018, 19:50

Re: Raid 1 change to 2 separate Volumes

Post by Mijzelf »

The command is mdadm. The difficulty is in the arguments.

Can you log in as root over SSH and post the output of

Code: Select all

mdadm --examine /dev/sd[ab]2

tomtom38
Posts: 8
Joined: Sun 2 Feb 2020, 19:13

Re: Raid 1 change to 2 separate Volumes

Post by tomtom38 »

/dev/sda2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 78192105:8617dca1:5c517454:0892462b
Name : NSA325-v2:0 (local to host NSA325-v2)
Creation Time : Fri Dec 2 18:35:04 2016
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 2929765376 (2794.04 GiB 3000.08 GB)
Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 15dee431:96568d13:4309e20a:1be942b3

Update Time : Sat Apr 25 11:09:38 2020
Checksum : ee7ec248 - correct
Events : 3470


Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing)
/dev/sdb2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 78192105:8617dca1:5c517454:0892462b
Name : NSA325-v2:0 (local to host NSA325-v2)
Creation Time : Fri Dec 2 18:35:04 2016
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 2929765376 (2794.04 GiB 3000.08 GB)
Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 9a9d7fd5:21be1256:68d145a4:2e3977a4

Update Time : Sat Apr 25 11:09:38 2020
Checksum : 5f370190 - correct
Events : 3470


Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing)
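
For orientation: both superblocks above carry the same Array UUID, matching event counts, and Array State AA, so the mirror is in sync and either member can be detached without losing data. A quick cross-check from the assembled side:

Code: Select all

# [2/2] [UU] means both mirror halves are present and in sync
cat /proc/mdstat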

Mijzelf
Posts: 108
Joined: Wed 14 Nov 2018, 19:50

Re: Raid 1 change to 2 separate Volumes

Post by Mijzelf »

You can split one disk from the current array by executing

Code: Select all

mdadm /dev/md0 --fail /dev/sdb2 --remove /dev/sdb2

Now you have a single-disk RAID 1 array and a failed raid member. You can now create a single-disk RAID 1 array from the second disk:

Code: Select all

mdadm --create /dev/md1 --assume-clean --level=1 --raid-devices=1 /dev/sdb2

Reboot, and change one of the volume names from the web interface.
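
Putting the two steps together: a minimal sketch of the whole split, assuming the mirror is /dev/md0 on /dev/sda2 and /dev/sdb2 as in this thread. Note that mdadm refuses --raid-devices=1 unless --force is given as well, as the output further down shows.

Code: Select all

# detach one mirror half from the existing array
mdadm /dev/md0 --fail /dev/sdb2 --remove /dev/sdb2
# re-use the detached partition as a single-disk RAID 1;
# --force is needed because a one-device array is unusual,
# --assume-clean skips the initial resync and keeps the existing data
mdadm --create /dev/md1 --assume-clean --level=1 --raid-devices=1 --force /dev/sdb2
# verify both arrays, then reboot and rename one volume in the web interface
cat /proc/mdstat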

tomtom38
Posts: 8
Joined: Sun 2 Feb 2020, 19:13

Re: Raid 1 change to 2 separate Volumes

Post by tomtom38 »

Thank you very much.

tomtom38
Posts: 8
Joined: Sun 2 Feb 2020, 19:13

Re: Raid 1 change to 2 separate Volumes

Post by tomtom38 »

root@NSA325-v2:~# mdadm /dev/md0 --fail /dev/sdb2 --remove /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md0
mdadm: hot removed /dev/sdb2 from /dev/md0
root@NSA325-v2:~# mdadm --create /dev/md1 --assume-clean --level=1 --raid-devices=1 /dev/sdb2
mdadm: '1' is an unusual number of drives for an array, so it is probably
a mistake. If you really mean it you will need to specify --force before
setting the number of drives.
root@NSA325-v2:~# mdadm --create /dev/md1 --assume-clean --level=1 --raid-devices=1 /dev/sdb2
mdadm: '1' is an unusual number of drives for an array, so it is probably
a mistake. If you really mean it you will need to specify --force before
setting the number of drives.
root@NSA325-v2:~# mdadm --create /dev/md1 --assume-clean --level=1 --force --raid-devices=1 /dev/sdb2
mdadm: /dev/sdb2 appears to be part of a raid array:
level=raid1 devices=2 ctime=Fri Dec 2 18:35:04 2016
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

This is what I have done. In the web interface one volume is marked as degraded and one was down. After I started the repair option, the first is still degraded and the second is marked as healthy.
But I started the repair option again and now the first volume is down. Is there another way to repair this than to create a JBOD?

Mijzelf
Posts: 108
Joined: Wed 14 Nov 2018, 19:50

Re: Raid 1 change to 2 separate Volumes

Post by Mijzelf »

This is what I have done
Looks OK.
Did you reboot after that? If not, can you reboot first before continuing?
Is there another way to repair this than to create a JBOD?
I don't understand that question. Do you mean "can this be repaired, or should I rather start over and create new volumes"?

At the moment I'm not sure what the status is. So can you post the output of

Code: Select all

cat /proc/mdstat
mdadm --examine /dev/sda2
mdadm --examine /dev/sdb2
mdadm --examine /dev/md0
mdadm --examine /dev/md1
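
Note that --examine reads the RAID superblock of a member partition; run against an assembled array device such as /dev/md0 it will typically just report that no superblock was found (as happens below). The query that summarizes an assembled array is --detail, so a useful complement to the commands above is:

Code: Select all

# show state, member list and clean/degraded status of the assembled arrays
mdadm --detail /dev/md0
mdadm --detail /dev/md1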

tomtom38
Posts: 8
Joined: Sun 2 Feb 2020, 19:13

Re: Raid 1 change to 2 separate Volumes

Post by tomtom38 »

With the first hard drive the NAS no longer starts, so I swapped the first and the second drive. Volume 2 is healthy, volume 1 is degraded. Can I switch it to healthy, or is that normal for the ZyXEL firmware, because it is no longer a RAID 1 on this hard drive? Via SSH I have access to both volumes.

md1 : active raid1 sdb2[2]
2929765240 blocks super 1.2 [2/1] [_U]

md0 : active raid1 sda2[0]
2929765240 blocks super 1.2 [1/1]

unused devices: <none>


-----------------------------------------------------------
root@NSA325-v2:~# mdadm --examine /dev/sda2
/dev/sda2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : d560b54b:a4e89bb7:3aa91029:71ec1c21
Name : NSA325-v2:1 (local to host NSA325-v2)
Creation Time : Sun Apr 26 20:40:34 2020
Raid Level : raid1
Raid Devices : 1

Avail Dev Size : 2929765376 (2794.04 GiB 3000.08 GB)
Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 48f74e34:f56b7ff8:8348464b:b0998a1c

Update Time : Mon Apr 27 19:11:07 2020
Checksum : acb7d694 - correct
Events : 2


Device Role : Active device 0
Array State : A ('A' == active, '.' == missing)
---------------------------------------------------------------------------------------

root@NSA325-v2:~# mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 78192105:8617dca1:5c517454:0892462b
Name : NSA325-v2:0 (local to host NSA325-v2)
Creation Time : Fri Dec 2 18:35:04 2016
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 2929765376 (2794.04 GiB 3000.08 GB)
Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 15dee431:96568d13:4309e20a:1be942b3

Update Time : Mon Apr 27 18:39:10 2020
Checksum : ee82d63a - correct
Events : 5414


Device Role : Active device 1
Array State : .A ('A' == active, '.' == missing)
---------------------------------------------------------------------------------------------------------

root@NSA325-v2:~# mdadm --examine /dev/md0
mdadm: No md superblock detected on /dev/md0.
------------------------------------------------------------------------------------------------
root@NSA325-v2:~# mdadm --examine /dev/md1
mdadm: No md superblock detected on /dev/md1.
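
The mdstat output above shows that md1 is assembled from a member which still carries the old two-device metadata ([2/1] [_U]), which is why the firmware keeps reporting that volume as degraded. One possible way to turn it into a genuine single-disk RAID 1, sketched here without having been verified on this box, is to shrink the array's device count instead of recreating it:

Code: Select all

# reduce md1 to a one-device RAID 1; --force is needed because
# mdadm considers a single-device array unusual
mdadm --grow /dev/md1 --raid-devices=1 --force
# md1 should now show up as [1/1]
cat /proc/mdstat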
