I think you have to select only security updates from the update screen. That should bring back the Update 5 download.
Reboot the server; that should bring the drives back online. Sometimes the updates seem to drop all volumes until the system is reset.
Have just purchased an N54L off the members market and I'm looking to get XPEnology running on it. Now, a quick question about using existing drives: if the drives are close to capacity with media content, will this data get erased if I use those drives in the MicroServer? I'm getting some new drives for the unit, but I also want to port over the drives from another NAS unit into the MicroServer without losing the data.
Is this possible?
So I have 4 x 3TB drives in my server, all RAIDed to give performance/resilience,
which gives me 6TB of space altogether.
It's time to change the drives to 4TB, or 6TB if the server allows.
What's the best way to do this? Can I remove 2 alternating drives and pop in 2 x 6TB, letting it auto-rebuild the RAID, then once that's done remove the other 2 x 3TB, replace them and let it auto-rebuild again? Or is that wishful thinking?
Any advice is appreciated!
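For what it's worth, DSM normally drives that kind of swap from Storage Manager, but you can watch each rebuild over SSH. A minimal sketch, assuming an mdadm-backed data array at /dev/md2 and that you swap one drive at a time, letting each rebuild finish before pulling the next (the device name is illustrative; check your own /proc/mdstat):

cat /proc/mdstat                # shows recovery progress and an ETA while an array rebuilds
mdadm --detail /dev/md2         # per-member state, including which disk is resyncing (run as root)
watch -n 60 cat /proc/mdstat    # optional: refresh the progress read-out every minute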
Can't get my disk mounted after security update to DSM 5.1-5055
Abnormality detected. All volumes have been dismounted
Please help!
Edit:
I rolled back using this:
Can't get my disk mounted after security update to DSM 5.1-5055
Agreed. New bootloader is available for 5.1-5055, which should sort the problems.
There is a much easier way to do this.
Plug in the USB with the new DSM 5.1-5055 bootloader (you do not need to power down the server).
Do a manual update using the DSM 5.1-5055 .pat file.
This worked like a charm for me.
I cannot take the credit for this. I found it on the XPEnology forum site.
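If you want to confirm the manual update actually took, the installed DSM version can be read over SSH; a quick check, assuming SSH is enabled on the box:

cat /etc.defaults/VERSION                 # full version record for the installed DSM
grep buildnumber /etc.defaults/VERSION    # should report 5055 once the 5.1-5055 .pat is applied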
Sorry, I'm back on a Windows + FlexRAID solution, so I didn't realise that the update wasn't Update 5! A new bootloader has been released with support for 5.1-5055.
New bootloader is available for 5.1-5055, which should sort the problems.
cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sda5[0] sdd5[3] sdc5[2] sdb5[1]
      5846338944 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]
      2097088 blocks [12/4] [UUUU________]
md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3]
      2490176 blocks [12/4] [UUUU________]
unused devices: <none>
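That output actually looks healthy: md2 is the RAID5 data array with all four members active ([4/4] [UUUU]), while md0 and md1 are DSM's small system and swap mirrors, which reserve twelve slots but only populate four (hence [12/4]). If the arrays are assembled but the volume still shows as dismounted, a couple of quick checks over SSH, assuming the usual /volume1 layout:

df -h /volume1              # is the data volume actually mounted and reporting its expected size?
grep volume /proc/mounts    # list any /volumeX mounts the kernel currently knows about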