Hi,
I have a Solaris 11.1 box with a zpool made of 4x 1.5TB Seagate Barracudas.
The PSU of the DAS box failed, and after bringing the DAS back up with a new PSU and running zpool status, I am told the pool is suspended with I/O errors.
Ok, sounds reasonable.
I cannot seem to bring the pool back online or mark it as good. There was most likely very little activity on the disks at the time of the PSU loss, so I would have thought recovery would be fairly easy.
Is there anything I can try to recover the zpool rather than having to destroy and recreate it?
zpool status -v reports:
RimBlock@IronSan:~$ zpool status -v Datastore
  pool: Datastore
 state: SUSPENDED
status: One or more devices are unavailable in response to IO failures.
        The pool is suspended.
action: Make sure the affected devices are connected, then run 'zpool clear' or
        'fmadm repaired'.
   see: http://support.oracle.com/msg/ZFS-8000-HC
  scan: scrub repaired 0 in 9h14m with 0 errors on Wed May 1 21:24:13 2013
config:

        NAME                       STATE     READ WRITE CKSUM
        Datastore                  UNAVAIL      0     0     0
          raidz1-0                 UNAVAIL      0     0     0
            c0t5000C5002D519534d0  UNAVAIL      0     0     0
            c0t5000C500110BC861d0  UNAVAIL      0     0     0
            c0t5000C500199819E2d0  UNAVAIL      0     0     0
            c0t5000C50011114C27d0  UNAVAIL      0     0     0
        logs
          c0t5001517BB2AB147Ad0    UNAVAIL      0     0     0
        cache
          c0t5E83A97EB4A5179Ad0    UNAVAIL      0     0     0

device details:

        c0t5000C5002D519534d0  UNAVAIL  experienced I/O failures
        status: ZFS detected errors on this device.
                The pool experienced I/O failures.

        c0t5000C500110BC861d0  UNAVAIL  experienced I/O failures
        status: FMA has faulted this device.
        action: Run 'fmadm faulty' for more information. Clear the errors
                using 'fmadm repaired'.

        c0t5000C500199819E2d0  UNAVAIL  experienced I/O failures
        status: ZFS detected errors on this device.
                The pool experienced I/O failures.

        c0t5000C50011114C27d0  UNAVAIL  experienced I/O failures
        status: FMA has faulted this device.
        action: Run 'fmadm faulty' for more information. Clear the errors
                using 'fmadm repaired'.

        c0t5001517BB2AB147Ad0  UNAVAIL  experienced I/O failures
        status: ZFS detected errors on this device.
                The pool experienced I/O failures.
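For reference, the sequence the status output itself suggests would look roughly like the sketch below. This is only what I understand the recommended steps to be, not something I have gotten to work yet; the FMRI placeholder has to be taken from the actual 'fmadm faulty' output, and these commands need root privileges:

```shell
# List the faults FMA has recorded; note the FMRI of each faulted device
fmadm faulty

# Tell FMA a faulted device has been repaired/reconnected
# (repeat for each FMA-faulted device under 'device details';
#  substitute the real FMRI printed by 'fmadm faulty')
fmadm repaired '<fmri-from-fmadm-faulty>'

# Clear the pool's error state and attempt to resume I/O
zpool clear Datastore

# Re-check whether the pool has come back
zpool status -v Datastore
```

In my case 'zpool clear' has not brought the pool back, which is why I am asking.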
Note the drives are in the DAS and are seen, although they may be in a different order due to a rack restack last night. The error was the same before the restack, but some drives were listed as failed rather than unavailable.
RB