Filesystem is un-mountable because of plex in DISABLED IOFAIL state:
The filesystem fails to mount:
(HostSol10:/root)# mount -F vxfs /dev/vx/dsk/DGDB1/VOLFSARC /DGDB1
UX:vxfs mount: ERROR: V-3-20003: Cannot open /dev/vx/dsk/DGDB1/VOLFSARC: No such device or address
UX:vxfs mount: ERROR: V-3-24996: Unable to get disk layout version
(HostSol10:/root)#
The volume is shown in DISABLED state:
(HostSol10:/root)# vxprint -g DGDB1 -v
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
v VOLFSARC fsgen DISABLED 9013559296 - ACTIVE - -
(HostSol10:/root)#
Starting the volume also fails:
(HostSol10:/root)# vxvol -g DGDB1 startall
VxVM vxvol ERROR V-5-1-1198 Volume VOLFSARC has no CLEAN or non-volatile ACTIVE plexes
(HostSol10:/root)#
Detailed plex status for the volume shows the DISABLED IOFAIL state:
(HostSol10:/root)# vxprint -htg DGDB1 -p | more
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
pl VOLFSARC-01 VOLFSARC DISABLED IOFAIL 9013559296 CONCAT - RW
sd DISK-222A-01 VOLFSARC-01 DISK-222A 0 209643136 0 emc0_222a ENA
sd DISK-222B-01 VOLFSARC-01 DISK-222B 0 209643136 209643136 emc0_222b ENA
sd DISK-222C-01 VOLFSARC-01 DISK-222C 0 209643136 419286272 emc0_222c ENA
sd DISK-222D-01 VOLFSARC-01 DISK-222D 0 209643136 628929408 emc0_222d ENA
sd DISK-222E-01 VOLFSARC-01 DISK-222E 0 209643136 838572544 emc0_222e ENA
sd DISK-222F-01 VOLFSARC-01 DISK-222F 0 209643136 1048215680 emc0_222f ENA
sd DISK-2228-01 VOLFSARC-01 DISK-2228 0 209643136 1257858816 emc0_2228 ENA
sd DISK-2229-01 VOLFSARC-01 DISK-2229 0 209643136 1467501952 emc0_2229 ENA
(HostSol10:/root)#
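Before touching the plex, it is worth confirming that the underlying disks are actually healthy. A sketch of the kind of checks one might run, using the device names from the subdisk listing above (exact flags and log locations depend on the VxVM and Solaris versions in use):

```shell
# VxVM's view of one of the disks backing the plex
# (repeat for each emc0_222x device in the subdisk list)
vxdisk list emc0_222a

# Soft/hard/transport error counters as seen by Solaris
iostat -En

# Kernel log entries around the time of the I/O failure
grep -i scsi /var/adm/messages | tail -50
```

If the disks and SAN paths come back clean, the IOFAIL state was most likely caused by a transient path or array event, and it is safe to proceed with cleaning the plex.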
Once the disks and SAN connectivity have been verified and no errors are found, set the plex state back to CLEAN:
(HostSol10:/root)# vxmend -g DGDB1 fix clean VOLFSARC-01
(HostSol10:/root)# vxprint -htg DGDB1 -v | grep "^v"
v VOLFSARC - DISABLED ACTIVE 9013559296 SELECT - fsgen
(HostSol10:/root)#
(HostSol10:/root)#
Some disks are also reporting a failing status. Note that "failing" is different from the "failed" status reported by VxVM: a disk flagged as failing may have hit a transient I/O error rather than be a genuinely failing disk. In that case, you can simply clear the flag. However, if the failing status keeps reappearing on the same disk, it may indicate a genuine hardware problem with the disk or with the SAN connectivity.
(HostSol10:/root)# vxdisk list | grep -i fai
emc0_24ab auto:cdsdisk DISK-DB3-24AB DGDB3 online thinrclm failing
emc0_24a3 auto:cdsdisk DISK-DB3-24A3 DGDB3 online thinrclm failing
emc0_38c0 auto:cdsdisk DISK-DB4-38c0 DGDB4 online thinrclm failing
emc0_39a0 auto:cdsdisk DISK-DB6-39A0 DGDB6 online thinrclm failing
(HostSol10:/root)#
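When the failing flag is judged to be transient, it can be cleared with vxedit by setting the failing attribute to off on the disk media record. A hedged example for the first disk in the listing above (repeat per disk, substituting each disk's media name and disk group):

```shell
# Clear the "failing" flag on the disk media record
vxedit -g DGDB3 set failing=off DISK-DB3-24AB

# Confirm no disks still report the flag
vxdisk list | grep -i failing
```

If the flag returns after being cleared, stop and investigate the disk and its SAN paths before proceeding.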
The volume can now be forcibly started and the filesystem mounted.
(HostSol10:/root)#
(HostSol10:/root)# vxvol -g DGDB1 -f start VOLFSARC
(HostSol10:/root)#
(HostSol10:/root)# vxprint -g DGDB1 -v
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
v VOLFSARC fsgen ENABLED 9013559296 - ACTIVE - -
(HostSol10:/root)#
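Because the plex was in IOFAIL and the volume was force-started, it is prudent to run a full structural check of the filesystem before mounting it. A sketch, using the device paths from the failed mount attempt at the top (the -y answer-yes flag is an assumption; review fsck output interactively if in doubt):

```shell
# Full VxFS check against the raw device node
fsck -F vxfs -o full -y /dev/vx/rdsk/DGDB1/VOLFSARC

# Mount once fsck completes cleanly
mount -F vxfs /dev/vx/dsk/DGDB1/VOLFSARC /DGDB1
```

A clean fsck here confirms the forced start did not leave structural damage behind.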