Wednesday, September 18, 2013

Recovering /etc/path_to_inst file

Format command fails

SolHost10:/lue# format
Cannot set directory to /dev/rdsk - No such file or directory
SolHost10:/lue#


The /etc/path_to_inst file is lost.

There are no entries under /dev/rdsk or /dev/dsk:


SolHost10:/lue# ls -l /dev/rdsk | wc -l
/dev/rdsk: No such file or directory
0
SolHost10:/lue# ls -l /dev/dsk | wc -l
/dev/dsk: No such file or directory
0
SolHost10:/lue#


Copy a known-good copy of /etc/path_to_inst for this host from a very recent backup or from an Explorer archive.

SolHost10:/root# ls -l /var/tmp/path_to_inst
-r--r--r--   1 soluse dvel     260809 Oct 21 13:05 /var/tmp/path_to_inst
SolHost10:/root#


cp /var/tmp/path_to_inst /etc/path_to_inst

chown root:root /etc/path_to_inst

chmod 444 /etc/path_to_inst

ls -l /etc/path_to_inst

more /etc/path_to_inst


devfsadm -Cv

This rebuilds the device tree and removes any stale device links.
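
Once devfsadm completes, the device links should be back; a quick sanity check (a sketch, assuming the same host):

ls /dev/dsk | wc -l     # should now report a non-zero count
ls /dev/rdsk | wc -l    # same for the raw device links
echo | format           # format should list the disks again instead of failing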

Thursday, July 11, 2013

Importing a zpool into an LDOM

Identify the disk on the source server on which to create the zpool.

lrwxrwxrwx   1 root     root          67 Aug 22 12:47 /dev/rdsk/c6t6072482462EDF0001432HH90000203Bd0s2 -> 
../../devices/scsi_vhci/ssd@g60760e80164ef90000014ef90000203b:c,raw


Create the zpool and zfs

(MySource:/)# zpool create MyZpool /dev/rdsk/c6t6072482462EDF0001432HH90000203Bd0
(MySource:/)#
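
The creation of the ZFS filesystem itself is not shown above; a minimal sketch, assuming the dataset name MyZpool/MyFilesystem and mountpoint /MyFilesystem seen in the df output below:

zfs create MyZpool/MyFilesystem                         # create the dataset
zfs set mountpoint=/MyFilesystem MyZpool/MyFilesystem   # mount it at /MyFilesystem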

(MySource:/)# df -h /MyFilesystem
Filesystem             size   used  avail capacity  Mounted on
MyZpool/MyFilesystem
                        98G    62G    36G    63%    /MyFilesystem
(MySource:/)#


(MySource:/)# zpool get all MyZpool
NAME          PROPERTY       VALUE               SOURCE
MyZpool  size           99.5G               -
MyZpool  capacity       61%                 -
MyZpool  altroot        -                   default
MyZpool  health         ONLINE              -
MyZpool  guid           917911360396256368  -
MyZpool  version        32                  default
MyZpool  bootfs         -                   default
MyZpool  delegation     on                  default
MyZpool  autoreplace    off                 default
MyZpool  cachefile      -                   default
MyZpool  failmode       wait                default
MyZpool  listsnapshots  on                  default
MyZpool  autoexpand     off                 default
MyZpool  free           38.0G               -
MyZpool  allocated      61.5G               -
MyZpool  readonly       off                 -
(MySource:/)#


Export the zpool

(MySource:/)# zpool export MyZpool
(MySource:/)#
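
Note that zpool export unmounts the pool's datasets automatically; if anything had still been using them, the export would have failed and the consumers would need to be stopped first. A quick pre-check (a sketch):

fuser -c /MyFilesystem      # lists any PIDs still holding the filesystem open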


On the physical host that carries the LDOM, verify that the same disk is visible, then map it to the virtual disk service.

Physical host - MyTargetPhy

lrwxrwxrwx   1 root     root          67 Aug 22 16:39 /dev/rdsk/c12t6072482462EDF0001432HH90000203Bd0s2 -> 
../../devices/scsi_vhci/ssd@g60760e80164ef90000014ef90000203b:c,raw


(MyTargetPhy:/root)# zpool import
  pool: MyZpool
    id: 917911360396256368
 state: UNAVAIL
status: The pool is formatted using an incompatible version.
action: The pool cannot be imported.  Access the pool on a system running newer
        software, or recreate the pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-A5
config:

        MyZpool                              UNAVAIL  newer version
          c12t6072482462EDF0001432HH90000203Bd0  ONLINE




status: The pool is formatted using an incompatible version. 

=> The zpool was created on a machine running a newer ZFS pool version, and the target physical host runs an older OS release that cannot import it. The LDOM, however, is at the same OS level as the source, so the zpool can be imported inside the LDOM.
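
To confirm a pool-version mismatch before moving the disk, the pool versions each host supports can be compared with the version of the pool itself; a quick check (a sketch, run where indicated):

zpool upgrade -v | head -1     # on each host: highest pool version the ZFS software supports
zpool get version MyZpool      # on the source, while the pool is still imported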



Source:

(MySource:/)# cat /etc/release
                   Oracle Solaris 10 1/13 s10s_u11wos_24a SPARC
  Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
                            Assembled 17 January 2013
(MySource:/)#


Destination - Physical 

(MyTargetPhy:/root)# cat /etc/release
                   Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC
  Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
                            Assembled 23 August 2011
(MyTargetPhy:/root)#


Destination LDOM

(MyLdom:/root)# cat /etc/release
                   Oracle Solaris 10 1/13 s10s_u11wos_24a SPARC
  Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
                            Assembled 17 January 2013
(MyLdom:/root)#




Map the disk to the LDOM using the virtual disk service.


(MyTargetPhy:/root)# ldm add-vdsdev /dev/rdsk/c12t6072482462EDF0001432HH90000203Bd0s2 ZP-MyLdom-temp@primary-vds0
(MyTargetPhy:/root)#
(MyTargetPhy:/root)# ldm add-vdisk ZP-MyLdom-temp ZP-MyLdom-temp@primary-vds0 MyLdom
(MyTargetPhy:/root)#

(MyTargetPhy:/root)# ldm list -l primary | grep MyLdom
                     MyLdom-boot                                    /dev/rdsk/c12t6072482462EDF0001432HH900004530d0s2
                     ZP-MyLdom-temp                                 /dev/rdsk/c12t6072482462EDF0001432HH90000203Bd0s2
(MyTargetPhy:/root)#
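
Inside the LDOM, the new virtual disk should first show up as an additional cXdY device; a quick check (a sketch, the disk appeared as c0d5 in this example):

devfsadm -c disk    # only if the new device has not been picked up yet
echo | format       # the new vdisk should appear in the disk list (c0d5 here)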


Check whether the zpool is visible inside the LDOM.


(MyLdom:/root)# zpool import
  pool: MyZpool
    id: 917911360396256368
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        MyZpool  ONLINE
          c0d5      ONLINE
(MyLdom:/root)#


Import the zpool

(MyLdom:/root)# zpool import MyZpool


(MyLdom:/root)# df -h /MyFilesystem
Filesystem             size   used  avail capacity  Mounted on
MyZpool/MyFilesystem
                        98G    62G    36G    63%    /MyFilesystem
(MyLdom:/root)#



Thursday, June 27, 2013

NFS - enabling root access for a client



Client

root@HPHost:/hroot# grep /NFSShare /etc/mnttab
SolHost:/NFSShare /NFSShare nfs rsize=32768,wsize=32768,NFSv3,dev=4 0 0 1374220408
root@HPHost:/hroot#
root@HPHost:/hroot#
root@HPHost:/hroot# ls -ld /NFSShare
drwxrwxrwx  16 100028       grplos          16 Jul 22 15:52 /NFSShare
root@HPHost:/hroot# 

root@HPHost:/NFSShare/CS4_SPM# touch test3


-rw-r-----   1 nobody     nogroup          0 Jul 22 16:07 test3

The file created by root on the client is owned by nobody:nogroup because the server maps remote root requests to the anonymous user by default.



Server

(SolHost:/root)# dfshares
RESOURCE                                  SERVER ACCESS    TRANSPORT
  SolHost:/NFSShare               SolHost  -         -
(SolHost:/root)#


Enable root access from all hosts (anon=0 maps requests from unknown users, including remote root, to uid 0):

(SolHost:/root)# share -F nfs -o rw,anon=0 /NFSShare


(SolHost:/root)# exportfs -v
share -F nfs
-               /NFSShare   rw,anon=0   ""
(SolHost:/root)#
(SolHost:/root)#
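
The share entered on the command line lasts only until the next reboot; to make it persistent, add the same line to /etc/dfs/dfstab. If root access is only needed from a specific client, the root= option is a less permissive alternative to anon=0. A sketch, using HPHost, the client from the example above:

echo 'share -F nfs -o rw,anon=0 /NFSShare' >> /etc/dfs/dfstab   # persist across reboots
shareall                                                        # re-share everything in dfstab

share -F nfs -o rw,root=HPHost /NFSShare                        # root access for HPHost only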


Client

root@HPHost:/NFSShare/CS4_SPM# touch test7
root@HPHost:/NFSShare/CS4_SPM# ls -ltr test7
-rw-r-----   1 root       sys              0 Jul 22 16:17 test7
root@HPHost:/NFSShare/CS4_SPM#

Thursday, May 16, 2013

VxVM - Fixing plex in DISABLED IOFAIL state


A filesystem cannot be mounted because its plex is in the DISABLED IOFAIL state.


The filesystem fails to mount 


(HostSol10:/root)# mount -F vxfs /dev/vx/dsk/DGDB1/VOLFSARC /DGDB1
UX:vxfs mount: ERROR: V-3-20003: Cannot open /dev/vx/dsk/DGDB1/VOLFSARC: No such device or address
UX:vxfs mount: ERROR: V-3-24996: Unable to get disk layout version
(HostSol10:/root)#


The volume is shown in the DISABLED state:

(HostSol10:/root)# vxprint -g DGDB1 -v
TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
v  VOLFSARC fsgen      DISABLED 9013559296 -      ACTIVE   -       -
(HostSol10:/root)#

Starting the volume also fails

(HostSol10:/root)# vxvol -g DGDB1 startall
VxVM vxvol ERROR V-5-1-1198 Volume VOLFSARC has no CLEAN or non-volatile ACTIVE plexes
(HostSol10:/root)#

Detailed output shows the plex in the DISABLED IOFAIL state:

(HostSol10:/root)# vxprint -htg DGDB1 -p | more
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
SC NAME         PLEX         CACHE    DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE

pl VOLFSARC-01 VOLFSARC DISABLED IOFAIL 9013559296 CONCAT -    RW
sd DISK-222A-01 VOLFSARC-01 DISK-222A 0 209643136 0 emc0_222a ENA
sd DISK-222B-01 VOLFSARC-01 DISK-222B 0 209643136 209643136 emc0_222b ENA
sd DISK-222C-01 VOLFSARC-01 DISK-222C 0 209643136 419286272 emc0_222c ENA
sd DISK-222D-01 VOLFSARC-01 DISK-222D 0 209643136 628929408 emc0_222d ENA
sd DISK-222E-01 VOLFSARC-01 DISK-222E 0 209643136 838572544 emc0_222e ENA
sd DISK-222F-01 VOLFSARC-01 DISK-222F 0 209643136 1048215680 emc0_222f ENA
sd DISK-2228-01 VOLFSARC-01 DISK-2228 0 209643136 1257858816 emc0_2228 ENA
sd DISK-2229-01 VOLFSARC-01 DISK-2229 0 209643136 1467501952 emc0_2229 ENA
(HostSol10:/root)#

Once the disks and SAN paths have been checked and no errors are found, mark the plex clean:

(HostSol10:/root)# vxmend fix clean VOLFSARC-01

(HostSol10:/root)# vxprint -htg DGDB1 -v | grep v
v  VOLFSARC -          DISABLED ACTIVE   9013559296 SELECT  -        fsgen
(HostSol10:/root)#
(HostSol10:/root)#

Some disks are also reporting a failing status. The failing flag is different from the failed state reported by VxVM: a disk marked failing may simply have hit a transient I/O error rather than be genuinely going bad. In that case the flag can be cleared (see the sketch after the listing below). If the failing flag keeps reappearing on the same disk, it points to a real hardware problem with the disk or with the SAN connectivity.

(HostSol10:/root)# vxdisk list | grep -i fai
emc0_24ab    auto:cdsdisk    DISK-DB3-24AB  DGDB3   online thinrclm failing
emc0_24a3    auto:cdsdisk    DISK-DB3-24A3  DGDB3   online thinrclm failing
emc0_38c0    auto:cdsdisk    DISK-DB4-38c0  DGDB4   online thinrclm failing
emc0_39a0    auto:cdsdisk    DISK-DB6-39A0  DGDB6   online thinrclm failing
(HostSol10:/root)#
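
A sketch for clearing the flag, using the first disk from the listing above (vxedit operates on the disk media name within its disk group):

vxedit -g DGDB3 set failing=off DISK-DB3-24AB     # clear the failing flag
vxdisk list | grep -i fai                         # confirm it no longer appears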


The volume can then be forcibly started and the filesystem mounted (see the sketch after the output below).

(HostSol10:/root)#
(HostSol10:/root)# vxvol -g DGDB1 -f start VOLFSARC
(HostSol10:/root)#
(HostSol10:/root)# vxprint -g DGDB1 -v
TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
v  VOLFSARC fsgen      ENABLED  9013559296 -      ACTIVE   -       -
(HostSol10:/root)#
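
With the volume back in the ENABLED ACTIVE state, the filesystem can be mounted with the same command that failed earlier; running a log-replay fsck first is a reasonable precaution after an uncleanly detached plex. A sketch:

fsck -F vxfs /dev/vx/rdsk/DGDB1/VOLFSARC     # log replay / consistency check
mount -F vxfs /dev/vx/dsk/DGDB1/VOLFSARC /DGDB1
df -h /DGDB1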

Saturday, May 4, 2013

VxVM - vxconfigd debug mode operation

VERITAS Volume Manager (tm) provides the option of logging console output to a file. The Volume Manager configuration daemon, vxconfigd, controls whether such logging is turned on or off; it is disabled by default. If enabled, the default log file is vxconfigd.log, and its location varies by operating system. The daemon can also be restarted with an explicit debug level and log file:

SolHost10:/root# vxconfigd -x 6 -k -x log -x logfile=/tmp/vxconfigd.out
VxVM vxconfigd DEBUG V-5-1-24577

VOLD STARTUP pid=54427 debug-level=6 logfile=/tmp/vxconfigd.out

VxVM vxconfigd DEBUG V-5-1-681 IOCTL GET_VOLINFO: return 0(0x0)
VxVM vxconfigd DEBUG V-5-1-23909 Kernel version 5.1_SP1
VxVM vxconfigd DEBUG V-5-1-681 IOCTL KTRANS_ABORT: failed: errno=22 (Invalid argument)
VxVM vxconfigd DEBUG V-5-1-681 IOCTL GET_KMEM id=0 size=84: return 0(0x0)
        Results: got 84 bytes
VxVM vxconfigd DEBUG V-5-1-681 IOCTL GET_KMEM id=2 size=6384: return 0(0x0)
        Results: got 6384 bytes
VxVM vxconfigd DEBUG V-5-1-681 IOCTL GET_KMEM id=1 size=0: return 0(0x0)
        Results: got 0 bytes
VxVM vxconfigd DEBUG V-5-1-681 IOCTL SET_KMEM id=1 size=0: return 0(0x0)
VxVM vxconfigd DEBUG V-5-1-681 IOCTL SET_KMEM id=0 size=84: return 0(0x0)
VxVM vxconfigd DEBUG V-5-1-5657 mode_set: oldmode=none newmode=enabled
VxVM vxconfigd DEBUG V-5-1-5656 mode_set: locating system disk devices
VxVM vxconfigd DEBUG V-5-1-681 IOCTL GET_DISKS rids: 0.17: failed: errno=2 (No such file or directory)
VxVM vxconfigd DEBUG V-5-1-5477 find_devices_in_system: locating system disk devices
VxVM vxconfigd DEBUG V-5-1-16309 ddl_find_devices_in_system: Thread pool initialization succeed
VxVM vxconfigd DEBUG V-5-1-16366 devintf_find_soldevices: Using libdevinfo for scanning device tree
VxVM vxconfigd DEBUG V-5-1-14578 ctlr_list_insert: Creating entry for c-1, state=V
VxVM vxconfigd DEBUG V-5-1-5294 ctlr_list_insert: check for c-1
VxVM vxconfigd DEBUG V-5-1-14579 ctlr_list_insert: entry already present for c-1, state=V
VxVM vxconfigd DEBUG V-5-1-5294 ctlr_list_insert: check for c-1
VxVM vxconfigd DEBUG V-5-1-14579 ctlr_list_insert: entry already present for c-1, state=V
VxVM vxconfigd DEBUG V-5-1-21563 ddl_add_hba: Added hba c1
VxVM vxconfigd DEBUG V-5-1-21567 ddl_add_port: Added port c1_p0 under hba c1
VxVM vxconfigd DEBUG V-5-1-21569 ddl_add_target: Added target c1_p0_t0 under port c1_p0
.....
...
VxVM vxconfigd DEBUG V-5-1-14886 ddl_vendor_info: nvlist[3] is ANAME=PILLAR-AXIOM
VxVM vxconfigd DEBUG V-5-1-14885 ddl_vendor_info: name = ANAME values = 1
VxVM vxconfigd DEBUG V-5-1-14884 ddl_vendor_info: library: libvxpillaraxiom.so name: ANAME value[0]: PILLAR-AXIOM

VxVM vxconfigd DEBUG V-5-1-14886 ddl_vendor_info: nvlist[4] is ASL_VERSION=vm-5.1.100-rev-1
VxVM vxconfigd DEBUG V-5-1-14885 ddl_vendor_info: name = ASL_VERSION values = 1
VxVM vxconfigd DEBUG V-5-1-14884 ddl_vendor_info: library: libvxpillaraxiom.so name: ASL_VERSION value[0]: vm-5.1.100-rev-1

VxVM vxconfigd DEBUG V-5-1-14882 ddl_vendor_info exits with success for library = libvxpillaraxiom.so
VxVM vxconfigd DEBUG V-5-1-0 Check ASL - libvxpp.so
VxVM vxconfigd DEBUG V-5-1-14563 checkasl: ASL Key file - /etc/vx/aslkey.d/libvxpp.key
VxVM vxconfigd DEBUG V-5-1-14880 ddl_vendor_info entered for library = libvxpp.so
VxVM vxconfigd ERROR V-5-1-0 Segmentation violation - core dumped
SolHost10:/root#

The higher the debug level (0-9), the more verbose the output.
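
Once the trace has been collected, vxconfigd can be restarted without the debug flags; a sketch:

vxconfigd -k        # restart the configuration daemon in normal mode
vxdctl mode         # should report: mode: enabled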

Thursday, May 2, 2013

Solaris - Booting into a zfs root from ok prompt

If the root filesystem is on ZFS, the procedure below can be used to boot a specific Boot Environment (BE) from the OpenBoot ok prompt.

In this case there are two BEs, but only one is listed by boot -L:

{0} ok boot -L
Boot device: /pci@400/pci@0/pci@8/scsi@0/disk@0:a  File and args: -L
1 sol10u8
Select environment to boot: [ 1 - 1 ]: 1

To boot the selected entry, invoke:
boot [] -Z rpool/ROOT/sol10u8


Program terminated

However, we can still boot the missing BE from the ok prompt if we know its ZFS root dataset name.


{0} ok boot -Z rpool/ROOT/sol10u9


T5240, No Keyboard
Copyright (c) 1998, 2012, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.6.b, 130848 MB memory available, Serial #94648388.
Ethernet address 0:21:28:a4:38:44, Host ID: 85a43844.



Boot device: /pci@400/pci@0/pci@8/scsi@0/disk@0:a  File and args: -Z rpool/ROOT/sol10-u9
SunOS Release 5.10 Version Generic_147440-11 64-bit
Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.
WARNING: /scsi_vhci/ssd@g60060e80153269000001326900002023 (ssd16):

        Corrupt label; wrong magic number
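
Once the system is up, the ZFS root datasets (and hence the names usable with boot -Z) can be listed directly; a sketch, assuming a standard rpool layout and Live Upgrade managed BEs:

zfs list -r -o name,mountpoint rpool/ROOT     # each child dataset is a boot environment
lustatus                                      # Live Upgrade view of the BEs, if LU is in use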

Thursday, March 28, 2013

Unix - Moving a process to the background and bringing it back to the foreground

It is possible to move a running process to the background so that the prompt is free for other work, and later bring the backgrounded process back to the foreground.



Use CTRL+Z to suspend the process (this sends SIGTSTP):

(MySolaris10:/var/adm)#
(MySolaris10:/var/adm)# cp -rp lastlog lastlog.old
^Z
[1]+  Stopped                 cp -rp lastlog lastlog.old
(MySolaris10:/var/adm)#
(MySolaris10:/var/adm)#

Use the bg command to resume the suspended job in the background:

(MySolaris10:/var/adm)#
(MySolaris10:/var/adm)# bg
[1]+ cp -rp lastlog lastlog.old &
(MySolaris10:/var/adm)#

Once the command completes, the shell reports the job's status:

(MySolaris10:/var/adm)#
[1]+  Done                    cp -rp lastlog lastlog.old
(MySolaris10:/var/adm)#
(MySolaris10:/var/adm)#

Use fg to bring a backgrounded or stopped job back to the foreground.
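
A minimal sketch of bringing the job back, assuming job number 1 as in the output above:

jobs        # list current jobs and their numbers
fg %1       # resume job 1 in the foreground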