Tuesday, June 28, 2011

The savemail panic


The savemail panic message is seen when sending mail.


The following messages are found in syslog:


(sunstation1:/)# tail /var/log/syslog
Jun  8 10:20:01 sunstation1 sendmail[1067]: [ID 577507 mail.debug] tid= 1: serverAddr=45.222.28.206:20389
Jun  8 10:20:01 sunstation1 sendmail[1067]: [ID 939703 mail.debug] tid= 1: AuthType=1
Jun  8 10:20:01 sunstation1 sendmail[1067]: [ID 142272 mail.debug] tid= 1: TlsType=0
Jun  8 10:20:01 sunstation1 sendmail[1067]: [ID 537450 mail.debug] tid= 1: SaslMech=0
Jun  8 10:20:01 sunstation1 sendmail[1067]: [ID 625532 mail.debug] tid= 1: SaslOpt=0
Jun  8 10:20:01 sunstation1 sendmail[1067]: [ID 639905 mail.debug] tid= 1: userID=cn=proxyagent,ou=profile,dc=dev,dc=mobile,dc=belgacom,dc=be
Jun  8 10:20:01 sunstation1 sendmail[1067]: [ID 801593 mail.info] p588K1BN001067: p588K1BO001067: return to sender: aliasing/forwarding loop broken
Jun  8 10:20:01 sunstation1 sendmail[1067]: [ID 801593 mail.notice] p588K1BO001067: setsender: : invalid or unparsable, received from localhost
Jun  8 10:20:01 sunstation1 sendmail[1067]: [ID 801593 mail.alert] p588K1BN001067: Losing ./qfp588K1BN001067: savemail panic
Jun  8 10:20:01 sunstation1 sendmail[1067]: [ID 801593 mail.crit] p588K1BN001067: SYSERR(root): savemail: cannot save rejected email anywhere

(sunstation1:/)# mailx -s "Test" aaron@domain.com
EOT
(sunstation1:/)# Jun  8 10:33:35 sunstation1 sendmail[15981]: [ID 801593 mail.alert] p588XZAI015981: Losing ./qfp588XZAI015981: savemail panic



This can be rectified by removing the ldap data source from the aliases entry in the /etc/nsswitch.conf file.

(sunstation1:/)# cat /etc/nsswitch.conf | grep ali
aliases:    files ldap
(sunstation1:/)#


Edit the file (for example with vi) and remove ldap from the aliases entry so that only files remains:

automount:  files ldap
aliases:    files
"/etc/nsswitch.conf" 49 lines, 1416 characters
(sunstation1:/)#
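
If the change needs to be scripted rather than made in an editor, a sed-based sketch along these lines should work (keep a backup of nsswitch.conf and verify the resulting aliases entry afterwards):

(sunstation1:/)# cp /etc/nsswitch.conf /etc/nsswitch.conf.bak
(sunstation1:/)# sed 's/^aliases:.*$/aliases:    files/' /etc/nsswitch.conf.bak > /etc/nsswitch.conf
(sunstation1:/)# grep ali /etc/nsswitch.conf
aliases:    files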



Restart the sendmail services:

(sunstation1:/)# svcs -a | grep sendmail
online         Mar_28   svc:/network/smtp:sendmail
online         Mar_28   svc:/network/sendmail-client:default
(sunstation1:/)#

(sunstation1:/)# svcadm restart svc:/network/smtp:sendmail
(sunstation1:/)# svcadm restart svc:/network/sendmail-client:default

(sunstation1:/)# mailx -s "Test" aaron@domain.com
EOT
(sunstation1:/)#

Friday, June 24, 2011

VxVM extension - striped volume


An issue was encountered while extending a volume, even though there was enough space available in the disk group (DG).


sunstation1:/root# vxassist -g DG-DB001 maxsize
Maximum volume size: 326191104 (159273Mb)
sunstation1:/root#
sunstation1:/root#
sunstation1:/root# /etc/vx/bin/vxresize -g DG-DB001 VOL-DB001-data1 +50g
VxVM vxassist ERROR V-5-1-436 Cannot allocate space to grow volume to 1498001965 blocks
VxVM vxresize ERROR V-5-1-4703 Problem running vxassist command for volume VOL-DB001-data1, in diskgroup DG-DB001
sunstation1:/root#
sunstation1:/root#
sunstation1:/root# /etc/vx/bin/vxresize -g DG-DB001 VOL-DB001-data1 +5g
sunstation1:/root# /etc/vx/bin/vxresize -g DG-DB001 VOL-DB001-data1 +5g
VxVM vxassist ERROR V-5-1-436 Cannot allocate space to grow volume to 1424601645 blocks
VxVM vxresize ERROR V-5-1-4703 Problem running vxassist command for volume VOL-DB001-data1, in diskgroup DG-DB001
sunstation1:/root#
sunstation1:/root#
sunstation1:/root#
sunstation1:/root# vxassist -g DG-DB001 maxsize
Maximum volume size: 305178624 (149013Mb)
sunstation1:/root#
sunstation1:/root#


Reason: Striped volume


sunstation1:/root# vxprint -htqg DG-DB001 VOL-DB001-data1
v  VOL-DB001-data1 -       ENABLED  ACTIVE   1414115885 SELECT  VOL-DB001-data1-01 fsgen
pl VOL-DB001-data1-01 VOL-DB001-data1 ENABLED ACTIVE 1414164480 STRIPE 8/512 RW
sd DSK-DG-DB001-110d-01 VOL-DB001-data1-01 DSK-DG-DB001-110d 3328 170826240 0/0 xp24k1_110d ENA
sd DSK-DG-DB001-1369-02 VOL-DB001-data1-01 DSK-DG-DB001-1369 2107648 3317760 0/170826240 xp24k1_1369 ENA
sd DSK-DG-DB001-1369-04 VOL-DB001-data1-01 DSK-DG-DB001-1369 110288128 2626560 0/174144000 xp24k1_1369 ENA
sd DSK-DG-DB001-110f-01 VOL-DB001-data1-01 DSK-DG-DB001-110f 3328 170826240 1/0 xp24k1_110f ENA
sd DSK-DG-DB001-136B-02 VOL-DB001-data1-01 DSK-DG-DB001-136B 20969728 5944320 1/170826240 xp24k1_136b ENA
sd DSK-DG-DB001-1100-01 VOL-DB001-data1-01 DSK-DG-DB001-1100 3328 170826240 2/0 xp24k1_1100 ENA
sd DSK-DG-DB001-1112-03 VOL-DB001-data1-01 DSK-DG-DB001-1112 21292288 5944320 2/170826240 xp24k1_1112 ENA
sd DSK-DG-DB001-1101-01 VOL-DB001-data1-01 DSK-DG-DB001-1101 3328 170826240 3/0 xp24k1_1101 ENA
sd DSK-DG-DB001-110e-02 VOL-DB001-data1-01 DSK-DG-DB001-110e 49024768 5944320 3/170826240 xp24k1_110e ENA
sd DSK-DG-DB001-1102-01 VOL-DB001-data1-01 DSK-DG-DB001-1102 3328 170826240 4/0 xp24k1_1102 ENA
sd DSK-DG-DB001-110c-02 VOL-DB001-data1-01 DSK-DG-DB001-110c 83891968 5944320 4/170826240 xp24k1_110c ENA
sd DSK-DG-DB001-1103-01 VOL-DB001-data1-01 DSK-DG-DB001-1103 3328 170826240 5/0 xp24k1_1103 ENA
sd DSK-DG-DB001-1368-02 VOL-DB001-data1-01 DSK-DG-DB001-1368 135232768 5944320 5/170826240 xp24k1_1368 ENA
sd DSK-DG-DB001-1104-01 VOL-DB001-data1-01 DSK-DG-DB001-1104 3328 170826240 6/0 xp24k1_1104 ENA
sd DSK-DG-DB001-110b-02 VOL-DB001-data1-01 DSK-DG-DB001-110b 164723968 5944320 6/170826240 xp24k1_110b ENA
sd DSK-DG-DB001-1105-01 VOL-DB001-data1-01 DSK-DG-DB001-1105 3328 170826240 7/0 xp24k1_1105 ENA
sd DSK-DG-DB001-1111-03 VOL-DB001-data1-01 DSK-DG-DB001-1111 165430528 5399040 7/170826240 xp24k1_1111 ENA
sd DSK-DG-DB001-1640-01 VOL-DB001-data1-01 DSK-DG-DB001-1640 3328 545280 7/176225280 xp24k1_1640 ENA
sunstation1:/root#
sunstation1:/root#
sunstation1:/root# vxassist -g DG-DB001 maxgrow VOL-DB001-data1
Volume VOL-DB001-data1 can be extended by 4163584 to: 1418279469 (692519Mb+557 sectors)
sunstation1:/root#






Striped volumes must have enough devices and free space to grow all columns in parallel.


Solution


Volume Manager has the following internal restrictions on extending the columns of a striped volume:
Devices used in one column cannot be used in any other column of that volume
All stripe columns must be grown in parallel



To resolve this issue, either add enough storage devices to satisfy the above constraints, or use a relayout operation to change the volume's column count, as sketched below.
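
For illustration only, the distribution of free space across the disks can be checked with vxdg free, and the column count can be changed with a vxassist relayout (the ncol value below is an assumption, not a recommendation; a relayout runs online but can take a long time, so evaluate the performance impact and remaining free space before running it on a production volume):

sunstation1:/root# vxdg -g DG-DB001 free
sunstation1:/root# vxassist -g DG-DB001 relayout VOL-DB001-data1 layout=stripe ncol=4
sunstation1:/root# vxrelayout -g DG-DB001 status VOL-DB001-data1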

Tuesday, June 21, 2011

ZFS Mounting issue


 
ZFS file systems defined:
 
(zone01:/)# zfs list | grep appco
zone01-pool/zone01/dataset/appco               60.5K  99.9M  60.5K  /appco
zone01-pool/zone01/dataset/appcocow             494K  99.5M   494K  /appco/cow
zone01-pool/zone01/dataset/appcows             418K   800M   418K  /appco/ws
(zone01:/)#
(zone01:/)#

Only two of the three are mounted:
 
(zone01:/)# df -k | grep appco
zone01-pool/zone01/dataset/appco  102400      60  102339     1%    /appco
zone01-pool/zone01/dataset/appcows  819200     417  818782     1%    /appco/ws
(zone01:/)#


(zone01:/)#
(zone01:/)# zfs get all zone01-pool/zone01/dataset/appcocow
NAME                              PROPERTY              VALUE                  SOURCE
zone01-pool/zone01/dataset/appcocow  type                  filesystem             -
zone01-pool/zone01/dataset/appcocow  creation              Mon Jan  3 12:04 2011  -
zone01-pool/zone01/dataset/appcocow  used                  494K                   -
zone01-pool/zone01/dataset/appcocow  available             99.5M                  -
zone01-pool/zone01/dataset/appcocow  referenced            494K                   -
zone01-pool/zone01/dataset/appcocow  compressratio         1.00x                  -
zone01-pool/zone01/dataset/appcocow  mounted               no                     -
zone01-pool/zone01/dataset/appcocow  quota                 100M                   local
zone01-pool/zone01/dataset/appcocow  reservation           100M                   local
zone01-pool/zone01/dataset/appcocow  recordsize            128K                   default
zone01-pool/zone01/dataset/appcocow  mountpoint            /appco/cow              local
zone01-pool/zone01/dataset/appcocow  sharenfs              off                    default
zone01-pool/zone01/dataset/appcocow  checksum              on                     default
zone01-pool/zone01/dataset/appcocow  compression           off                    default
zone01-pool/zone01/dataset/appcocow  atime                 on                     default
zone01-pool/zone01/dataset/appcocow  devices               on                     default
zone01-pool/zone01/dataset/appcocow  exec                  on                     default
zone01-pool/zone01/dataset/appcocow  setuid                on                     default
zone01-pool/zone01/dataset/appcocow  readonly              off                    default
zone01-pool/zone01/dataset/appcocow  zoned                 on                     inherited from zone01-pool/zone01/dataset
zone01-pool/zone01/dataset/appcocow  snapdir               hidden                 default
zone01-pool/zone01/dataset/appcocow  aclmode               groupmask              default
zone01-pool/zone01/dataset/appcocow  aclinherit            restricted             default
zone01-pool/zone01/dataset/appcocow  canmount              on                     default
zone01-pool/zone01/dataset/appcocow  shareiscsi            off                    default
zone01-pool/zone01/dataset/appcocow  xattr                 on                     default
zone01-pool/zone01/dataset/appcocow  copies                1                      default
zone01-pool/zone01/dataset/appcocow  version               4                      -
zone01-pool/zone01/dataset/appcocow  utf8only              off                    -
zone01-pool/zone01/dataset/appcocow  normalization         none                   -
zone01-pool/zone01/dataset/appcocow  casesensitivity       sensitive              -
zone01-pool/zone01/dataset/appcocow  vscan                 off                    default
zone01-pool/zone01/dataset/appcocow  nbmand                off                    default
zone01-pool/zone01/dataset/appcocow  sharesmb              off                    default
zone01-pool/zone01/dataset/appcocow  refquota              none                   default
zone01-pool/zone01/dataset/appcocow  refreservation        none                   default
zone01-pool/zone01/dataset/appcocow  primarycache          all                    default
zone01-pool/zone01/dataset/appcocow  secondarycache        all                    default
zone01-pool/zone01/dataset/appcocow  usedbysnapshots       0                      -
zone01-pool/zone01/dataset/appcocow  usedbydataset         494K                   -
zone01-pool/zone01/dataset/appcocow  usedbychildren        0                      -
zone01-pool/zone01/dataset/appcocow  usedbyrefreservation  0                      -
zone01-pool/zone01/dataset/appcocow  logbias               latency                default
(zone01:/)#


(zone01:/)# mkdir /appco/cow
mkdir: Failed to make directory "/appco/cow"; File exists
(zone01:/)#
(zone01:/)# mount zone01-pool/zone01/dataset/appcocow
mount: Mount point cannot be determined
(zone01:/)#
(zone01:/)#


The file system /appco/cow is not mounted; df falls back to the parent dataset:
 
(zone01:/)# df -k /appco/cow
Filesystem            kbytes    used   avail capacity  Mounted on
zone01-pool/zone01/dataset/appco
                      102400      60  102339     1%    /appco
(zone01:/)#
 

A traditional mount does not work because of the dataset's current mountpoint property:
 
(zone01:/)# mount -F zfs zone01-pool/zone01/dataset/appcocow /appco/cow
filesystem 'zone01-pool/zone01/dataset/appcocow' cannot be mounted using 'mount -F zfs'
Use 'zfs set mountpoint=/appco/cow' instead.
If you must use 'mount -F zfs' or /etc/vfstab, use 'zfs set mountpoint=legacy'.
See zfs(1M) for more information.
(zone01:/)#
(zone01:/)#
 

The mount still does not succeed even after setting the mountpoint:
 
(zone01:/)# zfs set mountpoint=/appco/cow zone01-pool/zone01/dataset/appcocow
(zone01:/)#
(zone01:/)# df -k /appco/cow
Filesystem            kbytes    used   avail capacity  Mounted on
zone01-pool/zone01/dataset/appco
                      102400      60  102339     1%    /appco
(zone01:/)#
 
(zone01:/)# df -k | grep appco
zone01-pool/zone01/dataset/appco  102400      60  102339     1%    /appco
zone01-pool/zone01/dataset/appcows  819200     417  818782     1%    /appco/ws
(zone01:/)#
(zone01:/)#
 
Set the mountpoint property to legacy:
 
(zone01:/)# zfs set mountpoint=legacy zone01-pool/zone01/dataset/appcocow
(zone01:/)#
(zone01:/)#
(zone01:/)# mount -F zfs zone01-pool/zone01/dataset/appcocow /appco/cow
(zone01:/)#
(zone01:/)#
(zone01:/)# df -k /appco/cow
Filesystem            kbytes    used   avail capacity  Mounted on
zone01-pool/zone01/dataset/appcocow
                      102400     494  101905     1%    /appco/cow
Successfully mounted
 
(zone01:/)# df -k | grep appco
zone01-pool/zone01/dataset/appco  102400      60  102339     1%    /appco
zone01-pool/zone01/dataset/appcows  819200     417  818782     1%    /appco/ws
zone01-pool/zone01/dataset/appcocow  102400     494  101905     1%    /appco/cow
(zone01:/)#
 
Set the mountpoint property back afterwards:
 
(zone01:/)# zfs set mountpoint=/appco/cow zone01-pool/zone01/dataset/appcocow
(zone01:/)# df -k /appco/cow
Filesystem            kbytes    used   avail capacity  Mounted on
zone01-pool/zone01/dataset/appcocow
                      102400     494  101905     1%    /appco/cow
(zone01:/)#

Alternatively, mount it using the zfs mount command:

(zone01:/)# zfs mount zone01-pool/zone01/dataset/appcocow
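
To confirm that ZFS itself now sees the dataset as mounted, the mounted property (which showed no in the zfs get all output above) can be checked; it should now report yes:

(zone01:/)# zfs get mounted zone01-pool/zone01/dataset/appcocow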


Sunday, June 19, 2011

dig - DNS lookup utility


The dig utility can be used to obtain information about all the aliases defined for a host.

From the dig man page:

The dig utility (domain information groper) is a flexible tool for interrogating DNS name servers. It performs DNS lookups and displays the answers that are returned from the name server(s) that were queried. Most DNS administrators use dig to troubleshoot DNS problems because of its flexibility, ease of use and clarity of output. Other lookup tools tend to have less functionality than dig.

dig @<name server> <domain> axfr | grep <host>

To get a full listing of all the records in the domain, use the axfr query type (a zone transfer):

(sunstation1:/)# dig @dns2.bc bc axfr | grep zone20
eto_a.bc.               300     IN      CNAME   zone20.bc.
eto_d.bc.               300     IN      CNAME   zone20.bc.
ppc_d.bc.               1800    IN      CNAME   zone20.bc.
swd_a.bc.               300     IN      CNAME   zone20.bc.
trd_a.bc.               300     IN      CNAME   zone20.bc.
trt_a.bc.               1800    IN      CNAME   zone20.bc.
zone20.bc.               1800    IN      A       10.162.14.31
zone20-e1.bc.            1800    IN      A       10.22.131.8
(sunstation1:/)#
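
AXFR zone transfers are often restricted by name servers; if the transfer is refused, individual records can still be queried one at a time, for example:

(sunstation1:/)# dig @dns2.bc eto_a.bc CNAME +short
(sunstation1:/)# dig @dns2.bc zone20.bc A +short

Based on the AXFR listing above, these should return zone20.bc. and 10.162.14.31 respectively.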


Wednesday, June 1, 2011

LVM quorum Lost/Reestablished


A quorum is the required number of physical volumes that must be available in a volume group to activate that volume group or for it to remain activated. To activate a volume group, more than half its disks that were available during the last activation must be online and in service. For the volume group to remain fully operational, at least half the disks must remain present and available.

During run time, when a volume group is already active, quorum can be lost if a disk fails or is taken offline. This happens when fewer than half of the physical volumes defined for the volume group remain fully operational. For example, if two disks belong to a volume group, the loss of one does not cause a loss of quorum at run time (unlike at activation, where more than half of the disks are required). To lose quorum, both disks must become unavailable. In that case the volume group remains active, but a message is printed to the console indicating that the volume group has lost quorum.

Until quorum is restored (in the previous example, until at least one of the LVM disks in the volume group is again available), LVM does not allow most commands that affect the volume group configuration to complete. Some I/O to the logical volumes of that volume group might hang because the underlying disks are not accessible. Also, until quorum is restored, the MWC (Mirror Write Cache) is not updated, because LVM cannot guarantee the consistency (integrity) of the LVM information.

Use the vgchange -q n option to override the system's quorum check when the volume group is activated. This option has no effect on the runtime quorum check. Overriding quorum can result in a volume group with an inaccurate configuration (for example, missing recently created logical volumes). This configuration change might not be reversible.
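
A sketch of activating a volume group with the quorum check overridden (the volume group name below is hypothetical; use this only when the risk of activating with an inaccurate configuration is understood and accepted):

root@syshpux:/hroot# vgchange -a y -q n /dev/vg01

In the messages further below, the volume group is identified by its group file major/minor number (64 0x000000), which corresponds to /dev/vg00 as listed next.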


root@syshpux:/hroot# ls -ld /dev/vg00
drwxr-xr-x   2 root       root          8192 Apr  1  2009 /dev/vg00
root@syshpux:/hroot# ls -l /dev/vg00
total 0
crw-r-----   1 root       sys         64 0x000000 Apr  1  2009 group
brw-r-----   1 root       sys         64 0x000001 Apr  1  2009 lvol1
brw-r-----   1 root       sys         64 0x00000a Apr  1  2009 lvol10
brw-r-----   1 root       sys         64 0x00000b Apr  1  2009 lvol11
brw-r-----   1 root       sys         64 0x00000c Apr  1  2009 lvol12
brw-r-----   1 root       sys         64 0x00000d Apr  1  2009 lvol13
brw-r-----   1 root       sys         64 0x000002 Apr  1  2009 lvol2
brw-r-----   1 root       sys         64 0x000003 Apr  1  2009 lvol3
brw-r-----   1 root       sys         64 0x000004 Apr  1  2009 lvol4
brw-r-----   1 root       sys         64 0x000005 Apr  1  2009 lvol5
brw-r-----   1 root       sys         64 0x000006 Apr  1  2009 lvol6
brw-r-----   1 root       sys         64 0x000007 Apr  1  2009 lvol7
brw-r-----   1 root       sys         64 0x000008 Apr  1  2009 lvol8
brw-r-----   1 root       sys         64 0x000009 Apr  1  2009 lvol9
crw-r-----   1 root       sys         64 0x000001 Apr  1  2009 rlvol1
crw-r-----   1 root       sys         64 0x00000a Apr  1  2009 rlvol10
crw-r-----   1 root       sys         64 0x00000b Apr  1  2009 rlvol11
crw-r-----   1 root       sys         64 0x00000c Apr  1  2009 rlvol12
crw-r-----   1 root       sys         64 0x00000d Apr  1  2009 rlvol13
crw-r-----   1 root       sys         64 0x000002 Apr  1  2009 rlvol2
crw-r-----   1 root       sys         64 0x000003 Apr  1  2009 rlvol3
crw-r-----   1 root       sys         64 0x000004 Apr  1  2009 rlvol4
crw-r-----   1 root       sys         64 0x000005 Apr  1  2009 rlvol5
crw-r-----   1 root       sys         64 0x000006 Apr  1  2009 rlvol6
crw-r-----   1 root       sys         64 0x000007 Apr  1  2009 rlvol7
crw-r-----   1 root       sys         64 0x000008 Apr  1  2009 rlvol8
crw-r-----   1 root       sys         64 0x000009 Apr  1  2009 rlvol9
root@syshpux:/hroot#

Console messages logged while a PV path failed, quorum was lost and then reestablished:
class : lunpath, instance 10
Asynchronous write failed on LUN (dev=0x3000008)
IO details : blkno : 43177576, sector no : 87379152

LVM: WARNING: VG 64 0x000000: LV 9: Some I/O requests to this LV are waiting
        indefinitely for an unavailable PV. These requests will be queued until
        the PV becomes available (or a timeout is specified for the LV).
class : lunpath, instance 10
Asynchronous write failed on LUN (dev=0x3000008)
IO details : blkno : 23125536, sector no : 47275072

LVM: WARNING: VG 64 0x000000: LV 5: Some I/O requests to this LV are waiting
        indefinitely for an unavailable PV. These requests will be queued until
        the PV becomes available (or a timeout is specified for the LV).
class : lunpath, instance 10
Asynchronous write failed on LUN (dev=0x3000008)
IO details : blkno : 43177680, sector no : 87379360

class : lunpath, instance 4
Asynchronous write failed on LUN (dev=0x3000008)
IO details : blkno : 43223016, sector no : 87470032

LVM: WARNING: VG 64 0x000000: LV 13: Some I/O requests to this LV are waiting
        indefinitely for an unavailable PV. These requests will be queued until
        the PV becomes available (or a timeout is specified for the LV).
LVM: WARNING: VG 64 0x000000: LV 10: Some I/O requests to this LV are waiting
        indefinitely for an unavailable PV. These requests will be queued until
        the PV becomes available (or a timeout is specified for the LV).
LVM: VG 64 0x000000: Lost quorum.
This may block configuration changes and I/Os. In order to reestablish quorum at least 1 of the following PVs (represented by current link) must become available:
<3 0x000008>
LVM: VG 64 0x000000: PVLink 3 0x000008 Failed! The PV is not accessible.
LVM: WARNING: VG 64 0x000000: LV 4: Some I/O requests to this LV are waiting
        indefinitely for an unavailable PV. These requests will be queued until
        the PV becomes available (or a timeout is specified for the LV).
LVM: VG 64 0x000000: Reestablished quorum.
LVM: VG 64 0x000000: PVLink 3 0x000008 Recovered.
LVM: NOTICE: VG 64 0x000000: LV 5: All I/O requests to this LV that were
        waiting indefinitely for an unavailable PV have now completed.
LVM: NOTICE: VG 64 0x000000: LV 4: All I/O requests to this LV that were
        waiting indefinitely for an unavailable PV have now completed.
LVM: NOTICE: VG 64 0x000000: LV 13: All I/O requests to this LV that were
        waiting indefinitely for an unavailable PV have now completed.
LVM: NOTICE: VG 64 0x000000: LV 10: All I/O requests to this LV that were
        waiting indefinitely for an unavailable PV have now completed.
LVM: NOTICE: VG 64 0x000000: LV 9: All I/O requests to this LV that were
        waiting indefinitely for an unavailable PV have now completed.
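
Once the failed PV path has recovered, the physical volume states can be confirmed with vgdisplay; in the verbose output each PV should again show a PV Status of available (shown here for illustration):

root@syshpux:/hroot# vgdisplay -v /dev/vg00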