Friday, September 10, 2010

LOFS Mounts

A loopback file system (lofs) is a virtual file system that provides an alternate path to an already mounted file system; it behaves much like a hard link to a mount point.
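For example, assuming /export/data is part of an already mounted file system (both paths here are hypothetical), it can be made visible at a second path:

server1:/# mkdir -p /mnt/data
server1:/# mount -F lofs /export/data /mnt/data

Both paths now refer to the same files; df -n /mnt/data should report the file system type as lofs.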

This feature is very useful when implementing zones.

Consider a situation in which you have a global zone where the Oracle client is installed. The machine has, say, 5 zones, and the requirement is to have the Oracle client available in each zone.
Using lofs, we can make the Oracle client installed in the global zone appear inside each zone as if it were installed locally.


The zone available on this machine is:

server1:/root# zoneadm list -icv
  ID NAME             STATUS     PATH                           BRAND    IP
  76 zone1            running    /zones/zone1                   native   shared



The file system /opt/oracle in the global zone server1 has the Oracle client installed:

server1:/opt/oracle# df -k .
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/vx/dsk/DG-LOCAL/VOL-LOCAL-opt-oracle
                     5242880 3186470 1927985    63%    /opt/oracle
server1:/opt/oracle# ls
admin       lost+found  product


Let's check the zone configuration for the above zone:

server1:/opt/oracle# zonecfg -z zone1 info
zonename: zone1
zonepath: /zones/zone1
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
[cpu-shares: 20]
fs:
        dir: /etc/globalname
        special: /etc/nodename
        raw not specified
        type: lofs
        options: [ro]
net:
        address: 10.120.198.22
        physical: bge0
        defrouter: 10.120.198.3
net:
        address: 10.3.90.230
        physical: nxge1
        defrouter not specified
rctl:
        name: zone.cpu-shares
        value: (priv=privileged,limit=20,action=none)
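As a side note, if only the fs resources are of interest, zonecfg can report them directly instead of dumping the full configuration:

server1:/# zonecfg -z zone1 info fs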


Configure the lofs file system on the zone:


server1:/opt/oracle# zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/opt/oracle
zonecfg:zone1:fs> set special=/opt/oracle
zonecfg:zone1:fs> set type=lofs
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> info
zonename: zone1
zonepath: /zones/zone1
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
[cpu-shares: 20]
fs:
        dir: /etc/globalname
        special: /etc/nodename
        raw not specified
        type: lofs
        options: [ro]
fs:
        dir: /opt/oracle
        special: /opt/oracle
        raw not specified
        type: lofs
        options: []
net:
        address: 10.120.198.22
        physical: bge0
        defrouter: 10.120.198.3
net:
        address: 10.3.90.230
        physical: nxge1
        defrouter not specified
rctl:
        name: zone.cpu-shares
        value: (priv=privileged,limit=20,action=none)
zonecfg:zone1> exit
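The same fs resource can also be added non-interactively by passing the subcommands to zonecfg as a single command string, which is handy when scripting this across several zones (a sketch of the same configuration as above):

server1:/# zonecfg -z zone1 "add fs; set dir=/opt/oracle; set special=/opt/oracle; set type=lofs; end; commit"

If the client should be shared read-only, add set options=[ro] before the end subcommand.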


Mount the lofs file system onto the correct path under the zone root. The mount below works even without the zonecfg step above, but configuring it in zonecfg makes the file system persistent, so the mount happens automatically whenever the zone boots (much like an entry in /etc/vfstab).

server1:/# mount -F lofs /opt/oracle /zones/zone1/root/opt/oracle
mount: Mount point /zones/zone1/root/opt/oracle does not exist.
server1:/#
server1:/# mkdir /zones/zone1/root/opt/oracle
server1:/#
server1:/# mount -F lofs /opt/oracle /zones/zone1/root/opt/oracle
server1:/#
server1:/# ls -ld /opt/oracle
drwxr-xr-x   5 oracle   dba           96 Aug 22  2008 /opt/oracle
server1:/#
server1:/# ls -ld /zones/zone1/root/opt/oracle
drwxr-xr-x   5 oracle   dba           96 Aug 22  2008 /zones/zone1/root/opt/oracle


Now the /opt/oracle file system is available in the zone. To check, zlogin to zone1 and try accessing /opt/oracle.
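A quick check can also be run from the global zone itself; df -n should report /opt/oracle as type lofs inside the zone:

server1:/# zlogin zone1 df -n /opt/oracle
server1:/# zlogin zone1 ls /opt/oracle

And since the fs resource is committed in the zone configuration, the mount should reappear automatically after the zone reboots:

server1:/# zoneadm -z zone1 reboot
server1:/# zlogin zone1 df -n /opt/oracle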

The above example showed how to configure a lofs mount in the zone configuration and how to mount the file system as lofs manually.

This way the Oracle client installed in the global zone is accessible from the zone, thereby saving the additional disk space that would be needed if the Oracle client were installed separately inside each zone.
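If the share is no longer needed, the reverse is straightforward: unmount the lofs mount from the global zone and remove the fs resource from the zone configuration (a sketch):

server1:/# umount /zones/zone1/root/opt/oracle
server1:/# zonecfg -z zone1 "remove fs dir=/opt/oracle; commit"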
