Friday, 11 April 2014

Solaris 10 Zone Creation for Dummies


 

About Zones

In its simplest form, a zone is a virtual operating system environment created within a single instance of the Solaris operating system. Efficient resource utilization is the main goal of this technology.
Solaris 10's zone partitioning technology can be used to create local zones that behave like virtual servers. All local zones are controlled from the system's global zone. Processes running in a zone are completely isolated from the rest of the system. This isolation prevents processes that are running in one zone from monitoring or affecting processes that are running in other zones. Note that processes running in a local zone can be monitored from the global zone, but processes running in the global zone, or in another local zone, cannot be monitored from a local zone.
As of this writing, the upper limit on the number of zones that can be created/run on a system is 8192; of course, depending on resource availability, a single system may or may not run all the configured zones effectively.

Global Zone

When we install Solaris 10, the global zone is installed automatically, and the core operating system runs in the global zone. To list all the configured zones, we can use the zoneadm command:
 
 % zoneadm list -v
  ID NAME             STATUS         PATH
   0 global           running        /
 
The global zone is the only zone that is:
  • bootable from the system hardware
  • used for system-wide administrative control, such as physical devices, routing, or dynamic reconfiguration (DR) -- i.e., the global zone is the only zone that is aware of all devices and all file systems
  • able to configure, install, manage, or uninstall a non-global zone -- i.e., the global zone is the only zone that is aware of the existence of non-global (local) zones and their configurations. It is not possible to create local zones within a local zone
Steps to create a Local Zone

Prerequisites:
  • Plenty of disk space to hold the newly installed zone: at least 2 GB to copy the essential files to the local zone, plus whatever disk space is needed by the application(s) you plan to run in this zone; and
  • A dedicated IP address for network connectivity
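The free-space prerequisite can be checked with a small script. A minimal sketch, assuming the zone root will live on the root file system; the 2 GB figure comes from the text above, and the PARENT path is an illustrative default:

```shell
#!/bin/sh
# Pre-flight sketch: verify the parent file system has room for a new zone.
# PARENT and MIN_KB are assumptions -- adjust for your layout and workload.
PARENT=/
MIN_KB=2097152   # ~2 GB, expressed in kilobytes

# df -k prints sizes in kilobytes; field 4 of the data row is "avail".
avail_kb=`df -k "$PARENT" | awk 'NR==2 {print $4}'`

if [ "$avail_kb" -ge "$MIN_KB" ]; then
    echo "ok: ${avail_kb} KB available under $PARENT"
else
    echo "warning: only ${avail_kb} KB available under $PARENT (need ${MIN_KB})"
fi
```

This only checks the zone root's file system; space needed by the applications inside the zone still has to be budgeted separately.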
Basic Zone creation steps with examples:
  1. Check the disk space & network configuration

 % df -h /
 Filesystem             size   used  avail capacity  Mounted on
 /dev/dsk/c1t1d0s0       29G    22G   7.1G    76%    /

 % ifconfig -a
 lo0: flags=2001000849 mtu 8232 index 1
         inet 127.0.0.1 netmask ff000000
 eri0: flags=1000843 mtu 1500 index 2
         inet 192.168.74.217 netmask fffffe00 broadcast 192.168.7.65

  2. Since there is more than 5 GB of free space, I've decided to install the local zone under /zones.

 % mkdir /zones

  3. Next, define/create the zone root. This is the path to the zone's root directory, relative to the global zone's root directory. The zone root must be owned by root with mode 700. It will be used to set the zonepath property during the zone creation process.

 % cd /zones
 % mkdir appserver
 % chmod 700 appserver

 % ls -l
 total 2
 drwx------   2 root     root         512 Feb 17 12:46 appserver

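Since zoneadm later refuses to install into a zone root with the wrong owner or mode, a quick sanity check can be scripted. A sketch; the default path is the example from this walkthrough, and the exact `ls -l` output format is assumed to be the classic Solaris one:

```shell
#!/bin/sh
# Sketch: check that a zone root satisfies zoneadm's requirements
# (owned by root, mode 700). Pass the zone root path as an argument.
ZONEROOT=${1:-/zones/appserver}

# Field 1 of 'ls -ld' is the mode string, field 3 the owner.
info=`ls -ld "$ZONEROOT" 2>/dev/null | awk '{print $1, $3}'`
case "$info" in
    "")                echo "$ZONEROOT: does not exist" ;;
    "drwx------ root") echo "$ZONEROOT: looks good" ;;
    *)                 echo "$ZONEROOT: wrong owner or mode ($info)" ;;
esac
```

Running this before `zoneadm install` saves a failed verify step later on.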
  4. Create & configure a new 'sparse root' local zone, with root privileges

 % zonecfg -z appserv
 appserv: No such zone configured
 Use 'create' to begin configuring a new zone.
 zonecfg:appserv> create
 zonecfg:appserv> set zonepath=/zones/appserver
 zonecfg:appserv> set autoboot=true
 zonecfg:appserv> add net
 zonecfg:appserv:net> set physical=eri0
 zonecfg:appserv:net> set address=192.168.7.201
 zonecfg:appserv:net> end
 zonecfg:appserv> add fs
 zonecfg:appserv:fs> set dir=/repo2
 zonecfg:appserv:fs> set special=/dev/dsk/c2t40d1s6
 zonecfg:appserv:fs> set raw=/dev/rdsk/c2t40d1s6
 zonecfg:appserv:fs> set type=ufs
 zonecfg:appserv:fs> add options noforcedirectio
 zonecfg:appserv:fs> end
 zonecfg:appserv> add inherit-pkg-dir
 zonecfg:appserv:inherit-pkg-dir> set dir=/opt/csw
 zonecfg:appserv:inherit-pkg-dir> end
 zonecfg:appserv> info
 zonepath: /zones/appserver
 autoboot: true
 pool:
 inherit-pkg-dir:
         dir: /lib
 inherit-pkg-dir:
         dir: /platform
 inherit-pkg-dir:
         dir: /sbin
 inherit-pkg-dir:
         dir: /usr
 inherit-pkg-dir:
         dir: /opt/csw
 net:
         address: 192.168.7.201
         physical: eri0
 zonecfg:appserv> verify
 zonecfg:appserv> commit
 zonecfg:appserv> exit
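The interactive session above can also be driven non-interactively: zonecfg accepts a command file via -f, in the same format that `zonecfg -z <zone> export` emits, which makes zone builds repeatable. A sketch using the same properties as the example (the /tmp path is illustrative):

```shell
# Sketch: the same sparse root configuration, driven from a command file.
# The file format matches what 'zonecfg -z <zone> export' produces.
cat > /tmp/appserv.cfg <<'EOF'
create
set zonepath=/zones/appserver
set autoboot=true
add net
set physical=eri0
set address=192.168.7.201
end
add inherit-pkg-dir
set dir=/opt/csw
end
commit
EOF

zonecfg -z appserv -f /tmp/appserv.cfg
```

This is handy when the same configuration has to be reproduced on several hosts.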
        
Sparse Root Zone Vs Whole Root Zone

In a Sparse Root Zone, the directories /usr, /sbin, /lib and /platform are mounted as loopback file systems. That is, although those directories appear as normal directories inside the sparse root zone, they are mounted read-only, and any change to them in the global zone is visible from the sparse root zone.
However, if you need the ability to write into any of the directories listed above, you need to configure a Whole Root Zone. For example, software like ClearCase needs write access to the /usr directory; in that case, configuring a Whole Root Zone is the way to go. The steps for creating and configuring a new 'Whole Root' local zone are as follows:
 
 % zonecfg -z appserv
 appserv: No such zone configured
 Use 'create' to begin configuring a new zone.
 zonecfg:appserv> create
 zonecfg:appserv> set zonepath=/zones/appserver
 zonecfg:appserv> set autoboot=true
 zonecfg:appserv> add net
 zonecfg:appserv:net> set physical=eri0
 zonecfg:appserv:net> set address=192.168.7.201
 zonecfg:appserv:net> end
 zonecfg:appserv> add inherit-pkg-dir
 zonecfg:appserv:inherit-pkg-dir> set dir=/opt/csw
 zonecfg:appserv:inherit-pkg-dir> end
 zonecfg:appserv> remove inherit-pkg-dir dir=/usr
 zonecfg:appserv> remove inherit-pkg-dir dir=/sbin
 zonecfg:appserv> remove inherit-pkg-dir dir=/lib
 zonecfg:appserv> remove inherit-pkg-dir dir=/platform
 zonecfg:appserv> info
 zonepath: /zones/appserver
 autoboot: true
 pool:
 inherit-pkg-dir:
         dir: /opt/csw
 net:
         address: 192.168.7.201
         physical: eri0
 zonecfg:appserv> verify
 zonecfg:appserv> commit
 zonecfg:appserv> exit
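As a shorthand worth checking against zonecfg(1M): Solaris 10's zonecfg also supports creating a blank configuration with `create -b`, which starts with no inherit-pkg-dir resources at all, so the four remove commands above become unnecessary. A sketch with the same zone name and properties as the example:

```shell
# Sketch: whole root zone via a blank configuration ('create -b'),
# which begins with no inherit-pkg-dir resources to remove.
zonecfg -z appserv <<'EOF'
create -b
set zonepath=/zones/appserver
set autoboot=true
add net
set physical=eri0
set address=192.168.7.201
end
commit
EOF
```

The result is equivalent to the transcript above: a configuration with no shared, read-only directories inherited from the global zone.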
 
Brief explanation of the properties added above:
  • zonepath=/zones/appserver
The local zone's root directory, relative to the global zone's root directory. That is, the local zone's bin, dev, etc, lib, net, opt, usr, var, etc. directories live physically under the /zones/appserver directory.
  • autoboot=true
Boot this zone automatically whenever the global zone is booted.
  • physical=eri0
The eri0 card is used for the physical network interface.
  • address=192.168.7.201
192.168.7.201 is the zone's IP address. It must have the necessary DNS entries.
The whole add fs section exports a file system to the zone. In this example, the file system being exported to the zone is an existing UFS file system.
  • dir=/repo2
/repo2 is the mount point in the local zone.
  • special=/dev/dsk/c2t40d1s6, raw=/dev/rdsk/c2t40d1s6
Grant access to the block (/dev/dsk/c2t40d1s6) and raw (/dev/rdsk/c2t40d1s6) devices so the file system can be mounted in the non-global zone. Make sure the block device is not mounted anywhere right before installing the non-global zone; otherwise, the zone installation may fail with an error like "ERROR: file system check </usr/lib/fs/ufs/fsck> of </dev/rdsk/c2t40d1s6> failed: exit status <33>: run fsck manually". In that case, unmount the file system being exported, uninstall the partially installed zone (zoneadm -z <zone> uninstall), then install the zone from scratch (there is no need to re-configure the zone; just re-install it).
  • type=ufs
The file system is of type UFS.
  • options noforcedirectio
Mount the file system with the noforcedirectio option.
  • dir=/opt/csw (inherit-pkg-dir)
A read-only path that will be lofs'd (loopback mounted) from the global zone. Note: this works for sparse root zones only -- a whole root zone cannot have any shared file systems.
The zonecfg commands verify and commit verify and commit the zone configuration, respectively. Note that committing explicitly is not strictly necessary; it is done automatically when we exit the zonecfg tool. info displays the current configuration.
  5. Check the state of the newly created/configured zone

 % zoneadm list -cv
   ID NAME             STATUS         PATH
    0 global           running        /
    - appserv          configured     /zones/appserver

  6. Next, install the configured zone. It takes a while to install the necessary packages

 % zoneadm -z appserv install
 /zones must not be group writable.
 could not verify zonepath /zones/appserver because of the above errors.
 zoneadm: zone appserv failed to verify

 % ls -ld /zones
 drwxrwxr-x   3 root     root         512 Feb 17 12:46 /zones

Since /zones must not be group writable, let's change the mode to 700.
 
 % chmod 700 /zones
 
 % ls -ld /zones
 drwx------   3 root     root         512 Feb 17 12:46 /zones
 
 % zoneadm -z appserv install
 Preparing to install zone <appserv>.
 Creating list of files to copy from the global zone.
 Copying <2658> files to the zone.
 Initializing zone product registry.
 Determining zone package initialization order.
 Preparing to initialize <1128> packages on the zone.
 Initialized <1128> packages on zone.
 Zone <appserv> is initialized.
 Installation of these packages generated errors: <...>
 Installation of <2> packages was skipped.
 Installation of these packages generated warnings: <CSWbdb3 CSWtcpwrap
  CSWreadline CSWlibnet CSWlibpcap CSWjpeg CSWzlib CSWcommon CSWpkgget SMCethr CSWxpm
  SMClsof SMClibgcc SMCossld OpenSSH SMCtar SUNWj3dmx CSWexpat CSWftype2 CSWfconfig
  CSWiconv CSWggettext CSWlibatk CSWpango CSWpng CSWtiff CSWgtk2 CSWpcre CSWlibmm
  CSWgsed CSWlibtool CSWncurses CSWunixodbc CSWoldap CSWt1lib CSWlibxml2 CSWbzip2
  CSWlibidn CSWphp>
 The file <...> contains a log of the zone installation.
 
  7. Verify the state of the appserv zone one more time

 % zoneadm list -cv
   ID NAME             STATUS         PATH
    0 global           running        /
    - appserv          installed      /zones/appserver

  8. Boot up the appserv zone. Let's note down the ifconfig output, to see how it changes after the local zone boots up. Also observe that there is no answer from the server yet, since it is not up

 % ping 192.168.7.201
 no answer from 192.168.7.201

 % ifconfig -a
 lo0: flags=2001000849 mtu 8232 index 1
         inet 127.0.0.1 netmask ff000000
 eri0: flags=1000843 mtu 1500 index 2
         inet 192.168.74.217 netmask fffffe00 broadcast 192.168.7.65
         ether 0:3:ba:2d:0:84

 % zoneadm -z appserv boot
 zoneadm: zone 'appserv': WARNING: eri0:1: no matching subnet found in netmasks(4) for 192.168.7.201; using default of 255.255.255.0

 % zoneadm list -cv
   ID NAME             STATUS         PATH
    0 global           running        /
    1 appserv          running        /zones/appserver

 % ping 192.168.7.201
 192.168.7.201 is alive

 % ifconfig -a
 lo0: flags=2001000849 mtu 8232 index 1
         inet 127.0.0.1 netmask ff000000
 lo0:1: flags=2001000849 mtu 8232 index 1
         zone appserv
         inet 127.0.0.1 netmask ff000000
 eri0: flags=1000843 mtu 1500 index 2
         inet 192.168.74.217 netmask fffffe00 broadcast 192.168.7.65
         ether 0:3:ba:2d:0:84
 eri0:1: flags=1000843 mtu 1500 index 2
         zone appserv
         inet 192.168.7.201 netmask ffff0000 broadcast 192.168.255.255

Observe that the zone appserv has its own virtual instance of lo0, the system's loopback interface, and that the zone's IP address is also served by the eri0 network interface.
  9. Log in to the zone console and perform the internal zone configuration. The zlogin utility can be used to enter a zone; its -C option logs in to the zone console. The first time we log in to the console, we get a chance to answer a series of questions for the desired zone configuration.

 % zlogin -C -e [ appserv
 [Connected to zone 'appserv' console]

 Select a Language

   0. English
   1. es
   2. fr

 Please make a choice (0 - 2), or press h or ? for help: 0

 Select a Locale

   0. English (C - 7-bit ASCII)
   1. Canada (English) (UTF-8)
   2. Canada-English (ISO8859-1)
   3. U.S.A. (UTF-8)
   4. U.S.A. (en_US.ISO8859-1)
   5. U.S.A. (en_US.ISO8859-15)
   6. Go Back to Previous Screen

 Please make a choice (0 - 6), or press h or ? for help: 0

 ...

  Enter the host name which identifies this system on the network.  The name
  must be unique within your domain; creating a duplicate host name will cause
  problems on the network after you install Solaris.

  A host name must have at least one character; it can contain letters,
  digits, and minus signs (-).

     Host name for eri0:1: v440appserv

 ...
 ...

 System identification is completed.
 ...

 rebooting system due to change(s) in /etc/default/init

 [NOTICE: Zone rebooting]

 SunOS Release 5.11 Version snv_23 64-bit
 Copyright 1983-2005 Sun Microsystems, Inc.  All rights reserved.
 Use is subject to license terms.
 Hostname: v440appserv

 v440appserv console login: root
 Password:
 Feb 17 15:15:30 v440appserv login: ROOT LOGIN /dev/console
 Sun Microsystems Inc.   SunOS 5.11      snv_23  October 2007

 %

That is all there is to creating a local zone. Now simply log in to the newly created zone, just as you would connect to any other system on the network.
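The interactive questions in the console step can usually be pre-answered by placing a sysidcfg file inside the zone before its first boot, a standard Solaris system-identification mechanism. The exact keyword set varies by release, so treat the following as an illustrative sketch; every value below is an assumption:

```shell
# Sketch: pre-answer the zone's first-boot identification questions by
# dropping a sysidcfg file into the zone root before the first boot.
# All values are illustrative; root_password takes a crypt(3C) hash.
cat > /zones/appserver/root/etc/sysidcfg <<'EOF'
system_locale=C
terminal=vt100
network_interface=PRIMARY { hostname=v440appserv }
security_policy=NONE
name_service=NONE
timezone=US/Pacific
root_password=<crypted-password-hash>
EOF
```

With this file in place, the first `zlogin -C` session normally skips straight to the login prompt instead of walking through the menus.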

Mounting file systems in a non-global zone

Sometimes it might be necessary to export file systems, or to create new file systems, when the zone is already running. This section focuses on exporting block devices and raw devices in such situations, i.e., when the local zone is already configured.

Exporting the Raw Device(s) to a non-global zone

If no file system exists on the device yet, the raw device can be exported as-is, so that a file system can be created inside the non-global zone using the normal newfs command.
The following example shows how to export the raw device to a non-global zone when the zone is already configured.
 
# zonecfg -z appserv
zonecfg:appserv> add device
zonecfg:appserv:device> set match=/dev/rdsk/c5t0d0s6
zonecfg:appserv:device> end
zonecfg:appserv> verify
zonecfg:appserv> commit
zonecfg:appserv> exit
 
In this example /dev/rdsk/c5t0d0s6 is being exported.
After the zonecfg step, reboot the non-global zone to make the raw device visible inside the non-global zone. After the reboot, check the existence of the raw device.
 
# hostname
v440appserv
 
# ls -l /dev/rdsk/c5t0d0s6
crw-r-----   1 root     sys      118, 126 Aug 27 14:33 /dev/rdsk/c5t0d0s6
 
Now that the raw device is accessible within the non-global zone, we can use the regular Solaris commands to create any file system like UFS.
eg.,
 
# newfs -v c5t0d0s6
newfs: construct a new file system /dev/rdsk/c5t0d0s6: (y/n)? y
mkfs -F ufs /dev/rdsk/c5t0d0s6 1140260864 -1 -1 8192 1024 251 1 120 8192 t 0 -1 8 128 n
Warning: 4096 sector(s) in last cylinder unallocated
/dev/rdsk/c5t0d0s6: 1140260864 sectors in 185590 cylinders of 48 tracks, 128 sectors
 556768.0MB in 11600 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
...............................................................................
...............................................................................
.........................................................................
super-block backups for last 10 cylinder groups at:
 1139344160, 1139442592, 1139541024, 1139639456, 1139737888, 1139836320,
 1139934752, 1140033184, 1140131616, 1140230048
 
Exporting the Block Device(s) to a non-global zone
If a file system already exists on the device, the block device can be exported as-is, so that the file system can be mounted inside the non-global zone using the normal Solaris mount command.
The following example shows how to export the block device to a non-global zone when the zone is already configured.
 
# zonecfg -z appserv
zonecfg:appserv> add device
zonecfg:appserv:device> set match=/dev/dsk/c5t0d0s6
zonecfg:appserv:device> end
zonecfg:appserv> verify
zonecfg:appserv> commit
zonecfg:appserv> exit
 
In this example /dev/dsk/c5t0d0s6 is being exported.
After the zonecfg step, reboot the non-global zone to make the block device visible inside the non-global zone. After the reboot, check the existence of the block device; and mount the file system within the non-global zone.
 
# hostname
v440appserv
 
# ls -l /dev/dsk/c5t0d0s6
brw-r-----   1 root     sys      118, 126 Aug 27 14:40 /dev/dsk/c5t0d0s6
 
# fstyp /dev/dsk/c5t0d0s6
ufs
 
# mount /dev/dsk/c5t0d0s6 /mnt
 
# df -h /mnt
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c5t0d0s6      535G    64M   530G     1%    /mnt
 
Mounting a file system from the global zone into the non-global zone
Sometimes it is desirable to have the flexibility of mounting a file system in the global zone or non-global zone on-demand. In such situations, rather than exporting the file systems or block devices into the non-global zone, create the file system in the global zone and mount the file system directly from the global zone into the non-global zone. Make sure to unmount that file system in the global zone if mounted, before attempting to mount it in the non-global zone.
eg.,
In the non-global zone:
 
# mkdir /repo1
 
In the global zone:
 
# df -h /repo1
/dev/dsk/c2t40d0s6     134G    64M   133G     1%    /repo1
 
# umount /repo1
 
# ls -ld /zones/appserv/root/repo1
drwxr-xr-x   2 root     root         512 Aug 27 14:45 /zones/appserv/root/repo1
 
# mount /dev/dsk/c2t40d0s6 /zones/appserv/root/repo1
 
Now go back to the non-global zone and check the mounted file systems.
 
# hostname
v440appserv
 
# df -h /repo1
Filesystem             size   used  avail capacity  Mounted on
/repo1                 134G    64M   133G     1%    /repo1
To unmount the file system from the non-global zone, run the following command from the global zone.
 
# umount /zones/appserv/root/repo1
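An alternative worth knowing, not used in the examples above: Solaris can loopback-mount (lofs) a directory that stays mounted in the global zone, so both the global and the non-global zone see the same file system, instead of moving the mount back and forth:

```shell
# Sketch: share /repo1 with the zone while it remains mounted in the
# global zone, via a loopback (lofs) mount of the directory itself.
mount -F lofs /repo1 /zones/appserv/root/repo1
```

The trade-off is that the file system is then visible in both zones at once, which may or may not be what you want.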
 
Removing the file system from the non-global zone
eg.,
Earlier in the zone creation step, the block device /dev/dsk/c2t40d1s6 was exported and mounted on the mount point /repo2 inside the non-global zone. To remove the file system completely from the non-global zone, run the following in the global zone.
 
# zonecfg -z appserv
zonecfg:appserv> remove fs dir=/repo2
zonecfg:appserv> verify
zonecfg:appserv> commit
zonecfg:appserv> exit
 
Reboot the non-global zone for this setting to take effect. 

Shutting down and booting up the local zones
  1. To bring down the local zone:

 % zlogin appserv shutdown -i 0

  2. To boot up the local zone:

 % zoneadm -z appserv boot

Just for the sake of completeness, the following steps show how to remove a local zone.

Steps to delete a Local Zone
  1. Shut down the local zone

 % zoneadm -z appserv halt

 % zoneadm list -cv
   ID NAME             STATUS         PATH
    0 global           running        /
    - appserv          installed      /zones/appserver

  2. Uninstall the local zone -- remove its root file system

 % zoneadm -z appserv uninstall
 Are you sure you want to uninstall zone appserv (y/[n])? y

 % zoneadm list -cv
   ID NAME             STATUS         PATH
    0 global           running        /
    - appserv          configured     /zones/appserver

  3. Delete the configured local zone

 % zonecfg -z appserv delete
 Are you sure you want to delete zone appserv (y/[n])? y

 % zoneadm list -cv
   ID NAME             STATUS         PATH
    0 global           running        /

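The three deletion steps can be wrapped in a small script. A destructive sketch: the -F flags (as documented for zoneadm and zonecfg) answer the confirmation prompts automatically, so double-check the zone name before running anything like this:

```shell
#!/bin/sh
# Sketch: halt, uninstall, and delete a non-global zone in one pass.
# Destructive! The -F flags suppress the interactive confirmations.
ZONE=${1:?usage: destroy_zone <zonename>}

zoneadm -z "$ZONE" halt 2>/dev/null   # ignore the error if already halted
zoneadm -z "$ZONE" uninstall -F
zonecfg -z "$ZONE" delete -F
```

After this, `zoneadm list -cv` should show only the global zone (plus any other zones left untouched).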

Cloning a Non-Global Zone

The following instructions are for cloning a non-global zone on the same system. The example shown below clones the siebeldb zone; after the cloning process, a brand new zone, oraclebi, emerges as a replica of the siebeldb zone.
eg.,
 
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   - siebeldb         installed  /zones/dbserver                native   excl
 
  1. Export the configuration of the zone that you want to clone/copy

 # zonecfg -z siebeldb export > /tmp/siebeldb.config.cfg

  2. Change whatever differs in the new zone -- for example, the IP address, data set names, network interface, etc. To make these changes, edit /tmp/siebeldb.config.cfg
  3. Create the zone root directory for the new zone being created

 # mkdir /zones3/oraclebi
 # chmod 700 /zones3/oraclebi
 # ls -ld /zones3/oraclebi
 drwx------   2 root     root         512 Mar 12 15:41 /zones3/oraclebi

  4. Create a new (empty, non-configured) zone in the usual manner, with the edited configuration file as input

 # zonecfg -z oraclebi -f /tmp/siebeldb.config.cfg

 # zoneadm list -cv
   ID NAME             STATUS     PATH                           BRAND    IP    
    0 global           running    /                              native   shared
    - siebeldb         installed  /zones/dbserver                native   excl   
    - oraclebi         configured /zones3/oraclebi               native   excl

  5. Ensure that the zone you intend to clone/copy is not running

 # zoneadm -z siebeldb halt

  6. Clone the existing zone

 # zoneadm -z oraclebi clone siebeldb
 Cloning zonepath /zones/dbserver...

This step takes at least five minutes to clone the whole zone; larger zones take longer to complete the cloning process.
  7. Boot the newly created zone

 # zoneadm -z oraclebi boot

Bring up the halted source zone as well, if you wish.
  8. Log in to the console of the new zone to configure the IP address, networking, etc., and you are done.

 # zlogin -C oraclebi


Migrating a Non-Global Zone from One Host to Another

Keywords: Solaris, Non-Global Zone, Migration, Attach, Detach
The following instructions demonstrate, with examples, how to migrate the non-global zone orabi to another server.
 
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   4 siebeldb         running    /zones/dbserver                native   excl  
   - orabi            installed  /zones3/orabi                  native   shared
 
  1. Halt the zone to be migrated, if running

 # zoneadm -z orabi halt

  2. Detach the zone. Once detached, it will be in the configured state

 # zoneadm -z orabi detach

 # zoneadm list -cv
   ID NAME             STATUS     PATH                           BRAND    IP    
    0 global           running    /                              native   shared
    4 siebeldb         running    /zones/dbserver                native   excl  
    - orabi            configured /zones3/orabi                  native   shared

  3. Move the zonepath for the zone to be migrated from the old host to the new host.
Do the following on the old host:
 
# cd /zones3
# tar -Ecf orabi.tar orabi
# compress orabi.tar
 
# sftp newhost
Connecting to newhost...
sftp> cd /zones3
sftp> put orabi.tar.Z
Uploading orabi.tar.Z to /zones3/orabi.tar.Z
sftp> quit
 
On the new host:
 
# cd /zones3
# uncompress orabi.tar.Z
# tar xf orabi.tar
 
  4. On the new host, configure the zone.
Create the equivalent zone orabi on the new host -- use zonecfg's create subcommand with the -a option and the zonepath on the new host. Make any required adjustments to the configuration and commit it.
 
# zonecfg -z orabi
orabi: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:orabi> create -a /zones3/orabi
zonecfg:orabi> info
zonename: orabi
zonepath: /zones3/orabi
brand: native
autoboot: false
bootargs: 
pool: 
limitpriv: all,!sys_suser_compat,!sys_res_config,!sys_net_config,!sys_linkdir,!sys_devices,!sys_config,!proc_zone,!dtrace_kernel,!sys_ip_config
scheduling-class: 
ip-type: shared
inherit-pkg-dir:
 dir: /lib
inherit-pkg-dir:
 dir: /platform
inherit-pkg-dir:
 dir: /sbin
inherit-pkg-dir:
 dir: /usr
net:
 address: IPaddress
 physical: nxge1
 defrouter not specified
 zonecfg:orabi> add capped-memory
zonecfg:orabi:capped-memory> set physical=8G
zonecfg:orabi:capped-memory> end
zonecfg:orabi> commit
zonecfg:orabi> exit
 
  5. Attach the zone on the new host with a validation check, and update the zone to match a host running later versions of the dependent packages

 # ls -ld /zones3
 drwxrwxrwx   5 root     root         512 Jul 15 12:30 /zones3
 # chmod g-w,o-w /zones3
 # ls -ld /zones3
 drwxr-xr-x   5 root     root         512 Jul 15 12:30 /zones3

 # zoneadm -z orabi attach -u
 Getting the list of files to remove
 Removing 1740 files
 Remove 607 of 607 packages
 Installing 1878 files
 Add 627 of 627 packages
 Updating editable files
 The file <...> within the zone contains a log of the zone update.

 # zoneadm list -cv
   ID NAME             STATUS     PATH                           BRAND    IP    
    0 global           running    /                              native   shared
    - orabi            installed  /zones3/orabi                  native   shared

Note:
It is possible to force the attach operation without performing the validation, using the -F option:

 # zoneadm -z orabi attach -F

Be careful with this option: it can lead to an incorrect configuration, and an incorrect configuration can result in undefined behavior.

Tip: How to find out whether you are connected to the primary OS instance or a virtual instance
If the command zonename returns global, you are connected to the OS instance that was booted from the physical hardware. Any string other than global means you are connected to a virtual OS instance.
Alternatively, try running prstat -Z or zoneadm list -cv. If the output shows exactly one zone, with a non-zero zone ID, you are connected to a non-global zone.
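The tip can be wrapped in a tiny check. A sketch: zonename exists on Solaris 10 and later, so the fallback branch covers systems without zones support:

```shell
#!/bin/sh
# Sketch: report whether we are in the global zone, a non-global zone,
# or on a system without zones support (no zonename command at all).
zn=`zonename 2>/dev/null`
case "$zn" in
    global) echo "global zone (primary OS instance)" ;;
    "")     echo "zonename not found: zones not supported here" ;;
    *)      echo "non-global zone: $zn" ;;
esac
```

This is convenient in shell profiles, for example to adjust the prompt when logged in to a non-global zone.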

