In my previous post, I detailed the configuration of Sun LDOMs (now called Oracle VM Server for SPARC) as failover guest domains. In this post, I'm going to describe the migration of some physical servers (running Solaris 10) into the clustered control domains configured previously.
To achieve these migrations, I used the Oracle VM Server for SPARC P2V tool (ldmp2v).
The Oracle VM Server for SPARC P2V Tool automatically converts an existing physical system to a virtual system that runs in a logical domain on a chip multithreading (CMT) system. The source system can be any of the following:
- Any sun4u SPARC based system that runs at least the Solaris 8 OS
- Any sun4v system that runs the Oracle Solaris 10 OS, but does not run in a logical domain
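A quick way to confirm which case a source system falls into is to check its machine architecture and OS release before starting; a minimal sketch, run on the source system:
/*Confirm the architecture (sun4u or sun4v) and the Solaris release of the source system*/
# uname -m
# cat /etc/release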
The migration of physical systems to clustered control domains is performed through four phases, which are described below:
Collection --> Preparation --> Conversion --> Clustering
Collection Phase:
During the collection phase, we'll perform the backup and archiving of the source system. For this step, we decided to separate the collection of the system image from the backup of the database data.
The collection of the system image is performed with the ldmp2v tool. The Oracle VM Server for SPARC P2V Tool package must be installed and configured only on the control domain of the target system; it does not need to be installed on the source system. Instead, the /usr/sbin/ldmp2v script was copied from the target system to the source system. The ldmp2v collect command creates a backup of all mounted UFS file systems, except those excluded with the -x option.
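Since only the script itself is needed on the source side, copying it over could look like the sketch below (target-cdom is a placeholder name for the target control domain; use whatever transfer method fits your environment):
/*Copy the ldmp2v script from the target control domain to the source system*/
# scp target-cdom:/usr/sbin/ldmp2v /usr/sbin/ldmp2v
With the script in place, below are the commands to run on the source system (the system we're migrating).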
/*Archive_System is a NFS server*/
# mount Archive_System:/share /mnt
# mkdir /mnt/$(hostname)
Start the collection phase (/data here contains the database's data files, so this file system was excluded and will be restored separately).
/*Backup of System using ldmp2v*/
# ldmp2v collect -d /mnt/$(hostname) -x /data
/*Backup of Database's Data Files*/
# tar cvf /mnt/data.tar /data/*
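Before moving on, it doesn't hurt to verify that the collect step and the tar actually left their output on the NFS share; something along these lines (the exact file names may differ in your case):
/*Quick check of the collected archive and the database backup*/
# ls -l /mnt/$(hostname) /mnt/data.tar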
Preparation Phase:
This phase takes place on one of the target control domains (on one control domain only). The aim here is to create the target system from the data gathered during the collection phase.
1. Create the zpool which will serve as receptacle for the guest domain
We identified the DID device which will be used to create the zpool (d11) and formatted this device, allocating the entire space of the device to slice 6.
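If needed, the mapping between the DID device and the underlying physical disk, as well as its slice layout, can be checked with the Solaris Cluster and Solaris tools; a minimal sketch (d11 is the device used here):
/*Check the DID device mapping and its partition table*/
# cldevice list -v d11
# prtvtoc /dev/did/rdsk/d11s2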
# zpool create zpool_system1 /dev/did/dsk/d11s6
2. Create the cluster resource group for this server and add the zpool to it as an HAStoragePlus resource
# clrg create -n node1,node2 system1-rg
# clrs create -g system1-rg -t HAStoragePlus -p Zpools=zpool_system1 system1-fs-rs
# clrg online -M system1-rg
# zpool list
NAME            SIZE   ALLOC  FREE   CAP  HEALTH  ALTROOT
rpool           278G   12.8G  265G   4%   ONLINE  -
zpool_system1   99.5G  79.5K  99.5G  0%   ONLINE  /
3. Create the /etc/ldmp2v.conf file and configure the following properties
# cat /etc/ldmp2v.conf
VSW="primary-vsw0"
VDS="primary-vds0"
VCC="primary-vcc0"
BACKEND_PREFIX="/zpool_system1/"
BACKEND_TYPE="file"
BACKEND_SPARSE="no"
BOOT_TIMEOUT=10
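As a side note, the file back-end on a zpool used here is not the only possibility; as discussed in the comments at the end of this post, a raw disk or LUN back-end could be described roughly like this instead (the device name below is a placeholder):
/*Hypothetical /etc/ldmp2v.conf variant for a raw disk or LUN back-end*/
BACKEND_PREFIX="/dev/dsk/cxtxdxsx"
BACKEND_TYPE="disk"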
4. Start the restoration
The file system image is restored to one or more virtual disks.
# mount Archive_System:/share /mnt
# ldmp2v prepare -vvv -b file -d /mnt/system1 -m /:15g system1
Available VCPUs: 234
Available memory: 218624 MB
Creating vdisks ...
Resizing partitions ...
Resize /
Partition(s) on disk /dev/dsk/c1t0d0 were resized, adjusting disksize ...
Creating vdisk system1-disk0 ...
Creating volume system1-vol0@primary-vds0 (75653 MB)...
Creating file //zpool_system1//system1/disk0 ...
Creating VTOC on /dev/rdsk/c6d0s2 (disk0) ...
Creating file systems ...
Creating UFS file system on /dev/rdsk/c6d0s0 ...
Creating UFS file system on /dev/rdsk/c6d0s4 ...
Creating UFS file system on /dev/rdsk/c6d0s6 ...
Creating UFS file system on /dev/rdsk/c6d0s5 ...
Creating UFS file system on /dev/rdsk/c6d0s3 ...
Populating file systems ...
Mounting /var/run/ldmp2v/system1 ...
Mounting /var/run/ldmp2v/system1/app ...
Mounting /var/run/ldmp2v/system1/export/home ...
Mounting /var/run/ldmp2v/system1/opt ...
Mounting /var/run/ldmp2v/system1/var ...
Extracting Flash archive /mnt/system1/system1.flar to /var/run/ldmp2v/system1 ...
41999934 blocks
Modifying guest OS image ...
Modifying SVM configuration ...
Modifying /etc/vfstab ...
Modifying network interfaces ...
Modifying /devices ...
Modifying /dev ...
Creating disk device links ...
Cleaning /var/fm/fmd ...
Modifying platform specific services ...
Modifying /etc/path_to_inst ...
Unmounting file systems ...
Unmounting /var/run/ldmp2v/system1/var ...
Unmounting /var/run/ldmp2v/system1/opt ...
Unmounting /var/run/ldmp2v/system1/export/home ...
Unmounting /var/run/ldmp2v/system1/app ...
Unmounting /var/run/ldmp2v/system1 ...
Creating domain ...
Attaching vdisks to domain system1 ...
Attaching volume system1-vol0@primary-vds0 as vdisk disk0 ...
Setting boot-device to disk0:a
Conversion Phase:
During the conversion phase, the logical domain uses the Solaris upgrade process to upgrade to the Oracle Solaris 10 OS. The upgrade operation removes all existing packages and installs the Oracle Solaris 10 sun4v packages, which automatically performs a sun4u-to-sun4v conversion. The convert phase can use an Oracle Solaris DVD ISO image or a network installation image. Here we used a DVD ISO image.
Before starting the conversion, it's better to stop the source system (this prevents duplicate IP addresses on the network).
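Shutting the source system down can be done in the usual way once the archives have been taken; for example, on the source system:
/*On the source system: shut down to avoid duplicate IP addresses during the conversion*/
# shutdown -y -g0 -i5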
# ldmp2v convert -i /export/home/ldoms/iso/sol-10-u9-ga-sparc-dvd.iso -d /mnt/system1/ -v system1
LDom system1 started
Waiting for Solaris to come up ...
ldmp2v: ERROR: Timeout waiting for Solaris to come up.
The Solaris install image /export/home/ldoms/iso/sol-10-u9-ga-sparc-dvd.iso cannot be booted.
For unknown reasons, I got the error highlighted above (maybe the timeout specified in ldmp2v.conf was too short). Nevertheless, the error could be ignored because the OS did start.
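If you hit the same timeout and suspect the same cause, one option (assuming the timeout really is to blame) is to raise BOOT_TIMEOUT in /etc/ldmp2v.conf before running the conversion, for example:
/*Possible workaround in /etc/ldmp2v.conf: give the guest more time to boot (value is only an example)*/
BOOT_TIMEOUT=120
In any case, the fact that the OS started can be checked by running ldm ls and connecting to the guest console with telnet (see below).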
# ldm ls
NAME      STATE    FLAGS   CONS   VCPU  MEMORY   UTIL  UPTIME
primary   active   -n-cv-  SP     16    32992M   3.5%  16d 21h 18m
system1   active   -t----  5000   2     8G       50%   19s
# telnet localhost 5002
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connecting to console "system1" in group "system1" ....
Press ~? for control options ..
Skipped interface vnet0
Reading ZFS config: done.
Setting up Java. Please wait...
Serial console, reverting to text install
Beginning system identification...
Searching for configuration file(s)...
Search complete.
Discovering additional network configuration...
Select a Language
0. English
1. Brazilian Portuguese
2. French
3. German
4. Italian
5. Japanese
6. Korean
7. Simplified Chinese
8. Spanish
9. Swedish
10. Traditional Chinese
Please make a choice (0 - 10), or press h or ? for help: 0
Note – The answers to the sysid questions are only used for the duration of the upgrade process; this data is not applied to the existing OS image on disk. The fastest and simplest way to run the conversion is to select Non-networked. The root password that you specify does not need to match the root password of the source system. The system's original identity is preserved by the upgrade and takes effect after the post-upgrade reboot. The time required to perform the upgrade depends on the Oracle Solaris cluster (software group) that is installed on the original system.
[...]
- Solaris Interactive Installation --------------------------------------------
This system is upgradable, so there are two ways to install the Solaris
software.
The Upgrade option updates the Solaris software to the new release, saving
as many modifications to the previous version of Solaris software as
possible. Back up the system before using the Upgrade option.
The Initial option overwrites the system disks with the new version of
Solaris software. This option allows you to preserve any existing file
systems. Back up any modifications made to the previous version of Solaris
software before starting the Initial option.
After you select an option and complete the tasks that follow, a summary of
your actions will be displayed.
-------------------------------------------------------------------------------
F2_Upgrade    F3_Go Back    F4_Initial    F5_Exit    F6_Help
Clustering Phase:
During the clustering phase, we'll configure the control domains of the other cluster nodes. As described in my previous post, both control domains should have the same services configured.
The virtual switch and virtual disk server were created during the installation/configuration of the control domains. So, the first thing to do at this stage is to switch over the resource group which contains the zpool and create the vds devices required on the other cluster nodes.
On the node which currently owns both the zpool and the ldom system1, stop the domain and perform the switchover of the zpool.
Node 1:
# zpool list
NAME            SIZE   ALLOC  FREE   CAP  HEALTH  ALTROOT
rpool           278G   12.8G  265G   4%   ONLINE  -
zpool_system1   99.5G  79.5K  99.5G  0%   ONLINE  /
# ldm ls
NAME      STATE    FLAGS   CONS   VCPU  MEMORY   UTIL  UPTIME
primary   active   -n-cv-  SP     16    32992M   2.9%  17d 3h 37m
system1   active   -n----  5002   2     8G       1.9%  2h 35m
# ldm stop system1
# ldm ls -l primary
NAME      STATE    FLAGS   CONS   VCPU  MEMORY   UTIL  UPTIME
primary   active   -n-cv-  SP     16    32992M   3.6%  17d 4h 26m

SOFTSTATE
Solaris running

[...]

VDS
    NAME            VOLUME              OPTIONS  MPGROUP  DEVICE
    primary-vds0    system1-vol0                          //zpool_system1//system1/disk0
                    system1-solarisdvd  ro                /export/.../sol-10-u9-ga-sparc-dvd.iso
# clrg switch -n node2 system1-rg
Node 2:
# zpool list
NAME            SIZE   ALLOC  FREE   CAP  HEALTH  ALTROOT
rpool           278G   12.8G  265G   4%   ONLINE  -
zpool_system1   99.5G  79.5K  99.5G  0%   ONLINE  /
# ldm add-vdsdev //zpool_system1//system1/disk0 system1-vol0@primary-vds0
# ldm add-vdsdev options=ro /export/home/ldoms/iso/sol-10-u9-ga-sparc-dvd.iso \
> system1-solarisdvd@primary-vds0
# ldm ls -l primary
NAME      STATE    FLAGS   CONS   VCPU  MEMORY   UTIL  UPTIME
primary   active   -n-cv-  SP     16    32992M   3.6%  17d 4h 26m

SOFTSTATE
Solaris running

[...]

VDS
    NAME            VOLUME              OPTIONS  MPGROUP  DEVICE
    primary-vds0    system1-vol0                          //zpool_system1//system1/disk0
                    system1-solarisdvd  ro                /export/.../sol-10-u9-ga-sparc-dvd.iso
# clrg switch -n node1 system1-rg
It's now time to create the SUNW.ldom resource. This resource automates the detection of hardware and software failures so that the LDOM can be started on another cluster node without human intervention.
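If the SUNW.ldom resource type has not been registered on the cluster yet, it has to be registered before the resource can be created; a minimal sketch:
/*Register the HA for Oracle VM Server for SPARC resource type (skip if already registered)*/
# clrt register SUNW.ldom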
# clrs create -g system1-rg -t SUNW.ldom -p Domain_name=system1 \
-p password_file=/migrate/noninteractive \
-p Resource_dependencies=system1-fs-rs \
-p Migration_type=NORMAL system1-ldom-rs
The parameters provided were explained in my previous post.
Here we are: the migrated system is up and running on the clustered control domains.
A failover could be initiated by running "clrg switch -n nodex system1-rg", but keep in mind that Migration_type was set to NORMAL. This means that the LDOM guest OS must be down (at the OBP prompt; don't stop the domain itself, because the SUNW.ldom agent will restart it if you do so) when a failover is initiated.
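A manual failover under Migration_type=NORMAL could therefore look roughly like the sketch below (node2 is just an example target node):
/*Inside the guest: bring the OS down to the OBP prompt*/
# init 0
/*From a control domain: switch the resource group, and thus the domain, to the other node*/
# clrg switch -n node2 system1-rg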
References:
Oracle Solaris Cluster Essentials, Prentice Hall, 2010
Comments:

Comment: Steve, very good practical example. I will use this, thank you.
Reply: You're welcome...

Comment: Steve, very good doc, thank you very much. Thanks, Kesava.

Comment: I want to do P2V for approximately 4 servers, and all these 4 servers will be on one target T5-2 server. What changes do I have to make in /etc/ldmp2v.conf? And is a Solaris ISO required in the conversion phase?
Reply: For the Solaris ISO, you should keep in mind that there's a need to convert the non-virtual architecture (sun4u) to a virtual one (sun4v). So the answer is that you need a way to perform that upgrade (an ISO/DVD is the simplest one...). The changes in ldmp2v.conf are mostly related to the target environment (how you want the created ldoms to be configured...).

Comment: What do I have to specify in the /etc/ldmp2v.conf file if I am using a raw disk (LUN) instead of a zpool/zvol? And for one server I am getting the error below:
$ sudo ldmp2v collect -d /data1/ -x /data1 -x /redsbin -x /nsm -x /data -x /data2 -x /data3
Collecting system configuration ...
Cannot determine disk slice for VxVM volume rootvol.
ldmp2v: this system can not be converted because the device for / cannot be determined.
$
Reply: Hi Harish, if you're planning to use a raw disk instead of a zpool/zvol, then I think you should have "disk" as BACKEND_TYPE and adjust BACKEND_PREFIX to match your physical disk or LUN (specify the back-end device as slice 2 of the block or character device of the disk, such as /dev/dsk/c0t3d0s2 - https://docs.oracle.com/cd/E35434_01/html/E23807/backenddevices.html). So you should have something like:
BACKEND_PREFIX="/dev/dsk/cxtxdxsx"
BACKEND_TYPE="disk"
For the VxVM rootvol, I'm not really sure that Veritas volumes are supported...
Comment: Great, thanks a lot Steve.
Reply: You're welcome...

Comment: Great work Steve, your blog hits are high on this migration tool.
Reply: Thanks, really appreciate that comment.
Comment: Yes, it is really good, with this blog I did almost 37 P2Vs.