Oracle Virtual Machine Server 3.2 for X86 / VM Migration from OVM 2.2

This is a short post describing a v2v (virtual-to-virtual) migration of an Oracle VM x86 guest from OVM 2.2 to OVM 3.2. I won't cover the installation of OVM 3.2 nor its configuration; I'll assume that's already done (with a valid server pool/servers/storage repositories...).
Technically, the process involves archiving the source VM (on OVM 2.2), transferring this archive to a web server (thus making it available over HTTP), importing it as a template into the destination environment (OVM 3.2), and creating a new VM based on that template. I know it seems a bit cumbersome for a simple v2v, but that's how it worked for me. So, here we go.

1. Stop the source VM and archive its folder on OVM 2.2
The source directories are located under /OVS/.../running_pool on OVM 2.2
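As a rough sketch (the VM name src_vm and the archive path are examples; on OVM 2.2 the Xen toolstack's xm command stops the guest):

```shell
# Stop the source VM cleanly (src_vm is an example name)
xm shutdown src_vm

# Archive the VM's folder from the running_pool
cd /OVS/running_pool
tar czf /tmp/src_vm.tar.gz src_vm
```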

2. Transfer this archive to a Web Server folder (or to an FTP Server)
This web server must be accessible from the Target OVM 3.2 environment.

3. Import this archive as template under the destination OVM environment

For that, from the OVM 3.2 console: go to “Repositories” / choose the target repository / “VM Templates” / click “Import VM Template”, and point it at your internal web server and file location, e.g. http://stivesso.local/src_vm.tar.gz

4. When the import job is completed, just create a new VM based on this template

You can then power on the new VM...


Linux System Recovery using Symantec Netbackup and a Linux Live-CD

The aim here is to describe a Linux system recovery using a Linux Live-CD and a Symantec NetBackup backup. Before delving into technical details, we want to highlight that Symantec NetBackup has a Bare Metal Restore feature that suits this type of recovery, but it doesn’t support every Linux OS (most of the popular ones are supported). For instance, this procedure was used to restore a complete Redhat 9 :) installation (it can also be used for a P2V migration).

Let’s check what is required for such restoration.

1.   A valid full Symantec NetBackup backup of the system we're trying to restore (I guess that one is obvious)
2.   A Linux Live-CD that can run the Symantec NetBackup client; we’re using Ubuntu 8.04 LTS in this guide (fully supported; a list of supported OSes can be found here)
3.  The installation CD of the distribution we’re restoring; as stated above, we’re restoring a Redhat 9, so a Redhat 9 boot CD is what we need in this case. The main reason for having the install CD of the distribution is to avoid incompatibilities (like creating a filesystem which isn’t supported by the maintenance tools, e.g. fsck, of the restored system)
4.  An Internet connection (it isn’t strictly mandatory, but we may need to install some packages before installing the Symantec NetBackup client)

Once these prerequisites are met, we’re ready to start our restoration. As already described above, for this post we’re using Ubuntu 8.04 LTS as the Live-CD and are restoring a Redhat 9 installation (the procedure can be easily adapted to other distributions).

Below is a step-by-step (from a to f) description.

a.     Boot on the Redhat 9 CD in rescue mode (type linux rescue at the prompt) and recreate the target filesystems (the ones which will contain the restored data). Below are the sizes we’ll create during this step.

-  65 GB
-  100 MB
-  65 GB
-  10 GB

Note that the target filesystems can be re-sized during this step (as long as we leave enough space for the restoration). It’s also important to create these filesystems using the operating system we’re going to restore (to avoid some filesystem feature incompatibilities).

We just need a shell at this stage, so we’ll choose Skip when rescue mode offers to find and mount the existing installation.

The filesystems are then created using fdisk, mkfs.ext3, mkswap… (as usual)
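For illustration, the commands could look like this (assuming the target disk is /dev/sda; the partition numbers and mount-point mapping are examples, not taken from the original system):

```shell
# Create the partitions interactively with the sizes listed above
fdisk /dev/sda

# Then build the filesystems on the new partitions (example layout)
mkfs.ext3 /dev/sda1   # e.g. /boot (100 MB)
mkfs.ext3 /dev/sda2   # e.g. / (65 GB)
mkswap    /dev/sda3   # swap
mkfs.ext3 /dev/sda4   # e.g. data partition
```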

b.     Reboot the system on the Ubuntu Live-CD and install the software required for the NetBackup client installation.
I assume that network connectivity (for Internet access and software installation) is already set up on the Ubuntu Live-CD system. The list of packages required by the NetBackup client on supported Ubuntu/Debian can be found here. For proper NetBackup client operation on Ubuntu 8.04 Server Edition (64-bit), the following packages are needed:

-  ia32-libs (in the universe repository, so the universe repository must be enabled)
-  xinetd
-  ssh-server/rsh-server (required for remote install only)

We’ve also added the following for our specific environment:

-  nfs-common (required to mount the directory where we keep the NetBackup client binaries)
-  autofs (used to automatically mount the NFS shares)

We also configure autofs to use the /net automount feature before completing the NetBackup client installation.
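On the Live-CD, the package installation and autofs setup might look like this (a sketch; run as root, and the auto.master line may differ slightly on 8.04):

```shell
# Enable universe in /etc/apt/sources.list first, then:
apt-get update
apt-get install -y ia32-libs xinetd nfs-common autofs

# Enable the built-in /net automount map (uncomment it in auto.master)
sed -i 's|^#/net|/net|' /etc/auto.master
/etc/init.d/autofs restart
```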

c.     With the prerequisites in place, we add the NetBackup server entries to the /etc/hosts file on the Ubuntu Live-CD, add the Ubuntu client entry to the NetBackup servers' hosts files, install the client and register the Ubuntu Live-CD node.

On the Ubuntu Live-CD:

On the Master/Media NetBackup servers:
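A sketch of both sides (the IP addresses, hostnames and the NFS path to the client binaries are placeholders):

```shell
# On the Ubuntu Live-CD: make the NetBackup servers resolvable
cat >> /etc/hosts <<'EOF'
192.168.1.10   nbu-master
192.168.1.11   nbu-media
EOF

# Then run the client installer from the NFS-mounted binaries (example path)
cd /net/nfsserver/netbackup/client
./install
```

On the master/media servers, the equivalent is adding the Live-CD node's address to their hosts files (e.g. 192.168.1.50 ubuntu-livecd), so the client can be registered.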

d.     We can now mount the filesystems that were previously created (step a.) in the directory where we’ll push the restoration.
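For example (the devices and mount points are hypothetical, matching the example layout of step a.):

```shell
# Mount the target root first, then the nested filesystems
mkdir -p /mnt/restore
mount /dev/sda2 /mnt/restore         # future /
mkdir -p /mnt/restore/boot
mount /dev/sda1 /mnt/restore/boot    # future /boot
```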

e.     Let the restoration begin: using the NetBackup Java console, specify as source the system to restore and as destination our Ubuntu server, choose “Restore everything to a different location” and fill in the name of the directory where / is mounted.

Choose the last full backup you want to recover from the backup history and start the restoration of /.

Uncheck  the rename soft/hard link options that are enabled by default.

f.      After the restoration, we can reboot the system and fix the OS-specific issues that will arise. In this case, we had to do the following:

1.     Reinstall GRUB. For that, we go through “linux rescue” using the Redhat 9 CD and let the rescue mode try to mount the restored system under /mnt/sysimage. If it isn’t able to, mount the filesystems manually and chroot into the mounted directory.
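From the rescue shell, this might look like the following (assuming the restored system is mounted under /mnt/sysimage and boots from /dev/sda, which is an example device):

```shell
# Work inside the restored system
chroot /mnt/sysimage

# Reinstall GRUB into the MBR of the boot disk (example device)
grub-install /dev/sda
exit
```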

2.     Modify /etc/modules.conf to add the VMware SCSI controller driver, replacing the existing scsi_hostadapter entries in /etc/modules.conf with:

alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptscsih

3.     Recreate the initrd, which will now include the correct SCSI modules
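Still chrooted in the restored system, rebuilding the initrd might look like this (2.4.20-8 is the stock Redhat 9 kernel version used here as an example; match the restored system's own kernel):

```shell
# Recreate the initrd so it picks up the mptbase/mptscsih modules
mkinitrd -f /boot/initrd-2.4.20-8.img 2.4.20-8
```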

4.     I found that my /etc/fstab was set to mount filesystems by label (e2label), so the labels were reassigned to each filesystem accordingly


Migration: From Solaris 10 Sparse Local Zone to Solaris 11 Local Zone

Let's describe the migration of a Solaris 10 sparse local zone to a Solaris 11 local zone in 9 easy steps. But before delving into the subject, it's important to recall the following.
In Solaris 10, there were two types of zones/containers we could create: the first (and the default type) is the "sparse root zone" and the second is the "whole root zone".
The main difference between these two types was that with a "sparse root zone", part of the root filesystem is shared with the global zone, while with a "whole root zone", every Solaris package is copied to the local zone's private filesystem. In Solaris 11, this distinction goes away (more details about the comparison between Solaris 11 and Solaris 10 zones can be found here). What we address below is the migration of a sparse Solaris 10 zone from a Solaris 10 global zone to a Solaris 11 global zone.

1. Let’s print the existing zone's configuration. We will need this information to recreate the zone on the destination system:
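A sketch of this step, assuming the source zone is named Source_local_zone:

```shell
# On the source Solaris 10 global zone: display the zone's configuration
zonecfg -z Source_local_zone info

# An exportable copy is handy when recreating the zone later
zonecfg -z Source_local_zone export > /tmp/Source_local_zone.cfg
```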

2. The next step is to stop the source local zone and bring it to a state where it can be archived.
The reason for stopping the zone is that we should not archive a running zone: the application or system data within the zone might be captured in an inconsistent state. Also, the zone is a sparse root zone with inherit-pkg-dir settings, so we need to bring it to the "ready" state (which makes the inherited directories available for archiving).
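A sketch, again assuming the zone is named Source_local_zone:

```shell
# Stop the zone, then bring it to the "ready" state so that the
# inherit-pkg-dir directories are mounted without anything running inside
zoneadm -z Source_local_zone halt
zoneadm -z Source_local_zone ready
```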

3. Now, we should copy /var/sadm/system/admin/INST_RELEASE from the source global zone to the source local zone, otherwise we may hit Bug 15751945 (with an error stating that the image release version must be 10 (got unknown) and that the zone is not usable on this system).

4. Let’s archive the local zone (zonepath: /zpool_slz/slz). For that, we’ll create a gzip-compressed cpio archive named Source_local_zone.cpio.gz (the local zone will still be named Source_local_zone on the target system)

N.B: the -c option of cpio avoids errors similar to “UID or GID are too large to fit in the selected header format”
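The archiving step could be sketched like this (run on the source global zone; the zonepath follows the post, the archive destination is an example):

```shell
# Archive the zonepath contents as a gzip-compressed cpio archive
# (-c writes portable ASCII headers, avoiding the UID/GID header error)
cd /zpool_slz/slz
find root -print | cpio -oc | gzip > /tmp/Source_local_zone.cpio.gz
```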

5. Transfer the archive to the target Oracle Solaris 11.1 system, using any file transfer mechanism; we’ll use sftp here.

6. On the target system (DEST_GLOBAL_ZONE), create the target zone. Below, we're using a configuration file that's sourced for the creation of the zone.
Note – The zone's brand must be solaris10, and the zone cannot use any inherit-pkg-dir settings, even if the original zone was configured as a sparse root zone.
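A minimal configuration file might look like this (the zonepath, network interface and address are examples; note the solaris10 brand and the absence of inherit-pkg-dir):

```shell
# Source_local_zone.cfg -- example zone configuration
cat > /tmp/Source_local_zone.cfg <<'EOF'
create -b
set brand=solaris10
set zonepath=/zpool_slz/Source_local_zone
set autoboot=false
set ip-type=shared
add net
set physical=net0
set address=192.168.1.60/24
end
EOF

# Source it to create the zone on the Solaris 11 global zone
zonecfg -z Source_local_zone -f /tmp/Source_local_zone.cfg
```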

7. Display the new local zone's configuration:

8. Install the zone from the archive that was created on the source system, the archive having been transferred into the /zpool_slz directory on the destination system

N.B: Again, you might get an error here stating that “The image release version must be 10 (got unknown)” if you haven’t copied the /var/sadm/system/admin/INST_RELEASE file into the source local zone as specified above.
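The install command might look like this (-p preserves the zone's system identity; -u would sys-unconfig it instead, so pick whichever fits the migration):

```shell
# Install the solaris10-branded zone from the transferred archive
zoneadm -z Source_local_zone install -a /zpool_slz/Source_local_zone.cpio.gz -p
```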

9. Once the zone installation has completed successfully, the zone is ready to boot.
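Finally (zlogin -C attaches to the zone's console, handy for watching the first boot):

```shell
zoneadm -z Source_local_zone boot
zlogin -C Source_local_zone
```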

Migration Completed....