
2014/10/28

Oracle VM for Sparc Autosave / Practical case

Have you ever faced a situation where you think you've lost your LDOMs configuration (mostly after an unexpected downtime)? If so, then you know how important the autosave feature of Oracle VM for SPARC is. Indeed, the main aim of the autosave feature is to ensure that a copy of the current configuration is automatically saved on the control domain whenever the Logical Domains configuration is changed. More importantly, this happens even when the new configuration is not explicitly saved to the SP.

Note that this doesn't replace the classic LDOMs configuration backup, which consists of saving the constraints information for each domain into an XML file (ldm list-constraints -x...), but it may supplement it when all that is needed is to recover domain configurations that weren't saved to the SP.

Enough talk! Let's move forward with this practical case. We have a system with 05 configured LDOMs, and while building them, an unexpected hardware failure occurred (:-)). The main issue is that we hadn't yet saved our LDOMs configuration on the SP. Meaning, before the outage, we had something similar to:



And after the outage, we have something like this (with maybe some other parameters not present for the primary domain...).


Let's move forward with the following 05 little steps to recover our LDOMs configuration.

1. As a matter of precaution, take a backup of the current configuration and another one of /var/opt/SUNWldm

The /var/opt/SUNWldm folder contains the Autosave directories.
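As a rough sketch (file names and backup locations below are just placeholders):

# ldm list-constraints -x > /var/tmp/all-ldoms-constraints.xml
# tar -cvf /var/tmp/SUNWldm-backup.tar /var/opt/SUNWldm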




2. List the available autosave configurations

At this stage, we should have our autosaved configuration shown as being newer than the current configuration.
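Listing is done with the ldm list-config command; an autosave configuration that is more recent than the one on the SP is normally flagged as [newer]:

# ldm list-config        (configurations saved on the SP)
# ldm list-config -r     (autosave configurations kept on the control domain)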



3. Recover the Autosave Configuration
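The recovery itself should be a single command, along these lines:

# ldm add-spconfig -r <autosave-name>              (recover under the same name)
# ldm add-spconfig -r <autosave-name> <new-name>   (or recover under a new name)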




4. Perform a full power Cycle
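Assuming an ILOM-based SP (adjust for ALOM), the power cycle would look like:

-> stop /SYS
-> start /SYS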



5. Check the LDOM Configuration 
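Back on the control domain, a quick check would be something like:

# ldm list-config
# ldm list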


2014/09/16

HP ILO / RHEL7 Systemd Output to VSP console

In my last post, I described the redirection of Linux output to serial on an Upstart-based distribution (using RHEL6). The main aim of this post is to describe the same on a systemd-based distribution (RHEL7). Before I begin with the technical matters, I have to say that I've been quite impressed by how easy systemd makes this configuration (no pain at all! So cool!). This isn't meant to be a comparison between systemd and traditional init or Upstart, but you can check these two previous posts to draw your own conclusions: VSP/Traditional Init, VSP/Upstart.
Let's now delve into the interesting matters. The whole procedure is just about setting a kernel boot option and rebooting; and if the reboot can't be performed right away, just start a systemd service.

1. Set the kernel boot options:

On RHEL7 with GRUB2, add "console=ttyS1" to GRUB_CMDLINE_LINUX in the /etc/default/grub file (you might also remove rhgb quiet, as rhgb stands for Red Hat Graphical Boot and quiet is meant to hide the majority of boot messages before rhgb starts).
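For illustration, the relevant line in /etc/default/grub would end up looking something like this (the other options are just placeholders from a default install):

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap console=ttyS1"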



Changes to /etc/default/grub require rebuilding the grub.cfg file. On a BIOS-based machine this file is located in /boot/grub2, and on a UEFI-based machine in /boot/efi/EFI/redhat/.

On BIOS Based Machine:
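The rebuild command would be along these lines:

# grub2-mkconfig -o /boot/grub2/grub.cfg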


On UEFI Based Machine:
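And for UEFI, something like:

# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg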


2. Reboot or start a serial-getty service on ttyS1

Now we can either reboot the system to have the kernel loaded with the new parameter, or (especially if we can't afford a downtime :-)) run the following to have a getty service started right away on ttyS1.
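Following the systemd serial console documentation referenced below, that would be roughly:

# systemctl start serial-getty@ttyS1.service
# systemctl enable serial-getty@ttyS1.service    (to keep it across reboots)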



That's it! So simple! Go to the console, run vsp, and there's a nice prompt...




Reference:
http://0pointer.de/blog/projects/serial-console.html

HP ILO / RHEL6 Upstart Output to VSP console

Following my post related to HP ILO VSP console redirection, I've received many comments about the same configuration on recent Linux distributions. In this post, I'm going to detail the same on an Upstart-based distribution (using RHEL6, but it should be the same for other Upstart-based distributions). In fact, the configuration on older distributions was mostly a matter of init daemon configuration; with newer distributions based on Upstart or systemd, things are slightly different.

Note that I won't detail the configuration of the Virtual Serial Port in the BIOS/UEFI as it has already been discussed in this previous post.
If you're interested in the same configuration for a systemd-based distribution, it's described in this post.

1. Create an init configuration file for ttyS1
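A minimal /etc/init/ttyS1.conf sketch (the 115200 speed and vt100 terminal type are assumptions; adjust them to your VSP settings):

# /etc/init/ttyS1.conf - agetty on the HP ILO virtual serial port
start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]
respawn
exec /sbin/agetty /dev/ttyS1 115200 vt100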



2. Check the init configuration and start running the agetty process

Upstart leverages the Linux inotify API to become aware of any changes within its configuration directory, so creating the ttyS1.conf file above is enough to have the service available and listed by initctl. The only thing we have to do is start the process.
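Something like this (the job name matches the file created above):

# initctl list | grep ttyS1
# initctl start ttyS1
# initctl status ttyS1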



3. Test that you have access to the system through vsp



4. Add Serial Port to securetty to allow login as root

This is needed if we want the root account to be able to log in through this serial console.
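A one-liner such as the following would do (checking first that the entry isn't already there):

# grep -q '^ttyS1$' /etc/securetty || echo 'ttyS1' >> /etc/securetty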



5. Configure the GRUB config file

Finally, we can configure GRUB to send the output of the boot process to the console; this is easily achieved by adding console=tty0 console=ttyS1,115200 to the kernel line.
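On RHEL6 that means editing /boot/grub/grub.conf; the kernel line would end up looking roughly like this (kernel version and root device are placeholders):

title Red Hat Enterprise Linux 6
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root console=tty0 console=ttyS1,115200
        initrd /initramfs-2.6.32-431.el6.x86_64.img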

2014/09/11

Oracle/Redhat Enterprise Linux 7 Kickstart Installation / without DHCP

RHEL7/OEL7 has been out for a few months now, with a lot of new features (revamped Anaconda, systemd...). But before really starting to enjoy all these nice features, let's perform a basic automated kickstart installation. In this short post, I'm going to describe that type of installation without relying on a DHCP server (using static network parameters). I'm also using an installation tree and a kickstart file located on httpd servers and reachable from the system I'm installing. Enough talk! Let's detail this kickstart installation in the following 4 steps:

1. Make Installation Tree available on an httpd server:

We have the RHEL7/OEL7 ISO files and an httpd server (192.168.0.10) configured (with DocumentRoot being the classic /var/www/html). Mounting the ISO as a loop device in the DocumentRoot is enough to have the installation tree available over httpd.
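As a sketch (ISO name and target directory are placeholders):

# mkdir /var/www/html/rhel7
# mount -o loop,ro /path/to/rhel-server-7.0-x86_64-dvd.iso /var/www/html/rhel7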



2. Create the Kickstart File and make it available on the httpd system:

For that, I used an anaconda-ks.cfg from another installed node as a template and created the following kickstart.
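A trimmed-down example of what such a kickstart file (olnode.ks) could look like; the URL, network, password and partitioning values are placeholders to adapt:

install
url --url=http://192.168.0.10/rhel7
lang en_US.UTF-8
keyboard us
timezone UTC
network --bootproto=static --device=eno49 --ip=192.168.0.20 --netmask=255.255.255.0 --gateway=192.168.0.1 --hostname=olnode
rootpw --plaintext changeme
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end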



Make this file available over http (copied under the httpd DocumentRoot) and test to make sure it's reachable (e.g. http://192.168.0.10/olnode.ks).

3. Start the Kickstart Installation

I'm using a UEFI system, so in order to provide the right kickstart parameters during the boot process, I select an installation option in the boot menu and then press the E key (for BIOS systems, that'll be the Tab key). A prompt is displayed which allows editing the boot options already defined and adding new ones. In this case, I'm adding the following:

inst.ks=http://192.168.0.10/olnode.ks ip=192.168.0.20::192.168.0.1:255.255.255.0:olnode:eno49:none

inst.ks specifies the location of the kickstart file, and ip sets the static network parameters; it must be of the form ip=ip::gateway:netmask:hostname:interface:none. Below are some screenshots taken to illustrate this process.
Note that parameters such as hostname and interface may be left empty.









4. Enjoy the Automation:

Once that's done, the last step is to press Ctrl-X (or Enter on a BIOS system) and enjoy the automation...







References:
http://docs.oracle.com/cd/E52668_01/E54695/html/ol7-install-boot-options.html

2014/08/25

Oracle VM for X86 / RHEL PVM Kickstart Installation without DHCP

Just a quick and short post to share a tip related to the installation of RHEL as a paravirtualized machine using kickstart (or any distribution that can use kickstart) on Oracle VM 3 for x86, without having to rely on DHCP for the network parameters. The only prerequisite is to have the kickstart file and the RHEL installation path available on the network and reachable from the Oracle VM Manager server.
Once that's done, we create our virtual server as usual, except that for Network Path we have to add more parameters. Indeed, we can use --args to add any parameters we want to pass to the kernel. So we have something like:

--args "kernel_boot_parameters" http://install_server/rhel6_4

For example, let's say that my kickstart file is located on an http server and accessible via http://install_server/my_kickstart.ks. I also need to set 192.169.0.10 as the IP of the eth1 interface in order to reach this kickstart file's URL, and the RHEL installation tree is http://install_server/rhel6_4. Then I'll have the following:

--args "ks=http://install_server/my_kickstart.ks ip=192.169.0.10 ksdevice=eth1 netmask=255.255.255.0 gateway=192.169.0.1http://install_server/rhel6_4






Note that the --args section is in quotes; this is needed to clearly draw the boundary between the kernel arguments and the installation tree's URL.

2014/08/19

Ansible Hosts / Install alternate upgraded Python version

Following the highlighted error below, which happened while trying to configure a RHEL4 node to be manageable by Ansible (and which I rightly attributed to the old Python 2.3.4, the default Python version on RHEL4), I've decided to install an alternate Python version on the same RHEL node without impacting the existing environment. Below I'm sharing the 8 steps I followed to get that working.
Note that although I'm describing this Python installation in relation to Ansible integration, steps 1 to 6 can be used for the installation of an alternate Python environment on any other Linux system.

Ansible's Error:
[stivesso@ansible-server ~]$ ansible my_ansible_node -m ping  --ask-pass
SSH password:
my_ansible_node | FAILED >> {
    "failed": true,
    "msg": "\r\nSUDO-SUCCESS-josewbugtgyoijxrgkuxihoejbjsbiuq\r\n  File \"/home/esso_s/.ansible/tmp/ansible-tmp-1406122775.05-117998270117858/ping\", line 1177\r\n    clean_args = \" \".join(pipes.quote(arg) for arg in args)\r\n
     ^\r\nSyntaxError: invalid syntax\r\n", 
    "parsed": false
}


1. Install Prerequisites on the target host
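On RHEL4 that would typically be the compiler plus a few development headers; something like the following (package names and the installer, yum versus up2date/rpm, may need adjusting on your system):

# yum install gcc make zlib-devel openssl-devel readline-devel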



2. Get an updated python package
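For example (2.7.8 was the then-current 2.7.x release; the URL follows the standard python.org layout):

# cd /usr/local/src
# wget http://www.python.org/ftp/python/2.7.8/Python-2.7.8.tgz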



3. Untar/unzip the package on the target node
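Continuing with the archive downloaded above:

# tar xzf Python-2.7.8.tgz
# cd Python-2.7.8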



4. Configure with the alternate path as option (here, I'm planning to install the alternate environment in /opt/python2.7) and compile/install
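Roughly (assuming a shared build, which is what makes the LD_LIBRARY_PATH/ldconfig steps below necessary):

# ./configure --prefix=/opt/python2.7 --enable-shared
# make
# make install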




5. Export the shared library path and the bin directory in LD_LIBRARY_PATH and PATH in the user profile
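Appended to the user's ~/.bash_profile, for example:

export PATH=/opt/python2.7/bin:$PATH
export LD_LIBRARY_PATH=/opt/python2.7/lib:$LD_LIBRARY_PATH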



6. Create an ld.so.conf configuration file and run ldconfig to have the library loaded system-wide
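A possible sketch:

# echo '/opt/python2.7/lib' > /etc/ld.so.conf.d/python27.conf
# ldconfig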



7. Add the following to /etc/ansible/hosts on the Ansible Control Machine for the target node
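The key is the ansible_python_interpreter inventory variable pointing at the new interpreter; a host entry could look like this (the group name is just a placeholder):

[rhel4_nodes]
my_ansible_node ansible_python_interpreter=/opt/python2.7/bin/python2.7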

 

8. Test the ping module again from the ansible-server...

2014/07/16

Linux KVM / Network Bridge over bonded Interface

This is a short post about KVM network bridge configuration for guest domains over a bonded interface. For this post, we're using Red Hat Enterprise Linux 6 as the KVM physical system, but the procedure can easily be adjusted for any other Linux system that supports KVM. Before delving into the technical configuration, let's have a look at what we're trying to achieve. We have a physical server with 02 physical interfaces (eth0 and eth1) and plan to use these two physical interfaces in a highly available configuration. The same interfaces are also used to support a bridge that is used by the KVM DomU (guest hosts). Below, we can see a picture describing the details of this configuration.





The first thing to do is to complete the bonding configuration, and that's pretty straightforward. We're just creating the bonding.conf file in /etc/modprobe.d (to dynamically load the bonding module; the bonding options will be set later on and must reflect one's needs), then we're editing the ifcfg-bond0 network configuration file.
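The /etc/modprobe.d/bonding.conf file would contain something as simple as:

alias bond0 bonding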


Let us now configure the bond0 interface by editing its configuration file. Note the BRIDGE setting mentioned below; it is explicitly added to link the bond interface to the bridge interface. We are also setting the bond interface parameters here.
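A sketch of /etc/sysconfig/network-scripts/ifcfg-bond0 (active-backup mode and miimon=100 are assumptions, per the note above about adjusting the bonding options):

DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br0
BONDING_OPTS="mode=active-backup miimon=100"
NM_CONTROLLED=no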


The next thing to do is to configure the bridge interface. As usual, we're creating the ifcfg-br0 file containing our network configuration (IP, DNS...). Note that, as described above, this is the interface that will carry the IP settings used to administer the physical server (this is a matter of architecture/design; on a typical production server with four interfaces, we could for example separate this management network from the guest domains' network...).
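And /etc/sysconfig/network-scripts/ifcfg-br0 would look something like this (the IP values are placeholders):

DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.0.100
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
DELAY=0
NM_CONTROLLED=no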



Finally, we can configure the physical interfaces and restart the network service (or reboot the node…)
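Each physical interface is simply enslaved to the bond; ifcfg-eth0 (and similarly ifcfg-eth1) would be along these lines, followed by a network restart:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no

# service network restart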



Check the bridge configuration; you should have output similar to the one below. The bridged interface is now ready to be used by the guest domains.
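The check itself can be as simple as:

# brctl show
# cat /proc/net/bonding/bond0    (to verify the bonding state as well)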

2014/02/03

P2V Migration From Solaris 10 Global Zone to Oracle Solaris 11 Local Zone (Solaris 10 Branded Zone)

The aim of this post is to describe a physical-to-virtual (P2V) migration of a bare-metal Solaris 10 system (a physical node) to a Solaris 10 Branded Zone on an Oracle Solaris 11 Global Zone. The scenario described below involves a Solaris 10 physical node named oranode (hosting an Oracle Database 10g and another Oracle application) and a Solaris 11 Global Zone named sol11-gz (already hosting some other local zones).
In order to make this migration easily understandable, it has been divided into 03 parts: the first part deals with the analysis of the source system, the second with system/data collection, and the third with the target zone configuration and installation.

I. Analysis of the Source System


The analysis phase aims to collect some initial data about the source system. Oracle has provided a very nice tool for that phase, namely zonep2vchk. zonep2vchk serves two functions: first, it can be used to report issues on the source which might prevent a successful P2V migration; second, it can output a template zonecfg, which can be used to assist in configuring the non-global zone target (man page).
The /usr/sbin/zonep2vchk script belongs to pkg:/system/zones on Solaris 11 and can be copied from an Oracle Solaris 11 system to an Oracle Solaris 10 system. It is not required to run this utility from a specific location on the system.
Below, we're copying this utility to our Solaris 10 node (oranode) and then running it with the option -T S11, which specifies that the target of the planned migration is running Oracle Solaris 11.
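Something along these lines (the copy path is a placeholder):

sol11-gz# scp /usr/sbin/zonep2vchk oranode:/var/tmp/
oranode# /var/tmp/zonep2vchk -T S11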



Quite verbose! Indeed, this initial analysis gives us a lot of insight for the whole P2V procedure. For example, we can see that in this case we have more than 01 ZFS zpool (besides the initial rpool). These are the file systems which host the data files of the Oracle Database 10g mentioned above.
Thus, we must decide how these application data file systems will be migrated. The strategy we're applying here is to migrate just the operating system (excluding the application data file systems) and use SAN replication for the application data. Note that this decision is based on the source system architecture (we could for example decide to go for a zfs send/receive if these data file systems were not located on external storage…).

Once done with that, we complete this analysis phase by creating a template file for the target zone creation (using the same zonep2vchk tool with the option -c). This is just a template; it is used to ease the creation of the target zone and should be adjusted to suit the requirements of the target environment.
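For example, redirecting the generated template to a file that will later be copied to the target system:

oranode# /var/tmp/zonep2vchk -c -T S11 > /var/tmp/oranode-template.cfg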


II. System/Data Collection


This phase aims to collect the source system's data (Solaris 10 system - oranode) and transfer it to the target system (Solaris 11 Global Zone - sol11-gz). As discussed during the analysis phase, I have decided to migrate system data and application data in separate processes.
For the system, I'm making use of the classic flarcreate to generate the system archive (due to the presence of data file systems that I'd like to exclude, I'm using cpio as the archiving method). The application data is migrated using external storage cloning features (not described here!).
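The archive creation would look roughly like this (the archive name, the excluded /oradata path and the output location are placeholders for this scenario):

oranode# flarcreate -n oranode-p2v -L cpio -x /oradata /var/tmp/oranode-p2v.flar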



The transfer of this archive to the target Global Zone can be done using any conventional method (scp/sftp/ftp, SAN replication...). Once completed, we can move on to the last part.

III. Target Zone Configuration and Installation


Now that the data collection phase is completed, we're going to create the target zone and start the restoration of the flar archive that was created. We'll also add the zpool datasets which were migrated using the SAN replication features.
The zone will be created using the template that was generated by the zonep2vchk tool (obviously customized to fit the target environment). Before starting, we have to create the zonepath needed for the target zone and import the zpool that was migrated through SAN replication.
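As a sketch (the dataset, zonepath and pool names are placeholders):

sol11-gz# zfs create -p -o mountpoint=/zones/oranode rpool/zones/oranode
sol11-gz# chmod 700 /zones/oranode
sol11-gz# zpool import oradata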



Let’s now configure our new local zone using the template generated by the zonep2vchk tool during the first phase.
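That would be something like:

sol11-gz# zonecfg -z oranode -f /var/tmp/oranode-template.cfg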



With the configuration done, we can start the zone installation.
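The installation takes the flar archive as input; -p preserves the system identity of the source (use -u instead to sys-unconfig the zone):

sol11-gz# zoneadm -z oranode install -a /var/tmp/oranode-p2v.flar -p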



Finally, we are adding the ZFS dataset migrated through SAN replication.
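Delegating the imported pool to the zone (pool name as above):

sol11-gz# zonecfg -z oranode
zonecfg:oranode> add dataset
zonecfg:oranode:dataset> set name=oradata
zonecfg:oranode:dataset> end
zonecfg:oranode> commit
zonecfg:oranode> exit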



And the big moment! Boot the new zone...
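Which is simply:

sol11-gz# zoneadm -z oranode boot
sol11-gz# zlogin -C oranode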

2014/01/31

Oracle Solaris 11: EMC PowerPath / powermt display dev ; Device(s) not found.

This is just a small post about Oracle Solaris 11 and EMC PowerPath configuration. If you've been trying to get EMC PowerPath configured on Solaris 11 and are facing error(s) similar to the one below, then follow the steps described below (you can also follow the same steps for an initial PowerPath configuration on Oracle Solaris 11).




1. Make sure that the zoning/LUN mapping on the SAN/storage side is correctly configured.
For the zoning, run fcinfo remote-port -p <pwwn_connected_hba> (it must list the PWWN and other details of the storage; otherwise check that the zoning is complete).
For the storage configuration, you can use fcinfo lu -v (it lists all Fibre Channel logical units and must show one or more "OS Device Name" entries; otherwise check that the mapping is properly done at the SAN level).
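The checks would look like this (the PWWN is of course specific to your HBA):

# fcinfo hba-port                              (note the local HBA port WWN)
# fcinfo remote-port -p <pwwn_connected_hba>
# fcinfo lu -v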




2. Check that Oracle Solaris mpxio is disabled.
That can easily be achieved by running stmsboot with the -L option. If Solaris I/O multipathing is not enabled, no mappings are displayed. If it is enabled, disable it by running "stmsboot -d". For example, below we can see the output with Solaris mpxio enabled (and the output printed when disabling it - and the reboot required).
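That is:

# stmsboot -L    (lists device name mappings when mpxio is enabled)
# stmsboot -d    (disables mpxio; a reboot is requested afterwards)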




3. Check that your storage's type is managed by EMC PowerPath
For that, we're using the powermt display options command. If our storage's type is listed as unmanaged (like the CLARiiON below), then simply mark it as managed (using powermt manage as seen below) and you're done!
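The sequence would be roughly:

# powermt display options
# powermt manage class=clariion
# powermt config
# powermt display dev=all
# powermt save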





We should also pay attention to /kernel/drv/iscsi.conf. In fact, though Solaris can distinguish between FC and iSCSI devices, some PowerPath versions don't make this distinction for manage and unmanage. So the mpxio-disable value must be set to yes in both the fp.conf file (for fp.conf, this is also done automatically by stmsboot -d) and the iscsi.conf file for PowerPath to manage EMC CLARiiON and VNX storage arrays.
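The relevant entry, in the same syntax as fp.conf, should be of the following form (check your PowerPath release notes before applying it):

mpxio-disable="yes";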