
2015/04/25

Open Source Puppet Agent Installation on Solaris 10 without Internet

The usual way to complete the Solaris 10 Open Source Puppet agent installation, together with its dependencies, is to use the OpenCSW packages. For that, one must first install pkgutil, which makes it easy to retrieve software from the OpenCSW repositories and automates the download and installation of OpenCSW packages.
One of the main challenges with that method is that the target host needs an Internet connection, which isn't the case for some critical systems in certain environments. There are a few ways to work around this issue: one is to create a local OpenCSW repository (in a similar way as we did for RHEL in this post); another is to bundle the Puppet agent package and all of its dependencies into a single package and use that package to complete the agent installation on target nodes with no Internet access. That second option is the one I'll discuss in this short post.
To achieve it, we only need one Solaris system connected to the Internet. On that system, we'll install pkgutil and then create the single package that will be used on the systems with no Internet connectivity.
Let's detail that process in the following 03 short steps.

1. Install pkgutil:

The installation of pkgutil is well described here and is summarized in the 04 commands below (again, Internet connectivity is needed on this server):
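For reference, a minimal sketch of that bootstrap (the exact commands may differ slightly from the original screenshot):

    # pkgadd -d http://get.opencsw.org/now
    # /opt/csw/bin/pkgutil -U
    # /opt/csw/bin/pkgutil -a puppet3

The first command installs pkgutil itself from the OpenCSW bootstrap URL, the second refreshes the catalog, and the third simply checks that the puppet3 package is visible in the catalog.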



2. Create Solaris 10 Package using pkgutil


OpenCSW provides two packages for Puppet: CSWpuppet (or puppet), which corresponds to the latest Puppet 2 release, and CSWpuppet3 (or puppet3), which corresponds to the latest Puppet 3 release. Choose the one you want to install; below I'm packaging Puppet 3 (as my whole Puppet infrastructure is running version 3).

Note below that the target option for SPARC Solaris 10 is sparc:5.10; it would be i386:5.10 for an x86 system.
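A hedged sketch of the packaging command for a SPARC target (verify the flag spelling against your pkgutil version):

    # /opt/csw/bin/pkgutil --stream --target=sparc:5.10 -y -d puppet3

Here --stream builds a single package stream from the downloaded packages, --target forces the architecture/OS pair, -y answers yes to prompts and -d downloads without installing.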



The resulting Package will be placed under /var/opt/csw/pkgutil/packages.

3. Install the Package on Solaris 10 SPARC System:

For the installation, all that is needed now is to copy the package created above to the target host and install it with the classic pkgadd.
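For example (the stream file name below is illustrative):

    # pkgadd -d /var/tmp/puppet3-bundle.pkg all

The "all" keyword tells pkgadd to install every package contained in the stream, i.e. puppet3 plus all of its bundled dependencies.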



Check that the service is running properly:
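For example (the exact SMF service name depends on the CSW package):

    # svcs -a | grep -i puppet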



On the Puppet Master, sign the node's certificate and proceed as usual...

2015/04/07

Open Source Puppet Agent Installation / Local Yum Repository for RHEL and derivatives

I've been playing with Open Source Puppet for a while and faced a few challenges during its deployment. Among them is the installation of the Open Source Puppet agent on some RHEL systems. In fact, the process isn't as straightforward as on CentOS and other community forks because RHEL repositories are divided into several channels, and to install Puppet one of them (the "optional" channel) needs to be enabled (more details here). This means that both Internet connectivity and a valid RHN subscription are needed to complete the Open Source Puppet agent installation on a RHEL system.
Anyway, the idea I had to work around this small challenge is to create a small internal HTTP-based repository which contains all the packages required for the Open Source Puppet agent (and facter) installation. That repository will then be used by my RHEL/CentOS nodes that aren't connected to the Internet.
To achieve that, the very first thing is to download the Open Source Puppet agent RPM, the Facter RPM and all of their dependencies into the same directory. For that, repotrack is probably one of the best tools: it downloads a package and all of its dependencies, recursively, into a single directory.
Repotrack uses /etc/yum.conf as its default configuration file, meaning that it downloads packages based on the current repository settings. That's why I'm using a CentOS system (instead of RHEL) to build this local repository: CentOS only needs its main repo and the PuppetLabs repos to resolve all the dependencies related to Puppet/Facter.
Although I'm using a CentOS 7 distribution in this post to download the puppet/facter RPMs for EL7, the same process applies to older RHEL/CentOS versions (<=6); the only difference is that RHEL7 is 64-bit only, so no 32-bit repository is needed for it.
Enough talk, let's move forward with the following steps to complete that Local Repository configuration for CentOS7/RHEL7. 

1. Overview of the environment for this Post

This is just a short description of the systems that'll be configured in this post:


System Name     OS         Description
my_repo         CentOS 7   HTTP-based repository (connected to Internet)
puppetmaster    CentOS 7   Puppet Server (connected to Internet)
my_rhel_1       RHEL 7     Puppet client (no Internet connectivity)

2. On the CentOS7 Repository System, add the PuppetLabs Repository
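A sketch of that step for EL7 (double-check the exact release RPM URL for your Puppet version):

    # rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm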




3. Install Apache on this CentOS7 System and configure it accordingly (to make the RPMs available over HTTP)


I used my Puppet Master (and Foreman) to complete this step, but a simple "yum install httpd" plus the corresponding httpd configuration is also enough. The following is just for those interested in such a Puppet configuration; if you're using a classic yum installation and a plain httpd configuration, you can skip ahead to Step 4.


On the Puppet Master server, I'm installing the Apache module.
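Assuming the puppetlabs-apache module from the Puppet Forge, a sketch of that installation:

    # puppet module install puppetlabs-apache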





Under one of the module paths, the module directory and its manifests folder are created.
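For example, assuming the default Puppet 3 modulepath and the stivrepo module name used later in this post:

    # mkdir -p /etc/puppet/modules/stivrepo/manifests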



I'm then editing a webconf manifest file to configure the Apache module (with a vhost).
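A minimal sketch of such a webconf.pp, assuming the puppetlabs-apache module; the vhost name and document root below are illustrative:

    # /etc/puppet/modules/stivrepo/manifests/webconf.pp
    class stivrepo::webconf {
      include ::apache

      # Serve the repository directory over plain HTTP
      apache::vhost { 'stivrepo.stivameroon.net':
        port    => '80',
        docroot => '/var/www/html',
      }
    }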



The init.pp for this small module is then created to include the webconf class defined above.
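And the matching init.pp, again just a sketch:

    # /etc/puppet/modules/stivrepo/manifests/init.pp
    class stivrepo {
      include stivrepo::webconf
    }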




As said above, I'm using Foreman to classify my Puppet nodes, so all these classes were then imported into Foreman (Configure --> Puppet Classes --> Import from...) and the server hosting this web repository (my_repo) was edited to include the stivrepo module.

4. Download the CentOS7 puppet/facter Packages with all their dependencies

As discussed above, the tool used here to download the packages and their dependencies is repotrack, which is part of the yum-utils package (so install yum-utils if it isn't present). Also, if 32-bit systems will use this repository (again, RHEL7 is 64-bit only, so this mostly applies to versions below 7), create both i686 and x86_64 directories under the packages directory and use repotrack's --arch option (e.g. repotrack --arch=i686) to specify the architecture you're downloading packages for, as shown in the sketch below.
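A sketch of that download step, assuming the repository tree lives under Apache's default document root (the paths are my assumption; adjust them to your layout):

    # yum install -y yum-utils
    # mkdir -p /var/www/html/puppet_os_agent/7/x86_64
    # repotrack -a x86_64 -p /var/www/html/puppet_os_agent/7/x86_64 puppet facter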



5. Create and configure the repository


To create the repo, I'm using the classic createrepo with the appropriate options. It is a program that creates a repomd (XML-based RPM metadata) repository from a set of RPMs. If it isn't already installed, install the package.
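For example:

    # yum install -y createrepo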



Create the repository Metadata
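Assuming the download directory used above:

    # createrepo /var/www/html/puppet_os_agent/7/x86_64/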

Check that our new repository data folder and files have been created:
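For example:

    # ls -l /var/www/html/puppet_os_agent/7/x86_64/repodata/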



It is important to note that RHEL and CentOS interpret the releasever value differently in the yum client configuration. CentOS interprets the releasever variable as the OS major version (7, 6, 5...), while RHEL interprets it as the OS major version plus the OS type (7Server, 6Server, 5Server). To make sure our repo is reachable from both distributions, we'll add a symbolic link next to the main folder.
Yum variables can be checked with the following command : "python -c 'import yum, pprint; yb = yum.YumBase(); pprint.pprint(yb.conf.yumvar, width=1)' "
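A sketch of that symbolic link, using the directory layout assumed above:

    # cd /var/www/html/puppet_os_agent
    # ln -s 7 7Server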



At this stage, our repo is available and reachable at http://stivrepo.stivameroon.net/puppet_os_agent/7/x86_64 (or http://stivrepo.stivameroon.net/puppet_os_agent/7Server/x86_64 for RHEL Server), so all that remains is to make it usable on the target systems (the ones without Internet connectivity...).

6. Create a repository release RPM (optional):

The aim of this repository release RPM is simply to make the yum configuration of the target systems easy: we'll just have to run a normal rpm installation to complete their yum configuration. This step is optional because one can still choose to do that configuration manually.
To create this repository release RPM, I followed this excellent tutorial. Below is just the output of the whole process.
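For reference, here is a minimal sketch of what such a spec file can look like (all names and values below are illustrative; the linked tutorial covers the details):

    Name:       puppet-os-agent-repo
    Version:    1.0
    Release:    1
    Summary:    Yum configuration for the internal puppet_os_agent repository
    License:    GPLv2
    BuildArch:  noarch
    Source0:    puppet_os_agent.repo

    %description
    Installs the puppet_os_agent.repo file under /etc/yum.repos.d.

    %install
    mkdir -p %{buildroot}%{_sysconfdir}/yum.repos.d
    install -m 0644 %{SOURCE0} %{buildroot}%{_sysconfdir}/yum.repos.d/puppet_os_agent.repo

    %files
    %{_sysconfdir}/yum.repos.d/puppet_os_agent.repo

With the .repo file placed under ~/rpmbuild/SOURCES and this spec under ~/rpmbuild/SPECS, running rpmbuild -bb against it should produce the RPM under ~/rpmbuild/RPMS/noarch.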




The final thing to do is to make this RPM available in the right place.
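For example (the RPM name matches the illustrative spec above and the path the assumed repository root):

    # cp ~/rpmbuild/RPMS/noarch/puppet-os-agent-repo-1.0-1.noarch.rpm /var/www/html/puppet_os_agent/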



7. Configure Target Systems to use the new repository:

There are 02 ways to configure the target system to use this local repository for Puppet/Facter Installation. 

The first one is automatic and applies only if Step 6 of this post wasn't skipped. It consists of simply installing the repository release RPM we've created.
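A sketch, reusing the illustrative RPM name from Step 6 and the repository URL from Step 5:

    # rpm -ivh http://stivrepo.stivameroon.net/puppet_os_agent/puppet-os-agent-repo-1.0-1.noarch.rpm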



The second one is manual and consists of adding the following file under /etc/yum.repos.d on the target system.
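A sketch of such a file (e.g. /etc/yum.repos.d/puppet_os_agent.repo), pointing to the URL from Step 5; gpgcheck is disabled here as an assumption, since the local packages aren't signed with our own key:

    [puppet_os_agent]
    name=Internal Puppet Open Source agent repository
    baseurl=http://stivrepo.stivameroon.net/puppet_os_agent/$releasever/$basearch/
    enabled=1
    gpgcheck=0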



8. Install Puppet and Facter on the Target System:

Now, we can easily install Open Source Puppet Agent and facter by simply running the following:
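For example:

    # yum install puppet facter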




Just a final note: although all the dependencies for Puppet/Facter are resolved with this method, it can happen that some packages already installed on the target system depend on a package that gets upgraded for Puppet/Facter. In that case, we may hit errors similar to the one below:




In such a case, the workaround is simply to repotrack the package that needs the upgrade (libselinux-python in this example) into the right directory and update the repository with createrepo, as seen below:
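A sketch, using libselinux-python as in the example and the directory layout assumed earlier:

    # repotrack -a x86_64 -p /var/www/html/puppet_os_agent/7/x86_64 libselinux-python
    # createrepo --update /var/www/html/puppet_os_agent/7/x86_64/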




References:
https://www.digitalocean.com/community/tutorials/how-to-set-up-and-use-yum-repositories-on-a-centos-6-vps
http://sbr600blog.blogspot.com/2012/03/how-to-create-repository-release-rpm.html

2015/02/20

Oracle VM 3 for X86 / Backend Storage Migration

I had to migrate a bunch of OVM3 x86 clustered Server Pools and their VMs (Virtual Machines) from one storage array to another (from EMC Clariion to EMC Symmetrix VMAX) and decided to document what I did to complete such a migration.

Before getting into the technical details, let me first explain why such a migration path isn't really straightforward. In a clustered configuration, Oracle VM needs at least 02 types of shared disks to work:
  1. The first type of shared disk is used by the Server Pool; it contains the file system that stores all the cluster-related configuration. Note that only one disk of this type is required per Server Pool. I refer to that file system below as the Server Pool Filesystem.
  2. The second type of shared disk is used as a repository for the Oracle VM system; it contains everything related to the VMs (virtual disks, VM configurations, ISOs...). There can be many disks of this type per Server Pool. I refer to that type of shared disk below as a Storage Repository.

As you might already have guessed, migrating the data on the second type of disk isn't the main challenge here. The main challenge is the first type, mainly because there's no clearly documented procedure to migrate the Server Pool Filesystem.

In the first section below, I'll describe the current environment, that is, the environment I'm migrating from the old backend storage (EMC Clariion) to the new backend storage (EMC Symmetrix). In the second section, I'll deal with the Storage Repository migration, and in the third section I'll describe what I see as the main challenge: the Server Pool Filesystem migration.

1. Environment Description:

The Server Pool we're migrating here is made of 02 HP ProLiant nodes in a clustered configuration. Its pool file system is connected through a Fibre Channel SAN and has a size of 15 GB. The pool name is HP_Bay2_bay3_Pool and the 02 OVM servers are named ovms-bay2 and ovms-bay3. Below is a screenshot of its Info section in OVM Manager.




In the following screenshots, we see that there are 03 VMs (stiv-vm1, stiv-vm2 and stiv-vm3) running on these systems. The three VMs are hosted on 02 Storage Repositories (DC1_HP_HA_TEST-DEV_repo1 & DC1_HP_HA_TEST-DEV_repo2).







The following table summarizes what we'll be doing during this migration; my suggestion is to always keep such a table during this type of migration.


Type                  Source                          Destination
Server Pool           HP_bay2_Bay3_pool               DC1_VMAX_HP_Bay2_Bay3_pool
Storage Repository    DC1_HP_HA_TEST-DEV_repo1        DC1_VMAX_HP_Bay2_Bay3_repo1
Storage Repository    DC1_HP_HA_TEST-DEV_repo2        DC1_VMAX_HP_Bay2_Bay3_repo2
VMs to Migrate        stiv-vm1; stiv-vm2; stiv-vm3


2. Storage Repository Migration:

The strategy used for the Storage Repository migration is to create new Storage Repositories backed by the new storage and migrate the VMs and their data to them.
So the obvious first step here is to allocate LUNs from the new storage to our clustered nodes. In this case, we have 02x1TB Storage Repositories on the source storage (EMC Clariion), so we'll create the same on the target storage (EMC Symmetrix VMAX). We'll also add a small 15 GB LUN that'll be used in the next section (Server Pool migration). Here is a screenshot of these FC LUNs after their SAN/storage allocation.



Just to have these new LUNs clearly identified, I've renamed them with the friendly names shown below.



Once these LUNs are allocated, we'll create 02 new Storage Repositories on them and assign them to the source Server Pool. Below is the screenshot of the creation of one of these 02 Storage Repositories.










Now that the target Storage Repositories have been created, we can migrate each VM to them using the Clone or Move option as shown below. Note that this procedure requires the machine to be halted during the migration.
Also, to avoid trouble during the Server Pool Filesystem migration, we must move all files (including the virtual machine configuration file) related to a single VM to the same Storage Repository (avoiding errors like "com.oracle.ovm.mgr.api.exception.RuleException: OVMRU_002061E Cannot remove clustered file system: ..., from cluster: ... Virtual machine: ..., contains ..., which is in the other clustered file system.")









Note that the Target Repository on the "Clone or Move Virtual Machine" screen below is the target location of the virtual machine configuration file.



Once all the VMs are migrated from the old Storage Repository(s), we should un-present the old Repository(s) from the Server Pool. For that, we just have to select the Storage Repository and use "Present-Unpresent Selected Repos" as seen below. This is mostly to make sure that the old Repository(s) no longer holds live data.







3. Server Pool Filesystem Migration:

With the data in the Storage Repositories migrated, we can move to the next step, the migration of the Server Pool Filesystem. As previously said, there's no direct migration path for this part, so we're using a trick which consists of creating a new Server Pool with a new pool file system and then moving the existing physical OVM servers to this new Server Pool.
This means the first step is the creation of a new Server Pool. Note that an additional virtual IP is needed during this Server Pool creation. The new Server Pool is created with no physical OVM servers added to it.






Having now a brand new target Server Pool, we can go through the following 08 steps to complete the migration; note that it's better to have the VMs stopped during these steps.

3.1: Un-present one of the clustered servers from the new Storage Repository

For that, we edit the new repository as shown below. The same must be done on all the other new Storage Repositories.




3.2: Assign the new Server Pool to the new Storage Repository(s)

This is done by editing the new Repository(s) and changing the Server Pool to the new one. Again, it must be done for all the new Repositories.
As said in the previous section, if part of the configuration (even a simple CD/DVD) of one of the VMs isn't in the same repository, this action might fail.
So, again, if we face an error similar to the following ("com.oracle.ovm.mgr.api.exception.RuleException: OVMRU_002061E Cannot remove clustered file system: ..., from cluster: ... Virtual machine: ..., contains ..., which is in the other clustered file system."), we must make sure that all the configuration/files related to a VM are part of the same repository.





3.3 Delete the old Storage Repository


This deletion is needed to complete Step 3.4, which removes all servers from the old clustered Server Pool. As no server is available to perform that request on the old Storage Repository, we must re-assign at least one of our servers to complete this task. Note that the Storage Repository must be emptied before deletion.





3.4 Remove all the clustered servers from the old Server Pool

After this step, the old Server Pool will have no OVM servers assigned to it, and the OVM servers which were assigned to the pool will show up under "Unassigned Servers". This is done by editing the old Server Pool.




3.5 Add these servers to the new Server Pool

Edit the new Server Pool and, under the Servers tab, select the OVM servers that were removed in the previous step.




3.6 Present the new Storage Repository(s) to the new Server Pool

Edit the new Storage Repository(s), select the new Server Pool, and move it (using ">") to the "Present to Server Pool(s)" list.





3.7 Power on and check your VMs

3.8 Delete the old Server Pool (and, if needed, disconnect the nodes from the old storage)



References:

I'd like to highlight that I relied heavily on this document, which was written by a Dell engineer who performed a similar migration on Dell storage.