
2015/09/02

HP ILO VSP: CentOS 7/RHEL 7 Installation through Serial Console

Though it's feared or considered outdated by some, the command line (CLI) remains the right tool for many System Administration tasks. Among its countless advantages, there's task automation through its scriptability.
In this post, I'd like to describe the use of the HP iLO CLI utility for a complete installation of a CentOS 7/RHEL 7 distribution. Through this means, one can complete a full system installation from the CLI. Such installations are quite useful in many cases, like remote installation over low bandwidth, no need for an Advanced iLO license...
The installation described below is a CentOS 7.1 system on a ProLiant Gen8 (using iLO 4), and I'm leveraging Kickstart to automate its install process. But the same procedure can easily be adapted to other Linux distributions.

Let's start by reviewing what is needed to complete that Installation:

1. Prerequisites:

  • iLO configured with valid IP addressing parameters and reachable through SSH
  • Linux system (any distribution) with mkisofs installed
That Linux system will be used to build the custom CentOS/RHEL image; in this case, I'm also using the same system to share the ISO mentioned below (Hostname: stivinstall; IP: 192.168.1.11)
  • OS distribution media (ISO) available on the network and reachable from the iLO
As said above, I'm installing CentOS 7.1, so I shared the ISO on an http server (installed and configured on my Linux system) and checked that it's properly available (http://192.168.1.11/mnt/CentOS-7-x86_64-DVD-1503-01-text.iso)
  • (Optional) Kickstart file for automated installation
I'm leveraging Kickstart to fully automate this installation, but that is obviously optional. In this case, I made the Kickstart file available on the same system where I hosted the OS ISO (http://192.168.1.11/dladc2-infpup01.ks)

2.  Modify CentOS 7 boot image to get output on Serial Console

By default, the boot ISO of RHEL 7/CentOS 7 redirects its output to the graphical console, so it needs to be modified to have the output directed to the serial console (VSP).

Mount the CentOS ISO on the elected installation system (stivinstall).
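A likely command for this step (the ISO path is an assumption, adjust it to wherever the DVD image was downloaded):

# Mount the downloaded CentOS 7.1 DVD ISO read-only on /mnt (path assumed)
mount -o loop /root/CentOS-7-x86_64-DVD-1503-01.iso /mnt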



The main directory we'll be modifying in this ISO is isolinux, so copy this directory to a temporary writable location.
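Something along these lines (the /tmp working directory is just an example):

# Copy the isolinux directory from the read-only ISO to a writable location
mkdir -p /tmp/custom-iso
cp -rp /mnt/isolinux /tmp/custom-iso/
chmod -R u+w /tmp/custom-iso/isolinux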



As we're aiming to have everything in text (console) mode, we should first get rid of graphical features such as the splash image. So, in boot.msg, remove the ^Xsplash.lss line, and delete boot.cat (it will be re-created later).
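In practice, that is something like the following (the splash reference in boot.msg starts with a control character, so an editor works too; sed is shown here only as a sketch):

cd /tmp/custom-iso/isolinux
# Drop the line referencing splash.lss from boot.msg
sed -i '/splash.lss/d' boot.msg
# Delete boot.cat; it will be re-created when the new ISO is built
rm -f boot.cat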



Now, the main and most important modifications are to add/edit two configuration directives in isolinux.cfg:

  1. A new line with "serial 1 9600", which tells ISOLINUX to redirect its output to the serial console
  2. Append "console=ttyS1" to the kernel options (the append initrd... lines). This is a kernel option that specifies which device to use as the primary console, and it implies text mode for the installation.

Below, I've used a diff between the modified and an unmodified isolinux.cfg to highlight the modifications. Around line 54 (this can obviously be added elsewhere in the file), I added the following (the line starting with "#" is just a comment):

# Output to Serial Console ttyS1 - Stivesso
serial 1 9600


And for most of the append kernel option lines (I did this for the linux, troubleshooting and rescue entries), I appended this at the end:
console=ttyS1
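Put together, the relevant parts of the modified isolinux.cfg look roughly like this (the label stanza and the inst.stage2 value are those of a stock CentOS 7 image and may differ on other media):

# Output to Serial Console ttyS1 - Stivesso
serial 1 9600

label linux
  menu label ^Install CentOS 7
  kernel vmlinuz
  append initrd=initrd.img inst.stage2=hd:LABEL=CentOS\x207\x20x86_64 quiet console=ttyS1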



With those modifications to isolinux.cfg completed, we can recreate the ISO. But first, we mount (with the --bind option) our modified directory over the isolinux directory of the mounted image. Then we can create the ISO using mkisofs.
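The commands should look roughly like this (the output path and volume label are assumptions; see the note on -V just below):

# Overlay the modified isolinux directory on top of the mounted ISO
mount --bind /tmp/custom-iso/isolinux /mnt/isolinux
# Rebuild a bootable ISO; -V must match the inst.stage2= label
mkisofs -o /var/www/html/mnt/CentOS-7-x86_64-DVD-1503-01-text.iso \
  -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table \
  -J -R -V "CentOS 7 x86_64" /mnt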

Note that the volume ID (-V) of the ISO image must be the same as the inst.stage2= parameter in isolinux.cfg, with \x20 replaced by a space (described in BZ#915563). Otherwise, we'll face an issue during the installation and be dropped into dracut emergency mode (... Warning: /dev/root does not exist, Entering emergency mode. Exit the shell to continue.)



The following is just a way to check that the created ISO has the needed modification in isolinux.cfg.
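One simple way to do that check (loop-mounting the new image on a second mount point; paths are assumptions):

mkdir -p /mnt2
mount -o loop /var/www/html/mnt/CentOS-7-x86_64-DVD-1503-01-text.iso /mnt2
grep -E 'serial|console=ttyS1' /mnt2/isolinux/isolinux.cfg
umount /mnt2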



3.  Use ILO CLI to insert a virtual media

Once we've met the prerequisites listed above and created our modified ISO image, we can connect to the iLO (over SSH) and insert the OS distribution ISO as an iLO virtual media. This is done using the vm command; to see a full description of the vm command's options and syntax, use "help vm" as seen below.
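From the iLO SSH session, that is simply:

help vm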



Get the Status of the CDROM Virtual Media,
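On iLO 4, that should be something like:

vm cdrom get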



Insert our OS Distribution ISO Image,
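Using the URL from the prerequisites (syntax as I remember it on iLO 4):

vm cdrom insert http://192.168.1.11/mnt/CentOS-7-x86_64-DVD-1503-01-text.iso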



Connect the Inserted OS Image,
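Which should be:

vm cdrom set connect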



Set the system to boot from this image during the next reboot. For that, we can use either boot_once, to have it mounted and set as the boot drive only during the next boot, or boot_always, to have it permanently mounted and set as the boot drive.
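For example, for a one-time boot from the virtual CD-ROM:

vm cdrom set boot_once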



4.  OS Installation:

Now that we have the media inserted, we can proceed with the OS installation by powering on the server (or resetting it if it was already running) and opening the VSP (Virtual Serial Port) to complete the OS installation.
I strongly advise opening the iLO SSH console at maximum window size to make sure that the console output fits the size of the SSH console.
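The corresponding iLO CLI commands should be along these lines (power reset instead of power on if the server is already running; if I remember correctly, ESC ( returns from the VSP to the CLI):

power on
vsp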



We'll get to the following nice screen,




Press Up (to make sure the "Install CentOS 7" entry is selected) and then Tab,



From here, you can either just press Enter to proceed with an interactive install, or, if you prefer to use Kickstart (as I do :-) ), enter the appropriate options. In this case, I'm entering the following: inst.ks=http://192.168.1.10/stix-1.ks ip=192.168.1.11::192.168.1.1:255.255.255.0::ens2f0:none (more details about RHEL 7/CentOS 7 Kickstart installs in this post)
My suggestion for entering the options: "don't copy-paste, better to type..."
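For reference, after pressing Tab the full boot line ends up looking roughly like this, typed as a single line (the inst.stage2 label is the stock one and must match the -V value used with mkisofs):

> vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=CentOS\x207\x20x86_64 quiet console=ttyS1 inst.ks=http://192.168.1.10/stix-1.ks ip=192.168.1.11::192.168.1.1:255.255.255.0::ens2f0:none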





2015/07/29

Packstack Installation with Existing External Network / kickstart RHEL7 (CentOS7)

Interested in getting OpenStack up and running within a few hours in the easiest possible way? Then Packstack is most probably your friend. Indeed, Packstack is a utility which leverages Puppet modules to automatically deploy various parts of OpenStack on multiple pre-installed servers over SSH. I've been through a cycle of building/destroying my OpenStack labs using this tool, and I'm sharing below a Kickstart file which fully automates this type of installation (OpenStack using Packstack with an existing external network). The initial process on which I built this Kickstart installation is well documented here. My main aim is to be able to reference a single Kickstart file during my RHEL/CentOS 7 installation and have OpenStack installed and partly configured without manual intervention (except for a reboot that I preferred not to automate :-) ).

The same Kickstart file can also be used as a template for RHEL 7/CentOS 7 (or any other systemd-based distribution), especially if there's a need to run some script (or any other program) during the first boot of the system after the OS installation (see the %post section).
In fact, in this Kickstart file I've created a systemd service that runs only once, after the first boot, and then deletes itself.
I used to do that on previous RHEL/CentOS releases (RHEL 6...) by leveraging /etc/rc.local. Though /etc/rc.local still exists on RHEL 7, it is highly advisable to create your own systemd services or udev rules to run scripts during boot instead of using that file.

Note that after the initial reboot, the Packstack configuration progress can be followed using journalctl (journalctl -f -u packstack_installation.service).

A few things that are specific to this Kickstart:
System Hostname: stiv-opsctr01
System IP: 192.168.0.21/24
System GW: 192.168.0.1 
DNS1 IP: 192.168.0.30
DNS2 IP: 192.168.0.31
PROXY and Install Server (I'm using proxy for Internet Access) : 192.168.0.33


My Kickstart File:
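The embedded file doesn't render here, so below is only a heavily trimmed sketch of its structure (addresses and hostname are the ones listed above; the install tree path, the RDO repository URL and the packstack options are illustrative assumptions, not the full original file):

# Minimal Kickstart sketch: static network, then a one-shot systemd service in %post
install
url --url=http://192.168.0.33/centos7 --proxy=http://192.168.0.33:3128
text
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext changeme
network --bootproto=static --ip=192.168.0.21 --netmask=255.255.255.0 --gateway=192.168.0.1 --nameserver=192.168.0.30,192.168.0.31 --hostname=stiv-opsctr01
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end

%post
# One-shot systemd service that runs Packstack on first boot, then removes itself
cat > /etc/systemd/system/packstack_installation.service <<'EOF'
[Unit]
Description=Run Packstack once after the first boot
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/run_packstack.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

cat > /usr/local/sbin/run_packstack.sh <<'EOF'
#!/bin/bash
# Illustrative only: the real file also handles the existing external network options
yum -y install https://rdoproject.org/repos/rdo-release.rpm
yum -y install openstack-packstack
packstack --allinone --provision-demo=n
# Clean up so the service only ever runs once
systemctl disable packstack_installation.service
rm -f /etc/systemd/system/packstack_installation.service /usr/local/sbin/run_packstack.sh
EOF
chmod +x /usr/local/sbin/run_packstack.sh
systemctl enable packstack_installation.service
%end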




2015/07/19

Puppet puppetlabs/apache SSL Vhost with Hiera

I've been writing a bunch of Puppet modules to manage servers lately, and one of the things I figured out (somewhat painfully...) is how much I needed to separate my Puppet configuration data, which varies a lot, from my Puppet manifests, which I want to keep as static as possible. So I naturally shifted from a bunch of if-else blocks in some params.pp to a clean and efficient Hiera configuration (I read somewhere this great way of summarizing what Hiera is: Hiera lets you separate the "how" from the "what").

Anyway, moving forward in this process, I stumbled upon the re-configuration of some Apache VirtualHosts with Hiera (using the puppetlabs-apache module). As seen in the example below, I have stivesso.mysite.org configured on both http and https, with http redirected to https. This code worked well, but as explained above, my aim was to draw a clear line between what it does (configuring the http and https servers in such a way that the user always ends up on https) and the associated configuration data (stivesso.mysite.org). Succeeding in that enterprise means that, with the same code, I can configure my other vhosts in an easy and repeatable way.


class roles::myweb {

# Include Apache
    class { 'apache':
        default_vhost        => false,
    }
  
    apache::vhost { 'stivesso.mysite.org-nossl':
        servername      => 'stivesso.mysite.org',
        port            => '80',
        docroot         => '/var/www/html/stivesso',
        redirect_status => 'permanent',
        redirect_dest   => 'https://stivesso.mysite.org/'
    }

    apache::vhost { 'stivesso.mysite.org-ssl':
        servername => 'stivesso.mysite.org',
        port       => '443',
        docroot    => '/var/www/html/stivesso',
        ssl        => true,
        ssl_cert   => '/etc/pki/tls/certs/stivesso.crt',
        ssl_key    => '/etc/pki/tls/certs/stivesso.key',
    }

}


Now, using Hiera would have been quite simple if apache::vhost was a class; I'd just have added to my Hiera configuration files directives such as "apache::vhost::docroot": "/var/www/html/stivesso"...
But apache::vhost isn't a class, it is a defined type, and Puppet's data binding only works on classes. The best way to work around this and still use Hiera is to leverage the create_resources function. In fact, create_resources is a function which converts a hash into a set of resources and adds them to the catalog. It takes as arguments a resource type (which is exactly what a defined type is) and the hash that will be converted to a set of resources. Conveniently, the hiera() lookup function returns Hiera data as a hash.
Below is what my Hiera configuration looks like:


[stiv@puppetmaster ~]# cat /etc/puppetlabs/puppet/hieradata/node/myweb01.yaml
---
apache::vhost:
  'stivesso.mysite.org-ssl':
    servername: 'stivesso.mysite.org'
    serveraliases:
        - 'www.stivesso.mysite.org'
    serveradmin: 'stiv@myself.net'
    port: '443'
    docroot: '/var/www/html/stivesso'
    ssl: true
    ssl_cert: '/etc/pki/tls/certs/stivesso.crt'
    ssl_key: '/etc/pki/tls/certs/stivesso.key'
  'stivesso.mysite.org-nossl':
    servername: 'stivesso.mysite.org'
    serveraliases:
        - 'www.stivesso.mysite.org'
    serveradmin: 'stiv@myself.net'
    port: '80'
    docroot: '/var/www/html/stivesso'
    redirect_status: 'permanent'
    redirect_dest: 'https://stivesso.mysite.org/'

And my Puppet Module,


class roles::myweb {

    # Include Apache
    class { 'apache':
        default_vhost        => false,
    }
  
    # Create a hash from Hiera Data with the Vhosts
    $myApacheVhosts = hiera('apache::vhost', {})

    # create_resources converts the hash into a set of apache::vhost resources
    create_resources('apache::vhost', $myApacheVhosts)

}

Well, a clean module! The how (module) and the what (data) separated. That seems like what I wanted initially...

2015/04/25

Open Source Puppet Agent Installation on Solaris 10 without Internet

The usual process to complete the open source Puppet agent installation on Solaris 10, with its dependencies, is to use the OpenCSW packages. For that, one must first install pkgutil, which enables easy retrieval of software from the OpenCSW repositories, and use it for the automatic download/installation of OpenCSW packages.
One of the main challenges with that method is the fact that the target host needs an Internet connection, and in some environments that isn't the case for some critical systems. There are a few solutions to work around that little issue: one is to create a local OpenCSW repository (in a similar way as we did for RHEL in this post); another is to bundle the Puppet agent package and all its dependencies into one package and use that package to complete the agent installation on target nodes with no Internet, and that's the option I'm discussing in this short post.
In order to achieve that second option, we need just one Solaris system connected to the Internet. On that system, we'll install pkgutil and then create the single package that will be used on the systems with no Internet connectivity.
Let's detail that process in the following 03 short steps.

1. Install pkgutil:

The Installation of pkgutil is well described here and summarized with the 04 commands below (Again, Internet connectivity is needed on this server):
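The screenshots don't render here, but the documented bootstrap boils down to something like the following 04 commands (the last one just checks that puppet3 is visible in the catalog):

PATH=$PATH:/opt/csw/bin; export PATH
# Download and install pkgutil straight from OpenCSW (requires Internet access)
pkgadd -d http://get.opencsw.org/now
# Refresh the OpenCSW catalog, then check that the puppet3 package is available
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -a puppet3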



2. Create Solaris 10 Package using pkgutil


On OpenCSW, there are two packages for the Puppet installation: the first one is CSWpuppet (or puppet), which corresponds to the latest Puppet 2 version, while the second is CSWpuppet3 (or puppet3), which corresponds to the latest Puppet 3 version. So, choose the one you want to install; below I'm installing Puppet 3 (as my whole Puppet infrastructure is running version 3).

Note below that the target option for SPARC Solaris 10 is sparc:5.10; that will be i386:5.10 for an x86 system.
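Using the --stream/--target/--output options of pkgutil, the command should look roughly like this (options quoted from memory, check pkgutil's own help):

# Download puppet3 plus all its dependencies for SPARC Solaris 10 and
# bundle them into a single package stream file
/opt/csw/bin/pkgutil --stream --target=sparc:5.10 --output=puppet3_bundle.pkg -y -d puppet3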



The resulting Package will be placed under /var/opt/csw/pkgutil/packages.

3. Install the Package on Solaris 10 SPARC System:

For the installation, all that is needed now is to copy the package created above to the target host and install it using the classic pkgadd.
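On the target host, that is something like (the copy location is an assumption):

# Install every package contained in the bundle (no Internet needed)
pkgadd -d /var/tmp/puppet3_bundle.pkg all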



Check that the service is well running,
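The exact SMF service name depends on the CSW package, so a pattern match is the safest check:

svcs -a | grep -i puppet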



On the Puppet Master, sign the node's certificate and proceed as usual...
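With Puppet 3, that is the classic (the node name below is just a placeholder):

puppet cert list
puppet cert sign my_solaris_node.example.com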

2015/04/07

Open Source Puppet Agent Installation / Local Yum Repository for RHEL and derivatives

I've been playing with open source Puppet for a while and faced a few challenges during its deployment. Among these challenges is the installation of the open source Puppet agent on some RHEL systems. In fact, the process to install the open source Puppet agent on RHEL isn't as straightforward as on CentOS and other community forks, because RHEL repositories are divided into many channels, and to install Puppet, one of these channels (the "optional" channel) needs to be enabled (more details here). This means that both Internet connectivity and a valid RHN subscription are needed to complete the open source Puppet agent installation on a RHEL system.
Anyway, the main idea I have to work around this small challenge is to create a small internal http-based repository which contains all the packages required for the open source Puppet agent (and Facter) installation. That repository will then be used by my RHEL/CentOS nodes that aren't connected to the Internet to install the open source Puppet agent.
To achieve that, the very first thing is to download the open source Puppet agent RPM, the Facter RPM and all their dependencies into the same directory. For that, repotrack is probably one of the best tools. Indeed, this tool downloads a package and all its dependencies recursively into the same directory.
repotrack uses /etc/yum.conf as its default configuration file, meaning that it downloads packages based on the current repository settings. That's why I'm using a CentOS system (instead of RHEL) to create that local repository (CentOS only needs its main repo and the Puppet Labs repo to resolve all the dependencies related to Puppet/Facter).
Although in this post I'm using a CentOS 7 distribution to download the Puppet/Facter agent RPMs for EL7, the same process applies to lower RHEL/CentOS versions (<= 6), the only difference being that RHEL 7 is 64-bit only, and because of that no 32-bit repository is needed.
Enough talk, let's move forward with the following steps to complete that Local Repository configuration for CentOS7/RHEL7. 

1. Overview of the environment for this Post

This is just a short description of the System that'll be configured in this post,


System Name    OS          Description
my_repo        CentOS 7    Http-based repository (connected to Internet)
puppetmaster   CentOS 7    Puppet Server (connected to Internet)
my_rhel_1      RHEL 7      Puppet Client (no Internet connectivity)

2. On the CentOS7 Repository System, add the PuppetLabs Repository
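For EL7 in the Puppet 3 era, that was a single release RPM (URL valid at the time of writing):

rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm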




3.  Install Apache on this CentOS7 System and configure it accordingly (to make the RPMs available over http)


I used my Puppet Master (and Foreman) to complete this step, but a simple "yum install httpd" and httpd configuration will also be enough. The following is just for those who are interested in such a Puppet configuration. If using a classic yum installation and normal httpd configuration, then you can move forward to step 4.


On the Puppet Master Server, I'm installing Apache module
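Which is just:

puppet module install puppetlabs-apache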





Under one of the Module Path, the module directory and the manifest folder are created.
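Something like the following, letting Puppet tell us where its first modulepath entry is (module name stivrepo, as used later in this post):

cd $(puppet config print modulepath | cut -d: -f1)
mkdir -p stivrepo/manifests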



I'm then editing a webconf manifest file to configure the Apache Module (with vhost).



The init.pp for this small module is then created to include the defined webconf




As said above, I'm using Foreman to classify my Puppet nodes, so all these classes have then been imported into Foreman (Configure --> Puppet Classes --> Import from...) and the server hosting this web repository (my_repo) has then been edited to include the stivrepo module.

4. Download the CentOS7 puppet/facter Packages with all their dependencies

As discussed above, the tool used for downloading the packages and their dependencies here is repotrack; it's part of the yum-utils package (so, if not done yet, install yum-utils). Also, if 32-bit systems will use that repository (again, RHEL 7 is 64-bit only, so this comment is mostly for versions below 7), then create both i686 and x86_64 directories under the Packages directory and use the --arch option of repotrack (e.g. repotrack --arch=i686) to specify the architecture you're downloading packages for.
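With the repository living under Apache's document root (the local path below is my assumption, derived from the URL used later in this post):

# Install yum-utils if needed, then pull puppet, facter and all their dependencies
yum -y install yum-utils
mkdir -p /var/www/html/puppet_os_agent/7/x86_64/Packages
repotrack -a x86_64 -p /var/www/html/puppet_os_agent/7/x86_64/Packages puppet facter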



5. Create and configure the repository


To create the repo, I'm using the classic createrepo with the appropriate options. It is a program that creates a repomd (XML-based RPM metadata) repository from a set of RPMs. If it isn't already installed, install the package.



Create the repository Metadata
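Keeping the same assumed path:

createrepo /var/www/html/puppet_os_agent/7/x86_64/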

Check our new repository data folder and files have been created,
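For example:

ls -l /var/www/html/puppet_os_agent/7/x86_64/repodata/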



It is important to note that RHEL and CentOS have a different way of interpreting the releasever value in the yum client configuration. In fact, CentOS interprets the releasever variable as "OS major version" (like 7, 6, 5...) while RHEL interprets it as "OS major version + OS type" (7Server, 6Server, 5Server). To make sure that our repo will be reachable from both distributions, we'll add a symbolic link to the main folder.
Yum variables can be checked with the following command : "python -c 'import yum, pprint; yb = yum.YumBase(); pprint.pprint(yb.conf.yumvar, width=1)' "
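So, inside the repository top directory (same assumed path as above):

cd /var/www/html/puppet_os_agent
# Make the RHEL-style "7Server" point to the same tree as the CentOS-style "7"
ln -s 7 7Server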



At this stage, our repo is available and reachable at the http URL http://stivrepo.stivameroon.net/puppet_os_agent/7/x86_64 (or http://stivrepo.stivameroon.net/puppet_os_agent/7Server/x86_64 for RHEL Server), so what remains is to make it available on the target system (the one without Internet connectivity...)

6. Create a repository release RPM (optional):

The aim of this repository release RPM is just to make the yum configuration of the target systems easy. Indeed, we'll just have to run a normal rpm installation to complete their yum configuration. This step is optional because one can still choose to complete that configuration manually.
To create this repository release RPM , I followed this excellent tutorial. Below is just the output of the whole process




The final thing to do is make this rpm available in the right place



7. Configure Target Systems to use the new repository:

There are 02 ways to configure the target system to use this local repository for Puppet/Facter Installation. 

The first one is automatic and applies only if step 6 of this post wasn't skipped. It consists of simply installing the repository release RPM we've created.



The second one is manual and consists of adding the following file under /etc/yum.repos.d of the target System.
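A file along these lines (the repo id and file name are my own choice; $releasever works for both distributions thanks to the 7Server symlink created earlier):

# /etc/yum.repos.d/puppet_os_agent.repo
[puppet_os_agent]
name=Local Puppet Open Source Agent repository
baseurl=http://stivrepo.stivameroon.net/puppet_os_agent/$releasever/$basearch
enabled=1
gpgcheck=0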



8. Install Puppet and Facter on the Target System:

Now, we can easily install Open Source Puppet Agent and facter by simply running the following:
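Which is simply:

yum -y install puppet facter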




Just a final note: though all the dependencies for Puppet/Facter are well resolved with this method, it might happen that some packages installed on the target system depend on a package that is being upgraded for Puppet/Facter. In such cases, we might get errors similar to the one below:




In such a case, the way to work around it is simply to repotrack the package that needs the upgrade (libselinux-python in this example) into the right directory and update the repository using createrepo, as seen below:
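Sticking with the example paths used above:

# Fetch the newer libselinux-python (and anything it drags in) into the repo
repotrack -a x86_64 -p /var/www/html/puppet_os_agent/7/x86_64/Packages libselinux-python
# Refresh the repository metadata
createrepo --update /var/www/html/puppet_os_agent/7/x86_64/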




References:
https://www.digitalocean.com/community/tutorials/how-to-set-up-and-use-yum-repositories-on-a-centos-6-vps
http://sbr600blog.blogspot.com/2012/03/how-to-create-repository-release-rpm.html