
2017/09/14

MongoDB MongoD Replica Set Installation and Configuration with Puppet and Docker

One of the key benefits of containerization is the ability to rapidly implement and scale setups that were once deemed complex. For example, implementing a MongoDB MongoD Replica Set used to take hours; with Docker, that implementation time drops to less than five minutes. Coupled with a configuration management tool like Puppet, the complexity boils down to modifying a simple set of variables in a YAML file, and the setup can be replicated with no effort.
Below I describe one approach that leverages both Docker and Puppet to achieve a good level of automation when implementing a MongoDB Replica Set.
As usual, to make the technical implementation details easier to digest, I'm making use of a set of well-known Puppet best practices, with emphasis on the following:
  • Roles and Profiles Pattern
  • Hiera for Configuration Data
The post is divided in the following sections:
  1. A few words on the Architecture
  2. Puppet Module Installation and other prerequisites
  3. Profile Module creation
  4. Hiera Configuration
  5. Roles Configuration
  6. MongoDB Replica Set Additional Configuration
   I. A few words on the Architecture:

A picture is worth a thousand words, and that proves especially true for technology architecture. So let's first see what the implementation looks like:

Dockerized MongoDB Replica Set

We have two physical nodes and one virtual system (it doesn't absolutely need to be a virtual system; the emphasis is simply on the fact that this node is a small one that only acts as the MongoDB arbiter).
The two physical nodes serve as Docker hosts. Each Docker host runs several distinct MongoDB Docker instances, and each instance is published on a particular TCP port (27017, 27018, 27019...). The datadir (/data/db) of each MongoDB instance is a Docker volume (mapped to a Docker host directory, which I usually create as a distinct LVM Logical Volume).
To make containers easy to identify, each container gets a name built by concatenating the physical node's hostname and the container's port ("${hostname}-p${public_port}", e.g. node1-p27017 for the physical node whose hostname is node1).
The complete Docker installation and initial configuration is handled by Puppet and covered in Sections II-V of this post. The last section (Section VI) covers the initialization of the Replica Set using the mongo shell.

Arbiters are mongod instances that are part of a replica set but do not hold data. Arbiters participate in elections in order to break ties. If a replica set has an even number of members, add an arbiter.


   II. Puppet Module Installation and other prerequisites:

To complete this implementation, we only need to install the garethr/docker module. I am also adding puppetlabs/lvm, as I prefer to map Docker volumes to LVM Logical Volumes, but this isn't mandatory. As usual, the modules can be installed either on the command line or through the Puppetfile.

[root@pe-master ~] # puppet module install garethr-docker
[root@pe-master ~] # puppet module install puppetlabs-lvm

or simply by adding the following to Puppetfile,


mod 'garethr-docker'
mod 'puppetlabs-lvm'
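
If you do opt in to LVM, the logical volumes backing the MongoDB data directories can be declared entirely in Hiera and consumed by the puppetlabs/lvm module. Below is a minimal sketch (the device name /dev/sdb and the sizes are assumptions; the mount paths match the volumes used in the Hiera section of this post):

lvm::volume_groups:
  dockervg:
    physical_volumes:
      - /dev/sdb
    logical_volumes:
      docker_db_p27017:
        size:      100G
        mountpath: /var/docker_db/db_p27017
      docker_db_p27018:
        size:      100G
        mountpath: /var/docker_db/db_p27018
      docker_db_p27019:
        size:      100G
        mountpath: /var/docker_db/db_p27019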

   III. Profile Module Creation:

The profile is really where most of the code is written. In fact, the profile code shared below acts as the glue between all these technologies (Docker, MongoDB and, optionally, LVM) and provides the level of automation needed.

Note that this profile can easily be adapted for a plain Docker installation, and that I am using host as the network mode for Docker.



class profiles::mongodb_docker
(
  $mongo_version            = "3.4",          # mongo Docker image tag, defaults to 3.4
  $is_using_lvm             = false,  # Whether or not LVM Module should be included
)
{

  ### Add the needed Modules and Docker Images
  include 'docker'
  docker::image { 'mongo':
    image_tag => $mongo_version,
  } 

  # Include LVM if needed 
  # Note that all the needed parameters must be defined in Hiera
  if $is_using_lvm { include ::lvm }
  
  # Create a hash from Hiera Data with the Docker Run
  $myDockerContainer = hiera('docker::run', {})
  
  # Looping through container
  $myDockerContainer.each |$container_name, $container_param| {

    ### Get the must have parameters - variables , for now ports only
    if !has_key($container_param, 'extra_parameters') {
      fail('Need to have extra_parameters --publish xxxx:27017 set for this to work properly')
    }
    $extra_params_publish = grep($container_param[extra_parameters], 'publish')
    if empty($extra_params_publish) {
      fail('Need to have extra_parameters --publish xxxx:27017 set for this to work properly')
    }

    ### Create the Container Needed Directory for Volumes
    if has_key($container_param, 'volumes') {
      $container_param[volumes].each |$volume_hiera| {
        $mydir = split("${volume_hiera}", ':')[0]
        file { "${mydir}":
          ensure   => directory,
        }
      }
    }
    
    ### Create the Container Hostname
    # Form is <hostname>-p<port>; e.g: dladc2-infdoc01-p27017
    $port_mapping         = split("${extra_params_publish}", Regexp['[\s+]'])[1]
    $public_port          = split("${port_mapping}", ':')[0]
    $docker_hostname      = "${hostname}-p${public_port}"
    
    ## Docker Container Run Configuration
    # Then run the Containers
    docker::run{$container_name:
      * => {
           hostname   =>  $docker_hostname,
           net        => 'host',
           } + $container_param # Merging with container parameters
    }
  }
}


   IV. Hiera Configuration:

As noted above, I'm exclusively using Hiera to store the nodes' configuration data. The configuration below sets up three MongoD instances running on three different ports, with the following parameters all specified:


  • logpath: the file where mongod writes its logs
  • port: the port on which mongod listens for requests
  • dbpath (volumes): where mongod stores all its data (mapped to a volume on the physical node)
  • replSet: the name of the replica set
  • wiredTigerCacheSizeGB: the maximum size of the internal cache that WiredTiger will use for all data


---
docker::run: 
  'mongod_db27017':
    image            : 'mongo'
    command          : 'mongod --replSet rs1 --port 27017 --shardsvr --logpath /var/log/mongodb/mongod.log --wiredTigerCacheSizeGB 100'
    volumes          :
      - "/var/docker_db/db_p27017:/data/db"
    extra_parameters :
      - '--publish 27017:27017'
      - '--restart=always'
    memory_limit     : '128g'
  'mongod_db27018':
    image            : 'mongo'
    command          : 'mongod --replSet rs2 --port 27018 --shardsvr  --logpath /var/log/mongodb/mongod.log --wiredTigerCacheSizeGB 100'
    volumes          :
      - "/var/docker_db/db_p27018:/data/db"
    extra_parameters :
      - '--publish 27018:27018'
      - '--restart=always'
    memory_limit     : '128g'
  'mongod_db27019':
    image            : 'mongo'
    command          : 'mongod --replSet rs3 --port 27019 --shardsvr  --logpath /var/log/mongodb/mongod.log --wiredTigerCacheSizeGB 100'
    volumes          :
      - "/var/docker_db/db_p27019:/data/db"
    extra_parameters :
      - '--publish 27019:27019'
      - '--restart=always'
    memory_limit     : '128g'
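
For reference, the first entry above corresponds roughly to the following docker run invocation (a sketch, not the exact command line that garethr/docker generates; note that with host networking the --publish flag has no effect on Docker itself, the profile only parses it to derive the public port and the container hostname):

docker run -d --net=host --restart=always --memory 128g \
  --name mongod_db27017 --hostname node1-p27017 \
  --volume /var/docker_db/db_p27017:/data/db \
  mongo:3.4 \
  mongod --replSet rs1 --port 27017 --shardsvr \
    --logpath /var/log/mongodb/mongod.log --wiredTigerCacheSizeGB 100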


   V. Roles Configuration:

Using the profile described in the section above, with the right Hiera data, we can move forward with the roles module configuration:

class roles::containers_ha_02  {
 
    # Install Docker and MongoDB Instances
    include profiles::mongodb_docker

}
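
To wire things up, the role just needs to be assigned to the Docker hosts, for instance through a site.pp node definition; a minimal sketch (the node names are assumptions, and an External Node Classifier works just as well):

node 'node1.example.com', 'node2.example.com', 'node3.example.com' {
  include roles::containers_ha_02
}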

   VI. MongoDB Replica Set Additional Configuration:


Once the roles/profiles (and the correct Hiera data shown above) are applied, we end up with 3 mongo containers per Docker host, all running with host networking as configured in the profile.

Assuming that our Docker hosts are named node1, node2 and node3, and following the Puppet code in the profile, our container and replica set names will be (to make things easier, I recommend adding these names to DNS; a /etc/hosts sketch follows the list):
  • On node1: node1-p27017 (rs1), node1-p27018 (rs2), node1-p27019 (rs3)
  • On node2: node2-p27017 (rs1), node2-p27018 (rs2), node2-p27019 (rs3)
  • On node3: node3-p27017 (rs1), node3-p27018 (rs2), node3-p27019 (rs3)
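
Since the containers use host networking, each container name simply has to resolve to its Docker host's IP address; the port is what distinguishes the instances. A hypothetical /etc/hosts equivalent (the IP addresses are assumptions, and proper DNS records are the cleaner option):

192.168.10.21  node1 node1-p27017 node1-p27018 node1-p27019
192.168.10.22  node2 node2-p27017 node2-p27018 node2-p27019
192.168.10.23  node3 node3-p27017 node3-p27018 node3-p27019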

As described in the first section, we can reach any of these MongoD instances from our network using the mongo shell, if we need to (you will have to install the MongoDB client on your own machine to do this).
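
For instance, from a workstation with the mongo shell installed, reaching the first instance on node1 would look like this (assuming the name resolves as discussed above):

mongo --host node1-p27017 --port 27017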

What remains is to initiate the replica set for each group and add the other instances. Below is the example for rs1 (the same should be done for rs2 and rs3).


#### configuration of rs1

rs.initiate( {
   _id : "rs1",
   members: [ { _id : 0, host : "node1-p27017:27017" } ]
})
rs.add("node2-p27017:27017")
rs.addArb("node3-p27017:27017")
rs.status()

#### Setting the primary

cfg = rs.conf();
cfg.members[0].priority = 10;
cfg.members[1].priority = 5;
rs.reconfig(cfg);
rs.conf();



2016/12/21

Automating Dockerized MySQL Multi-Master Instances Deployment with Puppet (Puppet, Docker, MySQL Multi-Master and MySQL Multi Instances)

I've been wondering about implementing multiple MySQL instances (mysqld_multi) on the same server using Puppet. And when I got the additional requirement to complete the same implementation not just on a single server but on many servers (Multi-Master Replication), I found it very tricky to implement using the official (and quite stable) puppetlabs/mysql Forge module. In fact, this nice module wasn't designed to support MySQL multi-instance setups.

After digging into many alternatives, I realized that the very same implementation can instead be achieved by combining the power of the puppetlabs/mysql and garethr/docker Forge modules. Below is a small diagram that graphically represents this type of configuration. In the subsequent sections of this post, I'm sharing the technical details of the implementation (trying to follow Puppet best practices).

DevOps tools and software have brought us new vocabulary and funny new words like the ones in this post's title, but really, Puppetizing Dockerized MySQL Multi-Master Instances is just about automating the deployment of MySQL Multi-Master across multiple instances using both Puppet and Docker...



To summarize the architecture in that picture: we have two Docker host nodes (192.168.10.11 & 192.168.10.12); each node hosts several distinct MySQL Docker instances, and each instance is published on a particular TCP port (3306, 3307, 3308...). The datadir and confdir of each MySQL instance are mapped to directories sitting under the same LVM Volume Group but on different LVM Logical Volumes. Each instance is multi-master replicated with its peer instance on the other Docker host node (data protection/HA).

In order to make the technical implementation details easier to digest, I'm making use of a set of well-known Puppet best practices, with emphasis on the following:
  • Roles and Profiles Pattern
  • Hiera for Configuration Data
The post is divided in the following sections:
  1. Puppet Forge Module Installation and other prerequisites
  2. Profile Module creation
  3. Hiera Configuration
  4. Roles Configuration
  5. Puppet Agent Single run

   I. Module Installation and other prerequisites:

To complete this implementation, we need to install garethr/docker, puppetlabs/mysql and puppetlabs/lvm (only if using LVM as suggested above). As usual, the modules can be installed either on the command line or through the Puppetfile.

[root@pe-master ~] # puppet module install garethr-docker
[root@pe-master ~] # puppet module install puppetlabs-mysql
[root@pe-master ~] # puppet module install puppetlabs-lvm

or simply by adding the following to Puppetfile,


mod 'puppetlabs-mysql'
mod 'garethr-docker'
mod 'puppetlabs-lvm'


   II. Profile Module Creation:

The profile is really where most of the code is written. In fact, the profile below is the one combining all the mentioned technologies (Docker, MySQL multi-master and, optionally, LVM) and making them act as a single module.
Note that I'm using a parameterized class for this profile, with some default values and some mandatory variables.

Note also that this profile can easily be adapted for MySQL on Docker without replication.
I did my best to add comments to most of the parameters and code blocks below, and I'm also relying on the readability of the Puppet DSL to make this easy to understand.

class profiles::mysql_mmm_docker
(
  $server_id,                                 # Specify the Server id - must be defined
  $master_ip,                                 # The IP Address of the master for this MM - must be defined
  $auto_increment_increment,                  # Auto-Increment Increment value - must be defined
  $auto_increment_offset    = "${server_id}", # Auto-Increment Offset Value, default same as server id
  $mysql_version            = "5.7",          # Mysql Docker Version, default to 5.7
  $replica_user             = "replication",  # As master, what is the replication account
  $replica_password         = "replication",  # Replication User password
  $slaves_ips               = [],             # What are the potentials slave for this master
  $replicate_ignore_db      = ["test","information_schema"], # Database to not replicate
  $docker_mysql_maindir     = "/var/docker_db",  # Main Mysql Configuration Directory
  $is_using_lvm             = false,  # Whether or not LVM Module should be included
  $wait_for_mysql_num_try   = "5",    # Trying to control Race Condition between Docker and Mysql
  $wait_for_mysql_sleep     = "10",   # Combined with above to try X times with a 10 seconds sleep
)
{

  ### Add the needed Modules and Docker Images
  include 'docker'
  docker::image { 'mysql':
    image_tag => $mysql_version,
  } 
  # Include LVM if needed 
  # Note that all the needed parameters must be defined in Hiera
  if $is_using_lvm { include ::lvm }
  
  ### Setup Replication for each containers
  
  # Create a hash from Hiera Data with the Docker Run
  $myDockerContainer = hiera('docker::run', {})
  
  # Docker Container DataDir
  $dck_datadir       = '/var/lib/mysql'
  
  # Looping through container
  $myDockerContainer.each |$container_name, $container_param| {

    ### Get the must have parameters - variables , ports and MYSQL_ROOT_PASSWORD
    if !has_key($container_param, 'env') or !has_key($container_param, 'extra_parameters') {
      fail('Need to have MYSQL_ROOT_PASSWORD env and extra_parameters --publish xxxx:3306 set for this to work properly')
    }
    $env_mysql_password   = grep($container_param[env], 'MYSQL_ROOT_PASSWORD')[0]
    $extra_params_publish = grep($container_param[extra_parameters], 'publish')
    if empty($env_mysql_password) or empty($extra_params_publish) {
      fail('Need to have MYSQL_ROOT_PASSWORD env and extra_parameters --publish xxxx:3306 set for this to work properly')
    }
    $mysql_root_password = split("${env_mysql_password}", '=')[1]
    $port_mapping        = split("${extra_params_publish}", Regexp['[\s+]'])[1]
    $public_port         = split("${port_mapping}", ':')[0]
    
    
    ### Create needed directory structure for confdir and datadir and Exec
    # Not managing how $docker_mysql_maindir/db_p${public_port} is created here
    $confdir = "$docker_mysql_maindir/db_p${public_port}/conf.d"
    $datadir = "$docker_mysql_maindir/db_p${public_port}/mysql-datadir"
    file { "$confdir":
      ensure   => directory,
    }
    file { $datadir:
      ensure   => directory,
    }
    file { "${confdir}/my-replication.cnf":
      ensure   => file,
      content  => template('profiles/mycnf-replication.erb'),
      require  => File[$confdir],
    }
    
    ## Docker Container Run Configuration
    # Then run the Containers
    docker::run{$container_name:
      * => {
            volumes   =>  ["${confdir}:/etc/mysql/conf.d","${datadir}:/var/lib/mysql"],
            subscribe =>  File["${confdir}/my-replication.cnf"],
            notify    =>  Exec["check_mysql_${public_port}"]
           } + $container_param # Merging with container parameters
    }
    
    # Exec to Avoid race conditions between docker run and docker exec
    # Trying to get connect to Mysql many times before giving up
    exec {"check_mysql_${public_port}":
      command     => "mysql -uroot -p${mysql_root_password} -h127.0.0.1 -P${public_port} -e \"exit\"",
      tries       => $wait_for_mysql_num_try,
      try_sleep   => $wait_for_mysql_sleep,
      path        => ['/bin','/sbin','/usr/bin', '/usr/sbin',],
      refreshonly => true,
    }
    
    ## Setup Master Slave Replication on that Docker Container
    $cmd_change_master = join(["CHANGE MASTER TO MASTER_HOST",
                               "\'${master_ip}\',MASTER_PORT",
                               "${public_port},MASTER_USER",
                               "\'${replica_user}\',MASTER_PASSWORD",
                               "\'${replica_password}\';"], "=") 
    $cmd_start_slave   = "START SLAVE;"
    $cmd_unless_master = join(["bash -c \'grep ${master_ip} $dck_datadir/master.info ",
                               " grep ${public_port} $dck_datadir/master.info ",
                               " grep -w ${replica_user} $dck_datadir/master.info ",
                               " grep -w ${replica_password} $dck_datadir/master.info\'"], "&&") 
    docker::exec { "set_master_params_${container_name}":
      container    => $container_name,
      command      => "mysql -uroot -p${mysql_root_password} --execute=\"${cmd_change_master}${cmd_start_slave}\"",
      unless       => $cmd_unless_master,
      require      => [Docker::Run[$container_name],Exec["check_mysql_${public_port}"]],
    }
    # Create replication user for each slave
    $slaves_ips.each |$slave_ip| {
      docker::exec { "set_slave_params_${container_name}_${slave_ip}":
        container    => $container_name,
        command      => "mysql -uroot -p${mysql_root_password} --execute=\"GRANT REPLICATION SLAVE ON *.* \
                         TO \'${replica_user}\'@\'${slave_ip}\' IDENTIFIED BY \'${replica_password}\'\"",
        unless       => "mysql -uroot -p${mysql_root_password} --execute=\"SELECT User,Host FROM mysql.user WHERE \
                         user = \'${replica_user}\' AND host = \'${slave_ip}\'\" 2>/dev/null | grep ${replica_user}",
        require      => [Docker::Run[$container_name],Exec["check_mysql_${public_port}"]]
      }
    }
  }
}

  
   III. Hiera Configuration:

As noted above, I'm exclusively using Hiera to store the nodes' configuration data. These are the three hierarchy layers used for the nodes (and groups of nodes) configuration (a hiera.yaml sketch follows the list):
  • node/%{::fqdn}  for per-node overrides
  • puppet_group/%{facts.group} for setting group-specific data (in this case, the group is set to "containers_ha_01")
  • common for global fallback
The IP addresses and names of these nodes are: 192.168.10.11 (node1.yaml) & 192.168.10.12 (node2.yaml)
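
For completeness, here is a minimal hiera.yaml sketch matching that hierarchy (Hiera 3 syntax, which is what this setup uses; the datadir path is an assumption, and group is a custom fact set on the nodes):

---
:backends:
  - yaml
:hierarchy:
  - "node/%{::fqdn}"
  - "puppet_group/%{facts.group}"
  - common
:yaml:
  :datadir: "/etc/puppetlabs/code/environments/%{environment}/hieradata"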

node/node1.yaml
---
profiles::mysql_mmm_docker::server_id :   1
profiles::mysql_mmm_docker::master_ip :   192.168.10.12
profiles::mysql_mmm_docker::slaves_ips:
  - 192.168.10.12

node/node2.yaml
---
profiles::mysql_mmm_docker::server_id :   2
profiles::mysql_mmm_docker::master_ip :   192.168.10.11
profiles::mysql_mmm_docker::slaves_ips:
  - 192.168.10.11

puppet_group/containers_ha_01.yaml
---
profiles::mysql_mmm_docker::auto_increment_increment: 2
profiles::mysql_mmm_docker::is_using_lvm            : true
lvm::volume_groups:
  dockervg:
    physical_volumes:
      - /dev/sdb
    logical_volumes:
      docker_db_p3306:
        size:                                         10G
        mountpath:                                    /var/docker_db/db_p3306
      docker_db_p3307:
        size:                                         10G
        mountpath:                                    /var/docker_db/db_p3307
      docker_db_p3308:
        size:                                         10G
        mountpath:                                    /var/docker_db/db_p3308
docker::run: 
  'mysql_db3306':
    image           : 'mysql'
    extra_parameters:
      - '--publish 3306:3306'
    memory_limit    : '25g'
    env             :
      - 'MYSQL_ROOT_PASSWORD=SecretR00t'
      - 'MYSQL_USER=db3306'
      - 'MYSQL_PASSWORD=db3306'
      - 'MYSQL_DATABASE=db3306'
  'mysql_db3307':
    image           : 'mysql'
    extra_parameters:
      - '--publish 3307:3306'
    memory_limit    : '25g'
    env             :
      - 'MYSQL_ROOT_PASSWORD=SecretR00t'
      - 'MYSQL_USER=db3307'
      - 'MYSQL_PASSWORD=db3307'
      - 'MYSQL_DATABASE=db3307'
  'mysql_db3308':
    image           : 'mysql'
    extra_parameters:
      - '--publish 3308:3306'
    memory_limit    : '25g'
    env             :
      - 'MYSQL_ROOT_PASSWORD=SecretR00t'
      - 'MYSQL_USER=db3308'
      - 'MYSQL_PASSWORD=db3308'
      - 'MYSQL_DATABASE=db3308'

   IV. Roles Configuration:

Using the profile described in Section II, with the right Hiera data, we can move forward with the roles module configuration:

class roles::containers_ha_01  {
 
    # Install Docker and Mysql Multi Master Instances
    include profiles::mysql_mmm_docker

}

   V. Puppet Agent Single run:

And finally, let us trigger a puppet convergence and enjoy the result,

[root@node01 ~]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for node01
Info: Applying configuration version '1482231746'
Notice: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Logical_volume[docker_db_p3306]/ensure: created
Notice: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Filesystem[/dev/dockervg/docker_db_p3306]/ensure: created
Notice: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Exec[ensure mountpoint '/var/docker_db/db_p3306' exists]/returns: executed successfully
Notice: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Mount[/var/docker_db/db_p3306]/ensure: defined 'ensure' as 'mounted'
Info: Computing checksum on file /etc/fstab
Info: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Mount[/var/docker_db/db_p3306]: Scheduling refresh of Mount[/var/docker_db/db_p3306]
Info: Mount[/var/docker_db/db_p3306](provider=parsed): Remounting
Notice: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Mount[/var/docker_db/db_p3306]: Triggered 'refresh' from 1 events
Info: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Mount[/var/docker_db/db_p3306]: Scheduling refresh of Mount[/var/docker_db/db_p3306]
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/File[/var/docker_db/db_p3306/conf.d]/ensure: created
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/File[/var/docker_db/db_p3306/mysql-datadir]/ensure: created
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/File[/var/docker_db/db_p3306/conf.d/my-replication.cnf]/ensure: defined content as '{md5}6d7e21c2c821dff87548fb2d416ac4c3'
Info: /Stage[main]/Profiles::Mysql_mmm_docker/File[/var/docker_db/db_p3306/conf.d/my-replication.cnf]: Scheduling refresh of Docker::Run[mysql_db3306]
Info: Docker::Run[mysql_db3306]: Scheduling refresh of Service[docker-mysql-db3306]
Info: Docker::Run[mysql_db3306]: Scheduling refresh of Exec[docker-mysql-db3306-systemd-reload]
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Run[mysql_db3306]/File[/etc/systemd/system/docker-mysql-db3306.service]/ensure: defined content as '{md5}a0932d9ef12c1d4158e203b1021e43af'
Info: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Run[mysql_db3306]/File[/etc/systemd/system/docker-mysql-db3306.service]: Scheduling refresh of Service[docker-mysql-db3306]
Info: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Run[mysql_db3306]/File[/etc/systemd/system/docker-mysql-db3306.service]: Scheduling refresh of Exec[docker-mysql-db3306-systemd-reload]
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Run[mysql_db3306]/Exec[docker-mysql-db3306-systemd-reload]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Run[mysql_db3306]/Service[docker-mysql-db3306]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Run[mysql_db3306]/Service[docker-mysql-db3306]: Unscheduling refresh on Service[docker-mysql-db3306]
Info: Docker::Run[mysql_db3306]: Scheduling refresh of Exec[check_mysql_3306]
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/Exec[check_mysql_3306]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Exec[set_master_params_mysql_db3306]/Exec[docker exec  mysql-db3306 mysql -uroot -pSecretR00t --execute="CHANGE MASTER TO MASTER_HOST='192.168.10.12',MASTER_PORT=3306,MASTER_USER='replication',MASTER_PASSWORD='replication';START SLAVE;"]/returns: executed successfully
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Exec[set_slave_params_mysql_db3306_192.168.10.12]/Exec[docker exec  mysql-db3306 mysql -uroot -pSecretR00t --execute="GRANT REPLICATION SLAVE ON *.*                          TO 'replication'@'192.168.10.12' IDENTIFIED BY 'replication'"]/returns: executed successfully
Notice: Applied catalog in 19.31 seconds

Connecting to one of the MySQL Docker DBs and checking replication:


[root@node01 ~]# mysql -uroot -pMyR00t -h127.0.0.1 -P3308
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 87
Server version: 5.7.16 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: 
                  Master_Host: 192.168.10.11
                  Master_User: replication
                  Master_Port: 3308
                Connect_Retry: 60
              Master_Log_File: 
          Read_Master_Log_Pos: 4
               Relay_Log_File: f654f1d2ffd9-relay-bin.000001
                Relay_Log_Pos: 4
        Relay_Master_Log_File: 
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: test,information_schema
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 0
              Relay_Log_Space: 154
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1236
                Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Binary log is not open'
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 2
                  Master_UUID: 4792c799-c5dc-11e6-8bbf-0242ac110004
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 161219 22:54:03
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
1 row in set (0.00 sec)

References:
http://severalnines.com/blog/mysql-docker-containers-understanding-basics
https://tectonic.com/quay-enterprise/docs/latest/mysql-container.html
https://www.percona.com/blog/2016/03/30/docker-mysql-replication-101/
https://blog.chmouel.com/2014/11/04/avoiding-race-conditions-between-containers-with-docker-and-fig/




2016/12/05

Let us play with some Puppet Modules - Part III: Puppetizing MySQL Multi-Master Replication, Installation and Configuration with puppetlabs/mysql on MySQL 5.7 Community Edition

In the "Let us play with some Puppet Modules" series of posts, I share my experience with some Forge Puppet modules. To make it more understandable, I'm making use of a set of well-known Puppet best practices, with emphasis on the following:
  • Roles and Profiles Pattern
  • Hiera for Configuration Data
The posts in this series all follow the structure below:

  1. Module Installation and other prerequisites
  2. Profile Module creation
  3. Hiera Configuration
  4. Roles Configuration
  5. Puppet Agent Single run

This post is the third in the "Let us play with some Puppet Modules" series, and it describes MySQL multi-master installation and configuration using the puppetlabs/mysql (3.10.0) module. It has been used to install MySQL Community Edition 5.7 (and 5.6) on Oracle Enterprise Linux.

I won't dwell on what MySQL multi-master replication is, as there are already many great blogs (and official documentation) on that topic (check the references section below for some of them).

   I. Module Installation and other prerequisites:

As usual, the module can be installed either on the command line or through the Puppetfile.

[root@pe-master ~] # puppet module install puppetlabs-mysql

or simply by adding the following to Puppetfile,


mod 'puppetlabs-mysql'

   II. Profile Module Creation:

Let us create a specific profile for MySQL servers. Note that I'm using a parameterized class for this profile, with a default value for each parameter. These defaults are set as an example to install a standalone MySQL server (the version in the official distribution repository); a simple include profiles::mysqlserver will do just that.
The same parameters can obviously be modified to suit our needs. In this case, they will be adjusted using Hiera (see the Hiera section) for the MySQL multi-master replication installation and configuration.
I did my best to add comments to most of the parameters and code blocks below, and I'm relying on the readability of the Puppet DSL to make this easy to understand.


class profiles::mysqlserver (
  $root_password          = "Myr00t", # set password as Myr00t
  $override_options       = {},    # define server options in this hash
  $datadir                = "/var/lib/mysql", # can also be defined under override option
  $port                   = "3306", # can also be defined under override option
  $pid_file               = "/var/run/mysqld/mysqld.pid", # can also be defined under override option
  $socket                 = "/var/lib/mysql/mysql.sock", # can also be defined under override option
  $bind_address           = "127.0.0.1",  # can also be defined under override option
  $dbs                    = {},    # Hash of Array for mysql::db
  $community_release      = false, # Set to true to use community edition
  $community_major_v      = '5',   # If using community what is the major version
  $community_minor_v      = '7',   # If using community what is the minor version
  $is_master              = false, # True if the node is master
  $replica_user           = "replication", # For master, what is the replication account
  $replica_password       = "replication", # Replication User password
  $replica_password_hash  = '*D36660B5249B066D7AC5A1A14CECB71D36944CBC', # the same replication account password hashed
  $slaves_ips             = [],     # What are the potentials slave for this master
  $is_slave               = false,  # True if the node is slave
  $master_ip              = "",     # The IP Address of the master in case this is a slave
  $master_port            = "3306", # The port where the master is listening to
)
{

  ### Check Additional defined option and merge to have full defined options
  $additional_override_options = {
    'mysqld' => {
       'datadir'      => "${datadir}",
       'port'         => "${port}",
       'pid-file'     => "${pid_file}",
       'socket'       => "${socket}",
       'bind-address' => "${bind_address}",
    }
  }
  $full_override_options = deep_merge($additional_override_options, $override_options)

  ### Install the requested Mysql Release
  if $community_release == false {
    class { '::mysql::server':
      root_password           => $root_password,
      override_options        => $full_override_options,
      restart                 => true, #service should be restarted when things change
    }
  }
  else {
    $url_repo = "http://dev.mysql.com/get/mysql${community_major_v}${community_minor_v}-community-release-el${::facts['os']['release']['major']}-${community_minor_v}.noarch.rpm"
    package { 'mysql-community-repo':
      name      => "mysql${community_major_v}${community_minor_v}-community-release",
      ensure    => installed,
      provider  => rpm,
      source    => $url_repo,
    }
    # Note: 
    # Installing mysql-community-server will install all the needed packages
    # for MySQL server mysql-community-server, mysql-community-client, mysql-community-common and mysql-community-libs
    class { '::mysql::server':
      root_password           => $root_password,
      override_options        => $full_override_options,
      package_name            => 'mysql-community-server',
      package_ensure          => 'present',
      restart                 => true, #service should be restarted when things change
      require                 => Package['mysql-community-repo'],
    }
  }

  ### With Create Resource Converts a hash into a set of resources and create dbs
  create_resources(mysql::db, $dbs)

  ### Check if this system was marked as master and set appropriate params and users
  if $is_master {
  
    if $full_override_options['mysqld']['bind-address'] == "127.0.0.1" {
        fail("This cant be a master and listening only to localhost, you must change the bind_address variable to a suitable one")
    }
    
    $slaves_ips.each |$slave_ip| {
  
      mysql_user { "${replica_user}@${slave_ip}":
        ensure                   => 'present',
        max_connections_per_hour => '0',
        max_queries_per_hour     => '0',
        max_updates_per_hour     => '0',
        max_user_connections     => '0',
        password_hash            => $replica_password_hash,
      }
    
      mysql_grant { "${replica_user}@${slave_ip}/*.*":
        ensure                 => 'present',
        options                => ['GRANT'],
        privileges             => ['REPLICATION SLAVE'],
        table                  => '*.*',
        user                   => "${replica_user}@${slave_ip}",
        require                => Mysql_user["${replica_user}@${slave_ip}"],
      }
        
    }
  }
  
  ### Check if this system was marked as slave and set appropriate params and commands
  if $is_slave {
    validate_ip_address($master_ip)  # IP Address must be set to identify the master
    $cmd_change_master = join(["CHANGE MASTER TO MASTER_HOST","\'${master_ip}\',MASTER_PORT","${master_port},MASTER_USER","\'${replica_user}\',MASTER_PASSWORD","\'${replica_password}\';"], "=") 
    $cmd_start_slave   = "START SLAVE;"
    exec { 'set_master_params':
      command => "mysql --defaults-extra-file=/root/.my.cnf --execute=\"${cmd_change_master}${cmd_start_slave}\"",
      require => Class['::mysql::server'],
      path    => ["/usr/local/sbin","/usr/local/bin","/sbin","/bin","/usr/sbin","/usr/bin","/opt/puppetlabs/bin","/root/bin"],
      unless  => "grep ${master_ip} $datadir/master.info && grep ${master_port} $datadir/master.info && grep -w ${replica_user} $datadir/master.info && grep -w ${replica_password} $datadir/master.info",
    }
  }
  
  
}
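
A quick note on $replica_password_hash: it must be the mysql_native_password hash of $replica_password. One way to obtain it is from any MySQL 5.6/5.7 shell (PASSWORD() is deprecated in 5.7 but still available there):

mysql> SELECT PASSWORD('replication');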

   III. Hiera Configuration:

As said in the brief introduction above, I'm exclusively using Hiera to store the nodes' configuration data. As this is about multi-master replication, I'm showing two separate node configurations here, one for each node we're replicating. The IP addresses and names of these nodes are: 192.168.10.11 (node1.yaml) & 192.168.10.12 (node2.yaml)

node1.yaml

---
profiles::mysqlserver::root_password    : 'Myr00t'
profiles::mysqlserver::community_release: true
profiles::mysqlserver::is_master        : true
profiles::mysqlserver::bind_address     : '0.0.0.0'
profiles::mysqlserver::slaves_ips       :
  - "192.168.10.12"
profiles::mysqlserver::is_slave         : true
profiles::mysqlserver::master_ip        : "192.168.10.12"
profiles::mysqlserver::override_options :
  'mysqld'                              :
    'server-id'                         : '1'
    'log-bin'                           : "mysql-bin"
    'binlog-ignore-db'                  :
      - 'test'
      - 'information_schema'
    'replicate-ignore-db'               :
      - 'test'
      - 'information_schema'
    'relay-log'                         : "mysql-relay-log"
    #By setting the auto_increment_increment and auto_increment_offset values independent servers
    #will create unique auto_increment values allowing for replication without fear of collision!
    #Note that auto_increment_increment is set to the total number of nodes in this multi-master
    'auto-increment-increment'          : '2'
    'auto-increment-offset'             : '1'
profiles::mysqlserver::dbs:
  'stiv-db1':
    'user'                      : 'stiv'
    'password'                  : 'stiv'
    'host'                      : 'localhost'
    'grant'                     :
      - 'SELECT'
      - 'UPDATE'

node2.yaml


---
profiles::mysqlserver::root_password    : 'Myr00t'
profiles::mysqlserver::community_release: true
profiles::mysqlserver::is_master        : true
profiles::mysqlserver::bind_address     : '0.0.0.0'
profiles::mysqlserver::slaves_ips       :
  - "192.168.10.11"
profiles::mysqlserver::is_slave         : true
profiles::mysqlserver::master_ip        : "192.168.10.11"
profiles::mysqlserver::override_options :
  'mysqld'                              :
    'server-id'                         : '2'
    'log-bin'                           : "mysql-bin"
    'binlog-ignore-db'                  :
      - 'test'
      - 'information_schema'
    'replicate-ignore-db'               :
      - 'test'
      - 'information_schema'
    'relay-log'                         : "mysql-relay-log"
    #By setting the auto_increment_increment and auto_increment_offset values independent servers
    #will create unique auto_increment values allowing for replication without fear of collision!
    #Note that auto_increment_increment is set to the total number of nodes in this multi-master
    'auto-increment-increment'          : '2'
    'auto-increment-offset'             : '2'


   IV. Roles Configuration:

Using the mysqlserver profile described in Section II, with the right Hiera data, we can move forward with the roles module configuration:


class roles::mydb  {
 
    # Install MySQL Server
    include profiles::mysqlserver

}

   V. Puppet Agent Single run:

And finally, let us trigger a Puppet run and then check the replication status from the MySQL client:


[root@node1 ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 82
Server version: 5.7.16-log MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.10.12
                  Master_User: replication
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 2468
               Relay_Log_File: mysql-relay-log.000003
                Relay_Log_Pos: 2681
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: test,information_schema
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 2468
              Relay_Log_Space: 3101
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 2
                  Master_UUID: 4527f43c-ba02-11e6-a49e-b4b52f6d2a48
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
1 row in set (0.00 sec)


Done, MySQL multi-master installation and configuration, Puppetized!

References:
https://forge.puppet.com/puppetlabs/mysql
https://www.digitalocean.com/community/tutorials/how-to-set-up-mysql-master-master-replication
http://codetalk.code-kobold.de/mysql-55-server-as-replication-slave/
http://linusramos.blogspot.com/2012/09/mysql-replication-for-v55-up-unknown.html
https://tapasmishra.wordpress.com/2012/06/11/how-to-configure-mysql-5-5-server-as-replication-slave/
https://support.rackspace.com/how-to/mysql-master-master-replication/
https://www.howtoforge.com/mysql_master_master_replication