I've been wondering about implementing multiple MySQL instances (mysqld_multi) with Puppet on the same server. When I got the additional requirement to complete the same implementation not just on a single server but across many servers (multi-master replication), I found it quite tricky to implement using the official (and quite stable) Puppet/Mysql Forge module. In fact, this nice module wasn't designed to support MySQL multi-instances.
After digging into many alternatives, I realized that the very same implementation can instead be achieved by combining the power of the Puppet/Mysql and Garethr/Docker Forge modules. Below is a small diagram that represents this type of configuration. In the subsequent sections of this post, I'm sharing the technical details of that implementation (trying to follow Puppet best practices).
DevOps tools and software have brought us new vocabulary and funny new words like the one in this post's title, but really, Puppetizing Dockerized MySQL Multi-Master Instances is just about automating the deployment of MySQL multi-master replication for multiple instances using both Puppet and Docker...
[Architecture diagram: two Docker host nodes, each running multiple MySQL containers, with multi-master replication between the two hosts]
To summarize the architecture shown in that diagram: we have two Docker host nodes (192.168.10.11 and 192.168.10.12), each hosting several distinct MySQL Docker instances, and each instance is published on a dedicated TCP port (3306, 3307, 3308, ...). The datadir and confdir of each MySQL instance are mapped to directories that sit under the same LVM volume group but on different LVM logical volumes. Each MySQL instance is multi-master replicated with its peer instance on the other Docker host node (data protection / HA).
To make the technical implementation details easier to digest, I'm making use of a set of well-known Puppet best practices, with a great emphasis on the following:
- Roles and Profiles Pattern
- Hiera for Configuration Data
The implementation itself walks through the following steps:
- Puppet Forge Module Installation and other prerequisites
- Profile Module creation
- Hiera Configuration
- Roles Configuration
- Puppet Agent Single run
I. Module Installation and Other Prerequisites:
To complete this implementation, we need to install the Garethr/Docker, Puppet/Mysql and Puppet/LVM modules (the latter only if using LVM as suggested above). As usual, the modules can either be installed from the command line or through a Puppetfile.
[root@pe-master ~]# puppet module install garethr-docker
[root@pe-master ~]# puppet module install puppetlabs-mysql
[root@pe-master ~]# puppet module install puppetlabs-lvm
or simply by adding the following to the Puppetfile:
mod 'puppetlabs-mysql'
mod 'garethr-docker'
mod 'puppetlabs-lvm'
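If modules are managed through the Puppetfile, a code-deployment tool such as r10k can pull them in. A minimal sketch, assuming r10k is already installed and run from the environment directory that contains the Puppetfile:

[root@pe-master ~]# r10k puppetfile install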
II. Profile Module Creation:
The profile is really where most of the code is written. The profile below is the one that combines all the mentioned technologies (Docker, MySQL multi-master and the optional LVM) and makes them act as a single unit.
Note that I'm using a parameterized class for this profile, with some default values and some mandatory parameters.
Note also that this profile can easily be adapted to run MySQL on Docker without replication.
I did my best to add comments to most of the parameters and code blocks below, and I'm also leveraging the readability of the Puppet DSL to make this easy to understand.
class profiles::mysql_mmm_docker (
  $server_id,                                                 # Specify the Server id - must be defined
  $master_ip,                                                 # The IP Address of the master for this MM - must be defined
  $auto_increment_increment,                                  # Auto-Increment Increment value - must be defined
  $auto_increment_offset    = "${server_id}",                 # Auto-Increment Offset Value, default same as server id
  $mysql_version            = "5.7",                          # Mysql Docker Version, default to 5.7
  $replica_user             = "replication",                  # As master, what is the replication account
  $replica_password         = "replication",                  # Replication User password
  $slaves_ips               = [],                             # What are the potential slaves for this master
  $replicate_ignore_db      = ["test","information_schema"],  # Databases to not replicate
  $docker_mysql_maindir     = "/var/docker_db",               # Main Mysql Configuration Directory
  $is_using_lvm             = false,                          # Whether or not LVM Module should be included
  $wait_for_mysql_num_try   = "5",                            # Trying to control Race Condition between Docker and Mysql
  $wait_for_mysql_sleep     = "10",                           # Combined with above to try X times with a 10 seconds sleep
) {

  ### Add the needed Modules and Docker Images
  include 'docker'

  docker::image { 'mysql':
    image_tag => $mysql_version,
  }

  # Include LVM if needed
  # Note that all the needed parameters must be defined in Hiera
  if $is_using_lvm {
    include ::lvm
  }

  ### Setup Replication for each container
  # Create a hash from Hiera Data with the Docker Run
  $myDockerContainer = hiera('docker::run', {})

  # Docker Container DataDir
  $dck_datadir = '/var/lib/mysql'

  # Looping through containers
  $myDockerContainer.each |$container_name, $container_param| {

    ### Get the must-have parameters - variables, ports and MYSQL_ROOT_PASSWORD
    if !has_key($container_param, 'env') or !has_key($container_param, 'extra_parameters') {
      fail('Need to have MYSQL_ROOT_PASSWORD env and extra_parameters --publish xxxx:3306 set for this to work properly')
    }

    $env_mysql_password   = grep($container_param[env], 'MYSQL_ROOT_PASSWORD')[0]
    $extra_params_publish = grep($container_param[extra_parameters], 'publish')

    if empty($env_mysql_password) or empty($extra_params_publish) {
      fail('Need to have MYSQL_ROOT_PASSWORD env and extra_parameters --publish xxxx:3306 set for this to work properly')
    }

    $mysql_root_password = split("${env_mysql_password}", '=')[1]
    $port_mapping        = split("${extra_params_publish}", Regexp['[\s+]'])[1]
    $public_port         = split("${port_mapping}", ':')[0]

    ### Create needed directory structure for confdir and datadir
    # Not managing how $docker_mysql_maindir/db_p${public_port} is created here
    $confdir = "$docker_mysql_maindir/db_p${public_port}/conf.d"
    $datadir = "$docker_mysql_maindir/db_p${public_port}/mysql-datadir"

    file { "$confdir":
      ensure => directory,
    }

    file { $datadir:
      ensure => directory,
    }

    file { "${confdir}/my-replication.cnf":
      ensure  => file,
      content => template('profiles/mycnf-replication.erb'),
      require => File[$confdir],
    }

    ## Docker Container Run Configuration
    # Then run the Containers
    docker::run { $container_name:
      * => {
        volumes   => ["${confdir}:/etc/mysql/conf.d", "${datadir}:/var/lib/mysql"],
        subscribe => File["${confdir}/my-replication.cnf"],
        notify    => Exec["check_mysql_${public_port}"],
      } + $container_param,  # Merging with container parameters
    }

    # Exec to avoid race conditions between docker run and docker exec
    # Trying to connect to Mysql several times before giving up
    exec { "check_mysql_${public_port}":
      command     => "mysql -uroot -p${mysql_root_password} -h127.0.0.1 -P${public_port} -e \"exit\"",
      tries       => $wait_for_mysql_num_try,
      try_sleep   => $wait_for_mysql_sleep,
      path        => ['/bin','/sbin','/usr/bin','/usr/sbin'],
      refreshonly => true,
    }

    ## Setup Master Slave Replication on that Docker Container
    $cmd_change_master = join(["CHANGE MASTER TO MASTER_HOST",
                               "\'${master_ip}\',MASTER_PORT",
                               "${public_port},MASTER_USER",
                               "\'${replica_user}\',MASTER_PASSWORD",
                               "\'${replica_password}\';"], "=")

    $cmd_start_slave = "START SLAVE;"

    $cmd_unless_master = join(["bash -c \'grep ${master_ip} $dck_datadir/master.info ",
                               " grep ${public_port} $dck_datadir/master.info ",
                               " grep -w ${replica_user} $dck_datadir/master.info ",
                               " grep -w ${replica_password} $dck_datadir/master.info\'"], "&&")

    docker::exec { "set_master_params_${container_name}":
      container => $container_name,
      command   => "mysql -uroot -p${mysql_root_password} --execute=\"${cmd_change_master}${cmd_start_slave}\"",
      unless    => $cmd_unless_master,
      require   => [Docker::Run[$container_name], Exec["check_mysql_${public_port}"]],
    }

    # Create replication user for each slave
    $slaves_ips.each |$slave_ip| {
      docker::exec { "set_slave_params_${container_name}_${slave_ip}":
        container => $container_name,
        command   => "mysql -uroot -p${mysql_root_password} --execute=\"GRANT REPLICATION SLAVE ON *.* \
                      TO \'${replica_user}\'@\'${slave_ip}\' IDENTIFIED BY \'${replica_password}\'\"",
        unless    => "mysql -uroot -p${mysql_root_password} --execute=\"SELECT User,Host FROM mysql.user WHERE \
                      user = \'${replica_user}\' AND host = \'${slave_ip}\'\" 2>/dev/null | grep ${replica_user}",
        require   => [Docker::Run[$container_name], Exec["check_mysql_${public_port}"]],
      }
    }
  }
}
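One piece the listing above references but does not show is the ERB template behind File["${confdir}/my-replication.cnf"]. Since $server_id, $auto_increment_increment, $auto_increment_offset and $replicate_ignore_db are not consumed anywhere else in the profile, they are presumably rendered there. Here is a minimal sketch of what profiles/templates/mycnf-replication.erb could look like (a hypothetical reconstruction, not the author's original template):

# Hypothetical sketch of profiles/templates/mycnf-replication.erb
[mysqld]
# Unique server id and binary logging, required for replication
server-id                = <%= @server_id %>
log-bin                  = mysql-bin
# Multi-master friendly auto-increment settings
auto_increment_increment = <%= @auto_increment_increment %>
auto_increment_offset    = <%= @auto_increment_offset %>
# Databases excluded from replication
<% @replicate_ignore_db.each do |db| -%>
replicate-ignore-db      = <%= db %>
<% end -%>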
III. Hiera Configuration:
As noted above, I'm exclusively using Hiera to store the nodes' configuration data. These are the three hierarchy layers used for per-node (and per-group) configuration; a hiera.yaml sketch follows the list:
- node/%{::fqdn} for per-node overrides
- puppet_group/%{facts.group} for group-specific data (in this case, the group fact is set to "containers_ha_01")
- common for global fallback
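Expressed as a hiera.yaml, those three layers could look roughly like the following (a Hiera 3 style sketch; the datadir path is an assumption, and the custom group fact must resolve to "containers_ha_01" on the Docker host nodes):

---
# Sketch of a Hiera 3 hierarchy matching the three layers above (datadir path assumed)
:backends:
  - yaml
:hierarchy:
  - "node/%{::fqdn}"
  - "puppet_group/%{facts.group}"
  - common
:yaml:
  :datadir: /etc/puppetlabs/code/environments/%{environment}/hieradata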
node/node1.yaml
---
profiles::mysql_mmm_docker::server_id: 1
profiles::mysql_mmm_docker::master_ip: 192.168.10.12
profiles::mysql_mmm_docker::slaves_ips:
  - 192.168.10.12
node/node2.yaml
---
profiles::mysql_mmm_docker::server_id: 2
profiles::mysql_mmm_docker::master_ip: 192.168.10.11
profiles::mysql_mmm_docker::slaves_ips:
  - 192.168.10.11
puppet_group/containers_ha_01.yaml
---
profiles::mysql_mmm_docker::auto_increment_increment: 2
profiles::mysql_mmm_docker::is_using_lvm: true
lvm::volume_groups:
  dockervg:
    physical_volumes:
      - /dev/sdb
    logical_volumes:
      docker_db_p3306:
        size: 10G
        mountpath: /var/docker_db/db_p3306
      docker_db_p3307:
        size: 10G
        mountpath: /var/docker_db/db_p3307
      docker_db_p3308:
        size: 10G
        mountpath: /var/docker_db/db_p3308
docker::run:
  'mysql_db3306':
    image: 'mysql'
    extra_parameters:
      - '--publish 3306:3306'
    memory_limit: '25g'
    env:
      - 'MYSQL_ROOT_PASSWORD=SecretR00t'
      - 'MYSQL_USER=db3306'
      - 'MYSQL_PASSWORD=db3306'
      - 'MYSQL_DATABASE=db3306'
  'mysql_db3307':
    image: 'mysql'
    extra_parameters:
      - '--publish 3307:3306'
    memory_limit: '25g'
    env:
      - 'MYSQL_ROOT_PASSWORD=SecretR00t'
      - 'MYSQL_USER=db3307'
      - 'MYSQL_PASSWORD=db3307'
      - 'MYSQL_DATABASE=db3307'
  'mysql_db3308':
    image: 'mysql'
    extra_parameters:
      - '--publish 3308:3306'
    memory_limit: '25g'
    env:
      - 'MYSQL_ROOT_PASSWORD=SecretR00t'
      - 'MYSQL_USER=db3308'
      - 'MYSQL_PASSWORD=db3308'
      - 'MYSQL_DATABASE=db3308'
IV. Roles Configuration:
Using the profile described in Section II together with the right Hiera data, we can move on to the role module configuration:
class roles::containers_ha_01 {
  # Install Docker and Mysql Multi Master Instances
  include profiles::mysql_mmm_docker
}
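All that remains is to classify the two Docker host nodes with this role. A minimal site.pp sketch, assuming classic node definitions rather than the PE console or an ENC (the node names are hypothetical), is shown below; the custom group fact used by the Hiera hierarchy also has to resolve to "containers_ha_01" on these nodes (for example via an external fact under /etc/puppetlabs/facter/facts.d/).

# site.pp - hypothetical classification sketch
node 'node01.example.com', 'node02.example.com' {
  include roles::containers_ha_01
}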
V. Puppet Agent Single Run:
And finally, let's trigger a Puppet run and enjoy the result:
[root@node01 ~]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for node01
Info: Applying configuration version '1482231746'
Notice: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Logical_volume[docker_db_p3306]/ensure: created
Notice: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Filesystem[/dev/dockervg/docker_db_p3306]/ensure: created
Notice: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Exec[ensure mountpoint '/var/docker_db/db_p3306' exists]/returns: executed successfully
Notice: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Mount[/var/docker_db/db_p3306]/ensure: defined 'ensure' as 'mounted'
Info: Computing checksum on file /etc/fstab
Info: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Mount[/var/docker_db/db_p3306]: Scheduling refresh of Mount[/var/docker_db/db_p3306]
Info: Mount[/var/docker_db/db_p3306](provider=parsed): Remounting
Notice: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Mount[/var/docker_db/db_p3306]: Triggered 'refresh' from 1 events
Info: /Stage[main]/Lvm/Lvm::Volume_group[dockervg]/Lvm::Logical_volume[docker_db_p3306]/Mount[/var/docker_db/db_p3306]: Scheduling refresh of Mount[/var/docker_db/db_p3306]
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/File[/var/docker_db/db_p3306/conf.d]/ensure: created
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/File[/var/docker_db/db_p3306/mysql-datadir]/ensure: created
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/File[/var/docker_db/db_p3306/conf.d/my-replication.cnf]/ensure: defined content as '{md5}6d7e21c2c821dff87548fb2d416ac4c3'
Info: /Stage[main]/Profiles::Mysql_mmm_docker/File[/var/docker_db/db_p3306/conf.d/my-replication.cnf]: Scheduling refresh of Docker::Run[mysql_db3306]
Info: Docker::Run[mysql_db3306]: Scheduling refresh of Service[docker-mysql-db3306]
Info: Docker::Run[mysql_db3306]: Scheduling refresh of Exec[docker-mysql-db3306-systemd-reload]
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Run[mysql_db3306]/File[/etc/systemd/system/docker-mysql-db3306.service]/ensure: defined content as '{md5}a0932d9ef12c1d4158e203b1021e43af'
Info: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Run[mysql_db3306]/File[/etc/systemd/system/docker-mysql-db3306.service]: Scheduling refresh of Service[docker-mysql-db3306]
Info: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Run[mysql_db3306]/File[/etc/systemd/system/docker-mysql-db3306.service]: Scheduling refresh of Exec[docker-mysql-db3306-systemd-reload]
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Run[mysql_db3306]/Exec[docker-mysql-db3306-systemd-reload]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Run[mysql_db3306]/Service[docker-mysql-db3306]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Run[mysql_db3306]/Service[docker-mysql-db3306]: Unscheduling refresh on Service[docker-mysql-db3306]
Info: Docker::Run[mysql_db3306]: Scheduling refresh of Exec[check_mysql_3306]
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/Exec[check_mysql_3306]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Exec[set_master_params_mysql_db3306]/Exec[docker exec mysql-db3306 mysql -uroot -pSecretR00t --execute="CHANGE MASTER TO MASTER_HOST='192.168.10.12',MASTER_PORT=3306,MASTER_USER='replication',MASTER_PASSWORD='replication';START SLAVE;"]/returns: executed successfully
Notice: /Stage[main]/Profiles::Mysql_mmm_docker/Docker::Exec[set_slave_params_mysql_db3306_192.168.10.12]/Exec[docker exec mysql-db3306 mysql -uroot -pSecretR00t --execute="GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.10.12' IDENTIFIED BY 'replication'"]/returns: executed successfully
Notice: Applied catalog in 19.31 seconds
Connecting to one MySQL Docker DB to check replication:
[root@node01 ~]# mysql -uroot -pMyR00t -h127.0.0.1 -P3308
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 87
Server version: 5.7.16 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 192.168.10.11
                  Master_User: replication
                  Master_Port: 3308
                Connect_Retry: 60
              Master_Log_File:
          Read_Master_Log_Pos: 4
               Relay_Log_File: f654f1d2ffd9-relay-bin.000001
                Relay_Log_Pos: 4
        Relay_Master_Log_File:
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB: test,information_schema
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 0
              Relay_Log_Space: 154
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1236
                Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Binary log is not open'
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 2
                  Master_UUID: 4792c799-c5dc-11e6-8bbf-0242ac110004
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp: 161219 22:54:03
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
1 row in set (0.00 sec)
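Beyond SHOW SLAVE STATUS, a simple end-to-end check is to write on one node and read the same data back from its peer. A small sketch (assuming the db3308 database created by the container environment variables and the root password defined in Hiera):

[root@node01 ~]# mysql -uroot -pSecretR00t -h127.0.0.1 -P3308 \
    -e "CREATE TABLE db3308.ping (id INT PRIMARY KEY); INSERT INTO db3308.ping VALUES (1);"
[root@node02 ~]# mysql -uroot -pSecretR00t -h127.0.0.1 -P3308 \
    -e "SELECT * FROM db3308.ping;"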
References:
http://severalnines.com/blog/mysql-docker-containers-understanding-basics
https://tectonic.com/quay-enterprise/docs/latest/mysql-container.html
https://www.percona.com/blog/2016/03/30/docker-mysql-replication-101/
https://blog.chmouel.com/2014/11/04/avoiding-race-conditions-between-containers-with-docker-and-fig/