Setting up ProxySQL 1.4 with MySQL 5.7 Group Replication

January 9, 2018

Read this article in French

There are 3 pillars of a database architecture:

  • Monitoring
  • Backup / Restore process
  • High Availability

This blog post is about database High Availability; more precisely, about one of the best combos of the moment:
MySQL + ProxySQL

 

  • MySQL 5.7 Group Replication: the only native HA solution for MySQL. It is a single/multi-master, update-everywhere replication plugin for MySQL, with built-in automatic distributed recovery, conflict detection and group membership.
  • ProxySQL 1.4: probably the best proxy for MySQL.

 

Note: classic MySQL replication is still a serious alternative for HA, and by the way ProxySQL handles it natively.

Note: ProxySQL has a plethora of features; however, the purpose of this article is its use in a MySQL Group Replication context.

Note: MySQL NDB Cluster is also an HA solution, adapted to distributed computing environments. It is outside the scope of this article.

 

ProxySQL has been compatible with MySQL Group Replication since version 1.3, see (in French): Configurer ProxySQL pour MySQL Group Replication. However, version 1.4 has native MySQL Group Replication support, which makes it even easier to use these two popular technologies together.

This is what we will see now.

 

In this article I will make the following assumptions:

 

MySQL Group Replication

Specifications

  • Server version: 5.7.20
  • Plugin version: 1.0
  • Node 1: mysql_node1, 172.22.0.10:3306
  • Node 2: mysql_node2, 172.22.0.20:3306
  • Node 3: mysql_node3, 172.22.0.30:3306

These characteristics describe a 3-node MySQL Group Replication cluster, up and running:

node1>
-- MySQL Server version number
SELECT left(version(),6);
+-------------------+
| left(version(),6) |
+-------------------+
| 5.7.20            |
+-------------------+

-- MySQL Group Replication plugin details
SELECT * FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE 'group%'\G
*************************** 1. row ***************************
           PLUGIN_NAME: group_replication
        PLUGIN_VERSION: 1.0
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: GROUP REPLICATION
   PLUGIN_TYPE_VERSION: 1.1
        PLUGIN_LIBRARY: group_replication.so
PLUGIN_LIBRARY_VERSION: 1.7
         PLUGIN_AUTHOR: ORACLE
    PLUGIN_DESCRIPTION: Group Replication (1.0.0)
        PLUGIN_LICENSE: PROPRIETARY
           LOAD_OPTION: FORCE_PLUS_PERMANENT

-- MySQL Group Replication member status
SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 3c2039a6-f152-11e7-90da-0242ac16001e
 MEMBER_HOST: mysql_node3
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 3c2039a8-f152-11e7-9132-0242ac160014
 MEMBER_HOST: mysql_node2
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 3c36e366-f152-11e7-91a5-0242ac16000a
 MEMBER_HOST: mysql_node1
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE

MySQL Enterprise Monitor gives us a graphical view of the cluster and its state:

[Screenshot: MySQL Enterprise Monitor view of the Group Replication cluster]

 

The cluster is in single-primary mode, i.e. only one node is available for reading & writing at a time (while the other 2 nodes are read-only).

node1>
-- Is single primary mode activated?
SHOW VARIABLES LIKE 'group_replication_single_primary_mode';
+---------------------------------------+-------+
| Variable_name                         | Value |
+---------------------------------------+-------+
| group_replication_single_primary_mode | ON    |
+---------------------------------------+-------+

-- Who is the primary node?
SELECT MEMBER_ID, MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
FROM performance_schema.replication_group_members
INNER JOIN performance_schema.global_status ON (MEMBER_ID = VARIABLE_VALUE)
WHERE VARIABLE_NAME='group_replication_primary_member'\G
*************************** 1. row ***************************
   MEMBER_ID: 3c36e366-f152-11e7-91a5-0242ac16000a
 MEMBER_HOST: mysql_node1
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE

MySQL Enterprise Monitor shows us the same information:

[Screenshot: MySQL Enterprise Monitor view of the primary node]

 

I am going to extend the MySQL 5.7 sys schema with the following script, addition_to_sys.sql:

USE sys;

DELIMITER $$

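-- Helper: returns b if a is 0, otherwise returns a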
CREATE FUNCTION IFZERO(a INT, b INT)
RETURNS INT
DETERMINISTIC
RETURN IF(a = 0, b, a)$$

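-- Helper: like LOCATE(), but returns the end-of-string position instead of 0 when needle is not found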
CREATE FUNCTION LOCATE2(needle TEXT(10000), haystack TEXT(10000), offset INT)
RETURNS INT
DETERMINISTIC
RETURN IFZERO(LOCATE(needle, haystack, offset), LENGTH(haystack) + 1)$$

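-- Helper: normalizes a GTID set to its canonical form (via GTID_SUBTRACT of the empty set)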
CREATE FUNCTION GTID_NORMALIZE(g TEXT(10000))
RETURNS TEXT(10000)
DETERMINISTIC
RETURN GTID_SUBTRACT(g, '')$$

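-- Counts the number of individual GTIDs contained in a GTID set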
CREATE FUNCTION GTID_COUNT(gtid_set TEXT(10000))
RETURNS INT
DETERMINISTIC
BEGIN
  DECLARE result BIGINT DEFAULT 0;
  DECLARE colon_pos INT;
  DECLARE next_dash_pos INT;
  DECLARE next_colon_pos INT;
  DECLARE next_comma_pos INT;
  SET gtid_set = GTID_NORMALIZE(gtid_set);
  SET colon_pos = LOCATE2(':', gtid_set, 1);
  WHILE colon_pos != LENGTH(gtid_set) + 1 DO
     SET next_dash_pos = LOCATE2('-', gtid_set, colon_pos + 1);
     SET next_colon_pos = LOCATE2(':', gtid_set, colon_pos + 1);
     SET next_comma_pos = LOCATE2(',', gtid_set, colon_pos + 1);
     IF next_dash_pos < next_colon_pos AND next_dash_pos < next_comma_pos THEN
       SET result = result +
         SUBSTR(gtid_set, next_dash_pos + 1,
                LEAST(next_colon_pos, next_comma_pos) - (next_dash_pos + 1)) -
         SUBSTR(gtid_set, colon_pos + 1, next_dash_pos - (colon_pos + 1)) + 1;
     ELSE
       SET result = result + 1;
     END IF;
     SET colon_pos = next_colon_pos;
  END WHILE;
  RETURN result;
END$$

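-- Number of transactions received by the group replication channel but not yet applied on this member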
CREATE FUNCTION gr_applier_queue_length()
RETURNS INT
DETERMINISTIC
BEGIN
  RETURN (SELECT sys.gtid_count( GTID_SUBTRACT( (SELECT
Received_transaction_set FROM performance_schema.replication_connection_status
WHERE Channel_name = 'group_replication_applier' ), (SELECT
@@global.GTID_EXECUTED) )));
END$$

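-- Returns YES if this member is ONLINE and in the majority (primary) partition of the group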
CREATE FUNCTION gr_member_in_primary_partition()
RETURNS VARCHAR(3)
DETERMINISTIC
BEGIN
  RETURN (SELECT IF( MEMBER_STATE='ONLINE' AND ((SELECT COUNT(*) FROM
performance_schema.replication_group_members WHERE MEMBER_STATE != 'ONLINE') >=
((SELECT COUNT(*) FROM performance_schema.replication_group_members)/2) = 0),
'YES', 'NO' ) FROM performance_schema.replication_group_members JOIN
performance_schema.replication_group_member_stats USING(member_id));
END$$

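-- One-row view exposing the node state that ProxySQL 1.4's native Group Replication support queries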
CREATE VIEW gr_member_routing_candidate_status AS SELECT
sys.gr_member_in_primary_partition() as viable_candidate,
IF( (SELECT (SELECT GROUP_CONCAT(variable_value) FROM
performance_schema.global_variables WHERE variable_name IN ('read_only',
'super_read_only')) != 'OFF,OFF'), 'YES', 'NO') as read_only,
sys.gr_applier_queue_length() as transactions_behind, Count_Transactions_in_queue as 'transactions_to_cert' from performance_schema.replication_group_member_stats;$$

DELIMITER ;

 

So I load the functions and views into the cluster via the primary node (mysql_node1):

# Loading extra functions and views to sys schema on the cluster (using a primary node)

$ mysql -u root -p -h mysql_node1 < ./addition_to_sys.sql

 

This script will allow ProxySQL to monitor the state of the cluster nodes. For example:

node1> 
-- Status of the primary node
SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | NO        |                   0 |                    0 |
+------------------+-----------+---------------------+----------------------+
node2> 
-- Status of a secondary node
SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       |                   0 |                    0 |
+------------------+-----------+---------------------+----------------------+

 

On the cluster side, the last configuration step consists of creating the monitoring user that will be used by ProxySQL (yes, there is a relationship with the previous step :)).

Here again I use the primary of the group:

node1>
-- Create ProxySQL monitoring user inside the cluster
CREATE USER proxysqlmonitor@'%' IDENTIFIED BY 'Very-S3cr3t';

GRANT SELECT ON sys.* TO proxysqlmonitor@'%';
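
A quick optional check, to make sure the user has the expected privilege:

node1>
-- Verify the monitoring user's privileges
SHOW GRANTS FOR proxysqlmonitor@'%';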

 

Let’s configure ProxySQL now!

 

ProxySQL

Specifications

  • Proxy version: 1.4.4
  • Admin interface: 172.22.0.2:6032
  • Cluster connection: 172.22.0.2:3306

 

$ proxysql --version
ProxySQL version 1.4.4-139-ga51b040, codename Truls

$ service proxysql status
ProxySQL is running (58).

 

ProxySQL configuration can be done online, which is obviously a very good thing.

Let’s start by logging into the admin interface, on the default port 6032, with the default admin user and its default password: admin (!). Defaults that can and must be changed in real life.

$ mysql -u admin -p -P 6032 --protocol=tcp main

 

Servers configuration

First step, add the 3 cluster nodes to the proxy. For now they all go into hostgroup 2; once Group Replication support is configured, ProxySQL will move them into the appropriate hostgroups based on their state:

-- Add 3 nodes of the cluster into the mysql_servers table
proxy> 
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (2, 'mysql_node1', 3306), (2, 'mysql_node2', 3306), (2, 'mysql_node3', 3306);
proxy> 
select * from mysql_servers;
+--------------+-------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname    | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+-------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 2            | mysql_node1 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | mysql_node2 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | mysql_node3 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+-------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+

 

Hostgroups configuration

I gave a brief presentation of the ProxySQL 1.3 objects last time (in French): Configurer ProxySQL pour MySQL Group Replication.

Version 1.4 has some differences; the most notable in our context is the new table mysql_group_replication_hostgroups:

proxy> 
SHOW CREATE TABLE main.mysql_group_replication_hostgroups\G
*************************** 1. row ***************************
       table: mysql_group_replication_hostgroups
Create Table: CREATE TABLE mysql_group_replication_hostgroups (
    writer_hostgroup INT CHECK (writer_hostgroup>=0) NOT NULL PRIMARY KEY,
    backup_writer_hostgroup INT CHECK (backup_writer_hostgroup>=0 AND backup_writer_hostgroup<>writer_hostgroup) NOT NULL,
    reader_hostgroup INT NOT NULL CHECK (reader_hostgroup<>writer_hostgroup AND backup_writer_hostgroup<>reader_hostgroup AND reader_hostgroup>0),
    offline_hostgroup INT NOT NULL CHECK (offline_hostgroup<>writer_hostgroup AND offline_hostgroup<>reader_hostgroup AND backup_writer_hostgroup<>offline_hostgroup AND offline_hostgroup>=0),
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
    max_writers INT NOT NULL CHECK (max_writers >= 0) DEFAULT 1,
    writer_is_also_reader INT CHECK (writer_is_also_reader IN (0,1)) NOT NULL DEFAULT 0,
    max_transactions_behind INT CHECK (max_transactions_behind>=0) NOT NULL DEFAULT 0,
    comment VARCHAR,
    UNIQUE (reader_hostgroup),
    UNIQUE (offline_hostgroup),
    UNIQUE (backup_writer_hostgroup))

 

The best description I have found is available in my brilliant colleague’s blog post MySQL Group Replication: native support in ProxySQL

I quote

There are many new columns, let’s have a look at their meaning:

  • writer_hostgroup: the id of the hostgroup that will contain all the members that are writers
  • backup_writer_hostgroup: if the group is running in multi-primary mode, there are multiple writers (read_only=0); if the number of writers is larger than max_writers, the extra nodes are placed in this backup writer group
  • reader_hostgroup: the id of the hostgroup that will contain all the members in read_only
  • offline_hostgroup: the id of the hostgroup that will contain the hosts that are not online or not part of the group
  • active: when enabled, ProxySQL monitors the group and moves the servers into the appropriate hostgroups accordingly
  • max_writers: limits the number of nodes in the writer hostgroup when the group is in multi-primary mode
  • writer_is_also_reader: boolean value, 0 or 1; when enabled, a node in the writer hostgroup will also belong to the reader hostgroup
  • max_transactions_behind: if the value is greater than 0, it defines how far a node can lag in applying the transactions of the group, see this post for more info

by  @Lefred 🙂

 

Our configuration is as follows:

  • writer_hostgroup = 2
  • backup_writer_hostgroup = 4
  • reader_hostgroup = 3
  • offline_hostgroup = 1
  • active = 1
  • max_writers = 1
  • writer_is_also_reader = 1
  • max_transactions_behind = 0

Which gives us:

proxy> 
INSERT INTO mysql_group_replication_hostgroups (writer_hostgroup,backup_writer_hostgroup,reader_hostgroup,offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind) VALUES (2,4,3,1,1,1,1,0);

-- Save & load new configuration
SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL SERVERS TO RUNTIME;
proxy> 
select * from mysql_group_replication_hostgroups\G
*************************** 1. row ***************************
       writer_hostgroup: 2
backup_writer_hostgroup: 4
       reader_hostgroup: 3
      offline_hostgroup: 1
                 active: 1
            max_writers: 1
  writer_is_also_reader: 1
max_transactions_behind: 0
                comment: NULL

 

Supervision configuration

A little further up we created a monitoring user in the cluster (remember?).

It is this user that ProxySQL will use to keep track of the state of the various nodes of the cluster:

-- Set user name & password for monitoring module
SET mysql-monitor_username='proxysqlmonitor';
SET mysql-monitor_password='Very-S3cr3t';

-- Save & load new configuration
SAVE MYSQL VARIABLES TO DISK;
LOAD MYSQL VARIABLES TO RUNTIME;
proxy> 
select hostgroup_id, hostname, status  from runtime_mysql_servers;
+--------------+-------------+--------+
| hostgroup_id | hostname    | status |
+--------------+-------------+--------+
| 2            | mysql_node1 | ONLINE |
| 2            | mysql_node3 | ONLINE |
| 2            | mysql_node2 | ONLINE |
+--------------+-------------+--------+
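
To double-check that the monitor module actually reaches the nodes with these credentials, its connection log can be queried (a quick check, using the monitor tables as they exist in ProxySQL 1.4):

proxy>
-- Last connection attempts made by the monitor module (connect_error is NULL on success)
SELECT hostname, port, time_start_us, connect_error FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 3;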

 

Application user creation

The application connects to the MySQL server with a user and a password (in the best case :D).

This user must obviously exist in the different MySQL instances that form the cluster, with the correct MySQL privileges.

This user must also be created in ProxySQL:

proxy> 
INSERT INTO mysql_users (username, password, active, default_hostgroup, max_connections) VALUES ('app_user', 'app_pwd', 1, 2, 200);

-- Save & load new configuration
SAVE MYSQL USERS TO DISK;
LOAD MYSQL USERS TO RUNTIME;
proxy> 
select * from mysql_users\G
*************************** 1. row ***************************
              username: app_user
              password: app_pwd
                active: 1
               use_ssl: 0
     default_hostgroup: 2
        default_schema: NULL
         schema_locked: 0
transaction_persistent: 1
          fast_forward: 0
               backend: 1
              frontend: 1
       max_connections: 200

 

Change the listening port

By default the application will connect to the cluster through ProxySQL using port 6033 (in our case: 172.22.0.2:6033).

In real life, applications usually connect to MySQL using its default port, 3306.

It is therefore possible (not required) to change the ProxySQL listening port to 3306:

proxy> 
SET mysql-interfaces='0.0.0.0:3306;/tmp/proxysql.sock';

SAVE MYSQL VARIABLES TO DISK;
proxy> 
SHOW VARIABLES LIKE 'mysql-interfaces'\G
*************************** 1. row ***************************
Variable_name: mysql-interfaces
        Value: 0.0.0.0:3306;/tmp/proxysql.sock

 

Note: this last configuration change is not applied online; mysql-interfaces is one of the few ProxySQL variables that cannot be changed at runtime. In other words, I must restart ProxySQL for this change to be taken into account.

$ service proxysql restart
Shutting down ProxySQL: DONE!
Starting ProxySQL: DONE!

 

The ProxySQL configuration for MySQL Group Replication is now complete \o/

I’m taking this opportunity to introduce a new table in version 1.4: mysql_server_group_replication_log.

Useful for monitoring:

proxy> 
select * from mysql_server_group_replication_log order by time_start_us desc limit 3\G
*************************** 1. row ***************************
           hostname: mysql_node3
               port: 3306
      time_start_us: 1515079109822616
    success_time_us: 1782
   viable_candidate: YES
          read_only: YES
transactions_behind: 0
              error: NULL
*************************** 2. row ***************************
           hostname: mysql_node2
               port: 3306
      time_start_us: 1515079109822292
    success_time_us: 1845
   viable_candidate: YES
          read_only: YES
transactions_behind: 0
              error: NULL
*************************** 3. row ***************************
           hostname: mysql_node1
               port: 3306
      time_start_us: 1515079109821971
    success_time_us: 1582
   viable_candidate: YES
          read_only: NO
transactions_behind: 0
              error: NULL

 

Playtime

As a reminder, the workflow is as follows:

  • The app connects to ProxySQL (i.e. it only sees and knows the proxy)
  • ProxySQL receives the transactions and redirects them to the primary node of the MySQL Group Replication cluster
  • In case of a primary node crash / shutdown:
    • MySQL Group Replication elects a new primary
    • ProxySQL identifies the new primary and sends the transactions to it

 

The application must therefore be configured to connect to ProxySQL. For example, if my app is Drupal, then my settings.php file will look like the following code snippet:

...
$databases['default']['default'] = array (
  'database' => 'drupal',
  'username' => 'app_user',
  'password' => 'app_pwd',
  'prefix' => 'drupal_',
  'host' => '172.22.0.2',
  'port' => '3306',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
);
...
  • User: app_user
  • Password: app_pwd
  • Port: 3306 (ProxySQL)
  • Host: 172.22.0.2 (ProxySQL)

QED!

 

Let’s simulate all this on the command line!

My app is the mysql command-line tool. To simplify my commands, I first create a configuration file for the mysql client containing the following information (by the way, storing passwords in clear text in a file is not recommended):

$ cat /tmp/temp.cnf
[mysql]
user=app_user
password=app_pwd
protocol=tcp

 

My servers are configured with the report_host variable set (see http://dasini.net/blog/2016/11/08/deployer-un-cluster-mysql-group-replication/).

$ for x in {1..5}; do mysql --defaults-file=/tmp/temp.cnf -h 172.22.0.2 -P 3306  -BNe"SELECT @@report_host;" ; done;
mysql_node1
mysql_node1
mysql_node1
mysql_node1
mysql_node1

The cluster primary node is still mysql_node1 (no database failover has happened since the beginning of this article, but we’ll get there).

 

Now let’s test the failover with a slightly more complex example.

First, let’s create the test.poc table, using the InnoDB storage engine:

proxy>
CREATE SCHEMA test;

CREATE TABLE test.poc (
id tinyint unsigned NOT NULL AUTO_INCREMENT,
host varchar(11),
time timestamp,
PRIMARY KEY (id)
) ENGINE=InnoDB;

 

The test application will run the following 2 queries:

  • INSERT INTO test.poc (host) VALUES (@@report_host);
  • SELECT * FROM test.poc;

 

With these 2 queries, the little script below and a timely kill -9, we will be able to follow the routing process (ProxySQL) and the database failover (MySQL Group Replication).

$ while true; do
>   mysql --defaults-file=/tmp/temp.cnf -h 172.22.0.2 -P 3306 -BNe"INSERT INTO test.poc (host) VALUES (@@report_host); SELECT * FROM test.poc;";
>   echo;
>   sleep 4;
> done

2	mysql_node1	2018-01-04 16:42:31 <=> Inserted row #1

2	mysql_node1	2018-01-04 16:42:31
9	mysql_node1	2018-01-04 16:42:35 <=> Inserted row #2

2	mysql_node1	2018-01-04 16:42:31
9	mysql_node1	2018-01-04 16:42:35
11	mysql_node3	2018-01-04 16:42:43 <=> Inserted row #3 => Failover! New primary is mysql_node3

2	mysql_node1	2018-01-04 16:42:31
9	mysql_node1	2018-01-04 16:42:35
11	mysql_node3	2018-01-04 16:42:43
18	mysql_node3	2018-01-04 16:42:47 <=> Inserted row #4

Transactions 1 & 2 (ids 2 and 9) are run on mysql_node1.

From the third transaction onwards (ids 11 and 18), they are run on the new primary, mysql_node3, because mysql_node1 crashed.
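
On the ProxySQL side, the same failover can be observed in the runtime configuration: the crashed node leaves the writer hostgroup (2) and the new primary takes its place. A sketch of the check (the exact output depends on when you run it and on the state of mysql_node1):

proxy>
-- See how ProxySQL redistributed the nodes across the hostgroups
SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers ORDER BY hostgroup_id;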

 

Let’s finish this tutorial with pictures.

 

MySQL Enterprise Monitor

As stated in the introduction, a database architecture should rely on 3 pillars: Monitoring / Backup process / High Availability.

Below is a brief introduction to MySQL Enterprise Monitor (MEM), the MySQL monitoring tool, available with the commercial editions of MySQL (MySQL Enterprise Edition). MEM enables monitoring of MySQL instances and their hosts, notification of potential issues and problems, and advice on how to correct them. You can also monitor and troubleshoot all types of MySQL replication, including MySQL Group Replication.

If you want to try our Enterprise tools, please click here.

Below are some screenshots of the various states of the cluster supervised by MySQL Enterprise Monitor:

[Screenshots: cluster states in MySQL Enterprise Monitor]

 

 

The final word(s)

You already know MySQL Replication and MySQL semi-synchronous replication.

MySQL Group Replication is an additional tool that, above all, brings the built-in automatic database failover that was missing in MySQL.

These architectures are most of the time used together with a proxy, and ProxySQL is without a doubt one of the best companions for MySQL today.

 

Note: MySQL provides MySQL InnoDB Cluster, which combines MySQL technologies (MySQL Group Replication, MySQL Router and MySQL Shell) to enable you to create highly available clusters of MySQL server instances.
Tutorial (in French): Tutoriel – Déployer MySQL 5.7 InnoDB Cluster

 

 

References

dasini.net

MySQL Group Replication

ProxySQL

 

Thanks for using MySQL!

 
