Testing MySQL InnoDB Cluster

March 13, 2017

MySQL InnoDB Cluster is MySQL's (upcoming) out-of-the-box HA solution (not to be confused with MySQL NDB Cluster). The product is made up of three components:

  • MySQL Group Replication
    • A multi-master replication plugin, with conflict handling and automatic failover.
  • MySQL Router
    • A lightweight, high-performance middleware that provides transparent routing between the application and the cluster.
  • MySQL Shell
    • An interactive JavaScript, Python and SQL client used to administer the cluster.

[Figure: MySQL InnoDB Cluster architecture]
MySQL Group Replication is GA and can therefore be used on its own, outside of MySQL InnoDB Cluster (see Déployer un cluster MySQL Group Replication).

That is not yet the case for the other two components: MySQL Shell and MySQL Router are still Release Candidates (RC), so using them in a production environment is not recommended at this time.

 

Note:
This article covers MySQL InnoDB Cluster, the native HA solution for MySQL Server, which must not be confused with MySQL NDB Cluster.

 

Installing MySQL InnoDB Cluster

The versions used in this article are:

  • MySQL Server: 5.7.17
  • MySQL Shell: 1.0.8-rc
  • MySQL Router: 2.1.2-rc

To use MySQL InnoDB Cluster, you simply need to install these three components:
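The article does not show the installation commands themselves. As an illustration, on an RPM-based system with the MySQL community Yum repository enabled, it might look like this (package and repository names are assumptions, not from the article — adjust for your platform; the RC builds of Shell and Router were shipped in a preview sub-repository at the time):

```shell
# Hedged sketch: package names assume the MySQL community Yum repository.
# In early 2017 the RC packages of MySQL Shell and MySQL Router lived in a
# "preview" sub-repository, which may need to be enabled first.
sudo yum install -y mysql-community-server
sudo yum install -y mysql-shell
sudo yum install -y mysql-router
```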

 

Deploying the test instances

MySQL Shell makes it easy to deploy sandbox MySQL test instances.

Connecting with MySQL Shell:

$ mysqlsh
Welcome to MySQL Shell 1.0.8-rc

Copyright (c) 2016, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type '\help', '\h' or '\?' for help, type '\quit' or '\q' to exit.

Currently in JavaScript mode. Use \sql to switch to SQL mode and execute queries.
mysql-js>

 

Deploy the first MySQL instance that will be part of our cluster:

  • Host: localhost
  • Port: 3310
mysql-js> dba.deploySandboxInstance(3310)
A new MySQL sandbox instance will be created on this host in 
/home/daz/mysql-sandboxes/3310

Please enter a MySQL root password for the new instance: 
Deploying new MySQL instance...

Instance localhost:3310 successfully deployed and started.
Use shell.connect('root@localhost:3310'); to connect to the instance.

Just enter the root password, and the instance is created in ~/mysql-sandboxes:

$ ls ~/mysql-sandboxes/3310/
3310.pid my.cnf mysqld mysqld.sock mysqld.sock.lock mysql-files mysqlx.sock mysqlx.sock.lock sandboxdata start.sh stop.sh

 

Let's create two more instances for the cluster:

  • Host: localhost
  • Ports: 3320 & 3330
mysql-js> dba.deploySandboxInstance(3320)
A new MySQL sandbox instance will be created on this host in 
/home/daz/mysql-sandboxes/3320

Please enter a MySQL root password for the new instance: 
Deploying new MySQL instance...

Instance localhost:3320 successfully deployed and started.
Use shell.connect('root@localhost:3320'); to connect to the instance.

mysql-js> dba.deploySandboxInstance(3330)
A new MySQL sandbox instance will be created on this host in 
/home/daz/mysql-sandboxes/3330

Please enter a MySQL root password for the new instance: 
Deploying new MySQL instance...

Instance localhost:3330 successfully deployed and started.
Use shell.connect('root@localhost:3330'); to connect to the instance.

We now have three MySQL instances in our sandbox.

 

Note that if you already have an active MySQL Group Replication cluster, MySQL InnoDB Cluster can adopt it. That will be the subject of a future article.

 

Managing the instances

Other methods are available to manage the instances:

  • Stop
    • dba.stopSandboxInstance()
  • Start
    • dba.startSandboxInstance()
  • Kill: simulates a node crash
    • dba.killSandboxInstance()
  • Delete: completely removes the instance from the sandbox
    • dba.deleteSandboxInstance()

 

Example – Stopping and deleting an instance

mysql-js> dba.stopSandboxInstance(3320)
The MySQL sandbox instance on this host in 
/.../mysql-sandboxes/3320 will be stopped

Please enter the MySQL root password for the instance 'localhost:3320':

Stopping MySQL instance...

Instance localhost:3320 successfully stopped.

mysql-js> dba.deleteSandboxInstance(3320);
The MySQL sandbox instance on this host in 
/.../mysql-sandboxes/3320 will be deleted


Deleting MySQL instance...

Instance localhost:3320 successfully deleted.

 

Help is available in MySQL Shell via dba.help():

mysql-js> dba.help()

The global variable 'dba' is used to access the MySQL AdminAPI functionality
and perform DBA operations. It is used for managing MySQL InnoDB clusters.

The following properties are currently supported.

- verbose Enables verbose mode on the Dba operations.


The following functions are currently supported.

- checkInstanceConfiguration
- configureLocalInstance Validates and configures an instance for
cluster usage.
- createCluster Creates a MySQL InnoDB cluster.
- deleteSandboxInstance Deletes an existing MySQL Server instance on
localhost.
- deploySandboxInstance Creates a new MySQL Server instance on
localhost.
- dropMetadataSchema Drops the Metadata Schema.
- getCluster Retrieves a cluster from the Metadata Store.
- help Provides help about this class and it's
members
- killSandboxInstance Kills a running MySQL Server instance on
localhost.
- rebootClusterFromCompleteOutage Reboots a cluster from complete outage.
- resetSession Sets the session object to be used on the
Dba operations.
- startSandboxInstance Starts an existing MySQL Server instance on
localhost.
- stopSandboxInstance Stops a running MySQL Server instance on
localhost.

For more help on a specific function use: dba.help('<functionName>')

e.g. dba.help('deploySandboxInstance')

 

Checking the instance configuration

A simple way to find out whether the instances have the configuration required to join the cluster is to use dba.checkInstanceConfiguration():

mysql-js> dba.checkInstanceConfiguration('root@localhost:3310')
Please provide the password for 'root@localhost:3310': 
Validating instance...

The instance 'localhost:3310' is valid for Cluster usage
{
"status": "ok"
}
mysql-js> dba.checkInstanceConfiguration('root@localhost:3320')
Please provide the password for 'root@localhost:3320': 
Validating instance...

The instance 'localhost:3320' is valid for Cluster usage
{
"status": "ok"
}
mysql-js> dba.checkInstanceConfiguration('root@localhost:3330')
Please provide the password for 'root@localhost:3330': 
Validating instance...

The instance 'localhost:3330' is valid for Cluster usage
{
"status": "ok"
}

 

Creating the cluster

We now have three standalone MySQL instances, configured and ready to be turned into a distributed database.

I am therefore going to connect to one of my instances:

  • User: root
  • Host: localhost
  • Port: 3310
mysql-js> \c root@localhost:3310
Creating a Session to 'root@localhost:3310'
Enter password: 
Classic Session successfully established. No default schema selected.

 

Then start the actual creation of my MySQL InnoDB Cluster instance, named testcluster:

mysql-js> var cluster=dba.createCluster('testcluster')
A new InnoDB cluster will be created on instance 'root@localhost:3310'.

Creating InnoDB cluster 'testcluster' on 'root@localhost:3310'...
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

For the moment I have a one-node cluster. Not yet highly available, admittedly, but it's a start 🙂

The status() method confirms it:

mysql-js> cluster.status()
{
    "clusterName": "testcluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "localhost:3310", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }
}

 

Before adding some companions to it, let's check that all the instances have the same set of executed transactions:

mysql-js> cluster.checkInstanceState('root@localhost:3310')
Please provide the password for 'root@localhost:3310': 
Analyzing the instance replication state...

The instance 'localhost:3310' is valid for the cluster.
The instance is fully recoverable.

{
"reason": "recoverable", 
"state": "ok"
}

mysql-js> cluster.checkInstanceState('root@localhost:3320')
Please provide the password for 'root@localhost:3320': 
Analyzing the instance replication state...

The instance 'localhost:3320' is valid for the cluster.
The instance is new to Group Replication.

{
"reason": "new", 
"state": "ok"
}

mysql-js> cluster.checkInstanceState('root@localhost:3330')
Please provide the password for 'root@localhost:3330': 
Analyzing the instance replication state...

The instance 'localhost:3330' is valid for the cluster.
The instance is new to Group Replication.

{
"reason": "new", 
"state": "ok"
}

Perfect!

 

Adding the other nodes

addInstance(), aptly named:

Adding localhost:3320

mysql-js> cluster.addInstance('root@localhost:3320')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@localhost:3320': 
Adding instance to the cluster ...

The instance 'root@localhost:3320' was successfully added to the cluster.

Adding localhost:3330

mysql-js> cluster.addInstance('root@localhost:3330')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@localhost:3330': 
Adding instance to the cluster ...

The instance 'root@localhost:3330' was successfully added to the cluster.

 

Our cluster's architecture is now:

mysql-js> cluster.status()
{
    "clusterName": "testcluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "localhost:3310", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "localhost:3320": {
                "address": "localhost:3320", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "localhost:3330": {
                "address": "localhost:3330", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }
}

localhost:3310 is the primary, so it is the only node that accepts writes. The other two members are accessible in read-only mode.

This is the default behavior of MySQL Group Replication, and therefore of MySQL InnoDB Cluster. To run the cluster in multi-master mode, you must specify it when creating the cluster (dba.createCluster()).
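For reference, in the 1.0.x AdminAPI the multi-master topology is requested through the second argument of dba.createCluster(). A hedged sketch — the option name is an assumption based on this Shell generation (it was renamed in later versions), so check dba.help('createCluster') on your build:

```javascript
// Run inside mysqlsh (JavaScript mode), connected to the seed instance.
// multiMaster is the 1.0.x option name; some builds also require
// force: true to confirm the choice of a multi-master topology.
var cluster = dba.createCluster('testcluster', {multiMaster: true, force: true})
```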

 

The information shown by the commands run so far is persistent. It is stored on the cluster nodes themselves, in the mysql_innodb_cluster_metadata schema:

$ mysql --protocol=tcp -uroot -p -P3310

mysql> show schemas;
+-------------------------------+
| Database                      |
+-------------------------------+
| information_schema            |
| mysql                         |
| mysql_innodb_cluster_metadata |
| performance_schema            |
| sys                           |
+-------------------------------+
5 rows in set (0,00 sec)


mysql> show tables in mysql_innodb_cluster_metadata;
+-----------------------------------------+
| Tables_in_mysql_innodb_cluster_metadata |
+-----------------------------------------+
| clusters                                |
| hosts                                   |
| instances                               |
| replicasets                             |
| routers                                 |
| schema_version                          |
+-----------------------------------------+
6 rows in set (0,00 sec)
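These tables can be queried directly from any member. For example (the column names are an assumption based on this version of the metadata schema — inspect the tables with SHOW CREATE TABLE first):

```shell
# Hedged example: list the instances registered in the cluster metadata.
# Column names (instance_name, role) are assumed for this schema version.
mysql --protocol=tcp -uroot -p -P3310 \
  -e "SELECT instance_name, role FROM mysql_innodb_cluster_metadata.instances;"
```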

 

Deploying MySQL Router

MySQL Router is already installed, so let's configure it to interface with our cluster:

$ mysqlrouter --bootstrap root@localhost:3310 --directory routerSandbox
Please enter MySQL password for root:

Bootstrapping system MySQL Router instance...
MySQL Router has now been configured for the InnoDB cluster 'testcluster'.

The following connection information can be used to connect to the cluster.

Classic MySQL protocol connections to cluster 'testcluster':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447

X protocol connections to cluster 'testcluster':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470

Four TCP ports have been configured, two of them for classic MySQL connections:

  • 6446: reads/writes, routed to the primary node
  • 6447: reads only, routed to the secondary nodes (round-robin)

And their X protocol counterparts (64460 & 64470), for using MySQL as a NoSQL document store.
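The round-robin behavior of the read-only port can be pictured with a toy scheduler (illustration only — this is not MySQL Router code, just a sketch of the scheduling policy):

```javascript
// Minimal round-robin scheduler over the read-only destinations:
// each call hands back the next secondary, cycling through the list.
function roundRobin(destinations) {
  let i = 0;
  return () => destinations[i++ % destinations.length];
}

const nextSecondary = roundRobin(["localhost:3320", "localhost:3330"]);
console.log(nextSecondary()); // localhost:3320
console.log(nextSecondary()); // localhost:3330
console.log(nextSecondary()); // localhost:3320
```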

 

The bootstrap generated a configuration file for MySQL Router:

$ view ~/routerSandbox/mysqlrouter.conf

# File automatically generated during MySQL Router bootstrap
[DEFAULT]
name=routerSandbox
logging_folder=/home/xxxx/routerSandbox/log
runtime_folder=/home/xxxx/routerSandbox/run
data_folder=/home/xxxx/routerSandbox/data
keyring_path=/home/xxxx/routerSandbox/data/keyring
master_key_path=/home/xxxx/routerSandbox/mysqlrouter.key

[logger]
level = INFO

[metadata_cache:testcluster]
router_id=6
bootstrap_server_addresses=mysql://localhost:3310,mysql://localhost:3320,mysql://localhost:3330
user=mysql_router6_qy5a3dmn5y68
metadata_cluster=testcluster
ttl=300

[routing:testcluster_default_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://testcluster/default?role=PRIMARY
mode=read-write
protocol=classic

[routing:testcluster_default_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://testcluster/default?role=SECONDARY
mode=read-only
protocol=classic
...

 

Using MySQL Router

Obviously, it has to be started first:

$ ~/routerSandbox/start.sh

 

A quick look at the logs:

$ tail -f ~/routerSandbox/log/mysqlrouter.log
 
2017-03-10 19:03:36 INFO    [7f81e420e700] Starting Metadata Cache
2017-03-10 19:03:36 INFO    [7f81e420e700] Connections using ssl_mode 'PREFERRED'
2017-03-10 19:03:36 INFO    [7f81e3a0d700] [routing:testcluster_default_ro] started: listening on 0.0.0.0:6447; read-only
2017-03-10 19:03:36 INFO    [7f81e320c700] [routing:testcluster_default_rw] started: listening on 0.0.0.0:6446; read-write
2017-03-10 19:03:36 INFO    [7f81e2a0b700] [routing:testcluster_default_x_ro] started: listening on 0.0.0.0:64470; read-only
2017-03-10 19:03:36 INFO    [7f81e220a700] [routing:testcluster_default_x_rw] started: listening on 0.0.0.0:64460; read-write
2017-03-10 19:03:36 INFO    [7f81e420e700] Connected with metadata server running on 127.0.0.1:3310
2017-03-10 19:03:36 INFO    [7f81e420e700] Changes detected in cluster 'testcluster' after metadata refresh
2017-03-10 19:03:36 INFO    [7f81e420e700] Metadata for cluster 'testcluster' has 1 replicasets:
2017-03-10 19:03:36 INFO    [7f81e420e700] 'default' (3 members, single-master)
2017-03-10 19:03:36 INFO    [7f81e420e700]     localhost:3310 / 33100 - role=HA mode=RW
2017-03-10 19:03:36 INFO    [7f81e420e700]     localhost:3320 / 33200 - role=HA mode=RO
2017-03-10 19:03:36 INFO    [7f81e420e700]     localhost:3330 / 33300 - role=HA mode=RO
2017-03-10 19:03:36 INFO    [7f81c37fe700] Connected with metadata server running on 127.0.0.1:3310

 

There we go: MySQL InnoDB Cluster is configured and ready to be tested!

The application should connect to port 6446 (writes and reads, routed to the primary node). Reads can also be directed to port 6447.

 

Tests

Read port

$ watch -td -n1 "mysql -uroot -P6447 --protocol=tcp -pxxx -BNe'SELECT @@port;' 2> /dev/null"

=> displays: 3320, 3330, 3320, 3330, …

 

Write port

$ watch -td -n1 "mysql -uroot -P6446 --protocol=tcp -pxxx -BNe'SELECT @@port;' 2> /dev/null"

=> displays: 3310, 3310, 3310, …

 

Automatic failover

The dba.killSandboxInstance() method lets us simulate a crash of the primary server and watch the cluster's automatic failover at work.

The two sessions below run in parallel:

Session 1

$ watch -td -n1 "mysql -uroot -P6446 --protocol=tcp -pxxx -BNe'SELECT @@port;' 2> /dev/null"

 

Session 2

mysql-js> dba.killSandboxInstance(3310)
The MySQL sandbox instance on this host in 
/.../mysql-sandboxes/3310 will be killed


Killing MySQL instance...

Instance localhost:3310 successfully killed.

=> Session 1 displays 3310, then 3320 after the database failover.
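After the failover test, the killed node can be brought back and re-joined to the group. A sketch using the sandbox and AdminAPI methods seen above (run in mysqlsh; `cluster` is obtained again via dba.getCluster() after connecting to the new primary):

```javascript
// Restart the killed sandbox instance.
dba.startSandboxInstance(3310)
// Make it rejoin the group; it comes back as a read-only secondary.
cluster.rejoinInstance('root@localhost:3310')
// localhost:3310 should now show as ONLINE / R/O.
cluster.status()
```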

 

 

To go further, I invite you to watch lefred's video: MySQL InnoDB Cluster: MySQL Shell starter guide.

 

Documentation

 

Thanks for using MySQL!
