
{"id":2549,"date":"2018-08-21T09:15:55","date_gmt":"2018-08-21T08:15:55","guid":{"rendered":"http:\/\/dasini.net\/blog\/?p=2549"},"modified":"2018-08-30T13:34:02","modified_gmt":"2018-08-30T12:34:02","slug":"tutoriel-deployer-mysql-5-7-innodb-cluster","status":"publish","type":"post","link":"https:\/\/dasini.net\/blog\/2018\/08\/21\/tutoriel-deployer-mysql-5-7-innodb-cluster\/","title":{"rendered":"Tutoriel \u2013 D\u00e9ployer MySQL 5.7 InnoDB Cluster"},"content":{"rendered":"<p>Cet article remplace le pr\u00e9c\u00e9dent tuto :\u00a0<a href=\"http:\/\/dasini.net\/blog\/2017\/05\/11\/tutoriel-deployer-mysql-innodb-cluster\/\" target=\"_blank\" rel=\"noopener\">Tutoriel \u2013 D\u00e9ployer MySQL innoDB Cluster<\/a>.<\/p>\n<p>Si tu utilises MySQL <strong>8.0<\/strong>, alors lit plut\u00f4t ce tuto : <a href=\"http:\/\/dasini.net\/blog\/2018\/08\/30\/tutoriel-deployer-mysql-8-0-innodb-cluster\/\" target=\"_blank\" rel=\"noopener\">Tutoriel \u2013 D\u00e9ployer MySQL 8.0 InnoDB Cluster<\/a>.<\/p>\n<p>&nbsp;<\/p>\n<p>Je re\u00e7ois pas mal de questions de clients et d&rsquo;utilisateurs de MySQL 5.7, j&rsquo;esp\u00e8re donc que ce post t&rsquo;apportera l&rsquo;essentiel des r\u00e9ponses et bonnes pratiques pour te permettre de d\u00e9ployer un cluster InnoDB avec MySQL 5.7.<\/p>\n<p>De plus, les nouvelles versions de <a href=\"https:\/\/dev.mysql.com\/downloads\/router\/\" target=\"_blank\" rel=\"noopener\">MySQL Router<\/a> et de <a href=\"https:\/\/dev.mysql.com\/downloads\/shell\/\" target=\"_blank\" rel=\"noopener\">MySQL Shell<\/a> am\u00e8nent de sensibles am\u00e9liorations qu&rsquo;il faut que je te montre \u00e0 tout prix \ud83d\ude42<\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p>L&rsquo;un des principaux besoins de mes clients est la\u00a0<strong>Haute Disponibilit\u00e9 avec MySQL<\/strong>. 
In this article we will see, tutorial style, how to deploy and manage a MySQL 5.7 cluster "from scratch", using the all-in-one HA solution: **MySQL (5.7) InnoDB Cluster**.

*Note: This article covers MySQL InnoDB Cluster, the native HA of MySQL Server (via the Group Replication plugin) with the [InnoDB](https://dev.mysql.com/doc/refman/5.7/en/innodb-storage-engine.html) storage engine, a solution not to be confused with [MySQL **NDB** Cluster](https://dev.mysql.com/doc/refman/5.7/en/mysql-cluster.html) (NDB storage engine).*

## The context

3 standalone MySQL instances, &lt;spoiler alert&gt; which, thanks to the [MySQL Group Replication](https://dev.mysql.com/doc/refman/5.7/en/group-replication.html) plugin, will turn into a distributed database &lt;/spoiler alert&gt;:

- MySQL instance 1 (mysql_5.7_node1): 172.18.0.11; cluster node 1
- MySQL instance 2 (mysql_5.7_node2): 172.18.0.12; cluster node 2
- MySQL instance 3 (mysql_5.7_node3): 172.18.0.13; cluster node 3

1 application instance: 192.168.1.11; MySQL Router **+** my application.

*Note: I use the [MySQL Server Docker image](https://hub.docker.com/r/mysql/mysql-server/) supported by Oracle's MySQL team.*

*Note: I do not cover MySQL Router redundancy in this article. Several scenarios are possible; I recommend reading [this](https://lefred.be/content/mysql-innodb-cluster-is-the-router-a-single-point-of-failure/), [this](https://lefred.be/content/mysql-router-ha-with-pacemaker/) and [that](https://lefred.be/content/mysql-router-ha-with-keepalived/).*

MySQL Shell is installed on all the instances.

As for software versions, they are the most recent available to date (a scorching August 2018 day):

- [MySQL Server **5.7.23** GA](https://dev.mysql.com/downloads/mysql/5.7.html#downloads)
- [MySQL Router **8.0.12** GA](https://dev.mysql.com/downloads/router/)
- [MySQL Shell **8.0.12** GA](https://dev.mysql.com/downloads/shell/)

*Note: In this article I use the latest GA of MySQL 5.7. I will publish another tutorial for MySQL 8.0.*
*However, for MySQL Router and MySQL Shell, **ALWAYS** use the **latest** version (the 8.0 branch).*

To recap our architecture, a picture being worth (at least) 1000 words:

![MySQL InnoDB Cluster PoC Architecture](https://i0.wp.com/dasini.net/blog/wp-content/uploads/MySQL_InnoDB_Cluster_Production_post2.png?resize=600%2C837)

## Check the instance configuration

The first step is to make sure the MySQL instances are correctly configured for [MySQL Group Replication](https://dev.mysql.com/doc/refman/5.7/en/group-replication.html), the **high availability** layer of our architecture. Note that it is preferable to provision instances already correctly configured for MySQL Group Replication ([as detailed in this article](http://dasini.net/blog/2016/11/08/deployer-un-cluster-mysql-group-replication/)).

*Note: I use the root account to configure the cluster, but this is not mandatory. It is indeed possible to create one (or several) dedicated user accounts with the appropriate privileges (full access to the InnoDB Cluster metadata tables, plus administration rights on the MySQL instance). More info [here (paragraph "User Privileges")](https://dev.mysql.com/doc/refman/5.7/en/mysql-innodb-cluster-production-deployment.html#mysql-innodb-cluster-user-privileges).*

The configuration check is done with MySQL Shell and the [*dba.**checkInstanceConfiguration**()*](https://dev.mysql.com/doc/dev/mysqlsh-api-javascript/8.0/classmysqlsh_1_1dba_1_1_dba.html#aef7387e5a7be5b449af2559bb4e66099) method:

```
$ mysqlsh --uri root@172.18.0.11
...
Server version: 5.7.23 MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
MySQL Shell 8.0.12
...

JS> dba.checkInstanceConfiguration('root@172.18.0.11:3306')
Please provide the password for 'root@172.18.0.11:3306': 
Validating MySQL instance at 172.18.0.11:3306 for use in an InnoDB cluster...

This instance reports its own address as 51306ade1992
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Note: verifyMyCnf option was not given so only dynamic configuration will be verified.

Some configuration options need to be fixed:
+----------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                         | Current Value | Required Value | Note                                             |
+----------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum                  | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency         | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                        | OFF           | ON             | Update read-only variable and restart the server |
| log_bin                          | 0             | 1              | Update read-only variable and restart the server |
| log_slave_updates                | 0             | ON             | Update read-only variable and restart the server |
| master_info_repository           | FILE          | TABLE          | Update read-only variable and restart the server |
| relay_log_info_repository        | FILE          | TABLE          | Update read-only variable and restart the server |
| server_id                        | 0             | <unique ID>    | Update read-only variable and restart the server |
| transaction_write_set_extraction | OFF           | XXHASH64       | Update read-only variable and restart the server |
+----------------------------------+---------------+----------------+--------------------------------------------------+

The following variable needs to be changed, but cannot be done dynamically: 'log_bin'
Please use the dba.configureInstance() command to repair these issues.

{
    "config_errors": [
        {
            "action": "server_update", 
            "current": "CRC32", 
            "option": "binlog_checksum", 
            "required": "NONE"
        },
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "enforce_gtid_consistency", 
            "required": "ON"
        },
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "gtid_mode", 
            "required": "ON"
        },
        {
            "action": "restart", 
            "current": "0", 
            "option": "log_bin", 
            "required": "1"
        },
        {
            "action": "restart", 
            "current": "0", 
            "option": "log_slave_updates", 
            "required": "ON"
        },
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "master_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "relay_log_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "0", 
            "option": "server_id", 
            "required": "<unique ID>"
        },
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "transaction_write_set_extraction", 
            "required": "XXHASH64"
        }
    ], 
    "errors": [], 
    "status": "error"
}
```

In my case, with the default MySQL 5.7 installation on Ubuntu (the [Docker image](https://hub.docker.com/r/mysql/mysql-server/)), configuration-wise... well, I have everything left to do 🙂

The method returns a JSON document (handy for automation) with the list of tasks to perform to become compliant... So let's configure!

I have two options:

1. Prepare my instances "manually" (this task can of course be automated, e.g. with Ansible, Puppet, Chef, ...), as explained in the article [how to configure a group](http://dasini.net/blog/2016/11/08/deployer-un-cluster-mysql-group-replication/).
2. Connect to each instance locally and use the ***configureLocalInstance()*** method.

And then I must not forget to restart the instances 🙂

Let's go wild! Option 2 it is:

```
JS> dba.configureLocalInstance('root@172.18.0.11:3306')
Please provide the password for 'root@172.18.0.11:3306': 
Configuring MySQL instance at 172.18.0.11:3306 for use in an InnoDB cluster...

This instance reports its own address as mysql_5.7_node1
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Some configuration options need to be fixed:
... <same configuration table as above> ...

The following variable needs to be changed, but cannot be done dynamically: 'log_bin'
WARNING: Cannot update configuration file for a remote target instance.
ERROR: Unable to change MySQL configuration.
MySQL server configuration needs to be updated, but neither remote nor local configuration is possible.
Please run this command locally, in the same host as the MySQL server being configured, and pass the path to its configuration file through the mycnfPath option.
Dba.configureLocalInstance: Unable to update configuration (RuntimeError)
```

Oops!!! *dba.configureLocalInstance* only works locally, that is, when connected to the MySQL instance's host machine (which is a good thing). So after logging in to host 172.18.0.11:

```
daz@172.18.0.11:~$ mysqlsh 
MySQL Shell 8.0.12

Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type '\help' or '\?' for help; '\quit' to exit.


JS> dba.configureLocalInstance('root@localhost:3306')
Please provide the password for 'root@localhost:3306': 
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as mysql_5.7_node1
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Some configuration options need to be fixed:
... <same configuration table as above> ...

The following variable needs to be changed, but cannot be done dynamically: 'log_bin'

Detecting the configuration file...
Default file not found at the standard locations.
Please specify the path to the MySQL configuration file: /etc/my.cnf
Do you want to perform the required configuration changes? [y/n]: y
Configuring instance...
The instance 'localhost:3306' was configured for cluster usage.
MySQL server needs to be restarted for configuration changes to take effect.
```

*Note: Make sure you have the privileges needed to update the MySQL configuration file.*

The settings added to the configuration file are appended at the end of the file:

```
daz@172.18.0.11:~$ cat /etc/my.cnf

[mysqld]
... <data previously in the file> ...

log_slave_updates = ON
server_id = 1467421716
relay_log_info_repository = TABLE
master_info_repository = TABLE
transaction_write_set_extraction = XXHASH64
binlog_format = ROW
report_port = 3306
binlog_checksum = NONE
enforce_gtid_consistency = ON
log_bin
gtid_mode = ON
```

172.18.0.11 is configured!

After **restarting** the MySQL instance, the output of [*checkInstanceConfiguration*](https://dev.mysql.com/doc/dev/mysqlsh-api-javascript/8.0/classmysqlsh_1_1dba_1_1_dba.html#af99b79d538a656af71337c7fc70eddb9) is much less anxiety-inducing:

```
JS> dba.checkInstanceConfiguration('root@172.18.0.11:3306')
Please provide the password for 'root@172.18.0.11:3306': 
Validating MySQL instance at 172.18.0.11:3306 for use in an InnoDB cluster...

This instance reports its own address as mysql_5.7_node1
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Note: verifyMyCnf option was not given so only dynamic configuration will be verified.
Instance configuration is compatible with InnoDB cluster

The instance '172.18.0.11:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}
```

OK! The member is ready to be part of a group.

The *same procedure must be applied to the other MySQL instances*.

... <a few configuration commands> ...

And I end up with the following result:

```
JS > dba.checkInstanceConfiguration('root@172.18.0.12:3306')
Please provide the password for 'root@172.18.0.12:3306': 
Validating MySQL instance at 172.18.0.12:3306 for use in an InnoDB cluster...

This instance reports its own address as mysql_5.7_node2
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Note: verifyMyCnf option was not given so only dynamic configuration will be verified.
Instance configuration is compatible with InnoDB cluster

The instance '172.18.0.12:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}



JS > dba.checkInstanceConfiguration('root@172.18.0.13:3306')
Please provide the password for 'root@172.18.0.13:3306': 
Validating MySQL instance at 172.18.0.13:3306 for use in an InnoDB cluster...

This instance reports its own address as mysql_5.7_node3
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Note: verifyMyCnf option was not given so only dynamic configuration will be verified.
Instance configuration is compatible with InnoDB cluster

The instance '172.18.0.13:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}
```

All good!

## Create the cluster

Once the 3 instances are correctly configured, the next step is to create the cluster with [***createCluster***](https://dev.mysql.com/doc/dev/mysqlsh-api-javascript/8.0/classmysqlsh_1_1dba_1_1_dba.html#a12f040129a2c4c301392dd69611da0c8).
This method is run on the first member, the MySQL instance at 172.18.0.11, and bootstraps the cluster:

```
JS> \c root@172.18.0.11
Creating a Session to 'root@172.18.0.11'
Enter password: 
...

JS> var cluster = dba.createCluster('pocCluster', {ipWhitelist: "172.18.0.0/16"});
A new InnoDB cluster will be created on instance 'root@172.18.0.11:3306'.

Validating instance at 172.18.0.11:3306...

This instance reports its own address as mysql_5.7_node1

Instance configuration is suitable.
Creating InnoDB cluster 'pocCluster' on 'root@172.18.0.11:3306'...
WARNING: On instance '172.18.0.11:3306' membership change cannot be persisted since MySQL version 5.7.23 does not support the SET PERSIST command (MySQL version >= 8.0.11 required). Please use the <Dba>.configureLocalInstance command locally to persist the changes.
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
```

[***createCluster()***](https://dev.mysql.com/doc/dev/mysqlsh-api-javascript/8.0/classmysqlsh_1_1dba_1_1_dba.html#a12f040129a2c4c301392dd69611da0c8) takes the cluster name as a parameter (pocCluster).
Tu peux lui passer \u00e9galement quelques informations optionnelles comme la <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/group-replication-ip-address-whitelisting.html\" target=\"_blank\" rel=\"noopener noreferrer\">whitelist<\/a>.<\/p>\n<p>Tu peux ensuite v\u00e9rifier l&rsquo;\u00e9tat du n\u0153ud dans le cluster avec\u00a0<strong><a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#a11c55beb4c14a7fec8d041991f3f39d8\" target=\"_blank\" rel=\"noopener\">status<\/a><em>()<\/em><\/strong> :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Etat du cluster\">JS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.11:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK_NO_TOLERANCE\", \r\n        \"statusText\": \"Cluster is NOT tolerant to any failures.\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.11:3306\"\r\n}<\/pre>\n<p><em><span style=\"text-decoration: underline;\">Notes<\/span> : Assure toi que ton\u00a0DNS (ou \/etc\/hosts) est\u00a0correctement configur\u00e9, sinon tu vas avoir des soucis de connections&#8230;<\/em><\/p>\n<p>&nbsp;<\/p>\n<p>L&rsquo;ajouts des\u00a0n\u0153uds suivant se fait avec <strong><a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#a92471821375405214215958ee924087f\" target=\"_blank\" rel=\"noopener\">addInstance<\/a>()<\/strong>, il est n\u00e9anmoins conseill\u00e9 d&rsquo;ex\u00e9cuter <strong><a 
href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#ae589dbf4e07c85108ab47b3735c3114a\" target=\"_blank\" rel=\"noopener\">checkInstanceState<\/a>()<\/strong>\u00a0au pr\u00e9alable pour s&rsquo;assurer de la compatibilit\u00e9 des <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/replication-gtids-concepts.html\" target=\"_blank\" rel=\"noopener\">GTID<\/a> sets :<\/p>\n<p><span style=\"text-decoration: underline;\">N\u0153ud 2<\/span><\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Ajout du deuxi\u00e8me n\u0153ud\">JS&gt; cluster.checkInstanceState('root@172.18.0.12:3306')\r\nAnalyzing the instance replication state...\r\nPlease provide the password for 'root@172.18.0.12:3306': \r\n\r\nThe instance 'root@172.18.0.12:3306' is valid for the cluster.\r\nThe instance is new to Group Replication.\r\n\r\n{\r\n    \"reason\": \"new\", \r\n    \"state\": \"ok\"\r\n}\r\n\r\n\r\n\r\nJS&gt; cluster.addInstance(\"root@172.18.0.12:3306\", {ipWhitelist: \"172.18.0.0\/16\"})\r\nA new instance will be added to the InnoDB cluster. Depending on the amount of\r\ndata on the cluster this might take from a few seconds to several hours.\r\n\r\nAdding instance to the cluster ...\r\n\r\nPlease provide the password for 'root@172.18.0.12:3306': \r\nValidating instance at 172.18.0.12:3306...\r\n\r\nThis instance reports its own address as mysql_5.7_node2\r\n\r\nInstance configuration is suitable.\r\nWARNING: On instance '172.18.0.12:3306' membership change cannot be persisted since MySQL version 5.7.23 does not support the SET PERSIST command (MySQL version &gt;= 8.0.11 required). Please use the &lt;Dba&gt;.configureLocalInstance command locally to persist the changes.\r\nWARNING: On instance '172.18.0.11:3306' membership change cannot be persisted since MySQL version 5.7.23 does not support the SET PERSIST command (MySQL version &gt;= 8.0.11 required). 
Please use the &lt;Dba&gt;.configureLocalInstance command locally to persist the changes.\r\nThe instance 'root@172.18.0.12:3306' was successfully added to the cluster.\r\n\r\n\r\n\r\nJS&gt; cluster.status();\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.11:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK_NO_TOLERANCE\", \r\n        \"statusText\": \"Cluster is NOT tolerant to any failures.\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.11:3306\"\r\n}<\/pre>\n<p>Au cas o\u00f9 l&rsquo;instance ajout\u00e9e n&rsquo;a pas un GTID set compatible avec le groupe <a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#ae589dbf4e07c85108ab47b3735c3114a\" target=\"_blank\" rel=\"noopener\"><em>checkInstanceState<\/em><\/a>\u00a0te le fait savoir :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"N\u0153ud invalide ne pouvant pas faire parti du cluster\">JS&gt; cluster.checkInstanceState('root@172.18.0.12:3306')\r\nAnalyzing the instance replication state...\r\nPlease provide the password for 'root@172.18.0.12:3306': \r\n\r\nThe instance '172.18.0.12:3306' is invalid for the cluster.\r\nThe instance contains additional transactions in relation to the cluster.\r\n\r\n{\r\n    
\"reason\": \"diverged\", \r\n    \"state\": \"error\"\r\n}<\/pre>\n<p>Depending on the context, you will then have to either restore a backup of a cluster member onto the problematic (diverging) instance or, if <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/replication-gtids-failover.html\" target=\"_blank\" rel=\"noopener noreferrer\"><span style=\"text-decoration: underline;\">you know what you are doing<\/span><\/a>, perform a <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/replication-gtids-failover.html#replication-gtids-failover-empty\" target=\"_blank\" rel=\"noopener noreferrer\">GTID synchronization<\/a>, or even a <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/reset-master.html\" target=\"_blank\" rel=\"noopener noreferrer\">reset master<\/a>.<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"text-decoration: underline;\">Node 3<\/span><\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Adding the third node\">JS&gt; cluster.checkInstanceState('root@172.18.0.13:3306')\r\nAnalyzing the instance replication state...\r\nPlease provide the password for 'root@172.18.0.13:3306': \r\n\r\nThe instance 'root@172.18.0.13:3306' is valid for the cluster.\r\nThe instance is new to Group Replication.\r\n\r\n{\r\n    \"reason\": \"new\", \r\n    \"state\": \"ok\"\r\n}\r\n\r\n\r\n\r\nJS&gt; cluster.addInstance(\"root@172.18.0.13:3306\", {ipWhitelist: \"172.18.0.0\/16\"})\r\nA new instance will be added to the InnoDB cluster. 
Depending on the amount of\r\ndata on the cluster this might take from a few seconds to several hours.\r\n\r\nAdding instance to the cluster ...\r\n\r\nPlease provide the password for 'root@172.18.0.13:3306': \r\nValidating instance at 172.18.0.13:3306...\r\n\r\nThis instance reports its own address as mysql_5.7_node3\r\n\r\nInstance configuration is suitable.\r\nWARNING: On instance '172.18.0.13:3306' membership change cannot be persisted since MySQL version 5.7.23 does not support the SET PERSIST command (MySQL version &gt;= 8.0.11 required). Please use the &lt;Dba&gt;.configureLocalInstance command locally to persist the changes.\r\nWARNING: On instance '172.18.0.11:3306' membership change cannot be persisted since MySQL version 5.7.23 does not support the SET PERSIST command (MySQL version &gt;= 8.0.11 required). Please use the &lt;Dba&gt;.configureLocalInstance command locally to persist the changes.\r\nWARNING: On instance '172.18.0.12:3306' membership change cannot be persisted since MySQL version 5.7.23 does not support the SET PERSIST command (MySQL version &gt;= 8.0.11 required). 
Please use the &lt;Dba&gt;.configureLocalInstance command locally to persist the changes.\r\nThe instance 'root@172.18.0.13:3306' was successfully added to the cluster.\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p>The final result:<\/p>\n<p><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/MySQL_InnoDB_Cluster-5.7_cluster.status.png?resize=563%2C491\" alt=\"MySQL InnoDB Cluster PoC Architecture\" width=\"563\" height=\"491\" \/><\/p>\n<p>And there you go!<\/p>\n<p>A 3-node MySQL Group Replication cluster has been deployed thanks to MySQL Shell!<\/p>\n<p>The current configuration is as follows:<\/p>\n<ul>\n<li>Node 1 (mysql_5.7_node1) = 172.18.0.11 : Primary (read\/write)<\/li>\n<li>Node 2 (mysql_5.7_node2) = 172.18.0.12 : Secondary (read only)<\/li>\n<li>Node 3 (mysql_5.7_node3) = 172.18.0.13 : Secondary (read only)<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p>If you were paying attention, you noticed that while adding the nodes (and likewise when creating the cluster), MySQL Shell returned some &laquo;&nbsp;Warnings&nbsp;&raquo;. Nothing to worry about, though!<\/p>\n<p>In MySQL 5.7 the <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/set-variable.html\" target=\"_blank\" rel=\"noopener\">SET PERSIST<\/a> command simply does not exist. At this stage it is therefore not possible to automatically make the configuration persistent, i.e. to write it remotely into the configuration file. 
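<\/p>\n<p><em>For illustration only (not part of this deployment): on MySQL 8.0 a single statement persists a Group Replication setting, whereas on 5.7 the runtime change and the configuration file have to be handled separately:<\/em><\/p>\n<pre class=\"lang:mysql decode:true\" title=\"SET PERSIST (8.0) vs manual persistence (5.7)\">-- MySQL 8.0 only: changes the running value AND writes it to mysqld-auto.cnf\r\nSET PERSIST group_replication_start_on_boot = ON;\r\n\r\n-- MySQL 5.7: only the running value can be changed...\r\nSET GLOBAL group_replication_start_on_boot = ON;\r\n-- ...the equivalent line must also be added to my.cnf by hand\r\n-- (or via dba.configureLocalInstance()):\r\n-- [mysqld]\r\n-- group_replication_start_on_boot = ON<\/pre>\n<p>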
In short: each node&rsquo;s configuration only lives in memory.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h2>Persisting the configuration<\/h2>\n<p>To make the configuration persistent, you then have to run, on each node and after that node has been configured, the (already seen) method dba.<strong><a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_dba.html#af99b79d538a656af71337c7fc70eddb9\" target=\"_blank\" rel=\"noopener\">configureLocalInstance<\/a>()<\/strong>:<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Persisting the configuration on node 1\"># Before persistence, no MySQL Group Replication variable in my.cnf\r\ndaz@172.18.0.11:~$ grep -c group_replication \/etc\/my.cnf\r\n0\r\n\r\n\r\n\r\ndaz@172.18.0.11:~$ mysqlsh\r\nMySQL Shell 8.0.12\r\n\r\nCopyright (c) 2016, 2018, Oracle and\/or its affiliates. All rights reserved.\r\n\r\nOracle is a registered trademark of Oracle Corporation and\/or its\r\naffiliates. Other names may be trademarks of their respective\r\nowners.\r\n\r\nType '\\help' or '\\?' 
for help; '\\quit' to exit.\r\n\r\n\r\nJS &gt; dba.configureLocalInstance(\"root@localhost:3306\")  \r\nPlease provide the password for 'root@localhost:3306': \r\nThe instance 'localhost:3306' belongs to an InnoDB cluster.\r\n\r\nDetecting the configuration file...\r\nDefault file not found at the standard locations.\r\nPlease specify the path to the MySQL configuration file: \/etc\/my.cnf\r\nPersisting the cluster settings...\r\nThe instance 'localhost:3306' was configured for use in an InnoDB cluster.\r\n\r\nThe instance cluster settings were successfully persisted.\r\n\r\n\r\n\r\n# After persistence, some MySQL Group Replication variables were added in my.cnf\r\ndaz@172.18.0.11:~$ grep -c group_replication \/etc\/my.cnf\r\n35<\/pre>\n<p>Note that this operation can only be done locally.<\/p>\n<p>And of course, the same must be done on all the other nodes:<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Persisting the configuration on node 2\">daz@172.18.0.12:~$ mysqlsh\r\nMySQL Shell 8.0.12\r\n\r\nCopyright (c) 2016, 2018, Oracle and\/or its affiliates. All rights reserved.\r\n\r\nOracle is a registered trademark of Oracle Corporation and\/or its\r\naffiliates. Other names may be trademarks of their respective\r\nowners.\r\n\r\nType '\\help' or '\\?' 
for help; '\\quit' to exit.\r\n\r\n\r\nJS &gt; dba.configureLocalInstance(\"root@localhost:3306\")\r\nPlease provide the password for 'root@localhost:3306': \r\nThe instance 'localhost:3306' belongs to an InnoDB cluster.\r\n\r\nDetecting the configuration file...\r\nDefault file not found at the standard locations.\r\nPlease specify the path to the MySQL configuration file: \/etc\/my.cnf\r\nPersisting the cluster settings...\r\nThe instance 'localhost:3306' was configured for use in an InnoDB cluster.\r\n\r\nThe instance cluster settings were successfully persisted.<\/pre>\n<p>And the last one:<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Persisting the configuration on node 3\">daz@172.18.0.13:~$ mysqlsh\r\nMySQL Shell 8.0.12\r\n\r\nCopyright (c) 2016, 2018, Oracle and\/or its affiliates. All rights reserved.\r\n\r\nOracle is a registered trademark of Oracle Corporation and\/or its\r\naffiliates. Other names may be trademarks of their respective\r\nowners.\r\n\r\nType '\\help' or '\\?' 
for help; '\\quit' to exit.\r\n\r\n\r\nJS &gt; dba.configureLocalInstance(\"root@localhost:3306\")\r\nPlease provide the password for 'root@localhost:3306': \r\nThe instance 'localhost:3306' belongs to an InnoDB cluster.\r\n\r\nDetecting the configuration file...\r\nDefault file not found at the standard locations.\r\nPlease specify the path to the MySQL configuration file: \/etc\/my.cnf\r\nPersisting the cluster settings...\r\nThe instance 'localhost:3306' was configured for use in an InnoDB cluster.\r\n\r\nThe instance cluster settings were successfully persisted.<\/pre>\n<p>Although not mandatory, I recommend always doing this.<\/p>\n<p>What next?<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h2>Configuring MySQL Router<\/h2>\n<p>MySQL&rsquo;s recommendation is to install MySQL Router on the application host, so I will install it on the machine 192.168.1.11.<\/p>\n<p>&nbsp;<\/p>\n<p><em><span style=\"text-decoration: underline;\">Note<\/span>: If you cannot (or do not want to) put MySQL Router on the application host, you will then have to manage HA for the Router itself. 
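<\/em><\/p>\n<p><em>For example, with Keepalived the idea is a floating virtual IP shared by two machines each running an identical Router instance; a minimal sketch (the interface eth0 and the VIP 192.168.1.100 are hypothetical values to adapt):<\/em><\/p>\n<pre class=\"lang:sh decode:true\" title=\"Keepalived sketch for MySQL Router HA (illustrative)\"># \/etc\/keepalived\/keepalived.conf -- illustrative values only\r\nvrrp_instance mysqlrouter_vip {\r\n    state MASTER           # BACKUP on the second Router host\r\n    interface eth0         # adapt to your network interface\r\n    virtual_router_id 51\r\n    priority 100           # use a lower priority on the BACKUP host\r\n    virtual_ipaddress {\r\n        192.168.1.100      # the VIP the application connects to\r\n    }\r\n}<\/pre>\n<p><em>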
Several solutions can be considered, such as:<\/em><\/p>\n<ul>\n<li><em><a href=\"https:\/\/lefred.be\/content\/mysql-router-ha-with-pacemaker\/\" target=\"_blank\" rel=\"noopener\">Pacemaker<\/a><\/em><\/li>\n<li><em><a href=\"https:\/\/lefred.be\/content\/mysql-router-ha-with-keepalived\/\" target=\"_blank\" rel=\"noopener\">Keepalived<\/a><\/em><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h3>Bootstrapping MySQL Router<\/h3>\n<p>The first step is the <a href=\"https:\/\/dev.mysql.com\/doc\/mysql-router\/8.0\/en\/mysql-router-deploying-bootstrapping.html\" target=\"_blank\" rel=\"noopener\">bootstrap<\/a>:<\/p>\n<pre class=\"lang:sh decode:true\" title=\"Bootstrapping MySQL Router\">daz@192.168.1.11:~$ mysqlrouter --bootstrap root@172.18.0.11:3306 --conf-base-port 3306 --directory ~\/routerConf\/RouterPoC\r\nPlease enter MySQL password for root: \r\n\r\nBootstrapping MySQL Router instance at '\/home\/daz\/routerConf\/RouterPoC'...\r\nChecking for old Router accounts\r\nCreating account mysql_router6_7gnev5crokb8@'%'\r\nMySQL Router  has now been configured for the InnoDB cluster 'pocCluster'.\r\n\r\nThe following connection information can be used to connect to the cluster.\r\n\r\nClassic MySQL protocol connections to cluster 'pocCluster':\r\n- Read\/Write Connections: localhost:3306\r\n- Read\/Only Connections: localhost:3307\r\nX protocol connections to cluster 'pocCluster':\r\n- Read\/Write Connections: localhost:3308\r\n- Read\/Only Connections: localhost:3309<\/pre>\n<p><em><span style=\"text-decoration: underline;\">Note<\/span>: you may run into a permission problem. 
Probably due to the <a href=\"https:\/\/fr.wikipedia.org\/wiki\/AppArmor\" target=\"_blank\" rel=\"noopener\">AppArmor<\/a> configuration&#8230; Google (or equivalent) is your friend \ud83d\ude42 (if you are on Ubuntu, <a href=\"https:\/\/doc.ubuntu-fr.org\/apparmor\" target=\"_blank\" rel=\"noopener\">click here<\/a>).<\/em><\/p>\n<p>&nbsp;<\/p>\n<p>I created a configuration different from the default one, customized with a few options:<\/p>\n<ul>\n<li><a href=\"https:\/\/dev.mysql.com\/doc\/mysql-router\/8.0\/en\/mysqlrouter.html#option_mysqlrouter_conf-base-port\" target=\"_blank\" rel=\"noopener\">conf-base-port<\/a> : the port proposed by default is 6446 for read\/write traffic. In my case, I want to use the famous port 3306<\/li>\n<li><a href=\"https:\/\/dev.mysql.com\/doc\/mysql-router\/8.0\/en\/mysqlrouter.html#option_mysqlrouter_directory\" target=\"_blank\" rel=\"noopener\">directory<\/a> : to keep all of this Router instance&rsquo;s files neatly in the specified directory<\/li>\n<\/ul>\n<p>The full list of options is available <a href=\"https:\/\/dev.mysql.com\/doc\/mysql-router\/8.0\/en\/mysqlrouter.html#option_mysqlrouter_directory\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n<p>&nbsp;<\/p>\n<p>To sum up, 4 TCP ports have been configured, including 2 for classic MySQL connections:<\/p>\n<p>3306 (instead of 6446 by default) : reads \/ writes, routed to the primary node<br \/>\n3307 (instead of 6447 by default) : reads only, routed to the secondary nodes (in round-robin)<br \/>\nplus their counterparts for X protocol connections (3308 &amp; 3309, instead of 64460 &amp; 64470 respectively), for using MySQL as a <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/document-store.html\" target=\"_blank\" rel=\"noopener noreferrer\">NoSQL Document Store<\/a>.<\/p>\n<p>&nbsp;<\/p>\n<p>The MySQL Router configuration file contains some important information, such as the port(s) to be used by the application (as seen above):<\/p>\n<pre class=\"lang:sh decode:true\" title=\"Contents of the MySQL Router configuration file\">daz@192.168.1.11:~$ cat ~\/routerConf\/RouterPoC\/mysqlrouter.conf \r\n# File automatically generated during MySQL Router bootstrap\r\n[DEFAULT]\r\nlogging_folder=\/home\/daz\/routerConf\/RouterPoC\/log\r\nruntime_folder=\/home\/daz\/routerConf\/RouterPoC\/run\r\ndata_folder=\/home\/daz\/routerConf\/RouterPoC\/data\r\nkeyring_path=\/home\/daz\/routerConf\/RouterPoC\/data\/keyring\r\nmaster_key_path=\/home\/daz\/routerConf\/RouterPoC\/mysqlrouter.key\r\nconnect_timeout=30\r\nread_timeout=30\r\n\r\n[logger]\r\nlevel = INFO\r\n\r\n[metadata_cache:pocCluster]\r\nrouter_id=6\r\nbootstrap_server_addresses=mysql:\/\/172.18.0.11:3306,mysql:\/\/172.18.0.12:3306,mysql:\/\/172.18.0.13:3306\r\nuser=mysql_router6_7gnev5crokb8\r\nmetadata_cluster=pocCluster\r\nttl=0.5\r\n\r\n[routing:pocCluster_default_rw]\r\nbind_address=0.0.0.0\r\nbind_port=3306\r\ndestinations=metadata-cache:\/\/pocCluster\/default?role=PRIMARY\r\nrouting_strategy=round-robin\r\nprotocol=classic\r\n\r\n[routing:pocCluster_default_ro]\r\nbind_address=0.0.0.0\r\nbind_port=3307\r\ndestinations=metadata-cache:\/\/pocCluster\/default?role=SECONDARY\r\nrouting_strategy=round-robin\r\nprotocol=classic\r\n\r\n[routing:pocCluster_default_x_rw]\r\nbind_address=0.0.0.0\r\nbind_port=3308\r\ndestinations=metadata-cache:\/\/pocCluster\/default?role=PRIMARY\r\nrouting_strategy=round-robin\r\nprotocol=x\r\n\r\n[routing:pocCluster_default_x_ro]\r\nbind_address=0.0.0.0\r\nbind_port=3309\r\ndestinations=metadata-cache:\/\/pocCluster\/default?role=SECONDARY\r\nrouting_strategy=round-robin\r\nprotocol=x<\/pre>\n<p>This file can of course be modified.<\/p>\n<p>&nbsp;<\/p>\n<p>Then, MySQL Router must be started with the <strong>start.sh<\/strong> script:<\/p>\n<pre class=\"lang:sh decode:true\">daz@192.168.1.11:~$ ~\/routerConf\/RouterPoC\/start.sh<\/pre>\n<p>The Router is stopped with the <strong>stop.sh<\/strong> script (but you had guessed that):<\/p>\n<pre class=\"lang:sh decode:true\">daz@192.168.1.11:~$ ~\/routerConf\/RouterPoC\/stop.sh<\/pre>\n<p>That is it for MySQL Router!<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h2>Connecting to the cluster<\/h2>\n<p>From now on, your cluster is <em>up and running<\/em>. Your application will therefore have to connect to port 3306 (because that is how we configured it; otherwise it is 6446 by default &#8211; I know, I keep repeating myself) to use the database. Indeed, from the application&rsquo;s point of view, <span style=\"text-decoration: underline;\">the database is MySQL Router<\/span>, except that in reality it is not 1 instance but indeed 3 instances that sit in the <em>backend<\/em>, all fully transparently (impressive, right?)<\/p>\n<p>The <span style=\"text-decoration: underline;\">cluster usage<\/span> part is out of the scope of this article, but we can easily simulate the application&rsquo;s behaviour with a MySQL client (MySQL Shell here) and MySQL Router.<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Connecting to the Router\">daz@192.168.1.11:~$ mysqlsh --uri=root@localhost:3306 --sql\r\nSQL &gt; SELECT @@report_host;\r\n+-----------------+\r\n| @@report_host   |\r\n+-----------------+\r\n| mysql_5.7_node1 |\r\n+-----------------+<\/pre>\n<p>I connect with MySQL Shell in SQL mode (that plays the application&rsquo;s role) to the cluster (to <strong>mysql_5.7_node1<\/strong>, the InnoDB Cluster primary node), through MySQL Router on localhost (since I am on the machine 192.168.1.11), on port 3306.<\/p>\n<p>The <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/replication-options-slave.html#sysvar_report_host\" target=\"_blank\" rel=\"noopener\"><strong>report_host<\/strong><\/a> parameter (defined in my configuration file) returns the value of node 1, the primary.<\/p>\n<p>If the primary goes down, a new one will automatically be elected by the cluster (see the failover paragraph below) and the same command will give a different result:<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Automatic failover\">-- \/!\\ Node 1 goes down\r\nSQL&gt; SELECT @@report_host;\r\nERROR: 2013 (HY000): Lost connection to MySQL server during query\r\nThe global session got disconnected..\r\nAttempting to reconnect to 'mysql:\/\/root@localhost:3306'..\r\nThe global session was successfully reconnected.\r\n\r\nSQL&gt; SELECT @@report_host;\r\n+-----------------+\r\n| @@report_host   |\r\n+-----------------+\r\n| mysql_5.7_node2 |\r\n+-----------------+<\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h2>Managing the nodes<\/h2>\n<p>A few commands that will make your life easier&#8230;<\/p>\n<h3>Performance_Schema<\/h3>\n<p>Some information is available in SQL at the instance level.<\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Identifying the primary node<\/strong><\/span><\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Identifying the primary node in SQL\">SQL&gt; SELECT MEMBER_ID, MEMBER_HOST, MEMBER_PORT, MEMBER_STATE \r\nFROM performance_schema.replication_group_members \r\n    INNER JOIN performance_schema.global_status \r\n        ON (MEMBER_ID = VARIABLE_VALUE) \r\nWHERE VARIABLE_NAME='group_replication_primary_member'\\G\r\n*************************** 1. row ***************************\r\n   MEMBER_ID: 56ea1a24-9cbe-11e8-aba1-0242ac12000b\r\n MEMBER_HOST: mysql_5.7_node1\r\n MEMBER_PORT: 3306\r\nMEMBER_STATE: ONLINE<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Describing the cluster members<\/strong><\/span><\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Describing the cluster members\">SQL&gt; SELECT * FROM performance_schema.replication_group_members\\G\r\n*************************** 1. row ***************************\r\nCHANNEL_NAME: group_replication_applier\r\n   MEMBER_ID: 556ba78e-9cbe-11e8-ac75-0242ac12000c\r\n MEMBER_HOST: mysql_5.7_node2\r\n MEMBER_PORT: 3306\r\nMEMBER_STATE: ONLINE\r\n*************************** 2. row ***************************\r\nCHANNEL_NAME: group_replication_applier\r\n   MEMBER_ID: 5693705a-9cbe-11e8-abf9-0242ac12000d\r\n MEMBER_HOST: mysql_5.7_node3\r\n MEMBER_PORT: 3306\r\nMEMBER_STATE: ONLINE\r\n*************************** 3. 
row ***************************\r\nCHANNEL_NAME: group_replication_applier\r\n   MEMBER_ID: 56ea1a24-9cbe-11e8-aba1-0242ac12000b\r\n MEMBER_HOST: mysql_5.7_node1\r\n MEMBER_PORT: 3306\r\nMEMBER_STATE: ONLINE<\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h3>Retrieving the metadata of a cluster<\/h3>\n<p>The cluster metadata is stored on the members in the <strong><em>mysql_innodb_cluster_metadata<\/em><\/strong> schema:<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Exploring the mysql_innodb_cluster_metadata schema\">daz@172.18.0.11 ~ $ mysqlsh --uri root@172.18.0.11 --sql\r\n...\r\n\r\nSQL&gt; SHOW SCHEMAS;\r\n+-------------------------------+\r\n| Database                      |\r\n+-------------------------------+\r\n| information_schema            |\r\n| mysql                         |\r\n| mysql_innodb_cluster_metadata |\r\n| performance_schema            |\r\n| sys                           |\r\n+-------------------------------+\r\n\r\n\r\nSQL&gt; SHOW TABLES IN mysql_innodb_cluster_metadata;\r\n+-----------------------------------------+\r\n| Tables_in_mysql_innodb_cluster_metadata |\r\n+-----------------------------------------+\r\n| clusters                                |\r\n| hosts                                   |\r\n| instances                               |\r\n| replicasets                             |\r\n| routers                                 |\r\n| schema_version                          |\r\n+-----------------------------------------+\r\n\r\n\r\nSQL&gt; SELECT cluster_name FROM mysql_innodb_cluster_metadata.clusters;\r\n+--------------+\r\n| cluster_name |\r\n+--------------+\r\n| pocCluster   |\r\n+--------------+\r\n\r\n\r\nSQL &gt; SELECT host_name FROM mysql_innodb_cluster_metadata.hosts;\r\n+-----------------+\r\n| host_name       |\r\n+-----------------+\r\n| 172.18.0.11     |\r\n| 172.18.0.12     |\r\n| 172.18.0.13     |\r\n| 192.168.1.11    
|\r\n+-----------------+\r\n<\/pre>\n<p>To retrieve the information about the state of the cluster in a new session, you have to use the <a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_dba.html#a57075d1355767e1d3b433c7e21bb0cd5\" target=\"_blank\" rel=\"noopener\"><strong><em>getCluster<\/em><\/strong><\/a> method:<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Cluster state, in a new session\">JS&gt; cluster.status()\r\nReferenceError: cluster is not defined\r\n\r\n\r\nJS&gt; var cluster = dba.getCluster('pocCluster')\r\n\r\n\r\nJS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.11:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK\", \r\n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": 
\"mysql:\/\/root@172.18.0.11:3306\"\r\n}<\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h3>Failover<\/h3>\n<p>Database-level failover (change of primary) is automatically handled by the cluster members among themselves.<\/p>\n<pre class=\"lang:mysql decode:true\">JS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.11:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK\", \r\n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.11:3306\"\r\n}<\/pre>\n<p>Crash of the primary node (172.18.0.11)&#8230;<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Automatic failover\">JS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.13:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK_NO_TOLERANCE\", \r\n        
\"statusText\": \"Cluster is NOT tolerant to any failures. 1 member is not active\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"(MISSING)\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.13:3306\"\r\n}<\/pre>\n<p>New primary elected by the group: 172.18.0.13.<\/p>\n<p>And 172.18.0.11 is missing in action (MIA).<\/p>\n<p>&nbsp;<\/p>\n<p>Since the cluster configuration has been saved in the configuration file (see the <span style=\"text-decoration: underline;\"><strong>Persisting the configuration<\/strong><\/span> paragraph), once restarted\/repaired\/restored the node will automatically be part of the cluster again, with a secondary role.<\/p>\n<p>If the configuration was not persisted, a <em><strong>rejoinInstance()<\/strong><\/em> is needed to put the node back into the cluster. 
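<\/p>\n<p>Whether a restarted node even tries to rejoin the group on its own is governed by the variable <strong>group_replication_start_on_boot<\/strong>; a quick way to check it, directly on the node:<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Checking group_replication_start_on_boot\">SQL&gt; SHOW GLOBAL VARIABLES LIKE 'group_replication_start_on_boot';\r\n-- the Value column shows ON or OFF, depending on how the node was configured<\/pre>\n<p>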
(see the next paragraph, <strong><span style=\"text-decoration: underline;\">Putting a member back into the group<\/span><\/strong>).<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Cluster state\">JS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.13:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK\", \r\n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.13:3306\"\r\n}<\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h3>Putting a member back into the group<\/h3>\n<p>Needed if the configuration is not persistent, or if the variable <strong>group_replication_start_on_boot<\/strong> = OFF.<\/p>\n<p>The node can then be put back into the group with the <em><strong>rejoinInstance()<\/strong><\/em> command:<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Rejoin Instance\">JS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    
\"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.12:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK_NO_TOLERANCE\", \r\n        \"statusText\": \"Cluster is NOT tolerant to any failures. 1 member is not active\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"(MISSING)\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.12:3306\"\r\n}\r\n\r\n\r\n\r\nJS&gt; dba.checkInstanceConfiguration('root@172.18.0.13:3306')\r\nPlease provide the password for 'root@172.18.0.13:3306': \r\nValidating MySQL instance at 172.18.0.13:3306 for use in an InnoDB cluster...\r\n \r\nThis instance reports its own address as mysql_5.7_node3\r\n\r\nChecking whether existing tables comply with Group Replication requirements...\r\nNo incompatible tables detected\r\n\r\nChecking instance configuration...\r\nNote: verifyMyCnf option was not given so only dynamic configuration will be verified.\r\nInstance configuration is compatible with InnoDB cluster\r\n\r\nThe instance '172.18.0.13:3306' is valid for InnoDB cluster usage.\r\n\r\n{\r\n    \"status\": \"ok\"\r\n}\r\n\r\n\r\n\r\nJS&gt; 
cluster.rejoinInstance(\"root@172.18.0.13:3306\", {ipWhitelist: \"172.18.0.0\/16\"})\r\nRejoining the instance to the InnoDB cluster. Depending on the original\r\nproblem that made the instance unavailable, the rejoin operation might not be\r\nsuccessful and further manual steps will be needed to fix the underlying\r\nproblem.\r\n\r\nPlease monitor the output of the rejoin operation and take necessary action if\r\nthe instance cannot rejoin.\r\n\r\nRejoining instance to the cluster ...\r\n\r\nPlease provide the password for 'root@172.18.0.13:3306': \r\nThe instance '172.18.0.13:3306' was successfully rejoined on the cluster.\r\n\r\n\r\n\r\nJS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.12:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK\", \r\n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": 
\"mysql:\/\/root@172.18.0.12:3306\"\r\n}<\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h3>Supprimer\u00a0une instance du groupe<\/h3>\n<p>Sans grande surprise, c&rsquo;est la commande\u00a0<strong><em>removeInstance<\/em><\/strong><\/p>\n<pre class=\"lang:mysql decode:true\">JS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.12:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK\", \r\n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.12:3306\"\r\n}\r\n\r\n\r\n\r\nJS&gt; cluster.removeInstance(\"root@172.18.0.13:3306\")\r\nThe instance will be removed from the InnoDB cluster. Depending on the instance\r\nbeing the Seed or not, the Metadata session might become invalid. 
If so, please\r\nstart a new session to the Metadata Storage R\/W instance.\r\n\r\nAttempting to leave from the Group Replication group...\r\nWARNING: On instance '172.18.0.13:3306' configuration cannot be persisted since MySQL version 5.7.23 does not support the SET PERSIST command (MySQL version &gt;= 8.0.11 required). Please set the 'group_replication_start_on_boot' variable to 'OFF' in the server configuration file, otherwise it might rejoin the cluster upon restart.\r\nWARNING: On instance '172.18.0.12:3306' membership change cannot be persisted since MySQL version 5.7.23 does not support the SET PERSIST command (MySQL version &gt;= 8.0.11 required). Please use the &lt;Dba&gt;.configureLocalInstance command locally to persist the changes.\r\nWARNING: On instance '172.18.0.11:3306' membership change cannot be persisted since MySQL version 5.7.23 does not support the SET PERSIST command (MySQL version &gt;= 8.0.11 required). Please use the &lt;Dba&gt;.configureLocalInstance command locally to persist the changes.\r\n\r\nThe instance '172.18.0.13:3306' was successfully removed from the cluster.\r\n\r\n\r\n\r\nJS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.12:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK_NO_TOLERANCE\", \r\n        \"statusText\": \"Cluster is NOT tolerant to any failures.\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n              
  \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.12:3306\"\r\n}<\/pre>\n<p>L&rsquo;instance n&rsquo;est alors plus list\u00e9e dans les m\u00e9ta-donn\u00e9es :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"V\u00e9rifier la liste des instances directement dans les m\u00e9ta-donn\u00e9es\">JS&gt; \\sql\r\nSwitching to SQL mode... Commands end with ;\r\n\r\n\r\nSQL&gt; SELECT instance_name FROM mysql_innodb_cluster_metadata.instances;\r\n+------------------+\r\n| instance_name    |\r\n+------------------+\r\n| 172.18.0.11:3306 |\r\n| 172.18.0.12:3306 |\r\n+------------------+<\/pre>\n<p>Pour la remettre dans le groupe, il faut donc rejouer le processus de l&rsquo;ajout d&rsquo;instance vu plus haut :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Ajout d'un autre n\u0153ud \">JS&gt; cluster.checkInstanceState('root@172.18.0.13:3306')\r\nAnalyzing the instance replication state...\r\nPlease provide the password for 'root@172.18.0.13:3306': \r\n\r\nThe instance 'root@172.18.0.13:3306' is valid for the cluster.\r\nThe instance is fully recoverable.\r\n\r\n{\r\n    \"reason\": \"recoverable\", \r\n    \"state\": \"ok\"\r\n}\r\n\r\n\r\n\r\nJS&gt; cluster.addInstance(\"root@172.18.0.13:3306\", {ipWhitelist: \"172.18.0.0\/16\"})\r\nA new instance will be added to the InnoDB cluster. Depending on the amount of\r\ndata on the cluster this might take from a few seconds to several hours.\r\n\r\nAdding instance to the cluster ...\r\n\r\nPlease provide the password for 'root@172.18.0.13:3306': \r\nValidating instance at 172.18.0.13:3306...\r\n\r\nThis instance reports its own address as mysql_5.7_node3\r\n\r\nInstance configuration is suitable.\r\nWARNING: On instance '172.18.0.13:3306' membership change cannot be persisted since MySQL version 5.7.23 does not support the SET PERSIST command (MySQL version &gt;= 8.0.11 required). 
Please use the &lt;Dba&gt;.configureLocalInstance command locally to persist the changes.\r\nWARNING: On instance '172.18.0.12:3306' membership change cannot be persisted since MySQL version 5.7.23 does not support the SET PERSIST command (MySQL version &gt;= 8.0.11 required). Please use the &lt;Dba&gt;.configureLocalInstance command locally to persist the changes.\r\nWARNING: On instance '172.18.0.11:3306' membership change cannot be persisted since MySQL version 5.7.23 does not support the SET PERSIST command (MySQL version &gt;= 8.0.11 required). Please use the &lt;Dba&gt;.configureLocalInstance command locally to persist the changes.\r\nThe instance 'root@172.18.0.13:3306' was successfully added to the cluster.\r\n\r\n\r\n\r\nJS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.12:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK\", \r\n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": 
\"mysql:\/\/root@172.18.0.12:3306\"\r\n}<\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h3>Perte du quorum<\/h3>\n<p>Si le cluster perd plus de la moiti\u00e9 de ses membres (<strong>crash<\/strong> ou <strong><a href=\"https:\/\/fr.wikipedia.org\/wiki\/Split-brain\" target=\"_blank\" rel=\"noopener\">split brain<\/a><\/strong> par exemple) il se retrouve dans un \u00e9tat assez d\u00e9sagr\u00e9able, <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/group-replication-network-partitioning.html\" target=\"_blank\" rel=\"noopener noreferrer\">network partitioning<\/a>, en clair il faut une intervention externe au cluster pour permettre aux membres restant de continuer \u00e0 faire leur boulot.<\/p>\n<p>&nbsp;<\/p>\n<p><em><span style=\"text-decoration: underline;\">Note<\/span>: Par perte j&rsquo;entend arr\u00eat non pr\u00e9vu (crash). En cas d&rsquo;arr\u00eat normal ou propre, m\u00eame si le cluster perd son quorum (dans ce cas pr\u00e9sent arr\u00eat normal de 2 n\u0153uds), le n\u0153ud restant sait que les autres n\u0153uds ne sont plus l\u00e0 (en clair pas de risque de split brain) donc le cluster continue de fonctionner. Mais c&rsquo;est un cluster avec un seul n\u0153ud&#8230;\u00a0<\/em><\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Cluster avec un seul n\u0153ud actif\">JS &gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.12:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK_NO_TOLERANCE\", \r\n        \"statusText\": \"Cluster is NOT tolerant to any failures. 
2 members are not active\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"(MISSING)\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"(MISSING)\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.12:3306\"\r\n}<\/pre>\n<p>&nbsp;<\/p>\n<p>Dans notre cas, avec 3 instances, il faut en perdre\u00a02 d&rsquo;un coup :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Cluster status\">JS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.12:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK\", \r\n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                
\"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.12:3306\"\r\n}<\/pre>\n<p>Perte des n\u0153uds (crash) 172.18.0.11 &amp;\u00a0172.18.0.12&#8230; \u00a0(Mayday, Mayday, Mayday!!!)<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Cluster sans quorum\">JS&gt; cluster.status()\r\nWARNING: Cluster has no quorum and cannot process write transactions: 2 out of 3 members of the InnoDB cluster are unreachable from the member we\u2019re connected to, which is not sufficient for a quorum to be reached.\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.13:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"NO_QUORUM\", \r\n        \"statusText\": \"Cluster has no quorum as visible from '172.18.0.13:3306' and cannot process write transactions. 
2 members are not active\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"UNREACHABLE\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"UNREACHABLE\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.13:3306\"\r\n}<\/pre>\n<p>Le failover automatique ne peut pas s\u2019enclencher, le n\u0153ud survivant (172.18.0.13) est <strong>bloqu\u00e9<\/strong>.<\/p>\n<p>Il faut donc intervenir :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Perte du quorum\">$ mysqlsh --uri=root@172.18.0.13\r\nCreating a Session to 'root@172.18.0.13'\r\n...\r\n\r\n\r\nJS&gt; var cluster = dba.getCluster(\"pocCluster\")\r\nWARNING: Cluster has no quorum and cannot process write transactions: 2 out of 3 members of the InnoDB cluster are unreachable from the member we\u2019re connected to, which is not sufficient for a quorum to be reached.\r\n\r\n\r\n\r\nJS&gt;  cluster.forceQuorumUsingPartitionOf('root@172.18.0.13:3306')\r\nRestoring replicaset 'default' from loss of quorum, by using the partition composed of [172.18.0.13:3306]\r\n\r\nRestoring the InnoDB cluster ...\r\n\r\nPlease provide the password for 'root@172.18.0.13:3306': \r\nThe InnoDB cluster was successfully restored using the partition from the instance 
'root@172.18.0.13:3306'.\r\n\r\nWARNING: To avoid a split-brain scenario, ensure that all other members of the replicaset are removed or joined back to the group that was restored.\r\n\r\n\r\n\r\nJS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.13:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK_NO_TOLERANCE\", \r\n        \"statusText\": \"Cluster is NOT tolerant to any failures. 2 members are not active\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"(MISSING)\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"(MISSING)\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.13:3306\"\r\n}<\/pre>\n<p>Obviously, unless you like to gamble \ud83d\ude42, avoid staying in this state for too long.<\/p>\n<p>Once the instances are back in shape, either simply start them, or use <em><strong>rejoinInstance()<\/strong><\/em> to bring them back into the cluster as secondaries.<\/p>\n<pre class=\"lang:mysql decode:true\">JS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    
\"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.13:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK\", \r\n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.13:3306\"\r\n}<\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h3>Repartir apr\u00e8s un arr\u00eat total du cluster<\/h3>\n<p>La perte du quorum est une chose, mais il y a pire, perdre tout les n\u0153uds&#8230;<\/p>\n<p>En cas d&rsquo;arr\u00eat total du cluster i.e. 
toutes les instances\u00a0sont \u00e9teintes, il faut utiliser, une fois les instances MySQL de nouveau d\u00e9marr\u00e9es \u00a0<strong><em>rebootClusterFromCompleteOutage()<\/em><\/strong> :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Reboot du cluster - erreur\">$ mysqlsh root@172.18.0.12:3306\r\n...\r\n\r\nJS &gt; var cluster = dba.rebootClusterFromCompleteOutage('pocCluster')\r\nReconfiguring the cluster 'pocCluster' from complete outage...\r\n\r\nThe instance '172.18.0.11:3306' was part of the cluster configuration.\r\nWould you like to rejoin it to the cluster? [y\/N]: y\r\n\r\nThe instance '172.18.0.13:3306' was part of the cluster configuration.\r\nWould you like to rejoin it to the cluster? [y\/N]: y\r\n\r\nDba.rebootClusterFromCompleteOutage: The active session instance isn't the most updated in comparison with the ONLINE instances of the Cluster's metadata. Please use the most up to date instance: '172.18.0.13:3306'. (RuntimeError)<\/pre>\n<p>&nbsp;<\/p>\n<p>Le reboot doit se faire sur l&rsquo;instance la plus \u00e0 jour (ici la machine 172.18.0.13) :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Reboot du cluster \">$ mysqlsh --uri=root@172.18.0.13\r\nCreating a session to 'root@172.18.0.13'\r\n...\r\n\r\nJS&gt; var cluster = dba.rebootClusterFromCompleteOutage('pocCluster')\r\nvar cluster = dba.rebootClusterFromCompleteOutage('pocCluster')\r\nReconfiguring the cluster 'pocCluster' from complete outage...\r\n\r\nThe instance '172.18.0.11:3306' was part of the cluster configuration.\r\nWould you like to rejoin it to the cluster? [y\/N]: y\r\n\r\nThe instance '172.18.0.12:3306' was part of the cluster configuration.\r\nWould you like to rejoin it to the cluster? [y\/N]: y\r\n\r\nWARNING: On instance '172.18.0.13:3306' membership change cannot be persisted since MySQL version 5.7.23 does not support the SET PERSIST command (MySQL version &gt;= 8.0.11 required). 
Please use the &lt;Dba&gt;.configureLocalInstance command locally to persist the changes.\r\n\r\nThe cluster was successfully rebooted.\r\n\r\n\r\n\r\nJS&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"pocCluster\", \r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\", \r\n        \"primary\": \"172.18.0.13:3306\", \r\n        \"ssl\": \"REQUIRED\", \r\n        \"status\": \"OK\", \r\n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \r\n        \"topology\": {\r\n            \"172.18.0.11:3306\": {\r\n                \"address\": \"172.18.0.11:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.12:3306\": {\r\n                \"address\": \"172.18.0.12:3306\", \r\n                \"mode\": \"R\/O\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }, \r\n            \"172.18.0.13:3306\": {\r\n                \"address\": \"172.18.0.13:3306\", \r\n                \"mode\": \"R\/W\", \r\n                \"readReplicas\": {}, \r\n                \"role\": \"HA\", \r\n                \"status\": \"ONLINE\"\r\n            }\r\n        }\r\n    }, \r\n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.18.0.13:3306\"\r\n}<\/pre>\n<p>The member on which the command is executed becomes the new primary.<\/p>\n<p>&nbsp;<\/p>\n<p>That&rsquo;s all for today \ud83d\ude42<\/p>\n<p>This is most likely one of my last articles about MySQL 5.7; however, given the amount of feedback I keep receiving, I wanted to update the MySQL InnoDB Cluster tutorial for version 5.7.<\/p>\n<p>&nbsp;<\/p>\n<p>On the same topic:<\/p>\n<ul>\n<li><a 
href=\"http:\/\/dasini.net\/blog\/2018\/08\/30\/tutoriel-deployer-mysql-8-0-innodb-cluster\/\" target=\"_blank\" rel=\"noopener\">Tutoriel \u2013 D\u00e9ployer MySQL 8.0 InnoDB Cluster<\/a><\/li>\n<li><a href=\"https:\/\/lefred.be\/content\/mysql-innodb-cluster-is-the-router-a-single-point-of-failure\/\" target=\"_blank\" rel=\"noopener\">MySQL InnoDB Cluster: is the router a single point of failure ?<\/a><\/li>\n<li><a href=\"https:\/\/lefred.be\/content\/mysql-router-ha-with-pacemaker\/\" target=\"_blank\" rel=\"noopener\">MySQL Router HA with Pacemaker<\/a><\/li>\n<li><a href=\"https:\/\/lefred.be\/content\/mysql-router-ha-with-keepalived\/\" target=\"_blank\" rel=\"noopener\">MySQL Router HA with Keepalived<\/a><\/li>\n<li><a href=\"http:\/\/dasini.net\/blog\/2016\/11\/08\/deployer-un-cluster-mysql-group-replication\" target=\"_blank\" rel=\"noopener noreferrer\">D\u00e9ployer un cluster MySQL Group Replication<\/a><\/li>\n<li><a href=\"https:\/\/mysqlrelease.com\/2018\/03\/docker-compose-setup-for-innodb-cluster\/\" target=\"_blank\" rel=\"noopener\">D\u00e9ployer MySQL InnoDB Cluster avec Docker<\/a><\/li>\n<li><a href=\"http:\/\/dasini.net\/blog\/2017\/03\/03\/faq-webinar-mysql-group-replication\/\" target=\"_blank\" rel=\"noopener noreferrer\">FAQ Webinar MySQL Group Replication<\/a><\/li>\n<li><a href=\"http:\/\/dasini.net\/blog\/2017\/03\/13\/tester-mysql-innodb-cluster\/\" target=\"_blank\" rel=\"noopener noreferrer\">Tester MySQL InnoDB Cluster<\/a><\/li>\n<li><a href=\"http:\/\/dasini.net\/blog\/2017\/01\/11\/configurer-proxysql-pour-mysql-group-replication\/\" target=\"_blank\" rel=\"noopener noreferrer\">Configurer ProxySQL pour MySQL Group Replication<\/a><\/li>\n<li><a href=\"https:\/\/mysqlhighavailability.com\/group-replication-features-backported-to-mysql-5-7\/\" target=\"_blank\" rel=\"noopener\">Group Replication Features backported to MySQL 5.7<\/a><\/li>\n<li><a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/group-replication.html\" 
target=\"_blank\" rel=\"noopener noreferrer\">Doc &#8211; MySQL Group Replication<\/a><\/li>\n<li><a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/mysql-innodb-cluster-userguide.html\" target=\"_blank\" rel=\"noopener noreferrer\">Doc &#8211; MySQL InnoDB Cluster<\/a><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p>Thanks for using MySQL!<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>L&rsquo;un des principaux besoins de mes clients est la\u00a0Haute Disponibilit\u00e9 avec MySQL. On va voir, dans cet article, comment d\u00e9ployer et g\u00e9rer un cluster MySQL 5.7 \u00ab\u00a0from scratch\u00a0\u00bb\u00a0, sous la forme d&rsquo;un tutoriel, gr\u00e2ce \u00e0 la solution HA tout en un : MySQL 5.7 InnoDB Cluster.<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[282,288,8,337],"tags":[75,286,79,298],"class_list":["post-2549","post","type-post","status-publish","format-standard","hentry","category-group-replication","category-innodb-cluster","category-mysql","category-tuto","tag-cluster","tag-group-replication","tag-haute-disponibilite","tag-innodb-cluster"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9LfWW-F7","jetpack-related-posts":[{"id":3291,"url":"https:\/\/dasini.net\/blog\/2019\/07\/16\/deployer-1-cluster-mysql-avec-2-commandes\/","url_meta":{"origin":2549,"position":0},"title":"D\u00e9ployer 1 cluster MySQL avec 2 commandes","author":"Olivier DASINI","date":"16 juillet 2019","format":false,"excerpt":"Dans cette courte vid\u00e9o, je vais te montrer comment cr\u00e9er un cluster MySQL de 3 n\u0153uds avec simplement 2 commandes. 
https:\/\/www.youtube.com\/watch?v=YZzGsDjlWvk","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/YZzGsDjlWvk\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":2587,"url":"https:\/\/dasini.net\/blog\/2018\/08\/30\/tutoriel-deployer-mysql-8-0-innodb-cluster\/","url_meta":{"origin":2549,"position":1},"title":"Tutoriel \u2013 D\u00e9ployer MySQL 8.0 InnoDB Cluster","author":"Olivier DASINI","date":"30 ao\u00fbt 2018","format":false,"excerpt":"Cela fait maintenant plus d'un trimestre que MySQL 8.0 est GA (8.0.11; 8.0.12), il est temps que je t'en parle :) Dans la liste des besoins essentiels de mes clients se trouve la\u00a0Haute Disponibilit\u00e9 avec MySQL. On va voir, dans cet article, comment d\u00e9ployer et g\u00e9rer un cluster MySQL \"from\u2026","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication\/"},"img":{"alt_text":"MySQL InnoDB Cluster Architecture","src":"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/MySQL_8.0_InnoDB_Cluster_Production_post.png?resize=350%2C200","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/MySQL_8.0_InnoDB_Cluster_Production_post.png?resize=350%2C200 1x, https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/MySQL_8.0_InnoDB_Cluster_Production_post.png?resize=525%2C300 1.5x"},"classes":[]},{"id":2959,"url":"https:\/\/dasini.net\/blog\/2019\/04\/08\/200\/","url_meta":{"origin":2549,"position":2},"title":"200 !!","author":"Olivier DASINI","date":"8 avril 2019","format":false,"excerpt":"Cet article est en fait mon 200 \u00e8me \\o\/ Pour marquer le coup, j'ai d\u00e9cid\u00e9 de faire une petite pause et de regarder dans le 
r\u00e9troviseur...","rel":"","context":"Dans &quot;Divers&quot;","block_context":{"text":"Divers","link":"https:\/\/dasini.net\/blog\/category\/divers\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":1654,"url":"https:\/\/dasini.net\/blog\/2016\/10\/05\/mysql-a-oracle-openworld-2016\/","url_meta":{"origin":2549,"position":3},"title":"MySQL \u00e0 Oracle OpenWorld 2016","author":"Olivier DASINI","date":"5 octobre 2016","format":false,"excerpt":"R\u00e9sum\u00e9 des annonces MySQL \u00e0 Oracle OpenWorld: MySQL dans le cloud, MySQL Group Replication, MySQL InnoDB Cluster, MySQL 8,...","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/Oracle_MySQL_Cloud_Service.png?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":1736,"url":"https:\/\/dasini.net\/blog\/2016\/11\/24\/retour-sur-le-mysql-day-paris-2016\/","url_meta":{"origin":2549,"position":4},"title":"Retour sur le MySQL Day Paris 2016","author":"Olivier DASINI","date":"24 novembre 2016","format":false,"excerpt":"Venez d\u00e9couvrir le nouveau MySQL Cloud Service d'Oracle et toutes les nouveaut\u00e9s produits MySQL annonc\u00e9es \u00e0 Oracle Open World Vous utilisez ou envisagez d\u2019utiliser des services cloud pour votre projet ? 
Le nouveau MySQL Cloud Service d\u2019Oracle fournit un service cloud simple, automatis\u00e9, int\u00e9gr\u00e9, pr\u00eat \u00e0 l\u2019emploi, pour permettre aux\u2026","rel":"","context":"Dans &quot;Conf\u00e9rence&quot;","block_context":{"text":"Conf\u00e9rence","link":"https:\/\/dasini.net\/blog\/category\/conference\/"},"img":{"alt_text":"Oracle MySQL Day Paris","src":"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/Oracle_MySQL_Day_Paris.png?resize=350%2C200","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/Oracle_MySQL_Day_Paris.png?resize=350%2C200 1x, https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/Oracle_MySQL_Day_Paris.png?resize=525%2C300 1.5x"},"classes":[]},{"id":1855,"url":"https:\/\/dasini.net\/blog\/2017\/04\/10\/adopte-un-cluster-mysql-group-replication\/","url_meta":{"origin":2549,"position":5},"title":"Adopte un&#8230; cluster MySQL Group Replication","author":"Olivier DASINI","date":"10 avril 2017","format":false,"excerpt":"Au menu d'aujourd'hui : comment passer de l'administration \"manuelle\" de votre solution HA MySQL Group Replication \u00e0 une administration plus simple, plus fun mais surtout facilement automatisable avec le pack MySQL InnoDB Cluster. 
En clair, on va voir comment utiliser MySQL Shell pour l'administration et l'orchestration du cluster et MySQL\u2026","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication\/"},"img":{"alt_text":"MySQL InnoDB Cluster Overview","src":"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/MySQL_InnoDB_cluster_overview.png?resize=350%2C200","width":350,"height":200},"classes":[]}],"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/posts\/2549","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/comments?post=2549"}],"version-history":[{"count":55,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/posts\/2549\/revisions"}],"predecessor-version":[{"id":2654,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/posts\/2549\/revisions\/2654"}],"wp:attachment":[{"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/media?parent=2549"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/categories?post=2549"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/tags?post=2549"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}