
{"id":2587,"date":"2018-08-30T11:24:53","date_gmt":"2018-08-30T10:24:53","guid":{"rendered":"http:\/\/dasini.net\/blog\/?p=2587"},"modified":"2019-09-03T08:04:42","modified_gmt":"2019-09-03T07:04:42","slug":"tutoriel-deployer-mysql-8-0-innodb-cluster","status":"publish","type":"post","link":"https:\/\/dasini.net\/blog\/2018\/08\/30\/tutoriel-deployer-mysql-8-0-innodb-cluster\/","title":{"rendered":"Tutoriel \u2013 D\u00e9ployer MySQL 8.0 InnoDB Cluster"},"content":{"rendered":"<p><span style=\"color: #003366;\">Une nouvelle version de ce tutoriel est disponible. <\/span><\/p>\n<p><span style=\"color: #003366;\">Il concerne les versions MySQL sup\u00e9rieures ou \u00e9gales \u00e0 <strong>8.0.17<\/strong>:<\/span><\/p>\n<p><a title=\"Tutoriel \u2013 D\u00e9ployer MySQL 8.0 InnoDB Cluster (09-2019)\" href=\"http:\/\/dasini.net\/blog\/2019\/09\/03\/tutoriel-deployer-mysql-8-0-innodb-cluster-09-2019\/\"><strong><span style=\"color: #ff0000;\">Tutoriel \u2013 D\u00e9ployer MySQL 8.0 InnoDB Cluster (09-2019)<\/span><\/strong><\/a><\/p>\n<p><span style=\"color: #003366;\">Pour les versions inf\u00e9rieures tu peux rester sur cette page, mais cependant, je t&rsquo;invite vivement \u00e0 utiliser la derni\u00e8re GA en date.<\/span><\/p>\n<p><span style=\"color: #003366;\">C&rsquo;est dommage de se priver des nouvelles fonctionnalit\u00e9s.<\/span><\/p>\n<p><span style=\"color: #003366;\">\ud83d\ude09<\/span><\/p>\n<div id=\"wp_fb_like_button\"><\/div>\n<div>\n<hr \/>\n<\/div>\n<hr \/>\n<p>Cela fait maintenant plus d&rsquo;un trimestre que MySQL 8.0 est GA (<a href=\"https:\/\/mysqlserverteam.com\/changes-in-mysql-8-0-11-general-availability\/\" target=\"_blank\" rel=\"noopener noreferrer\">8.0.11<\/a>; <a href=\"https:\/\/mysqlserverteam.com\/the-mysql-8-0-12-maintenance-release-is-generally-available\/\" target=\"_blank\" rel=\"noopener noreferrer\">8.0.12<\/a>), il est grand temps que je t&rsquo;en parle \ud83d\ude42<\/p>\n<p>Dans la liste des besoins essentiels de mes clients se trouve la\u00a0<strong>Haute Disponibilit\u00e9 avec MySQL<\/strong>. On va voir, dans cet article, comment d\u00e9ployer et g\u00e9rer un cluster MySQL \u00ab\u00a0<em>from scratch\u00a0\u00bb<\/em>\u00a0, sous la forme d&rsquo;un tutoriel, gr\u00e2ce \u00e0 la solution HA tout en un : <strong>MySQL InnoDB Cluster<\/strong>.<\/p>\n<p>Si tu utilises MySQL <strong>5.7<\/strong> tu peux \u00e9galement mettre en oeuvre InnoDB Cluster, je te recommande cet article:\u00a0<a href=\"http:\/\/dasini.net\/blog\/2018\/08\/21\/tutoriel-deployer-mysql-5-7-innodb-cluster\/\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>Tutoriel \u2013 D\u00e9ployer MySQL 5.7 InnoDB Cluster<\/strong><\/a>.<\/p>\n<p><em><span style=\"text-decoration: underline;\">Note<\/span>:\u00a0L&rsquo;article traite de MySQL InnoDB Cluster, HA natif de MySQL Server (plugin Group Replication) avec pour moteur de stockage <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/innodb-storage-engine.html\" target=\"_blank\" rel=\"noopener noreferrer\">InnoDB<\/a>, solution \u00e0 ne pas confondre avec <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/mysql-cluster.html\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL <strong>NDB<\/strong> Cluster<\/a>\u00a0(moteur de stockage NDB).<\/em><\/p>\n<h2>Le contexte<\/h2>\n<p>3 instances MySQL autonomes, &lt;spoil alert&gt; qui vont gr\u00e2ce au plugin <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/group-replication.html\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL Group Replication<\/a> se transformer en une base de donn\u00e9es distribu\u00e9e.&lt;\/spoil alert&gt;<\/p>\n<ul>\n<li>Instance MySQL 1 (mysql_8.0_node1) : 172.19.0.11; N\u0153ud num\u00e9ro 1 du cluster<\/li>\n<li>Instance MySQL 2 (mysql_8.0_node2) : 172.19.0.12; N\u0153ud num\u00e9ro 2 du cluster<\/li>\n<li>Instance MySQL 3\u00a0(mysql_8.0_node3)\u00a0: 172.19.0.13; N\u0153ud num\u00e9ro 3 du cluster<\/li>\n<\/ul>\n<p>1 instance applicative :\u00a0192.168.1.11; MySQL Router <strong>+<\/strong> mon application.<\/p>\n<p><em><span style=\"text-decoration: underline;\">Note<\/span>: J&rsquo;utilise l&rsquo;image <a href=\"https:\/\/hub.docker.com\/r\/mysql\/mysql-server\/\" target=\"_blank\" rel=\"noopener noreferrer\">Docker\u00a0MySQL Server<\/a> support\u00e9e par l&rsquo;\u00e9quipe MySQL d&rsquo;Oracle.<\/em><\/p>\n<p><em><span style=\"text-decoration: underline;\">Note<\/span>: Je n&rsquo;aborde pas dans cet article la redondance de MySQL Router. Plusieurs sc\u00e9narios sont possibles, je te recommande de lire\u00a0<a href=\"https:\/\/lefred.be\/content\/mysql-innodb-cluster-is-the-router-a-single-point-of-failure\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u00e7a<\/a>, <a href=\"https:\/\/lefred.be\/content\/mysql-router-ha-with-pacemaker\/\" target=\"_blank\" rel=\"noopener noreferrer\">ceci<\/a> et <a href=\"https:\/\/lefred.be\/content\/mysql-router-ha-with-keepalived\/\" target=\"_blank\" rel=\"noopener noreferrer\">cela<\/a>.<\/em><\/p>\n<p>MySQL Shell n&rsquo;a pas besoin d&rsquo;\u00eatre install\u00e9 sur toutes les instances (contrairement \u00e0 5.7, pour persister la configuration), cependant ce client texte est quand m\u00eame beaucoup plus puissant que le client texte par d\u00e9faut. Si tu ne le connais pas encore, <a href=\"https:\/\/dev.mysql.com\/downloads\/shell\/\" target=\"_blank\" rel=\"noopener noreferrer\">essaie le<\/a> et tu verras tu ne pourras plus t&rsquo;en passer \ud83d\ude42<\/p>\n<p>En ce qui concerne les versions des logiciels, ce sont les plus r\u00e9centes \u00e0 ce jour (journ\u00e9e caniculaire du mois d&rsquo;ao\u00fbt 2018):<\/p>\n<ul>\n<li><a href=\"https:\/\/dev.mysql.com\/downloads\/mysql\/8.0.html#downloads\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL Server 8.0.12 GA<\/a><\/li>\n<li><a href=\"https:\/\/dev.mysql.com\/downloads\/router\/\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL Router 8.0.12 GA<\/a><\/li>\n<li><a href=\"https:\/\/dev.mysql.com\/downloads\/shell\/\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL Shell 8.0.12 GA<\/a><\/li>\n<\/ul>\n<p><em><span style=\"text-decoration: underline;\">Note<\/span>: Dans cet article j&rsquo;utilise la derni\u00e8re GA de MySQL 8.0. En ce qui concerne MySQL Router et MySQL Shell, il est imp\u00e9ratif d&rsquo;utiliser\u00a0la <strong>derni\u00e8re<\/strong> version courante.<\/em><\/p>\n<p>Pour r\u00e9capituler notre architecture, une image valant (au moins) 1000 mots, \u00e7a nous donne \u00e0 :<\/p>\n<p><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/MySQL_8.0_InnoDB_Cluster_Production_post.png?resize=600%2C837\" alt=\"MySQL InnoDB Cluster Architecture\" width=\"600\" height=\"837\" \/><\/p>\n<h2>V\u00e9rifier la configuration des instances<\/h2>\n<p>La premi\u00e8re \u00e9tape consiste \u00e0 s&rsquo;assurer que les instances MySQL sont correctement configur\u00e9es pour l&rsquo;utilisation de <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/group-replication.html\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL Group Replication<\/a>, la couche <strong>haute disponibilit\u00e9<\/strong> de notre architecture. A noter qu&rsquo;il est pr\u00e9f\u00e9rable de provisionner ses instances d\u00e9j\u00e0 correctement configur\u00e9es (<a href=\"http:\/\/dasini.net\/blog\/2016\/11\/08\/deployer-un-cluster-mysql-group-replication\/\" target=\"_blank\" rel=\"noopener noreferrer\">comme d\u00e9taill\u00e9 dans cet article<\/a>) pour MySQL Group Replication.<\/p>\n<p><em><span style=\"text-decoration: underline;\">Note<\/span>: J&rsquo;utilise le compte utilisateur root pour configurer le cluster, cependant ce n&rsquo;est pas un pr\u00e9-requis. Il est possible (et souhaitable) de cr\u00e9er un compte utilisateur sp\u00e9cifique (ou plusieurs), avec les droits qui vont bien. La m\u00e9thode recommand\u00e9e pour cr\u00e9er cet utilisateur et d&rsquo;utiliser l&rsquo;option <strong>clusterAdmin<\/strong> des m\u00e9thodes\u00a0<strong>dba.<a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_dba.html#af4510380474d531a9fb775693e94eaac\" target=\"_blank\" rel=\"noopener noreferrer\">configureInstance<\/a><\/strong>() et cluster.<a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#a92471821375405214215958ee924087f\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>addInstance<\/strong><\/a>(). Plus d&rsquo;info <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/mysql-innodb-cluster-production-deployment.html#mysql-innodb-cluster-user-privileges\" target=\"_blank\" rel=\"noopener noreferrer\">ici (Paragraphe \u00ab\u00a0User Privileges\u00a0\u00bb)<\/a>.<\/em><\/p>\n<p>La v\u00e9rification de la configuration se fait gr\u00e2ce \u00e0 MySQL Shell et la m\u00e9thode <em><strong><a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_dba.html#aef7387e5a7be5b449af2559bb4e66099\" target=\"_blank\" rel=\"noopener noreferrer\">checkInstanceConfiguration<\/a>()<\/strong><\/em>\u00a0:<\/p>\n<pre class=\"lang:sh decode:true\" title=\"V\u00e9rification de la configuration du noeud\">daz@192.168.1.11:~$ mysqlsh\nMySQL Shell 8.0.12-commercial\n\nCopyright (c) 2016, 2018, Oracle and\/or its affiliates. All rights reserved.\n\nOracle is a registered trademark of Oracle Corporation and\/or its\naffiliates. Other names may be trademarks of their respective\nowners.\n\nType '\\help' or '\\?' for help; '\\quit' to exit.\n\n\nJS &gt; dba.checkInstanceConfiguration('root@172.19.0.11:3306')\nPlease provide the password for 'root@172.19.0.11:3306':\nValidating MySQL instance at 172.19.0.11:3306 for use in an InnoDB cluster...\n\nThis instance reports its own address as mysql_8.0_node1\nClients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.\n\nChecking whether existing tables comply with Group Replication requirements...\nNo incompatible tables detected\n\nChecking instance configuration...\n\nSome configuration options need to be fixed:\n+--------------------------+---------------+----------------+--------------------------------------------------+\n| Variable                 | Current Value | Required Value | Note                                             |\n+--------------------------+---------------+----------------+--------------------------------------------------+\n| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |\n| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |\n| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |\n| server_id                | 1             | &lt;unique ID&gt;    | Update read-only variable and restart the server |\n+--------------------------+---------------+----------------+--------------------------------------------------+\n\nPlease use the dba.configureInstance() command to repair these issues.\n\n{\n    \"config_errors\": [\n        {\n            \"action\": \"server_update\", \n            \"current\": \"CRC32\", \n            \"option\": \"binlog_checksum\", \n            \"required\": \"NONE\"\n        },\n        {\n            \"action\": \"restart\", \n            \"current\": \"OFF\", \n            \"option\": \"enforce_gtid_consistency\", \n            \"required\": \"ON\"\n        },\n        {\n            \"action\": \"restart\", \n            \"current\": \"OFF\", \n            \"option\": \"gtid_mode\", \n            \"required\": \"ON\"\n        },\n        {\n            \"action\": \"restart\", \n            \"current\": \"1\", \n            \"option\": \"server_id\", \n            \"required\": \"&lt;unique ID&gt;\"\n        }\n    ], \n    \"errors\": [], \n    \"status\": \"error\"\n}<\/pre>\n<p>Dans mon cas, avec l&rsquo;installation <a href=\"https:\/\/hub.docker.com\/r\/mysql\/mysql-server\/\" target=\"_blank\" rel=\"noopener noreferrer\">Docker<\/a> de MySQL 8.0 par d\u00e9faut sous Ubuntu, niveau configuration j&rsquo;ai quasiment tout \u00e0 faire \ud83d\ude42<\/p>\n<p>La m\u00e9thode renvoie un document JSON (pratique pour l&rsquo;automatisation) avec la liste des t\u00e2ches \u00e0 effectuer pour \u00eatre conforme&#8230; Configurons donc !<\/p>\n<p>J&rsquo;ai deux solutions :<\/p>\n<ul>\n<li>1\/ je pr\u00e9pare mes instances \u00ab\u00a0manuellement\u00a0\u00bb (cette t\u00e2che peut bien \u00e9videmment s&rsquo;automatiser) comme expliqu\u00e9 dans l&rsquo;article\u00a0<a href=\"http:\/\/dasini.net\/blog\/2016\/11\/08\/deployer-un-cluster-mysql-group-replication\/\" target=\"_blank\" rel=\"noopener noreferrer\">comment configurer un groupe<\/a>.<\/li>\n<li>2\/ je laisse MySQL Shell faire le boulot en utilisant la m\u00e9thode : <em><strong><a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_dba.html#af4510380474d531a9fb775693e94eaac\" target=\"_blank\" rel=\"noopener noreferrer\">configureInstance<\/a>()<\/strong><\/em><\/li>\n<\/ul>\n<p>je sens que ton c\u0153ur balance pour la 2&#8230; moi aussi \ud83d\ude42 :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Configurer le n\u0153ud automatiquement avec MySQL Shell\">JS &gt; dba.configureInstance('root@172.19.0.11:3306')\nPlease provide the password for 'root@172.19.0.11:3306': ****\nConfiguring MySQL instance at 172.19.0.11:3306 for use in an InnoDB cluster...\n\nThis instance reports its own address as 9475cf8d5192\nClients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.\n\nSome configuration options need to be fixed:\n+--------------------------+---------------+----------------+--------------------------------------------------+\n| Variable                 | Current Value | Required Value | Note                                             |\n+--------------------------+---------------+----------------+--------------------------------------------------+\n| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |\n| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |\n| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |\n| server_id                | 1             | &lt;unique ID&gt;    | Update read-only variable and restart the server |\n+--------------------------+---------------+----------------+--------------------------------------------------+\n\nDo you want to perform the required configuration changes? [y\/n]: y\nDo you want to restart the instance after configuring it? [y\/n]: y\nConfiguring instance...\nThe instance '172.19.0.11:3306' was configured for cluster usage.\nRestarting MySQL...\nERROR: Remote restart of MySQL server failed: MySQL Error 3707 (HY000): Restart server failed (mysqld is not managed by supervisor process).\nPlease restart MySQL manually<\/pre>\n<p>Alors plusieurs commentaires !<\/p>\n<p>Les informations de configurations sont <span style=\"text-decoration: underline;\">automatiquement<\/span> sauvegard\u00e9es avec la commande <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/set-variable.html\" target=\"_blank\" rel=\"noopener noreferrer\">SET PERSIST<\/a>. Commande super pratique (Cloud Friendly) apparue en 8.0, qui me permet de faire des changements de configurations \u00e0 chaud (online) <strong>et<\/strong> persistants, stock\u00e9s dans le fichier <strong><em>mysqld-auto.cnf<\/em><\/strong>.<\/p>\n<pre class=\"lang:js decode:true\" title=\"mysqld-auto.cnf\"># cat \/var\/lib\/mysql\/mysqld-auto.cnf \n{\n  \"Version\": 1,\n  \"mysql_server\": {\n    \"server_id\": {\n      \"Value\": \"866646436\",\n      \"Metadata\": {\n        \"Timestamp\": 1534169971463437,\n        \"User\": \"root\",\n        \"Host\": \"\"\n      }\n    },\n    \"mysql_server_static_options\": {\n      \"binlog_checksum\": {\n        \"Value\": \"NONE\",\n        \"Metadata\": {\n          \"Timestamp\": 1534169971419313,\n          \"User\": \"root\",\n          \"Host\": \"\"\n        }\n      },\n      \"enforce_gtid_consistency\": {\n        \"Value\": \"ON\",\n        \"Metadata\": {\n          \"Timestamp\": 1534169970411779,\n          \"User\": \"root\",\n          \"Host\": \"\"\n        }\n      },\n      \"gtid_mode\": {\n        \"Value\": \"ON\",\n        \"Metadata\": {\n          \"Timestamp\": 1534169971458247,\n          \"User\": \"root\",\n          \"Host\": \"\"\n        }\n      }\n    }\n  }\n}<\/pre>\n<p>On peut \u00e9galement voir ces informations en SQL, gr\u00e2ce \u00e0 la table <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/persisted-variables-table.html\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>persisted_variables<\/strong><\/a> de <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/performance-schema.html\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>performance_schema<\/strong><\/a> :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Persisted variables\">172.19.0.11:33060+ SQL&gt; SELECT * FROM performance_schema.persisted_variables;\n+--------------------------+----------------+\n| VARIABLE_NAME            | VARIABLE_VALUE |\n+--------------------------+----------------+\n| server_id                | 866646436      |\n| binlog_checksum          | NONE           |\n| enforce_gtid_consistency | ON             |\n| gtid_mode                | ON             |\n+--------------------------+----------------+<\/pre>\n<p>La persistance des changements ce fait automatiquement gr\u00e2ce au param\u00e8tre\u00a0<a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/server-system-variables.html#sysvar_persisted_globals_load\" target=\"_blank\" rel=\"noopener noreferrer\">persisted_globals_load<\/a> activ\u00e9 par d\u00e9faut (ON). Elle est effective apr\u00e8s l&rsquo;ex\u00e9cution des commandes:<\/p>\n<ul>\n<li>dba.<a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_dba.html#af4510380474d531a9fb775693e94eaac\" target=\"_blank\" rel=\"noopener noreferrer\">configureInstance<\/a>()<\/li>\n<li>dba.<a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_dba.html#a12f040129a2c4c301392dd69611da0c8\" target=\"_blank\" rel=\"noopener noreferrer\">createCluster<\/a>()<\/li>\n<li>Cluster.<a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#a92471821375405214215958ee924087f\" target=\"_blank\" rel=\"noopener noreferrer\">addInstance<\/a>()<\/li>\n<li>Cluster.<a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#a0c878a44100530d6a0e0cd2f893a71af\" target=\"_blank\" rel=\"noopener noreferrer\">removeInstance<\/a>()<\/li>\n<li>Cluster.<a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#af56d9116e7d6d0cd111aed0971163248\" target=\"_blank\" rel=\"noopener noreferrer\">rejoinInstance<\/a>()<\/li>\n<\/ul>\n<p><em><span style=\"text-decoration: underline;\">Note<\/span>: je te recommande cet <a href=\"http:\/\/mysql.wisborg.dk\/2018\/08\/14\/mysql-8-0-persisted-variables\/\" target=\"_blank\" rel=\"noopener noreferrer\">excellent article sur les variables persistantes<\/a> de mon non moins excellent coll\u00e8gue <a href=\"https:\/\/twitter.com\/JWKrogh\" target=\"_blank\" rel=\"noopener noreferrer\">Jesper<\/a>.<\/em><\/p>\n<p><a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/set-variable.html\" target=\"_blank\" rel=\"noopener noreferrer\">SET PERSIST<\/a>\u00a0est tr\u00e8s appr\u00e9ciable dans les environnements o\u00f9 acc\u00e9der au fichier de configuration est compliqu\u00e9 voir impossible (Cloud Friendly, je te dis!!!). Cependant, dans un environnement plus ma\u00eetris\u00e9 (plus classique), il peut \u00eatre pr\u00e9f\u00e9rable, de centraliser toutes les informations de configurations dans le fichier de configuration original (my.cnf \/ my.ini).<\/p>\n<p>C&rsquo;est bien \u00e9videmment possible gr\u00e2ce \u00e0 l&rsquo;option\u00a0<em><strong>mycnfPath<\/strong><\/em> de la m\u00e9thode dba.<a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_dba.html#af4510380474d531a9fb775693e94eaac\" target=\"_blank\" rel=\"noopener noreferrer\">configureInstance<\/a>().<\/p>\n<p>Exemple:<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Cluster configuration in my.cnf (or my.ini)\">JS&gt; dba.configureInstance('root@172.19.0.11:3306', {mycnfPath: \"\/etc\/my.cnf\"})\nConfiguring local MySQL instance listening at port 3306 for use in an InnoDB cluster...\n\nThis instance reports its own address as mysql_8.0_node1\n\nSome configuration options need to be fixed:\n+----------------------------------+---------------+----------------+------------------------+\n| Variable                         | Current Value | Required Value | Note                   |\n+----------------------------------+---------------+----------------+------------------------+\n| binlog_checksum                  | &lt;not set&gt;     | NONE           | Update the config file |\n| binlog_format                    | &lt;not set&gt;     | ROW            | Update the config file |\n| log_slave_updates                | &lt;not set&gt;     | ON             | Update the config file |\n| transaction_write_set_extraction | &lt;not set&gt;     | XXHASH64       | Update the config file |\n+----------------------------------+---------------+----------------+------------------------+\n\nDo you want to perform the required configuration changes? [y\/n]: y\nConfiguring instance...\nThe instance '172.19.0.11:3306' was configured for use in an InnoDB cluster.<\/pre>\n<p>Tu as donc le choix !<\/p>\n<p>Gr\u00e2ce \u00e0 ton \u0153il aiguis\u00e9, tu as remarqu\u00e9, que l&rsquo;outil me propose de red\u00e9marrer l&rsquo;instance MySQL, ce qui est bien pratique. Cependant, dans le cas pr\u00e9sent, la plateforme que j&rsquo;utilise (<a href=\"https:\/\/hub.docker.com\/r\/mysql\/mysql-server\/\" target=\"_blank\" rel=\"noopener noreferrer\">image Docker<\/a>) ne dispose pas d&rsquo;un processus de supervision de <strong><a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/mysqld.html\" target=\"_blank\" rel=\"noopener noreferrer\"><em>mysqld<\/em><\/a><\/strong>\u00a0(comme c&rsquo;est g\u00e9n\u00e9ralement le cas sur ta plateforme pr\u00e9f\u00e9r\u00e9e. Info <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/restart.html\" target=\"_blank\" rel=\"noopener noreferrer\">ici<\/a>). En clair, je vais devoir red\u00e9marrer sans l&rsquo;outil (mais je devrai m&rsquo;en sortir&#8230; <a href=\"https:\/\/www.youtube.com\/watch?v=9NnG54zTg8A\" target=\"_blank\" rel=\"noopener noreferrer\">ou pas<\/a>\u00a0\ud83d\ude42 ).<\/p>\n<p><em><span style=\"text-decoration: underline;\">Note<\/span>: Assure toi d&rsquo;avoir les droits n\u00e9cessaires\u00a0pour mettre \u00e0 jour le fichier de configuration de MySQL.<\/em><\/p>\n<p>L&rsquo;instance\u00a0172.19.0.11 est configur\u00e9e et pr\u00eate pour \u00eatre un membre d&rsquo;un cluster MySQL InnoDB Cluster\u00a0 !<\/p>\n<p>On peut le v\u00e9rifier en red\u00e9marrant\u00a0l&rsquo;instance MySQL et refaire un\u00a0<em>checkInstanceConfiguration<\/em>\u00a0 :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Valider la configuration du n\u0153ud\">JS&gt; dba.checkInstanceConfiguration('root@172.19.0.11:3306')\nPlease provide the password for 'root@172.19.0.11:3306':\nValidating MySQL instance at 172.19.0.11:3306 for use in an InnoDB cluster...\n\nThis instance reports its own address as mysql_8.0_node1\nClients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.\n\nChecking whether existing tables comply with Group Replication requirements...\nNo incompatible tables detected\n\nChecking instance configuration...\nInstance configuration is compatible with InnoDB cluster\n\nThe instance '172.19.0.11:3306' is valid for InnoDB cluster usage.\n\n{\n    \"status\": \"ok\"\n}<\/pre>\n<p>Statut: OK.<\/p>\n<p>La <span style=\"text-decoration: underline;\">m\u00eame proc\u00e9dure doit \u00eatre appliqu\u00e9e sur les autres instances<\/span> MySQL.<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Configuration des autres instances\">JS&gt; dba.configureInstance('root@172.19.0.12:3306')\n...\n\nJS&gt; dba.configureInstance('root@172.19.0.13:3306')\n...<\/pre>\n<p>Je me retrouve, apr\u00e8s configuration et red\u00e9marrage\u00a0 avec le r\u00e9sultat suivant:<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Valider la configuration des autres n\u0153uds\">JS &gt; dba.checkInstanceConfiguration('root@172.19.0.12:3306')\nPlease provide the password for 'root@172.19.0.12:3306': \nValidating MySQL instance at 172.19.0.12:3306 for use in an InnoDB cluster...\n\nThis instance reports its own address as mysql_8.0_node2\nClients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.\n\nChecking whether existing tables comply with Group Replication requirements...\nNo incompatible tables detected\n\nChecking instance configuration...\nInstance configuration is compatible with InnoDB cluster\n\nThe instance '172.19.0.12:3306' is valid for InnoDB cluster usage.\n\n{\n    \"status\": \"ok\"\n}\n\n\n\nJS &gt; dba.checkInstanceConfiguration('root@172.19.0.13:3306')\nPlease provide the password for 'root@172.19.0.13:3306': \nValidating MySQL instance at 172.19.0.13:3306 for use in an InnoDB cluster...\n\nThis instance reports its own address as mysql_8.0_node3\nClients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.\n\nChecking whether existing tables comply with Group Replication requirements...\nNo incompatible tables detected\n\nChecking instance configuration...\nInstance configuration is compatible with InnoDB cluster\n\nThe instance '172.19.0.13:3306' is valid for InnoDB cluster usage.\n\n{\n    \"status\": \"ok\"\n}<\/pre>\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=nZWXR23YPmw\" target=\"_blank\" rel=\"noopener noreferrer\">All good!<\/a><\/p>\n<h2>Cr\u00e9er le cluster<\/h2>\n<p>Une fois les 3 instances correctement configur\u00e9es, l&rsquo;\u00e9tape suivante consiste \u00e0 cr\u00e9er le cluster avec\u00a0<strong><em>createCluster<\/em><\/strong>. Cette m\u00e9thode va \u00eatre jou\u00e9e sur le premier membre, l&rsquo;instance MySQL sur \u00a0172.19.0.11, \u00a0elle va permettre de cr\u00e9er un cluster&#8230; d&rsquo;un n\u0153ud.\u00a0 \u00a0Bon faut bien commencer quelque part \ud83d\ude42 :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Create the 1st node of the cluster\">$ mysqlsh --uri=root@172.19.0.11\nCreating a Session to 'root@172.19.0.11'\nPlease provide the password for 'root@172.19.0.11': \n...\n\n172.19.0.11 JS&gt; var cluster = dba.createCluster('pocCluster', {ipWhitelist: \"172.19.0.0\/16\"})\nA new InnoDB cluster will be created on instance 'root@172.19.0.11:3306'.\n\nValidating instance at 172.19.0.11:3306...\n\nThis instance reports its own address as 9475cf8d5192\n\nInstance configuration is suitable.\nCreating InnoDB cluster 'pocCluster' on 'root@172.19.0.11:3306'...\nAdding Seed Instance...\n\nCluster successfully created. Use Cluster.addInstance() to add MySQL instances.\nAt least 3 instances are needed for the cluster to be able to withstand up to\none server failure.<\/pre>\n<p><em><strong>createCluster()<\/strong><\/em> prend comme param\u00e8tre le nom du cluster (pocCluster). Je peux lui passer \u00e9galement quelques information optionnelles comme la <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/group-replication-ip-address-whitelisting.html\" target=\"_blank\" rel=\"noopener noreferrer\">whitelist<\/a>.<\/p>\n<p>On peut v\u00e9rifier l&rsquo;\u00e9tat du n\u0153ud dans le cluster avec\u00a0<strong>status<em>()<\/em><\/strong> :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Etat du cluster\">172.19.0.11 JS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.11:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK_NO_TOLERANCE\", \n        \"statusText\": \"Cluster is NOT tolerant to any failures.\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.11:3306\"\n}<\/pre>\n<p><em><span style=\"text-decoration: underline;\">Notes<\/span> : Assure toi que ton\u00a0DNS (ou \/etc\/hosts) est\u00a0correctement configur\u00e9, sinon tu vas avoir des soucis de connections&#8230;<\/em><\/p>\n<p>L&rsquo;ajouts des\u00a0n\u0153uds suivant se fait avec <em><strong><a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#a92471821375405214215958ee924087f\" target=\"_blank\" rel=\"noopener noreferrer\">addInstance<\/a>()<\/strong><\/em>, il est n\u00e9anmoins conseill\u00e9 d&rsquo;ex\u00e9cuter <em><strong><a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#ae589dbf4e07c85108ab47b3735c3114a\" target=\"_blank\" rel=\"noopener noreferrer\">checkInstanceState<\/a>()<\/strong><\/em>\u00a0au pr\u00e9alable pour s&rsquo;assurer de la compatibilit\u00e9 des <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/replication-gtids-concepts.html\" target=\"_blank\" rel=\"noopener noreferrer\">GTID<\/a> sets :<\/p>\n<p><span style=\"text-decoration: underline;\">N\u0153ud 2<\/span><\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Ajout du deuxi\u00e8me n\u0153ud\">172.19.0.11 JS&gt; cluster.checkInstanceState('root@172.19.0.12:3306')\nAnalyzing the instance replication state...\nPlease provide the password for 'root@172.19.0.12:3306': \n\nThe instance 'root@172.19.0.12:3306' is valid for the cluster.\nThe instance is new to Group Replication.\n\n{\n    \"reason\": \"new\", \n    \"state\": \"ok\"\n}\n\n\n\nJS&gt; cluster.addInstance(\"root@172.19.0.12:3306\", {ipWhitelist: \"172.19.0.0\/16\"})\nA new instance will be added to the InnoDB cluster. Depending on the amount of\ndata on the cluster this might take from a few seconds to several hours.\n\nAdding instance to the cluster ...\n\nPlease provide the password for 'root@172.19.0.12:3306': \nValidating instance at 172.19.0.12:3306...\n\nThis instance reports its own address as mysql_8.0_node2\n\nInstance configuration is suitable.\nThe instance 'root@172.19.0.12:3306' was successfully added to the cluster.\n\n\n\nJS&gt; cluster.status();\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.11:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK_NO_TOLERANCE\", \n        \"statusText\": \"Cluster is NOT tolerant to any failures.\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.11:3306\"\n}<\/pre>\n<p>Au cas o\u00f9 l&rsquo;instance ajout\u00e9e contient plus de transactions que le groupe <a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#ae589dbf4e07c85108ab47b3735c3114a\" target=\"_blank\" rel=\"noopener noreferrer\"><em>checkInstanceState<\/em><\/a>\u00a0le fait savoir :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"N\u0153ud invalide ne pouvant pas faire parti du cluster\">JS&gt; cluster.checkInstanceState('root@172.19.0.12:3306')\nAnalyzing the instance replication state...\nPlease provide the password for 'root@172.19.0.12:3306': \n\nThe instance '172.19.0.12:3306' is invalid for the cluster.\nThe instance contains additional transactions in relation to the cluster.\n\n{\n    \"reason\": \"diverged\", \n    \"state\": \"error\"\n}<\/pre>\n<p>En fonction du contexte, il faut alors soit restaurer une sauvegarde d&rsquo;un membre du cluster sur l&rsquo;instance probl\u00e9matique (celle qui diverge) ou alors si <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/replication-gtids-failover.html\" target=\"_blank\" rel=\"noopener noreferrer\"><span style=\"text-decoration: underline;\">tu sais ce que tu fais<\/span><\/a>, une <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/replication-gtids-failover.html#replication-gtids-failover-empty\" target=\"_blank\" rel=\"noopener noreferrer\">synchronisation des GTIDs<\/a> est toujours possible, voir un <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/reset-master.html\" target=\"_blank\" rel=\"noopener noreferrer\">reset master<\/a>.<\/p>\n<p><span style=\"text-decoration: underline;\">N\u0153ud 3<\/span><\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Ajout du troisi\u00e8me n\u0153ud\">172.19.0.11 JS&gt; cluster.checkInstanceState('root@172.19.0.13:3306')\nAnalyzing the instance replication state...\nPlease provide the password for 'root@172.19.0.13:3306': \n\nThe instance 'root@172.19.0.13:3306' is valid for the cluster.\nThe instance is new to Group Replication.\n\n{\n    \"reason\": \"new\", \n    \"state\": \"ok\"\n}\n\n\n\nJS&gt; cluster.addInstance(\"root@172.19.0.13:3306\", {ipWhitelist: \"172.19.0.0\/16\"})\nA new instance will be added to the InnoDB cluster. Depending on the amount of\ndata on the cluster this might take from a few seconds to several hours.\n\nAdding instance to the cluster ...\n\nPlease provide the password for 'root@172.19.0.13:3306': \nValidating instance at 172.19.0.13:3306...\n\nThis instance reports its own address as mysql_8.0_node3\n\nInstance configuration is suitable.\nThe instance 'root@172.19.0.13:3306' was successfully added to the cluster.\n<\/pre>\n<p>Et le r\u00e9sultat final:<\/p>\n<pre class=\"lang:mysql decode:true \" title=\"3 nodes cluster is configured\">JS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.11:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.11:3306\"\n}<\/pre>\n<p><a href=\"https:\/\/media1.tenor.com\/images\/e6ca17bcb0b1c3b0de3def3dc80527b0\/tenor.gif?itemid=9880513\" target=\"_blank\" rel=\"noopener noreferrer\">Et voil\u00e0<\/a>!<\/p>\n<p>Un Cluster <strong>MySQL Group Replication<\/strong> de 3 n\u0153uds facilement et rapidement d\u00e9ploy\u00e9 gr\u00e2ce \u00e0 <strong>MySQL Shell<\/strong>, c&rsquo;est \u00e7a <strong>MySQL InnoDB Cluster<\/strong> (enfin presque, il manque encore un \u00e9l\u00e9ment).<\/p>\n<p>La configuration actuelle est la suivante:<\/p>\n<ul>\n<li>N\u0153ud 1 (mysql_8.0_node1) = 172.19.0.11 : Primaire (lecture\/\u00e9criture)<\/li>\n<li>N\u0153ud 2 (mysql_8.0_node2) = 172.19.0.12 : Secondaire (lecture seule)<\/li>\n<li>N\u0153ud 3 (mysql_8.0_node3) = 172.19.0.13 : Secondaire (lecture seule)<\/li>\n<\/ul>\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=BPSiB2vyHLU\" target=\"_blank\" rel=\"noopener noreferrer\">Et qu&rsquo;est ce que l&rsquo;on fait maintenant<\/a> ???<\/p>\n<p>Le Router !<\/p>\n<p>Le Router !<\/p>\n<p>Le Router !<\/p>\n<h2>Configuration de MySQL Router<\/h2>\n<p>Il est recommand\u00e9 d&rsquo;installer MySQL Router sur la machine h\u00f4te de l&rsquo;application, je vais donc suivre cette recommandation et l&rsquo;installer sur la machine\u00a0192.168.1.11.<\/p>\n<p><em><span style=\"text-decoration: underline;\">Note<\/span>: Si tu ne peux (veux) pas mettre MySQL Router sur l&rsquo;application, il va alors te falloir g\u00e9rer le HA du Router. Plusieurs solutions sont envisageables comme :<\/em><\/p>\n<ul>\n<li><em><a href=\"https:\/\/lefred.be\/content\/mysql-router-ha-with-pacemaker\/\" target=\"_blank\" rel=\"noopener noreferrer\">Pacemaker<\/a><\/em><\/li>\n<li><em><a href=\"https:\/\/lefred.be\/content\/mysql-router-ha-with-keepalived\/\" target=\"_blank\" rel=\"noopener noreferrer\">Keepalived<\/a><\/em><\/li>\n<\/ul>\n<h3>Bootstrap MySQL Router<\/h3>\n<p>La premi\u00e8re \u00e9tape est le <a href=\"https:\/\/dev.mysql.com\/doc\/mysql-router\/8.0\/en\/mysql-router-deploying-bootstrapping.html\" target=\"_blank\" rel=\"noopener noreferrer\">bootstrap<\/a>, c&rsquo;est \u00e0 dire cr\u00e9er un lien entre MySQL Router et le cluster. Il faut donc fournir \u00e0 <a href=\"https:\/\/dev.mysql.com\/doc\/mysql-router\/8.0\/en\/\" target=\"_blank\" rel=\"noopener noreferrer\">mysqlrouter<\/a> au minimum l&rsquo;adresse d&rsquo;un membre du cluster :<\/p>\n<pre class=\"lang:sh decode:true\" title=\"Bootstrap of MySQL Router\">daz@192.168.1.11:~$ mysqlrouter --bootstrap root@172.19.0.11:3306 --conf-base-port 3306 --directory ~\/routerConf\/RouterPoC2\nPlease enter MySQL password for root: \n\nBootstrapping MySQL Router instance at '\/home\/daz\/routerConf\/RouterPoC2'...\nChecking for old Router accounts\nCreating account mysql_router6_2ah00zu9lfdj@'%'\nMySQL Router  has now been configured for the InnoDB cluster 'pocCluster'.\n\nThe following connection information can be used to connect to the cluster.\n\nClassic MySQL protocol connections to cluster 'pocCluster':\n- Read\/Write Connections: localhost:3306\n- Read\/Only Connections: localhost:3307\nX protocol connections to cluster 'pocCluster':\n- Read\/Write Connections: localhost:3308\n- Read\/Only Connections: localhost:3309<\/pre>\n<p><em><span style=\"text-decoration: underline;\">Note<\/span>: il se peut que tu rencontres un probl\u00e8me de permission. Probablement d\u00fb \u00e0 la configuration de\u00a0<a href=\"https:\/\/fr.wikipedia.org\/wiki\/AppArmor\" target=\"_blank\" rel=\"noopener noreferrer\">AppArmor<\/a>&#8230; Google (ou \u00e9quivalent) est ton ami \ud83d\ude42 (si tu es sous Ubuntu <a href=\"https:\/\/doc.ubuntu-fr.org\/apparmor\" target=\"_blank\" rel=\"noopener noreferrer\">clic ici<\/a>)<\/em><\/p>\n<p>J&rsquo;ai cr\u00e9\u00e9 une configuration diff\u00e9rente de celle par d\u00e9faut, en personnalisant avec quelques options:<\/p>\n<ul>\n<li><a href=\"https:\/\/dev.mysql.com\/doc\/mysql-router\/8.0\/en\/mysqlrouter.html#option_mysqlrouter_conf-base-port\" target=\"_blank\" rel=\"noopener noreferrer\">conf-base-port<\/a> : le port propos\u00e9 par d\u00e9faut est <strong>6446<\/strong> pour la lecture\/\u00e9criture. Dans mon cas, je veux utiliser le c\u00e9l\u00e8bre port <strong>3306<\/strong>.<\/li>\n<li><a href=\"https:\/\/dev.mysql.com\/doc\/mysql-router\/8.0\/en\/mysqlrouter.html#option_mysqlrouter_directory\" target=\"_blank\" rel=\"noopener noreferrer\">directory<\/a>\u00a0: histoire de ranger tout le bazar de cette instance de Router dans le r\u00e9pertoire sp\u00e9cifi\u00e9.<\/li>\n<\/ul>\n<p>La liste compl\u00e8te des options est disponible <a href=\"https:\/\/dev.mysql.com\/doc\/mysql-router\/8.0\/en\/mysqlrouter.html#option_mysqlrouter_directory\" target=\"_blank\" rel=\"noopener noreferrer\">ici<\/a>.<\/p>\n<p>Pour r\u00e9sumer, 4 ports TCP ont \u00e9t\u00e9 configur\u00e9s, dont 2 pour les connexions MySQL traditionnelles:<\/p>\n<p>3306 (au lieu de 6446 par d\u00e9faut) : lectures \/ \u00e9critures pour le n\u0153ud primaire<br \/>\n3307 (au lieu de 6447 par d\u00e9faut) : lectures seules pour les n\u0153uds secondaires (en Round-Robin)<br \/>\nEt le pendant pour les connexions avec le protocole X (3308 &amp; 3309 (au lieu de respectivement 64460 &amp; 64470)), pour une utilisation <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/document-store.html\" target=\"_blank\" rel=\"noopener noreferrer\">NoSQL Document Store<\/a> de MySQL.<\/p>\n<p>Le fichier de configuration de MySQL Router contient quelques informations importantes, tel que le(s) port(s) \u00e0 utiliser par l&rsquo;application (comme vu pr\u00e9c\u00e9demment) :<\/p>\n<pre class=\"lang:sh decode:true\" title=\"MySQL Router config file content\">daz@192.168.1.11:~$ cat ~\/routerConf\/RouterPoC2\/mysqlrouter.conf \n# File automatically generated during MySQL Router bootstrap\n[DEFAULT]\nlogging_folder=\/home\/daz\/routerConf\/RouterPoC2\/log\nruntime_folder=\/home\/daz\/routerConf\/RouterPoC2\/run\ndata_folder=\/home\/daz\/routerConf\/RouterPoC2\/data\nkeyring_path=\/home\/daz\/routerConf\/RouterPoC2\/data\/keyring\nmaster_key_path=\/home\/daz\/routerConf\/RouterPoC2\/mysqlrouter.key\nconnect_timeout=30\nread_timeout=30\n\n[logger]\nlevel = INFO\n\n[metadata_cache:pocCluster]\nrouter_id=6\nbootstrap_server_addresses=mysql:\/\/172.19.0.11:3306,mysql:\/\/172.19.0.12:3306,mysql:\/\/172.19.0.13:3306\nuser=mysql_router6_2ah00zu9lfdj\nmetadata_cluster=pocCluster\nttl=0.5\n\n[routing:pocCluster_default_rw]\nbind_address=0.0.0.0\nbind_port=3306\ndestinations=metadata-cache:\/\/pocCluster\/default?role=PRIMARY\nrouting_strategy=round-robin\nprotocol=classic\n\n[routing:pocCluster_default_ro]\nbind_address=0.0.0.0\nbind_port=3307\ndestinations=metadata-cache:\/\/pocCluster\/default?role=SECONDARY\nrouting_strategy=round-robin\nprotocol=classic\n\n[routing:pocCluster_default_x_rw]\nbind_address=0.0.0.0\nbind_port=3308\ndestinations=metadata-cache:\/\/pocCluster\/default?role=PRIMARY\nrouting_strategy=round-robin\nprotocol=x\n\n[routing:pocCluster_default_x_ro]\nbind_address=0.0.0.0\nbind_port=3309\ndestinations=metadata-cache:\/\/pocCluster\/default?role=SECONDARY\nrouting_strategy=round-robin\nprotocol=x<\/pre>\n<p>Il est \u00e9videmment possible de modifier ce fichier.<\/p>\n<p>Ensuite, il faut d\u00e9marrer MySQL Router avec le script <strong>start.sh<\/strong><\/p>\n<pre class=\"lang:sh decode:true\">daz@192.168.1.11:~$ ~\/routerConf\/RouterPoC2\/start.sh<\/pre>\n<p>L&rsquo;arr\u00eat du Router se fait avec le script <strong>stop.sh<\/strong>\u00a0(mais tu l&rsquo;avais devin\u00e9)<\/p>\n<pre class=\"lang:sh decode:true\">daz@192.168.1.11:~$ ~\/routerConf\/RouterPoC2\/stop.sh<\/pre>\n<p>Voil\u00e0 pour le Router !<\/p>\n<p>C&rsquo;est termin\u00e9 pour la phase de d\u00e9ploiement du cluster.<\/p>\n<p><strong>Simple<\/strong>, <strong>rapide<\/strong> et surtout <strong>facilement automatisable<\/strong>, tels sont les principales caract\u00e9ristiques de <strong>MySQL InnoDB Cluster<\/strong>. Qualit\u00e9s qui constituent le c\u0153ur m\u00eame de l&rsquo;ADN de MySQL.<\/p>\n<hr \/>\n<h2>Se connecter au cluster<\/h2>\n<p>A partir de maintenant, ton cluster est <em>\u00ab\u00a0up and running\u00a0\u00bb<\/em>.\u00a0Ton application va donc devoir se connecter au port 3306 (car on l&rsquo;a configur\u00e9 comme cela, sinon c&rsquo;est 6446 par d\u00e9faut) pour utiliser la base de donn\u00e9e. D&rsquo;ailleurs du point de vue de l&rsquo;application, la base de donn\u00e9e est MySQL Router, sauf qu&rsquo;en r\u00e9alit\u00e9 ce n&rsquo;est pas 1 instance, mais bel et bien 3 instances MySQL qui sont en <em>backend<\/em>\u00a0et ceci en toute transparence \\o\/.<\/p>\n<p>La partie <span style=\"text-decoration: underline;\">utilisation du cluster<\/span>\u00a0est hors du scope de cet article, mais on peut facilement simuler le comportement de l&rsquo;application avec un client MySQL et MySQL router.<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Connexion au Router\">daz@192.168.1.11:~$ mysqlsh --uri=root@localhost:3306 --sql\nSQL&gt; SELECT @@report_host;\n+-----------------+\n| @@report_host   |\n+-----------------+\n| mysql_8.0_node1 |\n+-----------------+<\/pre>\n<p>Je me connecte avec MySQL Shell en mode SQL (\u00e7a c&rsquo;est l&rsquo;applicatif), au cluster (\u00e0 <strong>mysql_8.0_node1<\/strong>, n\u0153ud primaire InnoDB Cluster), par l&rsquo;interm\u00e9diaire de MySQL Router en localhost (car je suis sur la machine 192.168.1.11) sur le port 3306.<\/p>\n<p>Le param\u00e8tre <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/replication-options-slave.html#option_mysqld_report-host\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>report_host<\/strong><\/a>\u00a0(d\u00e9fini dans mon fichier de configuration) me renvoi la valeur du\u00a0 n\u0153ud 1, le primaire.<\/p>\n<p>En cas d&rsquo;arr\u00eat du primaire, un nouveau va \u00eatre automatiquement \u00e9lu par le cluster (voir paragraphe failover plus bas) est la m\u00eame commande me donnera un r\u00e9sultat diff\u00e9rent:<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Automatic Failover\">-- \/!\\ Stop Node 1\nSQL&gt; SELECT @@report_host;\nERROR: 2013 (HY000): Lost connection to MySQL server during query\nThe global session got disconnected..\nAttempting to reconnect to 'mysql:\/\/root@localhost:3306'..\nThe global session was successfully reconnected.\n\nSQL&gt; SELECT @@report_host;\n+-----------------+\n| @@report_host   |\n+-----------------+\n| mysql_8.0_node2 |\n+-----------------+<\/pre>\n<h2>Gestion des n\u0153uds<\/h2>\n<p>Quelques commandes qui vont te simplifier la vie&#8230;<\/p>\n<h3>Performance_Schema<\/h3>\n<p>Quelques informations sont disponibles en SQL au niveau des instances.<\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Identifier le n\u0153ud primaire<\/strong><\/span><\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Who is the primary node (SQL)\">SQL&gt; SELECT * \nFROM performance_schema.replication_group_members \nWHERE MEMBER_ROLE='PRIMARY'\\G\n*************************** 1. row ***************************\n  CHANNEL_NAME: group_replication_applier\n     MEMBER_ID: 99bc527e-9f00-11e8-910a-0242ac13000b\n   MEMBER_HOST: mysql_8.0_node1\n   MEMBER_PORT: 3306\n  MEMBER_STATE: ONLINE\n   MEMBER_ROLE: PRIMARY\nMEMBER_VERSION: 8.0.12<\/pre>\n<p><span style=\"text-decoration: underline;\"><strong>Description des membres du cluster<\/strong><\/span><\/p>\n<pre class=\"lang:mysql decode:true \" title=\"Cluster members description\">SQL&gt; SELECT * FROM performance_schema.replication_group_members\\G\n*************************** 1. row ***************************\n  CHANNEL_NAME: group_replication_applier\n     MEMBER_ID: 99bc527e-9f00-11e8-910a-0242ac13000b\n   MEMBER_HOST: mysql_8.0_node1\n   MEMBER_PORT: 3306\n  MEMBER_STATE: ONLINE\n   MEMBER_ROLE: PRIMARY\nMEMBER_VERSION: 8.0.12\n*************************** 2. row ***************************\n  CHANNEL_NAME: group_replication_applier\n     MEMBER_ID: 9aceda6d-9f00-11e8-a3b3-0242ac13000c\n   MEMBER_HOST: mysql_8.0_node2\n   MEMBER_PORT: 3306\n  MEMBER_STATE: ONLINE\n   MEMBER_ROLE: SECONDARY\nMEMBER_VERSION: 8.0.12\n*************************** 3. row ***************************\n  CHANNEL_NAME: group_replication_applier\n     MEMBER_ID: 9c362837-9f00-11e8-b6a1-0242ac13000d\n   MEMBER_HOST: mysql_8.0_node3\n   MEMBER_PORT: 3306\n  MEMBER_STATE: ONLINE\n   MEMBER_ROLE: SECONDARY\nMEMBER_VERSION: 8.0.12<\/pre>\n<h3>R\u00e9cup\u00e9rer les m\u00e9ta-donn\u00e9es d&rsquo;un cluster<\/h3>\n<p>Les m\u00e9ta-donn\u00e9es du cluster sont stock\u00e9es sur les membres dans le sch\u00e9ma\u00a0<strong><em>mysql_innodb_cluster_metadata<\/em><\/strong> :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Exploration du sch\u00e9ma mysql_innodb_cluster_metadata\">daz@172.19.0.11 ~ $ mysqlsh --uri root@172.19.0.11 --sql\n...\n\nSQL&gt; SHOW SCHEMAS;\n+-------------------------------+\n| Database                      |\n+-------------------------------+\n| information_schema            |\n| mysql                         |\n| mysql_innodb_cluster_metadata |\n| performance_schema            |\n| sys                           |\n+-------------------------------+\n\n\nSQL&gt; SHOW TABLES IN mysql_innodb_cluster_metadata;\n+-----------------------------------------+\n| Tables_in_mysql_innodb_cluster_metadata |\n+-----------------------------------------+\n| clusters                                |\n| hosts                                   |\n| instances                               |\n| replicasets                             |\n| routers                                 |\n| schema_version                          |\n+-----------------------------------------+\n\n\nSQL&gt; SELECT cluster_name FROM mysql_innodb_cluster_metadata.clusters;\n+--------------+\n| cluster_name |\n+--------------+\n| pocCluster   |\n+--------------+\n\n\nSQL &gt; SELECT host_name FROM mysql_innodb_cluster_metadata.hosts;\n+-----------------+\n| host_name       |\n+-----------------+\n| 172.19.0.11     |\n| 172.19.0.12     |\n| 172.19.0.13     |\n| 192.168.1.11    |\n+-----------------+\n<\/pre>\n<p>Pour r\u00e9cup\u00e9rer les informations relatives \u00e0 l&rsquo;\u00e9tat du cluster dans une nouvelle session il faut utiliser la m\u00e9thode dba.<a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_dba.html#a57075d1355767e1d3b433c7e21bb0cd5\" target=\"_blank\" rel=\"noopener noreferrer\"><strong><em>getCluster<\/em><\/strong><\/a> :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Etat du cluster, dans une nouvelle session\">JS&gt; cluster.status()\nReferenceError: cluster is not defined\n\n\nJS&gt; var cluster = dba.getCluster('pocCluster')\n\n\nJS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.11:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.11:3306\"\n}<\/pre>\n<h3>Failover<\/h3>\n<p>Le basculement niveau base de donn\u00e9es (changement de primaire) est automatiquement g\u00e9r\u00e9 par les membres du cluster entre eux.<\/p>\n<pre class=\"lang:mysql decode:true\">JS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.11:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.11:3306\"\n}<\/pre>\n<p>Crash du noeud primaire (172.19.0.11)&#8230;<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Failover automatique\">JS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.13:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK_NO_TOLERANCE\", \n        \"statusText\": \"Cluster is NOT tolerant to any failures. 1 member is not active\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"(MISSING)\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.13:3306\"\n}<\/pre>\n<p>Nouveau primaire \u00e9lu par le groupe :\u00a0172.19.0.13.<\/p>\n<p>Et 172.19.0.11 est port\u00e9 disparu (<a href=\"https:\/\/www.youtube.com\/watch?v=7ceNf9qJjgc\" target=\"_blank\" rel=\"noopener noreferrer\">MIA<\/a>).<\/p>\n<p>Les donn\u00e9es configuration cluster \u00e9tant sauvegard\u00e9es, une fois le n\u0153ud red\u00e9marr\u00e9\/r\u00e9par\u00e9\/restaur\u00e9 il fera automatiquement parti du cluster \u00e0 nouveau. et il aura un r\u00f4le de secondaire.<\/p>\n<p>En cas de configuration non persistante, un <em><strong><a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#af56d9116e7d6d0cd111aed0971163248\" target=\"_blank\" rel=\"noopener noreferrer\">rejoinInstance<\/a>()<\/strong><\/em> est n\u00e9cessaire pour remettre le n\u0153ud dans le cluster. (voir paragraphe suivant\u00a0<strong><span style=\"text-decoration: underline;\">Remettre un membre dans le groupe<\/span><\/strong>).<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Etat du cluster\">JS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.13:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.13:3306\"\n}<\/pre>\n<h3>Remettre un membre dans le groupe<\/h3>\n<p>N\u00e9cessaire si la conf n&rsquo;est pas persistante ou si la variable\u00a0<strong>group_replication_start_on_boot<\/strong> = OFF.<\/p>\n<p>Le n\u0153ud peut alors \u00eatre remit dans le groupe avec la commande <em><strong>rejoinInstance()<\/strong><\/em> :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Rejoin Instance\">JS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.12:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK_NO_TOLERANCE\", \n        \"statusText\": \"Cluster is NOT tolerant to any failures. 1 member is not active\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"(MISSING)\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.12:3306\"\n}\n\n\n\nJS&gt; dba.checkInstanceConfiguration('root@172.19.0.13:3306')\nPlease provide the password for 'root@172.19.0.13:3306': \nValidating MySQL instance at 172.19.0.13:3306 for use in an InnoDB cluster...\n \nThis instance reports its own address as mysql_8.0_node3\n\nChecking whether existing tables comply with Group Replication requirements...\nNo incompatible tables detected\n\nChecking instance configuration...\nNote: verifyMyCnf option was not given so only dynamic configuration will be verified.\nInstance configuration is compatible with InnoDB cluster\n\nThe instance '172.19.0.13:3306' is valid for InnoDB cluster usage.\n\n{\n    \"status\": \"ok\"\n}\n\n\n\nJS&gt; cluster.rejoinInstance(\"root@172.19.0.13:3306\", {ipWhitelist: \"172.19.0.0\/16\"})\nRejoining the instance to the InnoDB cluster. Depending on the original\nproblem that made the instance unavailable, the rejoin operation might not be\nsuccessful and further manual steps will be needed to fix the underlying\nproblem.\n\nPlease monitor the output of the rejoin operation and take necessary action if\nthe instance cannot rejoin.\n\nRejoining instance to the cluster ...\n\nPlease provide the password for 'root@172.19.0.13:3306': \nThe instance '172.19.0.13:3306' was successfully rejoined on the cluster.\n\n\n\nJS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.12:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.12:3306\"\n}<\/pre>\n<h3>Supprimer\u00a0une instance du groupe<\/h3>\n<p>Sans grande surprise, c&rsquo;est la commande\u00a0<strong><em>removeInstance<\/em><\/strong><\/p>\n<pre class=\"lang:mysql decode:true\">JS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.12:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.12:3306\"\n}\n\n\n\nJS&gt; cluster.removeInstance(\"root@172.19.0.13:3306\")\nThe instance will be removed from the InnoDB cluster. Depending on the instance\nbeing the Seed or not, the Metadata session might become invalid. If so, please\nstart a new session to the Metadata Storage R\/W instance.\n\nAttempting to leave from the Group Replication group...\n\nThe instance '172.19.0.13:3306' was successfully removed from the cluster.\n\n\n\nJS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.12:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK_NO_TOLERANCE\", \n        \"statusText\": \"Cluster is NOT tolerant to any failures.\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.12:3306\"\n}<\/pre>\n<p>L&rsquo;instance n&rsquo;est alors plus list\u00e9e dans les m\u00e9ta-donn\u00e9es :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"V\u00e9rifier la liste des instances directement dans les m\u00e9ta-donn\u00e9es\">JS&gt; \\sql\nSwitching to SQL mode... Commands end with ;\n\n\nSQL&gt; SELECT instance_name FROM mysql_innodb_cluster_metadata.instances;\n+------------------+\n| instance_name    |\n+------------------+\n| 172.19.0.11:3306 |\n| 172.19.0.12:3306 |\n+------------------+<\/pre>\n<p>Pour la remettre dans le groupe, il faut donc rejouer le processus de l&rsquo;ajout d&rsquo;instance vu plus haut :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Ajout d'un autre n\u0153ud \">JS&gt; cluster.checkInstanceState('root@172.19.0.13:3306')\nAnalyzing the instance replication state...\nPlease provide the password for 'root@172.19.0.13:3306': \n\nThe instance 'root@172.19.0.13:3306' is valid for the cluster.\nThe instance is fully recoverable.\n\n{\n    \"reason\": \"recoverable\", \n    \"state\": \"ok\"\n}\n\n\n\nJS&gt; cluster.addInstance(\"root@172.19.0.13:3306\", {ipWhitelist: \"172.19.0.0\/16\"})\nA new instance will be added to the InnoDB cluster. Depending on the amount of\ndata on the cluster this might take from a few seconds to several hours.\n\nAdding instance to the cluster ...\n\nPlease provide the password for 'root@172.19.0.13:3306': \nValidating instance at 172.19.0.13:3306...\n\nThis instance reports its own address as mysql_8.0_node3\n\nInstance configuration is suitable.\nThe instance 'root@172.19.0.13:3306' was successfully added to the cluster.\n\n\n\nJS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.12:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.12:3306\"\n}<\/pre>\n<h3>Perte du quorum<\/h3>\n<p>Si le cluster perd plus de la moiti\u00e9 de ses membres (<strong>crash<\/strong> ou <strong><a href=\"https:\/\/fr.wikipedia.org\/wiki\/Split-brain\" target=\"_blank\" rel=\"noopener noreferrer\">split brain<\/a><\/strong> par exemple) il se retrouve dans un \u00e9tat assez d\u00e9sagr\u00e9able, <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/group-replication-network-partitioning.html\" target=\"_blank\" rel=\"noopener noreferrer\">network partitioning<\/a>, en clair il faut une intervention externe au cluster pour permettre aux membres restant de continuer \u00e0 faire leur boulot.<\/p>\n<p><em><span style=\"text-decoration: underline;\">Note<\/span>: Par perte j&rsquo;entend arr\u00eat non pr\u00e9vu (crash). En cas d&rsquo;arr\u00eat normal ou propre, m\u00eame si le cluster perd son quorum (dans ce cas pr\u00e9sent arr\u00eat normal de 2 n\u0153uds), le n\u0153ud restant sait que les autres n\u0153uds ne sont plus l\u00e0 (en clair pas de risque de split brain) donc le cluster continue de fonctionner. Mais c&rsquo;est un cluster avec un seul n\u0153ud&#8230;\u00a0<\/em><\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Cluster avec un seul n\u0153ud actif\">JS &gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.12:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK_NO_TOLERANCE\", \n        \"statusText\": \"Cluster is NOT tolerant to any failures. 2 members are not active\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"(MISSING)\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"(MISSING)\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.12:3306\"\n}<\/pre>\n<p>Dans notre cas, avec 3 instances, il faut en perdre\u00a02 d&rsquo;un coup :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Cluster status\">JS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.12:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.12:3306\"\n}<\/pre>\n<p>Perte des n\u0153uds (crash) 172.19.0.11 &amp;\u00a0172.19.0.12&#8230; \u00a0(Mayday, Mayday, Mayday!!!)<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Cluster sans quorum\">JS&gt; cluster.status()\nWARNING: Cluster has no quorum and cannot process write transactions: 2 out of 3 members of the InnoDB cluster are unreachable from the member we\u2019re connected to, which is not sufficient for a quorum to be reached.\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.13:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"NO_QUORUM\", \n        \"statusText\": \"Cluster has no quorum as visible from '172.19.0.13:3306' and cannot process write transactions. 2 members are not active\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"UNREACHABLE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"UNREACHABLE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.13:3306\"\n}<\/pre>\n<p>Le failover automatique ne peut pas s\u2019enclencher, le n\u0153ud survivant (172.19.0.13) est <strong>bloqu\u00e9<\/strong>.<\/p>\n<p>Il faut donc intervenir :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Perte du quorum\">$ mysqlsh --uri=root@172.19.0.13\nCreating a Session to 'root@172.19.0.13'\n...\n\n\nJS&gt; var cluster = dba.getCluster(\"pocCluster\")\nWARNING: Cluster has no quorum and cannot process write transactions: 2 out of 3 members of the InnoDB cluster are unreachable from the member we\u2019re connected to, which is not sufficient for a quorum to be reached.\n\n\n\nJS&gt;  cluster.forceQuorumUsingPartitionOf('root@172.19.0.13:3306')\nRestoring replicaset 'default' from loss of quorum, by using the partition composed of [172.19.0.13:3306]\n\nRestoring the InnoDB cluster ...\n\nPlease provide the password for 'root@172.19.0.13:3306': \nThe InnoDB cluster was successfully restored using the partition from the instance 'root@172.19.0.13:3306'.\n\nWARNING: To avoid a split-brain scenario, ensure that all other members of the replicaset are removed or joined back to the group that was restored.\n\n\n\nJS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.13:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK_NO_TOLERANCE\", \n        \"statusText\": \"Cluster is NOT tolerant to any failures. 2 members are not active\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"(MISSING)\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"(MISSING)\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.13:3306\"\n}<\/pre>\n<p>Evidemment, sauf si tu es joueur \ud83d\ude42 , il faut \u00e9viter de rester trop longtemps dans cet \u00e9tat.<\/p>\n<p>Une fois les instances remisent en condition, il faut soit simplement les d\u00e9marrer ou alors utiliser\u00a0<em><strong>rejoinInstance()<\/strong><\/em>\u00a0pour les remettre dans le cluster, en tant que secondaire.<\/p>\n<pre class=\"lang:mysql decode:true\">JS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.13:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.13:3306\"\n}<\/pre>\n<h3>Repartir apr\u00e8s un arr\u00eat total du cluster<\/h3>\n<p>La perte du quorum est une chose, mais il y a pire, perdre tout les n\u0153uds&#8230;<\/p>\n<p>En cas d&rsquo;arr\u00eat total du cluster i.e. toutes les instances\u00a0sont \u00e9teintes, il faut utiliser, une fois les instances MySQL de nouveau d\u00e9marr\u00e9es \u00a0<strong><em>rebootClusterFromCompleteOutage()<\/em><\/strong> :<\/p>\n<pre class=\"lang:mysql decode:true\" title=\"Reboot du cluster - erreur\">$ mysqlsh root@172.19.0.12:3306\n...\n\nJS &gt; var cluster = dba.rebootClusterFromCompleteOutage('pocCluster')\nReconfiguring the cluster 'pocCluster' from complete outage...\n\nThe instance '172.19.0.11:3306' was part of the cluster configuration.\nWould you like to rejoin it to the cluster? [y\/N]: y\n\nThe instance '172.19.0.13:3306' was part of the cluster configuration.\nWould you like to rejoin it to the cluster? [y\/N]: y\n\nDba.rebootClusterFromCompleteOutage: The active session instance isn't the most updated in comparison with the ONLINE instances of the Cluster's metadata. Please use the most up to date instance: '172.19.0.13:3306'. (RuntimeError)<\/pre>\n<p>Le reboot doit se faire sur l&rsquo;instance la plus \u00e0 jour (ici la machine 172.19.0.13) :<\/p>\n<pre class=\"lang:mysql decode:true \" title=\"Reboot du cluster \">$ mysqlsh --uri=root@172.19.0.13\nCreating a session to 'root@172.19.0.13'\n...\n\nJS&gt; var cluster = dba.rebootClusterFromCompleteOutage('pocCluster')\nvar cluster = dba.rebootClusterFromCompleteOutage('pocCluster')\nReconfiguring the cluster 'pocCluster' from complete outage...\n\nThe instance '172.19.0.11:3306' was part of the cluster configuration.\nWould you like to rejoin it to the cluster? [y\/N]: y\n\nThe instance '172.19.0.12:3306' was part of the cluster configuration.\nWould you like to rejoin it to the cluster? [y\/N]: y\n\n\nThe cluster was successfully rebooted.\n\n\n\nJS&gt; cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"172.19.0.13:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"172.19.0.11:3306\": {\n                \"address\": \"172.19.0.11:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.12:3306\": {\n                \"address\": \"172.19.0.12:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }, \n            \"172.19.0.13:3306\": {\n                \"address\": \"172.19.0.13:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\"\n            }\n        }\n    }, \n    \"groupInformationSourceMember\": \"mysql:\/\/root@172.19.0.13:3306\"\n}<\/pre>\n<p>Le membre sur lequel la commande est ex\u00e9cut\u00e9e est le nouveau primaire.<\/p>\n<p>Voil\u00e0 c&rsquo;est tout pour aujourd&rsquo;hui \ud83d\ude42<\/p>\n<p>Dans la m\u00eame th\u00e9matique :<\/p>\n<ul>\n<li><a title=\"Tutoriel \u2013 D\u00e9ployer MySQL 8.0 InnoDB Cluster (09-2019)\" href=\"http:\/\/dasini.net\/blog\/2019\/09\/03\/tutoriel-deployer-mysql-8-0-innodb-cluster-09-2019\/\" target=\"_blank\" rel=\"noopener noreferrer\">Tutoriel \u2013 D\u00e9ployer MySQL 8.0 InnoDB Cluster (09-2019)<\/a><\/li>\n<li><a href=\"http:\/\/dasini.net\/blog\/2019\/04\/11\/mysql-innodb-cluster-howto-1-monitor-your-cluster\/\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL InnoDB Cluster \u2013 HowTo #1 \u2013 Monitor your cluster<\/a><\/li>\n<li><a href=\"http:\/\/dasini.net\/blog\/2019\/05\/21\/mysql-innodb-cluster-howto-2-validate-an-instance\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">MySQL InnoDB Cluster \u2013 HowTo #2 \u2013 Validate an instance<\/a><\/li>\n<li><a href=\"http:\/\/dasini.net\/blog\/2018\/08\/21\/tutoriel-deployer-mysql-5-7-innodb-cluster\/\" target=\"_blank\" rel=\"noopener noreferrer\">Tutoriel \u2013 D\u00e9ployer MySQL 5.7 InnoDB Cluster<\/a><\/li>\n<li><a href=\"https:\/\/lefred.be\/content\/mysql-innodb-cluster-is-the-router-a-single-point-of-failure\/\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL InnoDB Cluster: is the router a single point of failure ?<\/a><\/li>\n<li><a href=\"https:\/\/lefred.be\/content\/mysql-router-ha-with-pacemaker\/\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL Router HA with Pacemaker<\/a><\/li>\n<li><a href=\"https:\/\/lefred.be\/content\/mysql-router-ha-with-keepalived\/\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL Router HA with Keepalived<\/a><\/li>\n<li><a href=\"http:\/\/dasini.net\/blog\/2016\/11\/08\/deployer-un-cluster-mysql-group-replication\" target=\"_blank\" rel=\"noopener noreferrer\">D\u00e9ployer un cluster MySQL Group Replication<\/a><\/li>\n<li><a href=\"https:\/\/mysqlrelease.com\/2018\/03\/docker-compose-setup-for-innodb-cluster\/\" target=\"_blank\" rel=\"noopener noreferrer\">D\u00e9ployer MySQL InnoDB Cluster avec Docker<\/a><\/li>\n<li><a href=\"http:\/\/dasini.net\/blog\/2017\/03\/03\/faq-webinar-mysql-group-replication\/\" target=\"_blank\" rel=\"noopener noreferrer\">FAQ Webinar MySQL Group Replication<\/a><\/li>\n<li><a href=\"http:\/\/dasini.net\/blog\/2017\/03\/13\/tester-mysql-innodb-cluster\/\" target=\"_blank\" rel=\"noopener noreferrer\">Tester MySQL InnoDB Cluster<\/a><\/li>\n<li><a href=\"http:\/\/dasini.net\/blog\/2017\/01\/11\/configurer-proxysql-pour-mysql-group-replication\/\" target=\"_blank\" rel=\"noopener noreferrer\">Configurer ProxySQL pour MySQL Group Replication<\/a><\/li>\n<li><a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/group-replication.html\" target=\"_blank\" rel=\"noopener noreferrer\">Doc &#8211; MySQL Group Replication<\/a><\/li>\n<li><a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/mysql-innodb-cluster-userguide.html\" target=\"_blank\" rel=\"noopener noreferrer\">Doc &#8211; MySQL InnoDB Cluster<\/a><\/li>\n<li><a href=\"https:\/\/www.slideshare.net\/freshdaz\/mysql-day-paris-2018-mysql-innodb-cluster-a-complete-high-availability-solution-for-mysql\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"Pr\u00e9sentation: MySQL InnoDB Cluster; A complete High Availability solution for MySQL (opens in a new tab)\">Pr\u00e9sentation: MySQL InnoDB Cluster; A complete High Availability solution for MySQL<\/a><\/li>\n<\/ul>\n<p>Video:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.youtube.com\/watch?v=YZzGsDjlWvk\" target=\"_blank\" rel=\"noopener noreferrer\">D\u00e9ployer 1 cluster <b>MySQL<\/b> avec seulement 2 commandes<\/a><\/li>\n<\/ul>\n<p>Articles connexes:<\/p>\n<ul>\n<li><a href=\"http:\/\/mysql.wisborg.dk\/2018\/08\/14\/mysql-8-0-persisted-variables\/\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL 8.0: Persisted Variables<\/a><\/li>\n<li><a href=\"https:\/\/lefred.be\/content\/mysql-innodb-cluster-8-0-12-abort_server\/\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL InnoDB Cluster 8.0.12 \u2013 avoid old reads on partitioned members<\/a><\/li>\n<li><a href=\"https:\/\/lefred.be\/content\/mysql-innodb-cluster-upgrade-from-8-0-11-to-8-0-12\/\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL InnoDB Cluster: upgrade from 8.0.11 to 8.0.12<\/a><\/li>\n<li><a href=\"https:\/\/mysqlserverteam.com\/getting-help-in-mysql-shell\/\" target=\"_blank\" rel=\"noopener noreferrer\">Getting Help in MySQL Shell<\/a><\/li>\n<li><a href=\"https:\/\/mysqlserverteam.com\/mysql-shell-8-0-12-whats-new\/\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL Shell 8.0.12 \u2013 What\u2019s New?<\/a><\/li>\n<li><a href=\"https:\/\/mysql.wisborg.dk\/2018\/08\/25\/mysql-shell-built-in-help\/\" target=\"_blank\" rel=\"noopener noreferrer\">MySQL Shell: Built-In Help<\/a><\/li>\n<\/ul>\n<p>Thanks for using MySQL!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Cela fait maintenant plus d&rsquo;un trimestre que MySQL 8.0 est GA (8.0.11; 8.0.12), il est temps que je t&rsquo;en parle \ud83d\ude42<\/p>\n<p>Dans la liste des besoins essentiels de mes clients se trouve la\u00a0Haute Disponibilit\u00e9 avec MySQL. On va voir, dans cet article, comment d\u00e9ployer et g\u00e9rer un cluster MySQL \u00ab\u00a0from scratch\u00a0\u00bb\u00a0, sous la forme d&rsquo;un tutoriel, gr\u00e2ce \u00e0 la solution HA tout en un : MySQL InnoDB Cluster.<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[282,316,288,8,337],"tags":[75,286,79,298,302,341],"class_list":["post-2587","post","type-post","status-publish","format-standard","hentry","category-group-replication","category-haute-disponibilite","category-innodb-cluster","category-mysql","category-tuto","tag-cluster","tag-group-replication","tag-haute-disponibilite","tag-innodb-cluster","tag-router","tag-tuto"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9LfWW-FJ","jetpack-related-posts":[{"id":3291,"url":"https:\/\/dasini.net\/blog\/2019\/07\/16\/deployer-1-cluster-mysql-avec-2-commandes\/","url_meta":{"origin":2587,"position":0},"title":"D\u00e9ployer 1 cluster MySQL avec 2 commandes","author":"Olivier DASINI","date":"16 juillet 2019","format":false,"excerpt":"Dans cette courte vid\u00e9o, je vais te montrer comment cr\u00e9er un cluster MySQL de 3 n\u0153uds avec simplement 2 commandes. https:\/\/www.youtube.com\/watch?v=YZzGsDjlWvk","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/YZzGsDjlWvk\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":1654,"url":"https:\/\/dasini.net\/blog\/2016\/10\/05\/mysql-a-oracle-openworld-2016\/","url_meta":{"origin":2587,"position":1},"title":"MySQL \u00e0 Oracle OpenWorld 2016","author":"Olivier DASINI","date":"5 octobre 2016","format":false,"excerpt":"R\u00e9sum\u00e9 des annonces MySQL \u00e0 Oracle OpenWorld: MySQL dans le cloud, MySQL Group Replication, MySQL InnoDB Cluster, MySQL 8,...","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/Oracle_MySQL_Cloud_Service.png?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":2959,"url":"https:\/\/dasini.net\/blog\/2019\/04\/08\/200\/","url_meta":{"origin":2587,"position":2},"title":"200 !!","author":"Olivier DASINI","date":"8 avril 2019","format":false,"excerpt":"Cet article est en fait mon 200 \u00e8me \\o\/ Pour marquer le coup, j'ai d\u00e9cid\u00e9 de faire une petite pause et de regarder dans le r\u00e9troviseur...","rel":"","context":"Dans &quot;Divers&quot;","block_context":{"text":"Divers","link":"https:\/\/dasini.net\/blog\/category\/divers\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":2549,"url":"https:\/\/dasini.net\/blog\/2018\/08\/21\/tutoriel-deployer-mysql-5-7-innodb-cluster\/","url_meta":{"origin":2587,"position":3},"title":"Tutoriel \u2013 D\u00e9ployer MySQL 5.7 InnoDB Cluster","author":"Olivier DASINI","date":"21 ao\u00fbt 2018","format":false,"excerpt":"L'un des principaux besoins de mes clients est la\u00a0Haute Disponibilit\u00e9 avec MySQL. On va voir, dans cet article, comment d\u00e9ployer et g\u00e9rer un cluster MySQL 5.7 \"from scratch\"\u00a0, sous la forme d'un tutoriel, gr\u00e2ce \u00e0 la solution HA tout en un : MySQL 5.7 InnoDB Cluster.","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication\/"},"img":{"alt_text":"MySQL InnoDB Cluster PoC Architecture","src":"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/MySQL_InnoDB_Cluster_Production_post2.png?resize=350%2C200","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/MySQL_InnoDB_Cluster_Production_post2.png?resize=350%2C200 1x, https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/MySQL_InnoDB_Cluster_Production_post2.png?resize=525%2C300 1.5x"},"classes":[]},{"id":4381,"url":"https:\/\/dasini.net\/blog\/2021\/02\/24\/webinar-la-haute-disponibilite-avec-mysql\/","url_meta":{"origin":2587,"position":4},"title":"Webinar \u2013  La haute disponibilit\u00e9 avec MySQL","author":"Olivier DASINI","date":"24 f\u00e9vrier 2021","format":false,"excerpt":"MySQL propose plusieurs solutions pour vous aider \u00e0 b\u00e2tir une architecture hautement disponible. On retrouve InnoDB Cluster avec Group Replication, la r\u00e9plication asynchrone en utilisant InnoDB ReplicaSet ou encore la r\u00e9plication semi-synchrone, et enfin NDB Cluster. Dans cette session, nous examinerons ces diff\u00e9rents sc\u00e9narios, les \u00e9cueils \u00e0 \u00e9viter et les\u2026","rel":"","context":"Dans &quot;MySQL&quot;","block_context":{"text":"MySQL","link":"https:\/\/dasini.net\/blog\/category\/mysql\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":4389,"url":"https:\/\/dasini.net\/blog\/2021\/03\/02\/slides-et-video-du-webinar-la-haute-disponibilite-avec-mysql\/","url_meta":{"origin":2587,"position":5},"title":"Slides et vid\u00e9o du webinar  La haute disponibilit\u00e9 avec MySQL","author":"Olivier DASINI","date":"2 mars 2021","format":false,"excerpt":"Le 25 f\u00e9vrier dernier, a eu lieu le webinar: La haute disponibilit\u00e9 avec MySQL. MERCI \u00e0 tous pour votre pr\u00e9sence ? La pr\u00e9sentation est disponible ici, sur mon compte slideshare:","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/S41Z-QMupHE\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]}],"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/posts\/2587","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/comments?post=2587"}],"version-history":[{"count":40,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/posts\/2587\/revisions"}],"predecessor-version":[{"id":3424,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/posts\/2587\/revisions\/3424"}],"wp:attachment":[{"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/media?parent=2587"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/categories?post=2587"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/tags?post=2587"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}