
{"id":3360,"date":"2019-09-10T08:39:25","date_gmt":"2019-09-10T07:39:25","guid":{"rendered":"http:\/\/dasini.net\/blog\/?p=3360"},"modified":"2019-09-10T08:39:31","modified_gmt":"2019-09-10T07:39:31","slug":"mysql-innodb-cluster-easy-recovering-and-provisioning","status":"publish","type":"post","link":"https:\/\/dasini.net\/blog\/2019\/09\/10\/mysql-innodb-cluster-easy-recovering-and-provisioning\/","title":{"rendered":"MySQL InnoDB Cluster &#8211; Easy Recovering and provisioning"},"content":{"rendered":"\n<div class=\"wp-block-image\"><figure class=\"aligncenter\"><img data-recalc-dims=\"1\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/Cloudy_Sky_by_olivier_dasini_1430x600.JPG\" alt=\"Cloudy Sky by Olivier DASINI\"\/><\/figure><\/div>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>As the administrator of a cluster, among other tasks, you should be able to restore failed nodes and grow (or shrink) your cluster by adding (or removing) new nodes<\/p><cite>Olivier DASINI<\/cite><\/blockquote>\n\n\n\n<p>Up to MySQL 8.0.16, to perform these tasks you could:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Use <strong><a rel=\"noreferrer noopener\" href=\"https:\/\/www.mysql.com\/products\/enterprise\/backup.html\" target=\"_blank\">MySQL Enterprise Backup<\/a><\/strong> : see <a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" href=\"http:\/\/dasini.net\/blog\/2019\/07\/11\/mysql-innodb-cluster-recovering-and-provisioning-with-mysql-enterprise-backup\/\" target=\"_blank\">MySQL InnoDB Cluster \u2013 Recovering and provisioning with MySQL Enterprise Backup<\/a><\/li><li>Use <a rel=\"noreferrer noopener\" href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/mysqldump.html\" target=\"_blank\"><strong>mysqldump<\/strong><\/a> : see <a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" 
href=\"http:\/\/dasini.net\/blog\/2019\/07\/09\/mysql-innodb-cluster-recovering-and-provisioning-with-mysqldump\/\" target=\"_blank\">MySQL InnoDB Cluster \u2013 Recovering and provisioning with mysqldump<\/a><\/li><\/ul>\n\n\n\n<p>Starting with <strong>MySQL 8.0.17<\/strong>, the easiest and most convenient method is to use the <a rel=\"noreferrer noopener\" aria-label=\"CLONE (opens in a new tab)\" href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/clone-plugin.html\" target=\"_blank\">CLONE<\/a> feature.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><u><em>Note<\/em><\/u>:<br>Regardless of the recovery and provisioning method, it is important to have a proven backup\/restore procedure.<\/p><p>I recommend reading the excellent blog post from my colleague&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/mysql.wisborg.dk\/\" target=\"_blank\">Jesper<\/a><em>&nbsp;\u2013&nbsp;<\/em><a rel=\"noreferrer noopener\" href=\"https:\/\/mysql.wisborg.dk\/2019\/03\/31\/mysql-backup-best-practices\/\" target=\"_blank\">MySQL Backup Best Practices<\/a>.<\/p><\/blockquote>\n\n\n\n\n\n<h2 class=\"wp-block-heading\" id=\"mce_6\">Context<\/h2>\n\n\n\n<p>Let&rsquo;s make it as simple as possible \ud83d\ude42<br>I&rsquo;m using MySQL <strong>8.0.17<\/strong>.<\/p>\n\n\n\n<p>I&rsquo;m running a properly <a rel=\"noreferrer noopener\" aria-label=\" (s\u2019ouvre dans un nouvel onglet)\" href=\"http:\/\/dasini.net\/blog\/2019\/09\/03\/tutoriel-deployer-mysql-8-0-innodb-cluster-09-2019\/\" target=\"_blank\">set up<\/a> <strong>MySQL InnoDB Cluster<\/strong>.<br>So my main assumption is that you are already familiar with <a rel=\"noreferrer noopener\" href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/group-replication.html\" target=\"_blank\">MySQL Group Replication<\/a> &amp; <a rel=\"noreferrer noopener\" href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/mysql-innodb-cluster-userguide.html\" target=\"_blank\">MySQL InnoDB 
Cluster<\/a>.<br>Additionally, you can read <a rel=\"noreferrer noopener\" aria-label=\" (s\u2019ouvre dans un nouvel onglet)\" href=\"http:\/\/dasini.net\/blog\/2019\/09\/03\/tutoriel-deployer-mysql-8-0-innodb-cluster-09-2019\/\" target=\"_blank\">this tutorial<\/a> and this <a rel=\"noreferrer noopener\" href=\"https:\/\/lefred.be\/content\/mysql-innodb-cluster-howto-install-it-from-scratch\/\" target=\"_blank\">article<\/a> from my colleague <a rel=\"noreferrer noopener\" href=\"https:\/\/lefred.be\/\" target=\"_blank\">lefred<\/a>, or <a rel=\"noreferrer noopener\" href=\"http:\/\/mysqlhk.blogspot.com\/2019\/05\/mysql-innodb-cluster-setup-on-windows.html\" target=\"_blank\">this one on the Windows platform<\/a> from my colleague <a rel=\"noreferrer noopener\" href=\"http:\/\/mysqlhk.blogspot.com\/\" target=\"_blank\">Ivan<\/a>.<\/p>\n\n\n\n\n\n<h2 class=\"wp-block-heading\" id=\"mce_9\">Scenario 1 &#8211; Node Recovery<\/h2>\n\n\n\n<ul class=\"wp-block-list\"><li>A 3-node MySQL InnoDB Cluster &#8211; M1 \/ M2 \/ M3, in single-primary mode<\/li><li>MySQL Router is configured to enable R\/W connections on 3306 and RO connections on 3307<\/li><li>M1 is currently the primary (that is, in Read\/Write mode)<\/li><li>M2 &amp; M3 are currently the secondaries (that is, in Read-Only mode)<\/li><li>M1 crashed! 
:'(<\/li><li>M2 &amp; M3 are now the (new temporary) cluster<\/li><li>Then&#8230;<\/li><\/ul>\n\n\n\n<p>The goal then is to rebuild M1 and put it back into the cluster.<\/p>\n\n\n\n\n\n<p>So, as stated above, we have a 3-node MySQL <strong>8.0.17<\/strong> InnoDB Cluster up and running:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>$ mysqlsh clusterAdmin@{mysqlRouterIP}:3306 --cluster\n\nMySQL localhost JS> cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"M1:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"M1:3306\": {\n                \"address\": \"M1:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M2:3306\": {\n                \"address\": \"M2:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M3:3306\": {\n                \"address\": \"M3:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }\n        }, \n        \"topologyMode\": \"Single-Primary\"\n    }, \n    \"groupInformationSourceMember\": \"M1:3306\"\n}<\/code><\/pre>\n\n\n\n<p>Then node M1 crashed\u2026 (status is \u201c<em>MISSING<\/em>\u201d &amp; we have a relevant <em><strong>shellConnectError<\/strong><\/em> message):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost JS> cluster.status()\n{\n    
\"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"M2:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK_NO_TOLERANCE\", \n        \"statusText\": \"Cluster is NOT tolerant to any failures. 1 member is not active\", \n        \"topology\": {\n            \"M1:3306\": {\n                \"address\": \"M1:3306\", \n                \"mode\": \"n\/a\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"shellConnectError\": \"MySQL Error 2003 (HY000): Can't connect to MySQL server on 'M1' (113)\", \n                \"status\": \"(MISSING)\"\n            }, \n            \"M2:3306\": {\n                \"address\": \"M2:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M3:3306\": {\n                \"address\": \"M3:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }\n        }, \n        \"topologyMode\": \"Single-Primary\"\n    }, \n    \"groupInformationSourceMember\": \"M2:3306\"\n}<\/code><\/pre>\n\n\n\n<p>M1 was the primary. 
<br>The cluster initiated an automatic database failover to elect a new primary\u2026<br>Anyway, you already know the story.<\/p>\n\n\n\n\n\n<p>After a while, M1 is finally fixed and ready to be part of the cluster again.<\/p>\n\n\n\n\n\n<h3 class=\"wp-block-heading\">Node Recovery: Automatic distributed recovery<\/h3>\n\n\n\n<p>Well, I have great news for you: most of the time, your only task will be to start the fixed MySQL instance.<\/p>\n\n\n\n<p>Before starting the node (M1), the status is <em>Missing<\/em>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost JS> cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"M2:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK_NO_TOLERANCE\", \n        \"statusText\": \"Cluster is NOT tolerant to any failures. 1 member is not active\", \n        \"topology\": {\n            \"M1:3306\": {\n                \"address\": \"M1:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"(MISSING)\"\n            }, \n            \"M2:3306\": {\n                \"address\": \"M2:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M3:3306\": {\n                \"address\": \"M3:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }\n        }, \n        \"topologyMode\": \"Single-Primary\"\n    }, \n    \"groupInformationSourceMember\": \"M2:3306\"\n}<\/code><\/pre>\n\n\n\n<p>Then MySQL InnoDB Cluster will <u>automatically<\/u> choose the relevant distributed recovery 
method.<br>For a certain period of time, the status is <em>Recovering<\/em>.<br>Details are available in the <em><strong>recoveryStatusText<\/strong><\/em> field:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost JS> cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"M2:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"M1:3306\": {\n                \"address\": \"M1:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"recovery\": {\n                    \"state\": \"ON\"\n                }, \n                \"recoveryStatusText\": \"Distributed recovery in progress\", \n                \"role\": \"HA\", \n                \"status\": \"RECOVERING\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M2:3306\": {\n                \"address\": \"M2:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M3:3306\": {\n                \"address\": \"M3:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }\n        }, \n        \"topologyMode\": \"Single-Primary\"\n    }, \n    \"groupInformationSourceMember\": \"M2:3306\"\n}<\/code><\/pre>\n\n\n\n<p>The node eventually becomes online.<br>Status: <em>Online<\/em>.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost JS> cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        
\"primary\": \"M2:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"M1:3306\": {\n                \"address\": \"M1:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M2:3306\": {\n                \"address\": \"M2:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M3:3306\": {\n                \"address\": \"M3:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }\n        }, \n        \"topologyMode\": \"Single-Primary\"\n    }, \n    \"groupInformationSourceMember\": \"M2:3306\"\n}<\/code><\/pre>\n\n\n\n\n\n<p>So you get it now: the node recovery process is handled by MySQL InnoDB Cluster using automated distributed recovery.<\/p>\n\n\n\n<p>That means, if you&rsquo;re lucky, your only task (in addition to fixing the issue) is to (re)start the MySQL instance.<\/p>\n\n\n\n<p>If you&rsquo;re not lucky, you&rsquo;ll probably need to remove and add the node again.<\/p>\n\n\n\n\n\n<h3 class=\"wp-block-heading\" id=\"mce_17\">Node Recovery: Remove and add the node<\/h3>\n\n\n\n<p>The strategy here is to remove the node from the cluster metadata and then add it back &#8211; as a new node &#8211; to the cluster.<\/p>\n\n\n\n<p>For example, for node M3, remove the node with <a 
href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#a0c878a44100530d6a0e0cd2f893a71af\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"removeInstance (opens in a new tab)\">removeInstance<\/a>():<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>cluster.removeInstance(\"clusterAdmin@M3:3306\", {force:true}) <\/code><\/pre>\n\n\n\n<p><em>force<\/em> is a boolean, set to false by default. It indicates whether the instance must be removed (even if only from the metadata) in case it cannot be reached.<br>Useful if the instance is no longer reachable (<a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"RTFM (s\u2019ouvre dans un nouvel onglet)\">RTFM<\/a>).<\/p>\n\n\n\n<p>Then add the node with <a href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#a92471821375405214215958ee924087f\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"addInstance (opens in a new tab)\">addInstance<\/a>():<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>cluster.addInstance(\"clusterAdmin@M3:3306\")<\/code><\/pre>\n\n\n\n\n\n<p>An alternative, which I prefer, is to use <a rel=\"noreferrer noopener\" aria-label=\"rescan (opens in a new tab)\" href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#a96c63d07c753c4482d60fc6eea9a895f\" target=\"_blank\">rescan<\/a>() then <a rel=\"noreferrer noopener\" href=\"https:\/\/dev.mysql.com\/doc\/dev\/mysqlsh-api-javascript\/8.0\/classmysqlsh_1_1dba_1_1_cluster.html#a92471821375405214215958ee924087f\" target=\"_blank\">addInstance<\/a>().<\/p>\n\n\n\n<p>Let&rsquo;s see how to use it, assuming M3 crashed.<br>The current status is:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost JS> cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    
\"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"M1:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK_PARTIAL\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"M1:3306\": {\n                \"address\": \"M1:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M2:3306\": {\n                \"address\": \"M2:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M3:3306\": {\n                \"address\": \"M3:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"(MISSING)\"\n            }\n        }, \n        \"topologyMode\": \"Single-Primary\"\n    }, \n    \"groupInformationSourceMember\": \"M1:3306\"\n}<\/code><\/pre>\n\n\n\n<p>Let&rsquo;s (re)scan the cluster and remove the missing node (M3):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost JS> cluster.rescan()\nRescanning the cluster...\n\nResult of the rescanning operation for the 'default' ReplicaSet:\n{\n    \"name\": \"default\", \n    \"newTopologyMode\": null, \n    \"newlyDiscoveredInstances\": [], \n    \"unavailableInstances\": [\n        {\n            \"host\": \"M3:3306\", \n            \"label\": \"M3:3306\", \n            \"member_id\": \"93303635-c8c0-11e9-9665-0242ac13000d\"\n        }\n    ]\n}\n\nThe instance 'M3:3306' is no longer part of the ReplicaSet.\nThe instance is either offline or left the HA group. 
You can try to add it to the cluster again with the cluster.rejoinInstance('M3:3306') command or you can remove it from the cluster configuration.\nWould you like to remove it from the cluster metadata? [Y\/n]: Y\nRemoving instance from the cluster metadata...\nThe instance 'M3:3306' was successfully removed from the cluster metadata.<\/code><\/pre>\n\n\n\n<p>The new cluster status is:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost JS> cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"M1:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK_NO_TOLERANCE\", \n        \"statusText\": \"Cluster is NOT tolerant to any failures.\", \n        \"topology\": {\n            \"M1:3306\": {\n                \"address\": \"M1:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M2:3306\": {\n                \"address\": \"M2:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }\n        }, \n        \"topologyMode\": \"Single-Primary\"\n    }, \n    \"groupInformationSourceMember\": \"M1:3306\"\n}<\/code><\/pre>\n\n\n\n<p>Now it&rsquo;s time to bring back M3, using the Clone distributed process:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost JS> cluster.addInstance(\"clusterAdmin@M3:3306\")\n\nWARNING: A GTID set check of the MySQL instance at 'M3:3306' determined\nthat it contains transactions that do not originate from the cluster, which\nmust be discarded before it can join the cluster.\n\nM3:3306 has the following errant GTIDs that do not exist in the 
cluster:\n93303635-c8c0-11e9-9665-0242ac13000d:1-2\n\nWARNING: Discarding these extra GTID events can either be done manually or by completely\noverwriting the state of M3:3306 with a physical snapshot from an\nexisting cluster member. To use this method by default, set the\n'recoveryMethod' option to 'clone'.\n\nHaving extra GTID events is not expected, and it is recommended to investigate\nthis further and ensure that the data can be removed prior to choosing the\nclone recovery method.\n\nPlease select a recovery method [C]lone\/[A]bort (default Abort): C\nValidating instance at M3:3306...\n\nThis instance reports its own address as M3:3306\n\nInstance configuration is suitable.\nA new instance will be added to the InnoDB cluster. Depending on the amount of\ndata on the cluster this might take from a few seconds to several hours.\n\nAdding instance to the cluster...\n\nMonitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.\nClone based state recovery is now in progress.\n\nNOTE: A server restart is expected to happen as part of the clone process. If the\nserver does not support the RESTART command or does not come back after a\nwhile, you may need to manually start it back.\n\n* Waiting for clone to finish...\nNOTE: M3:3306 is being cloned from M1:3306\n** Stage DROP DATA: Completed\n** Clone Transfer  \n    FILE COPY  ############################################################  100%  Completed\n    PAGE COPY  ############################################################  100%  Completed\n    REDO COPY  ############################################################  100%  Completed\n** Stage RECOVERY: |\nNOTE: M3:3306 is shutting down...\n\n* Waiting for server restart... 
ready\n* M3:3306 has restarted, waiting for clone to finish...\n* Clone process has finished: 68.99 MB transferred in about 1 second (~inf TB\/s)\n\nIncremental distributed state recovery is now in progress.\n\n* Waiting for distributed recovery to finish...\nNOTE: 'M3:3306' is being recovered from 'M2:3306'\n* Distributed recovery has finished\n\nThe instance 'M3:3306' was successfully added to the cluster.<\/code><\/pre>\n\n\n\n<p>Here we go!<br>Our 3-node MySQL InnoDB Cluster can tolerate up to 1 failure again:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost JS> cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"M1:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"M1:3306\": {\n                \"address\": \"M1:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M2:3306\": {\n                \"address\": \"M2:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M3:3306\": {\n                \"address\": \"M3:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }\n        }, \n        \"topologyMode\": \"Single-Primary\"\n    }, \n    \"groupInformationSourceMember\": \"M1:3306\"\n}<\/code><\/pre>\n\n\n\n\n\n<h2 class=\"wp-block-heading\" id=\"mce_13\">Scenario 2 &#8211; Node 
Provisioning<\/h2>\n\n\n\n<ul class=\"wp-block-list\"><li>A 3-node MySQL InnoDB Cluster \u2013 M1 \/ M2 \/ M3, in single-primary mode<\/li><li>MySQL Router is configured to enable R\/W connections on port 3306 and RO connections on port 3307<\/li><li>M1 is currently the primary (that is, in Read\/Write mode)<\/li><li>M2 &amp; M3 are currently the secondaries (that is, in Read-Only mode)<\/li><\/ul>\n\n\n\n<p>The goal then is to add 2 new nodes: M4 &amp; M5.<\/p>\n\n\n\n\n\n<p>So we have the 3-node MySQL <strong>8.0.17<\/strong> InnoDB Cluster that we used in the first part of this article. The cluster is up and running.<\/p>\n\n\n\n<p>Adding new nodes is actually very similar to what we did previously.<\/p>\n\n\n\n<p>The process is:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Deploy the new MySQL instance, preferably already configured for Group Replication<\/li><li>Use the automatic distributed recovery process<\/li><\/ul>\n\n\n\n\n\n<p>Add node M4:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost JS> cluster.addInstance(\"clusterAdmin@M4:3306\")\nNOTE: A GTID set check of the MySQL instance at 'M4:3306' determined\nthat it is missing transactions that were purged from all cluster members.\n\nPlease select a recovery method [C]lone\/[A]bort (default Abort): C\nValidating instance at M4:3306...\n\nThis instance reports its own address as M4:3306\n\nInstance configuration is suitable.\nA new instance will be added to the InnoDB cluster. Depending on the amount of\ndata on the cluster this might take from a few seconds to several hours.\n\nAdding instance to the cluster...\n\nMonitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.\nClone based state recovery is now in progress.\n\nNOTE: A server restart is expected to happen as part of the clone process. 
If the\nserver does not support the RESTART command or does not come back after a\nwhile, you may need to manually start it back.\n\n* Waiting for clone to finish...\nNOTE: M4:3306 is being cloned from M3:3306\n** Stage DROP DATA: Completed\n** Clone Transfer  \n    FILE COPY  ############################################################  100%  Completed\n    PAGE COPY  ############################################################  100%  Completed\n    REDO COPY  ############################################################  100%  Completed\n** Stage RECOVERY: \\\nNOTE: M4:3306 is shutting down...\n\n* Waiting for server restart... ready\n* M4:3306 has restarted, waiting for clone to finish...\n* Clone process has finished: 116.24 MB transferred in 3 sec (38.75 MB\/s)\n\nIncremental distributed state recovery is now in progress.\n\n* Waiting for distributed recovery to finish...\nNOTE: 'M4:3306' is being recovered from 'M3:3306'\n* Distributed recovery has finished\n\nThe instance 'M4:3306' was successfully added to the cluster.<\/code><\/pre>\n\n\n\n<p>As you can see we use the Clone process.<br>The new cluster status is:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost JS> cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"M1:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to ONE failure.\", \n        \"topology\": {\n            \"M1:3306\": {\n                \"address\": \"M1:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M2:3306\": {\n                \"address\": \"M2:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": 
\"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M3:3306\": {\n                \"address\": \"M3:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M4:3306\": {\n                \"address\": \"M4:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }\n        }, \n        \"topologyMode\": \"Single-Primary\"\n    }, \n    \"groupInformationSourceMember\": \"M1:3306\"\n}<\/code><\/pre>\n\n\n\n<p>Same process for node M5:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost JS> cluster.addInstance(\"clusterAdmin@M5:3306\")\n...<\/code><\/pre>\n\n\n\n<p>Our final status is then:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost JS> cluster.status()\n{\n    \"clusterName\": \"pocCluster\", \n    \"defaultReplicaSet\": {\n        \"name\": \"default\", \n        \"primary\": \"M1:3306\", \n        \"ssl\": \"REQUIRED\", \n        \"status\": \"OK\", \n        \"statusText\": \"Cluster is ONLINE and can tolerate up to 2 failures.\", \n        \"topology\": {\n            \"M1:3306\": {\n                \"address\": \"M1:3306\", \n                \"mode\": \"R\/W\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M2:3306\": {\n                \"address\": \"M2:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n         
   \"M3:3306\": {\n                \"address\": \"M3:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M4:3306\": {\n                \"address\": \"M4:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }, \n            \"M5:3306\": {\n                \"address\": \"M5:3306\", \n                \"mode\": \"R\/O\", \n                \"readReplicas\": {}, \n                \"role\": \"HA\", \n                \"status\": \"ONLINE\", \n                \"version\": \"8.0.17\"\n            }\n        }, \n        \"topologyMode\": \"Single-Primary\"\n    }, \n    \"groupInformationSourceMember\": \"M1:3306\"\n}<\/code><\/pre>\n\n\n\n\n\n<h2 class=\"wp-block-heading\" id=\"mce_13\">References<\/h2>\n\n\n\n<ul class=\"wp-block-list\"><li><a rel=\"noreferrer noopener\" href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/mysql-innodb-cluster-userguide.html\" target=\"_blank\">MySQL InnoDB Cluster<\/a> <\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/group-replication.html\" target=\"_blank\">MySQL Group Replication<\/a> <\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/dev.mysql.com\/doc\/mysql-shell\/8.0\/en\/\" target=\"_blank\">MySQL Shell<\/a> <\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/dev.mysql.com\/doc\/mysql-router\/8.0\/en\/\" target=\"_blank\">MySQL Router<\/a> <\/li><\/ul>\n\n\n\n<ul class=\"wp-block-list\"><li><a rel=\"noreferrer noopener\" aria-label=\"The Clone Plugin (opens in a new tab)\" href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/clone-plugin.html\" target=\"_blank\">The Clone Plugin<\/a> <\/li><li><a rel=\"noreferrer noopener\" 
aria-label=\"Using MySQL Clone with InnoDB cluster (opens in a new tab)\" href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/mysql-innodb-cluster-clone-deployment.html\" target=\"_blank\">Using MySQL Clone with InnoDB cluster<\/a> <\/li><li><a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/group-replication-distributed-recovery.html\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"Group Replication - Distributed Recovery (opens in a new tab)\">Group Replication &#8211; Distributed Recovery<\/a> <\/li><\/ul>\n\n\n\n<ul class=\"wp-block-list\"><li><a rel=\"noreferrer noopener\" href=\"http:\/\/dasini.net\/blog\/2019\/07\/11\/mysql-innodb-cluster-recovering-and-provisioning-with-mysql-enterprise-backup\/\" target=\"_blank\">MySQL InnoDB Cluster \u2013 Recovering and provisioning with MySQL Enterprise Backup<\/a> <\/li><li><a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" href=\"http:\/\/dasini.net\/blog\/2019\/07\/09\/mysql-innodb-cluster-recovering-and-provisioning-with-mysqldump\/\" target=\"_blank\">MySQL InnoDB Cluster \u2013 Recovering and provisioning with mysqldump<\/a><\/li><li><a href=\"http:\/\/dasini.net\/blog\/2019\/09\/03\/tutoriel-deployer-mysql-8-0-innodb-cluster-09-2019\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (s\u2019ouvre dans un nouvel onglet)\">Tutoriel \u2013 D\u00e9ployer MySQL 8.0 InnoDB Cluster (09-2019)<\/a><\/li><\/ul>\n\n\n\n<ul class=\"wp-block-list\"><li><a rel=\"noreferrer noopener\" href=\"http:\/\/dasini.net\/blog\/2019\/04\/11\/mysql-innodb-cluster-howto-1-monitor-your-cluster\/\" target=\"_blank\">MySQL InnoDB Cluster \u2013 HowTo #1 \u2013 Monitor your cluster<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"http:\/\/dasini.net\/blog\/2019\/05\/21\/mysql-innodb-cluster-howto-2-validate-an-instance\/\" target=\"_blank\">MySQL InnoDB Cluster \u2013 HowTo #2 \u2013 Validate an instance<\/a> <\/li><\/ul>\n\n\n\n\n\n<p><u>Video<\/u>:<\/p>\n\n\n\n<ul 
class=\"wp-block-list\"><li><a href=\"https:\/\/www.youtube.com\/watch?v=YZzGsDjlWvk\" target=\"_blank\" rel=\"noreferrer noopener\">D\u00e9ployer 1 cluster <strong>MySQL<\/strong> avec seulement 2 commandes<\/a><\/li><\/ul>\n\n\n\n\n\n<p><strong>Misc<\/strong><br>Node 1 &#8211; Group Replication configuration variables<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>+-----------------------------------------------------+---------------------------------------------+\n| Variable_name                                       | Value                                       |\n+-----------------------------------------------------+---------------------------------------------+\n| group_replication_allow_local_lower_version_join    | OFF                                         |\n| group_replication_auto_increment_increment          | 7                                           |\n| group_replication_autorejoin_tries                  | 0                                           |\n| group_replication_bootstrap_group                   | OFF                                         |\n| group_replication_clone_threshold                   | 9223372036854775807                         |\n| group_replication_communication_debug_options       | GCS_DEBUG_NONE                              |\n| group_replication_communication_max_message_size    | 10485760                                    |\n| group_replication_components_stop_timeout           | 31536000                                    |\n| group_replication_compression_threshold             | 1000000                                     |\n| group_replication_consistency                       | EVENTUAL                                    |\n| group_replication_enforce_update_everywhere_checks  | OFF                                         |\n| group_replication_exit_state_action                 | READ_ONLY                                   |\n| group_replication_flow_control_applier_threshold    | 25000                         
              |\n| group_replication_flow_control_certifier_threshold  | 25000                                       |\n| group_replication_flow_control_hold_percent         | 10                                          |\n| group_replication_flow_control_max_quota            | 0                                           |\n| group_replication_flow_control_member_quota_percent | 0                                           |\n| group_replication_flow_control_min_quota            | 0                                           |\n| group_replication_flow_control_min_recovery_quota   | 0                                           |\n| group_replication_flow_control_mode                 | QUOTA                                       |\n| group_replication_flow_control_period               | 1                                           |\n| group_replication_flow_control_release_percent      | 50                                          |\n| group_replication_force_members                     |                                             |\n| group_replication_group_name                        | bc39053e-c9e0-11e9-9797-0242ac13000b        |\n| group_replication_group_seeds                       | M2:33061,M3:33061                           |\n| group_replication_gtid_assignment_block_size        | 1000000                                     |\n| group_replication_ip_whitelist                      | AUTOMATIC                                   |\n| group_replication_local_address                     | M1:33061                                    |\n| group_replication_member_expel_timeout              | 0                                           |\n| group_replication_member_weight                     | 50                                          |\n| group_replication_message_cache_size                | 1073741824                                  |\n| group_replication_poll_spin_loops                   | 0                                           |\n| group_replication_recovery_complete_at              | 
TRANSACTIONS_APPLIED                        |\n| group_replication_recovery_get_public_key           | OFF                                         |\n| group_replication_recovery_public_key_path          |                                             |\n| group_replication_recovery_reconnect_interval       | 60                                          |\n| group_replication_recovery_retry_count              | 10                                          |\n| group_replication_recovery_ssl_ca                   |                                             |\n| group_replication_recovery_ssl_capath               |                                             |\n| group_replication_recovery_ssl_cert                 |                                             |\n| group_replication_recovery_ssl_cipher               |                                             |\n| group_replication_recovery_ssl_crl                  |                                             |\n| group_replication_recovery_ssl_crlpath              |                                             |\n| group_replication_recovery_ssl_key                  |                                             |\n| group_replication_recovery_ssl_verify_server_cert   | OFF                                         |\n| group_replication_recovery_use_ssl                  | ON                                          |\n| group_replication_single_primary_mode               | ON                                          |\n| group_replication_ssl_mode                          | REQUIRED                                    |\n| group_replication_start_on_boot                     | ON                                          |\n| group_replication_transaction_size_limit            | 150000000                                   |\n| group_replication_unreachable_majority_timeout      | 0                                           
|\n+-----------------------------------------------------+---------------------------------------------+<\/code><\/pre>\n\n\n\n\n\n<p>Some optional but useful parameters from my node1&rsquo;s <strong>my.cnf<\/strong>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>[mysqld]\nreport_port \t   = 3306\nreport_host \t   = M1\n\nplugin_load        = group_replication.so\nplugin-load-add    = mysql_clone.so<\/code><\/pre>\n\n\n\n\n\n<div style=\"height:75px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p class=\"has-vivid-red-color has-text-color\"><strong>Thanks for using MySQL!<\/strong><\/p>\n\n\n\n<div style=\"height:25px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><a href=\"https:\/\/www.linkedin.com\/groups\/12524512\/\" target=\"_blank\" rel=\"noopener\" title=\"Olivier DASINI on Linkedin\">Follow me on Linkedin<\/a><\/p>\n\n\n\n<p>Watch my videos on my <a href=\"https:\/\/www.youtube.com\/channel\/UC12TulyJsJZHoCmby3Nm3WQ\" target=\"_blank\" rel=\"noreferrer noopener\" title=\"Olivier's MySQL Channel\">YouTube channel<\/a> and <a href=\"https:\/\/www.youtube.com\/channel\/UC12TulyJsJZHoCmby3Nm3WQ\/?sub_confirmation=1\" target=\"_blank\" rel=\"noreferrer noopener\" title=\"Subscribe\">subscribe<\/a>.<\/p>\n\n\n\n<p>My <a href=\"https:\/\/www.slideshare.net\/freshdaz\" target=\"_blank\" rel=\"noreferrer noopener\" title=\"Olivier DASINI on Slideshare\">Slideshare account<\/a>.<\/p>\n\n\n\n<p>My <a href=\"https:\/\/speakerdeck.com\/freshdaz\/\" target=\"_blank\" rel=\"noreferrer noopener\" title=\"Olivier DASINI on Speaker Deck\">Speaker Deck account<\/a>.<\/p>\n\n\n\n<div style=\"height:25px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Up to MySQL 
8.0.16, to perform these tasks you could:<br \/>\n  &#8211; Use MySQL Enterprise Backup :<br \/>\n  &#8211; Use mysqldump<\/p>\n<p>Starting with MySQL 8.0.17, the easiest and recommended method is to use the CLONE feature.<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[351,355,203,339],"tags":[413,343,345,349],"class_list":["post-3360","post","type-post","status-publish","format-standard","hentry","category-group-replication-en","category-high-availability","category-mysql-en","category-tuto-en","tag-cluster-en","tag-group-replication-en","tag-high-availability-en","tag-tuto-en"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9LfWW-Sc","jetpack-related-posts":[{"id":3219,"url":"https:\/\/dasini.net\/blog\/2019\/07\/11\/mysql-innodb-cluster-recovering-and-provisioning-with-mysql-enterprise-backup\/","url_meta":{"origin":3360,"position":0},"title":"MySQL InnoDB Cluster &#8211; Recovering and provisioning with MySQL Enterprise Backup","author":"Olivier DASINI","date":"11 juillet 2019","format":false,"excerpt":"Like I stated in my previous article - MySQL InnoDB Cluster - Recovering and provisioning with mysqldump : \"As the administrator of a cluster, among others tasks, you should be able to restore failed nodes and to add (or remove) new nodes\". 
Well, I still agree with myself :) MySQL\u2026","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication-en\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":3193,"url":"https:\/\/dasini.net\/blog\/2019\/07\/09\/mysql-innodb-cluster-recovering-and-provisioning-with-mysqldump\/","url_meta":{"origin":3360,"position":1},"title":"MySQL InnoDB Cluster &#8211;  Recovering and provisioning with mysqldump","author":"Olivier DASINI","date":"9 juillet 2019","format":false,"excerpt":"As the administrator of a cluster, among other tasks, you should be able to restore failed nodes and grow (or shrink) your cluster by adding (or removing) new nodes. In MySQL, as a backup tool (and if your amount of data is not too big), you can use mysqldump a\u2026","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication-en\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":3086,"url":"https:\/\/dasini.net\/blog\/2019\/05\/21\/mysql-innodb-cluster-howto-2-validate-an-instance\/","url_meta":{"origin":3360,"position":2},"title":"MySQL InnoDB Cluster &#8211; HowTo #2 &#8211; Validate an instance","author":"Olivier DASINI","date":"21 mai 2019","format":false,"excerpt":"Q: Validate an instance for MySQL InnoDB Cluster usage? 
A: Use check_instance_configuration()","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication-en\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":2951,"url":"https:\/\/dasini.net\/blog\/2019\/04\/11\/mysql-innodb-cluster-howto-1-monitor-your-cluster\/","url_meta":{"origin":3360,"position":3},"title":"MySQL InnoDB Cluster &#8211; HowTo #1 &#8211; Monitor your cluster","author":"Olivier DASINI","date":"11 avril 2019","format":false,"excerpt":"MySQL InnoDB Cluster - HowTo #1 - Monitor your cluster Q: How do I monitor the status & the configuration of my cluster? A: Use status() or status({extended:true}) or status({queryMembers:true})?","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication-en\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":3852,"url":"https:\/\/dasini.net\/blog\/2020\/11\/05\/automatic-connection-failover-for-asynchronous-replication\/","url_meta":{"origin":3360,"position":4},"title":"Automatic connection failover for Asynchronous Replication","author":"Olivier DASINI","date":"5 novembre 2020","format":false,"excerpt":"Since MySQL 8.0.22 there is a mechanism in asynchronous replication that makes the receiver automatically try to re-establish an asynchronous replication connection to another sender, in case the current connection gets interrupted due to the failure of the current sender.","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group 
Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication-en\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":2090,"url":"https:\/\/dasini.net\/blog\/2018\/01\/09\/setting-up-proxysql-1-4-with-mysql-5-7-group-replication\/","url_meta":{"origin":3360,"position":5},"title":"Setting up ProxySQL 1.4 with MySQL 5.7 Group Replication","author":"Olivier DASINI","date":"9 janvier 2018","format":false,"excerpt":"There are 3\u00a0pillars for a database architecture: Monitoring, Backup \/ Restore process, High Availability This blog post is about database High Availability; more precisely about one of the best combo of the moment : MySQL 5.7 Group Replication\u00a0: the only native HA solution for MySQL, it's a\u00a0Single\/Multi-master update everywhere replication\u2026","rel":"","context":"Dans &quot;Group Replication&quot;","block_context":{"text":"Group Replication","link":"https:\/\/dasini.net\/blog\/category\/group-replication-en\/"},"img":{"alt_text":"MySQL","src":"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/mysql_300x161.jpg?resize=350%2C200","width":350,"height":200},"classes":[]}],"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/posts\/3360","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/comments?post=3360"}],"version-history":[{"count":25,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/posts\/3360\/revisions"}],"predecessor-version":[{"id":3450,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/posts\/3360\/revisions\/3450"}],"wp:attachment":[{"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/media?parent=3360"}],"wp:term":[{"taxo
nomy":"category","embeddable":true,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/categories?post=3360"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dasini.net\/blog\/wp-json\/wp\/v2\/tags?post=3360"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}