MySQL 8.0.18 New Features Summary

November 26, 2019
Sakila mozaik by Olivier DASINI

Presentation of some of the new features of MySQL 8.0.18 released on October 14, 2019.

Agenda

  • Hash Join
  • EXPLAIN ANALYZE
  • Only OpenSSL
  • Random Password
  • MySQL Shell Enhancements
  • MySQL Router Enhancements
  • InnoDB Cluster Enhancements
  • Group Replication Enhancements
  • Replication Enhancements
  • Enterprise New Features
  • Thanks to the Contributors

Download this presentation and others on my SlideShare account.


MySQL 8.0.17 – New Features Summary

September 19, 2019
Sakila mozaik by Olivier DASINI

This presentation is a summary of the MySQL 8.0.17 new features, released on July 22, 2019.

Agenda

  • CLONE Plugin – Native automatic provisioning in the server
  • Multi-valued indexes
  • JSON functions using multi-valued indexes
  • JSON schema validation
  • New binary collation for utf8mb4
  • MySQL Shell Enhancements
  • MySQL Router Enhancements
  • InnoDB Cluster Enhancements
  • Group Replication Enhancements
  • Replication Enhancements
  • Thanks to the Contributors


Download this presentation and others on my SlideShare account.


MySQL InnoDB Cluster – Easy Recovering and provisioning

September 10, 2019
Cloudy Sky by Olivier DASINI

As the administrator of a cluster, among other tasks, you should be able to restore failed nodes and grow (or shrink) your cluster by adding (or removing) new nodes

Olivier DASINI

Up to MySQL 8.0.16, to perform these tasks you could use a logical backup tool such as mysqldump or a physical backup tool such as MySQL Enterprise Backup (both approaches are covered in the articles below).

Starting with MySQL 8.0.17, the easiest and most convenient method is to use the CLONE feature.
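
As a quick sanity check, you can verify that the clone plugin is available on a given instance. Here is a minimal sketch from a MySQL Shell classic session (the AdminAPI normally takes care of this for you):

// Hedged example: the clone plugin should be installed and ACTIVE
MySQL localhost JS> session.runSql("SELECT PLUGIN_NAME, PLUGIN_STATUS FROM information_schema.PLUGINS WHERE PLUGIN_NAME = 'clone'")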

Note:
Regardless of the recovering and provisioning method, it is important to have a proven backup/restore procedure.

I recommend reading the excellent blog post from my colleague Jesper – MySQL Backup Best Practices.

Context

Let’s make it as simple as possible 🙂
I’m using MySQL 8.0.17.

I’m running a MySQL InnoDB Cluster that is already set up.
So my main assumption is that you already know what MySQL Group Replication & MySQL InnoDB Cluster are.
Additionally, you can read this tutorial and this article from my colleague lefred, or this one on the Windows platform from my colleague Ivan.
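
For reference only, here is a minimal sketch of how such a cluster could have been created with the MySQL Shell AdminAPI (the host names and the clusterAdmin account are the ones used in this article; everything else is illustrative):

// Prepare each instance (M1, M2, M3) and create the cluster administration account
MySQL localhost JS> dba.configureInstance("root@M1:3306", {clusterAdmin: "clusterAdmin"})

// Connected to M1: create the cluster, then add the other members
MySQL localhost JS> var cluster = dba.createCluster("pocCluster")
MySQL localhost JS> cluster.addInstance("clusterAdmin@M2:3306")
MySQL localhost JS> cluster.addInstance("clusterAdmin@M3:3306")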

Scenario 1 – Node Recovering

  • A 3-node MySQL InnoDB Cluster – M1 / M2 / M3 – in single-primary mode
  • MySQL Router is configured to enable R/W connections on 3306 and RO connections on 3307
  • M1 is currently the primary (that is, in Read/Write mode)
  • M2 & M3 are currently the secondaries (that is, in Read-Only mode)
  • M1 crashed! :'(
  • M2 & M3 are now the (new temporary) cluster
  • Then…

The goal then is to rebuild M1 and put it back to the cluster.

So, like I said before, we have a 3-node MySQL 8.0.17 InnoDB Cluster up and running:

$ mysqlsh clusterAdmin@{mysqlRouterIP}:3306 --cluster

MySQL localhost JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M1:3306"
}

Then node M1 crashed… (status is “MISSING” and we have a relevant shellConnectError message):

MySQL localhost JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2003 (HY000): Can't connect to MySQL server on 'M1' (113)", 
                "status": "(MISSING)"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

M1 was the primary.
The cluster initiated an automatic database failover to elect a new primary… blablabla
Anyway, you already know the story 🙂

After a while M1 is finally fixed and ready to be part of the cluster again.

Node Recovery: Auto distributed recovery

Well, I have great news for you: most of the time your only task will be to start the fixed MySQL instance.

Before starting the node (M1), status is Missing:

MySQL localhost JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "(MISSING)"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

Then MySQL InnoDB Cluster will automatically choose the relevant distributed recovery method.
For a certain period of time the status is RECOVERING.
Details are available in the recoveryStatusText field:

MySQL localhost JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "recovery": {
                    "state": "ON"
                }, 
                "recoveryStatusText": "Distributed recovery in progress", 
                "role": "HA", 
                "status": "RECOVERING", 
                "version": "8.0.17"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}
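
While a member is RECOVERING you can also watch it at the Group Replication level. A minimal sketch, assuming a classic session opened through MySQL Shell as above:

// Hedged example: member states as seen by Group Replication
MySQL localhost JS> session.runSql("SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, MEMBER_ROLE FROM performance_schema.replication_group_members")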

The node eventually becomes online.
Status: Online.

MySQL localhost JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

So you get it now: the node recovery process is handled by MySQL InnoDB Cluster using automated distributed recovery.

That means, if you’re lucky, your only task (in addition to fixing the issue) is to (re)start the MySQL instance.

If you’re not lucky, you’ll probably need to remove and add the node again.

Node Recovery: Remove and add the node

The strategy here is to remove the node from the cluster metadata and then add it back – as a new node – to the cluster.

For example for node M3, remove the node with removeInstance():

cluster.removeInstance("clusterAdmin@M3:3306", {force:true}) 

force is a boolean, set to false by default, indicating whether the instance must be removed (even if only from the metadata) in case it cannot be reached.
It is useful when the instance is no longer reachable (RTFM).

Then add the node with addInstance():

cluster.addInstance("clusterAdmin@M3:3306")

An alternative, that I prefer, is to use rescan() then addInstance().

Let’s see how to use it, assuming M3 crashed.
The current status is:

MySQL localhost JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_PARTIAL", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "(MISSING)"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M1:3306"
}

Let’s (re)scan the cluster and remove the missing node (M3):

MySQL localhost JS> cluster.rescan()
Rescanning the cluster...

Result of the rescanning operation for the 'default' ReplicaSet:
{
    "name": "default", 
    "newTopologyMode": null, 
    "newlyDiscoveredInstances": [], 
    "unavailableInstances": [
        {
            "host": "M3:3306", 
            "label": "M3:3306", 
            "member_id": "93303635-c8c0-11e9-9665-0242ac13000d"
        }
    ]
}

The instance 'M3:3306' is no longer part of the ReplicaSet.
The instance is either offline or left the HA group. You can try to add it to the cluster again with the cluster.rejoinInstance('M3:3306') command or you can remove it from the cluster configuration.
Would you like to remove it from the cluster metadata? [Y/n]: Y
Removing instance from the cluster metadata...
The instance 'M3:3306' was successfully removed from the cluster metadata.

The new cluster status is:

MySQL localhost JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M1:3306"
}

Now it’s time to bring back M3, using the Clone distributed process:

MySQL localhost JS> cluster.addInstance("clusterAdmin@M3:3306")

WARNING: A GTID set check of the MySQL instance at 'M3:3306' determined
that it contains transactions that do not originate from the cluster, which
must be discarded before it can join the cluster.

M3:3306 has the following errant GTIDs that do not exist in the cluster:
93303635-c8c0-11e9-9665-0242ac13000d:1-2

WARNING: Discarding these extra GTID events can either be done manually or by completely
overwriting the state of M3:3306 with a physical snapshot from an
existing cluster member. To use this method by default, set the
'recoveryMethod' option to 'clone'.

Having extra GTID events is not expected, and it is recommended to investigate
this further and ensure that the data can be removed prior to choosing the
clone recovery method.

Please select a recovery method [C]lone/[A]bort (default Abort): C
Validating instance at M3:3306...

This instance reports its own address as M3:3306

Instance configuration is suitable.
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: M3:3306 is being cloned from M1:3306
** Stage DROP DATA: Completed
** Clone Transfer  
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
** Stage RECOVERY: |
NOTE: M3:3306 is shutting down...

* Waiting for server restart... ready
* M3:3306 has restarted, waiting for clone to finish...
* Clone process has finished: 68.99 MB transferred in about 1 second (~inf TB/s)

Incremental distributed state recovery is now in progress.

* Waiting for distributed recovery to finish...
NOTE: 'M3:3306' is being recovered from 'M2:3306'
* Distributed recovery has finished

The instance 'M3:3306' was successfully added to the cluster.

Here we go!
Our 3-node MySQL InnoDB Cluster can tolerate up to 1 failure again:

MySQL localhost JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M1:3306"
}
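
Side note on the errant GTID warning shown by addInstance() above: if you prefer to inspect those transactions yourself before answering [C]lone, a hedged sketch is to compare the GTID sets with GTID_SUBTRACT() (the prompts and the pasted GTID set are illustrative):

// On the joining instance (M3): grab its executed GTID set
MySQL localhost JS> session.runSql("SELECT @@GLOBAL.gtid_executed")

// On a cluster member (e.g. M1): anything returned here exists only on M3, i.e. is errant
MySQL localhost JS> session.runSql("SELECT GTID_SUBTRACT('<gtid_executed pasted from M3>', @@GLOBAL.gtid_executed) AS errant_gtids")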

Scenario 2 – Node Provisioning

  • A 3-node MySQL InnoDB Cluster – M1 / M2 / M3 – in single-primary mode
  • MySQL Router is configured to enable R/W connections on port 3306 and RO connections on port 3307
  • M1 is currently the primary (that is, in Read/Write mode)
  • M2 & M3 are currently the secondaries (that is, in Read-Only mode)

The goal then is to add 2 new nodes: M4 & M5

So we have the 3-node MySQL 8.0.17 InnoDB Cluster that we used in the first part of this article. The cluster is up and running.

Actually adding new nodes is very close to what we have done previously.

The process is:

  • Deploy the new MySQL instance, preferably already configured for Group Replication (see the sketch after this list)
  • Use the automatic distributed recovery process
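
Here is a minimal sketch of that first step, preparing a brand new M4 instance with the AdminAPI before adding it (the clusterAdmin account is the one used throughout this article; the root account and the restart option are assumptions):

// Hedged example: check, then configure the new instance for InnoDB Cluster usage
MySQL localhost JS> dba.checkInstanceConfiguration("root@M4:3306")
MySQL localhost JS> dba.configureInstance("root@M4:3306", {clusterAdmin: "clusterAdmin", restart: true})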

Add node M4:

MySQL localhost JS> cluster.addInstance("clusterAdmin@M4:3306")
NOTE: A GTID set check of the MySQL instance at 'M4:3306' determined
that it is missing transactions that were purged from all cluster members.

Please select a recovery method [C]lone/[A]bort (default Abort): C
Validating instance at M4:3306...

This instance reports its own address as M4:3306

Instance configuration is suitable.
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: M4:3306 is being cloned from M3:3306
** Stage DROP DATA: Completed
** Clone Transfer  
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
** Stage RECOVERY: \
NOTE: M4:3306 is shutting down...

* Waiting for server restart... ready
* M4:3306 has restarted, waiting for clone to finish...
* Clone process has finished: 116.24 MB transferred in 3 sec (38.75 MB/s)

Incremental distributed state recovery is now in progress.

* Waiting for distributed recovery to finish...
NOTE: 'M4:3306' is being recovered from 'M3:3306'
* Distributed recovery has finished

The instance 'M4:3306' was successfully added to the cluster.

As you can see we use the Clone process.
The new cluster status is:

MySQL localhost JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M4:3306": {
                "address": "M4:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M1:3306"
}
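
If you want to follow the clone operation outside of the Shell’s progress output, the clone plugin exposes its state in Performance Schema. A minimal sketch, run on the recipient (M4 here):

// Hedged example: clone stages and overall clone status on the recipient
MySQL localhost JS> session.runSql("SELECT STAGE, STATE, BEGIN_TIME, END_TIME FROM performance_schema.clone_progress")
MySQL localhost JS> session.runSql("SELECT STATE, SOURCE, DESTINATION FROM performance_schema.clone_status")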

Same process for node M5:

MySQL localhost JS> cluster.addInstance("clusterAdmin@M5:3306")
...

Our final status is then:

MySQL localhost JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to 2 failures.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M4:3306": {
                "address": "M4:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }, 
            "M5:3306": {
                "address": "M5:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.17"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M1:3306"
}

Misc
Node 1 – Group Replication configuration variables

+-----------------------------------------------------+---------------------------------------------+
| Variable_name                                       | Value                                       |
+-----------------------------------------------------+---------------------------------------------+
| group_replication_allow_local_lower_version_join    | OFF                                         |
| group_replication_auto_increment_increment          | 7                                           |
| group_replication_autorejoin_tries                  | 0                                           |
| group_replication_bootstrap_group                   | OFF                                         |
| group_replication_clone_threshold                   | 9223372036854775807                         |
| group_replication_communication_debug_options       | GCS_DEBUG_NONE                              |
| group_replication_communication_max_message_size    | 10485760                                    |
| group_replication_components_stop_timeout           | 31536000                                    |
| group_replication_compression_threshold             | 1000000                                     |
| group_replication_consistency                       | EVENTUAL                                    |
| group_replication_enforce_update_everywhere_checks  | OFF                                         |
| group_replication_exit_state_action                 | READ_ONLY                                   |
| group_replication_flow_control_applier_threshold    | 25000                                       |
| group_replication_flow_control_certifier_threshold  | 25000                                       |
| group_replication_flow_control_hold_percent         | 10                                          |
| group_replication_flow_control_max_quota            | 0                                           |
| group_replication_flow_control_member_quota_percent | 0                                           |
| group_replication_flow_control_min_quota            | 0                                           |
| group_replication_flow_control_min_recovery_quota   | 0                                           |
| group_replication_flow_control_mode                 | QUOTA                                       |
| group_replication_flow_control_period               | 1                                           |
| group_replication_flow_control_release_percent      | 50                                          |
| group_replication_force_members                     |                                             |
| group_replication_group_name                        | bc39053e-c9e0-11e9-9797-0242ac13000b        |
| group_replication_group_seeds                       | M2:33061,M3:33061 |
| group_replication_gtid_assignment_block_size        | 1000000                                     |
| group_replication_ip_whitelist                      | AUTOMATIC                                   |
| group_replication_local_address                     | M1:33061                       |
| group_replication_member_expel_timeout              | 0                                           |
| group_replication_member_weight                     | 50                                          |
| group_replication_message_cache_size                | 1073741824                                  |
| group_replication_poll_spin_loops                   | 0                                           |
| group_replication_recovery_complete_at              | TRANSACTIONS_APPLIED                        |
| group_replication_recovery_get_public_key           | OFF                                         |
| group_replication_recovery_public_key_path          |                                             |
| group_replication_recovery_reconnect_interval       | 60                                          |
| group_replication_recovery_retry_count              | 10                                          |
| group_replication_recovery_ssl_ca                   |                                             |
| group_replication_recovery_ssl_capath               |                                             |
| group_replication_recovery_ssl_cert                 |                                             |
| group_replication_recovery_ssl_cipher               |                                             |
| group_replication_recovery_ssl_crl                  |                                             |
| group_replication_recovery_ssl_crlpath              |                                             |
| group_replication_recovery_ssl_key                  |                                             |
| group_replication_recovery_ssl_verify_server_cert   | OFF                                         |
| group_replication_recovery_use_ssl                  | ON                                          |
| group_replication_single_primary_mode               | ON                                          |
| group_replication_ssl_mode                          | REQUIRED                                    |
| group_replication_start_on_boot                     | ON                                          |
| group_replication_transaction_size_limit            | 150000000                                   |
| group_replication_unreachable_majority_timeout      | 0                                           |
+-----------------------------------------------------+---------------------------------------------+

Some optional but useful parameters from my node1’s my.cnf:

[mysqld]
report_port 	   = 3306
report_host 	   = M1

plugin_load        = group_replication.so
plugin-load-add    = mysql_clone.so

Thanks for using MySQL!


MySQL InnoDB Cluster – Recovering and provisioning with MySQL Enterprise Backup

July 11, 2019
Full moon by Olivier DASINI

Like I stated in my previous article – MySQL InnoDB Cluster – Recovering and provisioning with mysqldump:
“As the administrator of a cluster, among other tasks, you should be able to restore failed nodes and grow (or shrink) your cluster by adding (or removing) new nodes”.
Well, I still agree with myself 🙂

MySQL customers using a Commercial Edition have access to MySQL Enterprise Backup (MEB), which provides enterprise-grade physical backup and recovery for MySQL.

MEB delivers hot, online, non-blocking backups on multiple platforms including Linux, Windows, Mac & Solaris.
More details here.

Note:
If you want to know how to recover a node and/or how to provision nodes with mysqldump please read this blog post.

Context

Let’s make it as simple as possible 🙂
I’m using MySQL Enterprise 8.0.16, available for MySQL customers on My Oracle Support or on Oracle Software Delivery Cloud.

I have an InnoDB Cluster setup, up and running.
So my main assumption is that you already know what MySQL Group Replication & MySQL InnoDB Cluster are.
Additionally, you can read this tutorial and this article from my colleague lefred, or this one on the Windows platform from my colleague Ivan.

All nodes must have the right MySQL Enterprise Backup privileges.
Details here.
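
For illustration only, here is a (non-exhaustive) sketch of a dedicated backup account; the account name is hypothetical and the authoritative privilege list is in the MEB documentation linked above:

// Hedged example: minimal dedicated backup account (check the MEB manual for the full set of grants)
MySQL JS> session.runSql("CREATE USER 'mebAdmin'@'localhost' IDENTIFIED BY '********'")
MySQL JS> session.runSql("GRANT RELOAD, PROCESS, BACKUP_ADMIN, REPLICATION CLIENT ON *.* TO 'mebAdmin'@'localhost'")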

All nodes must have the same values for log-bin & relay-log respectively:
For example: log-bin=binlog & relay-log=relaylog (on all nodes)
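
A quick way to verify this on each node, assuming a MySQL Shell session opened on that node:

// Hedged example: the base names (here binlog / relaylog) should be the same on every node
MySQL JS> session.runSql("SELECT @@GLOBAL.log_bin_basename, @@GLOBAL.relay_log_basename")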

Note:
Depending on how you configured your MySQL InnoDB Cluster, some steps could be slightly different.

Scenario 1 – Node Recovering

  • A 3-node MySQL InnoDB Cluster – M1 / M2 / M3 – in single-primary mode
  • MySQL Router is configured to enable R/W connections on 3306 and RO connections on 3307
  • M1 is currently the primary (that is, in Read/Write mode)
  • M2 & M3 are currently the secondaries (that is, in Read-Only mode)
  • M1 failed! Assuming it is irreconcilably corrupted :'(
  • M2 & M3 are now the (new temporary) cluster

The goal then is to rebuild M1 and put it back to the cluster.

So, like I said before, we have a 3-node MySQL Enterprise 8.0.16 InnoDB Cluster up and running:

$ mysqlsh clusterAdmin@{mysqlRouterIP}:3306 --cluster
...

MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M1:3306"
}

Then node M1 crashed… (status is “MISSING”):

MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "(MISSING)"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

M1 was the primary.
The cluster initiated an automatic database failover to elect a new primary… blablabla
Well you already know the story 🙂

After a while M1 is fixed and ready to be part of the cluster again.
To minimize the recovery time, instead of using the last backup, we prefer to take a fresh one.

Speaking of backups, I recommend reading the excellent blog post from my colleague Jesper – MySQL Backup Best Practices.

Let’s take a fresh backup on a secondary node (we could also use the primary).

MySQL Enterprise Backup is a very versatile tool and has many different configuration options.
For clarity I’ll use a simplistic command. Please read the MEB documentation for a more “production style” command.
The backup command roughly looks like this:

$ mysqlbackup --defaults-file=/etc/my.cnf  --with-timestamp --messages-logdir=/data/backups/ --backup-image=/data/backups/db.mbi backup-to-image
...
-------------------------------------------------------------
   Parameters Summary         
-------------------------------------------------------------
   Start LSN                  : 44603904
   End LSN                    : 44607630
-------------------------------------------------------------

mysqlbackup completed OK! with 1 warnings

Please note that it is highly recommended, in addition to the my.cnf, to include in your backup process a copy of the auto.cnf and mysqld-auto.cnf configuration files of all nodes.

If you “lose” your auto.cnf file, don’t worry: the server will generate a new one for you.
However the recovery process will be slightly different… (more on that below).

Now it’s time to restore this backup on node M1.

Because this server is part of a MySQL InnoDB Cluster, obviously there are some extra steps compared to a standalone server restoration.

Node Recovering

The node recovering process is simple:

  • Delete all contents of the MySQL Server data directory
  • Restore the backup
  • Restore the auto.cnf file
  • Restore the mysqld-auto.cnf file (if there is one)
  • Start the MySQL instance

This gives us, on M1, something like this (simplified version, please adapt to your context):

# Delete all contents of the MySQL Server data directory
$ rm -rf /var/lib/mysql/*


# Restore the backup
$ mysqlbackup --backup-dir=/exp/bck --datadir=/var/lib/mysql --backup-image=/data/backups/db.mbi copy-back-and-apply-log


# Restore the auto.cnf file
$ cp -p /data/backups/auto.cnf  /var/lib/mysql


# Restore the mysqld-auto.cnf file 
$ cp -p /data/backups/mysqld-auto.cnf  /var/lib/mysql


# Start the MySQL instance
$ service mysql start

Then you can connect to the cluster and… see that the node M1 is recovering (“status: RECOVERING”) or, if you’re not fast enough, that the node is again part of the cluster (“status: ONLINE”):

$ mysqlsh clusterAdmin@{mysqlRouterIP}:3306 --cluster
...

MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "RECOVERING", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}


// After a while 


MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

Lost the auto.cnf file

As promised, here is the case where the auto.cnf configuration file is not restored.
In fact, in this case the cluster would see this node as a new node (because the server will have a new UUID).
So the process for putting it back is different.
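
You can see this by looking at the server UUID, which is stored in auto.cnf; with a regenerated auto.cnf it no longer matches the member_id recorded in the cluster metadata. A minimal sketch:

// Hedged example: the restored server now reports a brand new UUID
MySQL JS> session.runSql("SELECT @@GLOBAL.server_uuid")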

Also note that if you lose the mysqld-auto.cnf file you’ll probably need to configure (again) the server to be Group Replication aware.

To begin, you must stop the Group Replication plugin on the node that needs to be restored (M1):

$ mysqlsh root@M1 --sql -e"STOP GROUP_REPLICATION;"

Then if necessary you can check the configuration and/or configure the node:

MySQL JS> dba.checkInstanceConfiguration("root@M1:3306")
...


MySQL JS> dba.configureInstance("root@M1:3306")
...

You need to remove the old node from the InnoDB Cluster metadata:

MySQL JS> cluster.rescan()
Rescanning the cluster...

Result of the rescanning operation for the 'default' ReplicaSet:
{
    "name": "default", 
    "newTopologyMode": null, 
    "newlyDiscoveredInstances": [], 
    "unavailableInstances": [
        {
            "host": "M1:3306", 
            "label": "M1:3306", 
            "member_id": "6ad8caed-9d90-11e9-96e5-0242ac13000b"
        }
    ]
}

The instance 'M1:3306' is no longer part of the ReplicaSet.
The instance is either offline or left the HA group. You can try to add it to the cluster again with the cluster.rejoinInstance('M1:3306') command or you can remove it from the cluster configuration.
Would you like to remove it from the cluster metadata? [Y/n]: Y
Removing instance from the cluster metadata...
The instance 'M1:3306' was successfully removed from the cluster metadata.

Add the “new” node:

MySQL JS> cluster.addInstance("clusterAdmin@M1:3306")
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster ...

Validating instance at M1:3306...

This instance reports its own address as M1

Instance configuration is suitable.
The instance 'clusterAdmin@M1:3306' was successfully added to the cluster.

Check – and after the recovery stage, the “new” node is online:

MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "RECOVERING", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

// After a while...

MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

Et voilà!

A simple alternative to deal with this “unpleasantness”, if you don’t need to configure the node, is basically to remove the node and add it again.
Below is an example with M3:

MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_PARTIAL", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "(MISSING)"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}


MySQL JS> cluster.removeInstance("root@M3:3306")
The instance will be removed from the InnoDB cluster. Depending on the instance
being the Seed or not, the Metadata session might become invalid. If so, please
start a new session to the Metadata Storage R/W instance.

Instance 'M3:3306' is attempting to leave the cluster...

The instance 'M3:3306' was successfully removed from the cluster.


MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}


MySQL JS> cluster.addInstance("root@M3:3306")
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster ...

Validating instance at M3:3306...

This instance reports its own address as M3

Instance configuration is suitable.
The instance 'root@M3:3306' was successfully added to the cluster.


MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

Scenario 2 – Node Provisioning

  • A 3-node MySQL InnoDB Cluster – M1 / M2 / M3 – in single-primary mode
  • MySQL Router is configured to enable R/W connections on port 3306 and RO connections on port 3307
  • M2 is currently the primary (that is, in Read/Write mode)
  • M1 & M3 are currently the secondaries (that is, in Read-Only mode)

The goal then is to add 2 new nodes: M4 & M5

So we have the 3-node MySQL 8.0.16 InnoDB Cluster that we used in the first part of this article, and it is up and running.

Actually adding new nodes is very close to what we have done previously.

The process is:

  • Deploy the new MySQL instance preferably already configured for Group Replication
  • Restore the data in the way that we have seen previously

Checking the configuration and configuring the instance can be done with the dba.checkInstanceConfiguration() and dba.configureInstance() functions respectively (and it could also be useful to use checkInstanceState() – see this article).
e.g. on node M4:

$ mysqlsh clusterAdmin@M4:3306 -- dba checkInstanceConfiguration
Validating MySQL instance at M4:3306 for use in an InnoDB cluster...

This instance reports its own address as M4

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'M4:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}
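
The checkInstanceState() function mentioned above can also tell you, before adding the node, whether its GTID set can be recovered from the cluster. A minimal sketch (output omitted):

// Hedged example: verify the instance's GTID state against the cluster
MySQL JS> cluster.checkInstanceState("clusterAdmin@M4:3306")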

Restore the backup on M4, the new node:

# Restore the backup on M4
$ mysqlbackup --backup-dir=/exp/bck --datadir=/var/lib/mysql --backup-image=/data/backups/db.mbi copy-back-and-apply-log

And finally, add the new node (M4):

// Add the new instance
MySQL JS> cluster.addInstance("clusterAdmin@M4:3306")


// Check
MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M4:3306": {
                "address": "M4:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

Same process for the last node, M5.

You end up with a 5-node MySQL InnoDB Cluster \o/:

MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to 2 failures.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M4:3306": {
                "address": "M4:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M5:3306": {
                "address": "M5:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

This is one way to do node recovery and provisioning using MySQL Enterprise Backup.

You may also take a look at this article: InnoDB Cluster: Recovering an instance with MySQL Enterprise Backup – from my colleague Keith.

If you are not yet a MySQL customer, and therefore cannot enjoy our advanced features/tools and technical support, then mysqldump could probably fit here. Please read: MySQL InnoDB Cluster – Recovering and provisioning with mysqldump.

Note that some new features are coming in this area… 🙂
Stay tuned!

Misc
Node 3 – Group Replication configuration

SQL> SHOW VARIABLES LIKE 'group_replication%';
+-----------------------------------------------------+--------------------------------------+
| Variable_name                                       | Value                                |
+-----------------------------------------------------+--------------------------------------+
| group_replication_allow_local_lower_version_join    | OFF                                  |
| group_replication_auto_increment_increment          | 7                                    |
| group_replication_autorejoin_tries                  | 0                                    |
| group_replication_bootstrap_group                   | OFF                                  |
| group_replication_communication_debug_options       | GCS_DEBUG_NONE                       |
| group_replication_communication_max_message_size    | 10485760                             |
| group_replication_components_stop_timeout           | 31536000                             |
| group_replication_compression_threshold             | 1000000                              |
| group_replication_consistency                       | EVENTUAL                             |
| group_replication_enforce_update_everywhere_checks  | OFF                                  |
| group_replication_exit_state_action                 | READ_ONLY                            |
| group_replication_flow_control_applier_threshold    | 25000                                |
| group_replication_flow_control_certifier_threshold  | 25000                                |
| group_replication_flow_control_hold_percent         | 10                                   |
| group_replication_flow_control_max_quota            | 0                                    |
| group_replication_flow_control_member_quota_percent | 0                                    |
| group_replication_flow_control_min_quota            | 0                                    |
| group_replication_flow_control_min_recovery_quota   | 0                                    |
| group_replication_flow_control_mode                 | QUOTA                                |
| group_replication_flow_control_period               | 1                                    |
| group_replication_flow_control_release_percent      | 50                                   |
| group_replication_force_members                     |                                      |
| group_replication_group_name                        | 28f66c86-9d66-11e9-876e-0242ac13000b |
| group_replication_group_seeds                       | M1:33061,M2:33061                    |
| group_replication_gtid_assignment_block_size        | 1000000                              |
| group_replication_ip_whitelist                      | AUTOMATIC                            |
| group_replication_local_address                     | M3:33061                             |
| group_replication_member_expel_timeout              | 0                                    |
| group_replication_member_weight                     | 50                                   |
| group_replication_message_cache_size                | 1073741824                           |
| group_replication_poll_spin_loops                   | 0                                    |
| group_replication_recovery_complete_at              | TRANSACTIONS_APPLIED                 |
| group_replication_recovery_get_public_key           | OFF                                  |
| group_replication_recovery_public_key_path          |                                      |
| group_replication_recovery_reconnect_interval       | 60                                   |
| group_replication_recovery_retry_count              | 10                                   |
| group_replication_recovery_ssl_ca                   |                                      |
| group_replication_recovery_ssl_capath               |                                      |
| group_replication_recovery_ssl_cert                 |                                      |
| group_replication_recovery_ssl_cipher               |                                      |
| group_replication_recovery_ssl_crl                  |                                      |
| group_replication_recovery_ssl_crlpath              |                                      |
| group_replication_recovery_ssl_key                  |                                      |
| group_replication_recovery_ssl_verify_server_cert   | OFF                                  |
| group_replication_recovery_use_ssl                  | ON                                   |
| group_replication_single_primary_mode               | ON                                   |
| group_replication_ssl_mode                          | REQUIRED                             |
| group_replication_start_on_boot                     | ON                                   |
| group_replication_transaction_size_limit            | 150000000                            |
| group_replication_unreachable_majority_timeout      | 0                                    |
+-----------------------------------------------------+--------------------------------------+

Thanks for using MySQL!

Follow me on Linkedin

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

My Speaker Deck account.

Thanks for using HeatWave & MySQL!


MySQL InnoDB Cluster – Recovering and provisioning with mysqldump

July 9, 2019
Butterfly by Olivier DASINI

As the administrator of a cluster, among other tasks, you should be able to restore failed nodes and grow (or shrink) your cluster by adding (or removing) new nodes.

In MySQL, as a backup tool (and if your amount of data is not too big), you can use mysqldump, a client utility that performs logical backups.
The results are SQL statements that reproduce the original schema objects and data.

For substantial amounts of data, however, a physical backup solution such as MySQL Enterprise Backup is faster, particularly for the restore operation.
Hey! Guess what? You can read: MySQL InnoDB Cluster – Recovering and provisioning with MySQL Enterprise Backup

Context

Let’s make it as simple as possible 🙂
I’m using MySQL 8.0.16.
I have an InnoDB Cluster setup – up and running.
So my main assumption is that you already know what is MySQL Group Replication & MySQL InnoDB Cluster.
Additionally you can read this tutorial and this article from my colleague lefred or this one on Windows Platform from my colleague Ivan.

Note:

Depending on how you configured your MySQL InnoDB Cluster, some steps could be slightly different.

Scenario 1 – Node Recovering

  • A 3 nodes MySQL InnoDB Cluster – M1 / M2 / M3, in single primary mode
  • MySQL Router is configured to enable R/W connections on port 3306 and RO connections on port 3307
  • M1 is currently the primary (so in Read/Write mode)
  • M2 & M3 are currently the secondaries (i.e. Read Only mode)
  • M1 failed! Some tables are irreconcilably corrupted 🙁
  • M2 & M3 are now the (new temporary) cluster

The goal then is to rebuild M1 and put it back to the cluster.

So like I stated we have a 3 nodes MySQL 8.0.16 InnoDB Cluster up and running:

$ mysqlsh clusterAdmin@{mysqlRouterIP}:3306 --cluster
...

MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M1:3306"
}

Then M1 failed (status is “MISSING“) :

MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "(MISSING)"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

M1 was the primary.
The cluster initiated an automatic database failover to elect a new primary… blablabla
Anyway you already know the story 🙂

After a while M1 is finally fixed and ready to be part of the cluster again.
To minimize the recovery time, instead of using the last dump, we prefer to take a fresh one.

Speaking of backup, I recommend reading the excellent blog post from my colleague Jesper – MySQL Backup Best Practices.

Let’s take a fresh dump on a secondary node using MySQL Router, on port 3307 with my custom configuration (we could also have used the primary).
The dump command will roughly look like:

mysqldump --defaults-file=/etc/my.cnf -u mdump -p -h {mysqlRouterIP} -P 3307 --all-databases --routines --events --single-transaction --flush-privileges --hex-blob --log-error=/var/log/mysqldump.log --result-file=/data/backups/dump.sql

Minimum privileges for the MySQL user with which mysqldump connects to the server

Actually the minimum privileges depend on which objects you want to dump and what the dump is for.

However, the following privileges should be fine for most classical usages:

GRANT SELECT, SHOW VIEW, EVENT, TRIGGER, LOCK TABLES, CREATE, ALTER, RELOAD, REPLICATION CLIENT, REPLICATION_SLAVE_ADMIN ON *.* TO <dumpUser>

Please note that, in addition to my.cnf, it is highly recommended to include in your backup process a copy of the auto.cnf and mysqld-auto.cnf configuration files of all nodes.
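
As an illustration, here is a minimal sketch of what that copy step could look like (the paths and the archive name are assumptions, adapt them to your layout):

$ tar czf /data/backups/M1-config.tar.gz /etc/my.cnf /var/lib/mysql/auto.cnf /var/lib/mysql/mysqld-auto.cnf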

If you “lose” your auto.cnf file, don’t worry the server will generate a new one for you.
However the recovery process will be slightly different… (more on that below).

Now it’s time to restore this dump on node M1.

Because this server is part of an InnoDB Cluster, there are clearly some extra steps compared to a standalone server restoration.

Restore the data

First, restore the data on M1:

  • It’s a logical restoration so the server to restore must be up 😀
  • Group Replication plugin must be stopped
    • STOP GROUP_REPLICATION;
  • Disable logging to the binary log
    • SET SQL_LOG_BIN=0;
  • Delete binary log files
    • RESET MASTER;
  • Clear the master info and relay log info repositories and delete all the relay log files
    • RESET SLAVE;
  • Enable updates
    • SET GLOBAL super_read_only=0;
  • Load the dump
    • source /data/backups/dump.sql

This gives us:

mysqlsh root@M1:3306 --sql
...

M1 SQL> STOP GROUP_REPLICATION;
Query OK, 0 rows affected (12.04 sec)

M1 SQL> SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.00 sec)

M1 SQL> RESET MASTER; 
Query OK, 0 rows affected (0.06 sec)

M1 SQL> RESET SLAVE;
Query OK, 0 rows affected (0.13 sec)

M1 SQL> SET GLOBAL super_read_only=0;
Query OK, 0 rows affected (0.00 sec)

M1 SQL> source /data/backups/dump.sql
Query OK, 0 rows affected (0.00 sec)
...

Put the node back to the cluster

Second, put the node back to the cluster.
Connect to MySQL Router on the primary (port 3306 in my case):

$ mysqlsh clusterAdmin@{mysqlRouterIP}:3306 --cluster
...

MySQL JS> cluster.rejoinInstance("clusterAdmin@M1:3306")
Rejoining the instance to the InnoDB cluster. Depending on the original problem that made the instance unavailable, the rejoin operation might not be successful and further manual steps will be needed to fix the underlying problem.

Please monitor the output of the rejoin operation and take necessary action if the instance cannot rejoin.

Rejoining instance to the cluster ...

The instance 'M1:3306' was successfully rejoined on the cluster.

Now you should check the new cluster status:

MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

Note:

The restored node will be in the “RECOVERING” state before moving to “ONLINE”.
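
If you want to watch that transition, here is a small sketch (run from any online member; this uses the standard Group Replication monitoring table, nothing specific to this procedure):

SQL> SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE FROM performance_schema.replication_group_members;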

Lost the auto.cnf file

As promised, here is the case where the auto.cnf configuration file is not restored.
In fact, in that case the cluster would see this node as a new node (because the server will have a new UUID).
So the process for putting it back is different.
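
Since the regenerated auto.cnf means a new server UUID, a quick way to confirm that new identity (a plain server variable, nothing InnoDB Cluster specific) is:

M1 SQL> SELECT @@server_uuid;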

Also note that if you lose the mysqld-auto.cnf file you’ll probably need to configure (again) the server to be Group Replication aware.

So basically the process is to do some cleanup and then add the old node back as if it were a new node.


Assuming Group Replication plugin is stopped on M1:

// Check the configuration, even more important if you lost mysqld-auto.cnf :)
MySQL JS> dba.checkInstanceConfiguration('clusterAdmin@M1:3306')
Validating MySQL instance at M1:3306 for use in an InnoDB cluster...

This instance reports its own address as M1

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'M1:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}


// If needed configure your instance
MySQL JS> dba.configureInstance('clusterAdmin@M1:3306')


// Remove the old node from the cluster metadata
MySQL JS> cluster.rescan()
Rescanning the cluster...

Result of the rescanning operation for the 'default' ReplicaSet:
{
    "name": "default", 
    "newTopologyMode": null, 
    "newlyDiscoveredInstances": [], 
    "unavailableInstances": [
        {
            "host": "M1:3306", 
            "label": "M1:3306", 
            "member_id": "a3f1ee50-9be3-11e9-a3fe-0242ac13000b"
        }
    ]
}

The instance 'M1:3306' is no longer part of the ReplicaSet.
The instance is either offline or left the HA group. You can try to add it to the cluster again with the cluster.rejoinInstance('M1:3306') command or you can remove it from the cluster configuration.
Would you like to remove it from the cluster metadata? [Y/n]: Y
Removing instance from the cluster metadata...
The instance 'M1:3306' was successfully removed from the cluster metadata.


// Add the new instance
MySQL JS> cluster.addInstance("clusterAdmin@M1:3306")
A new instance will be added to the InnoDB cluster. Depending on the amount of data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster ...

Validating instance at M1:3306...

This instance reports its own address as M1

Instance configuration is suitable.
The instance 'clusterAdmin@M1:3306' was successfully added to the cluster.


// Check
MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

Et voilà !

Scenario 2 – Node Provisioning

  • A 3 nodes MySQL InnoDB Cluster – M1 / M2 / M3 in single primary mode
  • MySQL Router is configured to enable R/W connections on port 3306 and RO connections on port 3307
  • M2 is currently the primary in Read/Write mode
  • M1 & M3 are currently the secondaries in Read Only mode

The goal then is to add 2 new nodes: M4 & M5

So we have the 3 nodes MySQL 8.0.16 InnoDB Cluster that we used in the first part of this article.
And it is up and running.

Actually adding new nodes is very close to what we have done previously.

The process is :

  • Deploy the new MySQL instance preferably already configured for Group Replication
  • Restore the data in the way that we have seen previously

Checking the configuration and configuring the instance itself can be done respectively with the dba.checkInstanceConfiguration() and dba.configureInstance() functions (it could also be useful to use checkInstanceState() – see this article).
e.g. on node M4:

$ mysqlsh clusterAdmin@M4:3306 -- dba checkInstanceConfiguration
Validating MySQL instance at M4:3306 for use in an InnoDB cluster...

This instance reports its own address as M4

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'M4:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}

The first part of the restore process is the same as the one we have seen:

mysqlsh root@M4:3306 --sql
...

M4 SQL> STOP GROUP_REPLICATION; -- if necessary
Query OK, 0 rows affected (12.04 sec)

M4 SQL> SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.00 sec)

M4 SQL> RESET MASTER; 
Query OK, 0 rows affected (0.06 sec)

M4 SQL> RESET SLAVE;
Query OK, 0 rows affected (0.13 sec)

M4 SQL> SET GLOBAL super_read_only=0;
Query OK, 0 rows affected (0.00 sec)

M4 SQL> source /data/backups/dump.sql
Query OK, 0 rows affected (0.00 sec)
...

For the second part, we will now add the new node (M4) :

// Add the new instance
MySQL JS> cluster.addInstance("clusterAdmin@M4:3306")


// Check
MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M4:3306": {
                "address": "M4:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

Note:

If necessary, just before the addInstance() you can do the checkInstanceConfiguration() and configureInstance().

Same process for the last node, M5.

And finally you got a 5 nodes MySQL InnoDB Cluster \o/:

MySQL JS> cluster.status()
{
    "clusterName": "pocCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "M2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to 2 failures.", 
        "topology": {
            "M1:3306": {
                "address": "M1:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M2:3306": {
                "address": "M2:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M3:3306": {
                "address": "M3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M4:3306": {
                "address": "M4:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }, 
            "M5:3306": {
                "address": "M5:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.16"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "M2:3306"
}

This is one way to do node recovery and provisioning using mysqldump, when your amount of data is not too big.

When a logical backup is not efficient, it is time to use a hot, online and non-blocking physical backup tool like MySQL Enterprise Backup.
Actually, it’s even easier!
Please read: MySQL InnoDB Cluster – Recovering and provisioning with MySQL Enterprise Backup.

A final word to say that some new features are coming in this area… 🙂
Stay tuned!

References

Misc
Node 1 – Group Replication configuration

mysql> SHOW VARIABLES LIKE 'group_replication%';
+-----------------------------------------------------+-------------------------------------+
| Variable_name                                       | Value                               |
+-----------------------------------------------------+-------------------------------------+
| group_replication_allow_local_lower_version_join    | OFF                                 |
| group_replication_auto_increment_increment          | 7                                   |
| group_replication_autorejoin_tries                  | 1                                   |
| group_replication_bootstrap_group                   | OFF                                 |
| group_replication_communication_debug_options       | GCS_DEBUG_NONE                      |
| group_replication_communication_max_message_size    | 10485760                            |
| group_replication_components_stop_timeout           | 31536000                            |
| group_replication_compression_threshold             | 1000000                             |
| group_replication_consistency                       | EVENTUAL                            |
| group_replication_enforce_update_everywhere_checks  | OFF                                 |
| group_replication_exit_state_action                 | READ_ONLY                           |
| group_replication_flow_control_applier_threshold    | 25000                               |
| group_replication_flow_control_certifier_threshold  | 25000                               |
| group_replication_flow_control_hold_percent         | 10                                  |
| group_replication_flow_control_max_quota            | 0                                   |
| group_replication_flow_control_member_quota_percent | 0                                   |
| group_replication_flow_control_min_quota            | 0                                   |
| group_replication_flow_control_min_recovery_quota   | 0                                   |
| group_replication_flow_control_mode                 | QUOTA                               |
| group_replication_flow_control_period               | 1                                   |
| group_replication_flow_control_release_percent      | 50                                  |
| group_replication_force_members                     |                                     |
| group_replication_group_name                        | d1b109bf-9be3-11e9-9ea2-0242ac13000b|
| group_replication_group_seeds                       | M2:33061,M3:33061,M4:33061,M5:33061 |
| group_replication_gtid_assignment_block_size        | 1000000                             |
| group_replication_ip_whitelist                      | AUTOMATIC                           |
| group_replication_local_address                     | M1:33061                            |
| group_replication_member_expel_timeout              | 0                                   |
| group_replication_member_weight                     | 50                                  |
| group_replication_message_cache_size                | 1073741824                          |
| group_replication_poll_spin_loops                   | 0                                   |
| group_replication_recovery_complete_at              | TRANSACTIONS_APPLIED                |
| group_replication_recovery_get_public_key           | OFF                                 |
| group_replication_recovery_public_key_path          |                                     |
| group_replication_recovery_reconnect_interval       | 60                                  |
| group_replication_recovery_retry_count              | 10                                  |
| group_replication_recovery_ssl_ca                   |                                     |
| group_replication_recovery_ssl_capath               |                                     |
| group_replication_recovery_ssl_cert                 |                                     |
| group_replication_recovery_ssl_cipher               |                                     |
| group_replication_recovery_ssl_crl                  |                                     |
| group_replication_recovery_ssl_crlpath              |                                     |
| group_replication_recovery_ssl_key                  |                                     |
| group_replication_recovery_ssl_verify_server_cert   | OFF                                 |
| group_replication_recovery_use_ssl                  | ON                                  |
| group_replication_single_primary_mode               | ON                                  |
| group_replication_ssl_mode                          | REQUIRED                            |
| group_replication_start_on_boot                     | ON                                  |
| group_replication_transaction_size_limit            | 150000000                           |
| group_replication_unreachable_majority_timeout      | 0                                   |
+-----------------------------------------------------+-------------------------------------+

Thanks for using MySQL!

Follow me on Linkedin

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

My Speaker Deck account.

Thanks for using HeatWave & MySQL!


Check the MySQL server startup configuration

June 11, 2019
Caribbean by Olivier DASINI

Since 8.0.16, MySQL Server supports a validate-config option that enables the startup configuration to be checked for problems without running the server in normal operational mode:

  • If no errors are found, the server terminates with an exit code of 0.
  • If an error is found, the server displays a diagnostic message and terminates with an exit code of 1.

validate-config can be used any time, but is particularly useful after an upgrade, to check whether any options previously used with the older server are considered by the upgraded server to be deprecated or obsolete.
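
Since the result is reported through the exit code, validate-config is also easy to script. A minimal sketch (plain shell, nothing MySQL specific) that could be part of a pre-upgrade check:

$ mysqld --validate-config && echo "Configuration OK" || echo "Configuration NOT OK"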

First let’s get some information about my MySQL version and configuration.

$ mysqld --help --verbose | head -n13
mysqld  Ver 8.0.16 for Linux on x86_64 (MySQL Community Server - GPL)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Starts the MySQL database server.

Usage: mysqld [OPTIONS]

Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf 

I’m using MySQL 8.0.16.
The default options are read, in the given order, from:

  • /etc/my.cnf
  • /etc/mysql/my.cnf
  • /usr/etc/my.cnf
  • ~/.my.cnf

Now let’s check my MySQL server startup configuration :

$ mysqld --validate-config
$

No error !
No output, everything looks good.
My server will start with this configuration.

If there is an error, the server terminates.
The output is obviously different :

$ mysqld --validate-config --fake-option
2019-06-05T15:10:08.653775Z 0 [ERROR] [MY-000068] [Server] unknown option '--fake-option'.
2019-06-05T15:10:08.653822Z 0 [ERROR] [MY-010119] [Server] Aborting

Usually your configuration options are written in your configuration file (in general named my.cnf).
Therefore you can also use validate-config in this context :

$ mysqld --defaults-file=/etc/my.cnf --validate-config 
$ 

Note:

defaults-file, if specified, must be the first option on the command line.

Furthermore you can handle the verbosity using log_error_verbosity :

  • A value of 1 gives you ERROR
  • A value of 2 gives you ERROR & WARNING
  • A value of 3 gives you ERROR, WARNING & INFORMATION (i.e. note)

With a verbosity of 2, in addition to errors, we will be able to display warnings :

$ mysqld --defaults-file=/etc/my.cnf --validate-config  --log_error_verbosity=2
2019-06-05T15:53:42.785422Z 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead.
2019-06-05T15:53:42.785660Z 0 [Warning] [MY-010101] [Server] Insecure configuration for --secure-file-priv: Location is accessible to all OS users. Consider choosing a different directory.

Nothing very serious; however, it is a best practice to eliminate warnings when possible.

So I fixed these warnings :

$ mysqld --defaults-file=/etc/my.cnf --validate-config  --log_error_verbosity=2
2019-06-05T16:04:32.363297Z 0 [ERROR] [MY-000067] [Server] unknown variable 'binlog_expire_logs_second=7200'.
2019-06-05T16:04:32.363369Z 0 [ERROR] [MY-010119] [Server] Aborting

Oops!!! There is a typo… :-0
I wrote binlog_expire_logs_second instead of binlog_expire_logs_seconds.
(I forgot the final “s”)

In that case, my MySQL server could not start.
Thanks to validate-config !
I can now avoid some unpleasant experiences when starting the server 🙂

With the correct spelling I now have no error and no warning :

$ mysqld --defaults-file=/etc/my.cnf --validate-config  --log_error_verbosity=2
$ 

Note that you could also use verbosity 3 :

$ mysqld --defaults-file=/etc/my.cnf --validate-config  --log_error_verbosity=3
2019-06-05T16:02:03.589770Z 0 [Note] [MY-010747] [Server] Plugin 'FEDERATED' is disabled.
2019-06-05T16:02:03.590719Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'MyISAM'
2019-06-05T16:02:03.590763Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'CSV'

validate-config is convenient and can be very useful.
It may be worthwhile to include it in your upgrade process.

References

Thanks for using MySQL!

Follow me on Linkedin

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

My Speaker Deck account.

Thanks for using HeatWave & MySQL!


MySQL 8.0.16 New Features Summary

June 5, 2019
Sakila mozaik by Olivier DASINI

Presentation of some of the new features of MySQL 8.0.16 released on April 25, 2019.

Agenda

  • mysql_upgrade is no longer necessary
  • CHECK Constraints
  • Constant-Folding Optimization
  • SYSTEM_USER & partial_revokes
  • Chinese collation for utf8mb4
  • Performance Schema keyring_keys table
  • MySQL Shell Enhancements
  • MySQL Router Enhancements
  • InnoDB Cluster Enhancements
  • Group Replication Enhancements
  • Size of the binary tarball for Linux
  • Server quick settings validation

Download this presentation and others on my SlideShare account.

I’ve also made a video (in French) on my Youtube channel.

You can subscribe here.

That might interest you

Follow me on Linkedin

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

My Speaker Deck account.

Thanks for using HeatWave & MySQL!


MySQL InnoDB Cluster – HowTo #2 – Validate an instance

May 21, 2019
Sakila HA by Olivier DASINI

How do I… Validate an instance for MySQL InnoDB Cluster usage?

Short answer

Use:

checkInstanceConfiguration()

Long answer…

In this article I assume you already know what MySQL Group Replication & MySQL InnoDB Cluster are.
Additionally you can read this tutorial and this article from my colleague lefred or this one on Windows Platform from my colleague Ivan.

During the cluster creation process or when you want to add a node to a running cluster, the chosen MySQL instance must be valid for an InnoDB Cluster usage.
That is, be compliant with Group Replication requirements.

MySQL Shell provides a simple and easy way to check if your instance is valid: checkInstanceConfiguration()

I’m using MySQL Shell 8.0.16:

$ mysqlsh
MySQL Shell 8.0.16

Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.

MySQL JS> 

In this scenario my cluster is not created yet. However the logic would have been the same for adding a node to a running cluster.

Ask for help

The built-in help is simply awesome!

MySQL JS> dba.help('checkInstanceConfiguration')
NAME
      checkInstanceConfiguration - Validates an instance for MySQL InnoDB
                                   Cluster usage.

SYNTAX
      dba.checkInstanceConfiguration(instance[, options])

WHERE
      instance: An instance definition.
      options: Data for the operation.

RETURNS
       A descriptive text of the operation result.

DESCRIPTION
      This function reviews the instance configuration to identify if it is
      valid for usage with group replication. Use this to check for possible
      configuration issues on MySQL instances before creating a cluster with
      them or adding them to an existing cluster.

      The instance definition is the connection data for the instance.

      For additional information on connection data use \? connection.

      Only TCP/IP connections are allowed for this function.

      The options dictionary may contain the following options:

      - mycnfPath: Optional path to the MySQL configuration file for the
        instance. Alias for verifyMyCnf
      - verifyMyCnf: Optional path to the MySQL configuration file for the
        instance. If this option is given, the configuration file will be
        verified for the expected option values, in addition to the global
        MySQL system variables.
      - password: The password to get connected to the instance.
      - interactive: boolean value used to disable the wizards in the command
        execution, i.e. prompts are not provided to the user and confirmation
        prompts are not shown.

      The connection password may be contained on the instance definition,
      however, it can be overwritten if it is specified on the options.

      The returned descriptive text of the operation result indicates whether
      the instance is valid for InnoDB Cluster usage or not. If not, a table
      containing the following information is presented:

      - Variable: the invalid configuration variable.
      - Current Value: the current value for the invalid configuration
        variable.
      - Required Value: the required value for the configuration variable.
      - Note: the action to be taken.

      The note can be one of the following:

      - Update the config file and update or restart the server variable.
      - Update the config file and restart the server.
      - Update the config file.
      - Update the server variable.
      - Restart the server.

EXCEPTIONS
      ArgumentError in the following scenarios:

      - If the instance parameter is empty.
      - If the instance definition is invalid.
      - If the instance definition is a connection dictionary but empty.

      RuntimeError in the following scenarios:

      - If the instance accounts are invalid.
      - If the instance is offline.
      - If the instance is already part of a Replication Group.
      - If the instance is already part of an InnoDB Cluster.
      - If the given the instance cannot be used for Group Replication.

Check Instance Configuration

In order to check a MySQL instance, I must connect to it, either by opening a MySQL Shell session on that instance or by providing the connection data to the function:

MySQL JS> dba.checkInstanceConfiguration('root@172.20.0.11')
Validating MySQL instance at 172.20.0.11:3306 for use in an InnoDB cluster...

This instance reports its own address as mysql_node1

Checking whether existing tables comply with Group Replication requirements...
WARNING: The following tables do not have a Primary Key or equivalent column: 
test.squares, test.people, test.animal

Group Replication requires tables to use InnoDB and have a PRIMARY KEY or PRIMARY KEY Equivalent (non-null unique key). Tables that do not follow these requirements will be readable but not updateable when used with Group Replication. If your applications make updates (INSERT, UPDATE or DELETE) to these tables, ensure they use the InnoDB storage engine and have a PRIMARY KEY or PRIMARY KEY Equivalent.

Checking instance configuration...

Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Please use the dba.configureInstance() command to repair these issues.

{
    "config_errors": [
        {
            "action": "server_update", 
            "current": "CRC32", 
            "option": "binlog_checksum", 
            "required": "NONE"
        }, 
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "enforce_gtid_consistency", 
            "required": "ON"
        }, 
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "gtid_mode", 
            "required": "ON"
        }, 
        {
            "action": "restart", 
            "current": "1", 
            "option": "server_id", 
            "required": "<unique ID>"
        }
    ], 
    "status": "error"
}

The output depends on the instance’s current status.
In my case 3 tables do not meet the requirements because they lack a primary key (or non-null unique key).
Also I need to correctly set 4 variables, and 3 of them require a restart of the MySQL instance.

Automation

It is not always convenient (or recommended) to do this kind of task manually.
MySQL Shell is built with DevOps usage in mind:

$ mysqlsh -e "dba.checkInstanceConfiguration('root@172.20.0.12')"
Validating MySQL instance at 172.20.0.12:3306 for use in an InnoDB cluster...

This instance reports its own address as mysql_node2

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...

Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Please use the dba.configureInstance() command to repair these issues.

Or even more practical:

$ mysqlsh -- dba checkInstanceConfiguration --user=root --host=172.20.0.13
Validating MySQL instance at 172.20.0.13:3306 for use in an InnoDB cluster...

This instance reports its own address as mysql_node3

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...

Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Please use the dba.configureInstance() command to repair these issues.

{
    "config_errors": [
        {
            "action": "server_update", 
            "current": "CRC32", 
            "option": "binlog_checksum", 
            "required": "NONE"
        }, 
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "enforce_gtid_consistency", 
            "required": "ON"
        }, 
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "gtid_mode", 
            "required": "ON"
        }, 
        {
            "action": "restart", 
            "current": "1", 
            "option": "server_id", 
            "required": "<unique ID>"
        }
    ], 
    "status": "error"
}

Another option is to create a script and pass it to MySQL Shell.
A very simple (and naive) example could be:

$ cat /tmp/servers.js
dba.checkInstanceConfiguration('root@172.20.0.11');
dba.checkInstanceConfiguration('root@172.20.0.12');
dba.checkInstanceConfiguration('root@172.20.0.13');

then process the file:

$ mysqlsh  -f /tmp/servers.js
Validating MySQL instance at 172.20.0.11:3306 for use in an InnoDB cluster...

This instance reports its own address as mysql_node1

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance '172.20.0.11:3306' is valid for InnoDB cluster usage.

Validating MySQL instance at 172.20.0.12:3306 for use in an InnoDB cluster...

This instance reports its own address as mysql_node2

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance '172.20.0.12:3306' is valid for InnoDB cluster usage.

Validating MySQL instance at 172.20.0.13:3306 for use in an InnoDB cluster...

This instance reports its own address as mysql_node3

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance '172.20.0.13:3306' is valid for InnoDB cluster usage.

In the previous scenario all the MySQL instances were set properly before the check.

Note that everything that has been done previously in JavaScript can also be done in Python :

MySQL 172.20.0.11:33060+ JS> \py
Switching to Python mode...

MySQL 172.20.0.11:33060+ Py> dba.check_instance_configuration()
Validating MySQL instance at 172.20.0.11:3306 for use in an InnoDB cluster...

This instance reports its own address as mysql_node1

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance '172.20.0.11:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}

Or, from the command line, with a Python script file:

$ mysqlsh root@172.20.0.11 --py -f check_servers.py
...

To summarize

Q: How do I validate an instance for MySQL InnoDB Cluster usage?

A: Use check_instance_configuration()

References

Thanks for using MySQL!

Follow me on Linkedin

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

My Speaker Deck account.

Thanks for using HeatWave & MySQL!


CHECK constraints in MySQL

May 14, 2019
Above the clouds by Olivier DASINI

MySQL (really) supports CHECK CONSTRAINT since version 8.0.16.
In this article I will show you 2 things:

  1. An elegant way to simulate check constraint in MySQL 5.7 & 8.0.
  2. How easy and convenient it is to use CHECK constraints starting from MySQL 8.0.16.

Please note that this article is strongly inspired by Mablomy‘s blog post: CHECK constraint for MySQL – NOT NULL on generated columns.

I’m using the optimized MySQL Server Docker images, created, maintained and supported by the MySQL team at Oracle.
For clarity I chose MySQL 8.0.15 for the check constraint hack and obviously 8.0.16 for the “real” check constraint implementation.


Deployment of MySQL 8.0.15 & MySQL 8.0.16:

$ docker run --name=mysql-8.0.15 -e MYSQL_ROOT_PASSWORD=unsafe -d mysql/mysql-server:8.0.15
 d4ce35e429e08bbf46a02729e6667458e2ed90ce94e7622f1342ecb6c0dfa009
$ docker run --name=mysql-8.0.16 -e MYSQL_ROOT_PASSWORD=unsafe -d mysql/mysql-server:8.0.16
 d3b22dff1492fe6cb488a7f747e4709459974e79ae00b60eb0aee20546b68a0f

Note:

Obviously using a password on the command line interface can be insecure.

Please read the best practices of deploying MySQL on Linux with Docker.

Example 1

Check constraints hack

$ docker exec -it mysql-8.0.15 mysql -uroot -p --prompt='mysql-8.0.15> '
Enter password: 

mysql-8.0.15> CREATE SCHEMA test;
Query OK, 1 row affected (0.03 sec)

mysql-8.0.15> USE test
Database changed

mysql-8.0.15> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 8.0.15    |
+-----------+


mysql-8.0.15> 
CREATE TABLE checker_hack ( 
    i tinyint, 
    i_must_be_between_7_and_12 BOOLEAN 
         GENERATED ALWAYS AS (IF(i BETWEEN 7 AND 12, true, NULL)) 
         VIRTUAL NOT NULL
);

As you can see, the trick is to use generated columns (available since MySQL 5.7) together with the flow control operator IF, which holds the check condition.
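
To see the mechanism in isolation, here is a tiny sketch of what the IF() expression evaluates to for an out-of-range value (nothing table specific):

SELECT IF(13 BETWEEN 7 AND 12, true, NULL);
-- 13 is outside the range, so IF() returns NULL,
-- which then violates the NOT NULL constraint of the generated column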

mysql-8.0.15> INSERT INTO checker_hack (i) VALUES (11);
Query OK, 1 row affected (0.03 sec)

mysql-8.0.15> INSERT INTO checker_hack (i) VALUES (12);
Query OK, 1 row affected (0.01 sec)


mysql-8.0.15> SELECT i FROM checker_hack;
+------+
| i    |
+------+
|   11 |
|   12 |
+------+
2 rows in set (0.00 sec)

As expected, values that respect the condition (between 7 and 12) can be inserted.

mysql-8.0.15> INSERT INTO checker_hack (i) VALUES (13);
ERROR 1048 (23000): Column 'i_must_be_between_7_and_12' cannot be null


mysql-8.0.15> SELECT i FROM checker_hack;
+------+
| i    |
+------+
|   11 |
|   12 |
+------+
2 rows in set (0.00 sec)

Outside the limits, an error is raised.
We have our “check constraint” like feature 🙂

Check constraint since MySQL 8.0.16

$ docker exec -it mysql-8.0.16 mysql -uroot -p --prompt='mysql-8.0.16> '
Enter password: 

mysql-8.0.16> CREATE SCHEMA test;
Query OK, 1 row affected (0.08 sec)

mysql-8.0.16> USE test
Database changed

mysql-8.0.16> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 8.0.16    |
+-----------+


mysql-8.0.16> 
CREATE TABLE checker ( 
    i tinyint, 
    CONSTRAINT i_must_be_between_7_and_12 CHECK (i BETWEEN 7 AND 12 )
);

Since MySQL 8.0.16, the CHECK keyword does the job.
I would recommend naming your constraint wisely.
The syntax is:

[CONSTRAINT [symbol]] CHECK (expr) [[NOT] ENFORCED]

From there, the following is rather obvious:

mysql-8.0.16> INSERT INTO checker (i) VALUES (11);
Query OK, 1 row affected (0.02 sec)

mysql-8.0.16> INSERT INTO checker (i) VALUES (12);
Query OK, 1 row affected (0.03 sec)


mysql-8.0.16> SELECT i FROM checker;
+------+
| i    |
+------+
|   11 |
|   12 |
+------+
2 rows in set (0.00 sec)

mysql-8.0.16> INSERT INTO checker (i) VALUES (13);
ERROR 3819 (HY000): Check constraint 'i_must_be_between_7_and_12' is violated.


mysql-8.0.16> SELECT i FROM checker;
+------+
| i    |
+------+
|   11 |
|   12 |
+------+
2 rows in set (0.00 sec)

Easy! 🙂
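
Before moving on to the next example, note that the same syntax should also work with ALTER TABLE, including the NOT ENFORCED clause. A short sketch reusing the checker table (the constraint name and expression below are my own examples, not from the original test):

mysql-8.0.16> ALTER TABLE checker ADD CONSTRAINT i_must_be_positive CHECK (i >= 0) NOT ENFORCED;

mysql-8.0.16> ALTER TABLE checker ALTER CHECK i_must_be_positive ENFORCED;

With NOT ENFORCED the constraint definition is stored but not evaluated; switching it to ENFORCED activates the check for subsequent inserts and updates.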

Example 2

You can check a combination of columns.

Check constraints hack

mysql-8.0.15> 
CREATE TABLE squares_hack (
     dx DOUBLE, 
     dy DOUBLE, 
     area_must_be_larger_than_10 BOOLEAN 
           GENERATED ALWAYS AS (IF(dx*dy>10.0, true, NULL)) NOT NULL
);

mysql-8.0.15> INSERT INTO squares_hack (dx,dy) VALUES (7,4);
Query OK, 1 row affected (0.02 sec)


mysql-8.0.15> INSERT INTO squares_hack (dx,dy) VALUES (2,4);
ERROR 1048 (23000): Column 'area_must_be_larger_than_10' cannot be null


mysql-8.0.15> SELECT dx, dy FROM squares_hack;
+------+------+
| dx   | dy   |
+------+------+
|    7 |    4 |
+------+------+
1 row in set (0.00 sec)

Check constraint since MySQL 8.0.16

mysql-8.0.16> 
CREATE TABLE squares (
     dx DOUBLE, 
     dy DOUBLE, 
     CONSTRAINT area_must_be_larger_than_10 CHECK ( dx * dy > 10.0 )
);


mysql-8.0.16> INSERT INTO squares (dx,dy) VALUES (7,4);
Query OK, 1 row affected (0.01 sec)


mysql-8.0.16> INSERT INTO squares (dx,dy) VALUES (2,4);
ERROR 3819 (HY000): Check constraint 'area_must_be_larger_than_10' is violated.


mysql-8.0.16> SELECT dx, dy FROM squares;
+------+------+
| dx   | dy   |
+------+------+
|    7 |    4 |
+------+------+
1 row in set (0.00 sec)

Still easy!

Example 3

You can also check text columns.

Check constraints hack

mysql-8.0.15> 
CREATE TABLE animal_hack (  
     name varchar(30) NOT NULL,  
     class varchar(100) DEFAULT NULL,  
     class_allow_Mammal_Reptile_Amphibian BOOLEAN 
           GENERATED ALWAYS AS (IF(class IN ("Mammal", "Reptile", "Amphibian"), true, NULL)) NOT NULL
);  

mysql-8.0.15> INSERT INTO animal_hack (name, class) VALUES ("Agalychnis callidryas",'Amphibian');  
Query OK, 1 row affected (0.02 sec)

mysql-8.0.15> INSERT INTO animal_hack (name, class) VALUES ("Orycteropus afer", 'Mammal');  
Query OK, 1 row affected (0.02 sec)

mysql-8.0.15> INSERT INTO animal_hack (name, class) VALUES ("Lacerta agilis", 'Reptile');  
Query OK, 1 row affected (0.02 sec)


mysql-8.0.15> SELECT name, class FROM animal_hack;
+-----------------------+-----------+
| name                  | class     |
+-----------------------+-----------+
| Agalychnis callidryas | Amphibian |
| Orycteropus afer      | Mammal    |
| Lacerta agilis        | Reptile   |
+-----------------------+-----------+
3 rows in set (0.00 sec)
mysql-8.0.15> INSERT INTO animal_hack (name, class) VALUES ("Palystes castaneus", 'Arachnid'); 
ERROR 1048 (23000): Column 'class_allow_Mammal_Reptile_Amphibian' cannot be null


mysql-8.0.15> SELECT name, class FROM animal_hack;
+-----------------------+-----------+
| name                  | class     |
+-----------------------+-----------+
| Agalychnis callidryas | Amphibian |
| Orycteropus afer      | Mammal    |
| Lacerta agilis        | Reptile   |
+-----------------------+-----------+
3 rows in set (0.00 sec)

Check constraint since MySQL 8.0.16

mysql-8.0.16> 
CREATE TABLE animal (  
     name varchar(30) NOT NULL,  
     class varchar(100) DEFAULT NULL,  
     CONSTRAINT CHECK (class IN ("Mammal", "Reptile", "Amphibian"))
);  

mysql-8.0.16> INSERT INTO animal (name, class) VALUES ("Agalychnis callidryas",'Amphibian');  
Query OK, 1 row affected (0.04 sec)

mysql-8.0.16> INSERT INTO animal (name, class) VALUES ("Orycteropus afer", 'Mammal');  
Query OK, 1 row affected (0.04 sec)

mysql-8.0.16> INSERT INTO animal (name, class) VALUES ("Lacerta agilis", 'Reptile');  
Query OK, 1 row affected (0.04 sec)


mysql-8.0.16> SELECT name, class FROM animal;
+-----------------------+-----------+
| name                  | class     |
+-----------------------+-----------+
| Agalychnis callidryas | Amphibian |
| Orycteropus afer      | Mammal    |
| Lacerta agilis        | Reptile   |
+-----------------------+-----------+
3 rows in set (0.00 sec)
mysql-8.0.16> INSERT INTO animal (name, class) VALUES ("Palystes castaneus", 'Arachnid');  
ERROR 3819 (HY000): Check constraint 'animal_chk_1' is violated.


mysql-8.0.16> SELECT name, class FROM animal;
+-----------------------+-----------+
| name                  | class     |
+-----------------------+-----------+
| Agalychnis callidryas | Amphibian |
| Orycteropus afer      | Mammal    |
| Lacerta agilis        | Reptile   |
+-----------------------+-----------+
3 rows in set (0.00 sec)

Frankly easy!

I did not mention that the hack works in 8.0.16 as well, though it is not needed anymore.
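
By the way, if you want to list the CHECK constraints defined in a schema, a quick sketch querying information_schema (available since 8.0.16; the schema name matches the examples above):

mysql-8.0.16> SELECT CONSTRAINT_NAME, CHECK_CLAUSE FROM information_schema.CHECK_CONSTRAINTS WHERE CONSTRAINT_SCHEMA = 'test';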

CHECK constraint is another useful feature implemented in MySQL (and not the last one, stay tuned!).
There are some other interesting things to know about this feature but also about the others available in MySQL 8.0.16.
Please have a look at the references below.

References

Thanks for using MySQL!

Follow me on Linkedin

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

My Speaker Deck account.

Thanks for using HeatWave & MySQL!


Constant-Folding Optimization in MySQL 8.0

May 7, 2019

TL;DR

In MySQL 8.0.16 the optimizer has improved again!
Comparisons of columns of numeric types with constant values are checked and folded or removed for invalid or out-of-range values.
The goal is to speed up query execution.


The name of this article (Constant-Folding Optimization), taken from the name of this kind of optimization, is quite cryptic. Nevertheless the principle is simple and, more importantly, there is nothing to do from the user’s perspective.

What is “Constant-Folding Optimization” ?

From the MySQL Documentation :
Comparisons between constants and column values in which the constant value is out of range or of the wrong type with respect to the column type are now handled once during query optimization rather than row-by-row during execution.

From the MySQL Server Team Blog :
The goal is to speed up execution at the cost of a little more analysis at optimize time.
Always true and false comparisons are detected and eliminated.
In other cases, the type of the constant is adjusted to match that of the field if they are not the same, avoiding type conversion at execution time.

Clear enough?

One example is worth a thousand words, so let’s have a deeper look comparing the old behavior in MySQL 8.0.15 to the new one beginning with MySQL 8.0.16.

I’m using the optimized MySQL Server Docker images, created, maintained and supported by the MySQL team at Oracle.

Deployment of MySQL 8.0.15 & MySQL 8.0.16:

$ docker run --name=mysql_8.0.15 -e MYSQL_ROOT_PASSWORD=unsafe -d mysql/mysql-server:8.0.15
$ docker run --name=mysql_8.0.16 -e MYSQL_ROOT_PASSWORD=unsafe -d mysql/mysql-server:8.0.16

Note:

Obviously using a password on the command line interface can be insecure.

Please read the best practices of deploying MySQL on Linux with Docker.


Copy the test table dump file on 8.0.15 & 8.0.16:

$ docker cp ./testtbl.sql mysql_8.0.15:/tmp/testtbl.sql
$ docker cp ./testtbl.sql mysql_8.0.16:/tmp/testtbl.sql


Load the test table into 8.0.15 instance:

$ docker exec -it mysql_8.0.15 mysql -u root -p --prompt='mysql_8.0.15> '

Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 31
Server version: 8.0.15 MySQL Community Server - GPL

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql_8.0.15> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 8.0.15    |
+-----------+

mysql_8.0.15> CREATE SCHEMA test;
Query OK, 1 row affected (0.04 sec)

mysql_8.0.15> USE test
Database changed

mysql_8.0.15> source /tmp/testtbl.sql
... <snip> ...


Load the test table into 8.0.16 instance:

$ docker exec -it mysql_8.0.16 mysql -u root -p --prompt='mysql_8.0.16> '

Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 12
Server version: 8.0.16 MySQL Community Server - GPL

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql_8.0.16> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 8.0.16    |
+-----------+

mysql_8.0.16> CREATE SCHEMA test;
Query OK, 1 row affected (0.04 sec)

mysql_8.0.16> USE test
Database changed

mysql_8.0.16> source /tmp/testtbl.sql
... <snip> ...



Let’s see what we have loaded:

mysql_8.0.16> SHOW CREATE TABLE testtbl\G
*************************** 1. row ***************************
       Table: testtbl
Create Table: CREATE TABLE `testtbl` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `val` varchar(36) NOT NULL,
  `val2` varchar(36) DEFAULT NULL,
  `val3` varchar(36) DEFAULT NULL,
  `val4` varchar(36) DEFAULT NULL,
  `num` int(10) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `idx2` (`val2`),
  KEY `idx3` (`val3`),
  KEY `idx4` (`val4`)
) ENGINE=InnoDB AUTO_INCREMENT=14220001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci


mysql_8.0.16> SELECT COUNT(*) FROM testtbl;
+----------+
| COUNT(*) |
+----------+
|  5000000 |
+----------+

What is important for us here is the non-indexed column – num :

num int(10) unsigned DEFAULT NULL

It contains only positive numbers:

mysql_8.0.16> SELECT min(num), max(num) FROM testtbl;
+----------+----------+
| min(num) | max(num) |
+----------+----------+
|  9130001 | 14130000 |
+----------+----------+

The old behavior

What happens if I look for a negative number, let's say -12345, in the column num ?
Remember that it contains only positive numbers and there is no index.

mysql_8.0.15> EXPLAIN SELECT * FROM testtbl WHERE num=-12345\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: testtbl
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 4820634
     filtered: 10.00
        Extra: Using where

According to the EXPLAIN plan, we have a full table scan. In a way that makes sense, because there is no index on num.
However, we know that there is no negative value, so there is certainly some room for improvement 🙂

Running the query:

mysql_8.0.15> SELECT * FROM testtbl WHERE num=-12345;
Empty set (2.77 sec)

Indeed, the full table scan is costly: 2.77 seconds just to return an empty set.

The current behavior – 8.0.16+

The Constant-Folding Optimization improves the execution of this type of query.

The EXPLAIN plan for MySQL 8.0.16 is completely different:

mysql_8.0.16> EXPLAIN SELECT * FROM testtbl WHERE num=-12345\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: NULL
   partitions: NULL
         type: NULL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: NULL
     filtered: NULL
        Extra: Impossible WHERE

Did you notice the:

Extra: Impossible WHERE

Looking for a negative value in a strictly positive column was handled at optimization time!
So there is obviously a positive impact on the query execution time:

mysql_8.0.16> SELECT * FROM testtbl WHERE num=-12345;
Empty set (0.00 sec)

Yay!
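
If you are curious about what the optimizer actually did, a quick way to peek under the hood is to run SHOW WARNINGS right after the EXPLAIN: the Note it returns contains the statement as rewritten by the optimizer, with the folded predicate visible.

EXPLAIN SELECT * FROM testtbl WHERE num=-12345\G
-- The Note returned by SHOW WARNINGS holds the optimizer's rewritten query.
SHOW WARNINGS\G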



In addition to the = operator, this optimization is currently possible for >, >=, <, <=, <>, != and <=> as well.
e.g.

mysql_8.0.16> EXPLAIN SELECT * FROM testtbl WHERE num > -42 AND num <= -1 \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: NULL
   partitions: NULL
         type: NULL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: NULL
     filtered: NULL
        Extra: Impossible WHERE


mysql_8.0.16> SELECT * FROM testtbl WHERE num > -42 AND num <=  -1;
Empty set (0.00 sec)
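
The folding also works in the always-true direction. Since num is UNSIGNED, a predicate such as num > -1 holds for every row, so the optimizer should simply drop it rather than evaluate it row by row; a quick way to check is to look at the Extra column of the EXPLAIN output:

-- "num > -1" is always true for an UNSIGNED column, so the predicate should
-- be removed at optimize time (still a full scan, but without "Using where").
EXPLAIN SELECT * FROM testtbl WHERE num > -1\G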

Indexed column

As a side note, if your column is indexed the optimizer already has the relevant information, so even before 8.0.16 there was no need for the Constant-Folding Optimization to get a fast query :).

mysql_8.0.15> CREATE INDEX idx_num ON testtbl(num);
Query OK, 0 rows affected (24.84 sec)
Records: 0  Duplicates: 0  Warnings: 0


mysql_8.0.15> EXPLAIN SELECT * FROM testtbl WHERE num = -12345\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: NULL
   partitions: NULL
         type: NULL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: NULL
     filtered: NULL
        Extra: no matching row in const table
1 row in set, 1 warning (0.00 sec)


mysql_8.0.15> SELECT * FROM testtbl WHERE num = -12345;
Empty set (0.00 sec)


Thanks for using MySQL!

Follow me on Linkedin

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

My Speaker Deck account.

Thanks for using HeatWave & MySQL!
