MySQL 8.0.18 New Features Summary

November 26, 2019

Presentation of some of the new features of MySQL 8.0.18, released on October 14, 2019.

Agenda

  • Hash Join
  • EXPLAIN ANALYZE
  • Only OpenSSL
  • Random Password
  • MySQL Shell Enhancements
  • MySQL Router Enhancements
  • InnoDB Cluster Enhancements
  • Group Replication Enhancements
  • Replication Enhancements
  • Enterprise New Features
  • Thanks to the Contributors


Download this presentation and others on my SlideShare account.


Thanks for using MySQL!

Follow me on twitter


MySQL 8.0.17 – New Features Summary

September 16, 2019

This presentation is a summary of the new features of MySQL 8.0.17, released on July 22, 2019.

Agenda

  • CLONE Plugin – Native automatic provisioning in the server
  • Multi-valued indexes
  • JSON functions using multi-valued indexes
  • JSON schema validation
  • New binary collation for utf8mb4
  • MySQL Shell Enhancements
  • MySQL Router Enhancements
  • InnoDB Cluster Enhancements
  • Group Replication Enhancements
  • Replication Enhancements
  • Thanks to the Contributors


Download this presentation and others on my SlideShare account.


Thanks for using MySQL!

Follow me on twitter


MySQL InnoDB Cluster – Easy Recovering and provisioning

September 10, 2019

As the administrator of a cluster, among other tasks, you should be able to restore failed nodes and grow (or shrink) your cluster by adding (or removing) new nodes.


Up to MySQL 8.0.16, to perform these tasks you could use a backup and restore tool such as mysqldump or MySQL Enterprise Backup (see my two previous articles).

Starting with MySQL 8.0.17, the easiest and most convenient method is to use the CLONE feature.

Note:
Regardless of the recovery and provisioning method, it is important to have a proven backup/restore procedure.

I recommend reading the excellent blog post from my colleague Jesper – MySQL Backup Best Practices.


Context

Let’s make it as simple as possible 🙂
I’m using MySQL 8.0.17.

I’m running a MySQL InnoDB Cluster that is already properly set up.
So my main assumption is that you already know what MySQL Group Replication & MySQL InnoDB Cluster are.
Additionally, you can read this tutorial and this article from my colleague lefred, or this one on the Windows platform from my colleague Ivan.


Scenario 1 – Node Recovering

  • A 3-node MySQL InnoDB Cluster – M1 / M2 / M3, in single primary mode
  • MySQL Router is configured to enable R/W connections on port 3306 and RO connections on port 3307
  • M1 is currently the primary (that is, in Read/Write mode)
  • M2 & M3 are currently the secondaries (that is, in Read Only mode)
  • M1 crashed! :'(
  • M2 & M3 are now the (new temporary) cluster
  • Then…

The goal then is to rebuild M1 and put it back to the cluster.


So, like I said before, we have a 3-node MySQL 8.0.17 InnoDB Cluster up and running:
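For reference, here is a minimal sketch of how to check that from MySQL Shell (the clusterAdmin account and the M1/M2/M3 host names are assumptions, adapt them to your setup):

    JS > \c clusterAdmin@M1:3306        // connect to any reachable member
    JS > var cluster = dba.getCluster()
    JS > cluster.status()               // shows the topology and the state of each node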

Then node M1 crashed… (status is “MISSING” & we have a relevant shellConnectError message):

M1 was the primary.
The cluster initiated an automatic database failover to elect a new primary… blablabla.
Anyway, you already know the story 🙂


After a while M1 is finally fixed and ready to be part of the cluster again.


Node Recovery: Auto distributed recovery

Well, I have great news for you: most of the time your only task will be to start the fixed MySQL instance.

Before starting the node (M1), status is Missing:

Then MySQL InnoDB Cluster will automatically choose the relevant distributed recovery method.
For a certain period of time, the status is Recovering.
Details are available in the recoveryStatusText field:

The node eventually becomes online.
Status: Online.


So you get it now: the node recovery process is handled by MySQL InnoDB Cluster using automated distributed recovery.

That means, if you’re lucky, your only task (in addition to fixing the issue) is to (re)start the MySQL instance.

If you’re not lucky, you’ll probably need to remove and add the node again.


Node Recovery: Remove and add the node

The strategy here is to remove the node from the cluster metadata and then add it back – as a new node – to the cluster.

For example for node M3, remove the node with removeInstance():
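A minimal sketch of the call from MySQL Shell (the account and host name are assumptions):

    JS > var cluster = dba.getCluster()
    JS > cluster.removeInstance("clusterAdmin@M3:3306", {force: true})   // force is needed if M3 cannot be reached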

force is a boolean, set to false by default.
It indicates whether the instance must be removed from the metadata even when it cannot be reached.
Useful if the instance is no longer reachable (RTFM).

Then add the node with addInstance():
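Something like the following (again, the account and host name are assumptions):

    JS > cluster.addInstance("clusterAdmin@M3:3306")   // with 8.0.17+ you will be offered Clone or incremental recovery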


An alternative, which I prefer, is to use rescan() and then addInstance().

Let’s see how to use it, assuming M3 crashed.
The current status is:

Let’s (re)scan the cluster and remove the missing node (M3):
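One possible way to do it, assuming the MISSING instance is M3:3306 (rescan() can also prompt you interactively):

    JS > cluster.rescan({removeInstances: ["M3:3306"]})   // removes the obsolete instance from the metadata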

The new cluster status is:

Now it’s time to bring back M3, using the Clone distributed process:
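For example, explicitly requesting the Clone recovery method (recoveryMethod could also be left to its default, "auto"; names are assumptions):

    JS > cluster.addInstance("clusterAdmin@M3:3306", {recoveryMethod: "clone"})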

Here we go!
Our 3-node MySQL InnoDB Cluster can tolerate up to 1 failure again:


Scenario 2 – Node Provisioning

  • A 3-node MySQL InnoDB Cluster – M1 / M2 / M3 in single primary mode
  • MySQL Router is configured to enable R/W connections on port 3306 and RO connections on port 3307
  • M1 is currently the primary (that is, in Read/Write mode)
  • M2 & M3 are currently the secondaries (that is, in Read Only mode)

The goal then is to add 2 new nodes: M4 & M5


So we have the 3-node MySQL 8.0.17 InnoDB Cluster that we used in the first part of this article. The cluster is up and running.

Actually adding new nodes is very close to what we have done previously.

The process is:

  • Deploy the new MySQL instance preferably already configured for Group Replication
  • Use the automatic distributed recovery process


Add node M4:
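For instance (the host name and account are assumptions):

    JS > cluster.addInstance("clusterAdmin@M4:3306", {recoveryMethod: "clone"})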

As you can see we use the Clone process.
The new cluster status is:

Same process for node M5:

Our final status is then:


References


Video:


Misc
Node 1 – Group Replication configuration variables


Some optional but useful parameters from my node1’s my.cnf:
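As a rough illustration, such a my.cnf typically contains Group Replication related settings like the ones below (all values are assumptions, adapt them to your own topology):

    [mysqld]
    server_id                              = 1
    gtid_mode                              = ON
    enforce_gtid_consistency               = ON
    binlog_checksum                        = NONE          # required by Group Replication in 8.0.17
    report_host                            = M1
    loose-group_replication_local_address  = "M1:33061"
    loose-group_replication_group_seeds    = "M1:33061,M2:33061,M3:33061"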


Thanks for using MySQL!

Follow me on twitter


MySQL InnoDB Cluster – Recovering and provisioning with MySQL Enterprise Backup

July 11, 2019

Like I stated in my previous article – MySQL InnoDB Cluster – Recovering and provisioning with mysqldump:
“As the administrator of a cluster, among other tasks, you should be able to restore failed nodes and grow (or shrink) your cluster by adding (or removing) new nodes”.
Well, I still agree with myself 🙂

MySQL customers using a Commercial Edition have access to MySQL Enterprise Backup (MEB), which provides enterprise-grade physical backup and recovery for MySQL.

MEB delivers hot, online, non-blocking backups on multiple platforms including Linux, Windows, Mac & Solaris.
More details here.

Note:
If you want to know how to recover a node and/or how to provision nodes with mysqldump please read this blog post.


Context

Let’s make it as simple as possible 🙂
I’m using MySQL Enterprise 8.0.16, available for MySQL customers on My Oracle Support or on Oracle Software Delivery Cloud.

I have an InnoDB Cluster setup, up and running.
So my main assumption is that you already know what MySQL Group Replication & MySQL InnoDB Cluster are.
Additionally, you can read this tutorial and this article from my colleague lefred, or this one on the Windows platform from my colleague Ivan.

All nodes must have the right MySQL Enterprise Backup privileges.
Details here.

All nodes must have the same values for log-bin & relay-log respectively.
For example: log-bin=binlog & relay-log=relaylog (on all nodes).

Note:
Depending on how you configured your MySQL InnoDB Cluster, some steps could be slightly different.


Scenario 1 – Node Recovering

  • A 3-node MySQL InnoDB Cluster – M1 / M2 / M3, in single primary mode
  • MySQL Router is configured to enable R/W connections on port 3306 and RO connections on port 3307
  • M1 is currently the primary (that is, in Read/Write mode)
  • M2 & M3 are currently the secondaries (that is, in Read Only mode)
  • M1 failed! Assuming it is irreconcilably corrupted :'(
  • M2 & M3 are now the (new temporary) cluster

The goal then is to rebuild M1 and put it back to the cluster.


So, like I said before, we have a 3-node MySQL Enterprise 8.0.16 InnoDB Cluster up and running:

Then node M1 crashed… (status is “MISSING”):

M1 was the primary.
The cluster initiated an automatic database failover to elect a new primary… blablabla
Well you already know the story 🙂

After a while M1 is fixed and ready to be part of the cluster again.
To minimize the recovery time, instead of using the last backup, we prefer to take a fresh one.

Speaking of backup, I recommend reading the excellent blog post from my colleague Jesper – MySQL Backup Best Practices.

Let’s take a fresh backup on a secondary node (we could also use the primary).

MySQL Enterprise Backup is a very versatile tool and has many different configuration options.
For clarity I’ll use a simplistic command. Please read the MEB documentation for a more “production style” command.
The backup command will roughly look like:
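As a rough illustration only (the backup user and directory are assumptions; run it directly on the chosen secondary node and see the MEB manual for production options):

    $ mysqlbackup --user=mebAdmin --password \
                  --backup-dir=/data/backups/full \
                  backup-and-apply-log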

Please note that it is highly recommended, in addition to the my.cnf, to include in your backup process a copy of the auto.cnf and mysqld-auto.cnf configuration files of all nodes.

If you “lose” your auto.cnf file, don’t worry: the server will generate a new one for you.
However, the recovery process will be slightly different… (more on that below).

Now it’s time to restore this backup on node M1.

Because this server is part of a MySQL InnoDB Cluster, obviously there are some extra steps compared to a standalone server restoration.


Node Recovering

The node recovering process is simple:

  • Delete all contents of the MySQL Server data directory
  • Restore the backup
  • Restore the auto.cnf file
  • Restore the mysqld-auto.cnf file (if there is one)
  • Start the MySQL instance

This gives us on M1 something like this (simplified version, please adapt to your context):
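A simplified sketch of those steps, with assumed paths (/var/lib/mysql for the data directory and /data/backups/full for the prepared backup):

    $ systemctl stop mysqld                              # make sure the instance is down
    $ rm -rf /var/lib/mysql/*                            # delete all contents of the data directory
    $ mysqlbackup --backup-dir=/data/backups/full \
                  --datadir=/var/lib/mysql copy-back     # restore the prepared backup
    $ cp /safe/place/auto.cnf /var/lib/mysql/            # restore auto.cnf (and mysqld-auto.cnf if there is one)
    $ chown -R mysql:mysql /var/lib/mysql
    $ systemctl start mysqld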

Then you can connect to the cluster and… see that node M1 is recovering (“status: RECOVERING”) or, if you’re not fast enough, that the node is already part of the cluster again (“status: ONLINE”):


Lost the auto.cnf file

As promised, the case when the auto.cnf configuration file is not restored.
In fact, in this case the cluster would see this node as a new node (because the server will have a new UUID).
So the process for putting it back is different.

Also note that if you lose the mysqld-auto.cnf file you’ll probably need to configure (again) the server to be Group Replication aware.

To begin, you must stop the Group Replication plugin on the node that needs to be restored (M1):
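That is, connected to the MySQL server on M1:

    mysql> STOP GROUP_REPLICATION;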

Then, if necessary, you can check the configuration and/or configure the node:

You need to remove the old node from the InnoDB Cluster metadata:

Add the “new” node:
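A sketch covering this step and the previous one, run from MySQL Shell connected to the current primary (the account and host names are assumptions; force is used because the old member is gone):

    JS > var cluster = dba.getCluster()
    JS > cluster.removeInstance("clusterAdmin@M1:3306", {force: true})   // drop the stale metadata entry
    JS > cluster.addInstance("clusterAdmin@M1:3306")                     // add it back as a "new" node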

Check – and after the recovery stage, the “new” node is online:

Et voilà!


A simple alternative to deal with this “unpleasantness”, if you don’t need to configure the node, is basically to remove the node and add it again.
Below is an example with M3:



Scenario 2 – Node Provisioning

  • A 3-node MySQL InnoDB Cluster – M1 / M2 / M3 in single primary mode
  • MySQL Router is configured to enable R/W connections on port 3306 and RO connections on port 3307
  • M2 is currently the primary (that is, in Read/Write mode)
  • M1 & M3 are currently the secondaries (that is, in Read Only mode)

The goal then is to add 2 new nodes: M4 & M5


So we have the 3-node MySQL 8.0.16 InnoDB Cluster that we used in the first part of this article, and it is up and running.

Actually adding new nodes is very close to what we have done previously.

The process is:

  • Deploy the new MySQL instance preferably already configured for Group Replication
  • Restore the data in the way that we have seen previously


Checking the configuration and configuring the instance can be done with the dba.checkInstanceConfiguration() and dba.configureInstance() functions respectively (it can also be useful to use checkInstanceState(); see this article).
e.g. on node M4:
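For example (the URI is an assumption):

    JS > dba.checkInstanceConfiguration("clusterAdmin@M4:3306")
    JS > dba.configureInstance("clusterAdmin@M4:3306")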


Restore the backup on M4, the new node:


And finally, add the new node (M4):


Same process for the last node, M5.

You end up with a 5-node MySQL InnoDB Cluster \o/:


This is one way to do node recovery and provisioning using MySQL Enterprise Backup.

You may also take a look at this article: InnoDB Cluster: Recovering an instance with MySQL Enterprise Backup – from my colleague Keith.

If you are not yet a MySQL customer, and therefore not able to enjoy our advanced features/tools and technical support, then mysqldump could probably fit here. Please read: MySQL InnoDB Cluster – Recovering and provisioning with mysqldump.

Note that some new features are coming in this area… 🙂
Stay tuned!


References


Misc
Node 3 – Group Replication configuration


Thanks for using MySQL!

Follow me on twitter


MySQL InnoDB Cluster – Recovering and provisioning with mysqldump

July 9, 2019

As the administrator of a cluster, among other tasks, you should be able to restore failed nodes and grow (or shrink) your cluster by adding (or removing) new nodes.

In MySQL, as a backup tool (and if your amount of data is not too big), you can use mysqldump, a client utility that performs logical backups.
The results are SQL statements that reproduce the original schema objects and data.

For substantial amounts of data however, a physical backup solution such as MySQL Enterprise Backup is faster, particularly for the restore operation.
Hey! guess what? You can read: MySQL InnoDB Cluster – Recovering and provisioning with MySQL Enterprise Backup


Context

Let’s make it as simple as possible 🙂
I’m using MySQL 8.0.16.
I have an InnoDB Cluster setup – up and running.
So my main assumption is that you already know what MySQL Group Replication & MySQL InnoDB Cluster are.
Additionally, you can read this tutorial and this article from my colleague lefred, or this one on the Windows platform from my colleague Ivan.

Note:

Depending on how you configured your MySQL InnoDB Cluster, some steps could be slightly different.


Scenario 1 – Node Recovering

  • A 3-node MySQL InnoDB Cluster – M1 / M2 / M3, in single primary mode
  • MySQL Router is configured to enable R/W connections on port 3306 and RO connections on port 3307
  • M1 is currently the primary (so in Read/Write mode)
  • M2 & M3 are currently the secondaries (i.e. Read Only mode)
  • M1 failed! Some tables are irreconcilably corrupted 🙁
  • M2 & M3 are now the (new temporary) cluster

The goal then is to rebuild M1 and put it back to the cluster.


So, like I stated, we have a 3-node MySQL 8.0.16 InnoDB Cluster up and running:

Then M1 failed (status is “MISSING”):

M1 was the primary.
The cluster initiated an automatic database failover to elect a new primary… blablabla
Anyway you already know the story 🙂

After a while, M1 is finally fixed and ready to be part of the cluster again.
To minimize the recovery time, instead of using the last dump, we prefer to take a fresh one.

Speaking of backup, I recommend reading the excellent blog post from my colleague Jesper – MySQL Backup Best Practices.

Let’s take a fresh dump on a secondary node using MySQL Router, on port 3307 with my custom configuration (we could also use the primary).
The dump command will roughly look like:
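As a rough illustration (the user, target file and exact option list are assumptions; adapt them to what you need to dump):

    $ mysqldump --user=dumpUser --password --host=127.0.0.1 --port=3307 \
                --all-databases --routines --events --triggers \
                --single-transaction > /data/backups/dump.sql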

Minimum privileges for the MySQL user with which mysqldump connects to the server

Actually, these minimum privileges depend on which objects you want to dump and what the dump will be used for.

However, the following privileges should be fine for most classical usages:

GRANT SELECT, SHOW VIEW, EVENT, TRIGGER, LOCK TABLES, CREATE, ALTER, RELOAD, REPLICATION CLIENT, REPLICATION_SLAVE_ADMIN ON *.* TO <dumpUser>;

Please note that it is highly recommended, in addition to the my.cnf, to include in your backup process a copy of the auto.cnf and mysqld-auto.cnf configuration files of all nodes.

If you “lose” your auto.cnf file, don’t worry: the server will generate a new one for you.
However, the recovery process will be slightly different… (more on that below).

Now it’s time to restore this dump on node M1.

Because this server is part of an InnoDB Cluster, clearly there are some extra steps compared to a standalone server restoration.

Restore the data

First, restore the data on M1:

  • It’s a logical restoration, so the server to restore must be up 😀
  • Group Replication plugin must be stopped
    • STOP GROUP_REPLICATION;
  • Disable logging to the binary log
    • SET SQL_LOG_BIN=0;
  • Delete binary log files
    • RESET MASTER;
  • Clear the master info and relay log info repositories and delete all the relay log files
    • RESET SLAVE;
  • Enable updates
    • SET GLOBAL super_read_only=0;
  • Load the dump
    • source /data/backups/dump.sql

This gives us:
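Put together, the session on M1 looks roughly like this (the dump path is the one used above):

    mysql> STOP GROUP_REPLICATION;
    mysql> SET SQL_LOG_BIN=0;
    mysql> RESET MASTER;
    mysql> RESET SLAVE;
    mysql> SET GLOBAL super_read_only=0;
    mysql> source /data/backups/dump.sql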


Put the node back to the cluster

Second, put the node back to the cluster.
Connect to MySQL Router on the primary (port 3306 in my case):
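One plausible way to do it with MySQL Shell (account and host names are assumptions; depending on your configuration, simply running START GROUP_REPLICATION on M1 may be enough):

    JS > \c clusterAdmin@127.0.0.1:3306        // R/W port of MySQL Router, i.e. the current primary
    JS > var cluster = dba.getCluster()
    JS > cluster.rejoinInstance("clusterAdmin@M1:3306")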

Now you should check the new cluster status.

Note:

The restored node will be in the “RECOVERING” state before becoming “ONLINE”.


Lost the auto.cnf file

As promised, the case when the auto.cnf configuration file is not restored.
In fact, in that case the cluster would see this node as a new node (because the server will have a new UUID).
So the process for putting it back is different.

Also note that if you lose the mysqld-auto.cnf file you’ll probably need to configure (again) the server to be Group Replication aware.

So basically, the process is to do some cleaning and then add the old node back as if it were a new node.


Assuming Group Replication plugin is stopped on M1:
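A sketch of that cleaning, treating M1 as a brand-new member (account and host names are assumptions):

    JS > var cluster = dba.getCluster()
    JS > cluster.removeInstance("clusterAdmin@M1:3306", {force: true})   // remove the old UUID from the metadata
    JS > cluster.addInstance("clusterAdmin@M1:3306")                     // add it back as if it were a new node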

Et voilà!



Scenario 2 – Node Provisioning

  • A 3-node MySQL InnoDB Cluster – M1 / M2 / M3 in single primary mode
  • MySQL Router is configured to enable R/W connections on port 3306 and RO connections on port 3307
  • M2 is currently the primary in Read/Write mode
  • M1 & M3 are currently the secondaries in Read Only mode

The goal then is to add 2 new nodes: M4 & M5


So we have the 3-node MySQL 8.0.16 InnoDB Cluster that we used in the first part of this article, and it is up and running.

Actually adding new nodes is very close to what we have done previously.

The process is:

  • Deploy the new MySQL instance preferably already configured for Group Replication
  • Restore the data in the way that we have seen previously


Checking the configuration and configuring the instance can be done with the dba.checkInstanceConfiguration() and dba.configureInstance() functions respectively (it can also be useful to use checkInstanceState(); see this article).
e.g. on node M4:


The first part of the restore process is the same as the one we have seen:


For the second part, we will now add the new node (M4):

Note:

If necessary, just before the addInstance() you can do the checkInstanceConfiguration() and configureInstance().


Same process for the last node, M5.

And finally you get a 5-node MySQL InnoDB Cluster \o/:


This is one way to do node recovery and provisioning using mysqldump, when your amount of data is not big.

When a logical backup is not efficient, it is time to use a hot, online and non-blocking physical backup tool like MySQL Enterprise Backup.
Actually, it’s even easier!
Please read: MySQL InnoDB Cluster – Recovering and provisioning with MySQL Enterprise Backup.

A final word to say that some new features are coming in this area… 🙂
Stay tuned!


References


Misc
Node 1 – Group Replication configuration


Thanks for using MySQL!

Follow me on twitter


Check the MySQL server startup configuration

June 11, 2019

Since 8.0.16, MySQL Server supports a validate-config option that enables the startup configuration to be checked for problems without running the server in normal operational mode:

  • If no errors are found, the server terminates with an exit code of 0.
  • If an error is found, the server displays a diagnostic message and terminates with an exit code of 1.

validate-config can be used any time, but is particularly useful after an upgrade, to check whether any options previously used with the older server are considered by the upgraded server to be deprecated or obsolete.


First, let’s get some information about my MySQL version and configuration.
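A quick way to check, for example:

    $ mysqld --version
    $ mysqld --verbose --help | grep -A 4 "Default options"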

I’m using MySQL 8.0.16.
The default options are read from the following files, in the given order:

  • /etc/my.cnf
  • /etc/mysql/my.cnf
  • /usr/local/mysql/etc/my.cnf
  • ~/.my.cnf


Now let’s check my MySQL server startup configuration:
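For example, something like this (run with the same option files the server would normally use):

    $ mysqld --validate-config
    $ echo $?        # 0 = configuration OK, 1 = at least one error was found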