MySQL Recovery

February 23, 2021

If you are dealing with data, and you most probably are if you are reading this, one of your biggest fears is not being able to retrieve them. In a world where data surround us, it is critical to be able to retrieve them quickly and with the best possible consistency.

Thus, it is always a good idea to have high availability settings in place to avoid losing your data.

However, there are times when we wish, or need, to save the database and our data ourselves and be a DBA hero. It is not an easy task; it would be smoother to simply perform a backup-restore, but sadly that is not always possible.

So, this is what we will be facing in this article: we are going to see what to do when there is data corruption in MySQL, and the steps we need to perform to try to save our database.


Before any task or operation, it is always a good idea to copy the already existing data at an OS level:

First, let’s stop MySQL:
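The exact command depends on your setup; assuming a systemd-based server with the service named mysqld, something like:

    sudo systemctl stop mysqld    # the service may be named "mysql" on some distributions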

And then:
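For example, assuming the default data directory /var/lib/mysql, a recursive copy that preserves ownership and permissions would do:

    sudo cp -rp /var/lib/mysql /var/lib/mysql_copy    # destination name is up to you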


Before attempting to bring back up MySQL, it is a good idea to execute an innochecksum to see the status of your database or of a specific table.

A few words on innochecksum, in case you are not familiar with it:

innochecksum prints checksums for InnoDB files. This tool reads an InnoDB tablespace file, calculates the checksum for each page, compares the calculated checksum to the stored checksum, and reports mismatches, which indicate damaged pages. It was originally developed to speed up verifying the integrity of tablespace files after power outages but can also be used after file copies. Because checksum mismatches cause InnoDB to deliberately shut down a running server, it may be preferable to use this tool rather than waiting for an in-production server to encounter the damaged pages.

So, in order to check a table, you may use a simple command like:
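The database, table and log paths below are placeholders; with a table corrupted_table in the database mydb under the default data directory, the check could look like:

    innochecksum --log=/tmp/log.txt /var/lib/mysql/mydb/corrupted_table.ibd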

The status will be logged to the file /tmp/log.txt; if the file doesn’t exist, innochecksum will create it for you. The output will look something like the following:

page::41354; log sequence number:first = ; second =
Page:: uncorrupted

Innochecksum offers a variety of options; the most useful is the ability to choose a starting or an ending page, to save time:
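For instance, to limit the check to a page range (the page numbers here are just an example):

    innochecksum --start-page=1 --end-page=5000 /var/lib/mysql/mydb/corrupted_table.ibd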

Please be aware that innochecksum can only be executed while the MySQL server is shut down. Otherwise, you are going to receive a lock error:


Once you have checked the tables and discovered the problematic table or tables, it is time to attempt to start up the server and see what we can do. Very important: it is imperative to add the following to my.cnf:
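Something along these lines (the exact verbosity and recovery values are discussed right below):

    [mysqld]
    log_error_verbosity   = 3
    innodb_force_recovery = 1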

log_error_verbosity: The log_error_verbosity system variable specifies the verbosity for handling events intended for the error log. I always prefer to have this set, as it is very helpful for identifying various issues and for following the recovery progress in the error log file.

innodb_force_recovery: The crash recovery mode, typically only changed in serious troubleshooting situations. Possible values range from 0 to 6. If the server comes up with a value of 3 or below, the data can most probably be recovered. Always start with 1 and increase the value one step at a time if the server does not come up with your previous choice.


Now that we have these settings in our configuration, it is time to start up the server and hope for the best:

If we are lucky and the server manages to come up with a value between 1 and 3, it is time to take a full dump of the corrupted table(s). If you are using version 8.0.22 or above, you may take the dump via MySQL Shell:
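For example, with the table dump utility; schema, table and output directory names are placeholders:

    // from MySQL Shell (JS mode), connected to the instance:
    util.dumpTables("mydb", ["corrupted_table"], "/backups/corrupted_table")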

Otherwise, you may use the old way:
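With the classic client tools, a per-table dump might look like this (same placeholder names):

    mysqldump --single-transaction mydb corrupted_table > /backups/corrupted_table.sql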

Once the backup is completed successfully, it is time to drop the corrupted table:
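Using the same placeholder names as above:

    DROP TABLE mydb.corrupted_table;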

And then restore it from the dump we have taken:
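If the dump was taken with mysqldump, the restore is simply (placeholder names again):

    mysql mydb < /backups/corrupted_table.sql

A dump taken with util.dumpTables() can be restored with util.loadDump() instead.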


Once the restoration is done, we need to restart the MySQL server, but this time without the innodb_force_recovery parameter. Simply edit your configuration file and comment that line out:

Hopefully, this will work and our database will be up and running with all its data intact! Unfortunately, this is not always the case, and sometimes the database won’t come up at all for us to retrieve our data. In situations like these, we need to restore from an existing backup or from a replica, if one is available.

To avoid facing serious trouble with your data, consider having in place one (or better yet all) of the following:

  • Full backups (on a daily, weekly or monthly basis)
  • Incremental backups
  • Binlog backups
  • In sync replicas
  • InnoDB Cluster

Additionally, in cases of a system crash, it is always helpful to set certain parameters to maintain data consistency. Let’s view the most basic ones:

  • innodb_flush_log_at_trx_commit = 1 :

Controls the balance between strict ACID compliance for commit operations and higher performance that is possible when commit-related I/O operations are rearranged and done in batches.
You can achieve better performance by changing the default value but then you can lose transactions in a crash.
The default setting of 1 is required for full ACID compliance. Logs are written and flushed to disk at each transaction commit.

  • sync_binlog = 1 :

Enables synchronization of the binary log to disk before transactions are committed. This is the safest setting but can have a negative impact on performance
due to the increased number of disk writes. In the event of a power failure or operating system crash, transactions that are missing from the binary log are only in a prepared state.
This permits the automatic recovery routine to roll back the transactions, which guarantees that no transaction is lost from the binary log.

  • innodb_doublewrite = 1 :

The doublewrite buffer is a storage area where InnoDB writes pages flushed from the buffer pool before writing the pages to their proper positions in the InnoDB data files.
If there is an operating system, storage subsystem, or unexpected mysqld process exit in the middle of a page write, InnoDB can find a good copy of the page from the doublewrite buffer
during crash recovery.

  • relay_log_recovery = 1 :

If enabled, this variable enables automatic relay log recovery immediately following server startup. The recovery process creates a new relay log file, initializes the SQL thread position
to this new relay log, and initializes the I/O thread to the SQL thread position. Reading of the relay log from the source then continues.

This global variable is read-only at runtime. Its value can be set with the --relay-log-recovery option at replica server startup, which should be used following an unexpected halt of a replica to ensure that no possibly corrupted relay logs are processed, and must be used in order to guarantee a crash-safe replica.

  • relay_log_purge = 1 :

Disabling purging of relay logs when enabling the --relay-log-recovery option risks data consistency and is therefore not crash-safe.

  • innodb_directories :

Another parameter that can assist with corrupted tablespaces and crash recovery is innodb_directories. Tablespace discovery during crash recovery relies on the innodb_directories setting to identify tablespaces referenced in the redo logs. innodb_directories can be defined at startup, and it can also be used when moving your tablespaces while the server is offline.

On my.cnf:
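A minimal sketch, with the directory paths being placeholders for your own tablespace locations:

    [mysqld]
    innodb_directories = "/data/mysql;/data2/mysql"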

At this point, it is wise to mention that the recovery time also depends on the innodb_log_file_size.

Generally, the combined size of the log files should be large enough that the server can smooth out peaks and troughs in workload activity, which often means that there is enough redo log space to handle more than an hour of write activity. The larger the value, the less checkpoint flush activity is required in the buffer pool, saving disk I/O.

Larger log files also make crash recovery slower.



Conclusion

MySQL does offer stability; however, it is really hard to completely avoid corruption or to never face a server crash at some point. If safety measures are in place, take a deep breath and dive in to save your data.

When you crash and burn, you have to pick yourself up and go on and hope to make up for it.

Burt Reynolds



References




Replicate from MySQL 5.7 to MySQL Database Service

January 26, 2021
Cloud from above by Olivier DASINI

MySQL Replication is a very common topology, widely used in various architecture.
People use it, among others, for High Availability, Read Scalability or Geographic Redundancy.

Another use case is to use MySQL Replication to seamlessly integrate a newer version of the server in your architecture.
Let’s say you are running MySQL 5.7; you can then easily set up an 8.0 instance as a replica of your 5.7.

Extending this idea, it is also possible to replicate your MySQL 5.7 (or 8.0 obviously) to a MySQL Database Service (MDS) instance, the true MySQL PaaS on Oracle Cloud Infrastructure (OCI).

This is that story you are about to read now 🙂



Architecture

To make things clear, this is what the (final) architecture looks like:

  • An OCI Compute instance where MySQL 5.7.33 & MySQL Shell 8.0.23 are installed
    • Public IP : 123.45.678.90
    • Private IP : 10.0.0.9
  • A MDS instance
    • DB name : MDS2_FRA
    • Private IP : 10.0.1.23
  • A replication Channel
    • Channel name : mysqlchannel_MDS2_FRA


MySQL Replication topology from MySQL Enterprise Monitor



Plan

How to proceed?

Let’s split the problem in sub-parts:

  1. Create a compute & a MDS instance
  2. Create a dedicated replication user on the source
  3. Dump the MySQL instance running on the OCI compute, to be migrated to MDS
  4. Load the dump in MDS
  5. Create a replication channel on MDS (from OCI to MDS)



Requirement

Before starting, please verify that your source follows the requirements.

Current limitations of MySQL Database Service Inbound Replication

  • Only row-based replication is supported
  • Only GTID-based replication is supported
  • Multi-source replication is not supported
  • Replication filters are not supported
  • Changes to the mysql schema are not replicated and cause replication to stop
  • Source and replica must run with the same lower_case_table_names setting to avoid data transfer problems arising from the letter case of database or table names in a cross-platform topology
  • The inbound applier runs under the privileges of the DB System’s admin user

The full & up to date list is available here.



Create a compute & a MDS instance

The creation of these 2 objects is out of the scope of this article.
You’ll find the relevant resources at the links below:

You can start using OCI for free : Oracle Cloud Infrastructure Free Tier

Create a MySQL DB System is very easy : Getting Started with MySQL Database Service

You also need to have a running compute instance attached to a public subnet on the same VCN as the MySQL DB System : Creating a Compute Instance

Finally, MySQL 5.7 and MySQL Shell 8.0 are installed on this compute instance : Installing MySQL Shell on Linux (obviously it also works if you are on Windows).



Create a dedicated replication user on the source

Assuming the prerequisites above are fine and the security lists are updated, we can now proceed.

Connect by SSH, to the compute instance where MySQL 5.7 is installed:

Note:
If you are using Ubuntu (the case here) the user is ubuntu.
For other GNU/Linux systems, the user should be opc.


Connect to the MySQL 5.7 instance using MySQL Shell 8.0.23:


Then create the replication user with the relevant privileges:
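On a 5.7 source this could look like the following sketch; the host pattern (restricting access to the MDS private subnet) is an assumption, while the user name and password are the ones used later when creating the channel:

    CREATE USER 'rplAdmin'@'10.0.1.%' IDENTIFIED BY 'Repl1234c@' REQUIRE SSL;
    GRANT REPLICATION SLAVE ON *.* TO 'rplAdmin'@'10.0.1.%';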


We can check the user:



Dump the MySQL instance running on OCI, to be migrated to MDS

Now we want to put the current MySQL 5.7 data on the MDS instance.

In this context the best tools to do this job are MySQL Shell utilities.
We will use the instance dump utility.

The backup output can be stored on a Object Storage Bucket – I will write an article on that.
But today, let’s keep it simple. We’ll store the backup locally on the compute instance.

This strategy makes sense if you have enough free disk space on the compute instance. Also, restoring the data on MDS will be faster with this strategy than with the dump stored on object storage.


A good practice when attempting to import data from MySQL 5.7 into MDS is to use MySQL Shell’s upgrade checker utility util.checkForServerUpgrade() to check the schemas and, if needed, proactively fix any compatibility issues identified.
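The checker can be run from MySQL Shell against the 5.7 instance; a minimal invocation (credentials are placeholders) is:

    mysqlsh root@localhost:3306 -e "util.checkForServerUpgrade()"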


We’re good to go, so let’s create the backup directory:


Then dump our MySQL 5.7 instance using util.dumpInstance() :
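The dump, stored locally on the compute instance, could be taken like this sketch; the output directory is an assumption, and the compatibility list should match what the upgrade checker reported:

    util.dumpInstance("/home/ubuntu/dump", {
      ocimds: true,
      threads: 4,
      compatibility: ["force_innodb", "strip_definers", "strip_restricted_grants"]
    })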

Please note ocimds & compatibility options.
Details are available in the documentation.



Load the dump in MDS

Now we must restore our dump.
We will use the Dump Loading Utility.

From the compute instance :


connect to the MDS instance (10.0.1.23):


Then load the MySQL 5.7 instance’s dump using util.loadDump() :
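A sketch of the load, run from the same compute instance while connected to MDS (the directory and the option values are assumptions based on the options discussed below):

    util.loadDump("/home/ubuntu/dump", {
      ignoreVersion: true,
      updateGtidSet: "append",
      threads: 4
    })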

Please note the ignoreVersion & updateGtidSet options.
Documentation is available here.



Create a replication channel on MDS

Here is the easiest part.
Most of the work will be done using the OCI console \o/

The feature we are looking for is named Channel.
You can find it by clicking on “MySQL” then “Channel” and finally pushing the “Create Channel” button.

Create Inbound MySQL Replication in MySQL Database Service in OCI


The following is pretty straightforward.
Below is the main information:

Create in Compartment: Choose the right Compartment

Name (optional): ex. mysqlchannel_MDS2_FRA

Source Connection – Configure Connection to the MySQL Source
Hostname: 10.0.0.9

Username: rplAdmin
(from the replication user created on the source)

Password: Repl1234c@
(from the replication user created on the source)

SSL Mode: (set when creating the replication user)
Required (REQUIRED)
Establish an encrypted connection.

Target – Configure the DB System target
Select a DB System: MDS2_FRA


Finally, push the Create Channel button

Create Inbound MySQL Replication in MySQL Database Service in OCI

et voilà!

Wait for the Channel icon to become green (ACTIVE)…


MySQL Replication status from MySQL Enterprise Monitor



In the meantime, if you have the soul of a DBA, you can use some commands on the MDS instance.

SHOW REPLICA STATUS


Using the replication channel name (default: replication_channel) you can also use this query:
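One possibility, using the Performance Schema replication tables, would be a query like:

    SELECT * FROM performance_schema.replication_connection_status
    WHERE CHANNEL_NAME = 'replication_channel'\G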


Last but not least, there is Lefred’s favorite replication query (it’s a gift):



Conclusion

MySQL replication has always been easy to setup.
This is still true with MySQL 8.0.
This is even more true with MySQL Database Service 🙂

Easy, flexible and powerful. This is exactly what your application needs.



References




Follow me on twitter

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.


Thanks for using MySQL!


MySQL in Azure

January 19, 2021
Ikaria Greece by Anastasia Papachristopoulou

Cloud computing is becoming more and more popular in IT circles and it is growing super fast. Industries are charmed by its flexible character and by the fact that they can avoid maintaining their own infrastructure.
It is an easy solution that offers automation in every field and can scale up depending on the needs of each user and company.

There are many Cloud providers, such as Oracle Cloud Infrastructure, Microsoft Azure, Amazon Web Services, etc…
In this article, we are going to see how to set up a MySQL Database instance on Microsoft Azure Cloud.

At this point, it is important to clarify that we are going to set up MySQL as a Service (PaaS).

So, before starting with the actual Database creation, we need:

  • An Azure subscription; you may create a free Azure account, which offers a $200 credit for the first month and the most popular services free for the first year.
  • Have an available resource group.
  • Have access specifically for MySQL.


Let’s sign in to the Azure portal, https://portal.azure.com/, and you should see a screen like the following:

Azure MySQL


We need to verify that we have the correct resources available for our subscription, in order to set up the MySQL DB:

  • From Azure Portal, go to Subscriptions
  • Then, go to Development Services and choose Resource Providers
  • Scroll down and check if Microsoft.DBforMySQL is registered
Azure MySQL


If it is not registered, then click Register on the top left corner:

Azure MySQL


Once the registration is completed, we are ready for creating our DB.

Azure MySQL


To create the MySQL Database, we need to choose Azure Database for MySQL Servers:

Azure MySQL


By clicking the Add option on the top left, we are presented with two options, as shown below:

Azure MySQL
  • Single Server is a fully managed database service with minimal requirements for customization of the database. The single server platform is designed to handle most of the database management functions, such as patching, backups, high availability and security, with minimal user configuration and control. The architecture is optimized to provide 99.99% availability on a single availability zone. Single servers are best suited for cloud native applications designed to handle automated patching without the need for granular control of the patching schedule and custom MySQL configuration settings. The available MySQL version is 8.0.15.
  • Flexible Server (Preview) is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and server configuration customizations compared to the single server deployment, based on the user requirements. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Flexible servers also provide better cost optimization controls with the ability to start/stop your server, and burstable SKUs, ideal for workloads that do not need full compute capacity continuously. The available MySQL version is 5.7, with 8.0.21 in preview.



We choose Single Server for our start, and we are presented with the following screen:

Azure MySQL


Azure MySQL


We need to set accordingly:

Subscription: Make sure that you have chosen the corresponding subscription

Resource Group: Choose the proper resource group with the correct resources

Server name: This is actually the name of the MySQL Server

Data Source: You may create the new server from a backup; here we choose None

Location: Choose the proper location that your resource group has access to

Version: Available versions are 5.6, 5.7 and 8.0

Compute+storage: Choose the settings that suit you best; the options are Basic, General Purpose and Memory Optimized. The main differences are between Basic and General Purpose/Memory Optimized.

Basic offers 2 cores and storage up to 1024GB.

General Purpose offers up to 64 cores and storage up to 16384GB.

Memory Optimized offers up to 32 cores and storage up to 16384GB.

All three available configurations offer Backup retention policy up to 35 days (default is 7 days).

General Purpose:

Azure MySQL


Azure MySQL


Azure MySQL


Azure MySQL


Admin username: Choose the proper name of the DB admin

Password and Confirm Password : Create a strong password for the DB admin

As soon as these settings are ready, we click Next:

Azure MySQL


Additional Settings:

We are able to select encryption:

Azure MySQL


And then moving on:

Azure MySQL


Tags: name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups. We won’t be organizing our resources with tags now, so we move on to Review&Create to check the price and the options we have selected:

Azure MySQL


While our DB is being created, we see its progress:

Azure MySQL



Once ready, we are able to access the new resource that we created :

Azure MySQL


From this page, we may stop the server, reset the password, restore/delete and restart.

From the left sidebar, we may create alerts/metrics and gather information.
The server logs are also available in this section, as well as the Replica option, in case we need to set up a replica for our environment.

Azure MySQL


Same for the server settings:

Azure MySQL


One important note before we officially connect to our database.
We need to add the proper firewall rule to allow our client to connect to the MySQL instance:

Azure MySQL


In order to connect to the database via MySQL Workbench, we need to use the server name as the hostname, together with the username, as you may see in the following screenshot:

Azure MySQL
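If you prefer the command line, the same connection parameters can be used with the mysql client; the hostname and username formats below follow the usual Azure single-server convention and are assumptions for this example:

    mysql -h myserver.mysql.database.azure.com -u myadmin@myserver -p --ssl-mode=REQUIRED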


Finally, we are connected to the Database and we are able to execute any query we wish:

Azure MySQL


If you wish to delete the database, Azure will give you a warning that the deletion is irreversible, and you will need to provide the MySQL server’s name.



The same concept applies to the Flexible Server, but with a couple of differences:

Azure MySQL


Azure MySQL


Here, we need to specify the expected workload type; based on that, the most appropriate settings are recommended, and we also have Zone Redundancy available:

Azure MySQL


Backups offer the same retention policy as in Single Server:

Azure MySQL


To be able to connect, we need to set the firewall rules: either allow a specific public address to access the Flexible MySQL Server, or connect via a Virtual Network:

Azure MySQL


Azure MySQL


Flexible Server offers synchronous replication when Zone Redundancy is enabled; however, it does not offer read-only replicas, and due to the synchronous replication to another availability zone, the primary database server can experience elevated write and commit latency.

We need to mention here that if a newer version of MySQL is needed, we will have to create a VM instead.



Conclusion

Whatever your needs, cloud computing can most likely serve them, and probably for a fair price. It offers a variety of services, so give it a try!

After all, it is no coincidence that our heads are up in the clouds.



References

https://www.mysql.com/

https://docs.microsoft.com/en-us/azure/mysql/select-right-deployment-type

https://docs.microsoft.com/en-us/azure/mysql/flexible-server/concepts-high-availability


MySQL SHELL – The new era

December 17, 2020
Kefallonia Lake Melissani by Anastasia Papachristopoulou

MySQL Shell is a modern tool that can be used by both developers and database administrators. It is a tool that can be used with any of the following:

  • SQL, Python, Javascript
  • MySQL Document Store (NoSQL, X DevAPI)
  • JSON Documents / SQL tables

In one of our previous articles – Setting up Replication with various methods for MySQL 8 – we reviewed how to create a replica with multiple tools.
Now, it is time to perform the same action but with MySQL Shell.

In general, MySQL Shell is used with MySQL InnoDB Cluster to deploy a MySQL Group Replication, with or without MySQL Router.
Here we will examine how to create an asynchronous replication.



First of all, we may download MySQL Shell via the following url:

https://dev.mysql.com/downloads/shell/

We will be using MySQL Shell version 8.0.22, that is currently the latest and includes extra features.

Let’s start!


On the source database, we connect to mysqlsh:
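For instance (the connection credentials are placeholders):

    mysqlsh root@source_host:3306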


We may use option \sql or \py to use SQL or Python respectively instead of the JavaScript mode (\js):


For creating the dump we will be using the util.dumpInstance().
Please note that the outputUrl (the target directory to store the dump files) must be an empty folder, otherwise you will receive an error like:

util.dumpInstance: Cannot proceed with the dump, the specified directory ‘/’ already exists at the target location / and is not empty. (ArgumentError).



If the folder doesn’t exist, mysqlsh is smart enough to create it for us:


Now, let’s take a minute to review the options we are using:

dryRun: This won’t dump anything; it will just print information about what would be dumped.

ocimds: Enable checks for compatibility with MySQL Database Service (MDS).

threads: How many threads to use for dumping data chunks from the server.

compatibility: Apply MySQL Database Service compatibility modifications when writing dump files. Supported values: “force_innodb”, “strip_definers”, “strip_restricted_grants”, “strip_tablespaces”.

force_innodb – The MySQL Database Service requires use of the InnoDB storage engine. This option will modify the ENGINE= clause of CREATE TABLE statements that use incompatible storage engines and replace them with InnoDB.

strip_definers – strips the “DEFINER=account” clause from views, routines, events and triggers. The MySQL Database Service requires special privileges to create these objects with a definer other than the user loading the schema. By stripping the DEFINER clause, these objects will be created with that default definer. Views and Routines will additionally have their SQL SECURITY clause changed from DEFINER to INVOKER. This ensures that the access permissions of the account querying or calling them are applied, instead of the user that created them. This should be sufficient for most users, but if your database security model requires that views and routines have more privileges than their invoker, you will need to manually modify the schema before loading it.

strip_restricted_grants – Certain privileges are restricted in the MySQL Database Service. Attempting to create users granting these privileges would fail, so this option allows dumped GRANT statements to be stripped of these privileges.

strip_tablespaces – Tablespaces have some restrictions in the MySQL Database Service. If you’d like to have tables created in their default tablespaces, this option will strip the TABLESPACE= option from CREATE TABLE statements.


As we are using a database that contains storage engines other than InnoDB, we will be using the force_innodb option, along with strip_restricted_grants and strip_definers, as we have routines with a DEFINER clause.

So, here we go:
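A sketch of the call, with the output directory and thread count being assumptions:

    util.dumpInstance("/data_imp", {
      dryRun: false,
      ocimds: true,
      threads: 8,
      compatibility: ["force_innodb", "strip_definers", "strip_restricted_grants"]
    })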


As this moves along, we may open a new connection to our target host, enable the local_infile parameter so that the dump can be imported, and at the same time disable the InnoDB redo log as well.
Disabling redo logging speeds up data loading by avoiding redo log writes and doublewrite buffering:
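Those steps might look like the following sketch; the directory and the numeric values are assumptions, matching the options reviewed just below:

    -- in a SQL session on the target host:
    SET GLOBAL local_infile = ON;
    ALTER INSTANCE DISABLE INNODB REDO_LOG;

    // then, from MySQL Shell (JS mode) connected to the target:
    util.loadDump("/data_imp", {
      threads: 8,
      updateGtidSet: "append",
      skipBinlog: true,
      waitDumpTimeout: 1800
    })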


Before proceeding, let’s discuss once again the options we are using:

/data_imp: is where the files will be located.

threads: The number of parallel threads to use to upload chunks of data to the target MySQL instance.

updateGtidSet: Apply the gtid_executed GTID set from the source MySQL instance, as recorded in the dump metadata, to the gtid_purged GTID set on the target MySQL instance.

skipBinlog: Skips binary logging on the target MySQL instance for the sessions used by the utility during the course of the import, by issuing a SET sql_log_bin=0 statement.

waitDumpTimeout: Setting this option activates concurrent loading by specifying a timeout (in seconds) for which the utility waits for further data after all uploaded data chunks in the dump location have been processed.


updateGtidSet is available from 8.0.22 version.
If you are using 8.0.21, then you will need to manually set the GTID by locating the gtidExecuted field in the @.json dump file in the dump metadata:


So, basically, we have taken a dump from the source host and, at the same time, restored it to the target host.
Once it is completed (and since MySQL Shell is really fast, we won’t have to wait for long), we log in to the target host and execute the following command:
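That command is the usual GTID-based replication setup; the host and user names here are placeholders:

    CHANGE MASTER TO
      MASTER_HOST = 'source_host',
      MASTER_USER = 'repl_user',
      MASTER_PASSWORD = '********',
      MASTER_AUTO_POSITION = 1;
    START REPLICA;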


And our replica is all set.


Apart from util.dumpInstance(), we also have the option to use util.dumpSchemas() and util.dumpTables() (the latter introduced in MySQL Shell 8.0.22), which offer the possibility of dumping specific schemas, tables or views.

It is also important to mention that the above-mentioned utilities are fully compatible with Oracle Cloud Infrastructure.



Conclusion

MySQL Shell is a really useful tool that offers a variety of functionalities.
It is super fast and saves us a lot of time when dealing with broken replication or with data transfer.

Try it, you will love it 🙂



References




Setting up Replication with various methods for MySQL 8

December 10, 2020
Irish Castle and mountains by Olivier DASINI

In the world of databases, one of the most important goals we are all trying to achieve is high availability: not losing our valuable data and, in case one server fails, always having another to step in and take control.

So, it is critical to have along with our primary database, other identical instances, the replicas.

In this blog post, we will see the most common ways to set up replication.

MySQL offers a variety of options for setting up replication. Here we are going to review how to create a replica based on GTID replication using one of the following methods:

  • mysqldump
  • rsync
  • MySQL Enterprise Backup (MEB)
  • Clone plugin

MySQL Shell, using the utilities, is another method. Please see this article.


Before we initiate any process, we need to make sure that the my.cnf of both the primary and the replica has the following parameters set:
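A minimal sketch (the server_id value and binlog name are placeholders; each server needs its own unique server_id):

    [mysqld]
    server_id                = 1
    log_bin                  = mysql-bin
    gtid_mode                = ON
    enforce_gtid_consistency = ON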



1st Replication Method – mysqldump

So, starting with the mysqldump option, we need to make sure that we have enabled gtid-purged:
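For example, a full dump with the GTID information included could be taken like this (the output file name is a placeholder):

    mysqldump --all-databases --single-transaction --triggers --routines --events \
              --set-gtid-purged=ON > backup.sql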

Once this is finished, if we do :

We will receive the corresponding GTID that is needed for setting up our replica. The backup.sql file should then be moved to the replica; a simple scp should do.

Moving along to the replica, we perform the following command:

After this import is completed, we simply need to execute the following:
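For a GTID-based setup, that boils down to something like this (host and user are placeholders):

    CHANGE MASTER TO
      MASTER_HOST = 'primary_host',
      MASTER_USER = 'repl_user',
      MASTER_PASSWORD = '********',
      MASTER_AUTO_POSITION = 1;
    START SLAVE;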

If both of the above commands are executed with no issue, perform a show slave status\G to verify that both Slave_IO_Running and Slave_SQL_Running are set to Yes.



2nd Replication Method – rsync

When using the rsync option, the database on the replica must not be running, and on the primary we will need to lock the tables. As soon as the tables are locked, we may proceed with rsyncing the data.

So:
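A sketch of those two steps, assuming the default data directory /var/lib/mysql and a placeholder replica host name:

    -- on the primary:
    FLUSH TABLES WITH READ LOCK;

    # from the primary host, copy the data directory to the replica:
    rsync -av /var/lib/mysql/ replica_host:/var/lib/mysql/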

Once the copy is completed, we will need to remove auto.cnf on the replica (otherwise it would have the same server UUID as the primary and we would not be able to set up our replica), and we will also remove mysql.sock.

The next steps will be to start our database and set it up as a replica:

And of course do not forget to reverse the lock command on the primary:



3rd Replication Method – MEB

MEB, MySQL Enterprise Backup, is a commercially licensed backup utility for MySQL databases. It is one of the fastest and easiest ways to set up your replication while the primary database is up and running.

Please note that the database on the replica should not be running and the corresponding paths should be empty, as leftovers may cause confusion. For example, a forgotten binlog file and binlog index could make the server try to read from erroneous binlogs, which would leave the replication broken.

So, for the mysqlbackup backup & restore we can perform:
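A sketch of the streamed backup-and-restore, with credentials, host name and directories as placeholders (this follows the single-file streaming pattern documented for mysqlbackup):

    mysqlbackup --user=root --password --backup-dir=/tmp --backup-image=- backup-to-image | \
      ssh user@replica_host 'mysqlbackup --backup-dir=/tmp/backup --datadir=/var/lib/mysql \
                             --backup-image=- copy-back-and-apply-log'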

In one command we are sending our data to the replica as well.

Once this is completed, we go into the /tmp folder on the primary node and, inside the meta folder, we take note of the command inside the backup_gtid_purged.sql file. This is needed for setting up our replica:
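On the replica, that boils down to applying the recorded GTID set and then pointing the replica at the primary (all values are placeholders):

    -- a RESET MASTER; may be needed first if gtid_executed is not empty on the replica
    SET @@GLOBAL.gtid_purged = '<GTID set recorded in the meta folder>';

    CHANGE MASTER TO
      MASTER_HOST = 'primary_host',
      MASTER_USER = 'repl_user',
      MASTER_PASSWORD = '********',
      MASTER_AUTO_POSITION = 1;
    START SLAVE;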

Our replica is up and running with no trouble.



4th Replication Method – clone plugin

Last but not least, we have the clone method. The clone plugin was introduced in version 8.0.17 and it is a really useful tool.

The first thing we need to do is install the clone plugin on both the primary and the future replica, and create a user with the proper grants:
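A sketch (the user name, host pattern and password are placeholders; the donor side needs BACKUP_ADMIN, the recipient side CLONE_ADMIN):

    -- on both servers:
    INSTALL PLUGIN clone SONAME 'mysql_clone.so';

    -- on the primary (donor):
    CREATE USER 'clone_user'@'%' IDENTIFIED BY '********';
    GRANT BACKUP_ADMIN ON *.* TO 'clone_user'@'%';

    -- on the replica (recipient):
    CREATE USER 'clone_user'@'%' IDENTIFIED BY '********';
    GRANT CLONE_ADMIN ON *.* TO 'clone_user'@'%';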

Verify that the plugin is indeed active:

And we will also set up our donor on the replica:

Now we need to log in to the replica with the clone_user we created in the above-mentioned steps and execute the following command:
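With the placeholder host and password from the sketch above, the statement looks like this:

    CLONE INSTANCE FROM 'clone_user'@'primary_host':3306 IDENTIFIED BY '********';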

Once this is completed, we log out from clone_user and log in with root and set up our replica:

Please note that, in all of these methods, the master_user being used to set up our replica needs to be accessible from the replica and to have the REPLICATION SLAVE privilege.



Conclusion

To sum up, MySQL offers a variety of ways to set up your replication; which one you choose depends on your needs in terms of speed, performance and downtime.



References



Create a Cloud Backup with MySQL Enterprise Backup

December 10, 2020
Cloud from above by Olivier DASINI

MySQL Enterprise Edition customers have access to MySQL Enterprise Backup.

MySQL Enterprise Backup provides enterprise-grade backup and recovery for MySQL. It delivers hot, online, non-blocking backups on multiple platforms including Linux, Windows, Mac & Solaris.

https://www.mysql.com/products/enterprise/backup.html


Cloud backup is a strategy increasingly used in organizations. Sending copies of your data to the cloud can help you prevent a devastating IT crisis and ensure business continuity.


Currently, MySQL Enterprise Backup supports the following types of cloud storage services:

In this blog post I will use the OCI object storage, for obvious reasons 🙂 and also because it’s probably the best feature/price ratio choice.



Create a pre-authenticated request for a bucket

First we must create an OCI Pre-Authenticated Request (PAR) for a bucket.

Pre-authenticated requests provide a way to let users access a bucket or an object without having their own credentials, as long as the request creator has permissions to access those objects.

https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/usingpreauthenticatedrequests.htm


You can create, delete, or list pre-authenticated requests using the Console, using the CLI, or by using an SDK to access the API.
Click here to see how to create a PAR.


If you use the console, you’ll have something like:

OCI Pre-Authenticated Requests by Olivier DASINI

My Oracle Cloud Infrastructure Pre-Authenticated Request name is : MEB-par-bucket-20201203-1612

OCI Pre-Authenticated Requests by Olivier DASINI

A URL is generated. It is very important to save it!
We’ll use it with MySQL Enterprise Backup.


I can also see my list of PARs in the console:

OCI Pre-Authenticated Requests by Olivier DASINI



Create a Cloud Backup on Oracle Cloud Infrastructure Object Storage

I’m using MySQL Enterprise Backup 8.0.22.
The extra options to backup your data into OCI Object Storage are:

cloud-service : Cloud service for data backup or restoration.
Currently, there are 3 types of cloud storage services supported by mysqlbackup, represented by the following values for the option:

  • OCI: Oracle Cloud Infrastructure Object Storage
  • openstack: OpenStack Swift or compatible object storage services
  • S3: Amazon Simple Storage Service (S3) or compatible storage service.


cloud-object : The storage object for the backup image.
Note that names of objects within the same bucket have to be unique.


cloud-par-url : The Pre-Authenticated Request (PAR) URL for OCI Object Storage.
For a backup to OCI Object Storage, it is the PAR URL for the storage bucket; for restore and other operations on an object stored on OCI, it is the PAR URL for the object.


Click here to find the complete list of cloud storage options (OCI, Amazon S3 & OpenStack Swift options).


In this article the values of these options are:

  • cloud-service=OCI
  • cloud-par-url=https://objectstorage.us-ashburn-1.oraclecloud.com/p/JL7k0DnNE8DTV<…snip…>_bucket-20200908-1001/o/
  • cloud-object=myBck_20201203-1600.mbi


I assume you already know what to do before the first backup.

Let’s create the backup then:

Don’t forget the value “-” (a dash) for the backup-image parameter:
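Putting the cloud options together, the backup command could look like this sketch; credentials and the local backup-dir are placeholders, and the PAR URL is the one saved earlier:

    mysqlbackup --user=root --password \
      --backup-dir=/home/opc/meb-tmp \
      --cloud-service=OCI \
      --cloud-par-url='<bucket PAR URL saved earlier>' \
      --cloud-object=myBck_20201203-1600.mbi \
      --backup-image=- \
      backup-to-image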


The backup is now completed and stored on the Oracle Cloud Infrastructure object storage bucket.

OCI Pre-Authenticated Requests by Olivier DASINI



Restore a Backup from Oracle Cloud Infrastructure

Again, a Pre-Authenticated Request (PAR) URL for OCI Object Storage will be used.
This time we will use a PAR on an object with Read PAR privileges created before the restoration.


Using the console, after selecting my Bucket, I create a pre-authenticated request for an object:

OCI Pre-Authenticated Requests by Olivier DASINI

The PAR target is Object
My object name is myBck_20201203-1600.mbi
A read-only access type is sufficient : Access Type: Permit reads on the object

Et voilà!
My PAR is created

OCI Pre-Authenticated Requests by Olivier DASINI

A URL is generated. It is very important to save it!
We’ll use it with MySQL Enterprise Backup.


Now we have all that is needed to restore with MySQL Enterprise Backup.

I assume you already know how to restore a MySQL instance with MEB.

Thus, in our context, to restore a single-file backup from OCI Object Storage to a MySQL Server, we will use:
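A sketch of the restore, this time using the object PAR created above; the paths are placeholders, with --backup-dir matching the meb-tmp directory mentioned below:

    mysqlbackup --defaults-file=/etc/my.cnf \
      --backup-dir=/home/opc/meb-tmp \
      --datadir=/var/lib/mysql \
      --cloud-service=OCI \
      --cloud-par-url='<object PAR URL saved earlier>' \
      --backup-image=- \
      copy-back-and-apply-log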

The restore is now completed.


As a side note, if you want to see the 4 warnings, take a look at the MEB logfile, located in the --backup-dir, meb-tmp in this article:

Nothing really serious in this context.
If you need more information, please click here.


The rest of the story is classic: restart your MySQL instance and you’re good to go 🙂


One more thing to know and keep in mind is that a cloud backup always uses a single write thread.
In other words, backup & restore durations could be much longer than for a local operation.

That is why it is a good practice, when possible, to also keep a local copy of the backup file.

It is usually easier and much faster to recover from a local location.



MySQL Enterprise Edition

MySQL Enterprise Edition includes the most comprehensive set of advanced features, management tools and technical support to achieve the highest levels of MySQL scalability, security, reliability, and uptime.

It reduces the risk, cost, and complexity in developing, deploying, and managing business-critical MySQL applications.

MySQL Enterprise Edition server Trial Download (Note – Select Product Pack: MySQL Database).

MySQL Enterprise Edition



References




Follow me on twitter

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.


Thanks for using MySQL!


MySQL 8.0.22 New Features Summary

November 10, 2020
Sakila mozaik by Olivier DASINI

Presentation of some of the new features of MySQL 8.0.22 released on October 19th, 2020.


Highlight


Slides


Download this presentation and others on my SlideShare account.


Video


That might interest you




Follow me on twitter

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.


Thanks for using MySQL!



Automatic connection failover for Asynchronous Replication

November 5, 2020
Plage par Olivier DASINI

TL;DR

Since MySQL 8.0.22 there is a mechanism in asynchronous replication that makes the receiver automatically try to re-establish an asynchronous replication connection to another sender, in case the current connection gets interrupted due to the failure of the current sender.



Asynchronous automatic connection failover automates the process of re-establishing an asynchronous replication connection to another sender from the sender list.
That means that if the source of a replica crashes, the replica will be able to automatically connect to another source.
One of the biggest benefits is improving Disaster Recovery (DR) architectures.

With this feature, a typical architecture is a 3-node asynchronous replication cluster:
2 primary nodes in active/passive mode (if you need an active/active architecture, use MySQL InnoDB Cluster) and a third node connected to one of the primaries, either for DR or for some specialized task like analytics, for example.
So, in case its primary becomes unavailable (if the replication I/O thread stops due to the source stopping or due to a network failure), this replica will automatically connect to the other primary.


Another architecture is to use this asynchronous automatic connection failover feature with MySQL InnoDB Cluster.

Hey, guess what? This is the topic of this tutorial, and by the way, we will use some of the fanciest MySQL 8.0 features 🙂



Context

4 MySQL 8.0.22 instances :


– mysql_node1 : 172.20.0.11
– mysql_node2 : 172.20.0.12
– mysql_node3 : 172.20.0.13
These 3 are members of a MySQL Group Replication cluster.
I created a MySQL InnoDB Cluster (resources here and here).
I will not use MySQL Router in this tutorial.


– mysql_node4 : 172.20.0.14
It’s my replica.
It will be asynchronously connected to the Group Replication.

Because a picture is worth a thousand words, below is the architecture I want to achieve :

Asynchronous Replica of a MySQL Group Replication by Olivier DASINI

Graphs in this article are from MySQL Enterprise Monitor.



Current status

I’m using MySQL 8.0.22 :


I have a 3 nodes Group Replication cluster up and running :



Setup the replication user

Like you know, there are some preparation steps to be able to use MySQL Asynchronous Replication. If you’re not familiar with replication using GTID, please read this.

On the Group Replication primary (currently mysql_node1), I set up the asynchronous replication user:
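A sketch of that user; the name and password are placeholders, and the host is restricted to the replica’s IP:

    CREATE USER 'repl_async'@'172.20.0.14' IDENTIFIED BY '********';
    GRANT REPLICATION SLAVE ON *.* TO 'repl_async'@'172.20.0.14';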



Setup the clone user

The clone plugin is one of my favorite features!

It permits cloning data locally or from a remote MySQL server instance.
Cloned data is a physical snapshot of data stored in InnoDB that includes schemas, tables, tablespaces, and data dictionary metadata.
The cloned data comprises a fully functional data directory, which permits using the clone plugin for MySQL server provisioning.

It’s a very convenient way to copy data from the source (the Group Replication) to the replica (mysql_node4).

Thanks to InnoDB Cluster the Clone plugin is already installed on the 3 members.
So, on the primary member (currently mysql_node1), I’ll create a dedicated clone user with the donor privileges needed for using and monitoring the clone plugin:

Note that I could have used the dedicated cluster administrator user instead of creating a specialized clone user.


On mysql_node4, the future replica, I’ll create the same user but with the recipient privileges.
But before that, I’ll install the clone plugin and set the clone donor list:
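On mysql_node4, that preparation might look like this; the user name and password are placeholders, and the donor list points to the three group members:

    INSTALL PLUGIN clone SONAME 'mysql_clone.so';
    SET GLOBAL clone_valid_donor_list = '172.20.0.11:3306,172.20.0.12:3306,172.20.0.13:3306';
    CREATE USER 'clone_user'@'%' IDENTIFIED BY '********';
    GRANT CLONE_ADMIN ON *.* TO 'clone_user'@'%';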



Clone a MySQL instance

Now we have everything all set to create the replica from a member of the group.

On the future replica – mysql_node4 – we can now run the clone instance command:
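With the placeholder clone user created above, and cloning from the current primary, the command is:

    CLONE INSTANCE FROM 'clone_user'@'172.20.0.11':3306 IDENTIFIED BY '********';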


If you want to monitor the cloning progress run the following query:


When the cloning is over, the MySQL instance must be restarted (that will normally happen automatically).
After the restart, you can verify that the clone completed successfully with the queries below:

and:



Add the replica

First I will setup the configuration information for a replication source server to the source list for a replication channel.
To do that we use the function: asynchronous_connection_failover_add_source
Information needed are the replication channel name, the source server address, port and network namespace, and also the weight.

More information are available here.

For this tutorial I chose the following values:

  • Channel name: autof
  • Source server addresses: mysql_node1, mysql_node2, mysql_node3
  • Source server port: 3306
  • Source server network namespace: ” (empty)
  • Weight: respectively: 50, 90, 90
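Using those values, the calls on the replica might look like the following sketch:

    SELECT asynchronous_connection_failover_add_source('autof', '172.20.0.11', 3306, '', 50);
    SELECT asynchronous_connection_failover_add_source('autof', '172.20.0.12', 3306, '', 90);
    SELECT asynchronous_connection_failover_add_source('autof', '172.20.0.13', 3306, '', 90);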


The replica’s source lists for each replication channel for the asynchronous connection failover mechanism can be viewed in the Performance Schema table replication_asynchronous_connection_failover.


To set the parameters that the replica server uses for connecting to the source, we use the well-known CHANGE MASTER TO statement.
You already know most of its clauses, so let’s only focus on some of them:

  • SOURCE_CONNECTION_AUTO_FAILOVER : activates the asynchronous connection failover mechanism.
  • MASTER_RETRY_COUNT & MASTER_CONNECT_RETRY : define the failover time. The default setting is… 60 days, probably not what you want :). So, you (most likely) should reduce these settings; e.g. 1 minute is respectively 20 and 3 (20 x 3 = 60).
  • FOR CHANNEL : enables you to name which replication channel the statement applies to. The CHANGE MASTER TO statement applies to this specific replication channel.


Now let’s configure the replication:
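With the hypothetical replication user created earlier, the configuration might look like this sketch:

    CHANGE MASTER TO
      MASTER_HOST = '172.20.0.11',
      MASTER_PORT = 3306,
      MASTER_USER = 'repl_async',
      MASTER_PASSWORD = '********',
      MASTER_AUTO_POSITION = 1,
      MASTER_RETRY_COUNT = 3,
      MASTER_CONNECT_RETRY = 5,
      SOURCE_CONNECTION_AUTO_FAILOVER = 1
      FOR CHANNEL 'autof';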

Please note that my failover time in this tutorial is 15 seconds (3 x 5). Obviously the relevant setting depends on your needs. A longer duration will probably make more sense in real life.


Then start the replication, on channel autof using START REPLICA:


Status information about the replication can be seen with SHOW REPLICA STATUS:



Automatic connection failover

We have now configured our replication, with an InnoDB Cluster/Group Replication as a source and a standalone MySQL server as a replica.
Let’s see how the automatic connection failover works.

Restart the replica

I want to see the behavior after a restart of the replication.

State before the stop:


Stop the replication:

MySQL Enterprise Monitor


After a while, start the replication:


Replica picks up where it left off… as you would have expected.



Short unavailability of the source

I want to see the behavior after a short unavailability of the source. I mean a duration lower than the failover threshold – connection_retry_interval x connection_retry_count – 15 seconds (5×3) in this example.

State before the stop of the source, mysql_node2:

… Stop mysql_node2 for 10 seconds …

MySQL Enterprise Monitor

State after the start of the source mysql_node2:

MySQL Enterprise Monitor

Well… nothing changed!
The unavailability of the source was not long enough to trigger the failover.
That is awesome for preventing unnecessary failovers.



Long unavailability of the source

I want to see the behavior after a longer unavailability of the source. I mean a duration greater than the failover threshold – connection_retry_interval x connection_retry_count – 15 seconds (5×3) in this example.

State before the stop of the source, mysql_node2:

… Stop mysql_node2 for 20 seconds …

MySQL Enterprise Monitor

As expected, the asynchronous automatic connection failover took place. \o/
The new source is now mysql_node3, because it has a bigger weight than mysql_node1 (90 vs 50) and because it was available 🙂



Limitations

Please be aware that in 8.0.22 this feature lacks some of the functionality needed to replace MySQL Router as a means to replicate from an InnoDB Cluster/Group Replication cluster.

Things such as:

  • does not automatically learn about new members or members that are removed
  • does not follow the primary role, it stays connected to whatever host it was connected to
  • does not follow the majority network partition
  • does not care if a host is no longer part of the group; as long as it can connect to it, it will keep using it

These limitations will be lifted in future versions.




This is a very nice feature starting with MySQL 8.0.22, useful for both MySQL Replication and MySQL Group Replication architectures.

And you know what? There is more to come 😉

Stay tuned!



References




Follow me on twitter

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.


Thanks for using MySQL!


Plan your MySQL upgrade

October 21, 2020
Jardin Balata Martinique by Olivier DASINI


I’ve made a short video that will give you tips and tricks to successfully upgrade to MySQL 8



This information is from my presentation : Upgrade from MySQL 5.7 to MySQL 8.0



Thanks for using MySQL!

Follow me on twitter

Watch my videos on my YouTube channel.


MySQL 8.0.21 New Features Summary

August 27, 2020
Sakila mozaik by Olivier DASINI

Presentation of some of the new features of MySQL 8.0.21 released on July 13th, 2020.


Highlight

  • Runtime disabling Innodb Redo Log
  • JSON_VALUE function
  • CREATE TABLE… SELECT is atomic
  • Per-user Comments & Attributes
  • MySQL Document Store Enhancements
  • MySQL Shell Enhancements
  • MySQL Router Enhancements
  • MySQL InnoDB Cluster Enhancements
  • MySQL Group Replication Enhancements
  • Thanks to the Contributors


Slides


Download this presentation and others on my SlideShare account.



Watch my videos on my YouTube channel.


That might interest you





Follow me on twitter

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.


Thanks for using MySQL!

1