Data Migration from MariaDB to MySQL

March 9, 2022

Yet another customer wanted to migrate from MariaDB to MySQL. So I decided to write down the overall data migration process.

Indeed, each migration is a unique story, so I won’t provide you a generic “How to migrate”…
The idea is to highlight the general path, help you avoid common pitfalls and thus save you some precious time.

Context

  • Source database
    • MariaDB 10.3
    • Around 8GB of data
    • Only InnoDB tables
    • Many user accounts (important information)
  • Target databases
    • MySQL Database Service 8.0.28
    • MySQL 8.0.28

In fact, the (real) target was MySQL Database Service, the MySQL PaaS on Oracle Cloud Infrastructure, but I will also show the workflow for on-premise MySQL.

Please find below a reenactment of the events…

MySQL Shell

The first step is to install the MySQL DBA BFF: MySQL Shell.
MySQL Shell is an advanced client and code editor for MySQL. You can manage your data using SQL, JavaScript and Python.

In addition to data manipulation (btw MySQL 8.0 also provides a NoSQL CRUD API to handle JSON documents), there are a lot of other things you can do with MySQL Shell, like creating a MySQL cluster (InnoDB Cluster, InnoDB ReplicaSet, InnoDB ClusterSet), verifying whether your MySQL server instance is ready for upgrade, dumping and restoring data, etc.

Believe me, if you try it, you will love it 🙂


It is way better than the old mysql text client and some of these features work with MariaDB (at least the ones needed for the migration).

It is easy to install, e.g. with the package manager of your Linux distribution.
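For example, on a RedHat-family system (assuming the MySQL Yum repository is already configured on the host):

```shell
# Install MySQL Shell from the MySQL Yum repository
sudo yum install -y mysql-shell

# Check the installed version
mysqlsh --version
```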

The source dataset

Let’s have a look at the dataset.

Unfortunately, MariaDB does not have the sys schema, so we cannot use this very elegant query:
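On MySQL, it would be something along these lines (a sketch using the sys.format_bytes() function, which makes the output human-readable):

```sql
-- Human-readable dataset size, MySQL only (requires the sys schema)
SELECT sys.format_bytes(SUM(data_length))                AS 'Data',
       sys.format_bytes(SUM(index_length))               AS 'Indexes',
       sys.format_bytes(SUM(data_length + index_length)) AS 'Total Size'
FROM information_schema.tables;
```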

So let’s go back in time and use the information_schema instead, with the following uglier query:
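For example (one possible information_schema-only version, which works on MariaDB):

```sql
-- Dataset size in GB, computed manually from information_schema
SELECT ROUND(SUM(data_length) / 1024 / 1024 / 1024, 2)                AS 'Data (GB)',
       ROUND(SUM(index_length) / 1024 / 1024 / 1024, 2)               AS 'Indexes (GB)',
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS 'Total (GB)'
FROM information_schema.tables;
```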

Approximately 8GB of data.

What about the storage engines?

Again, because of the lack of the sys schema, we cannot use this pretty query:
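On MySQL, it could look like this (again a sketch relying on sys.format_bytes()):

```sql
-- Tables and size per storage engine, MySQL only
SELECT engine,
       COUNT(*)                                          AS '# Tables',
       sys.format_bytes(SUM(data_length + index_length)) AS 'Size'
FROM information_schema.tables
WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
GROUP BY engine;
```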

So let’s go back in time again and use instead:
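A possible information_schema-only version, suitable for MariaDB:

```sql
-- Tables and size per storage engine, portable to MariaDB
SELECT engine,
       COUNT(*)                                                 AS '# Tables',
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2)  AS 'Size (MB)'
FROM information_schema.tables
WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
  AND engine IS NOT NULL
GROUP BY engine;
```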

All tables are in InnoDB \o/
That’s how DBA’s life should be 🙂

Migrate the data

The source is MariaDB 10.3 and the target is MySQL 8.0, on-premise or using MySQL Database Service.

With such an amount of data (~8GB), a dump is not an issue, especially if you use the MySQL Shell utilities (please forget about the aging mysqldump – it deserves a nice retirement).

Migrate to MySQL on-premise

Because I need the whole instance, I’m going to use MySQL Shell dumpInstance utility.

An important option of dumpInstance is compatibility. It can

  • enforce the InnoDB storage engine (force_innodb), i.e. change CREATE TABLE statements to use the InnoDB storage engine for any tables that do not already use it,
  • ignore any missing primary keys (ignore_missing_pks),
  • automatically add primary keys in invisible columns (create_invisible_pks)
  • etc…

Here, I will only use force_innodb. (If your memory is better than mine, you will remember that all the tables are already in InnoDB, so it is not really needed in this case, but I must admit that I’m paranoid :-O).
Joking aside, it is in my opinion a good habit to use this option.

If you are worried about primary keys, below is a query that will help you check your schema:
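One possible way to list tables without a primary key, joining information_schema.tables and information_schema.table_constraints (works on both MariaDB and MySQL):

```sql
-- List base tables that have no PRIMARY KEY constraint
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints tc
       ON  t.table_schema = tc.table_schema
       AND t.table_name   = tc.table_name
       AND tc.constraint_type = 'PRIMARY KEY'
WHERE tc.constraint_name IS NULL
  AND t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
```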

The MySQL Shell dump instance command for dumping data from MariaDB to MySQL 8.0 on-premise looks like:
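For example, in MySQL Shell (JS mode), connected to the MariaDB source (a sketch – adapt the connection and the target directory to your environment):

```js
// Dump the whole MariaDB instance to a local directory,
// forcing InnoDB for any non-InnoDB table definitions
util.dumpInstance("/bkp/dumpMariadb1", {
    threads: 8,
    compatibility: ["force_innodb"]
})
```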

  • The dump will be stored in the /bkp/dumpMariadb1 directory
  • threads: 8 -> 8 parallel threads to dump chunks of data from the MariaDB instance
  • compatibility: force_innodb -> update to InnoDB for any tables that do not already use it

Depending on your context you may have to tune this command.

Move the dump to the MySQL 8.0 host and then restore it.

In MySQL 8.0 there is a special ninja technique 🙂 to load a dump at the speed of light… All right, I may be exaggerating a bit, but at least at the speed of the sound by disabling crash recovery aka disabling redo logging.

DISCLAIMER: This feature is intended only for loading data into a new MySQL instance. Do not disable redo logging on a production system. It is permitted to shutdown and restart the server while redo logging is disabled, but an unexpected server stoppage while redo logging is disabled can cause data loss and instance corruption.

The commands to restore the dump on MySQL 8.0 on-premise look like:
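A sketch of the restore session in MySQL Shell (JS mode), connected to the target MySQL 8.0 instance (the \sql one-liners run a single SQL statement from JS mode):

```js
// Allow local data loading
\sql SET GLOBAL local_infile = 1;

// Speed up the load by disabling redo logging (new instance only!)
\sql ALTER INSTANCE DISABLE INNODB REDO_LOG;

util.loadDump("/restore/dumpMariadb1", {
    threads: 8,
    ignoreVersion: true,
    skipBinlog: true,
    loadUsers: false
})

// Re-enable redo logging once the load is finished
\sql ALTER INSTANCE ENABLE INNODB REDO_LOG;
```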

  • local_infile=1 -> allows local data loading
  • ALTER INSTANCE DISABLE INNODB REDO_LOG -> disables redo logging to speed up data loading
  • The dump has been moved to the MySQL instance at /restore/dumpMariadb1
  • threads: 8 -> 8 parallel threads to upload chunks of data to the MySQL instance
  • ignoreVersion: true -> allows the import of the MariaDB dump into MySQL (MariaDB is treated as a MySQL 5.5 version)
  • skipBinlog: true -> skips binary logging for the sessions used during the restore
  • loadUsers: false -> do not import users and their roles and grants. This one is very important because MariaDB users are not compatible. Please note that the default is false (paranoid mode enabled)

Migrate to MySQL Database Service

As a prerequisite, I’m assuming that you have basic Oracle Cloud Infrastructure and MySQL Database Service knowledge.
If it is not yet the case, please read my Discovering MySQL Database Service series.

To migrate your data into MySQL Database Service (MDS), MySQL Shell is again your best friend to export the data.
You can either store the data locally or very easily in an OCI object storage bucket.

If the data is located in a bucket, you can then use the MDS Data Import option to automatically create a new MDS instance populated with your data.
If you choose the “local” storage, the import will be manual only.

The main thing to keep in mind during the import process is that you will most likely have a user account issue (MariaDB users are not compatible), so my recommendation is to take a MySQL Shell dump with all the business data (i.e. mysql, information_schema, performance_schema and sys are not included) but without the user accounts.

Because I need the whole instance, in both cases I’m using the MySQL Shell dumpInstance utility.

Backup the database into a bucket

I’m going to store my dump inside an Object Storage bucket so I must create the bucket before the dump. This is very easy by using the OCI interface.

If you are not familiar with OCI Object Storage you can follow this tutorial: Discovering MySQL Database Service – Episode 4 – Dump your MySQL data into an Object Storage bucket.

In order to put this data into the OCI bucket, we must do some configuration to allow MySQL Shell to easily connect to the OCI Object Storage service.

To this end, we will create an OCI CLI configuration file.

The CLI is a small-footprint tool that you can use on its own or with the Console to complete Oracle Cloud Infrastructure tasks. The CLI provides the same core functionality as the Console, plus additional commands. Some of these, such as the ability to run scripts, extend Console functionality.

To be clear, we will not use the CLI but its configuration file. The configuration file name and default location (on Linux) is /home/opc/.oci/config.

It should have the following information:

  • user: OCID of the user calling the API.
  • fingerprint: Fingerprint for the public key that was added to this user.
  • key_file: Full path and filename of the private key.
  • tenancy: OCID of your tenancy.
  • region: An Oracle Cloud Infrastructure region.

For more details please click here.

As an example, mine looks like:
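Something like the following (dummy values – the OCIDs and fingerprint are redacted, and the region is just an example):

```ini
[DEFAULT]
user=ocid1.user.oc1..xxxxxxxxxxxxxxxxxxxx
fingerprint=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
key_file=/home/opc/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..xxxxxxxxxxxxxxxxxxxx
region=eu-frankfurt-1
```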

You may be wondering how to get this information. Fair question!
Don’t panic, I already explained everything in this tutorial: Discovering MySQL Database Service – Episode 4 – Dump your MySQL data into an Object Storage bucket

My Bucket is named: migrationMariadbNoUser

The MySQL Shell dump instance command for migrating data from MariaDB to MDS, storing the dump into a bucket, looks like:
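For example, in MySQL Shell (JS mode), connected to the MariaDB source (a sketch – the dump prefix name dumpMariadbNoUser is just an example):

```js
// Dump the whole instance, without user accounts,
// straight into the OCI Object Storage bucket
util.dumpInstance("dumpMariadbNoUser", {
    users: false,
    osBucketName: "migrationMariadbNoUser",
    osNamespace: "abcdefghijkl",
    ocimds: true,
    threads: 8,
    compatibility: ["force_innodb", "strip_restricted_grants",
                    "strip_definers", "strip_tablespaces"]
})
```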

  • users: false -> exclude users and their roles and grants in the dump
  • osBucketName: “migrationMariadbNoUser” -> name of the Object Storage bucket to which the dump is to be written
  • osNamespace: “abcdefghijkl” -> namespace where the Object Storage bucket named by osBucketName is located
  • ocimds: true -> enables checks and modifications for compatibility with MDS
  • threads: 8 -> 8 parallel threads to dump chunks of data from the MariaDB instance
  • compatibility: ->
    • force_innodb -> change CREATE TABLE statements to use the InnoDB storage engine
    • strip_restricted_grants -> remove specific privileges that are restricted by MDS
    • strip_definers -> remove the DEFINER clause from views, routines, events, and triggers
    • strip_tablespaces -> all tables are created in their default tablespace

Complete documentation is available here.

The command above is storing the dump, compatible with MDS, into the bucket migrationMariadbNoUser.

Backup the database locally

If your dataset is not big and/or you are too lazy to spend a little time in configuration, the alternative is to dump your data locally.
The main drawback is that you will have to import the data into MDS manually – but this should not be a big deal if your dataset is not huge.

If you want to store the dump locally the command looks like:
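A sketch of the local variant (the target directory is just an example):

```js
// Dump the whole instance, without user accounts,
// to a local directory, keeping MDS compatibility checks
util.dumpInstance("/bkp/dumpMariadbNoUser", {
    users: false,
    ocimds: true,
    threads: 8,
    compatibility: ["force_innodb", "strip_restricted_grants",
                    "strip_definers", "strip_tablespaces"]
})
```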

The command is close to the one for on-prem.
Please note the users: false option. You could also dump the users and then run the import without loading them (the default behavior).
In fact, the most important difference compared to an on-prem data migration is the compatibility options. Because MDS is a PaaS, you have less flexibility than with on-prem MySQL, hence more compatibility needs.

Restore into MySQL Database Service from a bucket

You have 2 solutions: the automated one, easiest but less flexible, or the manual one.

The first one is the MDS Data Import feature using the OCI interface.

MySQL Database Service – Data Import

You’ll find the “How To” in: Discovering MySQL Database Service – Episode 5 – Create a MySQL DB system from a MySQL Shell dump

More details in the documentation.

If you want to speed up your import you can disable Crash Recovery (disable it during the import and enable it again when finished). Some details here.

You can also restore the dump manually, in a new fresh MDS instance.

The command looks like:
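For example, connected to the new MDS instance with MySQL Shell (e.g. from a compute instance or through a bastion; the dump prefix name is the one used at dump time):

```js
// Load the dump straight from the Object Storage bucket into MDS
util.loadDump("dumpMariadbNoUser", {
    osBucketName: "migrationMariadbNoUser",
    osNamespace: "abcdefghijkl",
    threads: 8,
    ignoreVersion: true
})
```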

Please note that in this case, you won’t be able to speed up the restore by disabling the crash recovery system (Redo Logs).

Restore into MySQL Database Service from a local dump

In this case we can use neither the crash recovery trick nor the Data Import feature.
Anyway, the MySQL Shell loadDump utility will still do the job.

The command looks like:
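A sketch, connected to the MDS instance with MySQL Shell (the directory is just an example):

```js
// Load the local dump into the MDS instance
util.loadDump("/restore/dumpMariadbNoUser", {
    threads: 8,
    ignoreVersion: true
})
```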

  • threads: 8 -> 8 parallel threads to upload chunks of data to the MySQL instance
  • ignoreVersion: true -> allow the import of the MariaDB dump into MySQL (MariaDB is treated as a MySQL 5.5 version)

loadUsers: false is not necessary here because the dump was made with users: false.

And don’t forget to re-create your users 🙂
You have plenty of authentication plugins to create secure user accounts.
We also provide Multifactor Authentication.

Btw if security is one of your concerns (and it should be), MySQL also provides password management features such as password expiration policy, password reuse policy, password verification-required policy, and failed-login tracking with temporary account locking.

Database migration is not only about data migration, so you still have some work to do. But now you should have a better understanding of the overall data migration process.

Follow me on Twitter.

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

Thanks for using MySQL!
