Discovering MySQL Database Service – Episode 4 – Dump your MySQL data into an Object Storage bucket

August 24, 2021

MySQL, the world’s most popular open source database, is available as a managed cloud service in Oracle Cloud Infrastructure (OCI) under the name of MySQL Database Service (MDS).

MySQL Database Service is a fully managed cloud service, 100% Developed, Managed and Supported by the MySQL Team.

This is the fourth episode of “Discovering MySQL Database Service“, a series of tutorials where I will show you, step by step, how to use MySQL Database Service and some other Oracle Cloud Infrastructure services.

Please also note that you can run this tutorial and thus try MySQL Database Service & the other Oracle Cloud Infrastructure services for free by starting your 30-day trial.

MySQL Database Service

Episode 4 – Dump your MySQL data into an Object Storage bucket

In the previous episode we created our Virtual Cloud Network, which provides complete control over the cloud networking environment we will build during this Discovering MySQL Database Service journey.

In this episode, we'll see what OCI Object Storage is and how to export data from a MySQL instance into an Object Storage bucket using MySQL Shell.
In the process of creating a MySQL DB System, this step is optional.
But it is especially useful if you want to create an MDS instance with data coming from a MySQL server hosted on premises or in any public/private cloud.

Because it is a plausible scenario, let’s see how to do that!

The workflow

The workflow is quite simple.
We will export all or part of our MySQL instance data into an Oracle Cloud Infrastructure Object Storage bucket using MySQL Shell. Then we will import the data from the bucket into a MySQL Database Service instance.
Today we are going to see only the first part (red rectangle in the picture below).

Thus the needed components are:

Object Storage

What is Object Storage?

The Oracle Cloud Infrastructure Object Storage service is an internet-scale, high-performance storage platform that offers reliable and cost-efficient data durability. The Object Storage service can store an unlimited amount of unstructured data of any content type, including analytic data and rich content, like images and videos.

With Object Storage, you can safely and securely store or retrieve data directly from the internet or from within the cloud platform. You can access data from anywhere inside or outside the context of the Oracle Cloud Infrastructure, as long as you have internet connectivity and can access one of the Object Storage endpoints.

Create an Object Storage Bucket

Here as well, there are many ways to do that. The most convenient is probably to use the OCI console.

In the menu, go to: Storage / Object Storage & Archive Storage / Buckets

Select the right compartment (left of the screen – DBA in this example) then press Create Bucket.

The bucket must have a name (Sakila_dump in this example – if you want to test with a sample database, Sakila database is available here).

The Default Storage Tier must be Standard – the default value.

Then you're good to go!

Now let's export the data to this freshly created Object Storage bucket.

Export data to the bucket

You already got it: we're going to use this amazing tool called MySQL Shell. But first we need some configuration to allow MySQL Shell to connect easily to the OCI Object Storage service.

To this end, we will create an OCI CLI configuration file.

The CLI is a small-footprint tool that you can use on its own or with the Console to complete Oracle Cloud Infrastructure tasks. The CLI provides the same core functionality as the Console, plus additional commands. Some of these, such as the ability to run scripts, extend Console functionality.

To be clear, we will not use the CLI itself but only its configuration file. The default name and location of this file (on Linux) is /home/opc/.oci/config.

It should have the following information:

  • user: OCID of the user calling the API.
  • fingerprint: Fingerprint for the public key that was added to this user.
  • key_file: Full path and filename of the private key.
  • tenancy: OCID of your tenancy.
  • region: An Oracle Cloud Infrastructure region.

For more details please click here.

As an example, mine looks like:

[DEFAULT]
user=ocid1.user.oc1..xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
fingerprint=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
key_file=/home/opc/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
compartment=ocid1.compartment.oc1..xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region=eu-frankfurt-1

So how do we get this information?

First, create the directory and the empty file that will store the configuration:

$ mkdir /home/opc/.oci

$ touch /home/opc/.oci/config
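
Since this directory will also hold your private API key, it is good practice to keep it readable only by you. A minimal sketch of the same steps with restrictive permissions (using $HOME instead of the hard-coded /home/opc path):

```shell
# Create the OCI config directory and an empty config file,
# readable only by the current user ($HOME instead of /home/opc).
mkdir -p "$HOME/.oci"
chmod 700 "$HOME/.oci"
touch "$HOME/.oci/config"
chmod 600 "$HOME/.oci/config"
```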

In the OCI console, on the top right (icon) go to Profile / User Settings

On this page, click on API Keys, bottom left:

Click on Add API Key;
Check Generate API Key Pair;
Download the Private Key;
Click Add.

The Configuration File Preview is generated with almost all the relevant information \o/.

So far we have something like:

[DEFAULT]
user=ocid1.user.oc1..aaaaaaaaaaaxxxxxxaaaaa
fingerprint=ca:e1:11:11:aa:bb:cc:dd:ee:ff:gg:hh:11:ii:jj:kk
tenancy=ocid1.tenancy.oc1..aaaaaaaaaaayyyyyyaaaaa
region=eu-frankfurt-1
key_file= # TODO

Copy and paste it into the config file we created previously.

Don't forget to update the key_file path (e.g. key_file=/home/opc/.oci/oci_api_key.pem) and to restrict the private key's permissions: chmod 600 /home/opc/.oci/oci_api_key.pem.

Brilliant! However, one piece of information is still missing: the compartment OCID.

In the menu of the console, go to: Identity & Security / Compartments
Select the relevant compartment (DBA in this example)


Copy the compartment OCID:

We can now update the config file with this last piece of information.

So your final config file will look like:

[DEFAULT]
user=ocid1.user.oc1..aaaaaaaaaaaxxxxxxaaaaa
fingerprint=ca:e1:11:11:aa:bb:cc:dd:ee:ff:gg:hh:11:ii:jj:kk
tenancy=ocid1.tenancy.oc1..aaaaaaaaaaayyyyyyaaaaa
region=eu-frankfurt-1
key_file=/home/opc/.oci/oci_api_key.pem
compartment=ocid1.compartment.oc1..aaaaaaaaaaazzzzzzaaaaa

Awesome!
Now we have all the parts to export our MySQL data with the MySQL Shell dump utilities.

MySQL Shell

MySQL Shell is an advanced client and code editor for MySQL.
In addition to the provided SQL functionality, similar to the mysql text client, MySQL Shell provides scripting capabilities for JavaScript and Python and includes APIs for working with MySQL.
X DevAPI enables you to work with both relational and document data.
AdminAPI enables you to work with InnoDB Cluster.

In this article we are mainly interested in the utility part of MySQL Shell.
MySQL Shell includes utilities for working with MySQL. To access the utilities from within MySQL Shell, use the util global object, which is available in JavaScript and Python modes, but not SQL mode.
The util global object provides many functions, like checkForServerUpgrade(), importJSON(), exportTable(), importTable(), loadDump(), dumpTables(), dumpInstance(), dumpSchemas(), …

In order to create a MySQL Database Service instance automatically populated with your data, you will use either util.dumpInstance() or util.dumpSchemas().

I'm going to use the latter – dumpSchemas() – but please keep in mind that the logic is the same for dumpInstance().
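
For the record, dumpInstance() takes the same kind of call, just without the schema list as first argument. A minimal sketch (in dry run mode, reusing this article's example bucket and namespace):

```
MySQL  localhost:3306 ssl  JS>
util.dumpInstance("", { osBucketName: "Sakila_dump", osNamespace: "azertyuiop", ocimds: true, threads: 8, dryRun: true })
```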

As a side note there are several articles on my blog about MySQL Shell dump & load utilities.
I would recommend you to read HeatWave – A MySQL cloud feature to speed up your queries and MySQL SHELL – The new era.

There are many relevant options. I will briefly describe them below (please RTFM for more details).

  • ocimds: Setting this option to true enables checks and modifications for compatibility with MySQL Database Service.
  • osBucketName: The name of the Oracle Cloud Infrastructure Object Storage bucket to which the dump is to be written.
  • osNamespace: The Oracle Cloud Infrastructure namespace where the Object Storage bucket named by osBucketName is located.
  • compatibility: Apply the specified requirements for compatibility with MySQL Database Service for all tables in the dump output, altering the dump files as necessary.
  • threads:  The number of parallel threads to use to dump chunks of data from the MySQL instance. Each thread has its own connection to the MySQL instance.
  • dryRun: Display information about what would be dumped with the specified set of options, and about the results of MySQL Database Service compatibility checks (if the ocimds option is specified), but do not proceed with the dump. Setting this option enables you to list out all of the compatibility issues before starting the dump.
  • ociParManifest: Setting this option to true generates a pre-authenticated request for read access (an Object Read PAR) for every item in the dump, and a manifest file listing all the pre-authenticated request URLs. The pre-authenticated requests expire after a week by default, which you can change using the ociParExpireTime option.
  • ociParExpireTime: The expiry time for the pre-authenticated request URLs that are generated when the ociParManifest option is set to true. The default is the current time plus one week, in UTC format.

If your config file is not in the default location, you will also need ociConfigFile.

  • ociConfigFile: An Oracle Cloud Infrastructure CLI configuration file that contains the profile to use for the connection, instead of the one in the default location ~/.oci/config.

osBucketName is the name of the bucket we created previously – Sakila_dump in this article.

osNamespace is available in the OCI console.
Open the menu and go to Storage / Object Storage & Archive Storage.
Click on the bucket name (if you don't see your bucket, check that you are in the right compartment).
The Namespace field is what you are looking for (alternatively, the OCI CLI command oci os ns get returns it).

The compatibility option provides many types of modification. Please read the documentation carefully in order to use the correct setup for your needs.
I'm using here: force_innodb, skip_invalid_accounts, strip_definers, strip_restricted_grants, strip_tablespaces.

If you have tables without primary keys (which is usually not a good thing), you can force the dump using the ignore_missing_pks compatibility mode.
But then you will not be able to use some advanced features like MDS High Availability or MDS HeatWave.

An alternative is to use create_invisible_pks instead of ignore_missing_pks.
Please check the documentation for the relevant details.

ociParManifest & ociParExpireTime are very useful in this context.
They allow you to create Pre-Authenticated Requests (PARs) and set an expiration time of your choice. Thus you will be able to load your data whenever it suits you.
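
The ociParExpireTime value is an ISO-8601 timestamp. Rather than writing it by hand, you can generate one – for example one month ahead, in UTC – with a quick shell sketch (assumes GNU date):

```shell
# Build an ISO-8601 expiry timestamp one month from now (UTC),
# suitable as a value for the ociParExpireTime option.
EXPIRE=$(date -u -d "+1 month" +"%Y-%m-%dT%H:%M:%S.000+00:00")
echo "$EXPIRE"
```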

To create or manage pre-authenticated requests, you need the PAR_MANAGE permission on the target bucket.

Note
While you only need PAR_MANAGE permission to create a pre-authenticated request, you must also have the appropriate permissions for the access type that you are granting. 
Details here
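
For reference, PAR_MANAGE is typically granted through an IAM policy. A hypothetical policy statement – the group name ObjectAdmins is an example, not something defined in this tutorial, so adapt it and double-check the IAM documentation:

```
Allow group ObjectAdmins to manage buckets in compartment DBA where all {request.permission='PAR_MANAGE'}
```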

Connect to your (on-premise / private or public cloud) MySQL instance using MySQL Shell:

$ mysqlsh root@localhost

Run the dump command in dry run mode:

MySQL  localhost:3306 ssl  JS> 
util.dumpSchemas(["sakila"], "", { osBucketName: "Sakila_dump", osNamespace: "azertyuiop", ocimds: true, ociParManifest: true, ociParExpireTime: "2021-08-11T00:20:00.000+02:00", threads: 8, compatibility: ["force_innodb", "skip_invalid_accounts", "strip_definers", "strip_restricted_grants", "strip_tablespaces"], dryRun: true})

If no error then… Go Go Go!!!
(Without dry run)

MySQL  localhost:3306 ssl  JS> 
util.dumpSchemas(["sakila"], "", { osBucketName: "Sakila_dump", osNamespace: "azertyuiop", ocimds: true, ociParManifest: true, ociParExpireTime: "2021-08-11T00:20:00.000+02:00", threads: 8, compatibility: ["force_innodb", "skip_invalid_accounts", "strip_definers", "strip_restricted_grants",  "strip_tablespaces"]})

Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Checking for compatibility with MySQL Database Service 8.0.25
... <snip> ...
1 thds dumping - 100% (47.27K rows / ~47.27K rows), 3.01K rows/s, 193.24 KB/s uncompressed, 35.48 KB/s compressed
Duration: 00:00:15s                                                                                              
Schemas dumped: 1                                                                                                
Tables dumped: 16                                                                                                
Uncompressed data size: 3.03 MB                                                                                  
Compressed data size: 557.12 KB                                                                                  
Compression ratio: 5.4                                                                                           
Rows written: 47273                                                                                              
Bytes written: 557.12 KB                                                                                         
Average uncompressed throughput: 193.22 KB/s                                                                     
Average compressed throughput: 35.48 KB/s  

We can now check the bucket content:

Well, that’s all for today!
In this episode, we have seen how to export data from a MySQL instance to an Oracle Cloud Infrastructure bucket using this awesome tool called MySQL Shell. This data can now easily be imported into a MySQL Database Service instance.

Next episode is:

Create a MySQL DB system from a MySQL Shell dump

Resources

Follow me on Linkedin

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

My Speaker Deck account.

Thanks for using HeatWave & MySQL!
