MySQL in Amazon (RDS)

March 16, 2021

Up until now we have seen MDS (MySQL Database Service) and MySQL in Azure.
As cloud technology keeps moving fast, I thought it would be a good idea to see how to set up MySQL in Amazon as a service, that is, RDS.

Let’s get started then!

What we need is an Amazon subscription, which is really easy to get, and there is also a Free Tier. The AWS Free Tier is available to you for 12 months starting on the date you create your AWS account. When your free usage expires, or if your application's use exceeds the free usage tiers, you simply pay standard, pay-as-you-go service rates.

For this session, I had root access. There is also the IAM user. AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be authenticated (signed in) and authorized (have permissions) to use Amazon RDS resources.

Once inside the Amazon subscription, we see the Management Console:

From the Management Console, we can identify the RDS service and choose it:

Once inside the RDS service, we click on Create Database:

From the databases provided, we choose MySQL and Standard Create, as we would like to set up our own configuration.
With Easy Create enabled, you specify only the DB engine type, DB instance size, and DB instance identifier.
Easy Create uses the default settings for the other configuration options.

With Easy Create not enabled, you specify more configuration options when you create a database, including ones for availability, security, backups, and maintenance.

Now, for the MySQL version, we have chosen the latest one provided by Amazon so far, that is 8.0.21:

If you noticed, there is also the option to choose a Production, Dev/Test, or Free Tier template.
The Production template comes with default options for high availability and consistent performance. We have chosen the Free Tier, as this is for educational purposes.

We have also given a Database Identifier, database-1. Each DB instance has a DB instance identifier.
This customer-supplied name uniquely identifies the DB instance when interacting with the Amazon RDS API and AWS CLI commands. The DB instance identifier must be unique for that customer in an AWS Region.

Next come the admin's credentials, the superuser of the database:

The DB instance class determines the computation and memory capacity of an Amazon RDS DB instance. The DB instance class you need depends on your processing power and memory requirements. Since we are only testing, we have chosen a basic tier:

db.t2 – Instance classes that provide a baseline performance level, with the ability to burst to full CPU usage.

Moving along with the rest of our options:

Amazon RDS provides three storage types: General Purpose SSD (also known as gp2), Provisioned IOPS SSD (also known as io1), and magnetic (also known as standard).

They differ in performance characteristics and price, which means that you can tailor your storage performance and cost to the needs of your database workload. 

You can create MySQL, MariaDB, Oracle, and PostgreSQL RDS DB instances with up to 64 tebibytes (TiB) of storage. You can create SQL Server RDS DB instances with up to 16 TiB of storage.
For this amount of storage, use the Provisioned IOPS SSD and General Purpose SSD storage types.

The following list briefly describes the three storage types:

  • General Purpose SSD – General Purpose SSD volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. Baseline performance for these volumes is determined by the volume’s size.
  • Provisioned IOPS – Provisioned IOPS storage is designed to meet the needs of I/O-intensive workloads, particularly database workloads, that require low I/O latency and consistent I/O throughput.
  • Magnetic – Amazon RDS also supports magnetic storage for backward compatibility. We recommend that you use General Purpose SSD or Provisioned IOPS for any new storage needs. The maximum amount of storage allowed for DB instances on magnetic storage is less than that of the other storage types.
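To get a feel for how the gp2 baseline scales with volume size, here is a small Python sketch of the gp2 formula as I understand it (an assumption on my side: 3 IOPS per GiB, with a floor of 100 IOPS and a cap of 16,000; always double-check the current AWS documentation):

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """Approximate gp2 baseline IOPS: 3 IOPS per GiB,
    floored at 100 and capped at 16,000 IOPS."""
    return max(100, min(16_000, size_gib * 3))

# A 20 GiB volume sits at the 100 IOPS floor (it can still burst to 3,000).
print(gp2_baseline_iops(20))     # 100
# A 1,000 GiB volume reaches a 3,000 IOPS baseline, so bursting matters less.
print(gp2_baseline_iops(1_000))  # 3000
```

So a small Free Tier volume always sits at the floor and relies on bursting, while a 1 TiB volume already has a 3,000 IOPS baseline of its own.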

On the Availability & Durability section, you will see the Multi-AZ deployment option. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for database instances, making them a natural fit for production database workloads. When you provision a Multi-AZ database instance, Amazon RDS synchronously replicates your data to a standby instance in a different Availability Zone (AZ).

The next section to check is Connectivity:

Be certain to choose the correct Virtual Private Cloud, as it cannot be changed after the database is created, and set the proper access (IP ranges) for your subnets.

Security groups control the access that traffic has in and out of a DB instance. Each VPC security group rule enables a specific source to access a DB instance in a VPC that is associated with that VPC security group.
The source can be a range of addresses, or another VPC security group. By specifying a VPC security group as the source, you allow incoming traffic from all instances (typically application servers) that use the source VPC security group.

Moving on, we have the Database options: we choose the DB name, and we keep the default parameter group and option group. We will talk about these in more detail in a few minutes.

The backups section is pretty straightforward: we may choose automatic backups, with a retention period of up to 35 days (the default is 7 days).

We can set a specific window for the backups and we may enable the logs that we wish:

Here, we have chosen the error logs and the slow query log.

And now we are ready to create our database:

Please note that once the MySQL RDS instance is ready, that is the only time we are able to see the connection details, and specifically the password, so we need to take note of them.

Once our database is ready for use, we will see a notification at the top of the page:

Now, remember the default parameter group that we saw earlier while setting up our MySQL RDS?
Basically, a DB parameter group acts as a container for engine configuration values that are applied to one or more DB instances.

If you create a DB instance without specifying a DB parameter group, the DB instance uses a default DB parameter group. Each default DB parameter group contains database engine defaults and Amazon RDS system defaults based on the engine, compute class, and allocated storage of the instance. You can’t modify the parameter settings of a default parameter group.

So, if we try to edit the default parameter group, we will receive the following error:

Therefore, if we wish to edit and configure the parameter values, we will need to create a new parameter group and assign it to our database.

Pretty straightforward, we name our parameter group and add a description if we wish:

And we are able to edit it and save the changes:

To apply it on our MySQL instance, we need to modify and set the corresponding parameter group:

We may also modify our backup options and set a window, which we didn't do while creating the instance:

On the Events tab, we may see all the actions that have been performed on our instance:

We are also able to check the Recommendations regarding performance and our queries, if enhanced monitoring is enabled:

And last but not least, under Actions we may create a replica:

To sum up, this is how to set up a MySQL RDS and the options that we have.

Conclusion

We have seen MySQL PaaS on the top Cloud Providers so far, and there is definitely much more to see in that department, so stay tuned!

I don’t need a hard disk in my computer if I can get to the server faster… carrying around these non-connected computers is byzantine by comparison.

Steve Jobs



MySQL Recovery

February 23, 2021

If you are dealing with data, and you most probably are if you are reading this, one of your biggest fears is not being able to retrieve it. In a world where data surrounds us, it is critical to be able to retrieve it fast and with the best possible consistency.

Thus, it is always a good idea to have high availability settings in place to avoid losing your data.

However, there are times when we may wish, or need, to save the database and our data, and be a DBA hero. Not an easy task, and it may be smoother to just perform a backup restore. Sadly, this is not always an option.

So, this is what we will be facing in this article: we are going to see what to do when there is data corruption in MySQL, and the steps we need to perform to try to save our database.

Before any task or operation, it is always a good idea to copy the already existing data at an OS level:

First, let’s stop MySQL:

systemctl stop mysqld

And then:

cp -r /datadir/ /backup_db

Before attempting to bring MySQL back up, it is a good idea to execute innochecksum to see the status of your database or of a specific table.

A few words on innochecksum, in case you are not familiar with it:

innochecksum prints checksums for InnoDB files. This tool reads an InnoDB tablespace file, calculates the checksum for each page, compares the calculated checksum to the stored checksum, and reports mismatches, which indicate damaged pages. It was originally developed to speed up verifying the integrity of tablespace files after power outages but can also be used after file copies. Because checksum mismatches cause InnoDB to deliberately shut down a running server, it may be preferable to use this tool rather than waiting for an in-production server to encounter the damaged pages.

So, in order to check a table, you may use a simple command like:

innochecksum table.ibd --log=/tmp/log.txt

The status will be logged to the file /tmp/log.txt; if the file doesn't exist, innochecksum will create it for you. The output will be something like the following:

page::41354; log sequence number:first = ; second =
Page:: uncorrupted

innochecksum offers a variety of options; the most useful is the ability to choose a starting or an ending page, to save time:

innochecksum --start-page=600 table.ibd --log=/tmp/log.txt

Please be aware that innochecksum can only be executed while the database is not running. Otherwise, you are going to receive a lock error:

Error: Unable to lock file:: table.ibd
fcntl: Resource temporarily unavailable

Once you have checked the tables and discovered the problematic table or tables, it is time to attempt to start up the server and see what we can do. Very important: in my.cnf, it is imperative to add the following:

innodb_force_recovery=1
log_error_verbosity=3

log_error_verbosity: The log_error_verbosity system variable specifies the verbosity for handling events intended for the error log. I always prefer to have this set as it is very helpful to identify various issues and at the same time to be able to check the progress on the error_log file.

innodb_force_recovery: The crash recovery mode, typically only changed in serious troubleshooting situations. Possible values range from 0 to 6. If the server comes up with a value of 3 or below, then the data can most probably be recovered. Always start with 1 and increase the value one step at a time if the server does not come up with your previous choice.
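As a quick reference for those levels, here is a small Python lookup sketch, paraphrased from the MySQL manual (the SRV_FORCE_* names are InnoDB's internal constants; verify the exact behavior against the docs for your version):

```python
# innodb_force_recovery levels, paraphrased from the MySQL manual.
# Each level also implies the restrictions of all lower levels.
FORCE_RECOVERY_LEVELS = {
    0: "Normal startup, no forced recovery",
    1: "(SRV_FORCE_IGNORE_CORRUPT) Let the server run even if corrupt pages are detected",
    2: "(SRV_FORCE_NO_BACKGROUND) Prevent the master thread and purge threads from running",
    3: "(SRV_FORCE_NO_TRX_UNDO) Do not run transaction rollbacks after crash recovery",
    4: "(SRV_FORCE_NO_IBUF_MERGE) Prevent insert buffer merge operations",
    5: "(SRV_FORCE_NO_UNDO_LOG_SCAN) Ignore undo logs; treat incomplete transactions as committed",
    6: "(SRV_FORCE_NO_LOG_REDO) Do not perform the redo log roll-forward",
}

def describe(level: int) -> str:
    """Return the short description for an innodb_force_recovery level."""
    return FORCE_RECOVERY_LEVELS[level]

# Levels 1 to 3 are the ones generally considered safe for data recovery.
for lvl in range(1, 4):
    print(lvl, describe(lvl))
```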

Now that we have these settings in our configuration, it is time to start up the server and hope for the best:

systemctl start mysqld

If we are lucky and the server does manage to come up with a value between 1 and 3, it is time to take a full dump of the corrupted table(s). If you are using version 8.0.22 or above, you may take the dump via MySQL Shell:

MySQL localhost:33060+ ssl JS > util.dumpTables("schema", [ "table" ], "/tmp/dump_table");

Otherwise, you may use the old way:

mysqldump -ubackup_user -p --set-gtid-purged=OFF --single-transaction schema table > table.sql

Once the backup is completed successfully, it is time to drop the corrupted table:

mysql -uroot -p --execute="DROP TABLE schema.table"

And then restore it from the dump we have taken:

mysql -uroot -p schema < table.sql

Once the restoration is done, we need to restart the MySQL server, but this time without the innodb_force_recovery parameter. Simply, edit your configuration file and mark it as a comment:

#innodb_force_recovery=1

Hopefully, this will work and our database will be up and running with all its data intact! Unfortunately, this is not always the case, and the database may not be able to come up for us to retrieve our data. In situations like these, we need to restore from an existing backup or from a replica, if one is available.

To avoid facing serious trouble with your data, consider having in place one (or, better yet, all) of the following:

  • Full backups (on a daily, weekly, or monthly basis)
  • Incremental backups
  • Binlog backups
  • In sync replicas
  • InnoDB Cluster

Additionally, in cases of a system crash, it is always helpful to set certain parameters to maintain data consistency. Let’s view the most basic ones:

  • innodb_flush_log_at_trx_commit = 1 :

Controls the balance between strict ACID compliance for commit operations and higher performance that is possible when commit-related I/O operations are rearranged and done in batches.
You can achieve better performance by changing the default value but then you can lose transactions in a crash.
The default setting of 1 is required for full ACID compliance. Logs are written and flushed to disk at each transaction commit.

  • sync_binlog = 1 :

Enables synchronization of the binary log to disk before transactions are committed. This is the safest setting but can have a negative impact on performance
due to the increased number of disk writes. In the event of a power failure or operating system crash, transactions that are missing from the binary log are only in a prepared state.
This permits the automatic recovery routine to roll back the transactions, which guarantees that no transaction is lost from the binary log.

  • innodb_doublewrite = 1 :

The doublewrite buffer is a storage area where InnoDB writes pages flushed from the buffer pool before writing the pages to their proper positions in the InnoDB data files.
If there is an operating system, storage subsystem, or unexpected mysqld process exit in the middle of a page write, InnoDB can find a good copy of the page from the doublewrite buffer
during crash recovery.

  • relay_log_recovery = 1 :

If enabled, this variable enables automatic relay log recovery immediately following server startup. The recovery process creates a new relay log file, initializes the SQL thread position
to this new relay log, and initializes the I/O thread to the SQL thread position. Reading of the relay log from the source then continues.

This global variable is read-only at runtime. Its value can be set with the --relay-log-recovery option at replica server startup, which should be used following an unexpected halt of
a replica to ensure that no possibly corrupted relay logs are processed, and must be used in order to guarantee a crash-safe replica.

  • relay_log_purge = 1 :

Disabling purging of relay logs when enabling the --relay-log-recovery option risks data consistency and is therefore not crash-safe.

  • innodb_directories :

Another parameter that can assist with crash recovery of corrupted tablespaces is innodb_directories. Tablespace discovery during crash recovery relies on the innodb_directories setting to identify tablespaces referenced in the redo logs. innodb_directories can be defined at start-up, and it can also be used when moving tablespaces while the server is offline.

On my.cnf:

[mysqld]
innodb_directories="directory_path_1;directory_path_2"
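Putting the crash-safety settings above together, a my.cnf sketch could look like the following (the directory paths are placeholders, and every value should be reviewed against your own workload before use):

```ini
[mysqld]
# Durability: flush the redo log and sync the binary log on every commit
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
# Torn-page protection via the doublewrite buffer
innodb_doublewrite = 1
# Crash-safe replication applier on replicas
relay_log_recovery = 1
relay_log_purge = 1
# Help crash recovery discover tablespaces (placeholder paths)
innodb_directories = "directory_path_1;directory_path_2"
```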

At this point, it is wise to mention that the recovery time also depends on innodb_log_file_size.

Generally, the combined size of the log files should be large enough that the server can smooth out peaks and troughs in workload activity, which often means that there is enough redo log space to handle more than an hour of write activity. The larger the value, the less checkpoint flush activity is required in the buffer pool, saving disk I/O.

Larger log files also make crash recovery slower.
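A common sizing heuristic, sketched below in Python, is to sample the Innodb_os_log_written status counter over a short interval and size the log group to hold roughly one hour of redo writes. This is a rule of thumb, not an official formula; the counter itself is read with SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written'.

```python
def suggested_redo_log_file_mb(bytes_start: int, bytes_end: int,
                               sample_seconds: int,
                               log_files_in_group: int = 2) -> int:
    """Size each redo log file so that the whole group holds
    roughly one hour of redo writes (rule-of-thumb sizing)."""
    bytes_per_second = (bytes_end - bytes_start) / sample_seconds
    one_hour_of_redo = bytes_per_second * 3600
    per_file = one_hour_of_redo / log_files_in_group
    return max(1, round(per_file / (1024 * 1024)))  # MiB per log file

# Example: ~30 MiB of redo written during a 60-second sample
# -> 0.5 MiB/s -> ~1.8 GiB per hour -> 900 MiB per log file (2 files).
print(suggested_redo_log_file_mb(0, 31_457_280, 60))  # 900
```

With the example numbers, a workload writing 0.5 MiB of redo per second would suggest roughly 900 MiB per log file with two files in the group, which is exactly the trade-off the paragraph above describes: less checkpoint flushing, but slower crash recovery.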

Conclusion

MySQL does offer stability; however, it is really hard to avoid corruption entirely, or to never face your server crashing at some point. If safety measures are in place, take a deep breath and dive in to save your data.

When you crash and burn, you have to pick yourself up and go on and hope to make up for it.

Burt Reynolds



Replicate from MySQL 5.7 to MySQL Database Service

January 26, 2021
Cloud from above by Olivier DASINI

MySQL Replication is a very common topology, widely used in various architectures.
People use it, among other things, for High Availability, Read Scalability, or Geographic Redundancy.

Another use case is to use MySQL Replication to seamlessly integrate a newer version of the server into your architecture.
Let's say you are running MySQL 5.7; you can then easily set up an 8.0 instance as a replica of your 5.7.

Extending this idea, it is also possible to replicate your MySQL 5.7 (or 8.0, obviously) to a MySQL Database Service (MDS) instance, the true MySQL PaaS on Oracle Cloud Infrastructure (OCI).

This is the story you are about to read now 🙂

Architecture

To make things clear, this is what the (final) architecture looks like:

  • An OCI Compute instance where MySQL 5.7.33 & MySQL Shell 8.0.23 are installed
    • Public IP : 123.45.678.90
    • Private IP : 10.0.0.9
  • A MDS instance
    • DB name : MDS2_FRA
    • Private IP : 10.0.1.23
  • A replication Channel
    • Channel name : mysqlchannel_MDS2_FRA
MySQL Replication topology from MySQL Enterprise Monitor

Plan

How to proceed?

Let's split the problem into sub-parts:

  1. Create a compute and an MDS instance
  2. Create a dedicated replication user on the source
  3. Dump the MySQL instance running on the OCI compute instance, so it can be migrated to MDS
  4. Load the dump into MDS
  5. Create a replication channel on MDS (from OCI to MDS)

Requirements

Before starting, please verify that your source follows the requirements.

Current limitations of MySQL Database Service Inbound Replication

  • Only Row-based replication supported
  • Only GTID-based replication is supported
  • Multi-source replication is not supported
  • Replication filters are not supported
  • Changes to the mysql schema are not replicated and cause replication to stop
  • Source and Replica must run with the same lower_case_table_names to avoid data transfer problems arising from letter case of database or table names in cross-platform topology
  • The inbound applier runs under the privileges of the DB System’s admin user

The full & up to date list is available here.

Create a compute & an MDS instance

The creation of these two objects is out of the scope of this article.
You'll find the relevant resources at the following links:

You can start using OCI for free : Oracle Cloud Infrastructure Free Tier

Create a MySQL DB System is very easy : Getting Started with MySQL Database Service

You also need to have a running compute instance attached to a public subnet on the same VCN as the MySQL DB System : Creating a Compute Instance

Indeed, MySQL 5.7 and MySQL Shell 8.0 are installed on this compute instance: Installing MySQL Shell on Linux (obviously, it also works if you are on Windows).

Create a dedicated replication user on the source

Assuming the prerequisites above are fine and the security lists are updated, we can now proceed.

Connect by SSH to the compute instance where MySQL 5.7 is installed:

ssh -i id_rsa  ubuntu@123.45.678.90

Note:
If you are using Ubuntu (the case here) the user is ubuntu.
For other GNU/Linux systems, the user should be opc.

Connect to the MySQL 5.7 instance using MySQL Shell 8.0.23:

$ mysqlsh --version
mysqlsh   Ver 8.0.23 for Linux on x86_64 - for MySQL 8.0.23 (MySQL Community Server (GPL))
$ mysqlsh root@localhost:3306 --sql
SQL> SELECT version();
+------------+
| version()  |
+------------+
| 5.7.33-log |
+------------+

Then create the replication user with the relevant privileges:

SQL> CREATE USER rplAdmin@'10.0.1.%' IDENTIFIED WITH 'mysql_native_password' BY 'Repl1234c@' REQUIRE SSL; 


SQL> GRANT REPLICATION SLAVE on *.* to rplAdmin@'10.0.1.%';

We can check the user:

SQL> SELECT user, host, plugin FROM mysql.user WHERE user = 'rplAdmin';
+-----------+----------+-----------------------+
| user      | host     | plugin                |
+-----------+----------+-----------------------+
| rplAdmin  | 10.0.1.% | mysql_native_password |
+-----------+----------+-----------------------+


SQL> SHOW GRANTS FOR rplAdmin@'10.0.1.%';
+---------------------------------------------------------+
| Grants for rplAdmin@10.0.1.%                            |
+---------------------------------------------------------+
| GRANT REPLICATION SLAVE ON *.* TO 'rplAdmin'@'10.0.1.%' |
+---------------------------------------------------------+

Dump the MySQL instance running on OCI so it can be migrated to MDS

Now we want to put the current MySQL 5.7 data on the MDS instance.

In this context the best tools to do this job are MySQL Shell utilities.
We will use the instance dump utility.

The backup output can be stored in an Object Storage bucket (I will write an article on that).
But today, let's keep it simple: we'll store the backup locally on the compute instance.

This strategy makes sense if you have enough free disk space on the compute instance. It will also be faster to restore the data on MDS this way than with the dump on object storage.

A good practice when attempting to import data from MySQL 5.7 into MDS is to use MySQL Shell's upgrade checker utility, util.checkForServerUpgrade(), to check the schemas and, if needed, proactively fix any compatibility issues identified.

SQL> \JS

JS> util.checkForServerUpgrade()

We’re good to go, so let’s create the backup directory:

$ mkdir -p /home/ubuntu/backup/

Then dump our MySQL 5.7 instance using util.dumpInstance() :

JS> util.dumpInstance("/home/ubuntu/backup/20210117", {ocimds: true, threads: 8, showProgress: true, compatibility: ["force_innodb", "strip_definers", "strip_restricted_grants", "strip_tablespaces"]})

Please note the ocimds & compatibility options.
Details are available in the documentation.

Load the dump in MDS

Now we must restore our dump.
We will use the Dump Loading Utility.

From the compute instance :

$ ssh -i id_rsa  ubuntu@123.45.678.90

Connect to the MDS instance (10.0.1.23):

$ mysqlsh mdsAdmin@10.0.1.23

Then load the MySQL 5.7 instance’s dump using util.loadDump() :

JS> util.loadDump("/home/ubuntu/backup/20210117",{ignoreVersion: true, updateGtidSet: "replace", threads: 8, waitDumpTimeout: 1800})

Please note the ignoreVersion & updateGtidSet options.
Documentation is available here.

Create a replication channel on MDS

Here is the easiest part.
Most of the work will be done using the OCI console \o/

The feature we are looking for is named Channel.
You can find it by clicking on “MySQL”, then “Channel”, and finally pushing the “Create Channel” button.

Create Inbound MySQL Replication in MySQL Database Service in OCI

The following is pretty straightforward.
Below is the main information:

Create in Compartment: Choose the right Compartment

Name (optional): ex. mysqlchannel_MDS2_FRA

Source Connection – Configure Connection to the MySQL Source
Hostname: 10.0.0.9

Username: rplAdmin
(from the replication user created on the source)

Password: Repl1234c@
(from the replication user created on the source)

SSL Mode (set when creating the replication user):
Required (REQUIRED)
Establish an encrypted connection.

Target – Configure the DB System target
Select a DB System: MDS2_FRA

Finally, push the Create Channel button

Create Inbound MySQL Replication in MySQL Database Service in OCI

et voilà!

Wait for the Channel icon to become green (ACTIVE)…

MySQL Replication status from MySQL Enterprise Monitor

In the meantime, if you have the soul of a DBA, you can use some commands on the MDS instance.

SHOW REPLICA STATUS
SQL > SHOW REPLICA STATUS\G

*************************** 1. row ***************************
             Replica_IO_State: Waiting for master to send event
                  Source_Host: 10.0.0.9
                  Source_User: rplAdmin
                  Source_Port: 3306
                           ...
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes 
                           ...
           Source_SSL_Allowed: Yes
                           ...
                Auto_Position: 1
         Replicate_Rewrite_DB: 
                 Channel_Name: replication_channel
           Source_TLS_Version: TLSv1.2,TLSv1.3
                           ...

Using the replication channel name (default: replication_channel) you can also use this query:

SELECT 
   SERVICE_STATE, 
   HOST, 
   USER, 
   PORT,
   CHANNEL_NAME
FROM performance_schema.replication_connection_configuration 
   INNER JOIN performance_schema.replication_applier_status 
      USING (CHANNEL_NAME) 
WHERE CHANNEL_NAME = 'replication_channel'\G

*************************** 1. row ***************************
SERVICE_STATE: ON
         HOST: 10.0.0.9
         USER: rplAdmin
         PORT: 3306
 CHANNEL_NAME: replication_channel

Last but not least, there is Lefred's favorite replication query (it's a gift):

SELECT
    concat(conn_status.channel_name, ' (', worker_id,')') AS channel,
    conn_status.service_state AS io_state,
    applier_status.service_state AS sql_state,
    format_pico_time(if(GTID_SUBTRACT(LAST_QUEUED_TRANSACTION, LAST_APPLIED_TRANSACTION) = "","0",
    abs(time_to_sec(if(time_to_sec(APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP)=0,0,
    timediff(APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP,now()))))) * 1000000000000) latency,
    format_pico_time((LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP -
    LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP) * 1000000000000) transport_time,
    format_pico_time((LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP -
    LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP) * 1000000000000) time_to_relay_log,
    format_pico_time((LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP -
    LAST_APPLIED_TRANSACTION_START_APPLY_TIMESTAMP) * 1000000000000) apply_time,
    conn_status.LAST_QUEUED_TRANSACTION AS last_queued_transaction,
    applier_status.LAST_APPLIED_TRANSACTION AS last_applied_transaction
FROM
    performance_schema.replication_connection_status AS conn_status JOIN performance_schema.replication_applier_status_by_worker AS applier_status 
        ON applier_status.channel_name = conn_status.channel_name
;

Conclusion

MySQL replication has always been easy to setup.
This is still true with MySQL 8.0.
This is even more true with MySQL Database Service 🙂

Easy, flexible, and powerful. This is exactly what your application needs.


Follow me on Linkedin

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

My Speaker Deck account.

Thanks for using HeatWave & MySQL!


MySQL in Azure

January 19, 2021
Ikaria Greece by Anastasia Papachristopoulou

Cloud computing is becoming more and more popular in IT circles, and it is growing super fast. Industries are charmed by its flexibility and by the ability to avoid maintaining their own infrastructure.
It is an easy solution that offers automation in every area and can scale up depending on the needs of each user and company.

There are many Cloud providers, such as Oracle Cloud Infrastructure, Microsoft Azure, Amazon Web Services, etc…
In this article, we are going to see how to set up a MySQL Database instance on Microsoft Azure Cloud.

At this point, it is important to clarify that we are going to set up MySQL as a Service (PaaS).

So, before starting with the actual Database creation, we need:

  • An Azure subscription; you may create a free Azure account, which offers a $200 credit for the first month and the most popular services free for the first year.
  • Have an available resource group.
  • Have access specifically for MySQL.

Let's sign in to the Azure portal, https://portal.azure.com/, and you should see a screen like the following:

Azure MySQL

We need to verify that we have the correct resources available for our subscription, in order to set up the MySQL DB:

  • From Azure Portal, go to Subscriptions
  • Then, go to Development Services and choose Resource Provider
  • Scroll down and check if Microsoft.DBforMySQL is registered
Azure MySQL

If it is not registered, then click Register on the top left corner:

Azure MySQL

Once the registration is completed, we are ready to create our DB.

Azure MySQL

To create the MySQL Database, we need to choose Azure Database for MySQL Servers:

Azure MySQL

By clicking the Add option on the top left, we are presented with two options, as shown below:

Azure MySQL
  • Single Server is a fully managed database service with minimal requirements for customization of the database. The Single Server platform is designed to handle most of the database management functions, such as patching, backups, high availability, and security, with minimal user configuration and control. The architecture is optimized to provide 99.99% availability on a single availability zone. Single servers are best suited for cloud-native applications designed to handle automated patching without the need for granular control over the patching schedule and custom MySQL configuration settings. The available MySQL version is 8.0.15.
  • Flexible Server (Preview) is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and server configuration customizations compared to the Single Server deployment, based on the user requirements. The Flexible Server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Flexible servers also provide better cost optimization controls, with the ability to start/stop your server, and burstable SKUs, ideal for workloads that do not need full compute capacity continuously. The available MySQL version is 5.7, with 8.0.21 in preview.

We choose Single Server to start, and we are presented with the following screen:

Azure MySQL

We need to set accordingly:

Subscription: Make sure that you have chosen the corresponding subscription

Resource Group: Choose the proper resource group with the correct resources

Server name: This is actually the name of the MySQL Server

Data Source: You may create the new server from a backup; here we choose None

Location: Choose the proper location that your resource group has access to

Version: Available versions are 5.6, 5.7, and 8.0

Compute + storage: Choose the best settings; we have Basic, General Purpose, and Memory Optimized. The main differences are between Basic and General Purpose/Memory Optimized.

Basic offers 2 cores and storage up to 1024GB.

General Purpose offers up to 64 cores and storage up to 16384GB.

Memory Optimized offers up to 32 cores and storage up to 16384GB.

All three available configurations offer Backup retention policy up to 35 days (default is 7 days).
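Based purely on the limits listed above, here is a tiny Python sketch that picks the smallest tier able to satisfy a cores/storage requirement (the figures come from this article's tier descriptions; verify them against current Azure limits before relying on them):

```python
# (tier, max_cores, max_storage_gb), ordered roughly from cheapest up,
# using the limits quoted in this article.
TIERS = [
    ("Basic", 2, 1024),
    ("General Purpose", 64, 16384),
    ("Memory Optimized", 32, 16384),
]

def pick_tier(cores: int, storage_gb: int) -> str:
    """Return the first (cheapest) tier that fits the requirements."""
    for name, max_cores, max_storage in TIERS:
        if cores <= max_cores and storage_gb <= max_storage:
            return name
    raise ValueError("no tier fits this workload")

print(pick_tier(2, 500))    # Basic
print(pick_tier(16, 4096))  # General Purpose
```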

General Purpose:

Azure MySQL

Admin username: Choose the proper name of the DB admin

Password and Confirm Password : Create a strong password for the DB admin

As soon as these settings are ready, we click Next:

Azure MySQL

Additional Settings:

We are able to select encryption:

Azure MySQL

And then moving on:

Azure MySQL

Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups. We won’t be organizing our resources with tags now, so we move on to Review + Create to check the price and the options we have selected:

Azure MySQL

While our DB is being created, we see its progress:

Azure MySQL

Once ready, we are able to access the new resource that we created :

Azure MySQL

From this page, we may stop the server, reset the password, restore or delete it, and restart it.

From the left sidebar, we may create alerts, view metrics, and gather information.
The server logs are also available in this section, as well as the Replica option, in case we need to set up a replica for our environment.

Azure MySQL

Same for the server settings:

Azure MySQL

One important note before we officially connect to our database:
we need to add the proper firewall rule to allow our client to connect to the MySQL instance:

Azure MySQL
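The same firewall rule can also be scripted; here is a sketch using the Azure CLI (the resource group, server name, rule name, and IP address below are placeholders for your own values):

```shell
# Allow a single client IP to reach the Single Server instance.
# All names and addresses below are placeholders.
az mysql server firewall-rule create \
  --resource-group my-resource-group \
  --server-name my-mysql-server \
  --name AllowMyWorkstation \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10
```

Using the same value for the start and end address restricts the rule to exactly one client IP.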

To connect to the database via MySQL Workbench, we use the server name as the Hostname and the admin login (in the username@servername format) as the Username, as you may see in the following screenshot:

Azure MySQL
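Outside Workbench, the same connection works from the mysql command-line client; a sketch (the server and admin names are placeholders, and note the username@servername login format that Single Server requires):

```shell
# Hostname is <servername>.mysql.database.azure.com;
# the user must be given as <admin>@<servername>.
mysql --host my-mysql-server.mysql.database.azure.com \
      --user myadmin@my-mysql-server \
      --password --ssl-mode=REQUIRED
```

SSL is enforced by default on Azure Database for MySQL, hence --ssl-mode=REQUIRED.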

Finally, we are connected to the Database and we are able to execute any query we wish:

Azure MySQL

If you wish to delete the database, Azure will warn you that the deletion is irreversible, and you will need to type the MySQL server’s name to confirm.

The same concept applies to the Flexible Server, but with a couple of differences:

Azure MySQL
Azure MySQL

Here, we need to specify the expected workload type, and based on that, the most appropriate settings are recommended. We also have Zone Redundancy available:

Azure MySQL

Backups offer the same retention policy as in Single Server:

Azure MySQL

To be able to connect, we need to set the firewall rules: either allow a specific public IP address to access the Flexible MySQL Server, or connect via a virtual network:

Azure MySQL
Azure MySQL

Flexible Server offers synchronous replication when Zone Redundancy is enabled; however, it does not offer read-only replicas, and due to the synchronous replication to another availability zone, the primary database server can experience elevated write and commit latency.

We need to mention here that if a newer version of MySQL is needed, we will need to create a VM and install it there ourselves.

Conclusion

Depending on your needs, cloud computing can certainly serve them, and most probably for a fair price. It offers a variety of services, so give it a try!

After all, it is no coincidence that our heads are up in the clouds.

References

https://www.mysql.com/

https://docs.microsoft.com/en-us/azure/mysql/select-right-deployment-type

https://docs.microsoft.com/en-us/azure/mysql/flexible-server/concepts-high-availability


MySQL SHELL – The new era

December 17, 2020
Kefallonia Lake Melissani by Anastasia Papachristopoulou

MySQL Shell is a modern tool that can be used by both developers and database administrators. It is a tool that can be used with any of the following:

  • SQL, Python, JavaScript
  • MySQL Document Store (NoSQL, X DevAPI)
  • JSON Documents / SQL tables

In one of our previous articles – Setting up Replication with various methods for MySQL 8 – we reviewed how to create a replica with multiple tools.
Now, it is time to perform the same action but with MySQL Shell.

In general, MySQL Shell is used with MySQL InnoDB Cluster to deploy a MySQL Group Replication, with or without MySQL Router.
Here we will examine how to create an asynchronous replication.

First of all, we may download MySQL Shell via the following url:

https://dev.mysql.com/downloads/shell/

We will be using MySQL Shell version 8.0.22, that is currently the latest and includes extra features.

Let’s start!

On the source database, we connect to mysqlsh:

mysqlsh root@localhost
Please provide the password for 'root@localhost': *******
Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No): N
MySQL Shell 8.0.22-commercial

Copyright (c) 2016, 2020, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 11 (X protocol)
Server version: 8.0.22-commercial MySQL Enterprise Server - Commercial
No default schema selected; type \use <schema> to set one.

MySQL  localhost:33060+ ssl  JS >

We may use option \sql or \py to use SQL or Python respectively instead of the JavaScript mode (\js):

MySQL  localhost:33060+ ssl  JS > \sql
Switching to SQL mode... Commands end with ;

MySQL  localhost:33060+ ssl  JS > \py
Switching to Python mode...
MySQL  localhost:33060+ ssl  Py >

To create the dump, we will be using util.dumpInstance().
Please note that the outputUrl (the target directory to store the dump files) must be an empty folder; otherwise you will receive an error like:

util.dumpInstance: Cannot proceed with the dump, the specified directory ‘/’ already exists at the target location / and is not empty. (ArgumentError).
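To avoid that ArgumentError, you can sanity-check the target directory before running the dump; here is a minimal Python sketch (the path is illustrative):

```python
import os
import sys

def check_dump_dir(path):
    """Return True if `path` is usable as a dumpInstance outputUrl:
    either it does not exist yet (mysqlsh will create it) or it is
    an empty directory."""
    if not os.path.exists(path):
        return True
    return os.path.isdir(path) and not os.listdir(path)

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/path_for_dump"
    if not check_dump_dir(target):
        sys.exit(f"{target} exists and is not empty; choose another outputUrl")
```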


If the folder doesn’t exist, mysqlsh is smart enough to create it for us:

MySQL  localhost:33060+ ssl  JS > util.dumpInstance("/path_for_dump", {dryRun: true, ocimds: false, threads:4, compatibility: ["strip_definers", "strip_restricted_grants","force_innodb"]})

Now, let’s take a minute to review the options we are using:

dryRun: This won’t dump anything, it will print information on what will be dumped.

ocimds: Enable checks for compatibility with MySQL Database Service (MDS).

threads: The number of threads to use for dumping data chunks from the server.

compatibility: Apply MySQL Database Service compatibility modifications when writing dump files. Supported values: “force_innodb”, “strip_definers”, “strip_restricted_grants”, “strip_tablespaces”.

force_innodb – The MySQL Database Service requires use of the InnoDB storage engine. This option will modify the ENGINE= clause of CREATE TABLE statements that use incompatible storage engines and replace them with InnoDB.

strip_definers – strips the “DEFINER=account” clause from views, routines, events and triggers. The MySQL Database Service requires special privileges to create these objects with a definer other than the user loading the schema. By stripping the DEFINER clause, these objects will be created with that default definer. Views and Routines will additionally have their SQL SECURITY clause changed from DEFINER to INVOKER. This ensures that the access permissions of the account querying or calling them are applied, instead of the user that created them. This should be sufficient for most users, but if your database security model requires that views and routines have more privileges than their invoker, you will need to manually modify the schema before loading it.

strip_restricted_grants – Certain privileges are restricted in the MySQL Database Service. Attempting to create users granting these privileges would fail, so this option allows dumped GRANT statements to be stripped of these privileges.

strip_tablespaces – Tablespaces have some restrictions in the MySQL Database Service. If you’d like to have tables created in their default tablespaces, this option will strip the TABLESPACE= option from CREATE TABLE statements.

As we are using a database that contains other storage engines apart from InnoDB engine, we will be using option force_innodb, along with strip_restricted_grants and strip_definers, as we have routines with DEFINER.

So, here we go:

MySQL  localhost:33060+ ssl  JS > util.dumpInstance("/path_for_dump", {ocimds: false, threads:4, compatibility: ["strip_definers", "strip_restricted_grants","force_innodb"]})

As this moves along, we may open a new connection to our target host, enable the local_infile parameter so the dump can be imported, and at the same time disable the InnoDB redo log.
Disabling redo logging speeds up data loading by avoiding redo log writes and doublewrite buffering:

mysqlsh back_up@localhost

MySQL  localhost:33060+ ssl  JS > \sql SET GLOBAL  local_infile=1;
MySQL  localhost:33060+ ssl  JS > \sql ALTER INSTANCE DISABLE INNODB REDO_LOG;
MySQL  localhost:33060+ ssl  JS > util.loadDump("/data_imp",{threads:4,updateGtidSet:"replace",skipBinlog:true,waitDumpTimeout:1800})

Before proceeding, let’s discuss once again the options we are using:

/data_imp: the directory on the target host where the dump files are located.

threads: The number of parallel threads to use to upload chunks of data to the target MySQL instance.

updateGtidSet: Apply the gtid_executed GTID set from the source MySQL instance, as recorded in the dump metadata, to the gtid_purged GTID set on the target MySQL instance.

skipBinlog: Skips binary logging on the target MySQL instance for the sessions used by the utility during the course of the import, by issuing a SET sql_log_bin=0 statement.

waitDumpTimeout: Setting this option activates concurrent loading by specifying a timeout (in seconds) for which the utility waits for further data after all uploaded data chunks in the dump location have been processed.

The updateGtidSet option is available from version 8.0.22.
If you are using 8.0.21, you will need to set the GTID manually, by locating the gtidExecuted field in the @.json file in the dump metadata:

MySQL  localhost:33060+ ssl  JS > \sql SET @@GLOBAL.gtid_purged= "+gtidExecuted_set";
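A small helper can pull that value out of the dump metadata for you; here is a minimal Python sketch, assuming the layout described above (gtidExecuted stored as a string in the dump’s @.json file):

```python
import json
import os

def gtid_purged_statement(dump_dir):
    """Read gtidExecuted from the dump's @.json metadata and build the
    SET statement to run on the target (for the 8.0.21 manual workflow)."""
    with open(os.path.join(dump_dir, "@.json")) as f:
        meta = json.load(f)
    gtid_set = meta["gtidExecuted"]
    # the leading '+' appends the set to any existing gtid_purged value
    return f'SET @@GLOBAL.gtid_purged = "+{gtid_set}";'
```

Run the returned statement on the target instance, e.g. via \sql in mysqlsh.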

So, basically, we have taken a dump from the source host and, at the same time, restored it to the target host.
Once it is completed (and as MySQL Shell is really fast, we won’t have to wait long), we log in to the target host and execute the following commands:

CHANGE MASTER TO MASTER_HOST='primary_IP',MASTER_USER='user_to_connect',MASTER_PASSWORD='******', MASTER_AUTO_POSITION=1;

START SLAVE;

And our replica is all set. (Do not forget to re-enable the redo log on the target with ALTER INSTANCE ENABLE INNODB REDO_LOG; once the load is complete.)

Apart from util.dumpInstance(), we also have util.dumpSchemas() and util.dumpTables(), the latter introduced in MySQL Shell 8.0.22, which offer the option of dumping specific schemas, tables, or views.

It is also important to mention that the above-mentioned utilities are fully compatible with Oracle Cloud Infrastructure.

Conclusion

MySQL Shell is a really useful tool that offers a variety of functionalities.
It is super fast and saves us a lot of time when dealing with broken replication or with data transfer.

Try it, you will love it 🙂


Setting up Replication with various methods for MySQL 8

December 10, 2020
Irish Castle and mountains by Olivier DASINI

In the world of databases, one of the most important goals we all try to achieve is high availability: not to lose our valuable data, and, in case one server fails, to always have another step in and take control.

So, it is critical to have, along with our primary database, other identical instances: the replicas.

In this blog post, we will review the most popular ways to set up replication.

MySQL offers a variety of options to achieve replication set up. Here we are going to review how to create a replica based on GTID replication by using one of the following methods:

  • mysqldump
  • rsync
  • MySQL Enterprise Backup (MEB)
  • Clone plugin

MySQL Shell, using the utilities, is another method. Please see this article.

Before we initiate any process, we need to make sure that the my.cnf on both the primary and the replica has the following parameters set:

server-id 
log-bin 
binlog_format 
gtid-mode=ON
enforce-gtid-consistency
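Put together, a minimal my.cnf fragment might look like this (the values shown are illustrative; server-id must be unique per host):

```ini
[mysqld]
server-id                = 1          # unique per server (e.g. 2 on the replica)
log-bin                  = binlog
binlog_format            = ROW
gtid-mode                = ON
enforce-gtid-consistency = ON
```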

1st Replication Method – mysqldump

So, starting with the mysqldump option, we need to make sure that we enable --set-gtid-purged:

mysqldump -u dumpUser -p --routines --events --triggers --single-transaction --set-gtid-purged=ON --all-databases > backup.sql

Once this is finished, if we do :

grep PURGED backup.sql

We will receive the corresponding GTID set that is needed for setting up our replica. The backup.sql file should then be moved to the replica; a simple scp will do.

Moving along to the replica, we perform the following command:

mysql -u root -p -e "source path/backup.sql"

After this import is completed, we simply need to execute the following:

CHANGE MASTER TO MASTER_HOST='primary_IP' ,MASTER_USER='user_to_connect',MASTER_PASSWORD='******', MASTER_AUTO_POSITION=1;
start slave;

If both of the above commands execute with no issue, perform a show slave status\G to verify that both Slave_IO_Running and Slave_SQL_Running are set to Yes.

2nd Replication Method – rsync

When using the rsync option, the database on the replica must not be running, and on the primary we need to lock the tables. As soon as the tables are locked, we may proceed with rsyncing the data.

So:

FLUSH TABLES WITH READ LOCK;
rsync -axvz /datadir/* root@replica:/datadir

Once the copy is completed, we will need to remove auto.cnf (otherwise the replica would start with the same server UUID as the primary, and we would not be able to set up our replica), and we will also remove mysql.sock.
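Those two removals can be scripted; a sketch, assuming the datadir path from the rsync command above:

```shell
# auto.cnf holds the server UUID copied from the primary; deleting it
# forces the replica to generate its own UUID on next start.
rm -f /datadir/auto.cnf
# remove the stale socket file copied over from the primary
rm -f /datadir/mysql.sock
```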

The next steps will be to start our database and set it up as a replica:

 systemctl start mysqld
 mysql -uroot -p
 CHANGE MASTER TO MASTER_HOST='primary_IP' ,MASTER_USER='user_to_connect',MASTER_PASSWORD='******',    MASTER_AUTO_POSITION=1;
 start slave;

And of course do not forget to reverse the lock command on the primary:

unlock tables;

3rd Replication Method – MEB

MEB, MySQL Enterprise Backup, is a commercially licensed backup utility for MySQL databases. It is one of the fastest and easiest ways to set up your replication while the primary db is up and running.

Please note that the db on the replica should not be running and the corresponding paths should be empty, as leftovers may cause confusion. For example, imagine a forgotten binlog file and binlog index making the server read from erroneous binlogs. This would leave the replication broken.

So, for mysqlbackup & restore we can perform:

mysqlbackup -u backupUser -p --backup-dir=/tmp --backup-image=- --compress backup-to-image | ssh mysql@replica_host "/bin/mysqlbackup --backup-dir=/tmp --datadir=path_of_data --log-bin=binlog_path --relay-log=relay_log_path --uncompress --backup-image=- copy-back-and-apply-log"

With a single command, we back up the data and send it to the replica as well.

Once this is completed, we go into the /tmp folder on the primary node and, inside the meta folder, take note of the command in the backup_gtid_purged.sql file. This is needed for setting up our replica:

 reset slave;
 reset master;
 SET @@GLOBAL.gtid_purged='...'; (this is the command found inside backup_gtid_purged.sql)
 CHANGE MASTER TO MASTER_HOST='primary_IP', MASTER_USER='user_to_connect', MASTER_PASSWORD='******', MASTER_AUTO_POSITION=1;
 start slave;

Our replica is up and running with no trouble.

4th Replication Method – clone plugin

Last but not least, we have the clone method. The clone plugin was introduced in MySQL 8.0.17, and it is a really useful tool.

The first thing we need to do is install the clone plugin on both the primary and the soon-to-be replica, and create a user with the proper grants:

INSTALL PLUGIN clone SONAME 'mysql_clone.so';
CREATE USER clone_user@'%' IDENTIFIED BY 'xxxxxx';
GRANT BACKUP_ADMIN, CLONE_ADMIN ON *.* to clone_user@'%';

Verify that plugin is indeed active:

SELECT PLUGIN_NAME, PLUGIN_STATUS FROM information_schema.PLUGINS WHERE PLUGIN_NAME = 'clone';
+-------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+-------------+---------------+
| clone       | ACTIVE        |
+-------------+---------------+

And we will also set up our donor on the replica:

SET GLOBAL clone_valid_donor_list = 'Primary_host_IP:port';

Now we need to log in to the replica with the clone_user we created in the steps above and execute the following command:

CLONE INSTANCE FROM 'clone_user'@'Primary_host_IP':port IDENTIFIED BY 'xxxxxx';

Once this is completed, we log out from clone_user and log in with root and set up our replica:

CHANGE MASTER TO MASTER_HOST='primary_IP' ,MASTER_USER='user_to_connect',MASTER_PASSWORD='******', MASTER_AUTO_POSITION=1;
start slave;

Please note that in all the methods, the master_user used to set up our replica needs to be able to connect from our replica and must hold the REPLICATION SLAVE privilege.
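For completeness, here is a sketch of creating such a replication user on the primary (the user name, host pattern, and password are placeholders; restrict the host pattern to your replica’s address):

```sql
-- Run on the primary before pointing the replica at it.
CREATE USER 'user_to_connect'@'replica_IP' IDENTIFIED BY '******';
GRANT REPLICATION SLAVE ON *.* TO 'user_to_connect'@'replica_IP';
```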

Conclusion

To sum up, MySQL offers a variety of ways to set up your replication; which one you choose depends on your needs for speed, performance, and downtime.


Create a Cloud Backup with MySQL Enterprise Backup

December 10, 2020
Cloud from above by Olivier DASINI

MySQL Enterprise Edition customers have access to MySQL Enterprise Backup.

MySQL Enterprise Backup provides enterprise-grade backup and recovery for MySQL. It delivers hot, online, non-blocking backups on multiple platforms including Linux, Windows, Mac & Solaris.

https://www.mysql.com/products/enterprise/backup.html

Cloud backup is a strategy increasingly used in organizations. Sending copies of your data to the cloud can help you prevent a devastating IT crisis and ensure business continuity.

Currently, MySQL Enterprise Backup supports three types of cloud storage services: Oracle Cloud Infrastructure (OCI) Object Storage, OpenStack Swift, and Amazon S3 (detailed below).

In this blog post I will use the OCI object storage, for obvious reasons 🙂 and also because it’s probably the best feature/price ratio choice.

Create a pre-authenticated request for a bucket

First we must create an OCI Pre-Authenticated Request (PAR) for a bucket.

Pre-authenticated requests provide a way to let users access a bucket or an object without having their own credentials, as long as the request creator has permissions to access those objects.

https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/usingpreauthenticatedrequests.htm

You can create, delete, or list pre-authenticated requests using the Console, using the CLI, or by using an SDK to access the API.
Click here to see how to create a PAR.

If you use the console, you’ll have something like:

OCI Pre-Authenticated Requests by Olivier DASINI

My Oracle Cloud Infrastructure Pre-Authenticated Request name is : MEB-par-bucket-20201203-1612

OCI Pre-Authenticated Requests by Olivier DASINI

A URL is generated. It is very important to save it!
We’ll use it with MySQL Enterprise Backup.

I can also see my list of PARs in the console:

OCI Pre-Authenticated Requests by Olivier DASINI

Create a Cloud Backup on Oracle Cloud Infrastructure Object Storage

I’m using MySQL Enterprise Backup 8.0.22.
The extra options to backup your data into OCI Object Storage are:

cloud-service : Cloud service for data backup or restoration.
Currently, there are 3 types of cloud storage services supported by mysqlbackup, represented by the following values for the option:

  • OCI: Oracle Cloud Infrastructure Object Storage
  • openstack: OpenStack Swift or compatible object storage services
  • S3: Amazon Simple Storage Service (S3) or compatible storage service.

cloud-object : The storage object for the backup image.
Note that names of objects within the same bucket have to be unique.

cloud-par-url : The Pre-Authenticated Request (PAR) URL for OCI Object Storage.
For a backup to OCI Object Storage, it is the PAR URL for the storage bucket; for restore and other operations on an object stored on OCI, it is the PAR URL for the object.

Click here to find the complete list of cloud storage options (OCI, Amazon S3 & OpenStack Swift options).

In this article the values of these options are:

  • cloud-service=OCI
  • cloud-par-url=https://objectstorage.us-ashburn-1.oraclecloud.com/p/JL7k0DnNE8DTV<…snip…>_bucket-20200908-1001/o/
  • cloud-object=myBck_20201203-1600.mbi

I assume you already know what to do before the first backup.

Let’s create the backup then:

$ mysqlbackup --defaults-file=/etc/mysql/my.cnf --backup-dir=/meb-tmp --with-timestamp --backup-image=- --cloud-service=OCI --cloud-par-url=https://objectstorage.us-ashburn-1.oraclecloud.com/p/JL7k0DnNE8DTV<...snip...>_bucket-20200908-1001/o/ --cloud-object=myBck_20201203-1600.mbi backup-to-image

MySQL Enterprise Backup  Ver 8.0.22-commercial for Linux on x86_64 (MySQL Enterprise - Commercial)
Copyright (c) 2003, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Starting with following command line ...

<... snip ...>

-------------------------------------------------------------
   Parameters Summary         
-------------------------------------------------------------
   Start LSN                  : 27229330944
   End LSN                    : 27229334048
-------------------------------------------------------------
201203 16:51:48 MAIN     INFO: Cloud operation completed.

mysqlbackup completed OK!

Don’t forget the value “-” (a dash) for the backup-image parameter:

--backup-image=-

The backup is now completed and stored on the Oracle Cloud Infrastructure object storage bucket.

OCI Pre-Authenticated Requests by Olivier DASINI

Restore a Backup from Oracle Cloud Infrastructure

Again, a Pre-Authenticated Request (PAR) URL for OCI Object Storage will be used.
This time we will use a PAR on an object with Read PAR privileges created before the restoration.

Using the console, after selecting my Bucket, I create a pre-authenticated request for an object:

OCI Pre-Authenticated Requests by Olivier DASINI

The PAR target is Object.
My object name is myBck_20201203-1600.mbi.
A read-only access type is sufficient: Access Type: Permit reads on the object.

Et voilà!
My PAR is created

OCI Pre-Authenticated Requests by Olivier DASINI

A URL is generated. It is very important to save it!
We’ll use it with MySQL Enterprise Backup.

Now we have all that is needed to restore with MySQL Enterprise Backup.

I assume you already know how to restore a MySQL instance with MEB.

Thus in our context, to restore a single-file backup from an OCI Object Storage to a MySQL Server, we will use:

$ mysqlbackup --defaults-file=/etc/mysql/my.cnf --backup-image=- --with-timestamp --backup-dir=/meb-tmp --cloud-service=OCI --cloud-par-url=https://objectstorage.us-ashburn-1.oraclecloud.com/p/FvbPbq7oJAGP<...snip...>_bucket-20200908-1001/o/myBck_20201203-1600.mbi  copy-back-and-apply-log


MySQL Enterprise Backup  Ver 8.0.22-commercial for Linux on x86_64 (MySQL Enterprise - Commercial)
Copyright (c) 2003, 2020, Oracle and/or its affiliates. All rights reserved.
<...snip...>
201208 12:11:27 MAIN     INFO: Apply-log operation completed successfully.
201208 12:11:27 MAIN     INFO: Full Backup has been restored successfully.
201208 12:11:27 MAIN     INFO: Cloud operation completed.

mysqlbackup completed OK! with 4 warnings

The restore is now completed.

As a side note, if you want to see the 4 warnings, take a look at the MEB log file located in the --backup-dir (/meb-tmp in this article):

$ grep WARNING /meb-tmp/2020-12-08_11-59-58/meta/MEB_2020-12-08.11-59-58.log

201208 11:59:58 MAIN  WARNING: If you restore to a server of a different version, the innodb_data_file_path parameter might have a different default. In that case you need to add 'innodb_data_file_path=ibdata1:12M:autoextend' to the target server configuration.
201208 11:59:58 MAIN  WARNING: If you restore to a server of a different version, the innodb_log_files_in_group parameter might have a different default. In that case you need to add 'innodb_log_files_in_group=2' to the target server configuration.
201208 11:59:58 MAIN  WARNING: If you restore to a server of a different version, the innodb_log_file_size parameter might have a different default. In that case you need to add 'innodb_log_file_size=50331648' to the target server configuration.
201208 12:11:27 MAIN  WARNING: External plugins list found in meta/backup_content.xml. Please ensure that all plugins are installed in restored server.

Nothing really serious in this context.
If you need more information, please click here.

The rest of the story is classic: restart your MySQL instance and you’re good to go 🙂

One more thing to know and keep in mind: a cloud backup always uses one write thread.
In other words, backup and restore durations can be much longer than for a local operation.

It is therefore good practice, when possible, to keep a local copy of the backup file.

It is usually easier and much faster to recover from a local location.

MySQL Enterprise Edition

MySQL Enterprise Edition includes the most comprehensive set of advanced features, management tools and technical support to achieve the highest levels of MySQL scalability, security, reliability, and uptime.

It reduces the risk, cost, and complexity in developing, deploying, and managing business-critical MySQL applications.

MySQL Enterprise Edition server Trial Download (Note – Select Product Pack: MySQL Database).

MySQL Enterprise Edition

References

Follow me on Linkedin

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

My Speaker Deck account.

Thanks for using HeatWave & MySQL!

Comments Off on Create a Cloud Backup with MySQL Enterprise Backup

MySQL 8.0.22 New Features Summary

November 10, 2020
Sakila mozaik by Olivier DASINI

Presentation of some of the new features of MySQL 8.0.22 released on October 19th, 2020.

Highlight

Slides

Download this presentation and others on my SlideShare account.

Video

That might interest you

Follow me on Linkedin

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

My Speaker Deck account.

Thanks for using HeatWave & MySQL!

Comments Off on MySQL 8.0.22 New Features Summary

Automatic connection failover for Asynchronous Replication

November 5, 2020
Plage par Olivier DASINI

TL;DR

Since MySQL 8.0.22 there is a mechanism in asynchronous replication that makes the receiver automatically try to re-establish an asynchronous replication connection to another sender, in case the current connection gets interrupted due to the failure of the current sender.


Asynchronous automatic connection failover automates re-establishing an asynchronous replication connection to another sender from the sender list.
That means that if the source of a replica crashes, the replica will be able to automatically connect to another source.
One of its biggest benefits is improving Disaster Recovery (DR) architectures.

With this feature, a typical architecture is a 3-node asynchronous replication cluster:
2 primary nodes in active/passive mode (if you need an active/active architecture, use MySQL InnoDB Cluster), and a third node connected to one of the primaries, either for DR or for some specialized task, like analytics.
So, in case its primary becomes unavailable (the replication I/O thread stops because the source stopped or because of a network failure), this replica will automatically connect to the other primary.

Another architecture is to use this asynchronous automatic connection failover feature with MySQL InnoDB Cluster.

Hey, guess what? This is the topic of this tutorial, and along the way we will use some of the fanciest MySQL 8.0 features 🙂

Context

4 MySQL 8.0.22 instances :


– mysql_node1 : 172.20.0.11
– mysql_node2 : 172.20.0.12
– mysql_node3 : 172.20.0.13
These 3 are members of a MySQL Group Replication cluster.
I created a MySQL InnoDB Cluster (resources here and here).
I will not use MySQL Router in this tutorial.


– mysql_node4 : 172.20.0.14
It’s my replica.
It will be asynchronously connected to the Group Replication.

Because a picture is worth a thousand words, below is the architecture I want to achieve :

Asynchronous Replica of a MySQL Group Replication by Olivier DASINI

Graphs in this article are from MySQL Enterprise Monitor.

Current status

I’m using MySQL 8.0.22 :

$ mysqlsh root@mysql_node1:3306 --cluster

mysql_node1:3306 ssl JS> \sql SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 8.0.22    |
+-----------+

I have a 3 nodes Group Replication cluster up and running :

mysql_node1:3306 ssl JS> 
cluster.status()
{
    "clusterName": "ic1", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql_node1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "mysql_node1:3306": {
                "address": "mysql_node1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "replicationLag": null, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.22"
            }, 
            "mysql_node2:3306": {
                "address": "mysql_node2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": null, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.22"
            }, 
            "mysql_node3:3306": {
                "address": "mysql_node3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": null, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.22"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "mysql_node1:3306"
}

Setup the replication user

Like you know, there are some preparation steps to be able to use MySQL Asynchronous Replication. If you’re not familiar with replication using GTID, please read this.

On the Group Replication primary – currently mysql_node1 – I set up the asynchronous replication user:

mysql_node1:3306 ssl SQL> 
CREATE USER 'repl'@'172.20.0.%' IDENTIFIED BY 'S3cr€tRepl' REQUIRE SSL;


GRANT REPLICATION SLAVE ON *.* TO 'repl'@'172.20.0.%';

Setup the clone user

The clone plugin is one of my favorite feature!

It permits cloning data locally or from a remote MySQL server instance.
Cloned data is a physical snapshot of data stored in InnoDB that includes schemas, tables, tablespaces, and data dictionary metadata.
The cloned data comprises a fully functional data directory, which permits using the clone plugin for MySQL server provisioning.

It’s a very convenient way to copy data from the source (the Group Replication) to the replica (mysql_node4).

Thanks to InnoDB Cluster the Clone plugin is already installed on the 3 members.
So on the primary member – currently mysql_node1 – I’ll create a dedicated clone user with the donor privileges for using and monitoring the clone plugin:

mysql_node1:3306 ssl SQL> 
CREATE USER clone_user IDENTIFIED BY "S3cr€tClone";


GRANT BACKUP_ADMIN, EXECUTE ON *.* TO clone_user;


GRANT SELECT ON performance_schema.* TO clone_user;

Note that I could have used the dedicated cluster administrator user instead of creating a specialized clone user.

On mysql_node4, the future replica, I’ll create the same user but with the recipient privileges.
Before that, I’ll install the clone plugin and set the clone donor list:

mysql_node4:3306 ssl SQL> 
INSTALL PLUGIN clone SONAME 'mysql_clone.so';


SET PERSIST clone_valid_donor_list = 'mysql_node1:3306,mysql_node2:3306,mysql_node3:3306';


CREATE USER clone_user IDENTIFIED BY "S3cr€tClone";


GRANT CLONE_ADMIN, EXECUTE ON *.* to clone_user;


GRANT SELECT ON performance_schema.* TO clone_user; 

Clone a MySQL instance

Now we have everything set to create the replica from a member of the group.

On the future replica – mysql_node4 – we can now run the clone instance command:

mysql_node4:3306 ssl SQL> 
CLONE INSTANCE FROM 'clone_user'@'mysql_node1':3306 IDENTIFIED BY 'S3cr€tClone';

If you want to monitor the cloning progress, run the following query:

mysql_node4:3306 ssl SQL> 
SELECT 
   STATE, 
   CAST(BEGIN_TIME AS DATETIME) AS "START TIME", 
   CASE WHEN END_TIME IS NULL 
      THEN LPAD(sys.format_time(POWER(10,12) * (UNIX_TIMESTAMP(NOW()) - UNIX_TIMESTAMP(BEGIN_TIME))), 10, ' ') 
      ELSE LPAD(sys.format_time(POWER(10,12) * (UNIX_TIMESTAMP(END_TIME) - UNIX_TIMESTAMP(BEGIN_TIME))), 10, ' ') 
   END AS DURATION 
FROM performance_schema.clone_status;

When the cloning is over, the MySQL instance must be restarted (this normally happens automatically).
After the restart, you can verify that the clone completed successfully with the queries below:

mysql_node4:3306 ssl SQL> 
SELECT 
   STATE, 
   ERROR_NO, 
   BINLOG_FILE, 
   BINLOG_POSITION, 
   GTID_EXECUTED, 
   CAST(BEGIN_TIME AS DATETIME) as "START TIME", 
   CAST(END_TIME AS DATETIME) as "FINISH TIME", 
   sys.format_time(POWER(10,12) * (UNIX_TIMESTAMP(END_TIME) - UNIX_TIMESTAMP(BEGIN_TIME))) AS DURATION 
FROM performance_schema.clone_status \G

and:

mysql_node4:3306 ssl SQL> 
SELECT 
   STAGE, 
   STATE, 
   CAST(BEGIN_TIME AS DATETIME) as "START TIME", 
   CAST(END_TIME AS DATETIME) as "FINISH TIME", 
   LPAD(sys.format_time(POWER(10,12) * (UNIX_TIMESTAMP(END_TIME) - UNIX_TIMESTAMP(BEGIN_TIME))), 10, ' ') AS DURATION 
FROM performance_schema.clone_progress;

Add the replica

First, I will add the configuration information for a replication source server to the source list for a replication channel.
To do that, we use the function asynchronous_connection_failover_add_source.
The information needed is the replication channel name, the source server address, port and network namespace, and also the weight.

More information is available in the MySQL documentation.

For this tutorial I chose the following values:

  • Channel name: autof
  • Source server addresses: mysql_node1, mysql_node2, mysql_node3
  • Source server port: 3306
  • Source server network namespace: '' (empty)
  • Weight: respectively 50, 90, 90

mysql_node4:3306 ssl SQL> 
SELECT asynchronous_connection_failover_add_source('autof', 'mysql_node1', 3306, '', 50);


SELECT asynchronous_connection_failover_add_source('autof', 'mysql_node2', 3306, '', 90);


SELECT asynchronous_connection_failover_add_source('autof', 'mysql_node3', 3306, '', 90);

The replica’s source lists for each replication channel for the asynchronous connection failover mechanism can be viewed in the Performance Schema table replication_asynchronous_connection_failover.

mysql_node4:3306 ssl SQL> 
SELECT * FROM performance_schema.replication_asynchronous_connection_failover;
+--------------+-------------+------+-------------------+--------+
| Channel_name | Host        | Port | Network_namespace | Weight |
+--------------+-------------+------+-------------------+--------+
| autof        | mysql_node1 | 3306 |                   |     50 |
| autof        | mysql_node2 | 3306 |                   |     90 |
| autof        | mysql_node3 | 3306 |                   |     90 |
+--------------+-------------+------+-------------------+--------+

To set the parameters that the replica server uses to connect to the source, we use the well-known CHANGE MASTER TO statement.
You already know most of its clauses, so let's focus only on some of them:

  • SOURCE_CONNECTION_AUTO_FAILOVER : activates the asynchronous connection failover mechanism.
  • MASTER_RETRY_COUNT & MASTER_CONNECT_RETRY : define the failover time. The default setting is… 60 days, probably not what you want :). So you should (most likely) reduce these settings; e.g. for a failover time of 1 minute, set them respectively to 20 and 3 (20 x 3 = 60 seconds).
  • FOR CHANNEL : names the replication channel the statement applies to.
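For example, to target a failover time of about 1 minute, the retry settings could look like this (an illustrative sketch; adjust the values to your needs):

mysql_node4:3306 ssl SQL> 
-- failover time ≈ MASTER_CONNECT_RETRY x MASTER_RETRY_COUNT = 3s x 20 = 60s
CHANGE MASTER TO MASTER_CONNECT_RETRY=3, MASTER_RETRY_COUNT=20 FOR CHANNEL 'autof';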

Now let’s configure the replication:

mysql_node4:3306 ssl SQL> 
CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='S3cr€tRepl', MASTER_HOST='mysql_node2', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_SSL=1, SOURCE_CONNECTION_AUTO_FAILOVER=1, MASTER_RETRY_COUNT=3, MASTER_CONNECT_RETRY=5 FOR CHANNEL 'autof';

Please note that my failover time in this tutorial is 15 seconds (3 x 5). Obviously the right setting depends on your needs; a longer duration will probably make more sense in real life.

Then start the replication, on channel autof using START REPLICA:

mysql_node4:3306 ssl SQL> 
START REPLICA FOR CHANNEL "autof";

Status information of the replication can be seen with SHOW REPLICA STATUS:

mysql_node4:3306 ssl SQL> 
SHOW REPLICA STATUS\G
*************************** 1. row ***************************
             Replica_IO_State: Waiting for master to send event
                  Source_Host: mysql_node2
                  Source_User: repl
                  Source_Port: 3306
                Connect_Retry: 5
                           ...
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes
                           ...
           Source_Retry_Count: 3
                           ...
                Auto_Position: 1
                           ...
                 Channel_Name: autof
                           ...

Automatic connection failover

We have now configured our replication, with an InnoDB Cluster/Group Replication as a source and a standalone MySQL server as a replica.
Let’s see how the automatic connection failover works.

Restart the replica

I want to see the behavior after a restart of the replication.

State before the stop:

mysql_node4:3306 ssl SQL> 
SELECT 
   CHANNEL_NAME, 
   SERVICE_STATE, 
   HOST, 
   CONNECTION_RETRY_INTERVAL, 
   CONNECTION_RETRY_COUNT 
FROM performance_schema.replication_connection_configuration 
   INNER JOIN performance_schema.replication_applier_status 
      USING (CHANNEL_NAME) 
WHERE CHANNEL_NAME = 'autof'\G
*************************** 1. row ***************************
             CHANNEL_NAME: autof
            SERVICE_STATE: ON
                     HOST: mysql_node2
CONNECTION_RETRY_INTERVAL: 5
   CONNECTION_RETRY_COUNT: 3

Stop the replication:

mysql_node4:3306 ssl SQL> 
STOP REPLICA;


SELECT 
   CHANNEL_NAME, 
   SERVICE_STATE, 
   HOST, 
   CONNECTION_RETRY_INTERVAL, 
   CONNECTION_RETRY_COUNT 
FROM performance_schema.replication_connection_configuration 
   INNER JOIN performance_schema.replication_applier_status 
      USING (CHANNEL_NAME) 
WHERE CHANNEL_NAME = 'autof'\G
*************************** 1. row ***************************
             CHANNEL_NAME: autof
            SERVICE_STATE: OFF
                     HOST: mysql_node2
CONNECTION_RETRY_INTERVAL: 5
   CONNECTION_RETRY_COUNT: 3

After a while, start the replication:

mysql_node4:3306 ssl SQL> 
START REPLICA;

SELECT 
   CHANNEL_NAME, 
   SERVICE_STATE, 
   HOST, 
   CONNECTION_RETRY_INTERVAL, 
   CONNECTION_RETRY_COUNT 
FROM performance_schema.replication_connection_configuration 
   INNER JOIN performance_schema.replication_applier_status 
      USING (CHANNEL_NAME) 
WHERE CHANNEL_NAME = 'autof'\G
*************************** 1. row ***************************
             CHANNEL_NAME: autof
            SERVICE_STATE: ON
                     HOST: mysql_node2
CONNECTION_RETRY_INTERVAL: 5
   CONNECTION_RETRY_COUNT: 3

The replica picks up where it left off… as you would have expected.

Short unavailability of the source

I want to see the behavior after a short unavailability of the source, that is, a duration shorter than the failover threshold – connection_retry_interval x connection_retry_count – 15 seconds (5 x 3) in this example.

State before the stop of the source, mysql_node2:

mysql_node4:3306 ssl SQL> 
SELECT 
   CHANNEL_NAME, 
   SERVICE_STATE, 
   HOST, 
   CONNECTION_RETRY_INTERVAL, 
   CONNECTION_RETRY_COUNT 
FROM performance_schema.replication_connection_configuration 
   INNER JOIN performance_schema.replication_applier_status 
      USING (CHANNEL_NAME) 
WHERE CHANNEL_NAME = 'autof'\G
*************************** 1. row ***************************
             CHANNEL_NAME: autof
            SERVICE_STATE: ON
                     HOST: mysql_node2
CONNECTION_RETRY_INTERVAL: 5
   CONNECTION_RETRY_COUNT: 3

… Stop mysql_node2 for 10 seconds …


State after the start of the source mysql_node2:

mysql_node4:3306 ssl SQL> 
SELECT 
   CHANNEL_NAME, 
   SERVICE_STATE, 
   HOST, 
   CONNECTION_RETRY_INTERVAL, 
   CONNECTION_RETRY_COUNT 
FROM performance_schema.replication_connection_configuration 
   INNER JOIN performance_schema.replication_applier_status 
      USING (CHANNEL_NAME) 
WHERE CHANNEL_NAME = 'autof'\G
*************************** 1. row ***************************
             CHANNEL_NAME: autof
            SERVICE_STATE: ON
                     HOST: mysql_node2
CONNECTION_RETRY_INTERVAL: 5
   CONNECTION_RETRY_COUNT: 3

Well… nothing changed!
The unavailability of the source was not long enough to trigger the failover.
That is great for preventing unnecessary failovers.

Long unavailability of the source

I want to see the behavior after a longer unavailability of the source, that is, a duration greater than the failover threshold – connection_retry_interval x connection_retry_count – 15 seconds (5 x 3) in this example.

State before the stop of the source, mysql_node2:

mysql_node4:3306 ssl SQL> 
SELECT 
   CHANNEL_NAME, 
   SERVICE_STATE, 
   HOST, 
   CONNECTION_RETRY_INTERVAL, 
   CONNECTION_RETRY_COUNT 
FROM performance_schema.replication_connection_configuration 
   INNER JOIN performance_schema.replication_applier_status 
      USING (CHANNEL_NAME) 
WHERE CHANNEL_NAME = 'autof'\G
*************************** 1. row ***************************
             CHANNEL_NAME: autof
            SERVICE_STATE: ON
                     HOST: mysql_node2
CONNECTION_RETRY_INTERVAL: 5
   CONNECTION_RETRY_COUNT: 3

… Stop mysql_node2 for 20 seconds …

mysql_node4:3306 ssl SQL> 
SELECT 
   CHANNEL_NAME, 
   SERVICE_STATE, 
   HOST, 
   CONNECTION_RETRY_INTERVAL, 
   CONNECTION_RETRY_COUNT 
FROM performance_schema.replication_connection_configuration 
   INNER JOIN performance_schema.replication_applier_status 
      USING (CHANNEL_NAME) 
WHERE CHANNEL_NAME = 'autof'\G
*************************** 1. row ***************************
             CHANNEL_NAME: autof
            SERVICE_STATE: ON
                     HOST: mysql_node3
CONNECTION_RETRY_INTERVAL: 5
   CONNECTION_RETRY_COUNT: 3

As expected, the asynchronous automatic connection failover took place. \o/
The new source is now mysql_node3, because it has a bigger weight than mysql_node1 (90 vs 50) and because it was available 🙂

Limitations

Please be aware that in 8.0.22 this feature lacks some of the functionality needed to replace MySQL Router as a means to replicate from an InnoDB Cluster/Group Replication setup.

In particular, the replica:

  • does not automatically learn about new members or members that are removed
  • does not follow the primary role; it stays connected to whatever host it was connected to
  • does not follow the majority network partition
  • does not care whether a host is still part of the group; as long as it can connect, it will
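For instance, if a member permanently leaves the group, in 8.0.22 you have to remove it from the source list yourself with asynchronous_connection_failover_delete_source (a hypothetical cleanup, assuming mysql_node3 has left the cluster for good):

mysql_node4:3306 ssl SQL> 
-- Remove the departed member from the channel's source list
SELECT asynchronous_connection_failover_delete_source('autof', 'mysql_node3', 3306, '');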

These limitations will be lifted in future versions.

This is a very nice feature starting with MySQL 8.0.22, useful for both MySQL Replication and MySQL Group Replication architectures.

And you know what? There is more to come 😉

Stay tuned!


Follow me on Linkedin

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

My Speaker Deck account.

Thanks for using HeatWave & MySQL!


Plan your MySQL upgrade

October 21, 2020
Jardin Balata Martinique by Olivier DASINI

I’ve made a short video that will give you tips and tricks to successfully upgrade to MySQL 8.

This information comes from my presentation: Upgrade from MySQL 5.7 to MySQL 8.0

Thanks for using MySQL!

