
{"id":4351,"date":"2021-02-23T10:16:32","date_gmt":"2021-02-23T09:16:32","guid":{"rendered":"http:\/\/dasini.net\/blog\/?p=4351"},"modified":"2021-02-23T10:16:34","modified_gmt":"2021-02-23T09:16:34","slug":"mysql-recovery","status":"publish","type":"post","link":"https:\/\/dasini.net\/blog\/2021\/02\/23\/mysql-recovery\/","title":{"rendered":"MySQL Recovery"},"content":{"rendered":"\n<figure class=\"wp-block-gallery columns-1 is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\"><ul class=\"blocks-gallery-grid\"><li class=\"blocks-gallery-item\"><figure><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"977\" src=\"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/2021\/02\/IMG-7b894485ba416774df71d5cbbabd964b-V-3-1024x977.jpg?resize=1024%2C977\" alt=\"\" data-id=\"4377\" data-full-url=\"http:\/\/dasini.net\/blog\/wp-content\/uploads\/2021\/02\/IMG-7b894485ba416774df71d5cbbabd964b-V-3.jpg\" data-link=\"http:\/\/dasini.net\/blog\/?attachment_id=4377\" class=\"wp-image-4377\" srcset=\"https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/2021\/02\/IMG-7b894485ba416774df71d5cbbabd964b-V-3.jpg?resize=1024%2C977&amp;ssl=1 1024w, https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/2021\/02\/IMG-7b894485ba416774df71d5cbbabd964b-V-3.jpg?resize=300%2C286&amp;ssl=1 300w, https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/2021\/02\/IMG-7b894485ba416774df71d5cbbabd964b-V-3.jpg?resize=768%2C733&amp;ssl=1 768w, https:\/\/i0.wp.com\/dasini.net\/blog\/wp-content\/uploads\/2021\/02\/IMG-7b894485ba416774df71d5cbbabd964b-V-3.jpg?w=1327&amp;ssl=1 1327w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/figure><\/li><\/ul><\/figure>\n\n\n\n<p>If you are dealing with data, and you most probably are if you are reading this, one of your biggest fears would be not to be able to retrieve them. 
In a world where data surround us, it is critical to be able to retrieve them quickly and consistently. <\/p>\n\n\n\n<p>Thus, it is always a good idea to have high availability settings in place to avoid losing your data. <\/p>\n\n\n\n<p>However, most of the time we may wish, or need, to save the database and our data ourselves and be a DBA hero. This is not an easy task; it would be much smoother to simply restore from a backup, but sadly that is not always an option. <\/p>\n\n\n\n<p>So this is what we will tackle in this article: what to do when there is data corruption in MySQL, and the steps to perform to try to save our database.<\/p>\n\n\n\n\n\n<p>Before any task or operation, it is always a good idea to copy the existing data at the OS level.<\/p>\n\n\n\n<p>First, let&rsquo;s stop MySQL:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>systemctl stop mysqld<\/code><\/pre>\n\n\n\n<p>And then:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>cp -r \/datadir\/ \/backup_db<\/code><\/pre>\n\n\n\n\n\n<p>Before attempting to bring MySQL back up, it is a good idea to execute <em>innochecksum<\/em> to see the status of your database or of a specific table. <\/p>\n\n\n\n<p>A few words on innochecksum, in case you are not familiar with it: <\/p>\n\n\n\n<p><a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/innochecksum.html\"><strong>innochecksum<\/strong><\/a>&nbsp;prints checksums for&nbsp;<code>InnoDB<\/code>&nbsp;files. This tool reads an&nbsp;<code>InnoDB<\/code>&nbsp;tablespace file, calculates the checksum for each page, compares the calculated checksum to the stored checksum, and reports mismatches, which indicate damaged pages. It was originally developed to speed up verifying the integrity of tablespace files after power outages but can also be used after file copies. 
Because checksum mismatches cause&nbsp;<code>InnoDB<\/code>&nbsp;to deliberately shut down a running server, it may be preferable to use this tool rather than waiting for an in-production server to encounter the damaged pages.<\/p>\n\n\n\n<p>So, in order to check a table, you may use a simple command like: <\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>innochecksum table.ibd --log=\/tmp\/log.txt<\/code><\/pre>\n\n\n\n<p>The results will be logged to the file<strong> \/tmp\/log.txt<\/strong>; if the file doesn&rsquo;t exist, <em>innochecksum<\/em> will create it for you. The output will look something like the following:<\/p>\n\n\n\n<p>page::41354; log sequence number:first = ; second =<br>Page:: uncorrupted<\/p>\n\n\n\n<p><em>Innochecksum<\/em> offers a variety of options; one of the most useful is the ability to choose a starting or an ending page, to save time:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>innochecksum --start-page=600 table.ibd --log=\/tmp\/log.txt<\/code><\/pre>\n\n\n\n<p><span style=\"text-decoration: underline\">Please be aware that innochecksum can only be executed while the server is not running. Otherwise, you will receive a lock error:<\/span><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>Error: Unable to lock file:: table.ibd\nfcntl: Resource temporarily unavailable<\/code><\/pre>\n\n\n\n\n\n<p>Once you have checked the tables and identified the problematic table or tables, it is time to attempt to start up the server and see what we can do. Very importantly, it is imperative to add the following to my.cnf:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>innodb_force_recovery=1\nlog_error_verbosity=3<\/code><\/pre>\n\n\n\n<p><em>log_error_verbosity:<\/em> The log_error_verbosity system variable specifies the verbosity for handling events intended for the error log. I always prefer to have it set, as it is very helpful for identifying various issues and for checking the recovery progress in the error log file. 
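<\/p>\n\n\n\n<p>If many tables need checking, the per-table <em>innochecksum<\/em> runs shown above can be batched. Below is a minimal sketch of such a loop; the helper name is ours and the datadir layout is an assumption, and the server must still be stopped while it runs:<\/p>

```shell
# Sketch: batch-run innochecksum over every .ibd file under a datadir.
# Assumptions: the server is stopped; .ibd files live under <datadir>/<schema>/.
check_ibd_files() {
  datadir="$1"
  find "$datadir" -name '*.ibd' | sort | while read -r f; do
    # innochecksum exits non-zero when it finds checksum mismatches
    if innochecksum "$f" >/dev/null 2>&1; then
      echo "OK: $f"
    else
      echo "CORRUPTED: $f"
    fi
  done
}
```

<p>Any file reported as CORRUPTED is a candidate for the dump-and-restore procedure described next.<\/p>\n\n\n\n<p>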
<br><\/p>\n\n\n\n<p><em>innodb_force_recovery<\/em>: The crash recovery mode, typically only changed in serious troubleshooting situations. Possible values range from 0 to 6. If the server comes up with a value of 3 or below, the data can most probably be recovered. Always start with 1 and increase the value one step at a time if the server does not come up with your previous choice. <\/p>\n\n\n\n\n\n<p>Now that we have these settings in our configuration, it is time to start up the server and hope for the best: <\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>systemctl start mysqld<\/code><\/pre>\n\n\n\n<p>If we are lucky and the server does manage to come up with a value between 1 and 3, it is time to take a full dump of the corrupted table(s). If you are using version <strong>8.0.22<\/strong> or above, then you may take the <a href=\"https:\/\/dev.mysql.com\/doc\/mysql-shell\/8.0\/en\/mysql-shell-utilities-dump-instance-schema.html\" target=\"_blank\" rel=\"noreferrer noopener\" title=\"Instance Dump Utility, Schema Dump Utility, and Table Dump Utility\">dump via MySQL Shell<\/a>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>MySQL localhost:33060+ ssl JS &gt; util.dumpTables(\"schema\", &#091; \"table\" ], \"\/tmp\/dump_table\");<\/code><\/pre>\n\n\n\n<p>Otherwise, you may use the old way:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>mysqldump -ubackup_user -p --set-gtid-purged=OFF --single-transaction schema table &gt; table.sql<\/code><\/pre>\n\n\n\n<p>Once the backup is completed successfully, it is time to drop the corrupted table:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>mysql -uroot -p --execute=\"DROP TABLE schema.table\"<\/code><\/pre>\n\n\n\n<p>And then restore it from the dump we have taken:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>mysql -uroot -p &lt; table.sql<\/code><\/pre>\n\n\n\n\n\n<p>Once the restoration is done, we need to restart the MySQL server, but this time without the innodb_force_recovery parameter. 
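<\/p>\n\n\n\n<p>Disabling the parameter amounts to commenting its line out in the configuration file, either by hand or scripted. A minimal <code>sed<\/code> sketch (the helper name is ours; it edits the file in place and keeps a .bak copy of the original):<\/p>

```shell
# Sketch: comment out innodb_force_recovery in a MySQL config file, in place.
# Pass the path to your my.cnf; sed keeps a ".bak" copy of the original file.
disable_force_recovery() {
  cnf="$1"
  sed -i.bak 's/^[[:space:]]*innodb_force_recovery[[:space:]]*=/#&/' "$cnf"
}
```

<p>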
Simply edit your configuration file and comment the line out:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>#innodb_force_recovery=1<\/code><\/pre>\n\n\n\n<p>Hopefully, this will work and our database will be up and running with all its data intact! Unfortunately, this is not always the case, and sometimes the database will not come up for us to retrieve our data. In situations like these, we need to restore from an existing backup or from a replica, if one is available. <\/p>\n\n\n\n<p>To avoid facing serious trouble with your data, consider having in place one (or better yet all) of the following:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Full backups (on a daily, weekly or monthly basis)<\/li><li>Incremental backups<\/li><li>Binlog backups<\/li><li>In-sync replicas<\/li><li>InnoDB Cluster<\/li><\/ul>\n\n\n\n<p>Additionally, in case of a system crash, it is always helpful to set certain parameters to maintain data consistency. Let&rsquo;s view the most basic ones:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><em>innodb_flush_log_at_trx_commit = 1 <\/em>:<\/li><\/ul>\n\n\n\n<p>Controls the balance between strict ACID compliance for commit operations and the higher performance that is possible when commit-related I\/O operations are rearranged and done in batches.<br>You can achieve better performance by changing the default value, but then you can lose transactions in a crash.<br>The default setting of 1 is required for full ACID compliance. Logs are written and flushed to disk at each transaction commit.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><em>sync_binlog = 1<\/em> :<\/li><\/ul>\n\n\n\n<p>Enables synchronization of the binary log to disk before transactions are committed. This is the safest setting but can have a negative impact on performance<br>due to the increased number of disk writes. 
In the event of a power failure or operating system crash, transactions that are missing from the binary log are only in a prepared state.<br>This permits the automatic recovery routine to roll back the transactions, which guarantees that no transaction is lost from the binary log.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><em>innodb_doublewrite = 1<\/em> :<\/li><\/ul>\n\n\n\n<p>The doublewrite buffer is a storage area where InnoDB writes pages flushed from the buffer pool before writing the pages to their proper positions in the InnoDB data files.<br>If there is an operating system crash, a storage subsystem failure, or an unexpected mysqld process exit in the middle of a page write, InnoDB can find a good copy of the page in the doublewrite buffer<br>during crash recovery.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><em>relay_log_recovery = 1<\/em> :<\/li><\/ul>\n\n\n\n<p>If enabled, this variable enables automatic relay log recovery immediately following server startup. The recovery process creates a new relay log file, initializes the SQL thread position<br>to this new relay log, and initializes the I\/O thread to the SQL thread position. Reading of the relay log from the source then continues.<\/p>\n\n\n\n<p>This global variable is read-only at runtime. Its value can be set with the --relay-log-recovery option at replica server startup, which should be used following an unexpected halt of<br>a replica to ensure that no possibly corrupted relay logs are processed, and must be used in order to guarantee a crash-safe replica.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><em>relay_log_purge = 1<\/em> :<\/li><\/ul>\n\n\n\n<p>Disabling the purging of relay logs while enabling the --relay-log-recovery option risks data consistency and is therefore not crash-safe.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><em>innodb_directories<\/em> :<\/li><\/ul>\n\n\n\n<p>Another parameter that can assist with recovering corrupted tablespaces after a crash is innodb_directories. 
Tablespace discovery during crash recovery relies on the&nbsp;<a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/innodb-parameters.html#sysvar_innodb_directories\"><code>innodb_directories<\/code><\/a>&nbsp;setting to identify tablespaces referenced in the redo logs. innodb_directories is defined at startup, and it can also be used when moving your tablespaces while the server is offline. <\/p>\n\n\n\n<p>In my.cnf: <\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#091;mysqld]\ninnodb_directories=\"directory_path_1;directory_path_2\"<\/code><\/pre>\n\n\n\n<p>At this point, it is wise to mention that the recovery time also depends on <em>innodb_log_file_size<\/em>. <\/p>\n\n\n\n<p>Generally, the combined size of the log files should be large enough that the server can smooth out peaks and troughs in workload activity, which often means that there is enough redo log space to handle more than an hour of write activity. The larger the value, the less checkpoint flush activity is required in the buffer pool, saving disk I\/O.<\/p>\n\n\n\n<p>Larger log files, however, also make crash recovery slower.<\/p>\n\n\n\n\n\n\n\n<p><strong>Conclusion<\/strong><\/p>\n\n\n\n<p>MySQL does offer stability; however, it is really hard to avoid corruption altogether or to never face a server crash. If safety measures are in place, take a deep breath and dive in to save your data. 
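<\/p>\n\n\n\n<p>As a recap, the crash-safety settings discussed above could be gathered in my.cnf as follows (these are the safe values described earlier; weigh them against your performance requirements before applying them):<\/p>

```ini
[mysqld]
# Flush and sync the redo log at each transaction commit (full ACID)
innodb_flush_log_at_trx_commit = 1
# Sync the binary log to disk before transactions are committed
sync_binlog = 1
# Keep the doublewrite buffer enabled
innodb_doublewrite = 1
# Crash-safe replica settings
relay_log_recovery = 1
relay_log_purge = 1
```

<p>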
<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>When you crash and burn, you have to pick yourself up and go on and hope to make up for it.<\/p><cite>Burt Reynolds<\/cite><\/blockquote>\n\n\n\n\n\n\n\n<p><strong>References<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/innochecksum.html\" target=\"_blank\" rel=\"noreferrer noopener\" title=\"innochecksum \u2014 Offline InnoDB File Checksum Utility\"><strong>innochecksum \u2014 Offline InnoDB File Checksum Utility<\/strong><\/a><\/li><\/ul>\n\n\n\n<ul class=\"wp-block-list\"><li><strong><a href=\"https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/innodb-parameters.html\" target=\"_blank\" rel=\"noreferrer noopener\" title=\"InnoDB Startup Options and System Variables\">InnoDB Startup Options and System Variables<\/a><\/strong><\/li><\/ul>\n\n\n\n\n","protected":false},"excerpt":{"rendered":"<p>If you are dealing with data, and you most probably are if you are reading this, one of your biggest fears is not being able to retrieve them. In a world where data surround us, it is critical to be able to retrieve them quickly and consistently.<\/p>\n<p>Thus, it is always a good idea to have high availability settings in place to avoid losing your data.<\/p>\n<p>However, most of the time we may wish, or need, to save the database and our data ourselves and be a DBA hero. Not an easy task, and it may be smoother to just perform a backup-restore. 
Sadly, this is not always the case.<\/p>\n<p>So this is what we will tackle in this article: what to do when there is data corruption in MySQL, and the steps to perform to try to save our database.<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[203,327],"tags":[1654,1616],"class_list":["post-4351","post","type-post","status-publish","format-standard","hentry","category-mysql-en","category-trick","tag-innochecksum","tag-recovery"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p9LfWW-18b","jetpack-related-posts":[],"jetpack_likes_enabled":true}