Archive Smarter, Query Faster: Unlocking MySQL Performance with HeatWave
Managing large volumes of data is a challenge every organization faces as applications grow. For MySQL users, this often means deciding what to do with historical or less frequently accessed data: keep it in production tables at the cost of performance, or archive it and lose the ability to query it efficiently.
Traditionally, archiving has been a trade-off — helpful for keeping databases lean, but limiting when developers or analysts need to run queries across years of historical records.
HeatWave MySQL changes this equation!
By combining a high-performance in-memory query accelerator with a fully managed MySQL database service, HeatWave makes it possible to archive vast amounts of data while still enabling fast, interactive analytics. Instead of moving archived data into a separate system or relying on slow queries, developers and DBAs can continue to use familiar SQL while taking advantage of HeatWave’s speed and scale.
In this article, we’ll explore how to use HeatWave to archive MySQL data effectively and then run accelerated queries on that archived data — without compromising performance. Whether you’re managing billions of rows or just planning for future growth, this approach can help simplify your architecture while delivering the best of both worlds: efficient data storage and lightning-fast analytics.

Context
The database has been accumulating daily time-stamped data for years. To optimize performance, we want to keep only the current year’s data plus the full year before it. All older data should be archived, yet still be available for querying with very low latency.
Let’s see how HeatWave can help us archive efficiently while still running lightning-fast queries on historical data.
I’m using HeatWave 8.4.6 on Oracle Cloud Infrastructure (OCI):
SQL>
-- HeatWave MySQL server version
SHOW VARIABLES WHERE Variable_name IN ('version_comment', 'version');
+-----------------+--------------------------+
| Variable_name   | Value                    |
+-----------------+--------------------------+
| version         | 8.4.6-cloud              |
| version_comment | MySQL Enterprise - Cloud |
+-----------------+--------------------------+
with a one-node HeatWave Cluster enabled.

SQL>
-- HeatWave cluster node number / HeatWave Cluster status
SHOW STATUS WHERE Variable_name IN ('rapid_cluster_ready_number', 'rapid_cluster_status');
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| rapid_cluster_ready_number | 1     |
| rapid_cluster_status       | ON    |
+----------------------------+-------+

We’ll illustrate this article’s examples using a dataset from Kaggle.
$ head -n 5 daily-minimum-temperatures-in-me.csv
Date,Daily minimum temperatures
1/1/1981,20.7
1/2/1981,17.9
1/3/1981,18.8
1/4/1981,14.6
After some transformation, the data is stored in a partitioned table called min_temp:
SQL >
SHOW CREATE TABLE min_temp\G
*************************** 1. row ***************************
Table: min_temp
Create Table: CREATE TABLE `min_temp` (
`d` date DEFAULT NULL,
`temp` decimal(3,1) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
/*!50100 PARTITION BY RANGE (year(`d`))
(PARTITION p1981 VALUES LESS THAN (1982) ENGINE = InnoDB,
PARTITION p1982 VALUES LESS THAN (1983) ENGINE = InnoDB,
PARTITION p1983 VALUES LESS THAN (1984) ENGINE = InnoDB,
PARTITION p1984 VALUES LESS THAN (1985) ENGINE = InnoDB,
PARTITION p1985 VALUES LESS THAN (1986) ENGINE = InnoDB,
PARTITION p1986 VALUES LESS THAN (1987) ENGINE = InnoDB,
PARTITION p1987 VALUES LESS THAN (1988) ENGINE = InnoDB,
PARTITION p1988 VALUES LESS THAN (1989) ENGINE = InnoDB,
PARTITION p1989 VALUES LESS THAN (1990) ENGINE = InnoDB,
PARTITION p1990 VALUES LESS THAN (1991) ENGINE = InnoDB) */
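For reference, that transformation could be done with LOAD DATA (a sketch, not necessarily the exact steps used here; it assumes the CSV is readable by the client and that LOAD DATA LOCAL INFILE is permitted):

```sql
-- Hypothetical reconstruction of the load: parse the M/D/YYYY dates
-- from the Kaggle CSV into the partitioned table's DATE column.
LOAD DATA LOCAL INFILE 'daily-minimum-temperatures-in-me.csv'
INTO TABLE min_temp
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(@raw_date, temp)
SET d = STR_TO_DATE(@raw_date, '%m/%d/%Y');
```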
It contains time series data with daily temperature measurements:
SQL >
SELECT * FROM min_temp LIMIT 3;
+------------+------+
| d          | temp |
+------------+------+
| 1981-01-01 | 20.7 |
| 1981-01-02 | 17.9 |
| 1981-01-03 | 18.8 |
+------------+------+
SELECT YEAR(d) AS temp_year FROM min_temp GROUP BY temp_year;
+-----------+
| temp_year |
+-----------+
|      1981 |
|      1982 |
|      1983 |
|      1984 |
|      1985 |
|      1986 |
|      1987 |
|      1988 |
|      1989 |
|      1990 |
+-----------+
10 rows in set (0.0014 sec)
Use the INFORMATION_SCHEMA.PARTITIONS table to inspect partition information:
SELECT PARTITION_NAME, PARTITION_ORDINAL_POSITION, PARTITION_DESCRIPTION, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_NAME = 'min_temp';
+----------------+----------------------------+-----------------------+------------+
| PARTITION_NAME | PARTITION_ORDINAL_POSITION | PARTITION_DESCRIPTION | TABLE_ROWS |
+----------------+----------------------------+-----------------------+------------+
| p1981          |                          1 | 1982                  |        365 |
| p1982          |                          2 | 1983                  |        365 |
| p1983          |                          3 | 1984                  |        365 |
| p1984          |                          4 | 1985                  |        365 |
| p1985          |                          5 | 1986                  |        365 |
| p1986          |                          6 | 1987                  |        365 |
| p1987          |                          7 | 1988                  |        365 |
| p1988          |                          8 | 1989                  |        365 |
| p1989          |                          9 | 1990                  |        365 |
| p1990          |                         10 | 1991                  |        365 |
+----------------+----------------------------+-----------------------+------------+
Exchanging Partitions and Subpartitions with Tables
In our case, a practical approach to archiving is to convert the partitions containing old data into regular tables. With HeatWave MySQL, this process is straightforward.
Let’s take a look at how it works.
In MySQL, you can use the ALTER TABLE
command to exchange a partition or subpartition with a regular table. There are a few requirements to meet — so it’s worth reviewing the documentation — but in short, the process starts by creating a table with the same structure as the partitioned one, except without partitions. Once that table is ready, you can transfer the data from the partition into it.
To create the table (min_temp_1981) that will store the data from partition p1981, you can use a CREATE TABLE … SELECT:
SQL >
CREATE TABLE min_temp_1981 SELECT * FROM min_temp WHERE NULL;
Query OK, 0 rows affected (0.0126 sec)
Records: 0 Duplicates: 0 Warnings: 0
-- No data was copied
SELECT * FROM min_temp_1981 LIMIT 3;
Empty set (0.0005 sec)
SHOW CREATE TABLE min_temp_1981\G
*************************** 1. row ***************************
Table: min_temp_1981
Create Table: CREATE TABLE `min_temp_1981` (
`d` date DEFAULT NULL,
`temp` decimal(3,1) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
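Note that CREATE TABLE … SELECT does not copy indexes, and EXCHANGE PARTITION requires the two tables to have identical structures, indexes included. For a partitioned table that does have indexes (not the case in this example), an alternative sketch is CREATE TABLE … LIKE followed by REMOVE PARTITIONING:

```sql
-- Alternative clone that preserves index definitions, then strips the
-- partitioning clause (EXCHANGE PARTITION needs a non-partitioned target).
CREATE TABLE min_temp_1981 LIKE min_temp;
ALTER TABLE min_temp_1981 REMOVE PARTITIONING;
```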
Then transfer the data from partition p1981 to table min_temp_1981, using ALTER TABLE … EXCHANGE PARTITION … WITH TABLE:
SQL >
ALTER TABLE min_temp EXCHANGE PARTITION p1981 WITH TABLE min_temp_1981;
Query OK, 0 rows affected (0.0181 sec)
SELECT * FROM min_temp_1981 LIMIT 3;
+------------+------+
| d          | temp |
+------------+------+
| 1981-01-01 | 20.7 |
| 1981-01-02 | 17.9 |
| 1981-01-03 | 18.8 |
+------------+------+
SELECT YEAR(d) AS temp_year FROM min_temp_1981 GROUP BY temp_year;
+-----------+
| temp_year |
+-----------+
|      1981 |
+-----------+
All the data from partition p1981 is now in the table min_temp_1981. As expected, the 1981 data is no longer in the partitioned table:
SQL >
SELECT * FROM min_temp LIMIT 3;
+------------+------+
| d          | temp |
+------------+------+
| 1982-01-01 | 17.0 |
| 1982-01-02 | 15.0 |
| 1982-01-03 | 13.5 |
+------------+------+
SELECT YEAR(d) AS temp_year FROM min_temp GROUP BY temp_year ;
+-----------+
| temp_year |
+-----------+
|      1982 |
|      1983 |
|      1984 |
|      1985 |
|      1986 |
|      1987 |
|      1988 |
|      1989 |
|      1990 |
+-----------+
9 rows in set (0.0014 sec)
But the partition is still there, and it is now empty (TABLE_ROWS=0):
SQL>
SELECT PARTITION_NAME, PARTITION_ORDINAL_POSITION, PARTITION_DESCRIPTION, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_NAME = 'min_temp';
+----------------+----------------------------+-----------------------+------------+
| PARTITION_NAME | PARTITION_ORDINAL_POSITION | PARTITION_DESCRIPTION | TABLE_ROWS |
+----------------+----------------------------+-----------------------+------------+
| p1981          |                          1 | 1982                  |          0 |
| p1982          |                          2 | 1983                  |        365 |
| p1983          |                          3 | 1984                  |        365 |
| p1984          |                          4 | 1985                  |        365 |
| p1985          |                          5 | 1986                  |        365 |
| p1986          |                          6 | 1987                  |        365 |
| p1987          |                          7 | 1988                  |        365 |
| p1988          |                          8 | 1989                  |        365 |
| p1989          |                          9 | 1990                  |        365 |
| p1990          |                         10 | 1991                  |        365 |
+----------------+----------------------------+-----------------------+------------+
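If you have no plans to reuse it, the now-empty partition can optionally be removed. This is a sketch of an optional cleanup step, not part of the article's workflow; it only drops the empty partition definition, since the archived rows already live in min_temp_1981:

```sql
-- Optional cleanup: drop the empty partition left behind by the exchange.
ALTER TABLE min_temp DROP PARTITION p1981;
```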
Load data into the HeatWave Cluster
In order to have lightning-fast queries, the archived data must be loaded into the HeatWave cluster.
The HeatWave cluster provides a distributed, scalable, shared-nothing, in-memory, hybrid columnar query processing engine. It can accelerate analytic queries, query external data stored in object storage, and perform machine learning.
To successfully load a table into a HeatWave cluster, you must first ensure it has a primary key. If the table you are trying to archive lacks a primary key, the service will return an “Unable to load table without primary key” error:
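To spot candidate tables that would hit this error before loading, you can query INFORMATION_SCHEMA (a sketch; the schema name is the one used throughout this article):

```sql
-- List base tables in the schema that have no PRIMARY KEY constraint.
SELECT t.TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES t
LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS c
       ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
      AND c.TABLE_NAME = t.TABLE_NAME
      AND c.CONSTRAINT_TYPE = 'PRIMARY KEY'
WHERE t.TABLE_SCHEMA = 'temp_archiving'
  AND t.TABLE_TYPE = 'BASE TABLE'
  AND c.CONSTRAINT_NAME IS NULL;
```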
SQL >
-- Add a primary key
ALTER TABLE min_temp_1981
ADD COLUMN id_min_temp_1981 INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
Query OK, 0 rows affected (0.0698 sec)
Records: 0 Duplicates: 0 Warnings: 0
SHOW CREATE TABLE min_temp_1981\G
*************************** 1. row ***************************
Table: min_temp_1981
Create Table: CREATE TABLE `min_temp_1981` (
`id_min_temp_1981` int unsigned NOT NULL AUTO_INCREMENT,
`d` date DEFAULT NULL,
`temp` decimal(3,1) DEFAULT NULL,
PRIMARY KEY (`id_min_temp_1981`)
) ENGINE=InnoDB AUTO_INCREMENT=366 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
SELECT * FROM min_temp_1981 LIMIT 3;
+------------------+------------+------+
| id_min_temp_1981 | d          | temp |
+------------------+------------+------+
|                1 | 1981-01-01 | 20.7 |
|                2 | 1981-01-02 | 17.9 |
|                3 | 1981-01-03 | 18.8 |
+------------------+------------+------+
Now we can load the data into the HeatWave Cluster, using sys.heatwave_load:
SQL>
CALL sys.heatwave_load(JSON_ARRAY("temp_archiving"), JSON_OBJECT('include_list', JSON_ARRAY('temp_archiving.min_temp_1981')));
+------------------------------------------+
| INITIALIZING HEATWAVE AUTO PARALLEL LOAD |
+------------------------------------------+
| Version: 3.21 |
| |
| Load Mode: normal |
| Load Policy: disable_unsupported_columns |
| Output Mode: normal |
| |
+------------------------------------------+
6 rows in set (0.0175 sec)
+------------------------------------------------------------------------+
| OFFLOAD ANALYSIS |
+------------------------------------------------------------------------+
| Verifying input schemas: 1 |
| User excluded items: 0 |
| |
| SCHEMA OFFLOADABLE OFFLOADABLE SUMMARY OF |
| NAME TABLES COLUMNS ISSUES |
| ------ ----------- ----------- ---------- |
| `temp_archiving` 1 3 |
| |
| Total offloadable schemas: 1 |
| |
+------------------------------------------------------------------------+
10 rows in set (0.0175 sec)
+-----------------------------------------------------------------------------------------------------------------------------+
| CAPACITY ESTIMATION |
+-----------------------------------------------------------------------------------------------------------------------------+
| Default encoding for string columns: VARLEN (unless specified in the schema) |
| Estimating memory footprint for 1 schema(s) |
| |
| TOTAL ESTIMATED ESTIMATED TOTAL DICTIONARY VARLEN ESTIMATED |
| SCHEMA OFFLOADABLE HEATWAVE NODE MYSQL NODE STRING ENCODED ENCODED LOAD |
| NAME TABLES FOOTPRINT FOOTPRINT COLUMNS COLUMNS COLUMNS TIME |
| ------ ----------- --------- --------- ------- ---------- ------- --------- |
| `temp_archiving` 1 3.27 MiB 256.00 KiB 0 0 0 1.00 s |
| |
| Sufficient MySQL host memory available to load all tables. |
| Sufficient HeatWave cluster memory available to load all tables. |
| |
+-----------------------------------------------------------------------------------------------------------------------------+
12 rows in set (0.0175 sec)
+---------------------------------------------------------------------------------------------------------------------------------------+
| EXECUTING LOAD SCRIPT |
+---------------------------------------------------------------------------------------------------------------------------------------+
| HeatWave Load script generated |
| Retrieve load script containing 3 generated DDL command(s) using the query below: |
| Deprecation Notice: "heatwave_load_report" will be deprecated, please switch to "heatwave_autopilot_report" |
| SELECT log->>"$.sql" AS "Load Script" FROM sys.heatwave_autopilot_report WHERE type = "sql" ORDER BY id; |
| |
| Adjusting load parallelism dynamically per internal/external table. |
| Using current parallelism of 4 thread(s) as maximum for internal tables. |
| |
| Warning: Executing the generated script may alter column definitions and secondary engine flags in the schema |
| |
| Using SQL_MODE: ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION |
| |
| Proceeding to load 1 table(s) into HeatWave. |
| |
| Applying changes will take approximately 113.00 ms |
| |
+---------------------------------------------------------------------------------------------------------------------------------------+
16 rows in set (0.0175 sec)
+--------------------------------------------------+
| TABLE LOAD |
+--------------------------------------------------+
| TABLE (1 of 1): `temp_archiving`.`min_temp_1981` |
| Commands executed successfully: 3 of 3 |
| Warnings encountered: 0 |
| Table load succeeded! |
| Total columns loaded: 3 |
| Table loaded using 1 thread(s) |
| Elapsed time: 22.70 s |
| |
+--------------------------------------------------+
8 rows in set (0.0175 sec)
+-------------------------------------------------------------------------------+
| LOAD SUMMARY |
+-------------------------------------------------------------------------------+
| |
| SCHEMA TABLES TABLES COLUMNS LOAD |
| NAME LOADED FAILED LOADED DURATION |
| ------ ------ ------ ------- -------- |
| `temp_archiving` 1 0 3 22.70 s |
| |
+-------------------------------------------------------------------------------+
6 rows in set (0.0175 sec)
Query OK, 0 rows affected (0.0175 sec)
You can monitor the process using the performance_schema.rpd_tables and rpd_table_id tables:
SQL>
SELECT name, load_progress, load_status, query_count
FROM performance_schema.rpd_tables JOIN performance_schema.rpd_table_id USING(id)
WHERE name LIKE 'temp_archiving%'
ORDER BY name\G
*************************** 1. row ***************************
name: temp_archiving.min_temp_1981
load_progress: 100
load_status: AVAIL_RPDGSTABSTATE
query_count: 0
DESC performance_schema.rpd_table_id;
+-------------+--------------+------+-----+---------+-------+
| Field       | Type         | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+-------+
| ID          | int unsigned | NO   |     | NULL    |       |
| NAME        | varchar(129) | YES  |     | NULL    |       |
| SCHEMA_NAME | varchar(64)  | YES  |     | NULL    |       |
| TABLE_NAME  | varchar(64)  | YES  |     | NULL    |       |
+-------------+--------------+------+-----+---------+-------+
DESC performance_schema.rpd_tables;
+-----------------------------+-------------------------------+------+-----+---------+-------+
| Field                       | Type                          | Null | Key | Default | Extra |
+-----------------------------+-------------------------------+------+-----+---------+-------+
| ID                          | int unsigned                  | NO   |     | NULL    |       |
| SNAPSHOT_SCN                | bigint unsigned               | YES  |     | NULL    |       |
| PERSISTED_SCN               | bigint unsigned               | YES  |     | NULL    |       |
| POOL_TYPE                   | varchar(64)                   | YES  |     | NULL    |       |
| DATA_PLACEMENT_TYPE         | varchar(64)                   | YES  |     | NULL    |       |
| NROWS                       | bigint unsigned               | NO   |     | NULL    |       |
| LOAD_STATUS                 | varchar(64)                   | YES  |     | NULL    |       |
| LOAD_PROGRESS               | double                        | YES  |     | NULL    |       |
| SIZE_BYTES                  | bigint unsigned               | NO   |     | NULL    |       |
| TRANSFORMATION_BYTES        | bigint unsigned               | YES  |     | NULL    |       |
| QUERY_COUNT                 | bigint unsigned               | NO   |     | NULL    |       |
| LAST_QUERIED                | timestamp(6)                  | YES  |     | NULL    |       |
| LOAD_START_TIMESTAMP        | timestamp(6)                  | YES  |     | NULL    |       |
| LOAD_END_TIMESTAMP          | timestamp(6)                  | YES  |     | NULL    |       |
| RECOVERY_SOURCE             | enum('MySQL','ObjectStorage') | YES  |     | NULL    |       |
| RECOVERY_START_TIMESTAMP    | timestamp(6)                  | YES  |     | NULL    |       |
| RECOVERY_END_TIMESTAMP      | timestamp(6)                  | YES  |     | NULL    |       |
| LOAD_TYPE                   | enum('SELF','USER')           | YES  |     | NULL    |       |
| LOGICAL_PARTS_LOADED_AT_SCN | longtext                      | YES  |     | NULL    |       |
+-----------------------------+-------------------------------+------+-----+---------+-------+
Now you can run your analytics SELECT queries as usual. 🙂
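To verify that a query is actually offloaded, you can check its plan (a sketch; the aggregate query below is purely illustrative):

```sql
-- When the query is offloaded, the plan's Extra column mentions
-- the RAPID secondary engine (e.g. "Using secondary engine RAPID").
EXPLAIN
SELECT YEAR(d) AS temp_year, MIN(temp), AVG(temp)
FROM min_temp_1981
GROUP BY temp_year;
```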
More details about HeatWave MySQL Performance.
Simplify the process with stored procedures
The workflow can easily be simplified using stored procedures (since we’re using HeatWave MySQL 8.4, we can’t take advantage of JavaScript stored procedures, which are only available starting with MySQL 9.2).
Let’s review the 4 steps:
1. Clone the table (copy the structure, without partitions or data)
CALL clone_table(<schema_name>, <partitioned_table>, <archived_table>);
2. Exchange the partition
CALL exchange_partition(<schema_name>, <partitioned_table>, <partition_name>, <archived_table>);
3. Add a primary key
CALL add_primary_key_column(<schema_name>, <archived_table>, <primary_key>);
4. Load the data into the HeatWave Cluster
CALL sys.heatwave_load(JSON_ARRAY(<schema_name>), JSON_OBJECT('include_list', JSON_ARRAY(<schema_name>.<archived_table>)));
While a robust production solution would be more complex, we can illustrate the core concepts with a basic implementation (these examples should only be used for testing and learning):
Clone table
DELIMITER $$

CREATE PROCEDURE `clone_table`(
    IN _source_schema VARCHAR(64),
    IN _source_table VARCHAR(64),
    IN _destination_table VARCHAR(64)
)
BEGIN
    -- Build the CREATE TABLE statement that clones the structure without data
    SET @create_table_stmt = CONCAT(
        'CREATE TABLE `', _source_schema, '`.`', _destination_table,
        '` SELECT * FROM `', _source_schema, '`.`', _source_table, '`',
        ' WHERE NULL'
    );
    -- Execute
    PREPARE stmt FROM @create_table_stmt;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END$$

DELIMITER ;
Usage:
CALL clone_table('temp_archiving', 'min_temp', 'min_temp_1982');
Exchange Partition
DELIMITER $$

CREATE PROCEDURE exchange_partition(
    IN _source_schema VARCHAR(64),
    IN _partitioned_table VARCHAR(64),
    IN _partition_to_exchange VARCHAR(64),
    IN _new_table_name VARCHAR(64)
)
BEGIN
    -- Build the ALTER TABLE statement that exchanges the partition with the newly created table
    SET @exchange_stmt = CONCAT(
        'ALTER TABLE `', _source_schema, '`.`', _partitioned_table, '` ',
        'EXCHANGE PARTITION `', _partition_to_exchange, '` ',
        'WITH TABLE `', _source_schema, '`.`', _new_table_name, '`'
    );
    -- Execute
    PREPARE stmt FROM @exchange_stmt;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END$$

DELIMITER ;
Usage:
CALL exchange_partition('temp_archiving', 'min_temp', 'p1982', 'min_temp_1982');
Add a Primary Key
DELIMITER $$

CREATE PROCEDURE add_primary_key_column(
    IN _source_schema VARCHAR(64),
    IN _table_name VARCHAR(64),
    IN _column_with_PK VARCHAR(64)
)
BEGIN
    -- Build the ALTER TABLE statement that adds the primary key column
    SET @alter_table_stmt = CONCAT(
        'ALTER TABLE `', _source_schema, '`.`', _table_name, '` ',
        'ADD COLUMN `', _column_with_PK, '` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST'
    );
    -- Execute
    PREPARE stmt FROM @alter_table_stmt;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END$$

DELIMITER ;
Usage:
CALL add_primary_key_column('temp_archiving', 'min_temp_1982', 'id_min_temp_1982');
Finally, we can create a wrapper to perform all 4 steps at once:
Data Archiving Wrapper
DELIMITER $$

CREATE PROCEDURE `archiving_wrapper`(
    IN _source_schema VARCHAR(64),
    IN _source_table VARCHAR(64),
    IN _destination_table VARCHAR(64),
    IN _partition_to_exchange VARCHAR(64),
    IN _column_with_PK VARCHAR(64)
)
BEGIN
    -- Copy the structure, without indexes, partitions or data
    CALL clone_table(_source_schema, _source_table, _destination_table);
    -- Exchange the partition
    CALL exchange_partition(_source_schema, _source_table, _partition_to_exchange, _destination_table);
    -- Add a primary key
    CALL add_primary_key_column(_source_schema, _destination_table, _column_with_PK);
    -- Load the data into the HeatWave Cluster
    CALL sys.heatwave_load(
        JSON_ARRAY(_source_schema),
        JSON_OBJECT(
            'include_list',
            JSON_ARRAY(CONCAT(_source_schema, '.', _destination_table))
        )
    );
END$$

DELIMITER ;
Usage:
SQL>
CALL archiving_wrapper('temp_archiving', 'min_temp', 'min_temp_1982', 'p1982', 'id_min_temp_1982');
+------------------------------------------+
| INITIALIZING HEATWAVE AUTO PARALLEL LOAD |
+------------------------------------------+
| Version: 3.21 |
| |
| Load Mode: normal |
| Load Policy: disable_unsupported_columns |
| Output Mode: normal |
| |
+------------------------------------------+
6 rows in set (0.1021 sec)
+------------------------------------------------------------------------+
| OFFLOAD ANALYSIS |
+------------------------------------------------------------------------+
| Verifying input schemas: 1 |
| User excluded items: 0 |
| |
| SCHEMA OFFLOADABLE OFFLOADABLE SUMMARY OF |
| NAME TABLES COLUMNS ISSUES |
| ------ ----------- ----------- ---------- |
| `temp_archiving` 1 3 |
| |
| Total offloadable schemas: 1 |
| |
+------------------------------------------------------------------------+
10 rows in set (0.1021 sec)
+-----------------------------------------------------------------------------------------------------------------------------+
| CAPACITY ESTIMATION |
+-----------------------------------------------------------------------------------------------------------------------------+
| Default encoding for string columns: VARLEN (unless specified in the schema) |
| Estimating memory footprint for 1 schema(s) |
| |
| TOTAL ESTIMATED ESTIMATED TOTAL DICTIONARY VARLEN ESTIMATED |
| SCHEMA OFFLOADABLE HEATWAVE NODE MYSQL NODE STRING ENCODED ENCODED LOAD |
| NAME TABLES FOOTPRINT FOOTPRINT COLUMNS COLUMNS COLUMNS TIME |
| ------ ----------- --------- --------- ------- ---------- ------- --------- |
| `temp_archiving` 1 3.27 MiB 256.00 KiB 0 0 0 1.00 s |
| |
| Sufficient MySQL host memory available to load all tables. |
| Sufficient HeatWave cluster memory available to load all tables. |
| |
+-----------------------------------------------------------------------------------------------------------------------------+
12 rows in set (0.1021 sec)
+---------------------------------------------------------------------------------------------------------------------------------------+
| EXECUTING LOAD SCRIPT |
+---------------------------------------------------------------------------------------------------------------------------------------+
| HeatWave Load script generated |
| Retrieve load script containing 3 generated DDL command(s) using the query below: |
| Deprecation Notice: "heatwave_load_report" will be deprecated, please switch to "heatwave_autopilot_report" |
| SELECT log->>"$.sql" AS "Load Script" FROM sys.heatwave_autopilot_report WHERE type = "sql" ORDER BY id; |
| |
| Adjusting load parallelism dynamically per internal/external table. |
| Using current parallelism of 4 thread(s) as maximum for internal tables. |
| |
| Warning: Executing the generated script may alter column definitions and secondary engine flags in the schema |
| |
| Using SQL_MODE: ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION |
| |
| Proceeding to load 1 table(s) into HeatWave. |
| |
| Applying changes will take approximately 113.00 ms |
| |
+---------------------------------------------------------------------------------------------------------------------------------------+
16 rows in set (0.1021 sec)
+--------------------------------------------------+
| TABLE LOAD |
+--------------------------------------------------+
| TABLE (1 of 1): `temp_archiving`.`min_temp_1982` |
| Commands executed successfully: 3 of 3 |
| Warnings encountered: 0 |
| Table load succeeded! |
| Total columns loaded: 3 |
| Table loaded using 1 thread(s) |
| Elapsed time: 2.66 s |
| |
+--------------------------------------------------+
8 rows in set (0.1021 sec)
+-------------------------------------------------------------------------------+
| LOAD SUMMARY |
+-------------------------------------------------------------------------------+
| |
| SCHEMA TABLES TABLES COLUMNS LOAD |
| NAME LOADED FAILED LOADED DURATION |
| ------ ------ ------ ------- -------- |
| `temp_archiving` 1 0 3 2.66 s |
| |
+-------------------------------------------------------------------------------+
6 rows in set (0.1021 sec)
Query OK, 0 rows affected (0.1021 sec)
Let’s check:
SQL>
SELECT * FROM min_temp_1982 LIMIT 3;
+------------------+------------+------+
| id_min_temp_1982 | d          | temp |
+------------------+------------+------+
|                1 | 1982-01-01 | 17.0 |
|                2 | 1982-01-02 | 15.0 |
|                3 | 1982-01-03 | 13.5 |
+------------------+------------+------+
SELECT YEAR(d) AS temp_year FROM min_temp_1982 GROUP BY temp_year;
+-----------+
| temp_year |
+-----------+
|      1982 |
+-----------+
The year 1982 has been correctly archived.
SQL>
SELECT * FROM min_temp LIMIT 3;
+------------+------+
| d          | temp |
+------------+------+
| 1983-01-01 | 18.4 |
| 1983-01-02 | 15.0 |
| 1983-01-03 | 10.9 |
+------------+------+
SELECT YEAR(d) AS temp_year FROM min_temp GROUP BY temp_year ;
+-----------+
| temp_year |
+-----------+
|      1983 |
|      1984 |
|      1985 |
|      1986 |
|      1987 |
|      1988 |
|      1989 |
|      1990 |
+-----------+
8 rows in set (0.0013 sec)
SELECT PARTITION_NAME, PARTITION_ORDINAL_POSITION, PARTITION_DESCRIPTION, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_NAME = 'min_temp';
+----------------+----------------------------+-----------------------+------------+
| PARTITION_NAME | PARTITION_ORDINAL_POSITION | PARTITION_DESCRIPTION | TABLE_ROWS |
+----------------+----------------------------+-----------------------+------------+
| p1981          |                          1 | 1982                  |          0 |
| p1982          |                          2 | 1983                  |          0 |
| p1983          |                          3 | 1984                  |        365 |
| p1984          |                          4 | 1985                  |        365 |
| p1985          |                          5 | 1986                  |        365 |
| p1986          |                          6 | 1987                  |        365 |
| p1987          |                          7 | 1988                  |        365 |
| p1988          |                          8 | 1989                  |        365 |
| p1989          |                          9 | 1990                  |        365 |
| p1990          |                         10 | 1991                  |        365 |
+----------------+----------------------------+-----------------------+------------+
Data was moved out of the partitioned table.
SQL>
SELECT name, load_progress, load_status, query_count
FROM performance_schema.rpd_tables JOIN performance_schema.rpd_table_id USING(id)
WHERE name LIKE 'temp_archiving%'
ORDER BY name;
+------------------------------+---------------+---------------------+-------------+
| name                         | load_progress | load_status         | query_count |
+------------------------------+---------------+---------------------+-------------+
| temp_archiving.min_temp_1981 |           100 | AVAIL_RPDGSTABSTATE |           0 |
| temp_archiving.min_temp_1982 |           100 | AVAIL_RPDGSTABSTATE |           0 |
+------------------------------+---------------+---------------------+-------------+
Data has been offloaded to the HeatWave Cluster.
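Since each archived year ends up in its own table, a view can make them queryable as a single data set (a sketch; the view name is hypothetical, and the UNION ALL would be extended as more years are archived):

```sql
-- Hypothetical convenience view over the archived years.
CREATE VIEW min_temp_archive AS
SELECT d, temp FROM min_temp_1981
UNION ALL
SELECT d, temp FROM min_temp_1982;
```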
In our case, this workload would only run once a year after the catch-up phase. But in practice, you could schedule this kind of workflow to run more regularly: with cron (or the MySQL Event Scheduler if you are a player 🙂), with your favorite open-source data orchestration tool such as Airflow, Prefect or Dagster, or with a cloud-native managed service like OCI Data Integration.
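As an illustration of the Event Scheduler option, a yearly event could call the wrapper (a sketch under this article's naming conventions; the event name, schedule and start date are placeholders, and it assumes event_scheduler is ON):

```sql
-- Hypothetical yearly archiving event: archives the partition that just
-- left the "current year + previous year" retention window.
CREATE EVENT IF NOT EXISTS temp_archiving.yearly_archiving
ON SCHEDULE EVERY 1 YEAR
STARTS '2026-01-02 03:00:00'
DO
  CALL temp_archiving.archiving_wrapper(
    'temp_archiving',
    'min_temp',
    CONCAT('min_temp_', YEAR(CURDATE()) - 2),
    CONCAT('p', YEAR(CURDATE()) - 2),
    CONCAT('id_min_temp_', YEAR(CURDATE()) - 2)
  );
```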
Peroration
Archiving data no longer has to mean sacrificing accessibility or performance. With HeatWave MySQL, DBAs and developers can seamlessly move older partitions into regular tables, load them into the HeatWave cluster, and continue running queries at scale — all with the familiar MySQL syntax they already know.
By combining efficient data archiving with in-memory acceleration, HeatWave allows organizations to strike the balance between keeping production databases lean and still being able to analyze years of historical data instantly. What was once a trade-off between storage efficiency and query performance is now a streamlined workflow that can be automated and adapted to your needs.
With HeatWave, your archived data is no longer just stored; it’s ready to deliver actionable insights at the speed of thought.
Stay tuned for more insights!

Watch my videos on my YouTube channel and subscribe.
Thanks for using HeatWave & MySQL!

Cloud Solutions Architect at Oracle
MySQL Geek, strong interest in data and AI, author, blogger and speaker
I have an insatiable hunger for learning.
—–
Blog: www.dasini.net/blog/en/
Twitter: https://twitter.com/freshdaz
SlideShare: www.slideshare.net/freshdaz
Youtube: https://www.youtube.com/channel/UC12TulyJsJZHoCmby3Nm3WQ
—–