HeatWave MySQL Database Audit
A few years ago (in 2018), I wrote a MySQL Security series, and one of the episodes was about the MySQL Enterprise Audit feature — MySQL Security – MySQL Enterprise Audit — a powerful auditing capability that enables you to track and monitor database activity to ensure data integrity, strengthen security, and maintain compliance with regulatory requirements. This robust feature has also been available in HeatWave MySQL for the past few years (since 2023), bringing the same enterprise-grade auditing capabilities to the cloud.

HeatWave MySQL Database Audit builds upon the established technology of MySQL Enterprise Audit, offering a comprehensive solution for tracking and analyzing database activities.
Key Benefits of HeatWave MySQL Database Audit
- Rigorous Compliance & Forensics: Helps organizations meet stringent industry regulations (like FedRAMP, DISA STIG, PCI-DSS, HIPAA, SOX, GDPR, FERPA, and Center for Internet Security Benchmarks) by creating a detailed record of database events, essential for investigations and demonstrating adherence to policies.
- Security Operations (SecOps): Enables real-time monitoring of user behaviors to detect and respond to potential security threats proactively.
- Holistic Server Activity Tracking: Provides comprehensive auditing, from basic client connections and disconnections to more granular activities like interactions with specific schemas and tables, security changes, and errors.
- Insights into Query and Statement Performance: Tracks query execution statistics, allowing for the identification of slow queries and performance bottlenecks, leading to database optimization.
- Utilization & Optimization: Offers data-driven insights to pinpoint and streamline database operations and resource utilization.
- “Trust but Verify” Security Principle: Allows for the monitoring of high-privilege users to prevent misuse of access.
- Business Audit: Creates detailed records to prove data validity, accuracy, and integrity, demonstrating that no tampering has occurred.
- Security Analysis: Serves as a vital component in a defense-in-depth strategy, facilitating both proactive (machine learning-based anomaly detection) and reactive (post-mortem analysis of attacks) security measures.
Feature Highlights:
- Ready to Use: No installation steps are required, simplifying the process of securing and monitoring database activities.
- Customizable Auditing: Allows Database Administrators to define filters to monitor specific operations, users, or broad activity categories, reducing audit noise and optimizing log size.
- Real-time & Minimal Overhead: Provides instantaneous access to database activity with minimal impact on performance, supporting diverse use cases.
- Automatic Log Rotation & Management: Includes built-in log rotation and automatic purging, ensuring optimal DB system performance without manual intervention. Audit logs are structured in JSON format, encrypted, and compressed for efficiency and security.
- Multi-Instance Support: Works seamlessly with standalone, multi-instance, and High-Availability (HA) configurations, replicating audit filters and configurations to ensure no event is lost.
- Access via SQL Interface: Enables querying of logs directly from the SQL interface using any MySQL client. Logs can also be channeled to OCI Logging Analytics, third-party monitoring tools, or SIEM systems for broader analysis.
- Effortless Migration of Rules: Existing MySQL Enterprise Auditing rules (on-premise or other systems) can be easily migrated to HeatWave MySQL instances in the cloud, ensuring consistency.
- Optional Query Execution Metrics: Can include details about query execution, such as slow queries, for performance analysis.
- Option to Remove Sensitive Data: Allows for the omission of sensitive data from statements before logging.
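Many of the highlights above (JSON log format, encryption, compression, and automatic rotation) are reflected in the audit_log system variables of the DB system. As a quick, read-only sanity check (the exact set of variables may vary with your HeatWave MySQL version), you can review the current configuration with:
-- List the audit log configuration variables (read-only check)
SHOW GLOBAL VARIABLES LIKE 'audit_log%';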
Server Activity Tracking
The goal here is to track all DML (Data Manipulation Language) statements executed on a HeatWave MySQL instance by a given user account (assuming this account is used by an application that requires auditing). You can easily achieve this by creating a dedicated HeatWave MySQL Audit Log Filter that specifically logs:
- INSERT
- UPDATE
- DELETE
- TRUNCATE TABLE
- REPLACE
- LOAD DATA
- LOAD XML
We’ll assign this filter to the dedicated user called auditee@%.
Workflow overview:
- Verify that the HeatWave MySQL Database Audit is enabled
- Create the DML specific audit filter
- Register the filter
- Assign the filter to the appropriate user account
Important: To utilize any filtering capabilities, the user performing these actions must possess the AUDIT_ADMIN privilege.
In this article, I’m using HeatWave 8.4.5:
SELECT VERSION();
+----------------+
| VERSION() |
+----------------+
| 8.4.5-u2-cloud |
+----------------+
Verify that HeatWave MySQL Audit is enabled
HeatWave MySQL Database Audit is enabled by default. You can check it using the following queries:
SQL>
-- Checks at the server plugins level
SELECT
PLUGIN_NAME,
PLUGIN_STATUS
FROM
INFORMATION_SCHEMA.PLUGINS
WHERE
PLUGIN_NAME LIKE 'audit%';
+-------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+-------------+---------------+
| audit_log | ACTIVE |
+-------------+---------------+
-- Check at the component-based infrastructure level
SELECT
*
FROM
mysql.component
WHERE
component_urn LIKE '%audit%'\G
*************************** 1. row ***************************
component_id: 6
component_group_id: 6
component_urn: file://component_audit_api_message_emit
You can see all the audit-feature-related functions using the performance_schema.user_defined_functions table:
SELECT * FROM performance_schema.user_defined_functions WHERE UDF_LIBRARY='audit_log.so';
+-----------------------------------+-----------------+----------+--------------+-----------------+
| UDF_NAME | UDF_RETURN_TYPE | UDF_TYPE | UDF_LIBRARY | UDF_USAGE_COUNT |
+-----------------------------------+-----------------+----------+--------------+-----------------+
| audit_log_rotate | char | function | audit_log.so | 1 |
| audit_log_read | char | function | audit_log.so | 1 |
| audit_log_filter_remove_filter | char | function | audit_log.so | 1 |
| audit_log_encryption_password_set | integer | function | audit_log.so | 1 |
| audit_log_filter_set_filter | char | function | audit_log.so | 1 |
| audit_log_encryption_password_get | char | function | audit_log.so | 1 |
| audit_log_filter_remove_user | char | function | audit_log.so | 1 |
| audit_log_filter_set_user | char | function | audit_log.so | 1 |
| audit_log_read_bookmark | char | function | audit_log.so | 1 |
| audit_log_filter_flush | char | function | audit_log.so | 1 |
+-----------------------------------+-----------------+----------+--------------+-----------------+
The user_defined_functions table contains a row for each loadable function registered automatically by a component or plugin, or manually by a CREATE FUNCTION statement.
Create the DML Audit Filter
The filter to log all DML statements running on HeatWave MySQL is quite simple. It uses a JSON syntax:
SET @dml_filter = '{
"filter": {
"class": {
"name": "table_access",
"event": {
"name": ["insert", "update", "delete"],
"log": true
}
}
}
}'
;
This filter logs only DML operations, matched as table_access events of type insert, update, and delete, which cover the statement types listed above (INSERT, UPDATE, DELETE, TRUNCATE TABLE, REPLACE, LOAD DATA, and LOAD XML).
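Optionally, before registering the filter, you can verify that the variable holds well-formed JSON (note that this only validates the JSON syntax, not the audit filter schema itself):
-- Returns 1 if @dml_filter is syntactically valid JSON
SELECT JSON_VALID(@dml_filter);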
Register the Filter
Assign the filter a name (e.g., dml_logger) using the audit_log_filter_set_filter function:
SELECT audit_log_filter_set_filter('dml_logger', @dml_filter);
+--------------------------------------------------------+
| audit_log_filter_set_filter('dml_logger', @dml_filter) |
+--------------------------------------------------------+
| OK |
+--------------------------------------------------------+
To view audit filters, run the following command:
SELECT * FROM mysql_audit.audit_log_filter\G
*************************** 1. row ***************************
NAME: dml_logger
FILTER: {"filter": {"class": {"name": "table_access", "event": {"log": true, "name": ["insert", "update", "delete"]}}}}
Apply the Filter to the User Account
Use the audit_log_filter_set_user function to start filtering a user account. The syntax for the specific user auditee@% is:
SELECT audit_log_filter_set_user('auditee@%', 'dml_logger');
+----------------------------------------------------------+
| audit_log_filter_set_user('auditee@%', 'dml_logger') |
+----------------------------------------------------------+
| OK |
+----------------------------------------------------------+
If you want to assign the filter to all users, use the following syntax:
SELECT audit_log_filter_set_user('%', 'dml_logger');
In our context, that could be useful to log all DML statements from any user or application.
To unassign this rule, run the following command:
SELECT audit_log_filter_remove_user('%');
To view the assigned rules, run the following command:
SELECT * FROM mysql_audit.audit_log_user;
+-------------+------+------------+
| USER | HOST | FILTERNAME |
+-------------+------+------------+
| auditee | % | dml_logger |
+-------------+------+------------+
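For completeness: when a filter definition is no longer needed, it can be dropped with the audit_log_filter_remove_filter() function (one of the loadable functions listed earlier). We do not run it here, since dml_logger is used for the rest of this article:
-- Drop the filter definition itself (not executed in this demo)
SELECT audit_log_filter_remove_filter('dml_logger');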
Accessing and Analyzing Audit Data
To access audit data, users can simply query it using standard SQL. The primary method for retrieving this information is audit_log_read(), which returns the audit records in JSON format.
For a basic example of how to extract audit log entries, use the following command:
SELECT audit_log_read(audit_log_read_bookmark());
To display the audit data in a more readable format, use the JSON_PRETTY() and CONVERT() functions:
SELECT JSON_PRETTY(CONVERT(audit_log_read(audit_log_read_bookmark()) USING UTF8MB4))\G
You can refine your audit data extraction by passing additional parameters to the audit_log_read() function. For example, to retrieve 10 audit log entries starting from a specific timestamp, you can use:
SELECT JSON_PRETTY(CONVERT(audit_log_read('{ "start": { "timestamp": "2025-07-02 15:28:12" }, "max_array_length": 10 }')USING UTF8MB4))\G
Note: You can also use MySQL’s JSON_TABLE function to transform audit data into a tabular format, for example to extract a subset of JSON name-value pairs and convert them into a structured table, making the data easier to work with and analyze.
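Here is a minimal sketch of that approach, assuming the table_access events produced by our dml_logger filter; the WHERE clause simply drops the trailing null marker returned by audit_log_read():
-- Flatten the JSON audit events into rows (illustrative sketch)
SELECT t.logged_at, t.login_user, t.sql_command, t.query
FROM JSON_TABLE(
  CONVERT(audit_log_read(audit_log_read_bookmark()) USING utf8mb4),
  '$[*]' COLUMNS (
    logged_at   VARCHAR(19)   PATH '$.timestamp',
    login_user  VARCHAR(32)   PATH '$.login.user',
    sql_command VARCHAR(16)   PATH '$.table_access_data.sql_command',
    query       VARCHAR(1024) PATH '$.table_access_data.query'
  )
) AS t
WHERE t.sql_command IS NOT NULL;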
Playground
My application uses the auditee user account, which has the necessary privileges on the s1 schema as well as the AUDIT_ADMIN privilege:
CREATE USER auditee@'%' IDENTIFIED BY 'My5up4rP@sS';
GRANT ALL ON s1.* TO auditee@'%';
GRANT AUDIT_ADMIN ON *.* TO auditee@'%';
and for this demo I’ll use the table s1.t1:
CREATE SCHEMA s1;
USE s1;
CREATE TABLE `t1` (
`id` int NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
);
Inside a session using the auditee@% user account:
SQL> SELECT CURRENT_USER();
+----------------+
| CURRENT_USER() |
+----------------+
| auditee@% |
+----------------+
Let’s run some queries:
SQL> INSERT t1 VALUES (NULL);
Query OK, 1 row affected (0.0029 sec)
SQL> SELECT * FROM t1;
+----+
| id |
+----+
| 1 |
+----+
We can access the audit data through the SQL interface using the audit_log_read() function, as demonstrated earlier:
SELECT JSON_PRETTY(CONVERT(audit_log_read(audit_log_read_bookmark()) USING UTF8MB4))\G
*************************** 1. row ***************************
JSON_PRETTY(CONVERT(audit_log_read(audit_log_read_bookmark()) USING UTF8MB4)): [
{
"id": 0,
"ts": 1751493370,
"class": "table_access",
"event": "insert",
"login": {
"ip": "10.0.0.241",
"os": "",
"user": "auditee",
"proxy": ""
},
"account": {
"host": "jumpserver.xxxxxxxxxx.yyyyyyyy.oraclevcn.com",
"user": "auditee"
},
"timestamp": "2025-07-02 21:56:10",
"connection_id": 31332,
"table_access_data": {
"db": "s1",
"query": "INSERT t1 VALUES (NULL)",
"table": "t1",
"sql_command": "insert"
}
},
null
]
As expected, the INSERT command is logged, not the SELECT.
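As an aside (not applied in the rest of this demo), if you also wanted SELECT statements to show up in the audit log, a variant of the filter could add the read event to the table_access class, be registered under a new name, and then be assigned to the account (assigning a new filter to auditee@% replaces the previous assignment):
-- Variant filter that also logs read (SELECT) events; illustrative only, not used below
SET @dml_read_filter = '{
  "filter": {
    "class": {
      "name": "table_access",
      "event": {
        "name": ["read", "insert", "update", "delete"],
        "log": true
      }
    }
  }
}';
SELECT audit_log_filter_set_filter('dml_and_read_logger', @dml_read_filter);
SELECT audit_log_filter_set_user('auditee@%', 'dml_and_read_logger');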
User auditee is now running the following queries:
UPDATE t1 SET id=100 WHERE id=1;
SELECT * FROM t1;
DELETE FROM t1;
The audit log should contain the two DML statements, but not the SELECT query:
-- events in the log
SELECT JSON_PRETTY(CONVERT(audit_log_read(audit_log_read_bookmark()) USING UTF8MB4))\G
*************************** 1. row ***************************
JSON_PRETTY(CONVERT(audit_log_read(audit_log_read_bookmark()) USING UTF8MB4)): [
{
"id": 0,
"ts": 1751493469,
"class": "table_access",
"event": "update",
"login": {
"ip": "10.0.0.241",
"os": "",
"user": "auditee",
"proxy": ""
},
"account": {
"host": "jumpserver.xxxxxxxxxx.yyyyyyyy.oraclevcn.com",
"user": "auditee"
},
"timestamp": "2025-07-02 21:57:49",
"connection_id": 31332,
"table_access_data": {
"db": "s1",
"query": "UPDATE t1 SET id=100 WHERE id=1",
"table": "t1",
"sql_command": "update"
}
},
{
"id": 1,
"ts": 1751493469,
"class": "table_access",
"event": "delete",
"login": {
"ip": "10.0.0.241",
"os": "",
"user": "auditee",
"proxy": ""
},
"account": {
"host": "jumpserver.xxxxxxxxxx.yyyyyyyy.oraclevcn.com",
"user": "auditee"
},
"timestamp": "2025-07-02 21:57:49",
"connection_id": 31332,
"table_access_data": {
"db": "s1",
"query": "DELETE FROM t1",
"table": "t1",
"sql_command": "delete"
}
},
null
]
Et voilà!
Let’s finish this article with a few more extraction examples.
Starting from a particular timestamp (2025-07-02 21:56:10):
-- At most 3 events at that time
SELECT JSON_PRETTY(CONVERT(audit_log_read('{ "start": { "timestamp": "2025-07-02 21:56:10" }, "max_array_length": 3 }')USING UTF8MB4))\G
*************************** 1. row ***************************
JSON_PRETTY(CONVERT(audit_log_read('{ "start": { "timestamp": "2025-07-02 21:56:10" }, "max_array_length": 3 }')USING UTF8MB4)): [
{
"id": 0,
"ts": 1751493370,
"class": "table_access",
"event": "insert",
"login": {
"ip": "10.0.0.241",
"os": "",
"user": "auditee",
"proxy": ""
},
"account": {
"host": "jumpserver.xxxxxxxxxx.yyyyyyyy.oraclevcn.com",
"user": "auditee"
},
"timestamp": "2025-07-02 21:56:10",
"connection_id": 31332,
"table_access_data": {
"db": "s1",
"query": "INSERT t1 VALUES (NULL)",
"table": "t1",
"sql_command": "insert"
}
},
{
"id": 0,
"ts": 1751493469,
"class": "table_access",
"event": "update",
"login": {
"ip": "10.0.0.241",
"os": "",
"user": "auditee",
"proxy": ""
},
"account": {
"host": "jumpserver.xxxxxxxxxx.yyyyyyyy.oraclevcn.com",
"user": "auditee"
},
"timestamp": "2025-07-02 21:57:49",
"connection_id": 31332,
"table_access_data": {
"db": "s1",
"query": "UPDATE t1 SET id=100 WHERE id=1",
"table": "t1",
"sql_command": "update"
}
},
{
"id": 1,
"ts": 1751493469,
"class": "table_access",
"event": "delete",
"login": {
"ip": "10.0.0.241",
"os": "",
"user": "auditee",
"proxy": ""
},
"account": {
"host": "jumpserver.xxxxxxxxxx.yyyyyyyy.oraclevcn.com",
"user": "auditee"
},
"timestamp": "2025-07-02 21:57:49",
"connection_id": 31332,
"table_access_data": {
"db": "s1",
"query": "DELETE FROM t1",
"table": "t1",
"sql_command": "delete"
}
},
null
]
First event (max_array_length: 1) at that particular timestamp (2025-07-02 21:56:10):
-- first event at that time
SELECT JSON_PRETTY(CONVERT(audit_log_read('{ "start": { "timestamp": "2025-07-02 21:56:10" }, "max_array_length": 1 }')USING UTF8MB4))\G
*************************** 1. row ***************************
JSON_PRETTY(CONVERT(audit_log_read('{ "start": { "timestamp": "2025-07-02 21:56:10" }, "max_array_length": 1 }')USING UTF8MB4)): [
{
"id": 0,
"ts": 1751493370,
"class": "table_access",
"event": "insert",
"login": {
"ip": "10.0.0.241",
"os": "",
"user": "auditee",
"proxy": ""
},
"account": {
"host": "jumpserver.xxxxxxxxxx.yyyyyyyy.oraclevcn.com",
"user": "auditee"
},
"timestamp": "2025-07-02 21:56:10",
"connection_id": 31332,
"table_access_data": {
"db": "s1",
"query": "INSERT t1 VALUES (NULL)",
"table": "t1",
"sql_command": "insert"
}
}
]
Second event (id=1) at that particular timestamp (2025-07-02 21:57:49):
-- event id=1 at that time
SELECT JSON_PRETTY(CONVERT(audit_log_read('{ "timestamp": "2025-07-02 21:57:49", "id": 1 }')USING UTF8MB4))\G
*************************** 1. row ***************************
JSON_PRETTY(CONVERT(audit_log_read('{ "timestamp": "2025-07-02 21:57:49", "id": 1 }')USING UTF8MB4)): [
{
"id": 1,
"ts": 1751493469,
"class": "table_access",
"event": "delete",
"login": {
"ip": "10.0.0.241",
"os": "",
"user": "auditee",
"proxy": ""
},
"account": {
"host": "jumpserver.xxxxxxxxxx.yyyyyyyy.oraclevcn.com",
"user": "auditee"
},
"timestamp": "2025-07-02 21:57:49",
"connection_id": 31332,
"table_access_data": {
"db": "s1",
"query": "DELETE FROM t1",
"table": "t1",
"sql_command": "delete"
}
},
null
]
Peroration
HeatWave MySQL Database Audit offers a powerful, enterprise-grade auditing framework that seamlessly extends MySQL Enterprise Audit capabilities to the cloud. By enabling fine-grained filtering, real-time monitoring, and flexible log access via SQL, it empowers database administrators and security teams to ensure compliance, enhance visibility, and strengthen operational security.
In this article, we demonstrated how to track all DML operations executed by a specific application user using a dedicated audit filter. From enabling the audit plugin to querying structured audit logs, HeatWave makes it straightforward to implement robust auditing practices with minimal overhead.
Whether you’re working toward regulatory compliance, safeguarding sensitive data, or optimizing database performance, HeatWave MySQL Database Audit equips you with the tools needed to meet modern data governance and security demands—efficiently and effectively.
By leveraging HeatWave MySQL Database Audit, you’re not just logging data; you’re building a foundation of trust and accountability for your critical database operations.
Stay tuned for more insights!

Watch my videos on my YouTube channel and subscribe.
Thanks for using HeatWave & MySQL!