Introduction
In Oracle Database environments, a sudden increase in archive log volume is one of the key indicators of unexpected system changes. It can be caused by high transactional load, unauthorized modifications, execution of a heavy script, or even a security attack. In this article, I will share my hands-on experience analyzing this situation with LogMiner and walk through the steps I took.
A special thanks to my dear mentor, Engineer Taimouri, from whom I first learned about this topic.
Detecting Unexpected Growth in Archive Logs
While working with the database, we noticed an unexpected increase in the number of archive logs. To investigate, we first listed the generated logs. Note that if the logs of interest belong to a specific time period or thread, we can filter them with a WHERE clause on the V$ARCHIVED_LOG view.
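Before mining anything, it is worth quantifying the growth. The following is a sketch that sums the daily archive volume from standard V$ARCHIVED_LOG columns (the DEST_ID = 1 filter matches the query below and may differ in your environment):
SELECT TRUNC(COMPLETION_TIME) AS log_day,
       COUNT(*) AS log_count,
       ROUND(SUM(BLOCKS * BLOCK_SIZE) / 1024 / 1024) AS total_mb
FROM V$ARCHIVED_LOG
WHERE DEST_ID = 1
GROUP BY TRUNC(COMPLETION_TIME)
ORDER BY log_day DESC;
With a baseline in hand, we can generate the ADD_LOGFILE calls for the suspicious period: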
SELECT ' DBMS_LOGMNR.ADD_LOGFILE(LogFileName => ''' || NAME ||
''' ,Options => DBMS_LOGMNR.ADDFILE);'
FROM V$ARCHIVED_LOG
WHERE NAME IS NOT NULL
AND DEST_ID = 1
ORDER BY COMPLETION_TIME DESC;
The output of this query is shown below; we will need it in the next step:
DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/u01/app/oracle/product/19c/dbhome/dbs/arch1_64_1184679044.dbf' ,Options => DBMS_LOGMNR.ADDFILE);
DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/u01/app/oracle/product/19c/dbhome/dbs/arch1_63_1184679044.dbf' ,Options => DBMS_LOGMNR.ADDFILE);
DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/u01/app/oracle/product/19c/dbhome/dbs/arch1_62_1184679044.dbf' ,Options => DBMS_LOGMNR.ADDFILE);
Analyzing Changes in Archive Logs Using LogMiner
To determine the reason for this sudden increase, we decided to use LogMiner. First, we added the relevant logs to LogMiner:
BEGIN
DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/u01/app/oracle/product/19c/dbhome/dbs/arch1_64_1184679044.dbf' ,Options => DBMS_LOGMNR.ADDFILE);
DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/u01/app/oracle/product/19c/dbhome/dbs/arch1_63_1184679044.dbf' ,Options => DBMS_LOGMNR.ADDFILE);
DBMS_LOGMNR.START_LOGMNR (Options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
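If you already know the approximate time of the spike, START_LOGMNR also accepts a time window via its StartTime and EndTime parameters. A sketch with placeholder timestamps:
BEGIN
DBMS_LOGMNR.START_LOGMNR (
StartTime => TO_DATE('2024-05-01 08:00:00', 'YYYY-MM-DD HH24:MI:SS'), -- placeholder
EndTime => TO_DATE('2024-05-01 12:00:00', 'YYYY-MM-DD HH24:MI:SS'), -- placeholder
Options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;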
Now, we can proceed with analyzing the data. Here’s a simple example, but you can modify your queries based on your specific requirements:
SELECT username, COUNT(1)
FROM V$LOGMNR_CONTENTS
GROUP BY username
ORDER BY 2 DESC;
The output of this query was:
USERNAME COUNT(1)
--------- ---------
UNKNOWN 506146
SYS 305853
Since I simulated this in a test environment, I only used the SYS user. The UNKNOWN rows are most likely a side effect of supplemental logging being disabled, a point we will return to in the final section.
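In the same spirit, grouping by OPERATION shows which operation types (INSERT, UPDATE, DELETE, DDL, and internal operations) dominate the mined redo:
SELECT operation, COUNT(1)
FROM V$LOGMNR_CONTENTS
GROUP BY operation
ORDER BY 2 DESC;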
Analyzing Changes in Database Tables
To determine what types of operations were performed and which tables were affected, we used the following query:
SELECT OS_USERNAME, MACHINE_NAME, TABLE_NAME, OPERATION, COUNT(*)
FROM V$LOGMNR_CONTENTS
WHERE UPPER(TABLE_NAME) LIKE '%TEST%'
GROUP BY OS_USERNAME, MACHINE_NAME, TABLE_NAME, OPERATION;
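Once a suspicious table stands out, the actual statements can be pulled from SQL_REDO. A sketch; the schema and table names here are placeholders:
SELECT timestamp, operation, sql_redo
FROM V$LOGMNR_CONTENTS
WHERE SEG_OWNER = 'APP' -- placeholder schema
AND TABLE_NAME = 'TEST_TABLE' -- placeholder table
ORDER BY timestamp;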
Detecting Unexpected Data Deletions
If a large amount of data has been deleted from a table, we can use LogMiner to track who performed the deletion and when:
SELECT username, timestamp, operation, sql_redo
FROM V$LOGMNR_CONTENTS
WHERE operation = 'DELETE';
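If the deletion turns out to be a mistake, the SQL_UNDO column holds compensating INSERT statements that can rebuild the deleted rows, provided supplemental logging was active when the deletes ran (see the next section). For example, scoped to a placeholder schema:
SELECT sql_undo
FROM V$LOGMNR_CONTENTS
WHERE operation = 'DELETE'
AND seg_owner = 'SCOTT'; -- placeholder schema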
At the end of the investigation, we should end the LogMiner session with this command:
BEGIN
DBMS_LOGMNR.END_LOGMNR();
END;
Enabling Supplemental Logging for More Detailed Information
One issue we noticed was that some changes were recorded in LogMiner without sufficient detail. According to Oracle Doc ID 750198.1, LogMiner cannot extract useful information from archive logs unless Supplemental Logging is enabled. This is because redo records identify rows only by ROWID, and by default critical details such as usernames and the column values before and after a modification are not stored.
To enable Supplemental Logging, use the following command:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
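Note that this command enables minimal supplemental logging; if you need complete before/after column values, identification-key logging (for example, ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;) may also be required. You can confirm the current status from V$DATABASE:
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;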
In a comparison provided in the referenced Oracle note, the following query was run:
SELECT SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE USERNAME = 'SCOTT';
- Without Supplemental Logging: the query returned no results.
- With Supplemental Logging: the query returned the SQL_REDO and SQL_UNDO statements needed for the analysis.