Analyzing Log Files with LogMiner (Online Redo Logs or Archived Logs)

 
1. Uses of LogMiner
Redo log files hold all of the data needed to recover the database: they record every change made to it, that is, every DML statement executed against the database.
Before Oracle 8i, Oracle provided no tool to help database administrators read and interpret the contents of redo log files. When something went wrong, about all an ordinary DBA could do was package up all the log files, send them to Oracle Support, and wait quietly for Oracle's answer. From 8i onwards, however, Oracle ships a powerful tool for exactly this purpose: LogMiner.
LogMiner can analyze both online and archived redo log files, and it can analyze the redo logs of its own database as well as those of other databases.
In short, the main uses of LogMiner are:
1. Tracking database changes: changes can be tracked offline without affecting the performance of the online system.
2. Undoing database changes: specific changes can be rolled back, reducing the need for point-in-time recovery.
3. Tuning and capacity planning: data growth patterns can be derived by analyzing the data in the log files.
2. Installing LogMiner
To install the LogMiner tool, you must first run the following two scripts:
1. $ORACLE_HOME/rdbms/admin/dbmslm.sql
2. $ORACLE_HOME/rdbms/admin/dbmslmd.sql
Both scripts must be run as the SYS user. The first creates the DBMS_LOGMNR package, which performs the log analysis; the second creates the DBMS_LOGMNR_D package, which is used to create the data dictionary file.
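As a minimal sketch of the installation (assuming a Unix-style layout; "?" is the SQL*Plus shorthand for $ORACLE_HOME, and on Windows you can pass the full script path instead, as the example in section 6 does):
SQL> CONNECT / AS SYSDBA
SQL> @?/rdbms/admin/dbmslm.sql
SQL> @?/rdbms/admin/dbmslmd.sql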
3. Using the LogMiner Tool
3.1 Introduction
The LogMiner tool actually consists of two built-in PL/SQL packages (DBMS_LOGMNR and DBMS_LOGMNR_D) and four V$ dynamic performance views, populated when LogMiner is started with the DBMS_LOGMNR.START_LOGMNR procedure: v$logmnr_dictionary, v$logmnr_parameters, v$logmnr_logs and v$logmnr_contents. Before using LogMiner to analyze redo log files, you can use the DBMS_LOGMNR_D package to export the data dictionary to a text file. This dictionary file is optional, but without it the data-dictionary parts of the statements LogMiner reconstructs (table names, column names and so on) and the column values appear in hexadecimal form and cannot be read directly.
LogMiner comprises two PL/SQL packages and several views:
1. The dbms_logmnr_d package, which contains a single procedure for extracting data dictionary information: dbms_logmnr_d.build().
2. The dbms_logmnr package, which has three main procedures (see the sketch after the view list below):
add_logfile(name varchar2, options number) - adds log files to, or removes them from, the list of files to analyze;
start_logmnr(start_scn number, end_scn number, start_time date, end_time date, dictfilename varchar2, options number) - starts the analysis, defines the time/SCN window and specifies whether an extracted data dictionary is used;
end_logmnr() - ends the analysis session and releases the memory occupied by LogMiner.

Data dictionary views related to LogMiner:
1. v$logmnr_dictionary - information about the data dictionary LogMiner may use; since LogMiner can work with more than one dictionary file, this view shows that information.
2. v$logmnr_parameters - the parameters currently set for LogMiner.
3. v$logmnr_logs - the list of log files currently registered for analysis.
4. v$logmnr_contents - the results of the log analysis.
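Putting these pieces together, a bare-bones analysis session looks roughly like the sketch below (the file names are placeholders, not paths taken from this article); the rest of this section walks through each step in detail:
SQL> EXECUTE dbms_logmnr.add_logfile(LogFileName => '/path/to/redo_or_archive.log', Options => dbms_logmnr.new);
SQL> EXECUTE dbms_logmnr.start_logmnr(DictFileName => '/path/to/dictionary.ora');
SQL> SELECT sql_redo, sql_undo FROM v$logmnr_contents;
SQL> EXECUTE dbms_logmnr.end_logmnr;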
3.2 The Log Analysis Procedure in 9i
3.2.1 Confirm that the initialization parameter UTL_FILE_DIR is set
SQL> show parameter utl;
NAME               TYPE    VALUE
-------------- ----------- ------------------------------
utl_file_dir   string      /dataoracle/oracle/logminer
This shows the current setting of the parameter. If it has no value, edit the database's initsid.ora file, or run a command such as:
SQL> alter system set utl_file_dir='/dataoracle/oracle/logminer' scope=spfile;

Then restart the database so that the new parameter takes effect.
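For example (a sketch; UTL_FILE_DIR is a static parameter, so the instance has to be bounced before the new value is visible):
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP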
Statement for creating the dictionary file:
SQL> exec sys.dbms_logmnr_d.build(dictionary_filename=>'dictionary.ora', dictionary_location=>'/dataoracle/oracle/logminer');
Here dictionary.ora is the name given to the dictionary file (it can be anything you like).
Building it can take anywhere from ten-odd minutes to an hour, depending on how many objects the database has and how busy it is. When it finishes, a file named dictionary.ora appears in the /dataoracle/oracle/logminer directory.
3.2.2 Create the list of log files to analyze
Oracle redo comes in two forms, online redo logs and archived (offline) log files, and the two cases are discussed separately below. Because the number of online and archived log files is very large, it is not really practical to analyze all of them (you can if you insist, provided you have enough space and time and do not mind the impact on database performance); normally you pick out only the logs from the time window you are interested in.
1. Analyzing online redo log files
A. Create the list
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'/oracle/app/oradata/ora9i/redo01.log',
Options=>dbms_logmnr.new);

B. Add further log files to the list
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'/oracle/app/oradata/ora9i/redo02.log',
Options=>dbms_logmnr.addfile);

2. Analyzing archived log files
A. Create the list
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'/dataoracle/oracle/archivedata/1_3938.dbf',
Options=>dbms_logmnr.new);

B. Add further log files to the list
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'/dataoracle/oracle/archivedata/1_3939.dbf',
Options=>dbms_logmnr.addfile);
How many log files you put in the list is entirely up to you, but it is usually best to add only one file at a time and add the next one after the current file has been analyzed.
Correspondingly, a file can be removed from the list with the dbms_logmnr.REMOVEFILE option of add_logfile (there is also a remove_logfile procedure; see the appendix). The following example removes the file /oracle/app/oradata/ora9i/redo02.log that was added above:
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'/oracle/app/oradata/ora9i/redo02.log',
Options=>dbms_logmnr.REMOVEFILE);
By repeating these calls you can register every file you want to analyze. Once the list has been built, the analysis itself can begin.
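Before starting the analysis you can verify which files have been registered by querying v$logmnr_logs, for example:
SQL> SELECT filename, low_time, high_time FROM v$logmnr_logs;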
3.3 Running the Analysis with LogMiner
3.3.1 Without restrictions
SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName=>'/dataoracle/oracle/logminer/dictionary.ora');

Note that dictionary.ora here is the dictionary file created earlier.
Depending on how much data the selected files contain, the analysis may take several hours. Often the DBA does not need everything in these log files, so can the analysis be limited to part of the data?
3.3.2 With restrictions
By setting some of the parameters of the DBMS_LOGMNR.START_LOGMNR procedure (see Table 1 for their meanings), the range of log data to analyze can be narrowed. Setting a start time and an end time restricts the analysis to logs within that time range. The following example analyzes only the logs of 11 January 2005:
SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName => '/dataoracle/oracle/logminer/dictionary.ora',
StartTime => to_date('2005-1-11 00:00:00','YYYY-MM-DD HH24:MI:SS'),
EndTime => to_date('2005-1-11 23:59:59','YYYY-MM-DD HH24:MI:SS'));
The range can also be limited with a starting and an ending SCN:
SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName => '/dataoracle/oracle/logminer/dictionary.ora',
StartScn => 20, EndScn => 50);
When the analysis has finished, the results can be queried from a view named V$LOGMNR_CONTENTS, and it is this view that is used for everything that follows.
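To get a quick feel for what the analysis produced, you can, for instance, count the mined records by operation type (a simple sketch; OPERATION is one of the v$logmnr_contents columns described in section 3.4):
SQL> SELECT operation, COUNT(*) FROM v$logmnr_contents GROUP BY operation;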

Table 1  Parameters of the DBMS_LOGMNR.START_LOGMNR procedure
Parameter      Type             Default  Meaning
StartScn       NUMBER           0        Analyze the portion of the redo with SCN >= StartScn
EndScn         NUMBER           0        Analyze the portion of the redo with SCN <= EndScn
StartTime      DATE             35796    Analyze the portion of the redo with timestamp >= StartTime
EndTime        DATE             397386   Analyze the portion of the redo with timestamp <= EndTime
DictFileName   VARCHAR2                  Dictionary file containing a snapshot of the database catalog; with it, the analysis results come back as readable text rather than the internal hexadecimal form
Options        BINARY_INTEGER   0        Option flags; see the constants documented in the dbmslm.sql package specification in the appendix
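The Options parameter deserves more attention than the table suggests: the package specification reproduced in the appendix (dbmslm.sql) documents constants such as dbms_logmnr.COMMITTED_DATA_ONLY and dbms_logmnr.PRINT_PRETTY_SQL. As a sketch, combining the dictionary file with one of them looks like this:
SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName => '/dataoracle/oracle/logminer/dictionary.ora',
Options => dbms_logmnr.committed_data_only);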

3.4 Viewing the Results (v$logmnr_contents)
At this point the contents of the redo log files have been analyzed. The dynamic performance view v$logmnr_contents holds everything LogMiner has extracted.
SELECT sql_redo FROM v$logmnr_contents;
If you only want to know what a particular user did to a particular table, that can be obtained with a query like the one below, which returns everything the user jdls has done:
SQL> SELECT sql_redo FROM v$logmnr_contents
WHERE username='jdls';
The SQL_REDO column shows the operation that was performed (as a SQL statement), and the SQL_UNDO column shows the SQL needed to reverse it; executing the SQL_UNDO statements in order brings the data back to its state before the change.
Recovering data this way has the advantage that no data is lost; the alternative would be an incomplete recovery.
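As a sketch of that idea (SCOTT.EMP is only an illustrative table, not one taken from this article), you could pull the undo statements for the changes you want to back out and apply them in reverse order of the original operations:
SQL> SELECT sql_undo FROM v$logmnr_contents
WHERE seg_owner = 'SCOTT' AND seg_name = 'EMP'
ORDER BY scn DESC;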
Since every use of LogMiner ultimately goes through the v$logmnr_contents view, here are its most useful columns (a combined query follows this list):
Seg_name: the table name
Scn: the SCN
Sql_redo: the SQL statement that was executed
Sql_undo: the SQL statement that reverses the corresponding sql_redo
Timestamp: the time the SQL was issued
Operation: the type of SQL, such as INSERT, UPDATE, START (set ...), COMMIT (commit), INTERNAL and so on
Session#: the session that issued the operation
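For example, these columns can be combined in a single query (again using the jdls user from the earlier example):
SQL> SELECT scn, timestamp, operation, seg_name, sql_redo, sql_undo
FROM v$logmnr_contents
WHERE username = 'jdls';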
One point worth stressing is that the results in v$logmnr_contents exist only for the lifetime of the session that ran the 'dbms_logmnr.start_logmnr' procedure. All of LogMiner's working storage lives in PGA memory, so no other process can see it, and the results disappear when the process ends.
Finally, call the DBMS_LOGMNR.END_LOGMNR procedure to end the analysis session; the PGA memory is then released and the results cease to exist.
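If you need the results after the session ends, one option (used in the example in section 6) is to copy them into a permanent table before calling end_logmnr:
SQL> CREATE TABLE redologtable NOLOGGING AS
SELECT sql_redo, timestamp FROM v$logmnr_contents;
SQL> EXECUTE dbms_logmnr.end_logmnr;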
4. Other Points to Note
LogMiner can be used to analyze redo log files produced by other database instances, not just the redo logs of the instance on which LogMiner itself is installed. When analyzing another instance's logs, keep the following in mind:
1. LogMiner must use a dictionary file produced by the database being analyzed, not one produced by the database running LogMiner, and the character set of the database running LogMiner must match that of the database being analyzed.
2. The platform of the database being analyzed must be the same as the platform of the database running LogMiner. If the files to be analyzed were produced by Oracle 8i running on UNIX, LogMiner must also run on an Oracle instance on UNIX, not on, say, Microsoft NT. The hardware does not have to be identical, however.
3. LogMiner can only analyze logs from Oracle 8 and later; it cannot handle logs from earlier releases.

5. Conclusion
LogMiner is a very powerful tool for the database administrator and one that is frequently needed in day-to-day work; with it, a great deal of information about database activity can be obtained. One of its most important uses is recovering a specific change to the database without performing a full restore. The tool can also be used to monitor or audit user activity: for example, you can see who modified which data and what the data looked like before the change. It can analyze redo log files produced by any Oracle 8 or later release, and, just as importantly, it can analyze log files from other databases. In short, LogMiner is a highly effective tool for the DBA, and understanding it well pays off in everyday practice.
6. Examples
SQL> @E:\oracle\ora92\rdbms\admin\dbmslm.sql
Package created.
Grant succeeded.
SQL> @E:\oracle\ora92\rdbms\admin\dbmslmd.sql
Procedure created.
No errors.
Grant succeeded.
PL/SQL procedure successfully completed.
Package created.
SQL> exec dbms_logmnr_d.build('wsrz.ora','e:\oracle');
PL/SQL procedure successfully completed.
SQL> exec dbms_logmnr.add_logfile('&log_file',dbms_logmnr.new);
Enter value for log_file:  E:\oracle\ora92\rdbms\ARC00037.001
PL/SQL procedure successfully completed.
SQL> select low_time,high_time from v$logmnr_logs;
LOW_TIME   HIGH_TIME
---------- ----------
22-AUG-07  23-AUG-07
SQL> alter session set nls_date_format='yyyy-mm-dd:hh24:mi:ss';
Session altered.
SQL> exec dbms_logmnr.start_logmnr(dictfilename=>'e:\oracle\wsrz.ora');
PL/SQL procedure successfully completed.
SQL> select sql_redo,TIMESTAMP from v$logmnr_contents
  2  where upper(sql_redo) like '%delete%';
no rows selected
SQL> select sql_redo,TIMESTAMP from v$logmnr_contents
  2  where upper(sql_redo) like '%update%';
SQL> exec dbms_logmnr.end_logmnr;
PL/SQL procedure successfully completed.
(Note that upper(sql_redo) never matches the lowercase patterns '%delete%' and '%update%', which is why these queries return nothing; use '%DELETE%' and '%UPDATE%' to find those statements.)



A second example, this time on Oracle 11g Release 2 for Windows:

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Users\Administrator>sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 18 16:44:11 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

SP2-0640: Not connected
SQL> exit;

C:\Users\Administrator>net start oracleservicefsyd
The OracleServiceFSYD service is starting .................
The OracleServiceFSYD service was started successfully.


C:\Users\Administrator>lsnrctl start

LSNRCTL for 64-bit Windows: Version 11.2.0.1.0 - Production on 18-FEB-2014 17:05:15

Copyright (c) 1991, 2010, Oracle.  All rights reserved.

Starting tnslsnr: please wait...

TNSLSNR for 64-bit Windows: Version 11.2.0.1.0 - Production
System parameter file is E:\Oracle11gR2\Administrator\product\11.2.0\dbhome_1\network\admin\listener.ora
Log messages written to e:\oracle11gr2\administrator\diag\tnslsnr\WinXP-543\listener\alert\log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1521ipc)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=WinXP-543)(PORT=1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 64-bit Windows: Version 11.2.0.1.0 - Production
Start Date                18-FEB-2014 17:05:18
Uptime                    0 days 0 hr. 0 min. 3 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   E:\Oracle11gR2\Administrator\product\11.2.0\dbhome_1\network\admin\listener.ora
Listener Log File         e:\oracle11gr2\administrator\diag\tnslsnr\WinXP-543\listener\alert\log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1521ipc)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=WinXP-543)(PORT=1521)))
Services Summary...
Service "CLRExtProc" has 1 instance(s).
  Instance "CLRExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

C:\Users\Administrator>conn sys/fsyd as sysdba
'conn' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\Administrator>sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 18 17:05:38 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

SP2-0640: Not connected
SQL> conn sys/fsyd as sysdb
SP2-0306: Invalid option.
Usage: CONN[ECT] [{logon|/|proxy} [AS {SYSDBA|SYSOPER|SYSASM}] [edition=value]]
where <logon> ::= <username>[/<password>][@<connect_identifier>]
      <proxy> ::= <proxyuser>[<username>][/<password>][@<connect_identifier>]
SQL> conn sys/fsyd as sysdba
Connected.

Session altered.

SQL> show parameter utl;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
create_stored_outlines               string
utl_file_dir                         string      E:\testLogMiner
SQL> exec sys.dbms_logmnr_d.build(dictionary_filename=>'dictionary_logminer_redolog.ora',
BEGIN sys.dbms_logmnr_d.build(dictionary_filename=>'dictionary_logminer_redolog.ora',; END;

                                                                                     *
ERROR at line 1:
ORA-06550: line 1, column 86:
PLS-00103: Encountered the symbol ";" when expecting one of the following:
( - + case mod new not null
<an identifier> <a double-quoted delimited-identifier>
<a bind variable> continue avg count current exists max min
prior sql stddev sum variance execute forall merge time
timestamp interval date
<a string literal with character set specification>
<a number> <a single-quoted SQL string> pipe
<an alternatively-quoted string literal with character set specification>
<an alternatively-quoted SQL string> purge


SQL> exec sys.dbms_logmnr_d.build(dictionary_filename=>'dictionary_logminer_redolog.ora',dictionary_location='E:\testLogMiner');
BEGIN sys.dbms_logmnr_d.build(dictionary_filename=>'dictionary_logminer_redolog.ora',dictionary_location='E:\testLogMiner'); END;

                                                                                     *
ERROR at line 1:
ORA-06550: line 1, column 86:
PLS-00201: identifier 'DICTIONARY_LOCATION' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored


SQL> exec sys.dbms_logmnr_d.build(dictionary_filename=>'dictionary_logminer_redolog.ora',dictionary_location=>'E:\testLogMiner');

PL/SQL procedure successfully completed.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS            FIRST_CHANGE# FIRST_TIME             NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- -------------- ------------------- --------------- -------------------
         1          1        622   52428800        512          1 NO  CURRENT               283225832 2014-02-18 17:16:37 281474976710655
         2          1        620   52428800        512          1 NO  INACTIVE              283220165 2014-02-18 17:16:14       283222973 2014-02-18 17:16:26
         3          1        621   52428800        512          1 NO  INACTIVE              283222973 2014-02-18 17:16:26       283225832 2014-02-18 17:16:37

SQL> select * from v$logfile;

    GROUP# STATUS  TYPE    MEMBER                                                IS_RECOVERY_DEST_FILE
---------- ------- ------- ----------------------------------------------------- ---------------------
         3         ONLINE  E:\ORACLE11GR2\ADMINISTRATOR\ORADATA\FSYD\REDO03.LOG  NO
         2         ONLINE  E:\ORACLE11GR2\ADMINISTRATOR\ORADATA\FSYD\REDO02.LOG  NO
         1         ONLINE  E:\ORACLE11GR2\ADMINISTRATOR\ORADATA\FSYD\REDO01.LOG  NO


SQL> execute dbms_logmnr.add_logfile(logfilename=>'E:\APP\ADMINISTRATOR\ORADATA\FSYD\REDO03.LOG',options=>dbms_logmnr.new);
BEGIN dbms_logmnr.add_logfile(logfilename=>'E:\APP\ADMINISTRATOR\ORADATA\FSYD\REDO03.LOG',options=>dbms_logmnr.new); END;

*
ERROR at line 1:
ORA-01284: file E:\APP\ADMINISTRATOR\ORADATA\FSYD\REDO03.LOG cannot be opened
ORA-00308: cannot open archived log 'E:\APP\ADMINISTRATOR\ORADATA\FSYD\REDO03.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) The system cannot find the path specified.
ORA-06512: at "SYS.DBMS_LOGMNR", line 68
ORA-06512: at line 1


SQL> execute dbms_logmnr.add_logfile(logfilename=>'E:\ORACLE11GR2\ADMINISTRATOR\ORADATA\FSYD\REDO03.LOG',options=>dbms_logmnr.new);

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.start_logmnr(dictfilename => 'E:\testLogMiner\dictionary_logminer_redolog.ora',starttime => to_date('2014-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS'),endtime => to_date('2014-02-17 23:59:59','YYYY-MM-DD HH24:MI:SS '));
BEGIN dbms_logmnr.start_logmnr(dictfilename => 'E:\testLogMiner\dictionary_logminer_redolog.ora',starttime => to_date('2014-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS'),endtime => to_date('2014-02-17 23:59:59','YYYY-MM-DD HH24:MI:SS ')); END;

*
ERROR at line 1:
ORA-01291: missing logfile
ORA-06512: at "SYS.DBMS_LOGMNR", line 58
ORA-06512: at line 1

SQL> exec dbms_logmnr.add_logfile('&log_file',dbms_logmnr.new);
Enter value for log_file:  E:\ORACLE11GR2\ADMINISTRATOR\ORADATA\FSYD\REDO03.LOG

PL/SQL procedure successfully completed.

SQL> select low_time,high_time from v$logmnr_logs;

LOW_TIME            HIGH_TIME
------------------- -------------------
2014-02-18 17:16:26 2014-02-18 17:16:37

SQL> alter session set nls_date_format='yyyy-mm-dd:hh24:mi:ss';

Session altered.

SQL>  execute dbms_logmnr.start_logmnr(dictfilename => 'E:\testLogMiner\dictionary_logminer_redolog.ora');

PL/SQL procedure successfully completed.

SQL> create table redologtable nologging as select sql_redo,TIMESTAMP from v$logmnr_contents;

Table created.

SQL> select count(*) from redologtable;

  COUNT(*)
----------
    158180

SQL> select * from redologtable;

SQL_REDO                                                                                                                                                TIMESTAMP
------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------
delete from "SYS"."WRI$_OPTSTAT_HISTGRM_HISTORY" where "OBJ#" = '162266' and "INTCOL#" = '11' and "SAVTIME" = TO_TIMESTAMP_TZ('13-JAN-14 11.02.36.907000 PM +08:00') and "BUCKET" = '43' and "ENDPOINT" = '2.34' and "EPVALUE" IS NULL and "COLNAME" IS NULL and "SPARE1" IS NULL and "SPARE2" IS NULL and "SPARE3" IS NULL and "SPARE4" IS NULL and "SPARE5" IS NULL and "SPARE6" IS NULL and ROWID = 'AAAAHZAACAAAS53AAA';           18-FEB-14
delete from "SYS"."WRI$_OPTSTAT_HISTGRM_HISTORY" where "OBJ#" = '162266' and "INTCOL#" = '11' and "SAVTIME" = TO_TIMESTAMP_TZ('13-JAN-14 11.02.36.907000 PM +08:00') and "BUCKET" = '46' and "ENDPOINT" = '2.35' and "EPVALUE" IS NULL and "COLNAME" IS NULL and "SPARE1" IS NULL and "SPARE2" IS NULL and "SPARE3" IS NULL and "SPARE4" IS NULL and "SPARE5" IS NULL and "SPARE6" IS NULL and ROWID = 'AAAAHZAACAAAS53AAB';           18-FEB-14
delete from "SYS"."WRI$_OPTSTAT_HISTGRM_HISTORY" where "OBJ#" = '162266' and "INTCOL#" = '11' and "SAVTIME" = TO_TIMESTAMP_TZ('13-JAN-14 11.02.36.907000 PM +08:00') and "BUCKET" = '47' and "ENDPOINT" = '2.36' and "EPVALUE" IS NULL and "COLNAME" IS NULL and "SPARE1" IS NULL and "SPARE2" IS NULL and "SPARE3" IS NULL and "SPARE4" IS NULL and "SPARE5" IS NULL and "SPARE6" IS NULL and ROWID = 'AAAAHZAACAAAS53AAC';           18-FEB-14
delete from "SYS"."WRI$_OPTSTAT_HISTGRM_HISTORY" where "OBJ#" = '162266' and "INTCOL#" = '11' and "SAVTIME" = TO_TIMESTAMP_TZ('13-JAN-14 11.02.36.907000 PM +08:00') and "BUCKET" = '48' and "ENDPOINT" = '2.37' and "EPVALUE" IS NULL and "COLNAME" IS NULL and "SPARE1" IS NULL and "SPARE2" IS NULL and "SPARE3" IS NULL and "SPARE4" IS NULL and "SPARE5" IS NULL and "SPARE6" IS NULL and ROWID = 'AAAAHZAACAAAS53AAD';           18-FEB-14
delete from "SYS"."WRI$_OPTSTAT_HISTGRM_HISTORY" where "OBJ#" = '162266' and "INTCOL#" = '11' and "SAVTIME" = TO_TIMESTAMP_TZ('13-JAN-14 11.02.36.907000 PM +08:00') and "BUCKET" = '49' and "ENDPOINT" = '2.38' and "EPVALUE" IS NULL and "COLNAME" IS NULL and "SPARE1" IS NULL and "SPARE2" IS NULL and "SPARE3" IS NULL and "SPARE4" IS NULL and "SPARE5" IS NULL and "SPARE6" IS NULL and ROWID = 'AAAAHZAACAAAS53AAE';           18-FEB-14
delete from "SYS"."WRI$_OPTSTAT_HISTGRM_HISTORY" where "OBJ#" = '162266' and "INTCOL#" = '11' and "SAVTIME" = TO_TIMESTAMP_TZ('13-JAN-14 11.02.36.907000 PM +08:00') and "BUCKET" = '55' and "ENDPOINT" = '2.39' and "EPVALUE" IS NULL and "COLNAME" IS NULL and "SPARE1" IS NULL and "SPARE2" IS NULL and "SPARE3" IS NULL and "SPARE4" IS NULL and "SPARE5" IS NULL and "SPARE6" IS NULL and ROWID = 'AAAAHZAACAAAS53AAF';           18-FEB-14
delete from "SYS"."WRI$_OPTSTAT_HISTGRM_HISTORY" where "OBJ#" = '162266' and "INTCOL#" = '11' and "SAVTIME" = TO_TIMESTAMP_TZ('13-JAN-14 11.02.36.907000 PM +08:00') and "BUCKET" = '59' and "ENDPOINT" = '2.42' and "EPVALUE" IS NULL and "COLNAME" IS NULL and "SPARE1" IS NULL and "SPARE2" IS NULL and "SPARE3" IS NULL and "SPARE4" IS NULL and "SPARE5" IS NULL and "SPARE6" IS NULL and ROWID = 'AAAAHZAACAAAS53AAG';           18-FEB-14


Appendix: dbmslm.sql (the DBMS_LOGMNR package specification)

create or replace PACKAGE dbms_logmnr IS

  --------------------
  -- OVERVIEW
  -- 
  --   This package contains the procedures used by LogMiner ad-hoc query
  --   interface that allows for redo log stream analysis. 
  --   There are three procedures and two functions available to the user:
  --   dbms_logmnr.add_logfile()    : to register logfiles to be analyzed
  --   dbms_logmnr.remove_logfile() : to remove logfiles from being analyzed
  --   dbms_logmnr.start_logmnr()   : to provide window of analysis and
  --                                  meta-data information
  --   dbms_logmnr.end_logmnr()     : to end the analysis session
  --   dbms_logmnr.column_present() : whether a particular column value
  --                                  is present in a redo record
  --   dbms_logmnr.mine_value()     : extract data value from a redo record
  --

  ---------------------------
  --  PROCEDURE INFORMATION:
  --  #1 dbms_logmnr.add_logfile(): 
  --     DESCRIPTION:
  --       Registers a redo log file with LogMiner. Multiple redo logs can be
  --       registered by calling the procedure repeatedly. The redo logs 
  --       do not need to be registered in any particular order.
  --       Both archived and online redo logs can be mined.  If a successful 
  --       call to the procedure is made a call to start_logmnr() must be 
  --       made before selecting from v$logmnr_contents.
  --
  --     CONSTANTS:
  --       dbms_logmnr.NEW:  Starts a new list. Any file registered prior to
  --         this call is discarded.
  --       dbms_logmnr.ADDFILE:  Adds the redo log to the existing
  --         list. LogMiner checks to make sure that the redo log is from
  --         the same database (DB_ID and RESETLOGS_SCN) incarnation as the
  --         ones previously added.
  --
  --     EXCEPTIONS:
  --       ORA: 1284   Redo log file specified can not be opened.
  --       ORA: 1285   Error reading the header of the redo log file
  --       ORA: 1286   Redo log file specified is from a database with a
  --                   different DB_ID
  --       ORA: 1287   Redo log file specified is from a database with
  --                   different incarnation
  --       ORA: 1289   Redo log file specified is a duplicate of a previously
  --                   specified redo log. LogMiner matches redo logs by the
  --                   log sequence number. Thus two redo logs with different
  --                   names but with the same log sequence# (for instance
  --                   the online counterpart of an archived redo log has 
  --                   a different name, but attempting to register it with
  --                   LogMiner after registering the archived counterpart 
  --                   will return this exception).
  --
  --  #2 dbms_logmnr.remove_logfile()
  --     DESCRIPTION:
  --       Unregisters a redo log file from LogMiner. Multiple redo logs can be
  --       unregistered by calling the procedure repeatedly. The redo logs 
  --       do not need to be unregistered in any particular order.  If a 
  --       successful call to the procedure is made a call to start_logmnr() 
  --       must be made before selecting from v$logmnr_contents.
  --
  --     EXCEPTIONS:
  --       ORA: 1290   Attempt was made to remove a redo log that has not been
  --                   registered with LogMiner
  --
  --  #3 dbms_logmnr.start_logmnr()
  --     DESCRIPTION:
  --       Loads the data dictionary used by LogMiner to translate internal
  --       schema object identifiers to names. The redo stream does not
  --       contain names of schema objects and columns. The data dictionary
  --       extract can be provided in three ways: 
  --         (i) use Oracle's online catalog. This is only valid when the 
  --         mining of redo logs is done in the same system that generated
  --         them.
  --         (ii) use data dictionary extract taken to a flat file. 
  --         (See description of dbms_logmnr_d.build())
  --         (iii) use data dictionary extracted in the redo stream. This
  --         option can ONLY be used when LogMiner is being run on an open
  --         database, and the source and the mining database instances are
  --         Oracle9i or higher.
  --       The user can also restrict the analysis inside an SCN range or a 
  --       time range. If both SCN range and time range are specified, only
  --       the SCN range is used. 
  --       The user needs to be mindful of the fact that use of time range
  --       can be imprecise.  If a start_time or start_scn is specified, it 
  --       must be contained in a redo log added by a previous call to 
  --       dbms_logmnr.add_logfile().  If a start_time and start_scn is not 
  --       specified, LogMiner will set it based on the earliest added redo 
  --       log.  If a end_time or end_scn is specified and it is beyond the 
  --       latest added redo log, LogMiner will overwrite the end_time and 
  --       and end_scn with information from the latest added redo log.  When 
  --       the CONTINUOUS_MINE option is in use the semantics of 
  --       start and end time/scn ranges may be different.  
  --       See additional documentation below.
  --
  --       CONSTANTS (used in options parameter)
  --       dbms_logmnr.NO_DICT_RESET_ONSELECT:  (will be deprecated soon)
  --       dbms_logmnr.COMMITTED_DATA_ONLY: Groups DMLs belonging to the
  --         same transaction. Transactions are shown in their commit order.
  --         Internal redo records (those related to index operations, space
  --         management etc) are filtered out. So are rolled back
  --         transactions, rollback to savepoints and in-flight transactions.
  --       dbms_logmnr.SKIP_CORRUPTION: Usually LogMiner returns an error
  --         on encountering corrupt redo records. With this option set
  --         LogMiner will skip the corrupted redo records and continue
  --         mining. LogMiner can not handle a redo log that has a corrupt
  --         header.
  --       dbms_logmnr.DDL_DICT_TRACKING: LogMiner will apply the DDL
  --         statements encountered in the redo stream to its internal
  --         dictionary. Only available with Oracle9i redo logs and later.
  --         Mining database needs to be open.
  --       dbms_logmnr.DICT_FROM_ONLINE_CATALOG: Use the online data
  --         dictionary for SQL reconstruction. Mining database must be the
  --         same one that generated the redo logs. User should expect to 
  --         see "Dictionary Version Mismatch" in SQL_REDO if the current
  --         object version is newer than the ones encountered in the redo 
  --         stream.
  --       dbms_logmnr.DICT_FROM_REDO_LOGS: Use the dictionary extract logged
  --         in the redo stream.
  --       dbms_logmnr.NO_SQL_DELIMITER: By default, the SQL_REDO and SQL_UNDO
  --         statements are delimited with a ';'. However, this is
  --         inconvenient for applications that want to open a cursor and
  --         execute the reconstructed statements. With this option set,
  --         the SQL_DELIMITER is not placed at the end of reconstructed 
  --         statements.
  --       dbms_logmnr.NO_ROWID_IN_STMT: By default, the SQL_REDO and SQL_UNDO
  --         statements for UPDATE and DELETE operations contain a 'ROWID = '
  --         in the where clause.  However, this is inconvenient for 
  --         applications that want to re-execute the SQL statement.  With 
  --         this option set, 'ROWID' is not placed at the end of reconstructed
  --         statements.  Note: The onus is on the user to ensure that 
  --         supplemental logging was enabled in the source database at the 
  --         appropriate level and that no duplicate rows exist in tables of 
  --         interest.  LogMiner Adhoc Query does NOT make any guarantee 
  --         regarding uniqueness of logical row identifiers.
  --       dbms_logmnr.PRINT_PRETTY_SQL: Useful for basic report for
  --         analysis. With large number of columns the reconstructed
  --         SQL statements become visually busy. With this option set
  --         LogMiner formats the reconstructed SQL statements for ease
  --         of reading. The reconstructed SQL statements look as follows:
  --            insert into "SCOTT"."EMP" values
  --              EMPNO: 101010,
  --              ENAME: "Valued Employee",
  --              SAL:   101010,
  --              DEPT:  NULL;
  --             update "SCOTT"."EMP" 
  --              set
  --              "EMPNO" = 101011 and
  --              "SAL"   = 101011
  --              where
  --              "EMPNO" = 101010 and
  --              "SAL"   = 101010 and
  --              ROWID   = AABBCEXFGHA;
  --       dbms_logmnr.CONTINUOUS_MINE: Need to mine in the same instance
  --         that is generating the redo logs. The user needs to register
  --         only one archived log file. LogMiner will automatically add
  --         and mine subsequent archived redo logs, and eventually 
  --         mine online logfiles.
  --       dbms_logmnr.STRING_LITERALS_IN_STMT: By default, the SQL_REDO and 
  --         SQL_UNDO statements honor a database session's NLS setting 
  --         (e.g. NLS_DATE_FORMAT, NLS_NUMERIC_CHARACTERS, etc.) when 
  --         formatting the reconstructed SQL statements.  With this option, 
  --         the reconstructed SQL statements will be formatted using ANSI/ISO 
  --         string literal formats.
  --
  --    EXCEPTIONS:
  --      ORA: 1281     startScn or endSCN parameter specified is not a valid 
  --                    SCN or endScn is greater than startScn
  --      ORA: 1282     startTime parameter is greater than year 2110 or 
  --                    endTime parameter is greater than year 2110 or 
  --                    startTime parameter is less than year 1988
  --      ORA: 1283     The value specified in the Options parameter is not a 
  --                    NUMBER or is not a known LogMiner Adhoc option
  --      ORA: 1284     The dictionary file specified in the DictFileName 
  --                    parameter has a full path length greater than 256 or 
  --                    cannot be opened
  --      ORA: 1285     DictFileName parameter is not a valid VARCHAR2
  --      ORA: 1291     Redo files are missing which are needed to satisfy 
  --                    the user's requested SCN/time range.
  --                    The user can specify ALLOW_MISSING_LOGS option.
  --                    Missing logs are not allowed under any circumstance 
  --                    when DDL tracking is in use
  --      ORA: 1292     No log file has been registered with LogMiner
  --      ORA: 1293     Mounted database required for options specified 
  --                    (CONTINUOUS_MINE)
  --      ORA: 1294     Error while processing the data dictionary extract
  --      ORA: 1295     DB_ID of the data dictionary does not match that of
  --                    the redo logs
  --      ORA: 1296     Character set specified in the data dictionary does
  --                    not match (and is incompatible with) that of the 
  --                    mining database
  --      ORA: 1297     Redo version mismatch between the dictionary and
  --                    the registered redo logs
  --      ORA: 1298     More than one dictionary source was specified or 
  --                    DDL_DICT_TRACKING was requested with 
  --                    DICT_FROM_ONLINE_CATALOG
  --      ORA: 1299     Dictionary is from a different database incarnation
  --      ORA: 1300     Writable database required for options specified 
  --                    (DDL_DICT_TRACKING, DICT_FROM_REDO_LOGS, 
  --                     DICT_FROM_ONLINE_CATALOG)
  --      ORA: 1371     A logfile containing the dictionary dump to redo logs 
  --                    is missing
  --      ORA: 1286     Options specified require start time or start SCN
  --                 
  --  #4 dbms_logmnr.end_logmnr()
  --     DESCRIPTION:
  --       Ends the LogMiner session. Releases all PGA memory allocated 
  --       to stage internal data structures etc.
  --
  --     EXCEPTIONS:
  --       ORA: 1307    No LogMiner session is currently active. 
  --                    Attempt to end_logmnr() without calling
  --                    add_logfile() or start_logmnr()
  --
  --  #5 dbms_logmnr.mine_value()
  --     DESCRIPTION: 
  --       This facilitates query by data value. For instance, the user
  --       can formulate a query that says "Show me all updates to
  --       SCOTT.EMP where the SAL column is updated to twice its
  --       original value"
  --       select sql_redo from v$logmnr_contents where
  --           operation = 'UPDATE' and 
  --           owner_name = 'SCOTT' and seg_name = 'EMP' and
  --         dbms_logmnr.mine_value(redo_value, 'SCOTT.EMP.SAL') > 
  --         2* dbms_logmnr.mine_value(undo_value, 'SCOTT.EMP.SAL');
  --      The function returns NULL if the column does not exist in
  --      the redo record or if the column value is actually null.
  --      To decipher between the two different null possibilities
  --      use dbms_logmnr.column_present() function.
  --      
  --      PARAMETERS:
  --        sql_redo_undo:  which column in v$logmnr_contents to 
  --        extract data value from
  --        column_name:    fully qualified column name of the 
  --        column that needs to be extracted
  --
  --      EXCEPTIONS:
  --      ORA 1302:     Specified table or column does not exist
  --       
  --  #6 dbms_logmnr.column_present()
  --     DESCRIPTION: 
  --       Can be used to decipher null returns from mine_value function
  --       The query described above can be rewritten to filter out
  --       redo records that do not contain update to the 'SAL' 
  --       columns
  --         select sql_redo from v$logmnr_contents where
  --           operation = 'UPDATE' and
  --           owner_name = 'SCOTT' and seg_name = 'EMP' and
  --           dbms_logmnr.mine_value(redo_value, 'SCOTT.EMP.SAL') > 
  --           2* dbms_logmnr.mine_value(undo_value, 'SCOTT.EMP.SAL') and
  --           dbms_logmnr.column_present(redo_value, 'SCOTT.EMP.SAL');
  -- 
  --      PARAMETERS:
  --        sql_redo_undo:  which column in v$logmnr_contents to 
  --        extract data value from
  --        column_name:    fully qualified column name of the 
  --        column that needs to be extracted
  --
  --      EXCEPTIONS:
  --      ORA 1302:     Specified table or column does not exist
  --       
  ---------------------------------
  
-----------------------------------
-- SUBTYPES and related CONSTANTS
--
 
--
-- Constants for add_archivelog options flag

NEW                       CONSTANT BINARY_INTEGER := 1;
REMOVEFILE                CONSTANT BINARY_INTEGER := 2;
ADDFILE                   CONSTANT BINARY_INTEGER := 3;

--
-- Constants for start_logmnr options flag
NO_DICT_RESET_ONSELECT    CONSTANT BINARY_INTEGER := 1;
COMMITTED_DATA_ONLY       CONSTANT BINARY_INTEGER := 2;
SKIP_CORRUPTION           CONSTANT BINARY_INTEGER := 4;
DDL_DICT_TRACKING         CONSTANT BINARY_INTEGER := 8;
DICT_FROM_ONLINE_CATALOG  CONSTANT BINARY_INTEGER := 16;
DICT_FROM_REDO_LOGS       CONSTANT BINARY_INTEGER := 32;
NO_SQL_DELIMITER          CONSTANT BINARY_INTEGER := 64;
PRINT_PRETTY_SQL          CONSTANT BINARY_INTEGER := 512;
CONTINUOUS_MINE           CONSTANT BINARY_INTEGER := 1024;
NO_ROWID_IN_STMT          CONSTANT BINARY_INTEGER := 2048;
STRING_LITERALS_IN_STMT   CONSTANT BINARY_INTEGER := 4096;
--
SUBTYPE Length            IS BINARY_INTEGER;
SUBTYPE ThreadId          IS BINARY_INTEGER;

--
-- Constants for STATUS column of v$logmnr_contents
-- NOTE: Make sure that new ones match the values defined
-- in the krvfsri struct in krv0.h
VALID_SQL                 CONSTANT BINARY_INTEGER := 0;
INVALID_SQL               CONSTANT BINARY_INTEGER := 2;
UNGUARANTEED_SQL          CONSTANT BINARY_INTEGER := 3;
CORRUPTED_BLK_IN_REDO     CONSTANT BINARY_INTEGER := 4;
ASSEMBLY_REQUIRED_SQL     CONSTANT BINARY_INTEGER := 5;
HOLE_IN_LOGSTREAM         CONSTANT BINARY_INTEGER := 1291;

-- Workaround for the lack of constrained subtypes

LogFileNameTemplate          VARCHAR2(256);  
SUBTYPE LogFileName          IS LogFileNameTemplate%TYPE;
LogFileDescTemplate          VARCHAR2(256);  
SUBTYPE LogFileDescription   IS LogFileDescTemplate%TYPE;


-------------
-- PROCEDURES 
--

---------------------------------------------------------------------------
---------------------------------------------------------------------------
-- Initialize LOGMINER
--
-- Supplies LOGMINER with the list of filenames and SCNs required
-- to initialize the tool.  Once this procedure completes, the server is ready
-- to process selects against the v$logmnr_contents fixed view.
--
---------------------------------------------------------------------------

PROCEDURE start_logmnr( 
     startScn           IN  NUMBER default 0 ,
     endScn 		IN  NUMBER default 0,
     startTime      	IN  DATE default '',
     endTime        	IN  DATE default '',
     DictFileName    	IN  VARCHAR2 default '',
     Options		IN  BINARY_INTEGER default 0 );

PROCEDURE add_logfile( 
     LogFileName    	IN  VARCHAR2,
     Options		IN  BINARY_INTEGER default ADDFILE );

PROCEDURE end_logmnr;

FUNCTION column_present(
     sql_redo_undo      IN  NUMBER default 0,
     column_name        IN  VARCHAR2 default '') RETURN BINARY_INTEGER;

FUNCTION mine_value(
     sql_redo_undo      IN  NUMBER default 0,
     column_name        IN  VARCHAR2 default '') RETURN VARCHAR2;

PROCEDURE remove_logfile( 
     LogFileName    	IN  VARCHAR2);

---------------------------------------------------------------------------

pragma TIMESTAMP('1998-05-05:11:25:00');

END;
/
grant execute on dbms_logmnr to execute_catalog_role;
create or replace public synonym dbms_logmnr for sys.dbms_logmnr;

Appendix: dbmslmd.sql (the DBMS_LOGMNR_D package specification)


Rem
Rem $Header: dbmslmd.sql 13-feb-2006.15:15:25 ajadams Exp $
Rem
Rem dbmslmd.sql
Rem
Rem Copyright (c) 1998, 2006, Oracle. All rights reserved.  
Rem
Rem    NAME
Rem      dbmslmd.sql - DBMS Logminer Dictionary package specification 
Rem      for DBMS_LOGMNR_D 
Rem
Rem    DESCRIPTION
Rem	 This file contains the logminer package specification for DBMS_LOGMNR_D 
Rem
Rem    NOTES
Rem
Rem    MODIFIED   (MM/DD/YY)
Rem    ajadams     02/13/06 - create synonym 
Rem    abrown      09/13/05 - bug 3776830: unwind dictionary 
Rem    jnesheiw    02/17/05 - Bug 4028220 Relocated change history and logmnr
Rem                           metadata creation to prvtlmd.sql
Rem    doshaugh    04/14/98 - Created
Rem
Rem
Rem  PUBLIC PROCEDURES
Rem
Rem     BUILD (FileName, FileLocation, Options)
Rem 
Rem     SET_TABLESPACE(NewTablespace);
Rem
Rem  PUBLIC CONSTANTS
Rem
Rem     STORE_IN_FLAT_FILE
Rem
Rem     STORE_IN_REDO_LOGS
Rem
Rem     MARK_SAFE_MINE_POINT
Rem     
Rem
Rem

-- --------------------------------------------------------------
--
CREATE or REPLACE PACKAGE dbms_logmnr_d AS
--
--    PACKAGE NAME
--      dbms_logmnr_d
--
--    DESCRIPTION
--      This package contains Logminer Dictionary related procedures.
--      "build" is used to gather the logminer dictionary.
--
--      "set_tablespace" is used to alter the default tablespace of
--      Logminer tables.
--
--      BUILD
--      The logminer dictionary can be gathered
--      into a flat file (Logminer V1 behavior) or it can be gathered
--      into the redo log stream.
--
--      When creating a Flat File dictionary the procedure queries the
--      dictionary tables of the current database and creates a text based
--      file containing their contents. Each table is represented by
--      "pseudo" SQL statements. A description of the columns in a 
--      table is created by a "CREATE_TABLE" line (one statement for
--      table). It contains the name, datatype and length for each 
--      column. A "INSERT_INTO" statement is created for each row in a 
--      selected table. It contains the values for each row. The file
--      is created in preparation of future analysis of databases
--      log files using the logminer tool.
--
--      When gathering the system dictionary into the logstream the procedure
--      queries the dictionary tables inserting the results into a special
--      set of Logminer Gather tables (SYS.LOGMNRG_*).  A side effect of
--      each query is that the resultant inserts cause redo to be generated.
--      Down stream processing can mine this redo to determine the contents
--      of this system's system dictionary at the time this procedure was
--      executed.
-- 
--      NOTE:  Database must be in "Archivelog Mode" and supplemental logging
--             must be enabled for this procedure to run
--
--      BUILD INPUTS
--      dictionary_filename - name of the dictionary file
--      dictionary_location - path to file directory
--      options - To explicitly indicate flat file or log stream destination.
-- 
--      BUILD EXAMPLE1
--      Creating a dictionary file as:
--                   /usr/ora/dict.ora
--      Complete syntax, typed all on one line:
--
--      SQL> execute dbms_logmnr_d.build('dict.ora',
--                                       '/usr/ora',
--                                       DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
--
--      BUILD EXAMPLE2
--      Creating a dictionary file as:
--                   /usr/ora/dict.ora
--      Logminer V1 syntax.
--
--      SQL> execute dbms_logmnr_d.build('dict.ora', '/usr/ora');
--
--      BUILD EXAMPLE3
--      Gathering a dictionary into the log stream
--      Complete syntax, typed all on one line:
--
--      SQL> execute dbms_logmnr_d.build('', '',
--                                          DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
--
--      BUILD NOTES
--      The dictionary gather should be done after all dictionary
--      changes to a database and prior to the creation of any log
--      files that are to be analyzed.
--
--
--      SET_TABLESPACE
--      By default all Logminer tables are created to use the SYSAUX
--      tablespace.  All users will find it desirable to alter Logminer
--      tables to employ an alternate tablespace.  Use this routine to
--      recreate all Logminer tables in an alternate tablespace.
--
--      SET_TABLESPACE INPUTS
--      new_tablespace         - a string naming a preexisting tablespace.
--

STORE_IN_FLAT_FILE CONSTANT INTEGER := 1;
STORE_IN_REDO_LOGS CONSTANT INTEGER := 2;
MARK_SAFE_MINE_POINT  CONSTANT INTEGER := 8;

PROCEDURE  build
		(dictionary_filename IN VARCHAR2 DEFAULT '',
		 dictionary_location IN VARCHAR2 DEFAULT '',
                 options IN NUMBER DEFAULT 0);

--
--
PROCEDURE set_tablespace( new_tablespace IN VARCHAR2 );
--
--
END dbms_logmnr_d; -- End Definition of package
/

CREATE OR REPLACE PUBLIC SYNONYM dbms_logmnr_d FOR sys.dbms_logmnr_d;


