
Tuesday, October 25, 2016

ORA-00845: MEMORY_TARGET not supported on this system





You will get this error message when you try to start a database instance, or when you launch DBCA to drop an existing 11g database, and the MEMORY_TARGET parameter is set larger than the current size of your shared memory (tmpfs) file system. In other words, the shared memory size at the OS level is too small to fit the SGA memory components.

Here I tried to start the database and received the below error.
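The original screenshot is not reproduced here, but on an affected system the startup failure typically looks like this (illustrative output, not the original screen capture):

SQL> startup
ORA-00845: MEMORY_TARGET not supported on this system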











Below, I identified that the /dev/shm size is too small to fit the SGA memory components.
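You can check the current size and usage of the shared memory file system with df (the figures shown are illustrative):

$ df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           500M  152M  348M  31% /dev/shm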




Increase the tmpfs size to a larger value so that the whole SGA/MEMORY_TARGET can fit. Here we increased the /dev/shm size from 500M to 2G.
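A sketch of the resize, assuming root access; the remount takes effect immediately, and the /etc/fstab entry makes the new size persistent across reboots:

# Resize tmpfs online
mount -o remount,size=2G /dev/shm

# Make the change persistent by updating the /dev/shm line in /etc/fstab
tmpfs   /dev/shm   tmpfs   defaults,size=2G   0 0

# Verify the new size
df -h /dev/shm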





Now we have 2G of space for the tmpfs /dev/shm.






Now, try starting the database instance.
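With enough shared memory available, the instance starts normally; an illustrative successful startup looks roughly like this:

SQL> startup
ORACLE instance started.

Total System Global Area 1603411968 bytes
...
Database mounted.
Database opened.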














Now, things worked....!!

Monday, October 24, 2016

RMAN-08512: waiting for snapshot controlfile enqueue





RMAN Backup Fails Because of Control File Enqueue: Diagnosis

 
When RMAN needs to back up or resynchronize from the control file, it first creates a snapshot or consistent image of the control file. If one RMAN job is already backing up the control file while another needs to create a new snapshot control file, then you may see the following message:
waiting for snapshot controlfile enqueue

Under normal circumstances, a job that must wait for the control file enqueue waits for a brief interval and then successfully obtains the enqueue.


RMAN makes up to five attempts to get the enqueue and then fails the job. The conflict is usually caused when two jobs are both backing up the control file, and the job that first starts backing up the control file waits for service from the media manager.


To determine which job is holding the conflicting enqueue:

• After you see the first message stating "RMAN-08512: waiting for snapshot controlfile enqueue", start a new SQL*Plus session on the target database:

$ sqlplus 'SYS/passwd@db_name AS SYSDBA'


• Execute the following query to determine which job is causing the wait:


SELECT s.SID, USERNAME AS "User", PROGRAM, MODULE,
       ACTION, LOGON_TIME "Logon", l.*
FROM V$SESSION s, V$ENQUEUE_LOCK l
WHERE l.SID = s.SID
AND l.TYPE = 'CF'
AND l.ID1 = 0
AND l.ID2 = 2;


You should see output similar to the following (the output in this example has been truncated):
SID  User Program               Module                      Action           Logon
---- ---- --------------------- --------------------------- ---------------- ---------
785  SYS  rman@exad (TNS V1-V3) backup full datafile: ch5   0000007 STARTED  24-OCT-16

 
Backup Fails Because of Control File Enqueue: Solution
 
After you have determined which job is creating the enqueue, you can do one of the following:

- Wait until the job creating the enqueue completes
- Cancel the current job and restart it after the job creating the enqueue completes
- Cancel the job creating the enqueue

Commonly, enqueue situations occur when a job is writing to a tape drive, but the tape drive is waiting for a new cassette to be inserted. If you start a new job in this situation, then you will probably receive the enqueue message because the first job cannot complete until the new tape is loaded.
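If you choose the last option and cancel the job holding the enqueue, a minimal sketch looks like this; the SID 785 comes from the sample query output above, and the SERIAL# value is hypothetical and must be looked up first:

-- Look up the serial# of the session holding the CF enqueue
SQL> SELECT sid, serial#, program FROM v$session WHERE sid = 785;

-- Kill that session (the serial# shown here is a placeholder)
SQL> ALTER SYSTEM KILL SESSION '785,12345';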

Reference : https://docs.oracle.com/cd/B10501_01/server.920/a96566/rcmtroub.htm


Thursday, September 8, 2016

ORA-27054: NFS file system where the file is created or resides is not mounted with correct options



I was trying to catalog a COLD backup of the source database, stored on an NFS mount point, into RMAN on the AIX 7.2 platform, and it failed with the errors below.

ORA-19625: error identifying file /orabackup/PROD/ORADB/users01.dbf
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options


From Alert Log:
--------------------

WARNING: NFS file system /orabackup mounted with incorrect options(bg,hard,intr,sec=sys,rw)
WARNING:Expected NFS mount options: rsize>=32768,wsize>=32768,hard,
Errors in file /oracle/diag/rdbms/prod/trace/PROD_ora_28573960.trc:
ORA-19625: error identifying file /orabackup/PROD/ORADB/users01.dbf
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 5
Additional information: 2
WARNING: NFS file system /orabackup mounted with incorrect options(bg,hard,intr,sec=sys,rw)
WARNING:Expected NFS mount options: rsize>=32768,wsize>=32768,hard,
Errors in file /oracle/diag/rdbms/prod/trace/PROD_ora_28573960.trc:
ORA-19625: error identifying file /orabackup/PROD/ORADB/users01.dbf.OLD
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 5
Additional information: 2


I checked the options used while mounting the mount point /orabackup


/orabackup:
        dev             = "/orabackup"
        vfs             = nfs
        nodename        = 192.168.90.11
        mount           = true
        type            = nfs
        options         = bg,hard,intr,sec=sys     --- does not match the options expected in the error above
        account         = false
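For reference, the stanza above appears to come from /etc/filesystems on AIX; the options of the currently mounted file system can also be checked directly with the mount command (a sketch, assuming the same mount point):

$ mount | grep orabackup

The options column in the output shows the mount options currently in effect.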



As shown in the alert log warnings above, Oracle expects the NFS mount point to be mounted with the options rsize>=32768,wsize>=32768,hard in order to proceed.





Workaround : I unmounted the device and re-mounted it with the expected options, as below.


$umount /orabackup

$mount -o rw,bg,hard,nointr,vers=3,timeo=300,rsize=32768,wsize=32768 192.168.90.11:/orabackup /orabackup


Then I tried again to catalog the COLD backup stored on that NFS mount point into RMAN.



RMAN> catalog start with '/orabackup/PROD/ORADB';

using target database control file instead of recovery catalog
searching for all files that match the pattern /orabackup/PROD/ORADB

List of Files Unknown to the Database
=====================================
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156187_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156188_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156189_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156190_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156191_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156192_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156193_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156194_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156195_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156196_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156197_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156198_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156199_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156200_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156201_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156202_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156203_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156204_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/old1_24972_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_1_921865254.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_2_921865254.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_3_921865254.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_4_921865254.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_5_921865254.dbf
.
.
.
.
.
.
.
File Name: /orabackup/PROD/ORADB/undotbs01.dbf
File Name: /orabackup/PROD/ORADB/users01.dbf
File Name: /orabackup/PROD/ORADB/users01.dbf.OLD

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156187_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156188_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156189_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156190_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156191_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156192_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156193_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156194_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156195_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156196_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156197_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156198_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156199_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156200_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156201_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156202_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156203_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_156204_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/old1_24972_674772839.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_1_921865254.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_2_921865254.dbf
File Name: /orabackup/PROD/ORADB/Archive_Logs/1_3_921865254.dbf
.
.
.
.
.


It worked fine.

ORA-19554: error allocating device, device type: SBT_TAPE, device name:


Received the below error while restoring a COLD backup of the source database using RMAN on the target server.



RMAN> run{
2> restore database;
3> }

Starting restore at 07-SEP-16
RMAN-06908: WARNING: operation will not run in parallel on the allocated channels
RMAN-06909: WARNING: parallelism require Enterprise Edition
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=391 device type=DISK
released channel: ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 09/07/2016 23:17:38
ORA-19554: error allocating device, device type: SBT_TAPE, device name:
ORA-27211: Failed to load Media Management Library
Additional information: 2



Cause : The reason for this failure is that the source database server has the SBT_TAPE device type configured in RMAN (and since we are using the control file of the source DB, that SBT_TAPE device configuration also exists on the target DB), while the target server has no tape drive attached to it.


RMAN> show all;

RMAN configuration parameters for database with db_unique_name ERPLN are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';





Solution : On the target server, clear the default device type from SBT_TAPE back to disk using the command below, or explicitly allocate channels of the desired device type inside a RUN{} RMAN block to override the persistent configuration (a sketch of that alternative follows the output below). In this case, I cleared the SBT_TAPE device type setting back to its default.


RMAN> CONFIGURE DEFAULT DEVICE TYPE CLEAR;

old RMAN configuration parameters:
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
RMAN configuration parameters are successfully reset to default value
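Alternatively, as mentioned above, you can leave the persistent configuration untouched and explicitly allocate a disk channel inside the RUN{} block; a minimal sketch:

RMAN> run{
2> allocate channel d1 device type disk;
3> restore database;
4> }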



Now try restoring the database again.


RMAN> run{
2> restore database;
3> }

Starting restore at 07-SEP-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=391 device type=DISK

channel ORA_DISK_1: restoring datafile 00001
input datafile copy RECID=244 STAMP=921971779 file name=/orabackup/PROD/ORADB/system01.dbf
destination for restore of datafile 00001: /oracle/oradata/PROD/system01.dbf
channel ORA_DISK_1: copied datafile copy of datafile 00001
output file name=/oracle/oradata/PROD/system01.dbf RECID=0 STAMP=0
channel ORA_DISK_1: restoring datafile 00002
input datafile copy RECID=246 STAMP=921971780 file name=/orabackup/PROD/ORADB/undotbs01.dbf
destination for restore of datafile 00002: /oracle/oradata/PROD/undotbs01.dbf
channel ORA_DISK_1: copied datafile copy of datafile 00002
output file name=/oracle/oradata/PROD/undotbs01.dbf RECID=0 STAMP=0
channel ORA_DISK_1: restoring datafile 00003
input datafile copy RECID=243 STAMP=921971779 file name=/orabackup/PROD/ORADB/sysaux01.dbf
destination for restore of datafile 00003: /oracle/oradata/PROD/sysaux01.dbf
channel ORA_DISK_1: copied datafile copy of datafile 00003
output file name=/oracle/oradata/PROD/sysaux01.dbf RECID=0 STAMP=0
channel ORA_DISK_1: restoring datafile 00004
input datafile copy RECID=247 STAMP=921971780 file name=/orabackup/PROD/ORADB/users01.dbf
destination for restore of datafile 00004: /oracle/oradata/PROD/users01.dbf
channel ORA_DISK_1: copied datafile copy of datafile 00004
output file name=/oracle/oradata/PROD/users01.dbf RECID=0 STAMP=0
channel ORA_DISK_1: restoring datafile 00005
input datafile copy RECID=237 STAMP=921971779 file name=/orabackup/PROD/ORADB/erplntoolsdat01.dbf
destination for restore of datafile 00005: /oracle/oradata/PROD/erplntoolsdat01.dbf


Hope it would help someone ....!! 

Tuesday, August 30, 2016

SQL Tuning - 100% CPU Utilization






SQL Using 100% CPU
-----------------------------

After a weekend production maintenance activity (the DB was bounced), the EBS application started behaving abnormally: CPU utilisation was continuously at 100%, and the whole production system was effectively down for end users, as they were not able to submit and process their requests.

I pulled the AWR report for the problematic duration and found TX lock contention among the top 5 wait events, along with cache buffers chains (CBC) latch waits. We addressed the CBC latches first, but that did not help.

Investigating further, I found that a single SQL SELECT statement (based on a view) was the top CPU consumer when ordered by CPU time, elapsed time, and executions.


The same SQL usually ran fine, but it suddenly started hogging the whole database server's CPU. I found that the execution plan for that SQL statement had changed after the DB server was bounced during the maintenance activity.



Please use the SQL below to find the SQL plan history of a particular SQL_ID.
------------------------------------------------------------------------------------


set pagesize 1000
set linesize 200
column begin_interval_time format a20
column milliseconds_per_execution format 999999990.999
column rows_per_execution format 999999990.9
column buffer_gets_per_execution format 999999990.9
column disk_reads_per_execution format 999999990.9
break on begin_interval_time skip 1

SELECT
  to_char(s.begin_interval_time,'mm/dd hh24:mi')
    AS begin_interval_time,
  ss.plan_hash_value,
  ss.executions_delta,
  CASE
    WHEN ss.executions_delta > 0
    THEN ss.elapsed_time_delta/ss.executions_delta/1000
    ELSE ss.elapsed_time_delta
  END AS milliseconds_per_execution,
  CASE
    WHEN ss.executions_delta > 0
    THEN ss.rows_processed_delta/ss.executions_delta
    ELSE ss.rows_processed_delta
  END AS rows_per_execution,
  CASE
    WHEN ss.executions_delta > 0
    THEN ss.buffer_gets_delta/ss.executions_delta
    ELSE ss.buffer_gets_delta
  END AS buffer_gets_per_execution,
  CASE
    WHEN ss.executions_delta > 0
    THEN ss.disk_reads_delta/ss.executions_delta
    ELSE ss.disk_reads_delta
  END AS disk_reads_per_execution
FROM wrh$_sqlstat ss
INNER JOIN wrm$_snapshot s ON s.snap_id = ss.snap_id
WHERE ss.sql_id = '&sql_id'
AND ss.buffer_gets_delta > 0
ORDER BY s.snap_id, ss.plan_hash_value;





Below is the OLD SQL Plan which was working fine.
------------------------------------------------

BEGIN_INTERVAL_TIME  PLAN_HASH_VALUE EXECUTIONS_DELTA MILLISECONDS_PER_EXECUTION ROWS_PER_EXECUTION BUFFER_GETS_PER_EXECUTION DISK_READS_PER_EXECUTION
-------------------- --------------- ---------------- -------------------------- ------------------ ------------------------- ------------------------
08/14 05:15               3139549555                6                    758.744              345.0                   47354.0                      0.0
08/14 05:30               3139549555                6                    762.433              345.0                   47350.0                      0.0
08/14 05:45               3139549555               10                    763.121              345.0                   47350.0                      0.0
08/14 09:00               3139549555                4                   3081.986              348.0                  103319.3                   2141.3
08/14 10:30               3139549555                2                   1821.184              370.0                  126928.5                      2.5
08/14 11:15               3139549555                2                   1926.320              372.0                  135198.0                      0.0




Below is the BAD SQL plan that the optimizer chose after the maintenance activity. You can see a dramatic increase in time per execution for the SQL statement.
----------------------------------------------------------------------------------------

BEGIN_INTERVAL_TIME  PLAN_HASH_VALUE EXECUTIONS_DELTA MILLISECONDS_PER_EXECUTION ROWS_PER_EXECUTION BUFFER_GETS_PER_EXECUTION DISK_READS_PER_EXECUTION
-------------------- --------------- ---------------- -------------------------- ------------------ ------------------------- ------------------------
08/15 05:15               4140799271                1                 215288.221                1.0                13049295.0                   8088.0
08/15 21:00               4140799271                1                 292584.136                0.0                16131485.0                      8.0
08/15 21:15               4140799271                0              883156121.000                0.0                48242380.0                      0.0
08/15 21:30               4140799271                0              892885844.000                0.0                48965395.0                      0.0
08/15 21:45               4140799271                0              887075745.000                0.0                48552866.0                      0.0
08/15 22:01               4140799271                0              830391270.000                1.0                45468887.0                      0.0
08/15 22:15               4140799271                0              890962782.000                0.0                49497369.0                      0.0
08/15 22:30               4140799271                0              890992388.000                0.0                48430711.0                      0.0





Investigating further, I found that the base tables of the underlying view had STALE statistics, because of which the optimizer was not able to choose a better execution plan for that SQL statement.

We gathered table statistics on those underlying tables, and immediately afterwards the Oracle optimizer chose a better execution plan and the CPU utilisation on the database server dropped back to normal.
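A minimal sketch of gathering statistics with DBMS_STATS; the schema and table names below are hypothetical stand-ins for the actual base tables of the view:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'APPS',             -- hypothetical schema owner
    tabname    => 'BASE_TABLE_NAME',  -- hypothetical base table of the view
    method_opt => 'FOR ALL COLUMNS SIZE AUTO',
    cascade    => TRUE,               -- also gather statistics on its indexes
    degree     => 4);
END;
/

Repeat for each stale base table, then re-check the plan history with the query above.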


BEGIN_INTERVAL_TIME  PLAN_HASH_VALUE EXECUTIONS_DELTA MILLISECONDS_PER_EXECUTION ROWS_PER_EXECUTION BUFFER_GETS_PER_EXECUTION DISK_READS_PER_EXECUTION
-------------------- --------------- ---------------- -------------------------- ------------------ ------------------------- ------------------------
08/17 10:31                576393603                8                   1476.348              366.0                   82579.3                      0.0
08/17 10:45                576393603               16                   1530.348              366.0                   86192.0                      0.0




Sunday, August 14, 2016

CRS-4046: Invalid Oracle Clusterware configuration



Received the below error while running the root.sh script for Oracle 11g R2 (11.2.0.4) on node one of a two-node RAC configuration. It was caused by leftover processes from a previous cluster install.


[root@RAC1 Clusterware]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
CRS-4046: Invalid Oracle Clusterware configuration.
CRS-4000: Command Create failed, or completed with errors.
Failure initializing entries in /etc/oracle/scls_scr/rac1
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed




The steps illustrated below resolved the issue.


Clean up the leftover cluster processes by executing the below command on the node where the root.sh script fails.
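The rootcrs.pl script lives under the Grid home, so change to that directory first (a small sketch using the same Grid home as above):

[root@RAC1 ~]# cd /u01/app/11.2.0/grid/crs/install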


[root@RAC1 install]# ./rootcrs.pl -deconfig -force -verbose
Using configuration parameter file: ./crsconfig_params
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4046: Invalid Oracle Clusterware configuration.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware          #
################################################################
Removing Trace File Analyzer
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node




=>> Once the de-configuration of Oracle Clusterware completes successfully as above, reboot the server and then re-try executing root.sh.



[root@RAC1 grid]# ./root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
.
.
.
.
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

ASM created and started successfully.

Disk Group VOTE_DATA created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk ae113dc9ff954f7bbf65bccce936df15.
Successful addition of voting disk 7d464ed047e64fb8bf349015e04dc69f.
Successful addition of voting disk 8724064160f74fe7bf8f85bf258d13e9.
Successfully replaced voting disk group with +VOTE_DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   ae113dc9ff954f7bbf65bccce936df15 (/dev/oracleasm/disks/DISK1) [VOTE_DATA]
 2. ONLINE   7d464ed047e64fb8bf349015e04dc69f (/dev/oracleasm/disks/DISK2) [VOTE_DATA]
 3. ONLINE   8724064160f74fe7bf8f85bf258d13e9 (/dev/oracleasm/disks/DISK3) [VOTE_DATA]
Located 3 voting disk(s).

CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.VOTE_DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.VOTE_DATA.dg' on 'rac1' succeeded
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded



Hope it would help someone....!!




Thursday, July 21, 2016

LREG Background Process in 12c

Listener Registration Process (LREG)



The listener registration process (LREG) registers information about the database instance and dispatcher processes with the Oracle Net Listener (see "The Oracle Net Listener"). When an instance starts, LREG polls the listener to determine whether it is running. If the listener is running, then LREG passes it relevant parameters. If it is not running, then LREG periodically attempts to contact it.
Note:
In releases before Oracle Database 12c, PMON performed the listener registration.
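A quick way to confirm the new background process on a 12c instance (a small sketch):

SQL> SELECT name, description FROM v$bgprocess WHERE name = 'LREG';

$ ps -ef | grep -i ora_lreg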


Monday, June 13, 2016

[INS-20802] Grid Infrastructure failed During Grid Installation On Windows OS



It was a challenging day troubleshooting a cluster configuration failure while setting up a 2-node Oracle 11gR2 (11.2.0.4) RAC cluster on the Windows Server 2012 R2 operating system for one of our customers. At every attempt, the Grid configuration for the cluster failed while installing the Oracle Grid Infrastructure software.

The installActions log didn't have much of a clue about the GI configuration failure, but the rootcrs_racnode1.log file located at $GRID_HOME\cfgtoollogs\rootcrs_racnode1.log provided the root cause of the cluster configuration failure, revealing that "The driver is not currently installed on this node."

Below are the entries from the rootcrs_racnode1.log file on Node 1.


From Log file : C:\app\11.2.0\grid\cfgtoollogs\rootcrs_racnode1.log
------------------------------------------------------

2016-06-09 08:22:36: The 'ROOTCRS_ACFSINST' is either in START/FAILED state
2016-06-09 08:22:36: Executing 'E:\app\11.2.0\grid\bin\acfsroot.bat install'
2016-06-09 08:22:36: Executing cmd: E:\app\11.2.0\grid\bin\acfsroot.bat install
2016-06-09 08:22:37: Command output:
>  ACFS-9300: ADVM/ACFS distribution files found.
>  ACFS-9307: Installing requested ADVM/ACFS software.
>  acfsinstall: ACFS-09420: The driver is not currently installed on this node.
>  acfsinstall: ACFS-09411: CreateService succeeded.
>  acfsinstall: CLSU-00100: Operating System function: StartDriver failed with error data: 31
>  acfsinstall: CLSU-00101: Operating System error message: A device attached to the system is not functioning.
>  acfsinstall: CLSU-00103: error location: StartDriver_
>  acfsinstall: CLSU-00104: additional error information: W
>  acfsinstall: ACFS-09419: StartService failed.
>  acfsinstall: ACFS-09401: Failed to install the driver.
>
>  ACFS-9340: failed to install OKS driver.
>  acfsinstall: ACFS-09420: The driver is not currently installed on this node.
>  acfsinstall: ACFS-09411: CreateService succeeded.
>  acfsinstall: CLSU-00100: Operating System function: StartDriver failed with error data: 1068
>  acfsinstall: CLSU-00101: Operating System error message: The dependency service or group failed to start.
>  acfsinstall: CLSU-00103: error location: StartDriver_
>  acfsinstall: CLSU-00104: additional error information: J
>  acfsinstall: ACFS-09419: StartService failed.
>  acfsinstall: ACFS-09401: Failed to install the driver.
>
>  ACFS-9340: failed to install ADVM driver.
>  acfsinstall: ACFS-09420: The driver is not currently installed on this node.
>  acfsinstall: ACFS-09411: CreateService succeeded.
>  acfsinstall: CLSU-00100: Operating System function: StartDriver failed with error data: 1068
>  acfsinstall: CLSU-00101: Operating System error message: The dependency service or group failed to start.
>  acfsinstall: CLSU-00103: error location: StartDriver_
>  acfsinstall: CLSU-00104: additional error information: ]
>  acfsinstall: ACFS-09419: StartService failed.
>  acfsinstall: ACFS-09401: Failed to install the driver.
>
>  ACFS-9340: failed to install ACFS driver.
>  ACFS-9310: ADVM/ACFS installation failed.
>End Command output
2016-06-09 08:22:37: E:\app\11.2.0\grid\bin\acfsroot.bat install ... failed
2016-06-09 08:22:37: USM driver install status is 0
2016-06-09 08:22:37: USM driver install actions failed
2016-06-09 08:22:37: Running as user Administrator: E:\app\11.2.0\grid\bin\cluutil -ckpt -oraclebase E:\app\Administrator -writeckpt -name ROOTCRS_ACFSINST -state FAIL
2016-06-09 08:22:37: s_run_as_user2: Running E:\app\11.2.0\grid\bin\cluutil -ckpt -oraclebase E:\app\Administrator -writeckpt -name ROOTCRS_ACFSINST -state FAIL
2016-06-09 08:22:37: E:\app\11.2.0\grid\bin\cluutil successfully executed

2016-06-09 08:22:37: Succeeded in writing the checkpoint:'ROOTCRS_ACFSINST' with status:FAIL
2016-06-09 08:22:37: CkptFile: E:\app\Administrator\Clusterware\ckptGridHA_win1.xml
2016-06-09 08:22:37: Sync the checkpoint file 'E:\app\Administrator\Clusterware\ckptGridHA_win1.xml'




Solutions :
=========

As per the logs pasted above, I came across unpublished BUG 17927204 - ACFS SUPPORT FOR WINDOWS 2012R2 in Oracle Grid Infrastructure version 11.2.0.4 itself, so in order to resolve the cluster configuration issue I downloaded the one-off patch (p22839608_112040_MSWIN-x86-64) from MOS, to be applied to the $GRID_HOME binaries on both nodes in the cluster. In order to apply the patch mentioned above, you will also have to download the relevant OPatch utility (p6880880_112000_MSWIN-x86-64) from MOS.

Please refer to MOS Doc ID 1987371.1 for the details.


Once you have downloaded both patches mentioned above, please follow the steps below for a successful Grid Infrastructure installation.


1 - Clean up the currently failed GI run on both nodes (this includes deinstalling GI and removing all related entries from the Windows Registry).


2 - Bounce the nodes once you are done with step 1.

3 - Run the GI Installer (setup.exe) for GI installation and choose to install Grid Infrastructure software only.

Note :- In step 3 you will have to install the GI software on each individual node in the cluster, as a software-only installation will not perform the GI installation on remote nodes.

4 - Once the GI software (software only) is installed on both nodes, replace the OPatch folder in the Grid home with the one you downloaded above (p6880880_112000_MSWIN-x86-64) on both nodes (or first rename the existing OPatch directory in the Grid home, e.g. to OPatch_old).

5 - Verify that the OPatch utility is working with the opatch.exe lsinv command, then apply the one-off patch (p22839608_112040_MSWIN-x86-64) to the Grid home on both nodes (a command sketch follows these steps).

6 - After the patch has been applied successfully on both nodes, start the Grid Infrastructure configuration for the cluster as mentioned below.

Navigate to and run $GRID_HOME/crs/config/config.bat

Note : - This will launch the Grid Infrastructure configuration GUI; proceed through the steps as prompted on the interface. Please note that if your GI configuration still fails at 33%, do not just cancel the installation; click OK on the error prompt and then click "Retry" to continue with the installation.
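A rough command sketch for step 5 above; %GRID_HOME% and the patch staging directory are illustrative placeholders, not values from the original environment:

REM Run from an elevated (Administrator) command prompt on each node
cd /d %GRID_HOME%\OPatch
opatch.exe lsinventory

REM Change to the directory where patch 22839608 was unzipped, then apply it
cd /d D:\staging\22839608
%GRID_HOME%\OPatch\opatch.exe apply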


Hope it would help to resolve the issue.

Tuesday, June 7, 2016

How to find number of instances configured in your RAC Environment?



Below are the ways you can find out how many instances are configured in your Oracle RAC cluster environment.


1 - Query the V$ACTIVE_INSTANCES view to determine the number of instances involved in your RAC configuration.


SQL> desc v$active_instances;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 INST_NUMBER                                        NUMBER
 INST_NAME                                          VARCHAR2(60)



SQL> select * from v$active_instances;

INST_NUMBER INST_NAME
----------- ------------------------------------------------------------
          1 rac1.rajdbsolutions.com:ractst1
          2 rac2.rajdbsolutions.com:ractst2







2 - You can find the same information at the OS level using the SRVCTL command-line utility, as below.


-bash-3.2$ srvctl config database -d ractst
Database unique name: ractst
Database name: ractst
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ractst/spfileractst.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ractst
Database instances: ractst1,ractst2
Disk Groups: DATA
Services:
Database is administrator managed


Note : - In the above srvctl config output, the instances configured in the current RAC environment (ractst1, ractst2) are listed, comma-separated, against the "Database instances" entry.
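As a supplementary check (not one of the two methods above), GV$INSTANCE lists the instances currently running across the cluster; note that it reflects running instances, which can differ from the configured list if an instance is down. A small sketch:

SQL> SELECT inst_id, instance_name, host_name, status FROM gv$instance;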

Adding a new OCR device/file



To avoid a single point of failure for the OCR, we should have multiple OCR devices/files on separate storage. We can have up to 5 OCR devices/files in our cluster configuration. Below are the steps outlining how we can add a new OCR device/file to our cluster configuration.


Step 1 - Let's first find how many OCR devices/files already exist.


[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2616
         Available space (kbytes) :     259504
         ID                       :  170058601
         Device/File Name         : +VOTE_DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded




Note :- In the preceding output we can see that only one OCR file exists, located in the +VOTE_DATA diskgroup.





Step 2 - Add a new OCR device/file.


[root@rac1 ~]# ocrconfig -add '+VOTE_DATA'
PROT-29: The Oracle Cluster Registry location is already configured

Note :- We can't add another OCR device/file on the same file system or diskgroup, hence we need to add the new OCR file to a separate device/diskgroup; adding further OCR files on the same device or diskgroup does not avoid a SPOF (Single Point Of Failure).



[root@rac1 ~]# ocrconfig -add '+FLASH'


Again, verify that the new OCR device/file has been added to the +FLASH diskgroup.

[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2616
         Available space (kbytes) :     259504
         ID                       :  170058601
         Device/File Name         : +VOTE_DATA
                                    Device/File integrity check succeeded
         Device/File Name         :     +FLASH
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded



[root@rac1 ~]# ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         : +VOTE_DATA
         Device/File Name         :     +FLASH



Note : As we can see above, a new OCR device/file has been added to the +FLASH diskgroup. After adding the new OCR file, check the integrity of the OCR once more using the ocrcheck command.


Significance of OCRCONFIG



OCRCONFIG:

Use the ocrconfig command to manage the OCR. Using this utility you can import, export, add, delete, restore, overwrite, back up, repair, replace, move, upgrade, or downgrade the OCR.


Below are the options that can be used with the ocrconfig command.
---------------------------------------------

[root@rac1 ~]# ocrconfig
Name:
        ocrconfig - Configuration tool for Oracle Cluster/Local Registry.

Synopsis:
        ocrconfig [option]
        option:
                [-local] -export <filename>
                                                    - Export OCR/OLR contents to a file
                [-local] -import <filename>         - Import OCR/OLR contents from a file
                [-local] -upgrade [<user> [<group>]]
                                                    - Upgrade OCR from previous version
                -downgrade [-version <version string>]
                                                    - Downgrade OCR to the specified version
                [-local] -backuploc <dirname>       - Configure OCR/OLR backup location
                [-local] -showbackup [auto|manual]  - Show OCR/OLR backup information
                [-local] -manualbackup              - Perform OCR/OLR backup
                [-local] -restore <filename>        - Restore OCR/OLR from physical backup
                -replace <current filename> -replacement <new filename>
                                                    - Replace a OCR device/file <filename1> with <filename2>
                -add <filename>                     - Add a new OCR device/file
                -delete <filename>                  - Remove a OCR device/file
                -overwrite                          - Overwrite OCR configuration on disk
                -repair -add <filename> | -delete <filename> | -replace <current filename> -replacement <new filename>
                                                    - Repair OCR configuration on the local node
                -help                               - Print out this help information

Note:
        * A log file will be created in
        $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
        you have file creation privileges in the above directory before
        running this tool.
        * Only -local -showbackup [manual] is supported.
        * Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry.




1 - Let's use the -local -export option with the ocrconfig command to export the contents of the OLR to a file as a backup.


[root@rac1 ~]# ocrconfig -local -export /u01/app/OCR_Local_export.txt

The OLR contents have been exported, and the export file listed below has been created.

[root@rac1 ~]# ls -ltr /u01/app/OCR_Local_export.txt
-rw-r--r-- 1 root root 73429 Jun  7 09:00 /u01/app/OCR_Local_export.txt
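A couple of other options from the synopsis above that are commonly used day to day (a sketch, run as root):

# Take an on-demand backup of the OCR
[root@rac1 ~]# ocrconfig -manualbackup

# List automatic and manual OCR backups
[root@rac1 ~]# ocrconfig -showbackup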




Monday, June 6, 2016

OCRCHECK : Oracle Cluster Registry Check utility




OCRCHECK:

The OCRCHECK utility displays the version of the OCR's block format, total space available and used space, OCRID, and the OCR locations that you have configured. OCRCHECK performs a block-by-block checksum operation for all of the blocks in all of the OCRs that you have configured. It also returns an individual status for each file and a result for the overall OCR integrity check.


Note:
Oracle supports using the ocrcheck command when, at a minimum, the Oracle Cluster Ready Services stack is OFFLINE on all nodes in the cluster. The command will run even if the stack is ONLINE, but it can falsely indicate that the OCR is corrupt if the check happens while an update to the OCR is underway.



Syntax
ocrcheck [-local] [-config] [-details] [-help]



[root@rac2 ~]# ocrcheck -help
Name:
        ocrcheck - Displays health of Oracle Cluster/Local Registry.

Synopsis:
        ocrcheck [-config] [-local]

  -config       Displays the configured locations of the Oracle Cluster Registry.
                This can be used with the -local option to display the configured
                location of the Oracle Local Registry
  -local        The operation will be performed on the Oracle Local Registry.



Notes:
        A log file will be created in
        $ORACLE_HOME/log/<hostname>/client/ocrcheck_<pid>.log.
        File creation privileges in the above directory are needed
        when running this tool.




[root@rac2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2616
         Available space (kbytes) :     259504
         ID                       :  170058601
         Device/File Name         : +VOTE_DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded



Note :- When we simply run the ocrcheck command as the root user without any options, it checks the integrity of the Oracle Cluster Registry and reveals the OCR version, total space, used space, and available space. It also displays the registry ID and the location where the OCR is stored for global access, as shown in the preceding output.




[root@rac2 ~]# ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         : +VOTE_DATA


Note :- When we run the ocrcheck command as the root user with the -config option, it displays the device/diskgroup where the cluster OCR is located.





 [root@rac2 ~]# ocrcheck -config -local
Oracle Local Registry configuration is :
         Device/File Name         : /u01/app/11.2.0/grid/cdata/rac2.olr
[root@rac2 ~]#


Note :- When we run ocrcheck -config along with the -local option, it displays the Oracle Local Registry (OLR), the local version of the OCR located on the local node, as shown in the preceding example.





Note :-

A log file for each ocrcheck run will be created under $GRID_HOME/log/<hostname>/client/ocrcheck_<pid>.log; file creation privileges in that directory are needed when running this tool. Let's take a look at this with an example, as explained below.



1 - First, let's check the timestamp on the RAC node where we will be running the ocrcheck command.

[root@rac2 client]# date

Mon Jun  6 10:28:03 IST 2016


2 - Now, let's run the ocrcheck command.

[root@rac2 client]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2616
         Available space (kbytes) :     259504
         ID                       :  170058601
         Device/File Name         : +VOTE_DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded




3 - Now let's go to the log location ($GRID_HOME/log/<hostname>/client/) and check that a new log file has been created.

[root@rac2 client]# pwd
/u01/app/11.2.0/grid/log/rac2/client


[root@rac2 client]# ls -ltr | tail
-rw-r----- 1 root    root       256 Jun  3 12:14 ocrconfig_7211.log
-rw-r----- 1 root    root       256 Jun  3 12:21 ocrconfig_7460.log
-rw-r----- 1 root    root       342 Jun  3 12:21 ocrconfig_7469.log
-rw-r--r-- 1 oragrid oinstall  1612 Jun  6 09:56 oclskd.log
-rw-r--r-- 1 oragrid oinstall 21138 Jun  6 09:56 olsnodes.log
-rw-r--r-- 1 root    root       379 Jun  6 09:58 ocrcheck_6646.log
-rw-r--r-- 1 root    root       379 Jun  6 09:58 ocrcheck_6669.log
-rw-r----- 1 root    root       255 Jun  6 09:59 ocrcheck_6684.log
-rw-r----- 1 root    root       255 Jun  6 09:59 ocrcheck_6689.log
-rw-r--r-- 1 root    root       379 Jun  6 10:28 ocrcheck_7621.log


[root@rac2 client]# date
Mon Jun  6 10:28:52 IST 2016

Note :- We can see that a new log file (ocrcheck_7621.log) has been created, matching the timestamp of the run above.



Contents of ocrcheck_7621.log


[root@rac2 client]# cat ocrcheck_7621.log
Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
2016-06-06 10:28:36.677: [OCRCHECK][3038611136]ocrcheck starts...
2016-06-06 10:28:37.212: [OCRCHECK][3038611136]protchcheck: OCR status : total = [262120], used = [2616], avail = [259504]

2016-06-06 10:28:40.939: [OCRCHECK][3038611136]Exiting [status=success]...

Migrate to Oracle Database with SQL Developer.




http://www.oracle.com/technetwork/database/migration/index.html

Friday, June 3, 2016

How to Enable Archiving in Oracle RAC environment?



Enabling ARCHIVELOG in Oracle RAC environment.


The verification below reveals that our current RAC cluster database is in NOARCHIVELOG mode.

SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     4
Current log sequence           5



Step 1 - Shut down the database across all nodes, from any node in the cluster, using the below command.

-bash-3.2$ srvctl stop database -d ractst -o immediate


Let's verify that the database instances are down across the cluster nodes.

-bash-3.2$ srvctl status database -d ractst
Instance ractst1 is not running on node rac1
Instance ractst2 is not running on node rac2


Step 2 - Mount the database instances using the below command.

-bash-3.2$ srvctl start database -d ractst -o mount


The instances are now started, in MOUNT state.

-bash-3.2$ srvctl status database -d ractst
Instance ractst1 is running on node rac1
Instance ractst2 is running on node rac2



Note : - Before Oracle 11g R2, we used to set the initialization parameter CLUSTER_DATABASE to FALSE in order to enable or disable archiving in a RAC environment.
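For reference, a rough sketch of that older approach (not needed on 11g R2 and later, where the SRVCTL stop/start in mount state shown above is sufficient):

SQL> alter system set cluster_database=false scope=spfile sid='*';
-- shut down all instances, start a single instance in MOUNT state, then:
SQL> alter database archivelog;
SQL> alter system set cluster_database=true scope=spfile sid='*';
-- restart the database across all nodes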


Step 3 - Now enable archiving.


-bash-3.2$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jun 3 13:00:52 2016

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     4
Next log sequence to archive   5
Current log sequence           5


Also open the database on the second node and check the archiving status, as below.

SQL> alter database open;

Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     2
Next log sequence to archive   3
Current log sequence           3



That's it...Hope it would help someone....

Thursday, June 2, 2016

Recovery Catalog Views list and it's Corresponding V$ Views


Here is a very useful list of RMAN recovery catalog views and their corresponding V$ views, as a quick reference pulled from the Oracle Backup and Recovery Reference documentation.

To find the details of any recovery catalog view or V$ view, for example which columns it contains and what they are for, refer to that view's entry in the Oracle Backup and Recovery Reference documentation.


Each line below lists: Recovery Catalog View, Corresponding V$ View, and what the catalog view describes.
RC_ARCHIVED_LOG V$ARCHIVED_LOG Archived and unarchived redo log files
RC_BACKUP_ARCHIVELOG_DETAILS V$BACKUP_ARCHIVELOG_DETAILS Details about archived redo log backups for Enterprise Manager
RC_BACKUP_ARCHIVELOG_SUMMARY V$BACKUP_ARCHIVELOG_SUMMARY Summary of information about archived redo log backups for Enterprise Manager
RC_BACKUP_CONTROLFILE V$BACKUP_DATAFILE Control files backed up in backup sets
RC_BACKUP_CONTROLFILE_DETAILS V$BACKUP_CONTROLFILE_DETAILS Details about control file backups for Enterprise Manager
RC_BACKUP_CONTROLFILE_SUMMARY V$BACKUP_CONTROLFILE_SUMMARY Summary of information about control file backups for Enterprise Manager
RC_BACKUP_COPY_DETAILS V$BACKUP_COPY_DETAILS Details about datafile image copy backups for Enterprise Manager
RC_BACKUP_COPY_SUMMARY V$BACKUP_COPY_SUMMARY Summary of information about datafile image copy backups for Enterprise Manager
RC_BACKUP_CORRUPTION V$BACKUP_CORRUPTION Corrupt block ranges in datafile backups
RC_BACKUP_DATAFILE V$BACKUP_DATAFILE Datafiles in backup sets
RC_BACKUP_DATAFILE_DETAILS V$BACKUP_DATAFILE_DETAILS Details about datafile backups for Enterprise Manager
RC_BACKUP_DATAFILE_SUMMARY V$BACKUP_DATAFILE_SUMMARY Summary of information about datafile backups for Enterprise Manager
RC_BACKUP_FILES V$BACKUP_FILES RMAN backups and copies known to the repository.
RC_BACKUP_PIECE V$BACKUP_PIECE Backup pieces
RC_BACKUP_PIECE_DETAILS V$BACKUP_PIECE_DETAILS Details about backup pieces for Enterprise Manager
RC_BACKUP_REDOLOG V$BACKUP_REDOLOG Archived redo log files in backup sets
RC_BACKUP_SET V$BACKUP_SET Backup sets for all incarnations of databases registered in the catalog
RC_BACKUP_SET_DETAILS V$BACKUP_SET_DETAILS Details about backup sets for Enterprise Manager
RC_BACKUP_SET_SUMMARY V$BACKUP_SET_SUMMARY Summary of information about backup sets for Enterprise Manager
RC_BACKUP_SPFILE V$BACKUP_SPFILE Server parameter files in backups
RC_BACKUP_SPFILE_DETAILS V$BACKUP_SPFILE_DETAILS Details about server parameter file backups for Enterprise Manager
RC_BACKUP_SPFILE_SUMMARY V$BACKUP_SPFILE_SUMMARY Summary of information about server parameter file backups for Enterprise Manager
RC_CHECKPOINT n/a Deprecated in favor of RC_RESYNC
RC_CONTROLFILE_COPY V$DATAFILE_COPY Control file copies on disk
RC_COPY_CORRUPTION V$COPY_CORRUPTION Corrupt block ranges in datafile copies
RC_DATABASE V$DATABASE Databases registered in the recovery catalog
RC_DATABASE_BLOCK_CORRUPTION V$DATABASE_BLOCK_CORRUPTION Database blocks marked as corrupted in the most recent RMAN backup or copy
RC_DATABASE_INCARNATION V$DATABASE_INCARNATION Database incarnations registered in the recovery catalog
RC_DATAFILE V$DATAFILE Datafiles registered in the recovery catalog
RC_DATAFILE_COPY V$DATAFILE_COPY Datafile copies on disk
RC_LOG_HISTORY V$LOG_HISTORY Online redo log history indicating when log switches occurred
RC_OFFLINE_RANGE V$OFFLINE_RANGE Offline ranges for datafiles
RC_PROXY_ARCHIVEDLOG V$PROXY_ARCHIVEDLOG Archived log backups taken with the proxy copy functionality
RC_PROXY_ARCHIVELOG_DETAILS V$PROXY_ARCHIVELOG_DETAILS Details about proxy archived redo log files for Enterprise Manager
RC_PROXY_ARCHIVELOG_SUMMARY V$PROXY_ARCHIVELOG_SUMMARY Summary of information about proxy archived redo log files for Enterprise Manager
RC_PROXY_CONTROLFILE V$PROXY_DATAFILE Control file backups taken with the proxy copy functionality
RC_PROXY_COPY_DETAILS V$PROXY_COPY_DETAILS Details about datafile proxy copies for Enterprise Manager
RC_PROXY_COPY_SUMMARY V$PROXY_COPY_SUMMARY Summary of information about datafile proxy copies for Enterprise Manager
RC_PROXY_DATAFILE V$PROXY_DATAFILE Datafile backups that were taken using the proxy copy functionality
RC_REDO_LOG V$LOG and V$LOGFILE Online redo logs for all incarnations of the database since the last catalog resynchronization
RC_REDO_THREAD V$THREAD All redo threads for all incarnations of the database since the last catalog resynchronization
RC_RESTORE_POINT V$RESTORE_POINT All restore points for all incarnations of the database since the last catalog resynchronization
RC_RESYNC n/a Recovery catalog resynchronizations
RC_RMAN_BACKUP_JOB_DETAILS V$RMAN_BACKUP_JOB_DETAILS Details about backup jobs for Enterprise Manager
RC_RMAN_BACKUP_SUBJOB_DETAILS V$RMAN_BACKUP_SUBJOB_DETAILS Details about backup subjobs for Enterprise Manager
RC_RMAN_BACKUP_TYPE V$RMAN_BACKUP_TYPE Used internally by Enterprise Manager
RC_RMAN_CONFIGURATION V$RMAN_CONFIGURATION RMAN configuration settings
RC_RMAN_OUTPUT V$RMAN_OUTPUT Output from RMAN commands for use in Enterprise Manager
RC_RMAN_STATUS V$RMAN_STATUS Historical status information about RMAN operations.
RC_SITE n/a Databases in a Data Guard environment
RC_STORED_SCRIPT n/a Names of scripts stored in the recovery catalog
RC_STORED_SCRIPT_LINE n/a Contents of the scripts stored in the recovery catalog
RC_TABLESPACE V$TABLESPACE All tablespaces registered in the recovery catalog, all dropped tablespaces, and tablespaces that belong to old incarnations
RC_TEMPFILE V$TEMPFILE All tempfiles registered in the recovery catalog
RC_UNUSABLE_BACKUPFILE_DETAILS V$UNUSABLE_BACKUPFILE_DETAILS Unusable backup files registered in the recovery catalog
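As a small illustration of the catalog/V$ pairing (a sketch; RC_ views are queried while connected as the recovery catalog owner, V$ views on the target database itself):

-- From the recovery catalog:
SQL> SELECT name, dbid FROM rc_database;

-- The corresponding dynamic view on the target database:
SQL> SELECT name, dbid FROM v$database;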