Monday, June 13, 2016
[INS-20802] Grid Infrastructure Failed During Grid Installation on Windows OS
It was a challenging day troubleshooting a cluster configuration failure while setting up a 2-node Oracle 11gR2 (11.2.0.4) RAC cluster on Windows Server 2012 R2 for one of our customers. At every attempt, the Grid configuration for the cluster failed while installing the Oracle Grid Infrastructure software.
The installActions log didn't have much of a clue about the GI configuration failure, but the rootcrs_racnode1.log file, located at $GRID_HOME\cfgtoollogs\rootcrs_racnode1.log, provided the root cause of the cluster configuration failure: "The driver is not currently installed on this node."
Below are the entries from the rootcrs_racnode1.log file on Node 1.
From Log file : C:\app\11.2.0\grid\cfgtoollogs\rootcrs_racnode1.log
------------------------------------------------------
2016-06-09 08:22:36: The 'ROOTCRS_ACFSINST' is either in START/FAILED state
2016-06-09 08:22:36: Executing 'E:\app\11.2.0\grid\bin\acfsroot.bat install'
2016-06-09 08:22:36: Executing cmd: E:\app\11.2.0\grid\bin\acfsroot.bat install
2016-06-09 08:22:37: Command output:
> ACFS-9300: ADVM/ACFS distribution files found.
> ACFS-9307: Installing requested ADVM/ACFS software.
> acfsinstall: ACFS-09420: The driver is not currently installed on this node.
> acfsinstall: ACFS-09411: CreateService succeeded.
> acfsinstall: CLSU-00100: Operating System function: StartDriver failed with error data: 31
> acfsinstall: CLSU-00101: Operating System error message: A device attached to the system is not functioning.
> acfsinstall: CLSU-00103: error location: StartDriver_
> acfsinstall: CLSU-00104: additional error information: W
> acfsinstall: ACFS-09419: StartService failed.
> acfsinstall: ACFS-09401: Failed to install the driver.
>
> ACFS-9340: failed to install OKS driver.
> acfsinstall: ACFS-09420: The driver is not currently installed on this node.
> acfsinstall: ACFS-09411: CreateService succeeded.
> acfsinstall: CLSU-00100: Operating System function: StartDriver failed with error data: 1068
> acfsinstall: CLSU-00101: Operating System error message: The dependency service or group failed to start.
> acfsinstall: CLSU-00103: error location: StartDriver_
> acfsinstall: CLSU-00104: additional error information: J
> acfsinstall: ACFS-09419: StartService failed.
> acfsinstall: ACFS-09401: Failed to install the driver.
>
> ACFS-9340: failed to install ADVM driver.
> acfsinstall: ACFS-09420: The driver is not currently installed on this node.
> acfsinstall: ACFS-09411: CreateService succeeded.
> acfsinstall: CLSU-00100: Operating System function: StartDriver failed with error data: 1068
> acfsinstall: CLSU-00101: Operating System error message: The dependency service or group failed to start.
> acfsinstall: CLSU-00103: error location: StartDriver_
> acfsinstall: CLSU-00104: additional error information: ]
> acfsinstall: ACFS-09419: StartService failed.
> acfsinstall: ACFS-09401: Failed to install the driver.
>
> ACFS-9340: failed to install ACFS driver.
> ACFS-9310: ADVM/ACFS installation failed.
>End Command output
2016-06-09 08:22:37: E:\app\11.2.0\grid\bin\acfsroot.bat install ... failed
2016-06-09 08:22:37: USM driver install status is 0
2016-06-09 08:22:37: USM driver install actions failed
2016-06-09 08:22:37: Running as user Administrator: E:\app\11.2.0\grid\bin\cluutil -ckpt -oraclebase E:\app\Administrator -writeckpt -name ROOTCRS_ACFSINST -state FAIL
2016-06-09 08:22:37: s_run_as_user2: Running E:\app\11.2.0\grid\bin\cluutil -ckpt -oraclebase E:\app\Administrator -writeckpt -name ROOTCRS_ACFSINST -state FAIL
2016-06-09 08:22:37: E:\app\11.2.0\grid\bin\cluutil successfully executed
2016-06-09 08:22:37: Succeeded in writing the checkpoint:'ROOTCRS_ACFSINST' with status:FAIL
2016-06-09 08:22:37: CkptFile: E:\app\Administrator\Clusterware\ckptGridHA_win1.xml
2016-06-09 08:22:37: Sync the checkpoint file 'E:\app\Administrator\Clusterware\ckptGridHA_win1.xml'
Solutions :
=========
As per the logs pasted above, I came across an unpublished bug, BUG 17927204 - ACFS SUPPORT FOR WINDOWS 2012R2, in Oracle Grid Infrastructure 11.2.0.4 itself. To resolve the cluster configuration issue, I downloaded the one-off patch (p22839608_112040_MSWIN-x86-64) from MOS, to be applied to the GRID_HOME binaries on both nodes in the cluster. In order to apply this patch, you will also have to download the corresponding OPatch utility (p6880880_112000_MSWIN-x86-64) from MOS.
Please refer to MOS Doc ID 1987371.1 for the details.
Once you have downloaded both patches mentioned above, please follow the steps below for a successful Grid Infrastructure installation.
1 - Clean up the failed GI run on both nodes (this includes deinstalling GI and removing all related entries from the Windows Registry).
Click here to see how to clean up a failed Grid installation.
2 - Bounce the nodes once you are done with step 1.
3 - Run the GI installer (setup.exe) and choose to install the Grid Infrastructure software only.
Note :- In step 3 you will have to install the GI software on each node individually, as a software-only installation does not push the binaries to the remote nodes.
4 - Once the GI software (software only) is installed on both nodes, replace the OPatch folder in the Grid Home with the one you downloaded above (p6880880_112000_MSWIN-x86-64) on both nodes (or rename the existing OPatch directory first, e.g. to OPatch_old).
5 - Verify that the OPatch utility is working with the opatch.exe lsinventory command, then apply the one-off patch (p22839608_112040_MSWIN-x86-64) to the Grid Home on both nodes; a sketch of these commands follows below.
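For reference, here is a minimal sketch of step 5 on one node, run from an elevated (Administrator) command prompt. The drive letters and the patch staging directory (E:\stage\22839608) are only examples, so adjust them to your environment and follow the patch README:
C:\> cd /d E:\app\11.2.0\grid\OPatch
E:\app\11.2.0\grid\OPatch> opatch.exe lsinventory
E:\app\11.2.0\grid\OPatch> cd /d E:\stage\22839608
E:\stage\22839608> E:\app\11.2.0\grid\OPatch\opatch.exe apply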
6 - After the patch has been applied successfully on both nodes, start the Grid Infrastructure configuration for the cluster as mentioned below.
Navigate to and run $GRID_HOME\crs\config\config.bat
Note : - This launches the Grid Infrastructure configuration GUI; proceed through the steps as prompted on the interface. Please note that if your GI configuration still fails at 33%, don't just cancel the installation; click OK on the error prompt and then click "Retry" to continue with the installation.
Hope this helps to resolve the issue.
Tuesday, June 7, 2016
How to find the number of instances configured in your RAC environment?
Below are the ways you can find out how many instances are configured in your Oracle RAC cluster environment.
1 - Query the V$ACTIVE_INSTANCES view to determine the number of instances involved in your RAC configuration.
SQL> desc v$active_instances;
Name Null? Type
----------------------------------------- -------- ----------------------------
INST_NUMBER NUMBER
INST_NAME VARCHAR2(60)
SQL> select * from v$active_instances;
INST_NUMBER INST_NAME
----------- ------------------------------------------------------------
1 rac1.rajdbsolutions.com:ractst1
2 rac2.rajdbsolutions.com:ractst2
2 - You can find the same answer at the OS level using the SRVCTL command-line utility, as shown below.
-bash-3.2$ srvctl config database -d ractst
Database unique name: ractst
Database name: ractst
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ractst/spfileractst.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ractst
Database instances: ractst1,ractst2
Disk Groups: DATA
Services:
Database is administrator managed
Note : - In the srvctl config output above, you can see the instances configured in the current RAC environment (ractst1,ractst2) listed, comma-separated, under the Database instances entry.
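3 - You can also query the cluster-wide GV$INSTANCE view for the same information. A minimal sketch (standard columns in 11g; the output will vary by environment):
SQL> select inst_id, instance_name, host_name from gv$instance order by inst_id;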
Adding a new OCR device/file
To avoid a single point of failure for the OCR, we should have multiple OCR devices/files on separate storage. We can have up to 5 OCR devices/files in a cluster configuration. The steps below outline how to add a new OCR device/file to the cluster configuration.
Step 1 - Let's first find how many OCR devices/files already exist.
[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2616
Available space (kbytes) : 259504
ID : 170058601
Device/File Name : +VOTE_DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
Note :- In the preceding output we can see that only one OCR file exists, in the +VOTE_DATA diskgroup.
Step 2 - Add a new OCR device/file.
[root@rac1 ~]# ocrconfig -add '+VOTE_DATA'
PROT-29: The Oracle Cluster Registry location is already configured
Note :- We can't add another OCR device/file on the same file system or diskgroup, so the new OCR file must go on a separate device/diskgroup; adding another OCR file on the same device or diskgroup doesn't avoid a SPOF (Single Point Of Failure).
[root@rac1 ~]# ocrconfig -add '+FLASH'
Now verify that the new OCR device/file has been added in the +FLASH diskgroup.
[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2616
Available space (kbytes) : 259504
ID : 170058601
Device/File Name : +VOTE_DATA
Device/File integrity check succeeded
Device/File Name : +FLASH
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
[root@rac1 ~]# ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +VOTE_DATA
Device/File Name : +FLASH
Note : As we can see above, a new OCR device/file has been added in the +FLASH diskgroup. After adding the new OCR file, check the integrity of the OCR once with the ocrcheck command.
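For completeness, the counterpart operation removes an OCR location again with ocrconfig -delete (a sketch only; run it as root, and only if you really want to drop that OCR copy):
[root@rac1 ~]# ocrconfig -delete '+FLASH'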
Significance of OCRCONFIG
OCRCONFIG:
Use the ocrconfig command to manage the OCR. Using this utility you can import, export, add, delete, restore, overwrite, back up, repair, replace, move, upgrade, or downgrade the OCR.
Below are the options that can be used with the ocrconfig command.
---------------------------------------------
[root@rac1 ~]# ocrconfig
Name:
ocrconfig - Configuration tool for Oracle Cluster/Local Registry.
Synopsis:
ocrconfig [option]
option:
[-local] -export <filename>
- Export OCR/OLR contents to a file
[-local] -import <filename> - Import OCR/OLR contents from a file
[-local] -upgrade [<user> [<group>]]
- Upgrade OCR from previous version
-downgrade [-version <version string>]
- Downgrade OCR to the specified version
[-local] -backuploc <dirname> - Configure OCR/OLR backup location
[-local] -showbackup [auto|manual] - Show OCR/OLR backup information
[-local] -manualbackup - Perform OCR/OLR backup
[-local] -restore <filename> - Restore OCR/OLR from physical backup
-replace <current filename> -replacement <new filename>
- Replace a OCR device/file <filename1> with <filename2>
-add <filename> - Add a new OCR device/file
-delete <filename> - Remove a OCR device/file
-overwrite - Overwrite OCR configuration on disk
-repair -add <filename> | -delete <filename> | -replace <current filename> -replacement <new filename>
- Repair OCR configuration on the local node
-help - Print out this help information
Note:
* A log file will be created in
$ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
you have file creation privileges in the above directory before
running this tool.
* Only -local -showbackup [manual] is supported.
* Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry.
1 - Let's use the export option (-local -export) with the ocrconfig command to export the contents of the OLR to a file as a backup.
[root@rac1 ~]# ocrconfig -local -export /u01/app/OCR_Local_export.txt
The OLR contents have been exported, and the export file listed below has been created.
[root@rac1 ~]# ls -ltr /u01/app/OCR_Local_export.txt
-rw-r--r-- 1 root root 73429 Jun 7 09:00 /u01/app/OCR_Local_export.txt
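If the OLR ever needs to be restored from this export, the counterpart -import option (shown in the ocrconfig help above) can be used. A minimal sketch, on the assumption that the Clusterware stack on the node is stopped first:
[root@rac1 ~]# crsctl stop crs
[root@rac1 ~]# ocrconfig -local -import /u01/app/OCR_Local_export.txt
[root@rac1 ~]# crsctl start crs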
Monday, June 6, 2016
OCRCHECK : Oracle Cluster Registry Check utility
OCRCHECK:
The OCRCHECK utility displays the version of the OCR's block format, total space available and used space, OCRID, and the OCR locations that you have configured. OCRCHECK performs a block-by-block checksum operation for all of the blocks in all of the OCRs that you have configured. It also returns an individual status for each file and a result for the overall OCR integrity check.
Note:
Oracle supports using the ocrcheck command when, at a minimum, the Oracle Cluster Ready Services stack is OFFLINE on all nodes in the cluster. The command will run even if the stack is ONLINE, but it can falsely indicate that the OCR is corrupt if the check happens while an update to the OCR is underway.
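As a quick sanity check before running ocrcheck, you can verify the state of the stack with crsctl; a sketch (output omitted here):
[root@rac2 ~]# crsctl check crs
[root@rac2 ~]# crsctl check cluster -all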
Syntax
ocrcheck [-local] [-config] [-details] [-help]
[root@rac2 ~]# ocrcheck -help
Name:
ocrcheck - Displays health of Oracle Cluster/Local Registry.
Synopsis:
ocrcheck [-config] [-local]
-config Displays the configured locations of the Oracle Cluster Registry.
This can be used with the -local option to display the configured
location of the Oracle Local Registry
-local The operation will be performed on the Oracle Local Registry.
Notes:
A log file will be created in
$ORACLE_HOME/log/<hostname>/client/ocrcheck_<pid>.log.
File creation privileges in the above directory are needed
when running this tool.
[root@rac2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2616
Available space (kbytes) : 259504
ID : 170058601
Device/File Name : +VOTE_DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
Note :- When we run the ocrcheck command as the root user without any options, it checks the integrity of the Oracle Cluster Registry and reports the OCR version, total space, used space, and available space. It also displays the file ID and the location where the OCR is stored for global access, as shown in the preceding output.
[root@rac2 ~]# ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +VOTE_DATA
Note :- When we run the ocrcheck command as root with the -config option, it displays the device/diskgroup where the cluster OCR is located.
[root@rac2 ~]# ocrcheck -config -local
Oracle Local Registry configuration is :
Device/File Name : /u01/app/11.2.0/grid/cdata/rac2.olr
[root@rac2 ~]#
Note :- When we run ocrcheck -config along with the -local option, it displays the Oracle Local Registry (OLR), the node-local counterpart of the OCR, as shown in the preceding example.
Note :-
A log file for each ocrcheck run is created under $GRID_HOME/log/<hostname>/client/ocrcheck_<pid>.log, and you need file creation privileges in that directory to run the tool. Let's take a look at this with an example, as explained below.
1 - First, let's check the timestamp on the RAC node where we will be running the ocrcheck command.
[root@rac2 client]# date
Mon Jun 6 10:28:03 IST 2016
2 - Now, let's run the ocrcheck command.
[root@rac2 client]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2616
Available space (kbytes) : 259504
ID : 170058601
Device/File Name : +VOTE_DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
3 - Now let's go to the log location ($GRID_HOME/log/<hostname>/client) and check whether a new log file has been created.
[root@rac2 client]# pwd
/u01/app/11.2.0/grid/log/rac2/client
[root@rac2 client]# ls -ltr | tail
-rw-r----- 1 root root 256 Jun 3 12:14 ocrconfig_7211.log
-rw-r----- 1 root root 256 Jun 3 12:21 ocrconfig_7460.log
-rw-r----- 1 root root 342 Jun 3 12:21 ocrconfig_7469.log
-rw-r--r-- 1 oragrid oinstall 1612 Jun 6 09:56 oclskd.log
-rw-r--r-- 1 oragrid oinstall 21138 Jun 6 09:56 olsnodes.log
-rw-r--r-- 1 root root 379 Jun 6 09:58 ocrcheck_6646.log
-rw-r--r-- 1 root root 379 Jun 6 09:58 ocrcheck_6669.log
-rw-r----- 1 root root 255 Jun 6 09:59 ocrcheck_6684.log
-rw-r----- 1 root root 255 Jun 6 09:59 ocrcheck_6689.log
-rw-r--r-- 1 root root 379 Jun 6 10:28 ocrcheck_7621.log
[root@rac2 client]# date
Mon Jun 6 10:28:52 IST 2016
Note :- We can see that a new log file (ocrcheck_7621.log, timestamped 10:28) has been created.
Contents of ocrcheck_7621.log
[root@rac2 client]# cat ocrcheck_7621.log
Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
2016-06-06 10:28:36.677: [OCRCHECK][3038611136]ocrcheck starts...
2016-06-06 10:28:37.212: [OCRCHECK][3038611136]protchcheck: OCR status : total = [262120], used = [2616], avail = [259504]
2016-06-06 10:28:40.939: [OCRCHECK][3038611136]Exiting [status=success]...
Migrate to Oracle Database with SQL Developer.
http://www.oracle.com/technetwork/database/migration/index.html
Friday, June 3, 2016
How to Enable Archiving in Oracle RAC environment?
Enabling ARCHIVELOG mode in an Oracle RAC environment.
The verification below shows that our RAC database is currently in NOARCHIVELOG mode.
SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 4
Current log sequence 5
Step 1 - Shut down the database across all nodes, from any node in the cluster, using the command below.
-bash-3.2$ srvctl stop database -d ractst -o immediate
Let's verify that the database instances are down on all cluster nodes.
-bash-3.2$ srvctl status database -d ractst
Instance ractst1 is not running on node rac1
Instance ractst2 is not running on node rac2
Step 2 - Mount the database instances using the command below.
-bash-3.2$ srvctl start database -d ractst -o mount
The instances are now started in mount state.
-bash-3.2$ srvctl status database -d ractst
Instance ractst1 is running on node rac1
Instance ractst2 is running on node rac2
Note : - Before Oracle 11g R2, we had to set the cluster_database initialization parameter to FALSE in order to enable or disable archiving in a RAC environment; from 11g R2 onward this step is no longer required.
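For readers on older releases, a rough sketch of that pre-11gR2 approach (run as SYSDBA; not needed on 11gR2 and later, as used in this post):
SQL> alter system set cluster_database=false scope=spfile sid='*';
-- stop all instances, then start only one instance in MOUNT state
SQL> alter database archivelog;
SQL> alter system set cluster_database=true scope=spfile sid='*';
-- restart the database across all nodes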
Step 3 - Enable the archiving now.
-bash-3.2$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.1.0 Production on Fri Jun 3 13:00:52 2016
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
Database altered.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 4
Next log sequence to archive 5
Current log sequence 5
Also open the database on the 2nd node and check the archiving status, as below.
SQL> alter database open;
Database altered.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 2
Next log sequence to archive 3
Current log sequence 3
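To confirm that archiving actually works across both threads, a quick check can be run from either instance (a sketch; sequence numbers are environment-specific):
SQL> alter system archive log current;
SQL> select thread#, max(sequence#) from v$archived_log group by thread#;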
That's it... Hope it helps someone.
Thursday, June 2, 2016
Recovery Catalog Views and their Corresponding V$ Views
Here I thought I'd post a very useful list of RMAN recovery catalog views and their corresponding V$ views as a quick reference, compiled from the Oracle Backup and Recovery Reference documentation.
To find the details of any recovery catalog view or V$ view, e.g. which fields a particular view contains and what they are for, just click on that view in the list below and it will redirect you to the exact page of the Oracle docs.
Recovery Catalog View | Corresponding V$ View | Catalog View Describes ... |
RC_ARCHIVED_LOG | V$ARCHIVED_LOG | Archived and unarchived redo log files |
RC_BACKUP_ARCHIVELOG_DETAILS | V$BACKUP_ARCHIVELOG_DETAILS | Details about archived redo log backups for Enterprise Manager |
RC_BACKUP_ARCHIVELOG_SUMMARY | V$BACKUP_ARCHIVELOG_SUMMARY | Summary of information about archived redo log backups for Enterprise Manager |
RC_BACKUP_CONTROLFILE | V$BACKUP_DATAFILE | Control files backed up in backup sets |
RC_BACKUP_CONTROLFILE_DETAILS | V$BACKUP_CONTROLFILE_DETAILS | Details about control file backups for Enterprise Manager |
RC_BACKUP_CONTROLFILE_SUMMARY | V$BACKUP_CONTROLFILE_SUMMARY | Summary of information about control file backups for Enterprise Manager |
RC_BACKUP_COPY_DETAILS | V$BACKUP_COPY_DETAILS | Details about datafile image copy backups for Enterprise Manager |
RC_BACKUP_COPY_SUMMARY | V$BACKUP_COPY_SUMMARY | Summary of information about datafile image copy backups for Enterprise Manager |
RC_BACKUP_CORRUPTION | V$BACKUP_CORRUPTION | Corrupt block ranges in datafile backups |
RC_BACKUP_DATAFILE | V$BACKUP_DATAFILE | Datafiles in backup sets |
RC_BACKUP_DATAFILE_DETAILS | V$BACKUP_DATAFILE_DETAILS | Details about datafile backups for Enterprise Manager |
RC_BACKUP_DATAFILE_SUMMARY | V$BACKUP_DATAFILE_SUMMARY | Summary of information about datafile backups for Enterprise Manager |
RC_BACKUP_FILES | V$BACKUP_FILES | RMAN backups and copies known to the repository. |
RC_BACKUP_PIECE | V$BACKUP_PIECE | Backup pieces |
RC_BACKUP_PIECE_DETAILS | V$BACKUP_PIECE_DETAILS | Details about backup pieces for Enterprise Manager |
RC_BACKUP_REDOLOG | V$BACKUP_REDOLOG | Archived redo log files in backup sets |
RC_BACKUP_SET | V$BACKUP_SET | Backup sets for all incarnations of databases registered in the catalog |
RC_BACKUP_SET_DETAILS | V$BACKUP_SET_DETAILS | Details about backup sets for Enterprise Manager |
RC_BACKUP_SET_SUMMARY | V$BACKUP_SET_SUMMARY | Summary of information about backup sets for Enterprise Manager |
RC_BACKUP_SPFILE | V$BACKUP_SPFILE | Server parameter files in backups |
RC_BACKUP_SPFILE_DETAILS | V$BACKUP_SPFILE_DETAILS | Details about server parameter file backups for Enterprise Manager |
RC_BACKUP_SPFILE_SUMMARY | V$BACKUP_SPFILE_SUMMARY | Summary of information about server parameter file backups for Enterprise Manager |
RC_CHECKPOINT | n/a | Deprecated in favor of RC_RESYNC |
RC_CONTROLFILE_COPY | V$DATAFILE_COPY | Control file copies on disk |
RC_COPY_CORRUPTION | V$COPY_CORRUPTION | Corrupt block ranges in datafile copies |
RC_DATABASE | V$DATABASE | Databases registered in the recovery catalog |
RC_DATABASE_BLOCK_CORRUPTION | V$DATABASE_BLOCK_CORRUPTION | Database blocks marked as corrupted in the most recent RMAN backup or copy |
RC_DATABASE_INCARNATION | V$DATABASE_INCARNATION | Database incarnations registered in the recovery catalog |
RC_DATAFILE | V$DATAFILE | Datafiles registered in the recovery catalog |
RC_DATAFILE_COPY | V$DATAFILE_COPY | Datafile copies on disk |
RC_LOG_HISTORY | V$LOG_HISTORY | Online redo log history indicating when log switches occurred |
RC_OFFLINE_RANGE | V$OFFLINE_RANGE | Offline ranges for datafiles |
RC_PROXY_ARCHIVEDLOG | V$PROXY_ARCHIVEDLOG | Archived log backups taken with the proxy copy functionality |
RC_PROXY_ARCHIVELOG_DETAILS | V$PROXY_ARCHIVELOG_DETAILS | Details about proxy archived redo log files for Enterprise Manager |
RC_PROXY_ARCHIVELOG_SUMMARY | V$PROXY_ARCHIVELOG_SUMMARY | Summary of information about proxy archived redo log files for Enterprise Manager |
RC_PROXY_CONTROLFILE | V$PROXY_DATAFILE | Control file backups taken with the proxy copy functionality |
RC_PROXY_COPY_DETAILS | V$PROXY_COPY_DETAILS | Details about datafile proxy copies for Enterprise Manager |
RC_PROXY_COPY_SUMMARY | V$PROXY_COPY_SUMMARY | Summary of information about datafile proxy copies for Enterprise Manager |
RC_PROXY_DATAFILE | V$PROXY_DATAFILE | Datafile backups that were taken using the proxy copy functionality |
RC_REDO_LOG | V$LOG and V$LOGFILE | Online redo logs for all incarnations of the database since the last catalog resynchronization |
RC_REDO_THREAD | V$THREAD | All redo threads for all incarnations of the database since the last catalog resynchronization |
RC_RESTORE_POINT | V$RESTORE_POINT | All restore points for all incarnations of the database since the last catalog resynchronization |
RC_RESYNC | n/a | Recovery catalog resynchronizations |
RC_RMAN_BACKUP_JOB_DETAILS | V$RMAN_BACKUP_JOB_DETAILS | Details about backup jobs for Enterprise Manager |
RC_RMAN_BACKUP_SUBJOB_DETAILS | V$RMAN_BACKUP_SUBJOB_DETAILS | Details about backup subjobs for Enterprise Manager |
RC_RMAN_BACKUP_TYPE | V$RMAN_BACKUP_TYPE | Used internally by Enterprise Manager |
RC_RMAN_CONFIGURATION | V$RMAN_CONFIGURATION | RMAN configuration settings |
RC_RMAN_OUTPUT | V$RMAN_OUTPUT | Output from RMAN commands for use in Enterprise Manager |
RC_RMAN_STATUS | V$RMAN_STATUS | Historical status information about RMAN operations. |
RC_SITE | n/a | Databases in a Data Guard environment |
RC_STORED_SCRIPT | n/a | Names of scripts stored in the recovery catalog |
RC_STORED_SCRIPT_LINE | n/a | Contents of the scripts stored in the recovery catalog |
RC_TABLESPACE | V$TABLESPACE | All tablespaces registered in the recovery catalog, all dropped tablespaces, and tablespaces that belong to old incarnations |
RC_TEMPFILE | V$TEMPFILE | All tempfiles registered in the recovery catalog |
RC_UNUSABLE_BACKUPFILE_DETAILS | V$UNUSABLE_BACKUPFILE_DETAILS | Unusable backup files registered in the recovery catalog |
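As a quick usage sketch, the RC_ views live in the recovery catalog owner's schema, so you query them after connecting as that user; the catalog owner and connect string below are placeholders:
SQL> connect rco/rco_password@catdb
SQL> select name, dbid from rc_database;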