
Showing posts with label Database General. Show all posts

Monday, February 17, 2020

Oracle Database 20c Documentation Available Now!



Here is the great news that we had been waiting for.


The Oracle Database 20c documentation is now available here:

https://docs.oracle.com/en/database/oracle/oracle-database/20/install-and-upgrade.html



Hope it helps, and thanks for reading. Please subscribe to this blog to stay updated with the latest news on Oracle Cloud Infrastructure and Oracle Autonomous Database Cloud Services, and with the upcoming articles in the Oracle 20c new features series.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/
Facebook : https://www.facebook.com/rkushawaha


Friday, December 27, 2019

Moving datafile online in Oracle 12c (12.1.0.2)


Starting with Oracle Database 12c Release 1 (12.1.0.2), you can move a data file from one location (a file system or an ASM disk group) to another while the database is open and in use by application users.


Take an example on ASM storage: a datafile was mistakenly added to a tablespace without the plus (+) sign before the disk group name, so the new file was created under $ORACLE_HOME on the local file system.


In this situation, the DBA should quickly resize the newly added file to the smallest possible size and turn off datafile autoextend, so that not much data can be written to it and the file stays small, which gives us the luxury of moving it to the proper location in minimal time.
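The damage-control step described above can be sketched in SQL as follows. This is a hedged sketch: the path is the one from the incident below, but the RESIZE target is illustrative, and the real minimum size depends on the extents already allocated in the file.

```sql
-- Shrink the mistakenly created datafile (path from the example below) and
-- stop it from growing any further.  The 100M target is illustrative; resize
-- will fail if allocated extents already sit beyond that size.
ALTER DATABASE DATAFILE
  '/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/DATA' RESIZE 100M;

ALTER DATABASE DATAFILE
  '/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/DATA' AUTOEXTEND OFF;
```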

The highlighted data file below was added by mistake and placed under ORACLE_HOME.

FILE_NAME                                                                                GB AUT     MAX_GB
-------------------------------------------------------------------------------- ---------- --- ----------
+DATA/TESTDB/DATAFILE/users.4513.1027281799                                              10 YES 31.9999847
+DATA/TESTDB/DATAFILE/users.4813.1027281809                                               4 YES 31.9999847
/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/DATA                                        2 YES         31
+DATA/TESTDB/DATAFILE/users.8520.1027282145                                               2 YES         31




Let's move the datafile created on the /u01 mount to our +DATA ASM disk group with the following command.


SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/DATA' TO '+DATA';

Database altered.



Now, let's check the data file list to validate that it has moved.


SQL> select file_name, bytes/1024/1024/1024 as GB, autoextensible, MAXBYTES/1024/1024/1024 MAX_GB from dba_data_files where tablespace_name='USERS' order by GB desc;

FILE_NAME                                                                                GB AUT     MAX_GB
-------------------------------------------------------------------------------- ---------- --- ----------
+DATA/TESTDB/DATAFILE/users.4513.1027281799                                              10 YES 31.9999847
+DATA/TESTDB/DATAFILE/users.4813.1027281809                                               4 YES 31.9999847
+DATA/TESTDB/DATAFILE/users.8520.1027282145                                               2 YES         31
+DATA/TESTDB/DATAFILE/users.4194.1027284059                                      .001953125 NO           0


The data file was moved successfully, as you can see in the output above.

Now that the datafile has been moved to the appropriate location, you can resize it and turn autoextend on for automatic growth.


SQL> alter database datafile '+DATA/TESTDB/DATAFILE/users.4194.1027284059' resize 1g;

Database altered.

SQL> alter database datafile '+DATA/TESTDB/DATAFILE/users.4194.1027284059' autoextend on maxsize 32767m;

Database altered.

SQL> 


Let's validate the data file we moved, to confirm it has no corrupt blocks.


RMAN> validate datafile 68;

Starting validate at 17-DEC-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=112 instance=TESTDB1 device type=DISK
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00068 name=+DATA/TESTDB/DATAFILE/users.4194.1027284059
channel ORA_DISK_1: validation complete, elapsed time: 00:00:03
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
68   OK     0              130945       131072          15464094246834
  File Name: +DATA/TESTDB/DATAFILE/users.4194.1027284059
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              0
  Index      0              0
  Other      0              127

Finished validate at 17-DEC-19

RMAN>

Datafile validation was successful.




Hope it helps, and thanks for reading. Please subscribe to this blog to stay updated with the latest news on Oracle Cloud Infrastructure and Oracle Autonomous Database Cloud Services, and with new articles.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/


Thursday, October 17, 2019

ORA-46269: Conflicting operation on audit files



The DBMS_AUDIT_MGMT package provides subprograms to manage audit trail records. These subprograms enable audit administrators to manage the audit trail. In a mixed-mode environment, these audit trails comprise the database, operating system (OS), and XML audit trails. In a unified auditing environment, this comprises the unified audit trail.
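For context, an OS audit trail purge job like the one failing below is typically set up with DBMS_AUDIT_MGMT calls along these lines. This is a sketch, not the commands actually run on this system; the job name and intervals are illustrative.

```sql
BEGIN
  -- One-time initialization of cleanup for the OS audit trail,
  -- with the default cleanup interval in hours (illustrative value).
  DBMS_AUDIT_MGMT.INIT_CLEANUP(
    audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    default_cleanup_interval => 1);

  -- Scheduled purge job; jobs such as STANDARD_OS_AUDIT_TRAIL_PURGE
  -- come into existence through this call.
  DBMS_AUDIT_MGMT.CREATE_PURGE_JOB(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    audit_trail_purge_interval => 1,   -- hours
    audit_trail_purge_name     => 'STANDARD_OS_AUDIT_TRAIL_PURGE',
    use_last_arch_timestamp    => TRUE);
END;
/
```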



One of the daily maintenance system jobs fails with the ORA-46269 error, as shown below.


-bash-4.1$ oerr ora 46269
46269, 00000, "Conflicting operation on audit files"
// *Cause: Audit files were being operated by more than one DBMS_AUDIT_MGMT
//         API from different sessions. The simultaneous execution of the
//         CLEAN_AUDIT_TRAIL API with itself can cause the conflict.
// *Action: Wait until the execution of one of the API's complete and try
//          executing the API again.
-bash-4.1$


From the database instance alert log:

Tue Oct 15 19:38:53 2019
Errors in file /u01/app/oracle/diag/rdbms/orcldb/orcldb1/trace/orcldb1_j002_247077.trc:
ORA-12012: error on auto execute of job "SYS"."STANDARD_OS_AUDIT_TRAIL_PURGE"
ORA-46269: Conflicting operation on audit files
ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 61
ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 2447
ORA-06512: at line 1
Tue Oct 15 19:46:18 2019


From the incident trace file:

Trace file /u01/app/oracle/diag/rdbms/orcldb/orcldb1/trace/orcldb1_j002_247077.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, Oracle Label Security,
OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/11.2.0.4/dbhome_2
System name:    Linux
Node name:      dbadm02.example.com
Release:        4.1.12-124.24.3.el6uek.x86_64
Version:        #2 SMP Mon Jan 14 15:08:09 PST 2019
Machine:        x86_64
Instance name: orcldb1
Redo thread mounted by this instance: 1
Oracle process number: 115
Unix process pid: 247077, image: oracle@dbadm02.example.com (J002)

*** 2019-10-15 19:38:53.665
*** SESSION ID:(1195.35311) 2019-10-15 19:38:53.665
*** CLIENT ID:() 2019-10-15 19:38:53.665
*** SERVICE NAME:(SYS$USERS) 2019-10-15 19:38:53.665
*** MODULE NAME:(DBMS_SCHEDULER) 2019-10-15 19:38:53.665
*** ACTION NAME:(STANDARD_OS_AUDIT_TRAIL_PURGE) 2019-10-15 19:38:53.665



In the current list of audit-management cleanup jobs below, I can see two audit clean-up jobs scheduled at the same time. The first one starts a few seconds before the second; if it runs long, the second job conflicts with its processing and fails with the error above.


SQL> select * from DBA_AUDIT_MGMT_CLEANUP_JOBS;

JOB_NAME                                 JOB_STAT AUDIT_TRAIL                  JOB_FREQUENCY
---------------------------------------- -------- ---------------------------- -----------------------------------
STANDARD_OS_AUDIT_TRAIL_PURGE            ENABLED  OS AUDIT TRAIL               FREQ=HOURLY;INTERVAL=1
CLEAN_OS_DB_AUDIT_RECORD                 ENABLED  OS AUDIT TRAIL               FREQ=HOURLY;INTERVAL=1

SQL>



SQL> select OWNER,JOB_NAME,to_char(last_start_date,'DD-MM-YY HH24:MI:SS') Last_date_time, to_char(next_run_date,'DD-MM-YY HH24:MI:SS') Next_Date_Time, JOB_ACTION from DBA_SCHEDULER_JOBS;

OWNER                          JOB_NAME                       LAST_DATE_TIME    NEXT_DATE_TIME    JOB_ACTION
------------------------------ ------------------------------ ----------------- ----------------- -------------------------------------------------------
SYS                            CLEAN_OS_DB_AUDIT_RECORD       16-10-19 03:38:13 16-10-19 04:38:13 BEGIN DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(4, TRUE);  END;
SYS                            STANDARD_OS_AUDIT_TRAIL_PURGE  16-10-19 03:38:53 16-10-19 04:38:53 BEGIN DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(4, TRUE);  END;

SQL>


Scheduler details:


OWNER           JOB_NAME                            START_DATE                     ENABL STATE           FAILURE_COUNT LAST_START_DATE            NEXT_RUN_DATE               MAX_RUN_DURATIO SOURCE
--------------- ----------------------------------- ------------------------------ ----- --------------- ------------- -------------------------- ------------------------------ --------------- ---------------
SYS             STANDARD_OS_AUDIT_TRAIL_PURGE       16-jan-2016 02:38:53           TRUE  SCHEDULED                9731 16-oct-2019 01:38:53       16-oct-2019 02:38:53
SYS             CLEAN_OS_DB_AUDIT_RECORD            19-jan-2017 15:38:13           TRUE  SCHEDULED                 619 16-oct-2019 01:38:13       16-oct-2019 02:38:13
SYS             PURGE_LOG                           10-oct-2014 03:00:00           TRUE  SCHEDULED                   0 15-oct-2019 03:00:00       16-oct-2019 03:00:00




SQL> select * from DBA_AUDIT_MGMT_CONFIG_PARAMS;

PARAMETER_NAME                 PARAMETER_VALUE                     AUDIT_TRAIL
------------------------------ ----------------------------------- ----------------------------
DB AUDIT TABLESPACE            SYSAUX                              STANDARD AUDIT TRAIL
DB AUDIT TABLESPACE            SYSAUX                              FGA AUDIT TRAIL
AUDIT FILE MAX SIZE            10000                               OS AUDIT TRAIL
AUDIT FILE MAX SIZE            10000                               XML AUDIT TRAIL
AUDIT FILE MAX AGE             5                                   OS AUDIT TRAIL
AUDIT FILE MAX AGE             5                                   XML AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE      10000                               STANDARD AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE      10000                               FGA AUDIT TRAIL
OS FILE CLEAN BATCH SIZE       1000                                OS AUDIT TRAIL
OS FILE CLEAN BATCH SIZE       1000                                XML AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL      1                                   OS AUDIT TRAIL

11 rows selected.

SQL>
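An alternative to dropping the duplicate job would be to keep both jobs but stagger their schedules so they never overlap. A sketch, with an illustrative 30-minute offset:

```sql
BEGIN
  -- Shift one of the two hourly purge jobs by 30 minutes so their
  -- executions no longer coincide (offset is illustrative).
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => '"SYS"."CLEAN_OS_DB_AUDIT_RECORD"',
    attribute => 'repeat_interval',
    value     => 'FREQ=HOURLY;INTERVAL=1;BYMINUTE=30');
END;
/
```

Since both jobs run the same CLEAN_AUDIT_TRAIL call, keeping two of them adds no value here, which is why dropping one is the cleaner fix.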


I don't want duplicate job schedules in place for the same purpose, so I disabled and dropped one of the job schedules to fix the conflict and avoid noise tickets.

SQL> begin
dbms_scheduler.disable( '"SYS"."CLEAN_OS_DB_AUDIT_RECORD"' );
END;
/

PL/SQL procedure successfully completed.

SQL>



SQL> EXEC DBMS_AUDIT_MGMT.DROP_PURGE_JOB('CLEAN_OS_DB_AUDIT_RECORD');

PL/SQL procedure successfully completed.

SQL>


Now only one job is left in the schedule, and it should work fine without conflicts in the future.


SQL> select * from DBA_AUDIT_MGMT_CLEANUP_JOBS;

JOB_NAME                                                                                             JOB_STAT AUDIT_TRAIL                  JOB_FREQUENCY
---------------------------------------------------------------------------------------------------- -------- ---------------------------- ----------------------------------------------------------------------------------------------------
STANDARD_OS_AUDIT_TRAIL_PURGE                                                                        ENABLED  OS AUDIT TRAIL               FREQ=HOURLY;INTERVAL=1



OWNER                          JOB_NAME                       LAST_DATE_TIME    NEXT_DATE_TIME    JOB_ACTION
------------------------------ ------------------------------ ----------------- ----------------- -------------------------------------------------------
SYS                            STANDARD_OS_AUDIT_TRAIL_PURGE  16-10-19 04:38:53 16-10-19 05:38:53 BEGIN DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(4, TRUE);  END;



I tested a manual job run once, and it worked fine.


SQL> BEGIN DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(4, TRUE);  END;
  2  /

PL/SQL procedure successfully completed.

SQL>




Hope it helps, and thanks for reading. Please subscribe to this blog to stay updated with the latest news on Oracle Cloud Infrastructure and Oracle Autonomous Database Cloud Services, and with new articles.



Tuesday, October 8, 2019

Add, Remove, or List TFA Users and Groups





Sometimes you may need to collect srdc diagnostic data requested by an Oracle Support engineer for an Oracle internal error or for problem evaluation and analysis.

You don't want loads of unnecessary diagnostic traces and logs to be collected; you need only the traces/logs relevant to a specific Oracle incident or problem. The srdc collection therefore needs to be run by the Oracle software binary owner, which may be different from the Oracle Clusterware owner and may not have permission to run TFA to collect diagnostic data.


You won't be able to run TFA if you are not a TFA user, and you will get an error like the one depicted below, since the oracle user is not allowed to run the TFA diag collection.

[root@dbadm01 bin]# su - oracle


$ ./tfactl diagcollect -srdc ORA-00700
User oracle does not have keys to run TFA. Please check with TFA Admin(root)

$ exit
[root@dbadm01 bin]#



The TFA administrator or the root user can add the Oracle software binary owner to the TFA access list so that he/she can run the TFA srdc diagnostic collection when needed.

Let's see how we can add the oracle user to the TFA access list and run the srdc data collection; finally, we will see how to remove the oracle user from the list again.



You can view the list of users currently allowed to run TFA as shown below. We can see that only the oragrid user is allowed.


[root@dbadm01 bin]# ./tfactl access lsusers
.---------------------------------.
|  TFA Users in dbadm01  |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oragrid   | USER      | Allowed |
'-----------+-----------+---------'

.---------------------------------.
|          TFA Users in           |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oragrid   | USER      | Allowed |
| oragrid   | USER      | Allowed |
| oragrid   | USER      | Allowed |
'-----------+-----------+---------'

[root@dbadm01 bin]#


Let's now add the oracle user to the TFA access list so that it can also run TFA.


[root@dbadm01 bin]# ./tfactl access add -user oracle -local

Successfully added 'oracle' to TFA Access list.

.---------------------------------.
|  TFA Users in dbadm01  |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oracle   | USER      | Allowed |
| oragrid   | USER      | Allowed |
'-----------+-----------+---------'

[root@dbadm01 bin]#



Since the oracle user is now added to the TFA access list, let's switch to it and try to collect the TFA srdc diagnostic data.

[root@dbadm01 bin]# su - oracle
[oracle@dbadm01 ~]$

[oracle@dbadm01 ~]$ /u01/app/12.2.0.1/grid/bin/tfactl diagcollect -srdc ORA-00700
Enter the time of the ORA-00700 [YYYY-MM-DD HH24:MI:SS,<RETURN>=ALL] :
Enter the Database Name [<RETURN>=ALL] : PROD1

1. Sep/27/2019 17:21:00 : [prod1] ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [3], [148630702], [276915], [], [], [], [], [], [], [], []
2. Sep/27/2019 15:05:30 : [prod1] ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [2], [148630736], [276915], [], [], [], [], [], [], [], []
3. Sep/27/2019 15:05:30 : [prod1] ORA-00700: soft internal error, arguments: [PITL6], [276915], [148630736], [], [], [], [], [], [], [], [], []
4. Sep/27/2019 07:16:44 : [prod1] ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [2], [156338726], [276915], [], [], [], [], [], [], [], []
5. Sep/24/2019 09:51:00 : [prod1] ORA-00700: soft internal error, arguments: [PITL6], [276915], [156338560], [], [], [], [], [], [], [], [], []
6. Sep/24/2019 09:50:56 : [prod1] ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [2], [156338560], [276915], [], [], [], [], [], [], [], []

Please choose the event : 1-6 [1] 6
Selected value is : 6 ( Sep/24/2019 09:50:56 )
Scripts to be run by this srdc: ipspack rdahcve1210 rdahcve1120 rdahcve1110
Components included in this srdc: OS CRS DATABASE NOCHMOS
Collecting data for local node(s)
Scanning files from Sep/24/2019 03:50:56 to Sep/24/2019 15:50:56

Collection Id : 20191008033306dbadm01

Detailed Logging at : /u01/app/grid/tfa/repository/srdc_ora700_collection_Tue_Oct_08_03_33_07_UTC_2019_node_local/diagcollect_20191008033306_dbadm01.log
2019/10/08 03:33:10 UTC : NOTE : Any file or directory name containing the string .com will be renamed to replace .com with dotcom
2019/10/08 03:33:10 UTC : Collection Name : tfa_srdc_ora700_Tue_Oct_08_03_33_07_UTC_2019.zip
2019/10/08 03:33:11 UTC : Scanning of files for Collection in progress...
2019/10/08 03:33:11 UTC : Collecting additional diagnostic information...
2019/10/08 03:33:41 UTC : Getting list of files satisfying time range [09/24/2019 03:50:56 UTC, 09/24/2019 15:50:56 UTC]
2019/10/08 03:34:45 UTC : Completed collection of additional diagnostic information...
2019/10/08 03:36:29 UTC : Collecting ADR incident files...
2019/10/08 03:36:30 UTC : Completed Local Collection
.---------------------------------------------.
|              Collection Summary             |
+------------------+-----------+-------+------+
| Host             | Status    | Size  | Time |
+------------------+-----------+-------+------+
| dbadm01 | Completed | 249MB | 200s |
'------------------+-----------+-------+------'

Logs are being collected to: /u01/app/grid/tfa/repository/srdc_ora700_collection_Tue_Oct_08_03_33_07_UTC_2019_node_local
/u01/app/grid/tfa/repository/srdc_ora700_collection_Tue_Oct_08_03_33_07_UTC_2019_node_local/dbadm01.tfa_srdc_ora700_Tue_Oct_08_03_33_07_UTC_2019.zip
[oracle@dbadm01 ~]$

The TFA srdc collection was successful!


If you want to remove the oracle user from the TFA access list, you can do so using the command below.


[root@dbadm01 bin]# ./tfactl access remove -user oracle -local

Successfully removed 'oracle' from TFA Access list.

.---------------------------------.
|  TFA Users in dbadm01  |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oragrid   | USER      | Allowed |
'-----------+-----------+---------'

[root@dbadm01 bin]#



Hope it helps, and thanks for reading. Please subscribe to this blog to stay updated with the latest news on Oracle Cloud Infrastructure, Oracle Autonomous Database Cloud Services, and other new articles.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/


Tuesday, September 17, 2019

OOW19 - Big Announcement - Always Free Oracle Cloud Account



Hey Guys !!

Today at Oracle OpenWorld 2019, a big announcement was made by Larry Ellison: you can now get an "Always Free" Oracle Cloud account, which was earlier limited to a 30-day trial period only.


Now, if you go to https://www.oracle.com/cloud/free/, you get a "Start For Free" option; click it to provision your free Oracle Autonomous Database Cloud Service. The great part is that you get a free Oracle Autonomous Database.




Initially, the Oracle Cloud account is created for 12 months by default.






After subscribing to the free account and logging in to the Oracle Cloud console, you will notice "Always Free Eligible" tags on the Oracle Cloud services.

You can see a message just below the address bar: "You are in a free trial. When your trial is over, your account will be limited to 'Always Free' resources."

This means that for the first 30 days you can use all the free trial services, but after that you can only use the resources tagged "Always Free Eligible". And, as announced, you can keep using your free account forever. :)



In a subsequent article (Click Here to jump), I explain step by step how you can provision an Oracle Autonomous Transaction Processing Database Cloud Service.

Hope it helps. Subscribe to this blog to stay updated on the latest Oracle technologies and new articles.

Twitter : @rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/

Sunday, September 15, 2019

Warning: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE has been deprecated



You will see this warning message in the alert log of Oracle standby databases, starting with 12.1.0.2, when you start the MRP process for real-time apply using the command below.



SQL> recover managed standby database using current logfile disconnect from session;
Media recovery complete.
SQL>



From alert log:

Warning: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE has been deprecated.
Warning: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE has been deprecated.
ALTER DATABASE RECOVER  managed standby database using current logfile disconnect from session




Instead, you should now use the command below in 12c.


SQL> recover managed standby database cancel;
Media recovery complete.
SQL>



SQL> alter database recover managed standby database disconnect nodelay;

Database altered.

SQL>
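The apply status shown below can be confirmed with a query along these lines; this is a sketch, as the original post does not show the exact statement used.

```sql
-- Per-instance status of the standby (v$database holds the role and open
-- mode; gv$instance lists every RAC instance).
SELECT d.name, i.instance_name, i.status, d.open_mode, d.database_role
  FROM v$database d, gv$instance i
 ORDER BY i.instance_name;
```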



NAME      INSTANCE_NAME    STATUS       OPEN_MODE            DATABASE_ROLE
--------- ---------------- ------------ -------------------- ----------------
CDB1      cdbad1           OPEN         READ ONLY WITH APPLY PHYSICAL STANDBY
CDB1      cdbad2           OPEN         READ ONLY WITH APPLY PHYSICAL STANDBY




Hope it helps. Subscribe to this blog to stay updated on the latest Oracle technologies and new articles.

Twitter : @rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/

Sunday, September 8, 2019

DROP Pluggable Database



Theory from Oracle Doc:


Drop a PDB when you want to move the PDB to a new CDB or when you no longer need it. When you drop a PDB, the control file of the CDB is modified to eliminate all references to the dropped PDB. Archived redo log files and backups associated with the PDB are not removed, but you can use Oracle Recovery Manager (RMAN) to remove them.

When dropping a PDB, you can either keep or delete the PDB's data files by using one of the following clauses of the DROP PLUGGABLE DATABASE statement:

KEEP DATAFILES, the default, retains the data files. The PDB temp file is removed even when KEEP DATAFILES is specified because the temp file is no longer needed. When KEEP DATAFILES is specified, the PDB must be unplugged.

INCLUDING DATAFILES removes the data files from disk. If a PDB was created with the SNAPSHOT COPY clause, then you must specify INCLUDING DATAFILES when you drop the PDB.


Prerequisites

The following prerequisites must be met:

• The PDB must be in mounted mode, or it must be unplugged.
• The current user must have SYSDBA or SYSOPER administrative privilege, and the privilege must be either commonly granted or locally granted in the PDB. The user must exercise the privilege using AS SYSDBA or AS SYSOPER at connect time.



To drop a PDB:

1 - In SQL*Plus, ensure that the current container is the CDB root, or, for an application PDB, the application root that contains the application PDB. If the PDB is plugged into the CDB root, then the current container must be the CDB root. If the PDB is plugged into an application root, then the current container must be that application root or the CDB root. If you are dropping an application container, then the current container must be the CDB root, and the application container must not have any application PDBs plugged into it.

2 - Run the DROP PLUGGABLE DATABASE statement and specify the PDB to drop.

Example: Dropping a PDB while keeping its data files

SQL> DROP PLUGGABLE DATABASE pdb_name KEEP DATAFILES;

Example: Dropping a PDB and its data files

SQL> DROP PLUGGABLE DATABASE pdb_name INCLUDING DATAFILES;





Demonstration

We have many DEV pluggable databases that we created in the previous article. Click here to read how to create many PDBs in bulk using OEM.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO
         5 PDB_DEV2                       READ WRITE NO
         6 PDB_DEV3                       READ WRITE NO
         7 PDB_DEV4                       READ WRITE NO
         8 PDB_DEV5                       READ WRITE NO
        10 PDB_DEV1                       READ WRITE NO



Closing/shutting down all the PDBs that need to be dropped:

SQL> alter pluggable database PDB_DEV1 close;

Pluggable database altered.

SQL> alter pluggable database PDB_DEV2 close;

Pluggable database altered.

SQL> alter pluggable database PDB_DEV3 close;

Pluggable database altered.

SQL> alter pluggable database PDB_DEV4 close;

Pluggable database altered.

SQL> alter pluggable database PDB_DEV5 close;

Pluggable database altered.



Validate that all of them are now in MOUNTED state.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO
         5 PDB_DEV2                       MOUNTED
         6 PDB_DEV3                       MOUNTED
         7 PDB_DEV4                       MOUNTED
         8 PDB_DEV5                       MOUNTED
        10 PDB_DEV1                       MOUNTED



Drop the PDBs with the "including datafiles" clause:

SQL> drop pluggable database PDB_DEV1 including datafiles;

Pluggable database dropped.

SQL> drop pluggable database PDB_DEV2 including datafiles;

Pluggable database dropped.

SQL> drop pluggable database PDB_DEV3 including datafiles;

Pluggable database dropped.

SQL> drop pluggable database PDB_DEV4 including datafiles;

Pluggable database dropped.

SQL> drop pluggable database PDB_DEV5 including datafiles;

Pluggable database dropped.



Validate that the PDBs have been dropped successfully.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO
SQL>



Hope it helps. Subscribe to this blog to stay updated on the latest Oracle technologies and new articles.

Twitter : @rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/

Thursday, September 5, 2019

PDB Cloning in same CDB


In this article, I explain how to clone an existing pluggable database into a new PDB, in other words, how to create a new PDB from an existing PDB in the same container database.


In this demo, we will clone PDB3 as a new pluggable database named PDB4.

1 - Let's see the list of PDBs we currently have, using the "show pdbs" command.


SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO
         5 PDB3                           READ WRITE NO
SQL>


2 - Review the GUIDs of the individual PDBs.

SQL> select con_id, guid, name, open_mode from v$pdbs;

    CON_ID GUID                             NAME                           OPEN_MODE
---------- -------------------------------- ------------------------------ ----------
         2 8FA8EFE7EE287653E0537C17800A501C PDB$SEED                       READ ONLY
         3 90C13D2FC47D70F9E0537C17800ACC04 PDB1                           READ WRITE
         4 91AC92BB3457B533E0537C17800AF412 PDB2                           READ WRITE
         5 91C3038EF67B5702E0537C17800A84C6 PDB3                           READ WRITE


3 - Here I create a new tablespace in PDB3 so that, when we clone it as PDB4, the difference is visible.

SQL> alter session set container = pdb3;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB3
SQL>


SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATAC1/CDB1/DATAFILE/undotbs1.11084.1015822335
+DATAC1/CDB1/91C3038EF67B5702E0537C17800A84C6/DATAFILE/system.9073.1018133353
+DATAC1/CDB1/91C3038EF67B5702E0537C17800A84C6/DATAFILE/sysaux.3022.1018133353
+DATAC1/CDB1/91C3038EF67B5702E0537C17800A84C6/DATAFILE/users.3693.1018133353

SQL>


SQL> create tablespace tbs1 datafile '+DATAC1' size 100m;

Tablespace created.



SQL> select name from v$datafile;


NAME
--------------------------------------------------------------------------------
+DATAC1/CDB1/DATAFILE/undotbs1.11084.1015822335
+DATAC1/CDB1/91C3038EF67B5702E0537C17800A84C6/DATAFILE/system.9073.1018133353
+DATAC1/CDB1/91C3038EF67B5702E0537C17800A84C6/DATAFILE/sysaux.3022.1018133353
+DATAC1/CDB1/91C3038EF67B5702E0537C17800A84C6/DATAFILE/users.3693.1018133353
+DATAC1/CDB1/91C3038EF67B5702E0537C17800A84C6/DATAFILE/tbs1.3023.1018133807

SQL>



4 - Now we create PDB4 from the existing PDB3 using the simple command below. That is how easy it is to clone a PDB in the Oracle Multitenant architecture.

SQL> create pluggable database pdb4 from pdb3;

Pluggable database created.



5 - You can list all the PDBs in the CDB using the "show pdbs" command, and you can see that the new PDB4 is created. By default, a new PDB is in MOUNTED state when it gets created.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO
         5 PDB3                           READ WRITE NO
         6 PDB4                           MOUNTED


6 - Open the pluggable database using the command below.

SQL> alter pluggable database pdb4 open;

Pluggable database altered.


7 - Validate the PDB open status now.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO
         5 PDB3                           READ WRITE NO
         6 PDB4                           READ WRITE NO


8 - Validate new PDB details.

SQL> alter session set container = pdb4;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDB4
SQL>


Each PDB has its own GUID-named directory under the parent container database directory, where it stores its local tablespaces/datafiles.

SQL> select con_id, guid, name, open_mode from v$pdbs;

    CON_ID GUID                             NAME                           OPEN_MODE
---------- -------------------------------- ------------------------------ ----------
         2 8FA8EFE7EE287653E0537C17800A501C PDB$SEED                       READ ONLY
         3 90C13D2FC47D70F9E0537C17800ACC04 PDB1                           READ WRITE
         4 91AC92BB3457B533E0537C17800AF412 PDB2                           READ WRITE
         5 91C3038EF67B5702E0537C17800A84C6 PDB3                           READ WRITE
         6 91C324409E5776D8E0537C17800A746B PDB4                           READ WRITE


The cloned PDB4 has all the tablespaces PDB3 had, under its own GUID directory path.

SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+DATAC1/CDB1/DATAFILE/undotbs1.11084.1015822335
+DATAC1/CDB1/91C324409E5776D8E0537C17800A746B/DATAFILE/system.3134.1018133901
+DATAC1/CDB1/91C324409E5776D8E0537C17800A746B/DATAFILE/sysaux.663.1018133901
+DATAC1/CDB1/91C324409E5776D8E0537C17800A746B/DATAFILE/users.7266.1018133901
+DATAC1/CDB1/91C324409E5776D8E0537C17800A746B/DATAFILE/tbs1.7233.1018133901
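To map every datafile to its owning PDB in one pass (a convenience query built on the standard v$ views, not taken from the original steps), you can join v$pdbs and v$datafile on CON_ID from the root container:

```sql
-- Run from CDB$ROOT: list each PDB with its datafiles
select p.name as pdb_name, f.name as file_name
  from v$pdbs p
  join v$datafile f
    on f.con_id = p.con_id
 order by p.name, f.name;
```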




Hope it helps, subscribe to this blog to stay updated on latest Oracle Technologies and new articles.

Twitter : @rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/


Sunday, August 18, 2019

ORA-27154: post/wait create failed




Today, we were doing Exadata X7-2 QFSDP APR 2019 patching, but during the post-patch steps some of the cluster database instances failed to start with the following error. Overall, 62 databases run on this Exadata machine.

Note : Recently we added 2 more databases to this Exadata box.


SQL> startup
ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpcreates
SQL> 


From one of the database alert log file:

Starting ORACLE instance (normal)
************************ Large Pages Information *******************
Per process system memlock (soft) limit = UNLIMITED

Total Shared Global Region in Large Pages = 0 KB (0%)

Large Pages used by this instance: 0 (0 KB)
Large Pages unused system wide = 0 (0 KB)
Large Pages configured system wide = 13833 (27 GB)
Large Page size = 2048 KB

RECOMMENDATION:
  Total System Global Area size is 2050 MB. For optimal performance,
  prior to the next instance restart:
  1. Increase the number of unused large pages by
 at least 1025 (page size 2048 KB, total size 2050 MB) system wide to
  get 100% of the System Global Area allocated with large pages
********************************************************************
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_184111.trc:
ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpcreates


From the trace file:

*** 2019-08-17 10:10:25.566
Switching to regular size pages for segment size 100663296
Switching to regular size pages for segment size 2046820352
Switching to regular size pages for segment size 2097152

*** 2019-08-17 10:10:25.595
dbkedDefDump(): Starting a non-incident diagnostic dump (flags=0x0, level=0, mask=0x0)
----- Error Stack Dump -----
ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpcreates


Current semaphore configuration on the system:

[root@exa01dbadm01 ~]# ipcs -ls

------ Semaphore Limits --------
max number of arrays = 256
max semaphores per array = 1024
max semaphores system wide = 60000
max ops per semop call = 1024
semaphore max value = 32767


This means 60000 semaphores (SEMMNS) are available system wide, and one semaphore identifier (array) can hold a maximum of 1024 semaphores (SEMMSL) with the configuration above.

However, the ipcs output below shows that Oracle allocates at most 514 semaphores per identifier (one array holds 704).



------ Semaphore Arrays --------
key        semid      owner      perms      nsems
0x27219008 5242886    oracle     640        704
0x1d63fabc 5373959    oracle     640        514
0x1d63fabd 5406728    oracle     640        514
0x26f035f8 5603337    oracle     640        514
0x1d63fabe 5472266    oracle     640        514
0x26f035f9 5701643    oracle     640        514
0xeb773830 5898252    oracle     640        514
0x26f035fa 5832717    oracle     640        514
0x14422b02 11862119   oracle     640        514
0xa83d39fc 11993192   oracle     640        514
0xa83d39fd 12025961   oracle     640        514
0xa83d39fe 12058730   oracle     640        514
0xea1a4768 12189803   oracle     640        514
0xea1a4769 12222572   oracle     640        514
0xea1a476a 12255341   oracle     640        514
0xdcc4a7a4 12386414   oracle     640        514
0xdcc4a7a5 12419183   oracle     640        514
0xdcc4a7a6 12451952   oracle     640        514
0x75550980 12583025   oracle     600        514
0x75550981 12615794   oracle     600        514
0x75550982 12648563   oracle     600        514



So, at roughly 514 semaphores per array and a maximum of 256 arrays (SEMMNI), Oracle could request up to 514 x 256 = 131584 semaphores on this system, well above the configured system-wide limit.
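To see how close the system already is to the limit, you can total the nsems column of the ipcs output; a small shell sketch using the 21 arrays listed above (the awk field index assumes the exact column layout shown):

```shell
# Sketch: sum the nsems (5th) column of the "ipcs -s" arrays shown above
# to count semaphores already allocated, versus the SEMMNS limit of 60000.
total=$(awk '{ sum += $5 } END { print sum }' <<'EOF'
0x27219008 5242886    oracle     640        704
0x1d63fabc 5373959    oracle     640        514
0x1d63fabd 5406728    oracle     640        514
0x26f035f8 5603337    oracle     640        514
0x1d63fabe 5472266    oracle     640        514
0x26f035f9 5701643    oracle     640        514
0xeb773830 5898252    oracle     640        514
0x26f035fa 5832717    oracle     640        514
0x14422b02 11862119   oracle     640        514
0xa83d39fc 11993192   oracle     640        514
0xa83d39fd 12025961   oracle     640        514
0xa83d39fe 12058730   oracle     640        514
0xea1a4768 12189803   oracle     640        514
0xea1a4769 12222572   oracle     640        514
0xea1a476a 12255341   oracle     640        514
0xdcc4a7a4 12386414   oracle     640        514
0xdcc4a7a5 12419183   oracle     640        514
0xdcc4a7a6 12451952   oracle     640        514
0x75550980 12583025   oracle     600        514
0x75550981 12615794   oracle     600        514
0x75550982 12648563   oracle     600        514
EOF
)
echo "$total"   # prints 10984
```

With dozens of instances each taking several hundred semaphores, it is easy to exhaust a 60000-semaphore limit as more databases are added.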



[root@exa01dbadm01 ~]# /sbin/sysctl -a | grep sem
kernel.sem = 1024       60000   1024    256


Per the above, the system-wide maximum number of semaphores (SEMMNS, the second field of kernel.sem) is only 60000.

SEMMNS should be increased to accommodate more semaphores.




I increased the maximum number of semaphores available on the system from 60000 to 70000.

From :

[root@exa01dbadm01 ~]# /sbin/sysctl -a | grep sem
kernel.sem = 1024       60000   1024    256


To this:

[root@exa01dbadm01 ~]# /sbin/sysctl -a | grep sem
kernel.sem = 1024       70000   1024    256

And finally, update the kernel.sem line in /etc/sysctl.conf with the new values and run the below to make them effective without a server reboot.

/sbin/sysctl -p




Finally, I started the remaining DB instances, and all came up normally.

[oracle@exa01dbadm01 ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 12.1.0.2.0 Production on Sat Aug 17 10:32:13 2019

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 2147483648 bytes
Fixed Size                  2926472 bytes
Variable Size            1318045816 bytes
Database Buffers          822083584 bytes
Redo Buffers                4427776 bytes
Database mounted.
Database opened.
SQL>




Hope it helps, subscribe to this blog to stay updated on latest Oracle Technologies and new articles.

Twitter : @rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/

Tuesday, August 13, 2019

KSFV I/O slave Ic01 dp=0x132e70f9f3 exiting



This is just an informational message written to the alert log when either of the following parameters is set in the database parameter file to a non-default value.


Alert message:

KSFV I/O slave Ic01 dp=0x132e70f9f3 exiting

The alert message just indicates the startup and shutdown of backup i/o slave processes.

Default values :

SQL> show parameter slave

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
backup_tape_io_slaves                boolean     FALSE
dbwr_io_slaves                       integer     0
SQL>


Non-default values:

BACKUP_TAPE_IO_SLAVES = TRUE

or

DBWR_IO_SLAVES > 0
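If you do not need I/O slaves, you can revert both parameters to their defaults; a sketch (since dbwr_io_slaves is a static parameter, the change takes effect at the next instance restart):

```sql
-- Revert both parameters to their default values;
-- restart the instance for dbwr_io_slaves to take effect.
alter system set backup_tape_io_slaves = FALSE scope=spfile;
alter system set dbwr_io_slaves = 0 scope=spfile;
```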



Hope it helps, subscribe to this blog to stay updated on latest Oracle Technologies and new articles.

Twitter : @rajsoft8899


Tuesday, August 6, 2019

OSD kill succeeded for process 0x919f4e918




OSD kill succeeded for process 0x919f4e918:

Sometimes you may notice that your alert log has the following entries about a hung process, and your monitoring system fires an alarm/alert for it.

Alert Log:

Tue Aug 06 04:10:29 2019
Process 0x0x919f4e918 appears to be hung in Auto SQL Tuning task
Current time = 1565064629, process death time = 1565064600
Attempting to kill process 0x0x919f4e918 with OS pid = 262331
OSD kill succeeded for process 0x919f4e918
Tue Aug 06 04:10:42 2019


In the above alert log entry, you can see there is a process death time limit; once the current time reaches the defined process death time, the long-running/hung process is killed at the OS level (OSD).


alert.xml



</msg>
<msg time='2019-08-06T04:10:29.069+00:00' org_id='oracle' comp_id='rdbms'
 msg_id='kesaiKillProcess:2489:3170456095' client_id='' type='ERROR'
 group='SQL_Tune' level='2' host_id='host01.example.com'
 host_addr='127.0.0.1' module='' pid='249175'>
 <txt>Process 0x0x919f4e918 appears to be hung in Auto SQL Tuning task
 </txt>
</msg>
<msg time='2019-08-06T04:10:29.069+00:00' org_id='oracle' comp_id='rdbms'
 msg_id='kesaiKillProcess:2493:1670650406' client_id='' type='ERROR'
 group='SQL_Tune' level='2' host_id='host01.example.com'
 host_addr='127.0.0.1' module='' pid='249175'>
 <txt>Current time = 1565064629, process death time = 1565064600
 </txt>
</msg>
<msg time='2019-08-06T04:10:29.069+00:00' org_id='oracle' comp_id='rdbms'
 msg_id='kesaiKillProcess:2504:2480300341' client_id='' type='ERROR'
 group='SQL_Tune' level='2' host_id='host01.example.com'
 host_addr='127.0.0.1' module='' pid='249175'>
 <txt>Attempting to kill process 0x0x919f4e918 with OS pid = 262331
 </txt>
</msg>
<msg time='2019-08-06T04:10:29.069+00:00' org_id='oracle' comp_id='rdbms'
 msg_id='kesaiKillProcess:2518:234340873' client_id='' type='ERROR'
 group='SQL_Tune' level='2' host_id='host01.example.com'
 host_addr='127.0.0.1' module='' pid='249175'>
 <txt>OSD kill succeeded for process 0x919f4e918
 </txt>
</msg>


Cause

This alert entry is expected while "SYS_AUTO_SQL_TUNING_TASK" is running. The task runs within a defined maintenance window, so when the OS process running the job exceeds the window's time limit, it is killed to prevent overloading the system.



Solution:

If you want to avoid the alert entry and let this daily maintenance job (the SQL tuning job) complete, consider increasing the time limit for the "SYS_AUTO_SQL_TUNING_TASK" job. Its default value is 1200 seconds.


SQL> col PARAMETER_VALUE for a30
SQL> SELECT parameter_name, parameter_value
FROM dba_advisor_parameters
WHERE task_name = 'SYS_AUTO_SQL_TUNING_TASK'
AND parameter_name = 'LOCAL_TIME_LIMIT';  2    3    4

PARAMETER_NAME                 PARAMETER_VALUE
------------------------------ ------------------------------
LOCAL_TIME_LIMIT               1200

SQL>


If you want to increase the job's time limit to 1 hour (3600 seconds), run the following:

BEGIN
DBMS_SQLTUNE.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK', 'LOCAL_TIME_LIMIT', 3600);
END;
/



SQL> col PARAMETER_VALUE for a30
SQL> SELECT parameter_name, parameter_value
FROM dba_advisor_parameters
WHERE task_name = 'SYS_AUTO_SQL_TUNING_TASK'
AND parameter_name = 'LOCAL_TIME_LIMIT';  2    3    4

PARAMETER_NAME                 PARAMETER_VALUE
------------------------------ ------------------------------
LOCAL_TIME_LIMIT               3600

SQL>




Hope it would help someone !!

Stay Tuned/subscribe to this blog to get updates when new articles posted.

Twitter : @rajsoft8899


Thursday, August 1, 2019

Find your own session SID in Oracle






Use the SQL command below to find your own connected session's SID in an Oracle database.

-bash-4.1$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Thu Aug 1 02:52:37 2019

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, Oracle Label Security,
OLAP, Data Mining and Real Application Testing options

SQL>    select sys_context('USERENV','SID') as MY_SID from dual;

MY_SID
------
3345

SQL>

As shown above, your connected session's SID is 3345.
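A related variant (not from the original post, but using the same standard views) fetches the SID together with SERIAL#, which is the pair you would pass to ALTER SYSTEM KILL SESSION, for example:

```sql
-- Current session's SID and SERIAL# from v$session
select sid, serial#
  from v$session
 where sid = sys_context('USERENV', 'SID');
```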

Wednesday, June 12, 2019

RMAN fails with kgefec: fatal error 0 | kgepop: no error frame to pop to for error 603



I noticed one RMAN archivelog backup job had been failing with the following error.


kgefec: fatal error 0

kgepop: no error frame to pop to for error 603


I was using the backup command below.

RMAN> backup archivelog all not backed up delete input;


After some research, it was found that this issue occurs either when the DB server is short on resources or when there is an excessive number of archived logs to back up with the DELETE INPUT clause. In my case, the OS-level limits were set to UNLIMITED, which was fine, but there were more than 5100 archived logs to process.


So I deleted the archived logs that had already been backed up at least once to tape.




RMAN>delete force archivelog all backed up 1 times to device type 'SBT_TAPE';

After the clean-up, I initiated the archivelog backup again, and it completed fine this time.




RMAN> backup archivelog all not backed up delete input;

Starting backup at 12-JUN-19
current log archived
released channel: ORA_DISK_1
released channel: ORA_DISK_2
released channel: ORA_DISK_3
released channel: ORA_DISK_4
released channel: ORA_DISK_5
released channel: ORA_DISK_6
released channel: ORA_DISK_7
released channel: ORA_DISK_8
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=937 instance=testdb1 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Domain Boost API
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=23 instance=testdb1 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Data Domain Boost API
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=323 instance=testdb1 device type=SBT_TAPE
channel ORA_SBT_TAPE_3: Data Domain Boost API
allocated channel: ORA_SBT_TAPE_4
channel ORA_SBT_TAPE_4: SID=632 instance=testdb1 device type=SBT_TAPE
channel ORA_SBT_TAPE_4: Data Domain Boost API
allocated channel: ORA_SBT_TAPE_5
channel ORA_SBT_TAPE_5: SID=932 instance=testdb1 device type=SBT_TAPE
channel ORA_SBT_TAPE_5: Data Domain Boost API
allocated channel: ORA_SBT_TAPE_6
channel ORA_SBT_TAPE_6: SID=1228 instance=testdb1 device type=SBT_TAPE
channel ORA_SBT_TAPE_6: Data Domain Boost API
allocated channel: ORA_SBT_TAPE_7
channel ORA_SBT_TAPE_7: SID=1521 instance=testdb1 device type=SBT_TAPE
channel ORA_SBT_TAPE_7: Data Domain Boost API
allocated channel: ORA_SBT_TAPE_8
channel ORA_SBT_TAPE_8: SID=24 instance=testdb1 device type=SBT_TAPE
channel ORA_SBT_TAPE_8: Data Domain Boost API
channel ORA_SBT_TAPE_1: starting archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=2 sequence=49069 RECID=15597 STAMP=1010709576
input archived log thread=1 sequence=49981 RECID=15596 STAMP=1010709576
channel ORA_SBT_TAPE_1: starting piece 1 at 12-JUN-19
channel ORA_SBT_TAPE_2: starting archived log backup set
channel ORA_SBT_TAPE_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=49982 RECID=15599 STAMP=1010709742
input archived log thread=2 sequence=49070 RECID=15598 STAMP=1010709741
channel ORA_SBT_TAPE_2: starting piece 1 at 12-JUN-19
channel ORA_SBT_TAPE_3: starting archived log backup set
channel ORA_SBT_TAPE_3: specifying archived log(s) in backup set
input archived log thread=2 sequence=49071 RECID=15601 STAMP=1010709861
input archived log thread=1 sequence=49983 RECID=15600 STAMP=1010709859
channel ORA_SBT_TAPE_3: starting piece 1 at 12-JUN-19
channel ORA_SBT_TAPE_4: starting archived log backup set
channel ORA_SBT_TAPE_4: specifying archived log(s) in backup set
input archived log thread=1 sequence=49984 RECID=15603 STAMP=1010709876
input archived log thread=2 sequence=49072 RECID=15602 STAMP=1010709876
channel ORA_SBT_TAPE_4: starting piece 1 at 12-JUN-19
channel ORA_SBT_TAPE_5: starting archived log backup set
channel ORA_SBT_TAPE_5: specifying archived log(s) in backup set
input archived log thread=2 sequence=49073 RECID=15605 STAMP=1010711116
channel ORA_SBT_TAPE_5: starting piece 1 at 12-JUN-19
channel ORA_SBT_TAPE_6: starting archived log backup set
channel ORA_SBT_TAPE_6: specifying archived log(s) in backup set
input archived log thread=1 sequence=49985 RECID=15604 STAMP=1010711113
channel ORA_SBT_TAPE_6: starting piece 1 at 12-JUN-19

channel ORA_SBT_TAPE_1: finished piece 1 at 12-JUN-19

Finished backup at 12-JUN-19

Starting Control File and SPFILE Autobackup at 12-JUN-19
piece handle=c-2679281492-20190612-00 comment=API Version 2.0,MMS Version 4.5.0.0
Finished Control File and SPFILE Autobackup at 12-JUN-19

RMAN>
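If deleting the old backup records is not an option, another approach (a sketch, not from the original post; adjust the time windows to your archive volume) is to back up the archived logs in smaller, time-bounded chunks so each job handles fewer files:

```sql
RMAN> backup archivelog from time 'sysdate-2' until time 'sysdate-1'
      not backed up delete input;
RMAN> backup archivelog from time 'sysdate-1'
      not backed up delete input;
```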

Wednesday, January 31, 2018

ORA-00265: instance recovery required, cannot set ARCHIVELOG mode




ORA-00265: instance recovery required, cannot set ARCHIVELOG mode

--- Though this is a trivial thing that can be fixed in minutes, I thought to record it here for reference. I was simulating a quick recovery scenario and had aborted my database instance, then wanted to change the database to ARCHIVELOG mode.


DB Details:
------------

C:\app\oracle\product\12.1.0\dbhome_3\BIN>sqlplus "/as sysdba"

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 31 00:17:52 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select name, open_mode, database_role, cdb from v$database;

NAME      OPEN_MODE            DATABASE_ROLE    CDB
--------- -------------------- ---------------- ---
PROD1     READ WRITE           PRIMARY          YES


Archiving is disabled:
------------------------

SQL> archive log list

Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     209
Current log sequence           211


Shutdown abort:
-----------------

SQL> shut abort
ORACLE instance shut down.


<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>

Wed Jan 31 00:18:22 2018
Shutting down instance (abort)
License high water mark = 8
Wed Jan 31 00:18:22 2018
USER (ospid: 11960): terminating the instance
Wed Jan 31 00:18:31 2018
Instance terminated by USER, pid = 11960
Wed Jan 31 00:18:31 2018
Instance shutdown complete

<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>>>



Now I started the database in MOUNT state to change it to ARCHIVELOG mode.


SQL> startup mount
ORACLE instance started.

Total System Global Area 1006632960 bytes
Fixed Size                  3053104 bytes
Variable Size             905972176 bytes
Database Buffers           92274688 bytes
Redo Buffers                5332992 bytes
Database mounted.


SQL> alter database archivelog;
alter database archivelog
*
ERROR at line 1:
ORA-00265: instance recovery required, cannot set ARCHIVELOG mode


Above, we hit the error saying the database instance requires recovery.



Reason : Because we did a SHUTDOWN ABORT of the database instance (the same happens after STARTUP MOUNT FORCE), the instance was effectively crashed, and crash (instance) recovery is required to bring the database changes to a consistent state.




Resolution : To resolve it, open the database to let instance recovery run, then shut it down cleanly (NORMAL or IMMEDIATE).



SQL> alter database open;
Database altered.


When you open the database, crash recovery runs, as you can see in the instance's alert log:

<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Wed Jan 31 00:19:16 2018
alter database archivelog
ORA-265 signalled during: alter database archivelog...
alter database open
Wed Jan 31 00:21:49 2018
Ping without log force is disabled
.
Wed Jan 31 00:21:49 2018
Beginning crash recovery of 1 threads
 parallel recovery started with 3 processes
Wed Jan 31 00:21:49 2018
Started redo scan
Wed Jan 31 00:21:49 2018
Completed redo scan
 read 927 KB redo, 265 data blocks need recovery
Wed Jan 31 00:21:49 2018
Started redo application at
 Thread 1: logseq 211, block 66010
Wed Jan 31 00:21:50 2018
Recovery of Online Redo Log: Thread 1 Group 1 Seq 211 Reading mem 0
  Mem# 0: C:\APP\ORACLE\ORADATA\PROD1\REDO01.LOG
Wed Jan 31 00:21:50 2018
Completed redo application of 0.77MB
Wed Jan 31 00:21:50 2018
Completed crash recovery at
 Thread 1: logseq 211, block 67865, scn 6044843
 265 data blocks read, 265 data blocks written, 927 redo k-bytes read
Starting background process TMON
Wed Jan 31 00:21:50 2018
TMON started with pid=28, OS id=2456
Wed Jan 31 00:21:50 2018
Thread 1 advanced to log sequence 212 (thread open)
Thread 1 opened at log sequence 212
  Current log# 2 seq# 212 mem# 0: C:\APP\ORACLE\ORADATA\PROD1\REDO02.LOG
Successful open of redo thread 1
Wed Jan 31 00:21:50 2018
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Wed Jan 31 00:21:50 2018
SMON: enabling cache recovery
Wed Jan 31 00:21:50 2018
[564] Successfully onlined Undo Tablespace 2.
Undo initialization finished serial:0 start:137835812 end:137835984 diff:172 ms (0.2 seconds)
Verifying minimum file header compatibility (11g) for tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
Wed Jan 31 00:21:50 2018
SMON: enabling tx recovery
Starting background process SMCO


You may want to save the current state of the PDBs so that they start automatically in read-write mode at the next startup of the CDB.

Now, do a clean shutdown of your database instance:

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.



Now, start the database instance in MOUNT state:

SQL> startup mount
ORACLE instance started.

Total System Global Area  845348864 bytes
Fixed Size                  1339796 bytes
Variable Size             499125868 bytes
Database Buffers          339738624 bytes
Redo Buffers                5144576 bytes
Database mounted.



Change the database mode to ARCHIVELOG:

SQL> alter database archivelog;
Database altered.
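Finally, open the database and confirm the mode change took effect:

```sql
SQL> alter database open;

Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
```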


Hope it would help someone...!!