Friday, December 27, 2019

Moving datafile online in Oracle 12c (12.1.0.2)


Starting with Oracle Database 12c Release 1 (12.1.0.2), you can move a data file from one location or filesystem to another, or into an ASM disk group, while the database is open and in use by application users.
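For reference, the general form of the command is sketched below. The file name is illustrative; the optional REUSE and KEEP clauses control whether an existing file at the destination is overwritten and whether the original copy is retained after the move.

```sql
-- Illustrative sketch of the 12c online move syntax (file names are examples):
ALTER DATABASE MOVE DATAFILE '/u01/oradata/users01.dbf' TO '+DATA';

-- Optional clauses:
--   REUSE - overwrite an existing file at the destination
--   KEEP  - retain the original file after the move instead of deleting it
ALTER DATABASE MOVE DATAFILE '/u01/oradata/users01.dbf' TO '+DATA' KEEP;
```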


Consider an example on ASM storage where a datafile was mistakenly added to a tablespace without the plus (+) sign before the disk group name, so the new file was created inside $ORACLE_HOME on the local filesystem.


In this situation, the DBA should quickly shrink the newly added file to the smallest possible size and turn off datafile autoextend, so that little data can be written to it and the file stays small, which gives us the luxury of moving it to the proper location in minimal time.
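Using the misplaced file from this example, that damage-control step could look like the following sketch (the resize target is illustrative; a resize cannot go below the space already used in the file):

```sql
-- Shrink the misplaced datafile and stop it from growing further
ALTER DATABASE DATAFILE
  '/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/DATA' RESIZE 100M;
ALTER DATABASE DATAFILE
  '/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/DATA' AUTOEXTEND OFF;
```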

The highlighted data file below was added by mistake and placed inside ORACLE_HOME.

FILE_NAME                                                                                GB AUT     MAX_GB
-------------------------------------------------------------------------------- ---------- --- ----------
+DATA/TESTDB/DATAFILE/users.4513.1027281799                                10 YES 31.9999847
+DATA/TESTDB/DATAFILE/users.4813.1027281809                                 4 YES 31.9999847
/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/DATA                                      2 YES         31
+DATA/TESTDB/DATAFILE/users.8520.1027282145                                 2 YES         31




Let's move the datafile created on the /u01 mount to our +DATA ASM disk group with the following command.


SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/DATA' TO '+DATA';

Database altered.



Now, let's check the data file list to validate that the file was moved.


SQL> select file_name, bytes/1024/1024/1024 as GB, autoextensible, maxbytes/1024/1024/1024 as MAX_GB from dba_data_files where tablespace_name='USERS' order by GB desc;

FILE_NAME                                                                                GB AUT     MAX_GB
-------------------------------------------------------------------------------- ---------- --- ----------
+DATA/TESTDB/DATAFILE/users.4513.1027281799                                10 YES 31.9999847
+DATA/TESTDB/DATAFILE/users.4813.1027281809                                 4 YES 31.9999847
+DATA/TESTDB/DATAFILE/users.8520.1027282145                                 2 YES         31
+DATA/TESTDB/DATAFILE/users.4194.1027284059                        .001953125 NO           0


The data file was moved successfully, as you can see in the output above.

Now that the datafile has been moved to the appropriate location, you can resize it and turn autoextend back on for automatic growth.


SQL> alter database datafile '+DATA/TESTDB/DATAFILE/users.4194.1027284059' resize 1g;

Database altered.

SQL> alter database datafile '+DATA/TESTDB/DATAFILE/users.4194.1027284059' autoextend on maxsize 32767m;

Database altered.

SQL> 


Let's validate that the data file we moved has no corrupt blocks.


RMAN> validate datafile 68;

Starting validate at 17-DEC-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=112 instance=TESTDB1 device type=DISK
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00068 name=+DATA/TESTDB/DATAFILE/users.4194.1027284059
channel ORA_DISK_1: validation complete, elapsed time: 00:00:03
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
68   OK     0              130945       131072          15464094246834
  File Name: +DATA/TESTDB/DATAFILE/users.4194.1027284059
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              0
  Index      0              0
  Other      0              127

Finished validate at 17-DEC-19

RMAN>

Datafile validation is successful.




Hope it helps, and thanks for reading. Please subscribe to this blog to stay updated with the latest news and new articles on Oracle Cloud Infrastructure and Oracle Autonomous Database Cloud Services.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/


Thursday, October 17, 2019

Oracle Cloud Infrastructure(OCI) Service Health Dashboard



While you configure many Oracle Cloud Infrastructure resources across different cloud tenancies and regions, you may want to keep track of the health-check status of all resources and services in your region, to make sure everything is healthy in the cloud region where your important cloud services are hosted.

Oracle provides a very convenient way to monitor the health of all OCI resources and services across all regions in one place.


To access the OCI health status, you can go directly to "https://ocistatus.oraclecloud.com/", where the health status of all OCI services is available for you to track. Alternatively, if you are already logged in to your tenancy, click "View Health Dashboard" at the top right corner and it will take you to the OCI Health Dashboard page.






From this dashboard you can get current status information about the services in your region.

Customers can self register by clicking on the 'Subscribe to Updates' link which offers email, SMS, Webhook, RSS, and other channels.

Notifications are delivered whenever OCI creates or resolves an incident.



There are four colored indicators on the "OCI Health Dashboard" that show whether a particular cloud service in a specific region is healthy or has issues. If all cloud services are green, they have "Normal Performance" and everything is healthy and optimal.

Similarly, there are "Degraded Performance", "Partial Outage", and "Major Outage" indicators to identify resource problems.




Currently, you can see that everything is in the "Normal Performance" range, as shown below.








You can subscribe to incident notifications to your email address for whenever an incident is created in any region, or you can opt in for only the region where your cloud services are running.






Hope it helps, and thanks for reading. Please subscribe to this blog to stay updated with the latest news and new articles on Oracle Cloud Infrastructure and Oracle Autonomous Database Cloud Services.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/
Facebook : https://www.facebook.com/rkushawaha


ORA-46269: Conflicting operation on audit files



The DBMS_AUDIT_MGMT package provides subprograms to manage audit trail records. These subprograms enable audit administrators to manage the audit trail. In a mixed-mode environment, these audit trails comprise the database, operating system (OS), and XML audit trails. In a unified auditing environment, this comprises the unified audit trail.
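For context, a purge job such as the ones discussed below is typically created with DBMS_AUDIT_MGMT.CREATE_PURGE_JOB. The sketch below shows the usual pattern for an OS audit trail job; the parameter values are illustrative, not necessarily how the jobs on this system were created.

```sql
BEGIN
  DBMS_AUDIT_MGMT.CREATE_PURGE_JOB(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    audit_trail_purge_interval => 1,     -- run every hour
    audit_trail_purge_name     => 'STANDARD_OS_AUDIT_TRAIL_PURGE',
    use_last_arch_timestamp    => TRUE); -- purge only records already archived
END;
/
```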



One of the daily maintenance system jobs fails with the ORA-46269 error, as below.


-bash-4.1$ oerr ora 46269
46269, 00000, "Conflicting operation on audit files"
// *Cause: Audit files were being operated by more than one DBMS_AUDIT_MGMT
//         API from different sessions. The simultaneous execution of the
//         CLEAN_AUDIT_TRAIL API with itself can cause the conflict.
// *Action: Wait until the execution of one of the API's complete and try
//          executing the API again.
-bash-4.1$


From the database instance alert log:

Tue Oct 15 19:38:53 2019
Errors in file /u01/app/oracle/diag/rdbms/orcldb/orcldb1/trace/orcldb1_j002_247077.trc:
ORA-12012: error on auto execute of job "SYS"."STANDARD_OS_AUDIT_TRAIL_PURGE"
ORA-46269: Conflicting operation on audit files
ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 61
ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 2447
ORA-06512: at line 1
Tue Oct 15 19:46:18 2019


From the incident trace file:

Trace file /u01/app/oracle/diag/rdbms/orcldb/orcldb1/trace/orcldb1_j002_247077.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, Oracle Label Security,
OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/11.2.0.4/dbhome_2
System name:    Linux
Node name:      dbadm02.example.com
Release:        4.1.12-124.24.3.el6uek.x86_64
Version:        #2 SMP Mon Jan 14 15:08:09 PST 2019
Machine:        x86_64
Instance name: orcldb1
Redo thread mounted by this instance: 1
Oracle process number: 115
Unix process pid: 247077, image: oracle@dbadm02.example.com (J002)

*** 2019-10-15 19:38:53.665
*** SESSION ID:(1195.35311) 2019-10-15 19:38:53.665
*** CLIENT ID:() 2019-10-15 19:38:53.665
*** SERVICE NAME:(SYS$USERS) 2019-10-15 19:38:53.665
*** MODULE NAME:(DBMS_SCHEDULER) 2019-10-15 19:38:53.665
*** ACTION NAME:(STANDARD_OS_AUDIT_TRAIL_PURGE) 2019-10-15 19:38:53.665



Looking at the current audit management cleanup job list, I could see two audit clean-up jobs scheduled at the same time. If the first job, which starts a few seconds before the second, runs long, the second job conflicts with the first one's processing and fails with the error above.


SQL> select * from DBA_AUDIT_MGMT_CLEANUP_JOBS;

JOB_NAME                                 JOB_STAT AUDIT_TRAIL                  JOB_FREQUENCY
---------------------------------------- -------- ---------------------------- -----------------------------------
STANDARD_OS_AUDIT_TRAIL_PURGE            ENABLED  OS AUDIT TRAIL               FREQ=HOURLY;INTERVAL=1
CLEAN_OS_DB_AUDIT_RECORD                 ENABLED  OS AUDIT TRAIL               FREQ=HOURLY;INTERVAL=1

SQL>
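As an alternative to dropping one of the jobs (the fix chosen later in this post), the two hourly schedules could simply be staggered so they never run at the same time. A hedged sketch using DBMS_SCHEDULER (job name taken from the listing above):

```sql
-- Offset the second job by 30 minutes so the hourly runs no longer overlap
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => '"SYS"."CLEAN_OS_DB_AUDIT_RECORD"',
    attribute => 'repeat_interval',
    value     => 'FREQ=HOURLY;INTERVAL=1;BYMINUTE=30');
END;
/
```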



SQL> select OWNER,JOB_NAME,to_char(last_start_date,'DD-MM-YY HH24:MI:SS') Last_date_time, to_char(next_run_date,'DD-MM-YY HH24:MI:SS') Next_Date_Time, JOB_ACTION from DBA_SCHEDULER_JOBS;

OWNER                          JOB_NAME                       LAST_DATE_TIME    NEXT_DATE_TIME    JOB_ACTION
------------------------------ ------------------------------ ----------------- ----------------- -------------------------------------------------------
SYS                            CLEAN_OS_DB_AUDIT_RECORD       16-10-19 03:38:13 16-10-19 04:38:13 BEGIN DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(4, TRUE);  END;
SYS                            STANDARD_OS_AUDIT_TRAIL_PURGE  16-10-19 03:38:53 16-10-19 04:38:53 BEGIN DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(4, TRUE);  END;

SQL>


Scheduler details:


OWNER           JOB_NAME                            START_DATE                     ENABL STATE           FAILURE_COUNT LAST_START_DATE            NEXT_RUN_DATE               MAX_RUN_DURATIO SOURCE
--------------- ----------------------------------- ------------------------------ ----- --------------- ------------- -------------------------- ------------------------------ --------------- ---------------
SYS             STANDARD_OS_AUDIT_TRAIL_PURGE       16-jan-2016 02:38:53           TRUE  SCHEDULED                9731 16-oct-2019 01:38:53       16-oct-2019 02:38:53
SYS             CLEAN_OS_DB_AUDIT_RECORD            19-jan-2017 15:38:13           TRUE  SCHEDULED                 619 16-oct-2019 01:38:13       16-oct-2019 02:38:13
SYS             PURGE_LOG                           10-oct-2014 03:00:00           TRUE  SCHEDULED                   0 15-oct-2019 03:00:00       16-oct-2019 03:00:00




SQL> select * from DBA_AUDIT_MGMT_CONFIG_PARAMS;

PARAMETER_NAME                 PARAMETER_VALUE                     AUDIT_TRAIL
------------------------------ ----------------------------------- ----------------------------
DB AUDIT TABLESPACE            SYSAUX                              STANDARD AUDIT TRAIL
DB AUDIT TABLESPACE            SYSAUX                              FGA AUDIT TRAIL
AUDIT FILE MAX SIZE            10000                               OS AUDIT TRAIL
AUDIT FILE MAX SIZE            10000                               XML AUDIT TRAIL
AUDIT FILE MAX AGE             5                                   OS AUDIT TRAIL
AUDIT FILE MAX AGE             5                                   XML AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE      10000                               STANDARD AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE      10000                               FGA AUDIT TRAIL
OS FILE CLEAN BATCH SIZE       1000                                OS AUDIT TRAIL
OS FILE CLEAN BATCH SIZE       1000                                XML AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL      1                                   OS AUDIT TRAIL

11 rows selected.

SQL>


I don't want duplicate job schedules in place for the same purpose, so I disabled and dropped one job schedule to fix the conflict and avoid noise tickets.

SQL> begin
dbms_scheduler.disable( '"SYS"."CLEAN_OS_DB_AUDIT_RECORD"' );
END;
/

PL/SQL procedure successfully completed.

SQL>



SQL> EXEC DBMS_AUDIT_MGMT.DROP_PURGE_JOB('CLEAN_OS_DB_AUDIT_RECORD');

PL/SQL procedure successfully completed.

SQL>


Now only one job is left on the schedule, and it should work fine without any conflicts in the future.


SQL> select * from DBA_AUDIT_MGMT_CLEANUP_JOBS;

JOB_NAME                                                                                             JOB_STAT AUDIT_TRAIL                  JOB_FREQUENCY
---------------------------------------------------------------------------------------------------- -------- ---------------------------- ----------------------------------------------------------------------------------------------------
STANDARD_OS_AUDIT_TRAIL_PURGE                                                                        ENABLED  OS AUDIT TRAIL               FREQ=HOURLY;INTERVAL=1



OWNER                          JOB_NAME                       LAST_DATE_TIME    NEXT_DATE_TIME    JOB_ACTION
------------------------------ ------------------------------ ----------------- ----------------- -------------------------------------------------------
SYS                            STANDARD_OS_AUDIT_TRAIL_PURGE  16-10-19 04:38:53 16-10-19 05:38:53 BEGIN DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(4, TRUE);  END;



I tested a manual job run once, and it worked fine.


SQL> BEGIN DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(4, TRUE);  END;
  2  /

PL/SQL procedure successfully completed.

SQL>




Hope it helps, and thanks for reading. Please subscribe to this blog to stay updated with the latest news and new articles on Oracle Cloud Infrastructure and Oracle Autonomous Database Cloud Services.



Tuesday, October 15, 2019

Always Free ATP instance stops automatically



SMART Oracle Cloud Feature !!
















To save unnecessary cloud infrastructure cost and utilization, Oracle Autonomous Database instances provisioned on the Always Free tier subscription stop automatically after 7 days with no recent activity on the ATP cloud instance.





Hope it helps, and thanks for reading. Please subscribe to this blog to stay updated with the latest news and new articles on Oracle Cloud Infrastructure and Oracle Autonomous Database Cloud Services.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/
Facebook : https://www.facebook.com/rkushawaha

YUM install of Oracle 18c XE database in Oracle Cloud Infrastructure







In this post, I will show you how to deploy an Oracle 18c XE database on Oracle Cloud Infrastructure Gen 2 with just one command.

Log in to the OCI compute node where you want to deploy Oracle 18c XE. Note that the compute VM should be created with Oracle Linux 7; at the time of writing this post, the Oracle Linux 8 OCI-included image does not yet ship the 18c XE package.













Since we are on OEL 7.7 and the OCI image already includes the Oracle 18c XE database rpm, a pre-check is optional; we can install it directly using the one-line command below, and yum will automatically resolve the dependencies and do the job for you.




Let's initiate the installation now as below.


[root@oci02 ~]# yum install oracle-database-xe-18c.x86_64  
 Loaded plugins: langpacks, ulninfo  
 Resolving Dependencies  
 --> Running transaction check  
 ---> Package oracle-database-xe-18c.x86_64 0:1.0-1 will be installed  
 --> Processing Dependency: oracle-database-preinstall-18c for package: oracle-database-xe-18c-1.0-1.x86_64  
 --> Running transaction check  
 ---> Package oracle-database-preinstall-18c.x86_64 0:1.0-1.el7 will be installed  
 --> Finished Dependency Resolution  
 Dependencies Resolved  
 =============================================================================================================================================================================================  
  Package                           Arch                Version                 Repository                   Size  
 =============================================================================================================================================================================================  
 Installing:  
  oracle-database-xe-18c                   x86_64               1.0-1                  ol7_oci_included               2.4 G  
 Installing for dependencies:  
  oracle-database-preinstall-18c               x86_64               1.0-1.el7                ol7_latest                   18 k  
 Transaction Summary  
 =============================================================================================================================================================================================  
 Install 1 Package (+1 Dependent package)  
 Total download size: 2.4 G  
 Installed size: 5.2 G  
 Is this ok [y/d/N]: y  
 Downloading packages:  
 (1/2): oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm                                                      | 18 kB 00:00:00  
 (2/2): oracle-database-xe-18c-1.0-1.x86_64.rpm                                                            | 2.4 GB 00:01:11  
 ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  
 Total                                                                             34 MB/s | 2.4 GB 00:01:11  
 Running transaction check  
 Running transaction test  
 Transaction test succeeded  
 Running transaction  
  Installing : oracle-database-preinstall-18c-1.0-1.el7.x86_64                                                              1/2  
  Installing : oracle-database-xe-18c-1.0-1.x86_64                                                                    2/2  
 [INFO] Executing post installation scripts...  
 [INFO] Oracle home installed successfully and ready to be configured.  
 To configure Oracle Database XE, optionally modify the parameters in '/etc/sysconfig/oracle-xe-18c.conf' and then execute '/etc/init.d/oracle-xe-18c configure' as root.  
  Verifying : oracle-database-xe-18c-1.0-1.x86_64                                                                    1/2  
  Verifying : oracle-database-preinstall-18c-1.0-1.el7.x86_64                                                              2/2  
 Installed:  
  oracle-database-xe-18c.x86_64 0:1.0-1  
 Dependency Installed:  
  oracle-database-preinstall-18c.x86_64 0:1.0-1.el7  
 Complete!  
 [root@oci02 ~]#  

As you can see above, yum installed the 18c XE home and points you to the configuration file, which you can modify per your needs; then run /etc/init.d/oracle-xe-18c configure as root to create the XE container database, as below.


The default sample configuration file that will be used to create the database:


 [root@oci02 ~]# cat /etc/sysconfig/oracle-xe-18c.conf  
 #This is a configuration file to setup the Oracle Database.  
 #It is used when running '/etc/init.d/oracle-xe-18c configure'.  
 # LISTENER PORT used Database listener, Leave empty for automatic port assignment  
 LISTENER_PORT=  
 # EM_EXPRESS_PORT Oracle EM Express URL port  
 EM_EXPRESS_PORT=5500  
 # Character set of the database  
 CHARSET=AL32UTF8  
 # Database file directory  
 # If not specified, database files are stored under Oracle base/oradata  
 DBFILE_DEST=  
 # SKIP Validations, memory, space  
 SKIP_VALIDATIONS=false  
 [root@oci02 ~]# 



Let's create the database as below. You need to provide a complex password that will be used in common for the SYS, SYSTEM, and PDBADMIN accounts; once the database instance is created successfully, you may later change them so that each is different.


 [root@oci02 ~]# /etc/init.d/oracle-xe-18c configure  
 Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:  
 Confirm the password:  
 Configuring Oracle Listener.  
 Listener configuration succeeded.  
 Configuring Oracle Database XE.  
 Enter SYS user password:  
 ********  
 Enter SYSTEM user password:  
 *******  
 Enter PDBADMIN User Password:  
 **********  
 Prepare for db operation  
 7% complete  
 Copying database files  
 29% complete  
 Creating and starting Oracle instance  
 30% complete  
 31% complete  
 34% complete  
 38% complete  
 41% complete  
 43% complete  
 Completing Database Creation  
 47% complete  
 50% complete  
 Creating Pluggable Databases  
 54% complete  
 71% complete  
 Executing Post Configuration Actions  
 93% complete  
 Running Custom Scripts  
 100% complete  
 Database creation complete. For details check the logfiles at:  
  /opt/oracle/cfgtoollogs/dbca/XE.  
 Database Information:  
 Global Database Name:XE  
 System Identifier(SID):XE  
 Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.  
 Connect to Oracle Database using one of the connect strings:  
    Pluggable database: oci02/XEPDB1  
    Multitenant container database: oci02  
 Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE  
 [root@oci02 ~]# 

Once the database instance is configured, you are given the "Database Information", the connect strings for the pluggable and container databases, the dbca log file location for review, and finally the Enterprise Manager URL to use for database monitoring.
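Using the connect strings printed above (the host name oci02 is from this particular VM; substitute your own), a quick connectivity check might look like the following sketch:

```shell
# EZConnect to the pluggable database XEPDB1 (prompts for the SYSTEM password)
sqlplus system@//oci02/XEPDB1

# EZConnect to the container database root
sqlplus system@//oci02
```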



Finally, you may spare a few minutes to validate the container database information.

 [root@oci02 ~]# ps -ef | grep pmon  
 oracle  23387   1 0 19:00 ?    00:00:00 xe_pmon_XE  
 root   23902 11789 0 19:02 pts/0  00:00:00 grep --color=auto pmon  
 [root@oci02 ~]#  
 [root@oci02 ~]# su - oracle  
 Last login: Mon Oct 14 18:51:07 GMT 2019 on pts/0  
 [oracle@oci02 ~]$  
 [oracle@oci02 ~]$ . oraenv  
 ORACLE_SID = [oracle] ? XE  
 The Oracle base has been set to /opt/oracle  
 [oracle@oci02 ~]$  
 [oracle@oci02 ~]$ sqlplus "/as sysdba"  
 SQL*Plus: Release 18.0.0.0.0 - Production on Mon Oct 14 19:03:09 2019  
 Version 18.4.0.0.0  
 Copyright (c) 1982, 2018, Oracle. All rights reserved.  
 Connected to:  
 Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production  
 Version 18.4.0.0.0  
 SQL>  





 SQL> select name, open_mode, database_role, cdb from v$database;  
 NAME   OPEN_MODE      DATABASE_ROLE  CDB  
 --------- -------------------- ---------------- ---  
 XE    READ WRITE      PRIMARY     YES  
 SQL> show pdbs  
   CON_ID CON_NAME            OPEN MODE RESTRICTED  
 ---------- ------------------------------ ---------- ----------  
      2 PDB$SEED            READ ONLY NO  
      3 XEPDB1             READ WRITE NO  
 SQL> 



Hope it helps, and thanks for reading. Please subscribe to this blog to stay updated with the latest news and new articles on Oracle Cloud Infrastructure and Oracle Autonomous Database Cloud Services.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/
Facebook : https://www.facebook.com/rkushawaha


Tuesday, October 8, 2019

Add or Remove or List TFA Users and Groups





Sometimes you may need to collect srdc diagnostic data requested by Oracle Support Engineer for an Oracle Internal Error or problem evaluation and analysis.

You don't want loads of unnecessary diagnostic traces and logs collected, only those relevant to a specific Oracle incident or problem. The srdc collection therefore needs to be run by the Oracle software binary owner, who may be different from the Oracle Clusterware owner and may not have permission to run TFA to collect diagnostic data.


You won't be able to run TFA if you are not a TFA user; you will get an error like the one depicted below, since the oracle user is not allowed to run TFA diag collection.

[root@dbadm01 bin]#su - oracle


$ ./tfactl diagcollect -srdc ORA-00700
User oracle does not have keys to run TFA. Please check with TFA Admin(root)

$ exit
[root@dbadm01 bin]#



The TFA administrator or root user can add the Oracle software binary owner to the TFA access list so that they can run the TFA srdc diagnostics collection when needed.

Let's see how to add the oracle user to the TFA access list and run the srdc data collection; finally, we will see how to remove the oracle user from the TFA access list again.



You can view the list of users currently allowed to run TFA as below. We can see that only the oragrid user is allowed to run it.


[root@dbadm01 bin]# ./tfactl access lsusers
.---------------------------------.
|  TFA Users in dbadm01  |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oragrid   | USER      | Allowed |
'-----------+-----------+---------'

.---------------------------------.
|          TFA Users in           |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oragrid   | USER      | Allowed |
| oragrid   | USER      | Allowed |
| oragrid   | USER      | Allowed |
'-----------+-----------+---------'

[root@dbadm01 bin]#


Let's now add the oracle user to the TFA access list so that it too can run TFA.


[root@dbadm01 bin]# ./tfactl access add -user oracle -local

Successfully added 'oracle' to TFA Access list.

.---------------------------------.
|  TFA Users in dbadm01  |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oracle   | USER      | Allowed |
| oragrid   | USER      | Allowed |
'-----------+-----------+---------'

[root@dbadm01 bin]#



Now that the oracle user has been added to the TFA access list, let's switch to the oracle user and try to collect the TFA srdc diagnostic data.

[root@dbadm01 bin]# su - oracle
[oracle@dbadm01 ~]$

[oracle@dbadm01 ~]$ /u01/app/12.2.0.1/grid/bin/tfactl diagcollect -srdc ORA-00700
Enter the time of the ORA-00700 [YYYY-MM-DD HH24:MI:SS,<RETURN>=ALL] :
Enter the Database Name [<RETURN>=ALL] : PROD1

1. Sep/27/2019 17:21:00 : [prod1] ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [3], [148630702], [276915], [], [], [], [], [], [], [], []
2. Sep/27/2019 15:05:30 : [prod1] ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [2], [148630736], [276915], [], [], [], [], [], [], [], []
3. Sep/27/2019 15:05:30 : [prod1] ORA-00700: soft internal error, arguments: [PITL6], [276915], [148630736], [], [], [], [], [], [], [], [], []
4. Sep/27/2019 07:16:44 : [prod1] ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [2], [156338726], [276915], [], [], [], [], [], [], [], []
5. Sep/24/2019 09:51:00 : [prod1] ORA-00700: soft internal error, arguments: [PITL6], [276915], [156338560], [], [], [], [], [], [], [], [], []
6. Sep/24/2019 09:50:56 : [prod1] ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [2], [156338560], [276915], [], [], [], [], [], [], [], []

Please choose the event : 1-6 [1] 6
Selected value is : 6 ( Sep/24/2019 09:50:56 )
Scripts to be run by this srdc: ipspack rdahcve1210 rdahcve1120 rdahcve1110
Components included in this srdc: OS CRS DATABASE NOCHMOS
Collecting data for local node(s)
Scanning files from Sep/24/2019 03:50:56 to Sep/24/2019 15:50:56

Collection Id : 20191008033306dbadm01

Detailed Logging at : /u01/app/grid/tfa/repository/srdc_ora700_collection_Tue_Oct_08_03_33_07_UTC_2019_node_local/diagcollect_20191008033306_dbadm01.log
2019/10/08 03:33:10 UTC : NOTE : Any file or directory name containing the string .com will be renamed to replace .com with dotcom
2019/10/08 03:33:10 UTC : Collection Name : tfa_srdc_ora700_Tue_Oct_08_03_33_07_UTC_2019.zip
2019/10/08 03:33:11 UTC : Scanning of files for Collection in progress...
2019/10/08 03:33:11 UTC : Collecting additional diagnostic information...
2019/10/08 03:33:41 UTC : Getting list of files satisfying time range [09/24/2019 03:50:56 UTC, 09/24/2019 15:50:56 UTC]
2019/10/08 03:34:45 UTC : Completed collection of additional diagnostic information...
2019/10/08 03:36:29 UTC : Collecting ADR incident files...
2019/10/08 03:36:30 UTC : Completed Local Collection
.---------------------------------------------.
|              Collection Summary             |
+------------------+-----------+-------+------+
| Host             | Status    | Size  | Time |
+------------------+-----------+-------+------+
| dbadm01 | Completed | 249MB | 200s |
'------------------+-----------+-------+------'

Logs are being collected to: /u01/app/grid/tfa/repository/srdc_ora700_collection_Tue_Oct_08_03_33_07_UTC_2019_node_local
/u01/app/grid/tfa/repository/srdc_ora700_collection_Tue_Oct_08_03_33_07_UTC_2019_node_local/dbadm01.tfa_srdc_ora700_Tue_Oct_08_03_33_07_UTC_2019.zip
[oracle@dbadm01 ~]$

The TFA srdc collection completed successfully!


If you want to remove the oracle user from the TFA access list, you can do so using the command below.


[root@dbadm01 bin]# ./tfactl access remove -user oracle -local

Successfully removed 'oracle' from TFA Access list.

.---------------------------------.
|  TFA Users in dbadm01  |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oragrid   | USER      | Allowed |
'-----------+-----------+---------'

[root@dbadm01 bin]#



Hope it helps, and thanks for reading. Please subscribe to this blog to stay updated with the latest news and new articles on Oracle Cloud Infrastructure and Oracle Autonomous Database Cloud Services.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/


Saturday, September 28, 2019

Oracle Cloud Infrastructure - Resources Monitoring | Email Notifications



In this article, we will see how to monitor all Oracle Cloud Infrastructure (OCI) resources in one place, and how to define alarm notifications for a given metric and metric dimension.

You won't have access to the autonomous database compute nodes in the cloud, but you do have the ability to monitor OS-level statistics, and you can check the health, capacity, and performance of your Oracle Cloud Infrastructure resources on demand using queries, or passively using alarms. Queries and alarms rely on metrics emitted by your resources to the Monitoring service.


An administrator has full privileges to manage and monitor all Oracle Cloud Infrastructure resources in its identity domain/tenancy.

If you want a different user who can only monitor the OCI resources, a new user can be created and assigned the appropriate IAM policies.

Monitoring is possible only for resources that are enabled for monitoring, i.e. that emit metrics. Queries and alarms use these emitted metrics in their notification rules and methods. The Monitoring service works with the Notifications service to notify you when a metric breaches a threshold.

Let's get started:

Log in to your Oracle Cloud Infrastructure service console, open the navigation menu, and click Monitoring > Service Metrics to open the "Monitoring" page for all OCI resources.






The "Service Metrics" page opens, displaying the default charts for metrics in the first accessible compartment and metric namespace. Very small or large values are indicated with International System of Units (SI) prefixes, such as M for mega (10 to the sixth power).
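To make the chart labels concrete, here is a small illustrative helper (not part of OCI; just a sketch of how such SI abbreviation works) that formats a raw metric value the way the charts abbreviate it:

```python
def si_format(value: float) -> str:
    """Format a metric value with an SI prefix (k, M, G), as the charts abbreviate large numbers."""
    for threshold, prefix in [(1e9, "G"), (1e6, "M"), (1e3, "k")]:
        if abs(value) >= threshold:
            return f"{value / threshold:g}{prefix}"
    return f"{value:g}"

print(si_format(2_500_000))  # 2.5M
print(si_format(4_000))      # 4k
```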

You can select any COMPARTMENT and the appropriate METRIC NAMESPACE, and adjust the START TIME and END TIME to narrow down the chart for that namespace and review its trend to assess the underlying hardware/resource health.



For any metric, you can click its "Options" menu and choose Copy Chart URL to get a direct URL for viewing that chart in a web browser.




Select any metric namespace whose trend you want to see.


















Metrics Explorer:

Write and edit queries in Monitoring Query Language (MQL), using metrics from either your application or an Oracle Cloud Infrastructure service.

By default, Metrics Explorer shows no graph; you can write an MQL query or construct one automatically using the selection criteria.





Below I am adding an MQL (Monitoring Query Language) query:

COMPARTMENT : comp01
Metrics Namespace : oci_autonomous_database
Metric name : cpuUtilization
Interval : 1m - 1 minute
Statistic : Mean
Metrics Dimension : AutonomousDBType
Dimension Value : ATP

and click on Update Chart.
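The selections above map directly onto an MQL expression of the form metric[interval]{dimension = "value"}.statistic(). As a sketch, the query string those fields produce can be assembled like this (note that metric names in the generated query are case-sensitive; the ATP CPU metric appears as CpuUtilization):

```python
def build_mql(metric: str, interval: str, statistic: str, dimensions: dict) -> str:
    """Assemble an MQL expression: metric[interval]{dim = "value", ...}.statistic()"""
    dims = ", ".join(f'{k} = "{v}"' for k, v in dimensions.items())
    return f"{metric}[{interval}]{{{dims}}}.{statistic}()"

query = build_mql("CpuUtilization", "1m", "mean", {"AutonomousDBType": "ATP"})
print(query)  # CpuUtilization[1m]{AutonomousDBType = "ATP"}.mean()
```

This is the same expression we will see later when expanding the alarm's Query section.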




Once you click "Update Chart", the Metrics Explorer populates the chart for the autonomous database.






Similarly, we can add another MQL query for the OCI compute agent, with metric name DiskBytesWritten, as below.





Alarm Status:


To create an alarm for any query, click the "Create Alarm" button instead of "Update Chart" in the Metrics Explorer, or click "Alarm Definition" on the left side; the Create Alarm page then opens with a few details to fill in before finishing.


In the "Define Alarm" section, provide a relevant alarm name, its severity, and an alarm description. Optionally, you can add resource tags as well.




In the "Metric Description" section, select the appropriate compartment, metric namespace, the metric name you want to be alerted on, and the dimension name and its value. Most importantly, set the right trigger rule (the threshold value for the metric you are enabling the alarm on). In my case, I am creating this alarm for CPU utilization, so I set a trigger value of 1 for the demonstration.
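Conceptually, the trigger rule is just a predicate evaluated against the statistic computed for each interval. A minimal sketch of that evaluation (the function and sample values are illustrative, not the Monitoring service's actual implementation):

```python
def alarm_firing(means: list, threshold: float, operator: str = ">") -> bool:
    """Return True if the latest evaluated statistic breaches the threshold.
    'means' holds per-interval mean CPU utilization values (percent)."""
    if not means:
        return False
    latest = means[-1]
    return latest > threshold if operator == ">" else latest < threshold

# With a trigger value of 1, as in the demonstration, almost any CPU activity fires the alarm.
print(alarm_firing([0.4, 0.7, 2.3], threshold=1))  # True
print(alarm_firing([0.4, 0.2], threshold=1))       # False
```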






In the "Notification" section, select the topic in which you have defined the email subscription and click "Save Alarm".




If you don't have an alarm topic yet, click "Create a Topic" and provide the email details; an alarm topic holds the email subscriptions to which alarm notifications are sent. In my case, I am creating an alarm on CPU utilization for my autonomous database dimensions. Provide a NOTIFICATION FREQUENCY at which the alarm status is re-evaluated to check whether it is still above the defined threshold or has cleared; if it remains above the threshold, the alarm fires again and emails the subscribed address. Finally, make sure the alarm is checked as enabled and click "Save Alarm".


Check the "REPEAT NOTIFICATION" option to set the frequency at which the alarm notification repeats while the event is not resolved/cleared.



Once you save it, the alarm is created and available for monitoring.



When you subscribe to the alarm topic you created, you receive a confirmation email to verify the destination address where alerts will be dispatched.

You get an email like the one below.






Alarm Definition:

Once you create the alarm, you can go back to Alarm Definition to see the list of alarms you have created for monitoring the different OCI resources/dimensions.



To view the MQL for an alarm definition, open the alarm, go to its Query section, and expand it.



CpuUtilization[1m]{AutonomousDBType = "ATP"}.mean()



OCID1.AUTONOMOUSDATABASE.OC1.AP-MUMBAI-1.ABRG6LJRBEWFTBLJTRMG74GA6BJ3CBHXH6SQB3ESXDBYO2C3M2OKBGJ3OFUA



You can click the alarm name to view its firing history.






Once the alert criteria are met, the alarm fires and you get an email alert like the one below. If you no longer want to receive email notifications, or a user wants to be excluded from the email alert subscription, they can click the "unsubscribe" link in the email body to stop receiving further alerts.





Similarly, you can define alarm notifications for any metric available in a METRIC NAMESPACE in a compartment.

Hope it helps, thanks for reading, please subscribe to this blog to stay updated with latest news on Oracle Cloud Infrastructure and Oracle Autonomous Database Cloud Services and new articles.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/



Friday, September 27, 2019

Creating a Block Volume in Oracle Cloud Infrastructure


In this article, let's see how to create a new Block Volume in Oracle Cloud Infrastructure Services.


Block volumes are network-attached storage volumes that can be attached to compute instances. The Oracle Cloud Infrastructure Block Volumes service provides persistent storage that can be attached to compute instances using the iSCSI protocol. The volumes are stored in high performance network storage and support automated backup and snapshot capabilities. Volumes and their backups are accessible only from within a customer's VCN and are encrypted at rest using unique keys. For additional security, iSCSI CHAP authentication can be required on a per-volume basis.


The Oracle Cloud Infrastructure Block Volume service lets you dynamically provision and manage block storage volumes. You can create, attach, connect, and move volumes as needed to meet your storage and application requirements. After you attach and connect a volume to an instance, you can use the volume like a regular hard drive. You can also disconnect a volume and attach it to another instance without the loss of data.



Log in to your Oracle Cloud account, open the infrastructure menu, and under "Core Infrastructure" click Block Storage > Block Volumes.


The Create Block Volume page opens. Provide its name, the compartment where you want to place this resource, the appropriate availability domain (AD), and its size (which must be greater than or equal to 50 GB). You can select a backup policy (Gold, Silver, or Bronze) for the block volume being created. If you already have your own encryption key stored in the cloud Vault, select that option and provide the key; otherwise keep the default and let OCI create and manage one for you.
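The form enforces a couple of constraints worth remembering. As an illustrative sketch (not OCI code), a client-side check mirroring the constraints described above might look like this:

```python
VALID_POLICIES = {"Gold", "Silver", "Bronze"}

def validate_volume_request(size_gb, backup_policy=""):
    """Collect validation errors for a block volume request,
    mirroring the constraints described for the console form."""
    errors = []
    if size_gb < 50:
        errors.append("size must be greater than or equal to 50 GB")
    if backup_policy and backup_policy not in VALID_POLICIES:
        errors.append("backup policy must be Gold, Silver, or Bronze")
    return errors

print(validate_volume_request(50, "Gold"))   # [] -- a valid request
print(validate_volume_request(20, "Platinum"))
```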




You can give this resource a tag, then click "Create Block Volume" to start provisioning.




In a few seconds, the block volume is created and ready to be attached to any compute instance.






Hope it helps, thanks for reading, please subscribe to this blog to stay updated with latest news on Oracle OCI, Oracle Autonomous Database Cloud Services and new articles.




out of host capacity : Oracle Cloud Infrastructure



While provisioning a new compute resource, your request may fail with "out of host capacity" after you click the "Create" button. For example, I tried to provision a compute instance in the Mumbai region and it ended with "out of host capacity".




Note: The selected shape does not have any available servers in the selected region and Availability Domain (AD). Virtual Machines (VM) are dynamically provisioned. If an AD has reached a minimum threshold, new hypervisors (physical servers) will be automatically provisioned. There may be some occasions where the additional capacity has not finished provisioning before the existing capacity is exhausted, but when retrying in 15 minutes the customer may find the shape they want is available. Alternatively, selecting a different shape, AD or region will almost certainly have the capacity needed.


So let's try to provision the compute instance in another region. I tried Sydney and it went through, since free VMs were available for the request.
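The note above suggests a simple recovery strategy: retry the same shape after a delay, then fall back to another AD or region. A sketch of that loop (provision_instance is a hypothetical callable standing in for the real provisioning call, not an OCI SDK API):

```python
import time

def provision_with_fallback(provision_instance, placements, retry_delay_s=900):
    """Try each (region, availability_domain) placement in turn; retry once after
    a delay when capacity is exhausted, then fall back to the next placement."""
    for region, ad in placements:
        for attempt in range(2):
            try:
                return provision_instance(region, ad)
            except RuntimeError as err:  # stand-in for an 'out of host capacity' error
                if "out of host capacity" not in str(err):
                    raise
                if attempt == 0:
                    time.sleep(retry_delay_s)  # e.g. 15 minutes, per the note above
    raise RuntimeError("no capacity in any candidate placement")

# Example: Mumbai is out of capacity, Sydney succeeds.
def fake_provision(region, ad):
    if region == "ap-mumbai-1":
        raise RuntimeError("out of host capacity")
    return f"instance in {region}/{ad}"

print(provision_with_fallback(fake_provision,
                              [("ap-mumbai-1", "AD-1"), ("ap-sydney-1", "AD-1")],
                              retry_delay_s=0))  # instance in ap-sydney-1/AD-1
```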

Click the region drop-down menu at the top right and select another region you have already subscribed to. In my case, I changed it to Sydney.

and finally click on the "Create" button.



In the Sydney region, compute instance provisioning started, which means VM resources were available for the request.





The compute instance is now created and running there.





Hope it helps, thanks for reading, please subscribe to this blog to stay updated with latest news on Oracle Autonomous Database Cloud Services and new articles.



Thursday, September 26, 2019

Terminate an Oracle Autonomous Database Cloud



In this article, I will step you through the process of terminating an Oracle Autonomous Transaction Processing database service.


Terminating an Autonomous Transaction Processing database permanently deletes the instance and removes all automatic backups. You cannot recover a terminated database.

Log in to your Oracle Cloud Infrastructure service console and navigate to the Autonomous Transaction Processing database.

You get to the Autonomous Databases list; move your cursor to the three dots at the right of the ATP instance you want to terminate/delete.

Click "Terminate" in the drop-down list.



Once you click "Terminate", it asks for confirmation: enter the name of the Autonomous Database you want to terminate and click "Terminate Autonomous Database" to initiate the termination job.







ATP Instance terminated:




Hope it helps, thanks for reading, please subscribe to this blog to stay updated with latest news on Oracle Autonomous Database Cloud Services and new articles.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/


Monday, September 23, 2019

Single-Click APEX startup on Oracle Autonomous Database



Here is a very exciting feature for Oracle Cloud administrators and developers who manage and use Oracle APEX: you can now start APEX with just a single click, running on Oracle Autonomous Database Cloud Services on Gen 2 Oracle Cloud Infrastructure.

In this article, I will step you through an overview of how to do it in the simplest way, as it is designed.

A cloud administrator or ATP database administrator can log in to the Autonomous Transaction Processing database instance for which they want to start the APEX framework, click "Development" in the left panel, then click "Oracle APEX", which directly opens a new browser tab for logging in to APEX. Alternatively, copy the URL from "RESTful Services and SODA" and paste it into another tab/browser to reach the APEX login page.


Click "Oracle APEX" and its "Administration Services" login page appears. Since this is a brand new ATP instance and we are opening APEX for the first time, we need to log in as the ADMIN user to create the developer users and workspaces they will use.



After logging in as ADMIN, it welcomes you with "Create Workspace".



Click "Create Workspace" and give a new or existing database user to use with your new workspace. If users already exist in the database, you can click at the right side of the "Database User" box to get the list of users and select one. I am creating a new one here.



Once all the information is provided, click "Create Workspace" and it is created in seconds.




Now log off from the administration account and log back in with the development user name: USER01




APEX Framework homepage opens.






SQL Worksheet tab:



Team Development Tab :




App Gallery:

From the "App Gallery" you can instantly install any available application just by clicking on it; see the following screenshots.

















Application Installed:




Open the application by clicking the Play icon next to the "Manage" button to start using it.










Hope it helps, thanks for reading, please subscribe to this blog to stay updated with latest news on Oracle Autonomous Database Cloud Services and new articles.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/