
Thursday, November 29, 2018

Oracle Database 19c New Features


In this article, I will briefly cover some of the new features being introduced in the upcoming Oracle Database 19c, which will probably be released in the first quarter of 2019.

......... Continued ......

Sunday, September 16, 2018

How to check Oracle Exadata Model




While working on an Exadata database, you may sometimes be curious to know which Exadata model your database is running on.

Here is a simple check you can run after logging in to any compute node of the Exadata cluster (assuming you have access to the compute nodes; for DBaaS instances on Oracle private cloud, this depends on your project role and its access level).

Log in to Exadata compute node 1 and navigate to the following directory.

$cd /opt/oracle.SupportTools/onecommand

Then

$cat databasemachine.xml | grep MACHINE

and you will get the following output describing the Exadata model you are working with.



/opt/oracle.SupportTools/onecommand
dbm01rac01[TESTDB] cat databasemachine.xml | grep MACHINE

<MACHINETYPE>36</MACHINETYPE>
<MACHINETYPE>X3-2 Half Rack HC 3TB</MACHINETYPE>
<MACHINEUSIZE>42</MACHINEUSIZE>

In the above output, you can see that the current Exadata Database Machine model is an X3-2 Half Rack, High Capacity, with 3TB disks.
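If you just want the model string, grep can read the file directly instead of being fed by cat. Here is a small sketch run against a hypothetical local copy of the file; the sample content below simply mirrors the output above, while the real file lives in /opt/oracle.SupportTools/onecommand on the compute node:

```shell
# Hypothetical sample mirroring the databasemachine.xml content shown above;
# on a real compute node, grep the actual file in /opt/oracle.SupportTools/onecommand.
cat > /tmp/databasemachine.xml <<'EOF'
<MACHINETYPE>36</MACHINETYPE>
<MACHINETYPE>X3-2 Half Rack HC 3TB</MACHINETYPE>
<MACHINEUSIZE>42</MACHINEUSIZE>
EOF

# grep reads the file directly; no cat pipeline needed
grep MACHINE /tmp/databasemachine.xml
```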

Tuesday, September 11, 2018

Check if SQL Statement used Exadata Smart Scans



I was recently reading an excellent Oracle Exadata book to deepen my knowledge of Oracle Exadata; it is a must-read for every Oracle DBA. I am posting one small excerpt from it here, showing how the authors explain the Exadata concepts and a practical way to find the Exadata offload-eligible bytes for a given SQL statement, i.e., how to check whether your SQL statement was offloaded to the Exadata storage cells and a Smart Scan was used.


There are lots of things explained wonderfully in this book. If you want to get a copy and take your career to a new level, get the book here: Click Here



https://www.amazon.in/Expert-Oracle-Exadata-Martin-Bach/dp/1430262419/ref=sr_1_fkmr1_3?ie=UTF8&qid=1536641983&sr=8-3-fkmr1&keywords=expert+oracle+exadata+second+edition




Offload Eligible Bytes
There is another clue to whether a statement used a Smart Scan or not. As you saw in previous sections, the V$SQL family of views contain a column called IO_CELL_OFFLOAD_ELIGIBLE_BYTES, which shows the number of bytes that are eligible for offloading. This column can be used as an indicator of whether a statement used a Smart Scan. It appears that this column is set to a value greater than 0 only when a Smart Scan is used. You can make use of this observation to write a little script (fsx.sql) that returns a value of YES or NO, depending on whether that column in V$SQL has a value greater than 0. The output of the script is a little too wide to fit in a book format, which is why there are a couple of cut-down versions in the examples. And, of course, all of the versions will be available in the online code repository. You have already seen the script in action in several of the previous sections. The script is shown here for your convenience, along with an example of its use:
> !cat fsx.sql
----------------------------------------------------------------------------------------
--
-- File name:   fsx.sql
--
-- Purpose:     Find SQL and report whether it was Offloaded and % of I/O saved.
--
-- Usage:       This scripts prompts for two values.
--
--              sql_text: a piece of a SQL statement like %select col1, col2 from skew%
--
--              sql_id: the sql_id of the statement if you know it (leave blank to ignore)
--
-- Description:
--
--              This script can be used to locate statements in the shared pool and
--              determine whether they have been executed via Smart Scans.
--
--              It is based on the observation that the IO_CELL_OFFLOAD_ELIGIBLE_BYTES
--              column in V$SQL is only greater than 0 when a statement is executed
--              using a Smart Scan. The IO_SAVED_% column attempts to show the ratio of
--              data received from the storage cells to the actual amount of data
--              that would have had to be retrieved on non-Exadata storage. Note that
--              as of 11.2.0.2, there are issues calculating this value with some queries.
--
--              Note that the AVG_ETIME will not be accurate for parallel queries. The
--              ELAPSED_TIME column contains the sum of all parallel slaves. So the
--              script divides the value by the number of PX slaves used which gives an
--              approximation.
--
--              Note also that if parallel slaves are spread across multiple nodes on
--              a RAC database the PX_SERVERS_EXECUTIONS column will not be set.
--
---------------------------------------------------------------------------------------
set pagesize 999
set lines 190
col sql_text format a70 trunc
col child format 99999
col execs format 9,999
col avg_etime format 99,999.99
col "IO_SAVED_%" format 999.99
col avg_px format 999
col offload for a7

select sql_id, child_number child, plan_hash_value plan_hash, executions execs,
(elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions)/
decode(px_servers_executions,0,1,px_servers_executions/
decode(nvl(executions,0),0,1,executions)) avg_etime,
px_servers_executions/decode(nvl(executions,0),0,1,executions) avg_px,
decode(IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,'No','Yes') Offload,
decode(IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,0,
100*(IO_CELL_OFFLOAD_ELIGIBLE_BYTES-IO_INTERCONNECT_BYTES)
/decode(IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,1,IO_CELL_OFFLOAD_ELIGIBLE_BYTES))
"IO_SAVED_%", sql_text
from v$sql s
where upper(sql_text) like upper(nvl('&sql_text',sql_text))
and sql_text not like 'BEGIN :sql_text := %'
and sql_text not like '%IO_CELL_OFFLOAD_ELIGIBLE_BYTES%'
and sql_id like nvl('&sql_id',sql_id)
order by 1, 2, 3
/
In the fsx script, you can see that the OFFLOAD column is just a DECODE that checks to see if the IO_CELL_OFFLOAD_ELIGIBLE_BYTES column is equal to 0 or not. The IO_SAVED_% column is calculated using the IO_INTERCONNECT_BYTES field, and it attempts to show how much data was returned to the database servers.
The script can be used for many useful purposes. The author primarily uses it to find the SQL_ID and child cursor number of SQL statements in the shared pool. In this example, it is used to determine if a statement has been offloaded or not:
SQL> select /*+ gather_plan_statistics fsx-example-002 */
  2  avg(id) from bigtab where id between 1000 and 50000;

   AVG(ID)
----------
     25500

Elapsed: 00:00:00.64
SQL> alter session set cell_offload_processing=false;

Session altered.

Elapsed: 00:00:00.00
SQL> select /*+ gather_plan_statistics fsx-example-002 */
  2  avg(id) from bigtab where id between 1000 and 50000;

   AVG(ID)
----------
     25500

Elapsed: 00:00:53.88
SQL> @fsx4
Enter value for sql_text: %fsx-example-002%
Enter value for sql_id:

SQL_ID         CHILD OFFLOAD IO_SAVED_%  AVG_ETIME SQL_TEXT
------------- ------ ------- ---------- ---------- ----------------------------------------
cj0p52wha5wb8      0 Yes          99.97        .63 select /*+ gather_plan_statistics fsx-ex
cj0p52wha5wb8      1 No             .00      53.88 select /*+ gather_plan_statistics fsx-ex

2 rows selected.
The elapsed times are a bit of a giveaway as to whether the statement was offloaded or not, but if you are called in after the fact, the output of the fsx script clearly shows that the child_number 1 has not been offloaded. The fact that a new child cursor has been created is very important in this example. When setting CELL_OFFLOAD_PROCESSING to FALSE, the optimizer created a new child cursor due to a mismatch. Reasons why child cursors are created can be found in v$sql_shared_cursor. This view contains a long list of flags that allow you to identify differences between child cursors but is very hard to read in SQL*Plus. Oracle added a CLOB containing XML data in 11.2.0.2 that makes it easier to spot the difference. Using the SQL ID from the previous example, this is put to use. Note that I cast the CLOB to XML for better readability:
SQL> select xmltype(reason) from v$sql_shared_cursor
  2   where sql_id = 'cj0p52wha5wb8' and child_number = 0;

XMLTYPE(REASON)
---------------------------------------------------------------------------
<ChildNode>
  <ChildNumber>0</ChildNumber>
  <ID>3</ID>
  <reason>Optimizer mismatch(12)</reason>
  <size>2x356</size>
  <cell_offload_processing> true     false  </cell_offload_processing>
</ChildNode>
Translating the XML output into plain English, you can see that there was an optimizer mismatch: The parameter cell_offload_processing has changed from TRUE to FALSE.
It is not always the case for child cursors to be created after changing parameters. Certain underscore parameters such as _SERIAL_DIRECT_READ will not cause a new child cursor to be created. Some executions of the same cursor might be offloaded, others not. This can be quite confusing, although this should be a very rare occurrence! Here is an example to demonstrate the effect:
SQL> select /*+ gather_plan_statistics fsx-example-004 */ avg(id)
  2  from bigtab where id between 1000 and 50002;

   AVG(ID)
----------
     25501

Elapsed: 00:00:00.68
SQL> alter session set "_serial_direct_read" = never;

Session altered.

Elapsed: 00:00:00.00
SQL> select /*+ gather_plan_statistics fsx-example-004 */ avg(id)
  2  from bigtab where id between 1000 and 50002;

   AVG(ID)
----------
     25501

Elapsed: 00:04:50.32
SQL> alter session set "_serial_direct_read" = auto;

Session altered.

Elapsed: 00:00:00.00
SQL> select /*+ gather_plan_statistics fsx-example-004 */ avg(id)
  2  from bigtab where id between 1000 and 50002;

   AVG(ID)
----------
     25501

Elapsed: 00:00:00.63
SQL> @fsx4
Enter value for sql_text: %fsx-example-004%
Enter value for sql_id:

SQL_ID         CHILD OFFLOAD IO_SAVED_%  AVG_ETIME SQL_TEXT
------------- ------ ------- ---------- ---------- ----------------------------------------
6xh6qwv302p13      0 Yes          55.17      97.21 select /*+ gather_plan_statistics fsx-ex
As you can see, there are three executions using the same child cursor (no new child cursor has been created). The statistics about I/O saved and execution time now have little value: Two executions completed in less than a second, and one took almost five minutes. This is the well-known problem with averages: They obfuscate detail.

Sunday, September 9, 2018

Cancelling a SQL Statement in a Session


Cancelling a SQL Statement in a Session
---------------------------------------------

An interesting new feature was introduced in Oracle Database 12c Release 2: you can now cancel a SQL statement running in a session using the ALTER SYSTEM CANCEL SQL statement.



The following clauses are required in an ALTER SYSTEM CANCEL SQL statement:

• SID – Session ID
• SERIAL – Session serial number



The following clauses are optional in an ALTER SYSTEM CANCEL SQL statement:

• INST_ID – Instance ID
• SQL_ID – SQL ID of the SQL statement



The following is the syntax for cancelling a SQL statement:

 ALTER SYSTEM CANCEL SQL 'SID, SERIAL, @INST_ID, SQL_ID';  



The following example cancels a SQL statement having the session identifier of 20, session serial number of 51142, and SQL ID of 8vu7s907prbgr:


 ALTER SYSTEM CANCEL SQL '20, 51142, 8vu7s907prbgr'; 
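Before you can run the cancel statement, you need the session's SID, serial number, and optionally the SQL_ID of the running statement. A sketch of how you might look these up in V$SESSION; the filter on username is only an illustration:

```sql
-- Find candidate sessions to cancel; the SCOTT filter is only an example.
SELECT sid, serial#, sql_id, username, status
FROM   v$session
WHERE  username = 'SCOTT'
AND    status   = 'ACTIVE';

-- Then plug the values into the cancel statement:
-- ALTER SYSTEM CANCEL SQL 'SID, SERIAL#, SQL_ID';
```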




Sunday, August 19, 2018

Drop Pluggable Database



Use the DROP PLUGGABLE DATABASE statement to drop a pluggable database (PDB).

When you drop a PDB, the control file of the multitenant container database (CDB) is modified to remove all references to the dropped PDB and its data files. Archived logs and backups associated with the dropped PDB are not deleted. You can delete them using Oracle Recovery Manager (RMAN), or you can retain them in case you subsequently want to perform point-in-time recovery of the PDB.

Log in to the root container.

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT


List the pluggable databases that exist in the multitenant container database using the "show pdbs" command. In this example, I am going to drop the PDB3 pluggable database.



SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB3                           MOUNTED


To drop a pluggable database from your container database, run the DROP PLUGGABLE DATABASE command while logged in to the root container with the SYSDBA privilege.

In this demo, I am using the INCLUDING DATAFILES clause with the DROP PLUGGABLE DATABASE command, as I do not want to preserve the data files of the PDB being dropped.



SQL> drop pluggable database pdb3 including datafiles;

Pluggable database dropped.


Note: There is another option, the KEEP DATAFILES clause. Use it if you want to preserve the data files of the PDB being dropped, so that you can later perform point-in-time recovery of the dropped PDB from its associated backups. Only the PDB's temp files are deleted from the OS file system, since they are not required for incomplete recovery of the pluggable database.
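As a sketch of that alternative path (note that a PDB must be closed before it can be dropped; pdb3 is the name used in this demo):

```sql
-- A PDB must be closed (in all instances) before it can be dropped.
ALTER PLUGGABLE DATABASE pdb3 CLOSE IMMEDIATE;

-- KEEP DATAFILES preserves the data files on disk, leaving the
-- door open for a later point-in-time recovery of the PDB.
DROP PLUGGABLE DATABASE pdb3 KEEP DATAFILES;
```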

From the container database (CDB) alert log, you can see that all data files associated with the dropped pluggable database (PDB) are permanently deleted from the OS file system.



Sun Aug 19 20:28:55 2018
Deleted file /u01/app/oracle/oradata/Win2Linux_data_D-CDB3_I-680903414_TS-USERS_FNO-9_0htafbrg.dbf
.
.
Deleted file /u01/app/oracle/oradata/Win2Linux_data_D-CDB3_I-680903414_TS-SYSAUX_FNO-8_0dtafbpb.dbf
Deleted file /u01/app/oracle/oradata/Win2Linux_data_D-CDB3_I-680903414_TS-SYSTEM_FNO-7_0ftafbqu.dbf
Completed: drop pluggable database pdb3 including datafiles
Sun Aug 19 20:29:52 2018


Here we can list the pluggable databases again using the "show pdbs" command and see that PDB3 is now dropped.



SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
SQL> 







Thursday, July 26, 2018

Oracle Database 18c - Database Creation using DBCA


In our last article [click here], we saw the Oracle Database 18c software-only deployment. In this blog post, we will create a fresh database using the Oracle Database 18.3 DBCA.



$ export ORACLE_HOME=<path to your 18c Oracle Home>
$ cd $ORACLE_HOME/bin

$ export DISPLAY=:0.0

$ ./dbca
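On servers without an X display, DBCA can also be driven in silent mode instead of the GUI walkthrough below. This is only a sketch: the database/PDB names, password, and memory figure are placeholders, and the exact flags should be verified with dbca -help on your system:

```shell
# Sketch of a silent-mode DBCA create; names and values are placeholders.
$ORACLE_HOME/bin/dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName cdb1 -sid cdb1 \
  -createAsContainerDatabase true -numberOfPDBs 1 -pdbName pdb1 \
  -sysPassword MyPassw0rd -systemPassword MyPassw0rd -pdbAdminPassword MyPassw0rd \
  -totalMemory 2048
```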

Select "Create a database" option and click next.





Fill in the values under the "Typical Configuration" options for the database being created: your container database (CDB) name and other options, along with the container database SYS/SYSTEM user password. Choose the "Create as Container database" option if you want your database to be a multitenant database, and provide the PDB (pluggable database) name under it. Click Next.





Since I have given a simple password, a warning pops up that the password provided does not meet the Oracle recommended standards for password complexity. I am OK with this, so I click Yes to proceed to the next step.



On this summary page, you can review all the options you have selected. One new thing you will notice is that this summary page also lists the "Initialization Parameters" for the typical configuration option. Once you are OK with the options, click Finish.





Database creation starts.......




Click on Close.....



Set the database OS environment and log in to verify the instance.







By default, user-created pluggable databases are left in MOUNTED state when the CDB starts; you can open them later using the ALTER PLUGGABLE DATABASE command as below.
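For example, to open the PDB and, from 12.1.0.2 onward, make that open state survive CDB restarts (pdb1 here is a placeholder name):

```sql
-- Open the pluggable database (pdb1 is a placeholder name).
ALTER PLUGGABLE DATABASE pdb1 OPEN;

-- Optionally preserve the open state across CDB restarts.
ALTER PLUGGABLE DATABASE pdb1 SAVE STATE;
```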




Subscribe to this blog to stay tuned for upcoming Oracle Database 18c articles.........


Wednesday, July 25, 2018

mount: unknown filesystem type 'vmhgfs'




I ran into trouble sharing a host folder with the guest Linux operating system after upgrading VMware Workstation from version 10 to 12.0 on Windows 10.




 [root@rac1 mnt]# mount -t vmhgfs .host:/ /home/oracle  
 mount: unknown filesystem type 'vmhgfs' 



Solution (run the following inside the guest operating system in the VM):


 [root@rac1 mnt]# /usr/bin/vmhgfs-fuse /mnt  
 [root@rac1 mnt]# df -h  
 Filesystem      Size Used Avail Use% Mounted on  
 devtmpfs       4.9G   0 4.9G  0% /dev  
 tmpfs        4.9G 100K 4.9G  1% /dev/shm  
 tmpfs        4.9G 9.1M 4.9G  1% /run  
 tmpfs        4.9G   0 4.9G  0% /sys/fs/cgroup  
 /dev/mapper/ol-root  12G 4.2G 7.5G 36% /  
 /dev/mapper/ol-tmp  4.9G  45M 4.9G  1% /tmp  
 /dev/mapper/ol-u01  30G  19G  12G 63% /u01  
 /dev/sda1      197M 173M  25M 88% /boot  
 tmpfs        1000M 8.0K 1000M  1% /run/user/1001  
 /dev/sr0       102M 102M   0 100% /run/media/oracle/VMware Tools  
 vmhgfs-fuse     460G 400G  61G 87% /mnt 


Above you can see that /mnt is now mounted; this was required since our shared folders get mounted inside it.
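To make the mount persist across reboots, an /etc/fstab entry along these lines can be used. This is a sketch; the exact options can vary with your VMware Tools / open-vm-tools version:

```shell
# /etc/fstab entry (sketch) to mount VMware shared folders at boot
.host:/    /mnt    fuse.vmhgfs-fuse    defaults,allow_other    0 0
```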



Now my shared folders are visible in the guest operating system.


 [root@rac1 ~]# cd /mnt/  
 [root@rac1 mnt]# ls -ltr  
 total 12  
 drwxrwxrwx. 1 root root 4096 Jul 25 00:18 18c installation  
 drwxrwxrwx. 1 root root 8192 Jul 25 10:08 Database Products  
 [root@rac1 mnt]#  




Hope it helps !!

Oracle Database 18c - [18.3] Installation


Oracle Database 18c became available yesterday to download for on-premise deployment on Linux platforms, so in this blog post I will outline the step-by-step installation of the latest Oracle Database 18.3 release.



Click here to download Oracle Database 18c - RU 18.3

Once downloaded, create an Oracle Home directory on your target database server and unzip the downloaded Oracle Database 18c dbhome zip file into that newly created ORACLE_HOME directory.

In my test case, I unzipped the downloaded package inside the following directory, so that is my ORACLE_HOME for this installation demo.

 [oracle@rac1 oracle]$ pwd  
 /u01/app/oracle 
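For reference, the unzip step looks roughly like this. LINUX.X64_180000_db_home.zip was the file name Oracle used for the 18.3 release, but verify it against your actual download, and /stage is just a placeholder for wherever you saved the zip:

```shell
# Unzip the 18c image directly into the (empty) ORACLE_HOME directory.
mkdir -p /u01/app/oracle
cd /u01/app/oracle
unzip -q /stage/LINUX.X64_180000_db_home.zip
```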


Note: With 18c, the installers are invoked from the unzipped ORACLE_HOME: ./runInstaller to install the Oracle Database 18c RDBMS software, and gridSetup.sh for Oracle Database 18c Grid Infrastructure.



Let's now invoke ./runInstaller from the unzipped ORACLE_HOME location to proceed with the Oracle Database 18c installation.

 [oracle@rac1 oracle]$ export ORACLE_HOME=/u01/app/oracle  
 [oracle@rac1 oracle]$ ./runInstaller  


Once you invoke ./runInstaller from the ORACLE_HOME, the following "Configuration Option" installation wizard page appears.





















This demo article covers the "Software Only" configuration; later we will look at creating a database separately using DBCA. Click Next.



Select "Single instance database installation" and click Next.





Select the database edition you want to deploy. In this demo, I selected "Enterprise Edition". Click Next.




Select "Oracle base" location per your environment and click Next.




Select the OS group memberships used for OS authentication of the corresponding SYS* privileges and click Next.








In the above screenshot, some of the checks show warnings; I will simply ignore them all and proceed with the database deployment.






Above is the database deployment summary page, where you can review all the settings you opted for. Click "Install" to proceed with the deployment.






Here it prompts you to run a root script; log in to a separate terminal and run the script as the root user.







One interesting thing you will notice in the root script execution above is that it installs TFA as well. Earlier, we used to get this installed as part of Oracle Grid Infrastructure deployments.

After root script execution, click on OK and you are done with the Oracle Database 18c deployment now.



Click close -


Since we have installed Oracle Database 18c, we can now get its SQL prompt.




Another interesting thing you will notice is that TFA and OSWatcher get installed as part of the Oracle Database 18.3 deployment, so you won't have to install these OS monitoring and database diagnostic log collection support tools separately.




Hope you enjoyed this step-by-step graphical deployment of Oracle Database 18.3.

Please subscribe to the blog to stay tuned on upcoming Oracle 18c articles.....

Tuesday, July 24, 2018

Oracle Database 18c [18.3] is available to download now !!



Hurray! The most awaited Oracle Database 18c is now available to download for on-premise deployments.


Click on the below link and download :

https://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html



Wednesday, June 20, 2018

Oracle beats Q4 estimates on solid Cloud Services


Oracle performed better than expected in the Oracle Cloud market.... great growth ahead.......
Please read below news for details :
https://www.channelnewsasia.com/news/technology/oracle-s-quarterly-results-beat-estimates-on-cloud-growth-10449478
And
https://www.google.co.in/amp/s/www.zdnet.com/google-amp/article/oracle-beats-q4-estimates-on-solid-cloud-services-growth/

Tuesday, June 5, 2018

[Oracle Cloud] : DBAAS Monitor Console Overview


In this article, I would like to give a detailed overview of the Oracle Cloud DBaaS Monitor console, which you can use to monitor your single-instance Oracle Database Cloud instances.

"Database deployments of single-instance databases on Oracle Database Cloud Service include Oracle DBaaS Monitor, a built-in monitor that provides a wide spectrum of information about Oracle Database and operating system status and resource usage."


You can access Oracle DBaaS Monitor in the following ways:


Using the “Open DBaaS Monitor Console” Menu Item to Access Oracle DBaaS Monitor


Note:
For database deployments built on Oracle Cloud Infrastructure Classic, the network port to access Oracle DBaaS Monitor is blocked by default. To use the Open DBaaS Monitor Console menu item, you must unblock port 443, either by enabling the deployment’s ora_p2_httpssl predefined access rule or by creating your own access rule that opens port 443. For instructions, see Enabling Access to a Compute Node Port.



To open the DBaaS Monitor console, go to the service page of your Oracle Database Cloud Service console, click the hamburger icon at the right side of the service name, and click the "Open DBaaS Monitor Console" option in the drop-down menu, as depicted below.




Once clicked, the DBaaS Monitor console opens in a separate tab or page. Sign in to the DBaaS Monitor console using the default dbaas_monitor user and the password of your Oracle Cloud service account.



Click Log In, and you will be shown the DBaaS Monitor home page below: a database dashboard that gathers lots of summary information about the database instance in one place.





To check the memory usage details on the Oracle Cloud compute node, click on OS at the top and select Memory from the drop-down menu; you will be shown the memory usage details by top processes, as depicted below.




To check the CPU usage details on the compute node where your Oracle Cloud database instance is hosted, click on OS at the top and select CPU from the drop-down menu; you will be shown a CPU page similar to the top command output on UNIX-based systems.




To check the storage/mount point usage details, click on the OS menu at the top and select Storage from its drop-down menu; you will be shown the mount point usage on the Oracle compute node, as depicted below.




To check the top processes running on your compute node, go to OS at the top and click the Processes option in the drop-down menu; you will be shown the page depicted below, with output similar to the top command on a Linux box.





To manage the database instance or pluggable databases using DBaaS Monitor, click on "Database" in the top menu and select Manage from the drop-down menu. You will be shown the page below, where you can stop or start a CDB or PDB. You can also create a new PDB, or plug in a PDB that was previously unplugged from the CDB.



You can check the status of the listener running on the DBaaS compute node from the DBaaS Monitor console: click "Database > Listener" and you will be shown the status of the listener running there. Please note that the listener status is refreshed automatically every 5-6 seconds to give you its current state.




To check how much storage a CDB or PDB is using, navigate to "Database > Storage" and you will see the following page, showing the overall storage used by each database out of the total available storage.




To check the backup run history, navigate to "Database > Backups", and you will see all the automatic backup runs with their status and start/end times.





You can also check the database alert log from the DBaaS Monitor console: navigate to "Database > Alerts" and you will be shown the alert messages as depicted below.

Note: You can do more with this page: scroll down to see older alert messages, apply a filter to search the alert log file, and so on.



You can check the database sessions established to the database instance by navigating to "Database > Sessions", where you can see each session and its status, SID, serial#, SQL_ID, username, module, etc.




You can check the current database wait events by navigating to "Database > Waits", as depicted below.






You can view and manage the database instance parameter file from the DBaaS Monitor console: navigate to "Database > Parameters" to view or manage the database parameters; you can edit/alter the modifiable parameters from here.







Hope you enjoyed the tour of DBaaS Monitor in this article....stay tuned for upcoming articles.