
Friday, September 4, 2020

Configure Quorum Disk in Exadata


In the last article [click here], we learnt how to remove the quorum disk from the Exadata system; here we will see how to add and configure it back.


Before adding the quorum disk configuration, keep the IB interface names and the ASM binary owner and group handy, as they are required to create the config.
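As a convenience, the IB interface names that feed --network-iface-list can be derived with a small parse of `ip` output. This is a sketch and not part of the original procedure: the sample text stands in for live `ip -o link show` output on a compute node.

```shell
# Sketch: derive IB interface names for --network-iface-list.
# On a live compute node, pipe `ip -o link show` instead of this sample.
sample_links='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
5: ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520
6: ib1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520'
# Field 2 (with ": " as separator) is the interface name; keep ib* only.
ib_ifaces=$(printf '%s\n' "$sample_links" | awk -F': ' '$2 ~ /^ib/ {print $2}' | paste -sd, -)
echo "$ib_ifaces"   # -> ib0,ib1
# The ASM binary owner/group (oragrid:asmadmin in this post) can be
# confirmed with: ls -l <GRID_HOME>/bin/oracle
```

The comma-joined result matches the format quorumdiskmgr expects for --network-iface-list.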


[root@exa01dbadm01 oracle.SupportTools]# /opt/oracle.SupportTools/quorumdiskmgr --create --config --owner=oragrid --group=asmadmin --network-iface-list="ib0, ib1"
[Info] Successfully created iface exadata_ib0 with iface.net_ifacename ib0
[Info] Successfully created iface exadata_ib1 with iface.net_ifacename ib1
[Success] Successfully created quorum disk configurations

[root@exa01dbadm01 oracle.SupportTools]#


Do the same as above on node2 as well.

[root@exa01dbadm02 oracle.SupportTools]# /opt/oracle.SupportTools/quorumdiskmgr --create --config --owner=oragrid --group=asmadmin --network-iface-list="ib0, ib1"
[Info] Successfully created iface exadata_ib0 with iface.net_ifacename ib0
[Info] Successfully created iface exadata_ib1 with iface.net_ifacename ib1
[Success] Successfully created quorum disk configurations

[root@exa01dbadm02 oracle.SupportTools]#


Check that the quorum disk configuration has been created, as shown below, on node1 and node2.

[root@exa01dbadm01 oracle.SupportTools]# /opt/oracle.SupportTools/quorumdiskmgr --list --config
Owner: oragrid
Group: asmadmin
ifaces: exadata_ib1 exadata_ib0
Initiator name: iqn.1988-12.com.oracle:192.168.10.1

[root@exa01dbadm01 oracle.SupportTools]#
[root@exa01dbadm02 oracle.SupportTools]# /opt/oracle.SupportTools/quorumdiskmgr --list --config
Owner: oragrid
Group: asmadmin
ifaces: exadata_ib1 exadata_ib0
Initiator name: iqn.1988-12.com.oracle:192.168.10.3

[root@exa01dbadm02 oracle.SupportTools]#


Now create the quorum disk target for the DATAC1 disk group, visible to both compute nodes. Before that, identify the IB interface IPs of both nodes, as they are required here.
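The IB IPs used in --visible-to and --target-ip-list can be gathered with a similar parse. A hedged sketch, where sample addresses stand in for live `ip -4 -o addr show ib0 ib1` output:

```shell
# Sketch: collect the IB IPv4 addresses for --visible-to / --target-ip-list.
# Sample text stands in for `ip -4 -o addr show ib0 ib1` on a compute node.
sample_addrs='5: ib0    inet 192.168.10.1/22 brd 192.168.11.255 scope global ib0
6: ib1    inet 192.168.10.2/22 brd 192.168.11.255 scope global ib1'
# Field 4 is the address with a /prefix suffix; strip the suffix and join.
ib_ips=$(printf '%s\n' "$sample_addrs" | awk '{sub(/\/.*$/, "", $4); print $4}' | paste -sd, -)
echo "$ib_ips"   # -> 192.168.10.1,192.168.10.2
```

Repeat on the second node and concatenate the results to build the full four-address list used below.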


Run the command on node1 and node2.

[root@exa01dbadm01 oracle.SupportTools]# /opt/oracle.SupportTools/quorumdiskmgr --create --target --asm-disk-group=datac1 --visible-to="192.168.10.1, 192.168.10.2, 192.168.10.3, 192.168.10.4"
[Success] Created logical volume /dev/VGExaDb/LVDbVdexa01dbadm01DATAC1.
[Success] Created backstore QD_DATAC1_exa01dbadm01.
[Success] Created target iqn.2015-05.com.oracle:qd--datac1--exa01dbadm01.

[root@exa01dbadm01 oracle.SupportTools]#
[root@exa01dbadm02 oracle.SupportTools]# /opt/oracle.SupportTools/quorumdiskmgr --create --target --asm-disk-group=datac1 --visible-to="192.168.10.1, 192.168.10.2, 192.168.10.3, 192.168.10.4"
[Success] Created logical volume /dev/VGExaDb/LVDbVdexa01dbadm02DATAC1.
[Success] Created backstore QD_DATAC1_exa01dbadm02.
[Success] Created target iqn.2015-05.com.oracle:qd--datac1--exa01dbadm02.

[root@exa01dbadm02 oracle.SupportTools]#


Now you can list the quorum disk targets on node1 and node2 as below to validate.

[root@exa01dbadm01 oracle.SupportTools]# /opt/oracle.SupportTools/quorumdiskmgr --list --target
Name: iqn.2015-05.com.oracle:qd--datac1--exa01dbadm01
Host name: exa01dbadm01
ASM disk group name: DATAC1
Visible to: iqn.1988-12.com.oracle:192.168.10.1, iqn.1988-12.com.oracle:192.168.10.2, iqn.1988-12.com.oracle:192.168.10.3, iqn.1988-12.com.oracle:192.168.10.4
Discovered by:


[root@exa01dbadm01 oracle.SupportTools]#
[root@exa01dbadm02 oracle.SupportTools]# /opt/oracle.SupportTools/quorumdiskmgr --list --target
Name: iqn.2015-05.com.oracle:qd--datac1--exa01dbadm02
Host name: exa01dbadm02
ASM disk group name: DATAC1
Visible to: iqn.1988-12.com.oracle:192.168.10.1, iqn.1988-12.com.oracle:192.168.10.2, iqn.1988-12.com.oracle:192.168.10.3, iqn.1988-12.com.oracle:192.168.10.4
Discovered by:


[root@exa01dbadm02 oracle.SupportTools]#


Create the quorum disk devices as below on node1 and node2.

[root@exa01dbadm01 oracle.SupportTools]# /opt/oracle.SupportTools/quorumdiskmgr --create --device --target-ip-list="192.168.10.1, 192.168.10.2, 192.168.10.3, 192.168.10.4"
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.1

[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.2

[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.3

[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.4

[root@exa01dbadm01 oracle.SupportTools]#
[root@exa01dbadm02 oracle.SupportTools]# /opt/oracle.SupportTools/quorumdiskmgr --create --device --target-ip-list="192.168.10.1, 192.168.10.2, 192.168.10.3, 192.168.10.4"
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.1

[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.2

[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.3

[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.4

[root@exa01dbadm02 oracle.SupportTools]#



Now you can list the quorum disk devices on node1 and node2 to validate.

[root@exa01dbadm01 oracle.SupportTools]# /opt/oracle.SupportTools/quorumdiskmgr --list --device
Device path: /dev/exadata_quorum/QD_DATAC1_exa01dbadm01
Host name: exa01dbadm01
ASM disk group name: DATAC1
Size: 128 MB

Device path: /dev/exadata_quorum/QD_DATAC1_exa01dbadm02
Host name: exa01dbadm02
ASM disk group name: DATAC1
Size: 128 MB


[root@exa01dbadm01 oracle.SupportTools]#


[root@exa01dbadm02 oracle.SupportTools]# /opt/oracle.SupportTools/quorumdiskmgr --list --device
Device path: /dev/exadata_quorum/QD_DATAC1_exa01dbadm02
Host name: exa01dbadm02
ASM disk group name: DATAC1
Size: 128 MB

Device path: /dev/exadata_quorum/QD_DATAC1_exa01dbadm01
Host name: exa01dbadm01
ASM disk group name: DATAC1
Size: 128 MB


[root@exa01dbadm02 oracle.SupportTools]#



Now check in the ASM instance whether the quorum disk devices are visible to ASM for use in the DATAC1 disk group. As shown below, the quorum disk devices are available as CANDIDATE disks that we can add to the disk group.

SQL> l
  1* SELECT inst_id, label, path, mode_status, header_status FROM gv$asm_disk WHERE path LIKE '/dev/exadata_quorum/%'
SQL> /

   INST_ID LABEL                           PATH                                                                   MODE_ST HEADER_STATU
---------- ------------------------------- ---------------------------------------------------------------------- ------- ------------
         1 QD_DATAC1_exa01dbadm02      /dev/exadata_quorum/QD_DATAC1_exa01dbadm02                         ONLINE  CANDIDATE
         1 QD_DATAC1_exa01dbadm01      /dev/exadata_quorum/QD_DATAC1_exa01dbadm01                         ONLINE  CANDIDATE
         2 QD_DATAC1_exa01dbadm02      /dev/exadata_quorum/QD_DATAC1_exa01dbadm02                         ONLINE  CANDIDATE
         2 QD_DATAC1_exa01dbadm01      /dev/exadata_quorum/QD_DATAC1_exa01dbadm01                         ONLINE  CANDIDATE

SQL>




Add the quorum disk devices into the disk group as below.

SQL> ALTER DISKGROUP datac1 ADD QUORUM FAILGROUP exa01dbadm01 DISK '/dev/exadata_quorum/QD_DATAC1_exa01dbadm01'
  2  QUORUM FAILGROUP exa01dbadm02 DISK '/dev/exadata_quorum/QD_DATAC1_exa01dbadm02';

Diskgroup altered.

SQL>


After the disks are added, we can see they are now members of the disk group, with header status MEMBER.


SQL> SELECT inst_id, label, path, mode_status, header_status
  2  FROM gv$asm_disk WHERE path LIKE '/dev/exadata_quorum/%';

   INST_ID LABEL                           PATH                                                                   MODE_ST HEADER_STATU
---------- ------------------------------- ---------------------------------------------------------------------- ------- ------------
         1 QD_DATAC1_exa01dbadm02      /dev/exadata_quorum/QD_DATAC1_exa01dbadm02                         ONLINE  MEMBER
         1 QD_DATAC1_exa01dbadm01      /dev/exadata_quorum/QD_DATAC1_exa01dbadm01                         ONLINE  MEMBER
         2 QD_DATAC1_exa01dbadm02      /dev/exadata_quorum/QD_DATAC1_exa01dbadm02                         ONLINE  MEMBER
         2 QD_DATAC1_exa01dbadm01      /dev/exadata_quorum/QD_DATAC1_exa01dbadm01                         ONLINE  MEMBER

SQL>



Finally, the voting disks were moved back to the DATAC1 disk group, and five copies of the voting files are now available.

[root@exa01dbadm02 oracle.SupportTools]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   cf9c32d1c0c34fafbf47caa1b77821b0 (o/10.0.0.5;10.0.0.6/DATAC1_CD_02_exad02cel01) [DATAC1]
 2. ONLINE   e2e61cd646224f08bf2c4c47e719e8fd (o/10.0.0.3;10.0.0.4/DATAC1_CD_05_exad02cel02) [DATAC1]
 3. ONLINE   7691da5101a34f95bfc56c3bec4e681b (o/10.0.0.1;10.0.0.2/DATAC1_CD_03_exad02cel03) [DATAC1]
 4. ONLINE   3dea97bdc5aa4f61bf0355abe31c1361 (/dev/exadata_quorum/QD_DATAC1_exa01dbadm02) [DATAC1]
 5. ONLINE   cc5671a086eb4f38bf5c690666099d66 (/dev/exadata_quorum/QD_DATAC1_exa01dbadm01) [DATAC1]
Located 5 voting disk(s).
[root@exa01dbadm02 oracle.SupportTools]#


After CRS restart: I bounced CRS on both nodes to verify that everything comes up normally.


[root@exa01dbadm02 oracle.SupportTools]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   cf9c32d1c0c34fafbf47caa1b77821b0 (o/10.0.0.5;10.0.0.6/DATAC1_CD_02_exad02cel01) [DATAC1]
 2. ONLINE   e2e61cd646224f08bf2c4c47e719e8fd (o/10.0.0.3;10.0.0.4/DATAC1_CD_05_exad02cel02) [DATAC1]
 3. ONLINE   7691da5101a34f95bfc56c3bec4e681b (o/10.0.0.1;10.0.0.2/DATAC1_CD_03_exad02cel03) [DATAC1]
 4. ONLINE   3dea97bdc5aa4f61bf0355abe31c1361 (/dev/exadata_quorum/QD_DATAC1_exa01dbadm02) [DATAC1]
 5. ONLINE   cc5671a086eb4f38bf5c690666099d66 (/dev/exadata_quorum/QD_DATAC1_exa01dbadm01) [DATAC1]
Located 5 voting disk(s).
[root@exa01dbadm02 oracle.SupportTools]#


Our missing quorum disk issue is now fixed!




Hope it helps, thanks for reading, please subscribe to this blog to stay updated with latest news on Oracle Cloud Infrastructure and Oracle Autonomous Database Cloud Services and new articles.


Twitter : https://twitter.com/rajsoft8899

Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/

Facebook : https://www.facebook.com/rkushawaha


Monday, August 31, 2020

Removing the Quorum Disk Configuration in Exadata

 

In this blog post, we will see how to remove quorum disks from an Exadata machine. We have a 1/8th rack Exadata machine for lab activity, where we ran into missing quorum disks for the voting disks, and I had to fix that in order to patch the system in rolling fashion and maintain high availability.

Patching an Exadata system in rolling fashion requires the voting disks to reside on a HIGH redundancy disk group, which is not possible out of the box on a 1/8th rack Exadata system: there are only three cell nodes, and HIGH redundancy requires five copies of the voting disk.
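Since NORMAL redundancy gives three voting files and HIGH needs five, a quick check is to count ONLINE lines in `crsctl query css votedisk` output. A sketch, with sample lines standing in for the live command:

```shell
# Sketch: count ONLINE voting files; NORMAL redundancy yields 3, HIGH needs 5.
# Sample lines stand in for live `crsctl query css votedisk` output.
votedisk_out=' 1. ONLINE   94acaa401bd34fe0bf261af4437fc75e (o/10.0.0.5;10.0.0.6/DATAC1_CD_02_exad02cel01) [DATAC1]
 2. ONLINE   7680d9f206544ff1bf27661bf8cc3250 (o/10.0.0.3;10.0.0.4/DATAC1_CD_05_exad02cel02) [DATAC1]
 3. ONLINE   b632f75fe4884f5cbf3627faa6bbbe07 (o/10.0.0.1;10.0.0.2/DATAC1_CD_03_exad02cel03) [DATAC1]'
n_votedisks=$(printf '%s\n' "$votedisk_out" | grep -c ' ONLINE ')
echo "$n_votedisks"   # -> 3, so two more copies (the quorum disks) are needed for HIGH
```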


So in this scenario, we add a separate disk on both compute nodes and share it between the nodes for use in the HIGH redundancy disk group; this is where the term "quorum disk" comes in.


We have the +DATAC1 disk group configured with HIGH redundancy, where our voting disks reside, but two of the quorum disks used in it went missing; as a result, only three copies of the voting disks were available and CRS was not coming up.


I had to remove the quorum disks from both compute nodes and then configure them back to fix the issue.


SQL> l 
  1* select disk_number, GROUP_NUMBER, MOUNT_STATUS, HEADER_STATUS, MODE_STATUS, STATE, path from v$asm_disk where GROUP_NUMBER=1
SQL> / 

DISK_NUMBER GROUP_NUMBER MOUNT_S HEADER_STATU MODE_ST STATE    PATH 
----------- ------------ ------- ------------ ------- -------- ---------------------------------------------------------------------- 
          2            1 MISSING UNKNOWN      OFFLINE NORMAL 
          3            1 MISSING UNKNOWN      OFFLINE NORMAL 
         14            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.1;10.0.0.2/DATAC1_CD_05_exad02cel03
         13            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.1;10.0.0.2/DATAC1_CD_00_exad02cel03
         15            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.1;10.0.0.2/DATAC1_CD_02_exad02cel03
          7            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.1;10.0.0.2/DATAC1_CD_03_exad02cel03
          8            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.1;10.0.0.2/DATAC1_CD_04_exad02cel03
          0            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.1;10.0.0.2/DATAC1_CD_01_exad02cel03
          4            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.3;10.0.0.4/DATAC1_CD_05_exad02cel02
         16            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.3;10.0.0.4/DATAC1_CD_02_exad02cel02
         17            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.3;10.0.0.4/DATAC1_CD_01_exad02cel02
          5            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.3;10.0.0.4/DATAC1_CD_00_exad02cel02
          6            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.3;10.0.0.4/DATAC1_CD_03_exad02cel02
         11            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.3;10.0.0.4/DATAC1_CD_04_exad02cel02
         19            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.5;10.0.0.6/DATAC1_CD_04_exad02cel01
          1            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.5;10.0.0.6/DATAC1_CD_02_exad02cel01
         18            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.5;10.0.0.6/DATAC1_CD_00_exad02cel01
          9            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.5;10.0.0.6/DATAC1_CD_03_exad02cel01
         12            1 CACHED  MEMBER       ONLINE  NORMAL   o/10.0.0.5;10.0.0.6/DATAC1_CD_01_exad02cel01


Due to the missing quorum disks in the DATAC1 disk group, only three copies of the voting disks are currently available.

[root@exa01dbadm02 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   94acaa401bd34fe0bf261af4437fc75e (o/10.0.0.5;10.0.0.6/DATAC1_CD_02_exad02cel01) [DATAC1]
 2. ONLINE   7680d9f206544ff1bf27661bf8cc3250 (o/10.0.0.3;10.0.0.4/DATAC1_CD_05_exad02cel02) [DATAC1]
 3. ONLINE   b632f75fe4884f5cbf3627faa6bbbe07 (o/10.0.0.1;10.0.0.2/DATAC1_CD_03_exad02cel03) [DATAC1]
Located 3 voting disk(s).
[root@exa01dbadm02 ~]#


I moved the voting disks to +RECOC1 diskgroup which was configured with NORMAL REDUNDANCY.


[oragrid@exa01dbadm01 ~]$ crsctl replace votedisk +RECOC1
Successful addition of voting disk 783e7633ea934fd0bff78c8b8dbe5a66.
Successful addition of voting disk a5cf9e2d99134f69bf6bb15c98044623.
Successful addition of voting disk d5d20cd6ff3e4fcdbf68df28cb18d030.
Successful deletion of voting disk 94acaa401bd34fe0bf261af4437fc75e.
Successful deletion of voting disk 7680d9f206544ff1bf27661bf8cc3250.
Successful deletion of voting disk b632f75fe4884f5cbf3627faa6bbbe07.
Successful deletion of voting disk e47d60407b054f3bbf5dc07df8080ee6.
Successfully replaced voting disk group with +RECOC1.
CRS-4266: Voting file(s) successfully replaced


[oragrid@exa01dbadm01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   783e7633ea934fd0bff78c8b8dbe5a66 (o/10.0.0.5;10.0.0.6/RECOC1_CD_02_exad02cel01) [RECOC1]
 2. ONLINE   a5cf9e2d99134f69bf6bb15c98044623 (o/10.0.0.3;10.0.0.4/RECOC1_CD_02_exad02cel02) [RECOC1]
 3. ONLINE   d5d20cd6ff3e4fcdbf68df28cb18d030 (o/10.0.0.1;10.0.0.2/RECOC1_CD_02_exad02cel03) [RECOC1]
Located 3 voting disk(s).
[oragrid@exa01dbadm01 ~]$


Now we can go ahead and drop the missing quorum disks from the DATAC1 diskgroup forcefully.


[oragrid@exa01dbadm01 ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 12.2.0.1.0 Production on Thu Jul 30 12:06:27 2020

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>
SQL> set lines 300
SQL> ALTER DISKGROUP DATAC1 DROP QUORUM DISK QD_DATAC1_exa01dbadm01 FORCE;

Diskgroup altered.

SQL> ALTER DISKGROUP DATAC1 DROP QUORUM DISK QD_DATAC1_exa01dbadm02 FORCE;

Diskgroup altered.

SQL>


Delete the quorum disk device from the node1 using quorumdiskmgr utility.

[root@exa01dbadm01 ~]# cd /opt/oracle.SupportTools/
[root@exa01dbadm01 oracle.SupportTools]# ./quorumdiskmgr --delete --device
[Success] Successfully deleted device /dev/exadata_quorum/QD_DATAC1_exa01dbadm01.
[Success] Successfully deleted device /dev/exadata_quorum/QD_DATAC1_exa01dbadm02.

Delete the quorum disk target from the node1 using quorumdiskmgr utility.

[root@exa01dbadm01 oracle.SupportTools]# ./quorumdiskmgr --delete --target
[Success] Successfully removed target iqn.2015-05.com.oracle:qd--datac1--exa01dbadm01
[Success] Successfully removed backstore QD_DATAC1_exa01dbadm01
[Success] Successfully removed logical volume /dev/VGExaDb/LVDbVdexa01dbadm01DATAC1


Now delete quorum disk configuration from the node1.

[root@exa01dbadm01 oracle.SupportTools]# ./quorumdiskmgr --delete --config

[Success] Successfully deleted quorum disk configurations

[root@exa01dbadm01 oracle.SupportTools]#




Now, delete the quorum disk device from node2.

[root@exa01dbadm02 ~]# cd /opt/oracle.SupportTools/
[root@exa01dbadm02 oracle.SupportTools]# ./quorumdiskmgr --delete --device
[Success] Successfully deleted device /dev/exadata_quorum/QD_DATAC1_exa01dbadm01.
[Success] Successfully deleted device /dev/exadata_quorum/QD_DATAC1_exa01dbadm02.

Delete quorum disk target from node2.

[root@exa01dbadm02 oracle.SupportTools]# ./quorumdiskmgr --delete --target
[Success] Successfully removed target iqn.2015-05.com.oracle:qd--datac1--exa01dbadm02
[Success] Successfully removed backstore QD_DATAC1_exa01dbadm02
[Success] Successfully removed logical volume /dev/VGExaDb/LVDbVdexa01dbadm02DATAC1

Delete quorum disk configuration from node 2.

[root@exa01dbadm02 oracle.SupportTools]#  ./quorumdiskmgr --delete --config
[Success] Successfully deleted quorum disk configurations

[root@exa01dbadm02 oracle.SupportTools]#


At this point, we are done removing the quorum disk configuration from our Exadata system. In the next post, I will detail how to configure it back.





Hope it helps, thanks for reading, please subscribe to this blog to stay updated with latest news on Oracle Cloud Infrastructure and Oracle Autonomous Database Cloud Services and new articles.


Twitter : https://twitter.com/rajsoft8899

Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/

Facebook : https://www.facebook.com/rkushawaha


Tuesday, October 8, 2019

Add or Remove or List TFA Users and Groups





Sometimes you may need to collect SRDC diagnostic data requested by an Oracle Support engineer for an Oracle internal error or for problem evaluation and analysis.

You don't want loads of unnecessary diagnostic traces and logs collected; you need only the traces/logs relevant to a specific Oracle incident/problem. The SRDC collection therefore needs to be run by the Oracle software binary owner, which may differ from the Oracle Clusterware owner and may not have permission to run TFA to collect diagnostic data.


If you are not a TFA user, you won't be able to run TFA and will get an error like the one below, since the oracle user is not allowed to run the TFA diag collection.

[root@dbadm01 bin]# su - oracle


$ ./tfactl diagcollect -srdc ORA-00700
User oracle does not have keys to run TFA. Please check with TFA Admin(root)

$ exit
[root@dbadm01 bin]#



The TFA administrator or the root user can add the Oracle software binary owner to the TFA access list so that they can run the TFA SRDC diagnostics collection when needed.

Let's see how to add the oracle user to the TFA access list, run the SRDC data collection, and finally remove the oracle user from the list.



You can view the list of users currently allowed to run TFA as below. We can see that only the oragrid user is allowed.


[root@dbadm01 bin]# ./tfactl access lsusers
.---------------------------------.
|       TFA Users in dbadm01      |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oragrid   | USER      | Allowed |
'-----------+-----------+---------'

.---------------------------------.
|          TFA Users in           |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oragrid   | USER      | Allowed |
| oragrid   | USER      | Allowed |
| oragrid   | USER      | Allowed |
'-----------+-----------+---------'

[root@dbadm01 bin]#


Let's now add the oracle user to the TFA access list so that it can run TFA as well.


[root@dbadm01 bin]# ./tfactl access add -user oracle -local

Successfully added 'oracle' to TFA Access list.

.---------------------------------.
|       TFA Users in dbadm01      |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oracle    | USER      | Allowed |
| oragrid   | USER      | Allowed |
'-----------+-----------+---------'

[root@dbadm01 bin]#



Since the oracle user is now added to the TFA access list, let's switch to it and try to collect the TFA SRDC diagnostic data.

[root@dbadm01 bin]# su - oracle
[oracle@dbadm01 ~]$

[oracle@dbadm01 ~]$ /u01/app/12.2.0.1/grid/bin/tfactl diagcollect -srdc ORA-00700
Enter the time of the ORA-00700 [YYYY-MM-DD HH24:MI:SS,<RETURN>=ALL] :
Enter the Database Name [<RETURN>=ALL] : PROD1

1. Sep/27/2019 17:21:00 : [prod1] ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [3], [148630702], [276915], [], [], [], [], [], [], [], []
2. Sep/27/2019 15:05:30 : [prod1] ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [2], [148630736], [276915], [], [], [], [], [], [], [], []
3. Sep/27/2019 15:05:30 : [prod1] ORA-00700: soft internal error, arguments: [PITL6], [276915], [148630736], [], [], [], [], [], [], [], [], []
4. Sep/27/2019 07:16:44 : [prod1] ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [2], [156338726], [276915], [], [], [], [], [], [], [], []
5. Sep/24/2019 09:51:00 : [prod1] ORA-00700: soft internal error, arguments: [PITL6], [276915], [156338560], [], [], [], [], [], [], [], [], []
6. Sep/24/2019 09:50:56 : [prod1] ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5], [2], [156338560], [276915], [], [], [], [], [], [], [], []

Please choose the event : 1-6 [1] 6
Selected value is : 6 ( Sep/24/2019 09:50:56 )
Scripts to be run by this srdc: ipspack rdahcve1210 rdahcve1120 rdahcve1110
Components included in this srdc: OS CRS DATABASE NOCHMOS
Collecting data for local node(s)
Scanning files from Sep/24/2019 03:50:56 to Sep/24/2019 15:50:56

Collection Id : 20191008033306dbadm01

Detailed Logging at : /u01/app/grid/tfa/repository/srdc_ora700_collection_Tue_Oct_08_03_33_07_UTC_2019_node_local/diagcollect_20191008033306_dbadm01.log
2019/10/08 03:33:10 UTC : NOTE : Any file or directory name containing the string .com will be renamed to replace .com with dotcom
2019/10/08 03:33:10 UTC : Collection Name : tfa_srdc_ora700_Tue_Oct_08_03_33_07_UTC_2019.zip
2019/10/08 03:33:11 UTC : Scanning of files for Collection in progress...
2019/10/08 03:33:11 UTC : Collecting additional diagnostic information...
2019/10/08 03:33:41 UTC : Getting list of files satisfying time range [09/24/2019 03:50:56 UTC, 09/24/2019 15:50:56 UTC]
2019/10/08 03:34:45 UTC : Completed collection of additional diagnostic information...
2019/10/08 03:36:29 UTC : Collecting ADR incident files...
2019/10/08 03:36:30 UTC : Completed Local Collection
.---------------------------------------------.
|              Collection Summary             |
+------------------+-----------+-------+------+
| Host             | Status    | Size  | Time |
+------------------+-----------+-------+------+
| dbadm01          | Completed | 249MB | 200s |
'------------------+-----------+-------+------'

Logs are being collected to: /u01/app/grid/tfa/repository/srdc_ora700_collection_Tue_Oct_08_03_33_07_UTC_2019_node_local
/u01/app/grid/tfa/repository/srdc_ora700_collection_Tue_Oct_08_03_33_07_UTC_2019_node_local/dbadm01.tfa_srdc_ora700_Tue_Oct_08_03_33_07_UTC_2019.zip
[oracle@dbadm01 ~]$

The TFA SRDC collection is now successful!


If you want to remove the oracle user from the TFA access list, you can do so with the command below.


[root@dbadm01 bin]# ./tfactl access remove -user oracle -local

Successfully removed 'oracle' from TFA Access list.

.---------------------------------.
|       TFA Users in dbadm01      |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oragrid   | USER      | Allowed |
'-----------+-----------+---------'

[root@dbadm01 bin]#



Hope it helps, thanks for reading, please subscribe to this blog to stay updated with latest news on Oracle Cloud Infrastructure, Oracle Autonomous Database Cloud Services and other new articles.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/


Monday, September 23, 2019

dbnodeupdate.sh fails for YUM update on Exadata Compute Node



Exadata Patching:

Today we were performing an Exadata patching activity (QFSDP APR 2019) on a number of our Exadata boxes. While performing the YUM update on one of the compute nodes, the dbnodeupdate.sh tool failed with the following error.


ERROR: Found dependency issues during pre-check. Packages failing:
ERROR: Consult file dbm03:/var/log/cellos/minimum_conflict_report.210919181148.txt for more information on the dependencies failing and for next steps.


Upon checking the log files, it was found that the update was failing due to obsolete RPM packages existing on the system.


Our obsolete.lst file looks like below:


[root@dbm03 bin]# cat /etc/exadata/yum/obsolete.lst
# Generated by dbnodeupdate.sh runid: 210919181148 
at.x86_64 
rhino.noarch 
jline.noarch 
jpackage-utils.noarch 
giflib.x86_64 
alsa-lib.x86_64 
xorg-x11-fonts-Type1.noarch 
prelink.x86_64 
biosconfig 
biosconfig_expat 
qlvnictools 
ibvexdmtools 
opensm.x86_64 
ofed-scripts 
ibibverbs-devel-static 
infiniband-diags-debuginfo 
libibverbs-debuginfo 
librdmacm-debuginfo 
libibumad-debuginfo 
libibmad-debuginfo 
ibutils-debuginfo 
libmlx4-debuginfo 
libsdp-debuginfo 
mstflint-debuginfo
.
.
tk.x86_64 
tmpwatch.x86_64 
unifdef.x86_64 
valgrind.x86_64 
zlib.i686 
compat-libstdc++-296.i686 
compat-libstdc++-33.i686 
libstdc++-devel.i686 
libstdc++.i686 
libgcc.i686 
glibc.i686 


# Generated by dbnodeupdate.sh runid: 210919181148 
java-*-openjdk 
nss-softokn-freebl.i686 
dmraid-events.x86_64 
dmraid.x86_64





Note: If you are updating to release 11.2.3.3.0 or later, some packages on the Exadata database server become obsolete. While updating an Exadata database server, the update utility writes the exclude RPM list and the obsolete RPM list under the /etc/exadata/yum/ location.


The following example shows the exclusion and obsolete lists in that location. In this example, an exclusion list has not yet been created by the user.


[root@dbm03 yum]# pwd 
/etc/exadata/yum 
[root@dbm03 yum]# ls -ltr 
total 24 
-r--r----- 1 root root  481 Jun 21 18:59 obsolete_nodeps.lst 
-r--r----- 1 root root 3206 Jun 21 18:59 obsolete.lst   <<<<<<<<<<<<
-r--r----- 1 root root  474 Jun 21 18:59 exclusion.lst.sample 
-rw-r----- 1 root root  125 Sep 21 18:20 obsolete_nodeps.lst.09212019 
-rw-r----- 1 root root 2497 Sep 21 18:20 obsolete.lst.09212019 
[root@dbm03 yum]#

In our case, the obsolete.lst file was created with all obsolete RPM packages, and you can see an exclusion.lst.sample file was also created.

To see which packages will become obsolete, review the contents of the obsolete.lst file. This file lists the packages defined as obsolete by the current Exadata update; these packages are removed during the update when no action is taken. Packages manually added to this list are ignored. The following is a small sample of the obsolete.lst file:


[root@dbm03 yum]# cat obsolete.lst.09212019 
# Generated by dbnodeupdate.sh runid: 210919181148 
at.x86_64 
rhino.noarch 
jline.noarch 
jpackage-utils.noarch 
giflib.x86_64 
alsa-lib.x86_64 
xorg-x11-fonts-Type1.noarch 
prelink.x86_64 
biosconfig 
biosconfig_expat 
qlvnictools




To prevent a package listed in the obsolete.lst file from being removed, create the file /etc/exadata/yum/exclusion.lst and put in it the RPM names (wildcards are allowed) of the packages you want to keep. Place the /etc/exadata/yum/exclusion.lst file on every Exadata database server where you want to use it.

The following example shows a package added to the exclusion list:



[root@dbm03 yum]# cat exclusion.lst
# Generated by dbnodeupdate.sh runid: 210919181148 
at.x86_64 
rhino.noarch 
jline.noarch 
jpackage-utils.noarch 
giflib.x86_64 
alsa-lib.x86_64 
xorg-x11-fonts-Type1.noarch 
prelink.x86_64 
biosconfig 
biosconfig_expat 
qlvnictools



After you have added entries to the exclusion.lst file for the obsolete RPMs you want to keep and re-run the dbnodeupdate.sh utility, the utility detects the exclusion list. The RPM packages on the exclusion list are still shown in the obsolete.lst file, but they will not be removed during the update.
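Putting the steps together, the exclusion workflow can be sketched end to end. This illustration runs against a temporary directory rather than the live /etc/exadata/yum, and the package names come from the sample above:

```shell
# Sketch of the exclusion workflow, against a temp dir instead of the
# live /etc/exadata/yum; package names are from the sample above.
yumdir=$(mktemp -d)

# Stand-in for the generated obsolete.lst.
cat > "$yumdir/obsolete.lst" <<'EOF'
at.x86_64
rhino.noarch
jline.noarch
EOF

# Keep at.x86_64 and rhino.noarch across the update: list them (or
# wildcard patterns) in exclusion.lst; everything else is still removed.
printf '%s\n' 'at.x86_64' 'rhino.noarch' > "$yumdir/exclusion.lst"
cat "$yumdir/exclusion.lst"
```

On a real system the file would be written to /etc/exadata/yum/exclusion.lst on each database server before re-running dbnodeupdate.sh.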

The dbnodeupdate.sh tool now runs successfully!


Hope it helps, thanks for reading, please subscribe to this blog to stay updated with latest news on Oracle Autonomous Database Cloud Services and new articles.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/





Tuesday, September 17, 2019

Creating Oracle Autonomous Database Cloud Service !!



As we know, in a first, Oracle today announced an "Always Free" cloud account (click here for details) for trial usage. You can create an Oracle Cloud account and explore the exciting new features of Oracle Autonomous Database cloud services running on Exadata hardware with machine learning capabilities.


In this article, I will demonstrate how to create an Oracle Autonomous Transaction Processing cloud service under the Always Free trial account that Larry announced at OOW19 today.


Login to your Oracle Cloud account.

Give your Tenant name here:



Provide your tenant username/password to login.



By default, your login dashboard looks like the one below; click Create a Database to create an AUTONOMOUS TRANSACTION PROCESSING instance:




Choose the compartment you want your database deployed in, and give a display name and database name.



Transaction Processing is selected by default, since we are creating a TRANSACTION PROCESSING database. Leave the default options selected and proceed.


The Always Free option is OFF by default; turn it on, and the CPU core count and storage are automatically populated per the Always Free service limits. With Always Free you get 1 CPU core plus 20 GB of storage, and you can have a maximum of two instances of this shape for the Autonomous Database cloud service.



Enter the administrator credentials and leave the default option (License Included) selected.



If you want to add a tag key and value to the Autonomous Database instance you are creating, click "Show Advanced Options" and enter them as below.


Once all done, click "Create Autonomous Database" to start the instance provisioning job.





Our Autonomous Database is provisioned and ready now.




If you click the "Tags" tab, you can see the tag key/value pairs for this target.





Click on the "Work Requests" to see the request/job details:




Click on the "Log Messages" to see the instance initialization and its completion timestamp.



Click on the "Associated Resources" to see the related cloud services.





Click an associated resource to go to its home page.





Hope it helps, subscribe to this blog to stay updated on latest Oracle Technologies and new articles.

Twitter : https://twitter.com/rajsoft8899
Linkedin : https://www.linkedin.com/in/raj-kumar-kushwaha-5a289219/