
Thursday, May 19, 2016

ORA-39013: Remapping the SYSTEM schema is not supported.



Scenario : I took a Data Pump export of the SYSTEM schema in an Oracle 12c CDB (container database) and tried to import that dump file into a pluggable database, PDBTEST, but the import failed with the error "Remapping the SYSTEM schema is not supported."


Let's take a look.
----------------

1 - Export the SYSTEM schema from the Oracle 12c CDB as shown below.


E:\app\oracle\product\12.1.0\dbhome_1\NETWORK\ADMIN>expdp system/redhat directory=system_dir dumpfile=system.dmp logfile=system.log

Export: Release 12.1.0.1.0 - Production on Thu May 19 13:53:56 2016

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

WARNING: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/******** directory=system_dir dumpfile=system.dmp logfile=system.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  C:\SYSTEM_DIR\SYSTEM.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Thu May 19 13:54:32 2016 elapsed 0 00:00:30





SQL> conn system/redhat@pdbtest
Connected.


SQL> show con_name

CON_NAME
------------------------------
PDBTEST


Now, let's try to import the exported SYSTEM schema from the Oracle 12c CDB into the TEST schema of one of its pluggable databases, PDBTEST, as illustrated below.
--------------------------------------------------------------------------------------------------------------

E:\app\oracle\product\12.1.0\dbhome_1\NETWORK\ADMIN>impdp system/redhat@pdbtest directory=PDBTEST_DATA_PUMP_DIR dumpfile=system.dmp logfile=test.log remap_schema=system:test

Import: Release 12.1.0.1.0 - Production on Thu May 19 14:00:02 2016

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORA-39001: invalid argument value
ORA-39013: Remapping the SYSTEM schema is not supported.



-- Here we hit the error saying that remapping the SYSTEM schema is not supported.



Solution : Retry the import without the REMAP_SCHEMA clause, as illustrated below.


E:\app\oracle\product\12.1.0\dbhome_1\NETWORK\ADMIN>impdp system/redhat@pdbtest directory=PDBTEST_DATA_PUMP_DIR dumpfile=system.dmp logfile=test.log

Import: Release 12.1.0.1.0 - Production on Thu May 19 14:13:19 2016

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@pdbtest directory=PDBTEST_DATA_PUMP_DIR dumpfile=system.dmp logfile=test.log
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at Thu May 19 14:13:22 2016 elapsed 0 00:00:03


The import completed successfully this time.


Import in Oracle 12c PDB from Oracle 11g Export Dump:


I have taken a backup dump file (scott.dmp) of the SCOTT schema from an Oracle 11g database and will import that dump file into the TEST schema of an Oracle 12c pluggable database named "pdbtestdb".

In order to import into a pluggable database in Oracle 12c, we have to connect through a service; in my case the service pdbtestdb is created for my PDB.

I connected to the pluggable database pdbtestdb and created a directory "TEST_DIR" (as we have been doing in previous releases of Oracle Database) pointing to the location of the export dump file taken from the Oracle 11g database, then tried the usual method to import into the Oracle 12c PDB.


The import failed with the error below.

[oracle@localhost bin]$ impdp system/xxx@pdbtestdb directory=TEST_DIR dumpfile=scott.dmp  remap_schema=scott:test logfile=test.log

Import: Release 12.1.0.1.0 - Production on Wed May 18 13:50:00 2016

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39087: directory name TEST_DIR is invalid



Note : In Oracle Database 12c it appears to look for a directory named in the <TNS_ALIAS>_DATA_PUMP_DIR format, so we need to create the directory in the Oracle 12c pluggable database as PDBTESTDB_DATA_PUMP_DIR, as illustrated below.

SQL> conn /as sysdba
Connected.

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

Now, set the session container to the pluggable database in which the directory will be created to import the exported dump file.

SQL> alter session set container = pdbtestdb;

Session altered.



SQL> show con_name

CON_NAME
------------------------------
PDBTESTDB

Now we are connected to the PDBTESTDB pluggable database, so let's create the directory there.


SQL> create directory PDBTESTDB_DATA_PUMP_DIR AS '/u01/app/11g_exp';

Directory created.




SQL> GRANT READ, WRITE ON DIRECTORY PDBTESTDB_DATA_PUMP_DIR TO SYSTEM;

Grant succeeded.
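Before rerunning the import, it may be worth confirming from inside the PDB that the directory object exists and points to the right path. A quick check, assuming the directory name created above (directory objects are per-container, so this must be run while connected to PDBTESTDB):

```sql
-- Run while connected to the PDBTESTDB container.
SELECT directory_name, directory_path
FROM   dba_directories
WHERE  directory_name = 'PDBTESTDB_DATA_PUMP_DIR';
```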




Finally, let's try to import the dump file into the Oracle 12c pluggable database using the new directory created above.


[oracle@localhost bin]$ impdp system/xxx@pdbtestdb directory=PDBTESTDB_DATA_PUMP_DIR dumpfile=scott.dmp  remap_schema=scott:test logfile=test.log

Import: Release 12.1.0.1.0 - Production on Wed May 18 13:58:20 2016

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@pdbtestdb directory=PDBTESTDB_DATA_PUMP_DIR dumpfile=scott.dmp remap_schema=scott:test logfile=test.log
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"TEST" already exists  -- This can be ignored, as I pre-created the TEST user in the pluggable database.
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."DEPT"                               5.937 KB       4 rows
. . imported "TEST"."EMP"                                8.570 KB      14 rows
. . imported "TEST"."EMPBK"                              8.570 KB      14 rows
. . imported "TEST"."SALGRADE"                           5.867 KB       5 rows
. . imported "TEST"."BONUS"                                  0 KB       0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at Wed May 18 13:58:31 2016 elapsed 0 00:00:09


The import completed successfully.


Tuesday, May 17, 2016

CSS is not ready. Received status 3 from CSS. Waiting for good status ..


Today I was playing with a 2-node Oracle RAC cluster when the 1st node went down suddenly. I could see the following entries in the CRS log file.


CRS log file:
========

2016-05-17 09:58:18.944: [ CSSCLNT][3038205648]clssscConnect: gipc request failed with 29 (0x16)
2016-05-17 09:58:18.944: [ CSSCLNT][3038205648]clsssInitNative: connect failed, rc 29
2016-05-17 09:58:18.944: [  CRSRTI][3038205648] CSS is not ready. Received status 3 from CSS. Waiting for good status ..

2016-05-17 09:58:19.945: [ CSSCLNT][3038205648]clssscConnect: gipc request failed with 29 (0x16)
2016-05-17 09:58:19.945: [ CSSCLNT][3038205648]clsssInitNative: connect failed, rc 29
2016-05-17 09:58:19.945: [  CRSRTI][3038205648] CSS is not ready. Received status 3 from CSS. Waiting for good status .. 




From OCSSD log file:
==============

==> ocssd.log <==
2016-05-17 10:27:16.996: [    CSSD][2998332304]clssgmclientlsnr: listening on clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_rac1_)(GIPCID=ef2b0e79-00000000-5893))
2016-05-17 10:27:16.996: [    GPnP][3038959296]clsgpnp_Init: [at clsgpnp0.c:404] gpnp tracelevel 3, component tracelevel 0
2016-05-17 10:27:16.996: [    GPnP][3038959296]clsgpnp_Init: [at clsgpnp0.c:534] '/u01/app/11.2.0/grid' in effect as GPnP home base.
2016-05-17 10:27:17.079: [GIPCCLSA][2998332304]gipcmodClsaCompleteAccept: failed on clsaauthstart ret clsaretOSD (8), endp 0x8b26a10 [000000000000002f] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_rac1_)(GIPCID=fa51a6a2-77bcb030-5893))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_rac1_)(GIPCID=77bcb030-fa51a6a2-5834))', numPend 5, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x16ca, pidPeer 0, flags 0x603710, usrFlags 0x14000 }
2016-05-17 10:27:17.079: [GIPCCLSA][2998332304]gipcmodClsaCompleteAccept: slos op  :  mkdir
2016-05-17 10:27:17.079: [GIPCCLSA][2998332304]gipcmodClsaCompleteAccept: slos dep :  No space left on device (28)
2016-05-17 10:27:17.079: [GIPCCLSA][2998332304]gipcmodClsaCompleteAccept: slos loc :  authprep6
2016-05-17 10:27:17.079: [GIPCCLSA][2998332304]gipcmodClsaCompleteAccept: slos info:  failed to make dir /u01/app/11.2.0/grid/auth/css/rac1/A2526853
2016-05-17 10:27:17.079: [GIPCXCPT][2998332304]gipcmodMuxTransferAccept: internal accept request failed endp 0x8b25c30 [000000000000001b] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_rac1_)(GIPCID=ef2b0e79-00000000-5893))', remoteAddr '', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, flags 0x30613, usrFlags 0x10010 }, ret gipcretAuthFail (22)
2016-05-17 10:27:17.079: [ GIPCMUX][2998332304]gipcmodMuxTransferAccept: EXCEPTION[ ret gipcretAuthFail (22) ]  error during accept on endp 0x8b25c30 [000000000000001b] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_rac1_)(GIPCID=ef2b0e79-00000000-5893
[root@rac1 cssd]#



Cause : The ocssd log file shows "No space left on device (28)", and when I checked the mount point /u01 I found it was 100% full.


Resolution : I released space on the /u01 mount and rebooted node 1; all cluster services came up automatically, as expected, at the subsequent reboot.
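Before clearing space, it helps to confirm the full mount and see what is consuming it. A quick sketch, assuming the /u01 mount point and 11.2 Grid home path used above:

```shell
#!/bin/sh
# Confirm utilization of the Grid Infrastructure mount point.
df -h /u01

# List the largest top-level consumers under /u01, biggest first.
du -xsh /u01/* 2>/dev/null | sort -rh | head

# Old CRS/listener trace files are frequent space hogs; list big, old ones.
find /u01/app/11.2.0/grid/log -name '*.trc' -mtime +30 -size +10M 2>/dev/null
```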





Another cause and resolution:
====================

Sometimes, after maintenance tasks, when we try to bring up all CRS services we get a similar kind of issue because stale lock/socket files exist inside the /tmp or /var/tmp/.oracle directory.


[root@rac1 tmp]# pwd
/tmp

We can see the following files inside the /var/tmp/.oracle directory. Just log in as the root user, navigate to /tmp/.oracle or /var/tmp/.oracle, delete them all (rm -rf *), and reboot the server.

[root@rac1 tmp]# ls -ltr /var/tmp/.oracle/
total 0
prw-r--r-- 1 root    root     0 Mar 13 15:37 npohasd
srwxrwxrwx 1 oragrid oinstall 0 Mar 16 01:28 s#6509.2
srwxrwxrwx 1 oragrid oinstall 0 Mar 16 01:28 s#6509.1
srwxrwxrwx 1 oragrid oinstall 0 May 12 08:33 s#6543.2
srwxrwxrwx 1 oragrid oinstall 0 May 12 08:33 s#6543.1
srwxrwxrwx 1 oragrid oinstall 0 May 12 08:33 s#6571.2
srwxrwxrwx 1 oragrid oinstall 0 May 12 08:33 s#6571.1
srwxrwxrwx 1 oragrid oinstall 0 May 12 08:41 s#6481.2
srwxrwxrwx 1 oragrid oinstall 0 May 12 08:41 s#6481.1
srwxrwxrwx 1 oragrid oinstall 0 May 12 08:48 s#7342.2
srwxrwxrwx 1 oragrid oinstall 0 May 12 08:48 s#7342.1
-rw-r--r-- 1 oragrid oinstall 0 May 12 10:08 ora_gipc_GPNPD_rac1_lock
srwxrwxrwx 1 oragrid oinstall 0 May 12 10:09 s#6486.2
srwxrwxrwx 1 oragrid oinstall 0 May 12 10:09 s#6486.1
srwxrwxrwx 1 oragrid oinstall 0 May 12 10:11 s#7133.2
srwxrwxrwx 1 oragrid oinstall 0 May 12 10:11 s#7133.1
srwxrwxrwx 1 oragrid oinstall 0 May 15 14:29 ora_gipc_GPNPD_rac1
srwxrwxrwx 1 root    root     0 May 15 14:30 srac1DBG_CTSSD
srwxrwxrwx 1 oragrid oinstall 0 May 15 14:30 srac1DBG_EVMD
srwxrwxrwx 1 root    root     0 May 15 14:30 sprocr_local_conn_0_PROC
srwxrwxrwx 1 oragrid oinstall 0 May 15 14:30 sSYSTEM.evm.acceptor.auth
srwxrwxrwx 1 oragrid oinstall 0 May 15 14:30 sCevm
srwxrwxrwx 1 oragrid oinstall 0 May 15 14:30 sAevm
srwxrwxrwx 1 root    root     0 May 15 14:30 sCRSD_IPC_SOCKET_11
srwxrwxrwx 1 root    root     0 May 15 14:30 sora_crsqs
srwxrwxrwx 1 root    root     0 May 15 14:30 sCRSD_UI_SOCKET
srwxrwxrwx 1 oragrid oinstall 0 May 15 19:07 sLISTENER
srwxrwxrwx 1 oragrid oinstall 0 May 15 19:07 s#10780.2
srwxrwxrwx 1 oragrid oinstall 0 May 15 19:07 s#10780.1
-rw-r--r-- 1 oragrid oinstall 0 May 17 09:52 sOCSSD_LL_rac1__lock
srwxrwxrwx 1 root    root     0 May 17 09:57 srac1DBG_CRSD
srwxrwxrwx 1 root    root     0 May 17 10:10 srac1DBG_OHASD
srwxrwxrwx 1 root    root     0 May 17 10:10 sprocr_local_conn_0_PROL
srwxrwxrwx 1 root    root     0 May 17 10:10 sOHASD_UI_SOCKET
srwxrwxrwx 1 root    root     0 May 17 10:10 sOHASD_IPC_SOCKET_11
srwxrwxrwx 1 oragrid oinstall 0 May 17 10:10 srac1DBG_MDNSD
srwxrwxrwx 1 oragrid oinstall 0 May 17 10:10 srac1DBG_GIPCD
srwxrwxrwx 1 oragrid oinstall 0 May 17 10:12 srac1DBG_GPNPD
srwxrwx--- 1 oragrid oinstall 0 May 17 10:14 master_diskmon
srwxrwxrwx 1 oragrid oinstall 0 May 17 10:14 srac1DBG_CSSD
srwxrwxrwx 1 oragrid oinstall 0 May 17 10:14 sOCSSD_LL_rac1_


[root@rac1 tmp]# cd /var/tmp/.oracle/
[root@rac1 .oracle]# pwd
/var/tmp/.oracle

[root@rac1 .oracle]# rm -rf *

[root@rac1 .oracle]# ls -ltr
total 0



Note : In my case the underlying mount /u01 of GRID_HOME was exhausted, and releasing the space resolved the issue.

Sunday, May 15, 2016

Audit Connect :: Simplest Feature to Audit Oracle users logon / logoff



In the previous post - [ Click Here ], I outlined how we can audit database users' logon and logoff activities using database triggers. Here we will use the built-in database auditing feature for the same purpose, so there is no need to code database triggers for it.



Here we will use the simplest way to audit database users' logon/logoff information: the Oracle Database built-in feature AUDIT CONNECT.



Log in as SYSDBA and execute the following command to enable auditing of Oracle database users' logon/logoff activity.


SQL> audit connect;

Audit succeeded.
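Note that audit records only land in the database audit trail when the AUDIT_TRAIL initialization parameter is set to DB (or DB,EXTENDED). A quick sanity check, and the statement to switch this auditing off again later:

```sql
-- Records reach DBA_AUDIT_TRAIL only when this shows DB or DB,EXTENDED.
SHOW PARAMETER audit_trail

-- To stop auditing session logon/logoff later:
NOAUDIT CONNECT;
```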


Let's connect with SCOTT user to check if it audits the logon.

SQL> conn scott
Enter password: 
Connected.

SQL> disc
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


SQL> !date
Sun May 15 19:25:16 IST 2016

SQL> conn /as sysdba
Connected.


Let's now fetch the records from the DBA_AUDIT_TRAIL view for the user SCOTT.


SQL> select OS_USERNAME, USERNAME, USERHOST, EXTENDED_TIMESTAMP, DBID from dba_audit_trail where username='SCOTT';

OS_USERNAME     USERNAME        USERHOST                  EXTENDED_TIMESTAMP                             DBID
--------------- --------------- ------------------------- ---------------------------------------- ----------
aime            SCOTT           dadvfa1015                14-AUG-09 12.05.45.081543 PM +05:30      4023503584
aime            SCOTT           dadvfa1015                14-AUG-09 12.05.45.266236 PM +05:30      4023503584
oracle          SCOTT           rac2.rajdbsolutions.com   15-MAY-16 03.08.01.824926 PM +05:30       515168651
oracle          SCOTT           rac2.rajdbsolutions.com   15-MAY-16 03.08.32.194633 PM +05:30       515168651
oracle          SCOTT           rac2.rajdbsolutions.com   15-MAY-16 03.08.39.830102 PM +05:30       515168651
oracle          SCOTT           rac2.rajdbsolutions.com   15-MAY-16 03.15.52.717458 PM +05:30       515168651
oracle          SCOTT           rac2.rajdbsolutions.com   15-MAY-16 03.15.54.017694 PM +05:30       515168651
oracle          SCOTT           rac2.rajdbsolutions.com   15-MAY-16 03.34.03.395507 PM +05:30       515168651
oracle          SCOTT           rac2.rajdbsolutions.com   15-MAY-16 03.34.38.646395 PM +05:30       515168651
oracle          SCOTT           rac2.rajdbsolutions.com   15-MAY-16 07.24.10.688440 PM +05:30       515168651
oracle          SCOTT           rac2.rajdbsolutions.com   15-MAY-16 07.24.12.903179 PM +05:30       515168651
oracle          SCOTT           rac2.rajdbsolutions.com   15-MAY-16 07.25.12.692897 PM +05:30       515168651
oracle          SCOTT           rac2.rajdbsolutions.com   15-MAY-16 07.25.14.894100 PM +05:30       515168651
oracle          SCOTT           rac2.rajdbsolutions.com   15-MAY-16 07.40.10.746310 PM +05:30       515168651
oracle          SCOTT           rac2.rajdbsolutions.com   15-MAY-16 07.40.21.921919 PM +05:30       515168651





The following information can be obtained from DBA_AUDIT_TRAIL:


SQL> desc dba_audit_trail;
 Name                                Null?    Type
 ----------------------------------- -------- ------------------------
 OS_USERNAME                                  VARCHAR2(255)
 USERNAME                                     VARCHAR2(30)
 USERHOST                                     VARCHAR2(128)
 TERMINAL                                     VARCHAR2(255)
 TIMESTAMP                                    DATE
 OWNER                                        VARCHAR2(30)
 OBJ_NAME                                     VARCHAR2(128)
 ACTION                              NOT NULL NUMBER
 ACTION_NAME                                  VARCHAR2(28)
 NEW_OWNER                                    VARCHAR2(30)
 NEW_NAME                                     VARCHAR2(128)
 OBJ_PRIVILEGE                                VARCHAR2(16)
 SYS_PRIVILEGE                                VARCHAR2(40)
 ADMIN_OPTION                                 VARCHAR2(1)
 GRANTEE                                      VARCHAR2(30)
 AUDIT_OPTION                                 VARCHAR2(40)
 SES_ACTIONS                                  VARCHAR2(19)
 LOGOFF_TIME                                  DATE
 LOGOFF_LREAD                                 NUMBER
 LOGOFF_PREAD                                 NUMBER
 LOGOFF_LWRITE                                NUMBER
 LOGOFF_DLOCK                                 VARCHAR2(40)
 COMMENT_TEXT                                 VARCHAR2(4000)
 SESSIONID                           NOT NULL NUMBER
 ENTRYID                             NOT NULL NUMBER
 STATEMENTID                         NOT NULL NUMBER
 RETURNCODE                          NOT NULL NUMBER
 PRIV_USED                                    VARCHAR2(40)
 CLIENT_ID                                    VARCHAR2(64)
 ECONTEXT_ID                                  VARCHAR2(64)
 SESSION_CPU                                  NUMBER
 EXTENDED_TIMESTAMP                           TIMESTAMP(6) WITH TIME ZONE
 PROXY_SESSIONID                              NUMBER
 GLOBAL_UID                                   VARCHAR2(32)
 INSTANCE_NUMBER                              NUMBER
 OS_PROCESS                                   VARCHAR2(16)
 TRANSACTIONID                                RAW(8)
 SCN                                          NUMBER
 SQL_BIND                                     NVARCHAR2(2000)
 SQL_TEXT                                     NVARCHAR2(2000)
 OBJ_EDITION_NAME                             VARCHAR2(30)
 DBID                                         NUMBER



Auditing Logon/Logoff of all users in Oracle Database using Triggers.

As part of the monthly database health check activity, you may be asked by your customer to provide audit information on all database users' logon and logoff times.


Here I will explain a simple way to do it using two database triggers.




First of all, create a table, audit_log (in my case), as follows.

SQL> create table audit_log(name varchar2(50), time date, action varchar2(50));

Table created.


-- Create database logon trigger --

SQL> create or replace trigger logon_trig
after logon on database
begin
   insert into audit_log values (user, sysdate, 'LOGON');
   commit;
end logon_trig;
/

Trigger created.


-- Create database logoff trigger --


SQL> create or replace trigger logoff_trig
before logoff on database
begin
   insert into audit_log values (user, sysdate, 'LOGOFF');
   commit;
end logoff_trig;
/

Trigger created.


Notes: 

- Beware of logon triggers. If one fails, ordinary users may not be able to log on to Oracle at all.

- You must have the CREATE (ANY) TRIGGER and ADMINISTER DATABASE TRIGGER privileges to implement DATABASE triggers.
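If a logon trigger ever starts failing and blocking logons, it can be disabled from a SYSDBA session (a failing logon trigger does not block users holding the ADMINISTER DATABASE TRIGGER privilege, so SYSDBA can still get in). A sketch using the trigger name created above:

```sql
-- Connect as SYSDBA, then disable (or drop) the offending trigger.
ALTER TRIGGER logon_trig DISABLE;
-- Or, to remove it entirely:
-- DROP TRIGGER logon_trig;
```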




After creating the logon and logoff triggers on the database, I connected to the database with the SCOTT user and disconnected again, so that the logon and logoff activity gets caught in the audit_log table by the triggers created above.

SQL> !date
Sun May 15 15:15:45 IST 2016

SQL> conn scott  
Enter password:
Connected.

SQL> disc
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


SQL> conn /as sysdba
Connected.

SQL> !date
Sun May 15 15:16:15 IST 2016



SQL> alter session set nls_date_format='dd/mm/yy hh24:mi:ss';

Session altered.


SQL> select * from audit_log;


NAME            TIME              ACTION
--------------- ----------------- ------------------------------
SYS             15/05/16 15:14:14 LOGOFF
SYS             15/05/16 15:15:14 LOGON
SYS             15/05/16 15:15:14 LOGOFF
SYS             15/05/16 15:15:52 LOGOFF
SCOTT           15/05/16 15:15:52 LOGON
SCOTT           15/05/16 15:15:54 LOGOFF
SYS             15/05/16 15:15:59 LOGON
SYS             15/05/16 15:16:14 LOGON
SYS             15/05/16 15:16:14 LOGOFF


Now you can see that the SCOTT user's logon and logoff activity was captured in the audit_log table by the triggers created above.





How to compress SCAN_LISTENER log file in Oracle RAC environment?

Sometimes we need to compress SCAN listener log files to release space on the underlying mount point due to huge growth in their size.

We follow the steps below to compress a SCAN listener log file.


Step 1 : Log in as the Grid Infrastructure user (it may be oracle as the RDBMS user, or oragrid/grid in your environment; you can easily find it using ps -ef | grep tns).

In my case ASM/GI is running as the oragrid user.


[root@rac1 ~]# ps -ef | grep pmon
oragrid   5734     1  0 14:30 ?        00:00:00 asm_pmon_+ASM1
oracle    6440     1  0 14:31 ?        00:00:00 ora_pmon_ractst1
root      7145  7110  0 14:33 pts/1    00:00:00 grep pmon




[root@rac1 ~]# ps -ef | grep tns
root         9     2  0 14:27 ?        00:00:00 [netns]
oragrid   6279     1  0 14:30 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oragrid   6281     1  0 14:30 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
root      7196  7110  0 14:34 pts/1    00:00:00 grep tns



Step 2 : Invoke LSNRCTL command line utility from GRID_HOME/bin directory.


Step 3 : Set the current listener to the name of the SCAN listener whose log file you want to compress, as depicted below.


LSNRCTL> set current_listener LISTENER_SCAN1
Current Listener is LISTENER_SCAN1
LSNRCTL>


Now, if we check the status of the listener, the current listener is set to LISTENER_SCAN1.

STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date                15-MAY-2016 14:31:00
Uptime                    0 days 0 hr. 5 min. 30 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/11.2.0/grid/log/diag/tnslsnr/rac1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.105)(PORT=1521)))
Services Summary...
Service "ractst" has 2 instance(s).
  Instance "ractst1", status READY, has 1 handler(s) for this service...
  Instance "ractst2", status READY, has 1 handler(s) for this service...
Service "ractstXDB" has 2 instance(s).
  Instance "ractst1", status READY, has 1 handler(s) for this service...
  Instance "ractst2", status READY, has 1 handler(s) for this service...
The command completed successfully





Step 4 : Set the log_status to OFF for the current listener.


LSNRCTL> set log_status off
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
LISTENER_SCAN1 parameter "log_status" set to OFF
The command completed successfully



Step 5 : Now, go to the listener log location (in my case the current listener is LISTENER_SCAN1, and its log file location can be obtained from the listener status above, i.e. /u01/app/11.2.0/grid/log/diag/tnslsnr/rac1/listener_scan1).


[oragrid@rac1 trace]$ pwd
/u01/app/11.2.0/grid/log/diag/tnslsnr/rac1/listener_scan1/trace


[oragrid@rac1 trace]$ ls -ltr
total 220
-rw-r----- 1 oragrid oinstall 221096 May 15 14:37 listener_scan1.log


Step 6 : Compress or rename the SCAN LISTENER log file now.

 [oragrid@rac1 trace]$ gzip listener_scan1.log 

[oragrid@rac1 trace]$ ls -ltr
total 12
-rw-r----- 1 oragrid oinstall 11457 May 15 14:37 listener_scan1.log.gz  -- Compressed now.



Step 7 : Go back to the LSNRCTL prompt and turn ON the log_status parameter.

Note : If you are working at the same prompt, you will again have to set the current listener to the same SCAN listener name to turn ON the logging that you turned OFF in step 4.

LSNRCTL> set current_listener LISTENER_SCAN1
Current Listener is LISTENER_SCAN1

LSNRCTL> set log_status on
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
LISTENER_SCAN1 parameter "log_status" set to ON
The command completed successfully
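Steps 3 through 7 can also be run non-interactively from a single script. A sketch, assuming the LISTENER_SCAN1 name and the trace path shown above:

```shell
#!/bin/sh
# Pause listener logging, compress the current log, then resume logging.
LOG_DIR=/u01/app/11.2.0/grid/log/diag/tnslsnr/rac1/listener_scan1/trace

lsnrctl <<EOF
set current_listener LISTENER_SCAN1
set log_status off
EOF

gzip "$LOG_DIR/listener_scan1.log"   # old log kept as listener_scan1.log.gz

lsnrctl <<EOF
set current_listener LISTENER_SCAN1
set log_status on
EOF
```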


Now you will see that a new SCAN listener log file has been created at that location.

[oragrid@rac1 trace]$ pwd
/u01/app/11.2.0/grid/log/diag/tnslsnr/rac1/listener_scan1/trace

[oragrid@rac1 trace]$ ls -ltr
total 16
-rw-r----- 1 oragrid oinstall 11457 May 15 14:37 listener_scan1.log.gz
-rw-r----- 1 oragrid oinstall    73 May 15 14:53 listener_scan1.log



Hope this helps !!