CRS is already configured on this node:
While setting up a two-node RAC, node 2 went down in the middle of root.sh execution. On re-running root.sh on node 2, the script failed with the message "Cannot configure two CRS instances on the same cluster": the interrupted first run had left enough local clusterware configuration behind that root.sh treats CRS as already configured on the node. To get root.sh to complete, you have to de-configure the partial CRS stack on node 2 first and then run the script again, as follows.
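In outline, the fix is just two commands, both run as root from the Grid home on the failed node (the paths below match this environment; substitute your own Grid home if it differs):

[root@host02 grid]# crs/install/rootcrs.pl -verbose -deconfig -force    (Step 1: remove the partial CRS configuration on this node only)
[root@host02 grid]# ./root.sh                                           (Step 2: re-run the root script)

The full output of the failed run, the de-configuration, and the successful re-run follows.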
[root@host02 grid]# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oragrid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2015-07-22 01:16:26: Parsing the host name
2015-07-22 01:16:26: Checking for super user privileges
2015-07-22 01:16:26: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
CRS is already configured on this node for crshome=0
Cannot configure two CRS instances on the same cluster.
Please deconfigure before proceeding with the configuration of new home.
Step 1:
========= De-configure CRS =========
[root@host02 grid]# crs/install/rootcrs.pl -verbose -deconfig -force
2015-07-22 01:31:30: Parsing the host name
2015-07-22 01:31:30: Checking for super user privileges
2015-07-22 01:31:30: User has super user privileges
Using configuration parameter file: crs/install/crsconfig_params
VIP exists.:host01
VIP exists.: /192.9.201.254/192.9.201.254/255.255.255.0/eth0
VIP exists.:host02
VIP exists.: //192.9.201.187/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 23815, multicast IP address 234.216.11.17, listening port 2016
PRKO-2425 : VIP is already stopped on node(s): host02
ADVM/ACFS is not supported on oraclelinux-release-5-8.0.2
ACFS-9201: Not Supported
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host02'
CRS-2673: Attempting to stop 'ora.crsd' on 'host02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'host02'
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'host02'
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host02'
CRS-2677: Stop of 'ora.asm' on 'host02' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'host02' has completed
CRS-2677: Stop of 'ora.crsd' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'host02'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host02'
CRS-2673: Attempting to stop 'ora.evmd' on 'host02'
CRS-2673: Attempting to stop 'ora.asm' on 'host02'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host02'
CRS-2677: Stop of 'ora.cssdmonitor' on 'host02' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'host02' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host02' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host02' succeeded
CRS-2677: Stop of 'ora.asm' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host02'
CRS-2677: Stop of 'ora.cssd' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host02'
CRS-2673: Attempting to stop 'ora.diskmon' on 'host02'
CRS-2677: Stop of 'ora.gpnpd' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'host02'
CRS-2677: Stop of 'ora.diskmon' on 'host02' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'host02' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'host02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
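(The "error: package cvuqdisk is not installed" line can be ignored at this point; as the Step 2 output below shows, the re-run of root.sh installs cvuqdisk-1.0.7-1 itself.)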
Step 2:
========= Run root.sh again =========
[root@host02 grid]# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oragrid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2015-07-22 01:35:19: Parsing the host name
2015-07-22 01:35:19: Checking for super user privileges
2015-07-22 01:35:19: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on oraclelinux-release-5-8.0.2
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node host01, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'host02'
CRS-2676: Start of 'ora.mdnsd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'host02'
CRS-2676: Start of 'ora.gipcd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host02'
CRS-2676: Start of 'ora.gpnpd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host02'
CRS-2676: Start of 'ora.cssdmonitor' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host02'
CRS-2672: Attempting to start 'ora.diskmon' on 'host02'
CRS-2676: Start of 'ora.diskmon' on 'host02' succeeded
CRS-2676: Start of 'ora.cssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host02'
CRS-2676: Start of 'ora.ctssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host02'
CRS-2676: Start of 'ora.asm' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host02'
CRS-2676: Start of 'ora.crsd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'host02'
CRS-2676: Start of 'ora.evmd' on 'host02' succeeded
host02 2015/07/22 01:38:12 /u01/app/11.2.0/grid/cdata/host02/backup_20150722_013812.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4094 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Note: The re-run of the root.sh script on node 2 completed successfully.
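As an optional sanity check (not part of the original run), you can confirm from node 2 that the local stack is healthy and that both nodes are now cluster members; crsctl and olsnodes are standard 11gR2 Grid Infrastructure utilities:

[root@host02 grid]# /u01/app/11.2.0/grid/bin/crsctl check crs    (checks OHAS, CRS, CSS and EVM on the local node)
[root@host02 grid]# /u01/app/11.2.0/grid/bin/olsnodes -n         (lists the cluster nodes with their node numbers)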