
Wednesday, July 22, 2015

Cannot configure two CRS instances on the same cluster

CRS is already configured on this node:


While setting up a two-node RAC, node 2 went down in the middle of root.sh execution. On re-running root.sh on node 2, it failed with the message "Cannot configure two CRS instances on the same cluster", so in order to execute root.sh successfully you first have to de-configure the existing CRS stack on that node and then run the script again, as follows.



[root@host02 grid]# ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oragrid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2015-07-22 01:16:26: Parsing the host name
2015-07-22 01:16:26: Checking for super user privileges
2015-07-22 01:16:26: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
CRS is already configured on this node for crshome=0
Cannot configure two CRS instances on the same cluster.
Please deconfigure before proceeding with the configuration of new home.




Step : 1 -
=========De-configure CRS =========



[root@host02 grid]# crs/install/rootcrs.pl -verbose -deconfig -force
2015-07-22 01:31:30: Parsing the host name
2015-07-22 01:31:30: Checking for super user privileges
2015-07-22 01:31:30: User has super user privileges
Using configuration parameter file: crs/install/crsconfig_params
VIP exists.:host01
VIP exists.: /192.9.201.254/192.9.201.254/255.255.255.0/eth0
VIP exists.:host02
VIP exists.: //192.9.201.187/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 23815, multicast IP address 234.216.11.17, listening port 2016
PRKO-2425 : VIP is already stopped on node(s): host02

ADVM/ACFS is not supported on oraclelinux-release-5-8.0.2

ACFS-9201: Not Supported
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host02'
CRS-2673: Attempting to stop 'ora.crsd' on 'host02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'host02'
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'host02'
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host02'
CRS-2677: Stop of 'ora.asm' on 'host02' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'host02' has completed
CRS-2677: Stop of 'ora.crsd' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'host02'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host02'
CRS-2673: Attempting to stop 'ora.evmd' on 'host02'
CRS-2673: Attempting to stop 'ora.asm' on 'host02'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host02'
CRS-2677: Stop of 'ora.cssdmonitor' on 'host02' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'host02' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host02' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host02' succeeded
CRS-2677: Stop of 'ora.asm' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host02'
CRS-2677: Stop of 'ora.cssd' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host02'
CRS-2673: Attempting to stop 'ora.diskmon' on 'host02'
CRS-2677: Stop of 'ora.gpnpd' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'host02'
CRS-2677: Stop of 'ora.diskmon' on 'host02' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'host02' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'host02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node




Step : 2 -

========== Run root.sh again ==============



[root@host02 grid]# ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oragrid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2015-07-22 01:35:19: Parsing the host name
2015-07-22 01:35:19: Checking for super user privileges
2015-07-22 01:35:19: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on oraclelinux-release-5-8.0.2



CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node host01, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'host02'
CRS-2676: Start of 'ora.mdnsd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'host02'
CRS-2676: Start of 'ora.gipcd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host02'
CRS-2676: Start of 'ora.gpnpd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host02'
CRS-2676: Start of 'ora.cssdmonitor' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host02'
CRS-2672: Attempting to start 'ora.diskmon' on 'host02'
CRS-2676: Start of 'ora.diskmon' on 'host02' succeeded
CRS-2676: Start of 'ora.cssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host02'
CRS-2676: Start of 'ora.ctssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host02'
CRS-2676: Start of 'ora.asm' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host02'
CRS-2676: Start of 'ora.crsd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'host02'
CRS-2676: Start of 'ora.evmd' on 'host02' succeeded

host02     2015/07/22 01:38:12     /u01/app/11.2.0/grid/cdata/host02/backup_20150722_013812.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4094 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.



Note :- The re-run of the root.sh script on node 2 completed successfully.
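Optionally, you can confirm that node 2 has rejoined the cluster with the standard clusterware commands, run from the same grid home used above:

[root@host02 ~]# /u01/app/11.2.0/grid/bin/crsctl check crs      # CRS/CSS/EVM daemons should all report online
[root@host02 ~]# /u01/app/11.2.0/grid/bin/olsnodes -n           # both host01 and host02 should be listed
[root@host02 ~]# /u01/app/11.2.0/grid/bin/crsctl stat res -t    # tabular status of all cluster resources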


Wednesday, July 15, 2015

Oracle 11g R2 Grid Infrastructure/ASM Setup for Standalone Server Step By Step



Oracle Database Automatic Storage Management(ASM):
-----------------------------------------------------------------------------------



DBAs sometimes maintain thousands of datafiles for each database they manage, so an Oracle storage solution should provide both high-performance I/O and failure-proof storage hardware. In fact, file and I/O management is what usually takes up a large part of an Oracle DBA's time. With Oracle's ASM feature, you can automate traditional file management tasks. Under an ASM system, the Oracle DBA is in charge of the management of physical storage from within Oracle's framework, instead of relying on the system administrator. Using ASM disk groups, you can address sets of disks simultaneously instead of individual disks, and the database can dynamically reconfigure storage based on changing workloads. By allowing the Oracle DBA the flexibility to manage complex storage devices across various server and storage platforms, ASM becomes a crucial part of Oracle's grid computing initiative.


ASM is built on OMF (Oracle Managed Files), which means you don't have to worry about specifying filenames and locations when creating new databases - all you have to do is identify an ASM disk group, which consists of a set of disks. When you create a database or add a file, you can use the familiar CREATE, ALTER, and DROP SQL statements to allocate disk space. ASM acts as Oracle's built-in Logical Volume Manager by handling the striping and mirroring functions previously managed by third-party tools. Under ASM, disks are grouped and managed by the database itself and made available for creating tablespaces. You don't have to mount the files as with normal Linux or UNIX file systems. You also can't use traditional tools, such as cp and tar, to copy ASM files, nor can you list them using the ls command. The database holds all information regarding ASM files. If you use ASM for an Oracle file, the operating system can't see it, but RMAN and Oracle's other tools can.

For example, issue the following command:

SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '<filename>';

The file generated by the previous statement will display the names of any ASM files. Since ASM files use fully qualified names, you can also see the datafiles in views such as V$DATAFILE and V$LOGFILE.
When assigning a file to a tablespace or other object in an ASM file system, you don't need to know its name; you can simply refer to a disk group, and ASM automatically generates the filename.
Instead of learning a whole new set of commands to manage an ASM database, you can just use OEM Database Control to manage virtually all ASM operations. You can create a new ASM instance with DBCA or with the Oracle Universal Installer (which uses DBCA behind the scenes), and you can migrate an existing database to an ASM system with Database Control.
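For example, to create a tablespace on an ASM disk group you just name the disk group and let ASM generate the file name (the tablespace name here is only illustrative; the +DATA disk group itself is created later in this post):

SQL> CREATE TABLESPACE app_data DATAFILE '+DATA' SIZE 100M;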



Benefits of ASM
============

By using ASM, you can manage data by selecting reliability and performance characteristics for data classes, rather than working with the large storage systems on a per-file basis. An ASM file system offers the following benefits:

1 - ASM provides automatic load balancing over all the available disks, thus reducing hot spots in the file system.

2 - ASM prevents fragmentation of disks, so you don't need to manually relocate data to tune I/O performance.

3 - Adding disks is straightforward - ASM automatically performs online disk reorganization when you add or remove storage.

4 - ASM uses redundancy features available in intelligent storage arrays.

5 - The ASM storage system stores all types of database files.

6 - ASM makes your file management tasks easier, because you will be dealing with just a few groups of disks, rather than a multitude of database files. ASM automatically creates the database files and places them in appropriate disk groups.

7 - ASM does mirroring and striping, which in turn increases reliability and performance. You can select different reliability and performance characteristics for various types of data. For example, you can use fine-grained striping for redo log files and coarse-grained striping for regular datafiles.

8 - ASM is free!


Please refer to the book - Expert Oracle Database 11g Administration, by Sam R. Alapati, for detailed information.
===================================================


I have used the Oracle Enterprise Linux 5.8 platform for this paper, and all required prerequisites have already been completed on the host. I will show the step-by-step installation of Grid Infrastructure; it doesn't include the Oracle 11g R2 RDBMS installation, as the database binaries are already installed there.


This Paper has two parts:

1 - Configuring and Installing Oracle 11g R2 Grid Infrastructure.

2 - Create Oracle 11g R2 Database based on ASM storage.


Let's proceed with the 1st part: Configure and Install Grid Infrastructure


Step : 1 - Download the Oracle 11g R2 Grid Infrastructure and Oracle 11g R2 Database software from the Oracle portal [http://www.oracle.com/technetwork/database/clusterware/overview/index-096607.html]


linux_11gR2_grid.zip  --- Grid Infrastructure Software

linux_11gR2_database_1of2.zip   --- Database Software Part - I
linux_11gR2_database_2of2.zip   --- Database Software Part - II



[root@localhost disks]# rpm -qa | grep oracleasm
oracleasm-support-2.1.7-1.el5

Note : - As we are using OEL 5.8, the ASM library packages are already included by default, so there is no need to install them separately. All you need is the oracleasm-support package.


Step : 2 - In order to install Oracle 11g R2 Grid Infrastructure successfully, the OS RPM packages listed below need to be installed on the server.


32-bit (x86) Installations
------------------------

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-common-2.5
glibc-devel-2.5
glibc-headers-2.5
kernel-headers-2.6.18
ksh-20060214
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.2
libgomp-4.1.2
libstdc++-4.1.2
libstdc++-devel-4.1.2
make-3.81
pdksh-5.2.14
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-devel-2.2.11


Please refer to the Oracle doc - [http://docs.oracle.com/cd/E11882_01/install.112/e41961.pdf] for the complete OS-level configuration for Grid Infrastructure installation.
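A quick way to spot anything missing from the list above is to query rpm for each base package name (a rough sketch; the version suffixes are dropped, so adjust for your platform):

for pkg in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
           gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh \
           libaio libaio-devel libgcc libgomp libstdc++ libstdc++-devel \
           make pdksh sysstat unixODBC unixODBC-devel; do
    rpm -q "$pkg" > /dev/null 2>&1 || echo "MISSING: $pkg"   # report packages not installed
done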


Step : 3 - Create the appropriate OS groups and the oragrid user for the Grid installation.

[root@localhost ~]# groupadd oinstall
[root@localhost ~]# groupadd osdba
[root@localhost ~]# groupadd asmadmin
[root@localhost ~]# groupadd asmdba
[root@localhost ~]# groupadd asmoper
[root@localhost ~]# groupadd osoper


[root@localhost ~]# useradd -g oinstall -G asmdba,asmadmin,asmoper oragrid


[oragrid@localhost ~]$ id
uid=1101(oragrid) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1202(asmadmin),1203(asmoper)

[root@localhost ~]# useradd -g oinstall -G dba oracle   -- This account is used for the Oracle Database installation. Give oracle its own UID: 1101 is already taken by oragrid (see the id output above), so let useradd assign one or pass a distinct -u value.


Step : 4 - We will be installing Oracle 11g R2 GI under the "oragrid" user, which is separate from the Oracle database user (oracle). As we know, Oracle Automatic Storage Management needs devices not formatted with any file system, so we will be adding 4 new hard disks to the VM. The first two of them will be used for the OCR_DATA disk group, which has Normal redundancy in our case. The third disk will be used for the FRA disk group, and the 4th device will be used for the DATA disk group to store all the database files.

a) Click on VM at menu bar and click on Settings.



b) Now select Hard Disk at left panel and click on Add at bottom.



c) Again select Hard Disk and Click on Next.



d) Select SCSI(Recommended) Option and Click on Next.

e) Select "Create a new virtual disk" and click on Next.


f) Specify the disk size to be added - 2.00GB and click on Next.



g) Click on Finish, and you will see a new 2GB HDD added.



Following the above steps (a-g), add two more 2GB devices and another 20GB device.



Step : 5 - Restart the system so that the new devices show up in the output of the fdisk -l command.
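(If a reboot is inconvenient, many Linux systems can also pick up new virtual disks by rescanning the SCSI bus; whether this works depends on your kernel and virtual controller, so treat it as an optional shortcut:)

[root@localhost ~]# echo "- - -" > /sys/class/scsi_host/host0/scan   # repeat for host1, host2, ... as present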



Step : 6 - Create partitions of the devices you added above. 

[root@localhost ~]# fdisk -l

Disk /dev/sda: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        5235    41945715   83  Linux
/dev/sda3            5236        7846    20972857+  83  Linux
/dev/sda4            7847       10443    20860402+   5  Extended
/dev/sda5            7847        8368     4192933+  82  Linux swap / Solaris
/dev/sda6            8369        8559     1534176   83  Linux

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table


The disks shown above (/dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde) are the hard disk devices you added in step 4. Now let's create a partition on each of them, one by one.


a) [root@localhost ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n   -- n creates a new partition

Command action
   e   extended
   p   primary partition (1-4)
p  --  p selects a primary partition
Partition number (1-4): 1  -- partition number 1
First cylinder (1-261, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-261, default 261):   -- press Enter to accept the default value
Using default value 261

Command (m for help): w  -- w saves the partition table and quits
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.


======= Partition the other three devices listed above in the same way =======
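If you'd rather not repeat the interactive prompts, a common (if blunt) shortcut is to feed fdisk the same answers on stdin. The sequence below mirrors the keystrokes used above (n, p, 1, two defaults, w); it is only a sketch, so double-check the device names before running it:

for dev in /dev/sdc /dev/sdd /dev/sde; do
    printf "n\np\n1\n\n\nw\n" | fdisk "$dev"   # new primary partition 1, default start/end, write
done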



Now you will see that partitions have been created on all the devices, as follows:

[root@localhost ~]# fdisk -l

Disk /dev/sda: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        5235    41945715   83  Linux
/dev/sda3            5236        7846    20972857+  83  Linux
/dev/sda4            7847       10443    20860402+   5  Extended
/dev/sda5            7847        8368     4192933+  82  Linux swap / Solaris
/dev/sda6            8369        8559     1534176   83  Linux

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         261     2096451   83  Linux

Disk /dev/sdc: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         261     2096451   83  Linux

Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         261     2096451   83  Linux

Disk /dev/sde: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        2610    20964793+  83  Linux


Step : 7 - Create Oracle ASM disks so that they are available to Oracle ASM for creating the disk groups used for grid/database storage.


[root@localhost ~]# oracleasm createdisk DISK1 '/dev/sdb1'
Writing disk header: done
Instantiating disk: done

[root@localhost ~]# oracleasm createdisk DISK2 '/dev/sdc1'
Writing disk header: done
Instantiating disk: done

[root@localhost ~]# oracleasm createdisk DISK3 '/dev/sdd1'
Writing disk header: done
Instantiating disk: done

[root@localhost ~]# oracleasm createdisk DISK4 '/dev/sde1'
Writing disk header: done
Instantiating disk: done
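(Equivalently, the four commands above can be collapsed into a small loop; just a convenience sketch assuming the same label/device mapping:)

i=1
for dev in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1; do
    oracleasm createdisk DISK$i "$dev"   # labels DISK1..DISK4 in order
    i=$((i+1))
done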



Step : 8 - Configure the owner of the Oracle ASM library driver, set the driver to load on start-up, and enable automatic scanning of Oracle ASM disks on system reboot so that you don't have to start the ASM services manually.


[root@localhost ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [oragrid]: 
Default group to own the driver interface [oinstall]: 
Start Oracle ASM library driver on boot (y/n) [y]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done



Note : In this case oracleasm was already configured for the user oragrid, so I didn't provide any input for the default user and default group; if in your case the current values are wrong (for example, root), provide the appropriate user and group. Now if you go to the physical location of the disks created, you will see their owner has changed to oragrid, as follows.

[root@localhost disks]# pwd
/dev/oracleasm/disks

[root@localhost disks]# ls -ltr
total 0
brw-rw---- 1 oragrid oinstall 8, 65 Jul 14 15:09 DISK4
brw-rw---- 1 oragrid oinstall 8, 49 Jul 14 15:09 DISK3
brw-rw---- 1 oragrid oinstall 8, 33 Jul 14 15:09 DISK2
brw-rw---- 1 oragrid oinstall 8, 17 Jul 14 15:09 DISK1


Note : If the owner and group above don't match the user/group under which you are running Oracle Universal Installer to install Grid Infrastructure, you won't be able to see the list of disks on the disk group creation page. If this is misconfigured, no disks will appear in the GUI even if you change the disk discovery path, so make sure you have configured oracleasm (oracleasm configure -i) with the correct user and group.



Step : 9 - Check to see the list of ASM disks created.


[root@localhost ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@localhost ~]# 


[root@localhost ~]# oracleasm listdisks
DISK1
DISK2
DISK3
DISK4

Fine, the ASM disks are created now.
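You can also map a label back to its underlying block device with querydisk (the -p flag, available in recent oracleasm-support releases, prints the matching device paths; plain querydisk just validates the label):

[root@localhost ~]# oracleasm querydisk -p DISK1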


Step : 10 - Now log in as the oragrid user. (I have created this separate user to run the ASM instance; we will be using the oracle user for the Oracle database instance.)

Navigate to the appropriate installer location and run ./runInstaller

[oragrid@localhost grid]$ ./runInstaller 
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 80 MB.   Actual 998 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4094 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-07-14_12-34-17PM. Please wait ...




Select "Install and Configure Grid Infrastructure for a Standalone Server" and click on Next >> again click on Next.



Well, no disks are listed here to be used to create a disk group, so we will change the disk discovery path to where the Oracle ASM disks are physically located. Click on "Change Discovery Path", enter /dev/oracleasm/disks (this is where your Oracle ASM disks physically exist) in the pop-up box, and click on OK.


Now we can see that all the disks are listed.


To store the OCR and voting disk, we will create the OCR_DATA disk group with "Normal Redundancy" using two disks; Normal Redundancy requires at least two disks so that data is mirrored across them and ASM can survive a single disk failure. Click on Next.
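(For reference, what the installer does here corresponds roughly to the following SQL on the ASM instance - a sketch, not the literal commands OUI runs; 'ORCL:' is the discovery prefix that ASMLib-labeled disks get:)

SQL> CREATE DISKGROUP OCR_DATA NORMAL REDUNDANCY
  2  DISK 'ORCL:DISK1', 'ORCL:DISK2';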



Here I have chosen the same password for the SYS and ASMSNMP accounts, but if you want them separate, choose accordingly.


Oracle recommends using a complex password, so it flashes a warning message if we choose a simple one; you can ignore it and click on Next.


Click on Next.


Click on Next.



These checks are all ignorable here, so just click on Next, but don't compromise on a production server :-)




Step : 11 - Finally, it asks you to run the root.sh script as the root user in order to complete the installation process. Run it, then click on OK.


[root@localhost ~]# /u01/app/oragrid/product/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oragrid
    ORACLE_HOME=  /u01/app/oragrid/product/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: 
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: 
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: 


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2015-07-14 13:07:21: Checking for super user privileges
2015-07-14 13:07:21: User has super user privileges
2015-07-14 13:07:21: Parsing the host name
Using configuration parameter file: /u01/app/oragrid/product/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE 
Creating OCR keys for user 'oragrid', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node localhost successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on oraclelinux-release-5-8.0.2




localhost     2015/07/14 13:07:47     /u01/app/oragrid/product/11.2.0/grid/cdata/localhost/backup_20150714_130747.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4094 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.


Finally, click on the Close button.


Step : 12 - Now let's verify our Grid Infrastructure installation.

[oragrid@localhost ~]$ ps -ef | grep pmon
oragrid  10309 10151  0 17:03 pts/1    00:00:00 grep pmon
oragrid  10618     1  0 13:15 ?        00:00:03 asm_pmon_+ASM


[oragrid@localhost ~]$ ps -ef | grep css
oragrid  10311     1  0 13:14 ?        00:00:08 /u01/app/oragrid/product/11.2.0/grid/bin/cssdagent
oragrid  10348     1  1 13:14 ?        00:02:31 /u01/app/oragrid/product/11.2.0/grid/bin/ocssd.bin 
oragrid  10367 10151  0 17:03 pts/1    00:00:00 grep css

[oragrid@localhost ~]$ ps -ef | grep asm
oragrid  10618     1  0 13:15 ?        00:00:04 asm_pmon_+ASM
oragrid  10622     1  0 13:15 ?        00:00:46 asm_vktm_+ASM
oragrid  10628     1  0 13:15 ?        00:00:00 asm_gen0_+ASM
oragrid  10632     1  0 13:15 ?        00:00:01 asm_diag_+ASM
oragrid  10636     1  0 13:15 ?        00:00:00 asm_psp0_+ASM
oragrid  10640     1  0 13:15 ?        00:00:09 asm_dia0_+ASM
oragrid  10644     1  0 13:15 ?        00:00:00 asm_mman_+ASM
oragrid  10648     1  0 13:15 ?        00:00:03 asm_dbw0_+ASM
oragrid  10652     1  0 13:15 ?        00:00:01 asm_lgwr_+ASM
oragrid  10656     1  0 13:15 ?        00:00:01 asm_ckpt_+ASM
oragrid  10660     1  0 13:15 ?        00:00:00 asm_smon_+ASM
oragrid  10664     1  0 13:15 ?        00:00:03 asm_rbal_+ASM
oragrid  10668     1  0 13:15 ?        00:00:10 asm_gmon_+ASM
oragrid  10672     1  0 13:15 ?        00:00:01 asm_mmon_+ASM
oragrid  10676     1  0 13:15 ?        00:00:03 asm_mmnl_+ASM


We can see the ASM instance is running. Let's log in and validate the instance.

[oragrid@localhost ~]$ . oraenv
ORACLE_SID = [] ? +ASM
The Oracle base for ORACLE_HOME=/u01/app/oragrid/product/11.2.0/grid is /u01/app/oragrid

[oragrid@localhost ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.1.0 Production on Tue Jul 14 17:05:59 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Automatic Storage Management option

SQL> set linesize 220
SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME                                                        VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT LOGINS     SHU DATABASE_STATUS
--------------- ---------------- ---------------------------------------------------------------- ----------------- --------- ------------ --- ---------- ------- --------------- ---------- --- -----------------
INSTANCE_ROLE      ACTIVE_ST BLO
------------------ --------- ---
              1 +ASM             localhost.localdomain                                            11.2.0.1.0        14-JUL-15 STARTED      NO           0 STOPPED               ALLOWED    NO  ACTIVE
UNKNOWN            NORMAL    NO


SQL> select name from v$asm_diskgroup;

NAME
------------------------------
OCR_DATA
DATA
FRA


[oragrid@localhost ~]$ asmcmd

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     20473    19017                0           19017              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576      2047     1735                0            1735              0             N  FRA/
MOUNTED  NORMAL  N         512   4096  1048576      4094     3974                0            1987              0             N  OCR_DATA/


Note : - I have created two other disk groups, named DATA and FRA, to be used for database creation in part 2. I will explain how to create and manage an Oracle ASM disk group in a separate post.
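(For reference, pending that post, disk groups like these can be created from the ASM instance with SQL along the following lines - the disk mapping is assumed from the sizes in the lsdg output above, DISK4 for the ~20GB DATA group and DISK3 for FRA:)

SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DISK4';
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK 'ORCL:DISK3';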

The Oracle 11g R2 Grid Infrastructure installation is now complete.



Let's proceed to create a database based on the above Automatic Storage Management.

As discussed earlier, we have a separate user for the Oracle database instance, and the Oracle binaries are already installed. So, in order to create a database based on ASM storage, let's just switch to the oracle user and run DBCA from its home.


Step : 1 - Launch DBCA and click on Next. On the Create Database page, click on Next.

Step : 2 - Select General Purpose or Transaction Processing >> Click on Next.

Step : 3 - Provide the database name: testdb >> Click on Next.

Step : 4 -  Click on Next.

Step : 5 - Provide passwords and click on Next.

Step : 6 - Select Automatic Storage Management for the Storage Type option. For Storage Locations, select Use Oracle-Managed Files >> click on Browse, select the appropriate disk group (DATA in our case) to be used for the Oracle database files, and click on OK. >> Finally, click on Next and provide the password you gave earlier in step 5.

Step : 7 - Select the Flash Recovery Area option, click on Browse to choose the appropriate disk group to be used for the FRA, check the Enable Archiving option, and click on Next.

Step : 8 - Click on Next.

Step : 9 - Click on Next.

Step : 10 - Click on Next.

Step : 11 - Click on Finish and then OK in order to start database creation.
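(As an aside, roughly the same click-through can be done non-interactively with DBCA's silent mode; the flags below are from the 11.2 DBCA help and the values mirror this walkthrough, so verify against dbca -help before relying on it:)

[oracle@localhost ~]$ dbca -silent -createDatabase \
    -templateName General_Purpose.dbc \
    -gdbName testdb -sid testdb \
    -sysPassword <sys_pwd> -systemPassword <system_pwd> \
    -storageType ASM -diskGroupName DATA \
    -recoveryGroupName FRA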



Now the database creation is complete, so let's verify its pmon process.

[oracle@localhost ~]$ ps -ef | grep pmon
oragrid  10618     1  0 13:15 ?        00:00:04 asm_pmon_+ASM
oracle   16770     1  0 17:43 ?        00:00:00 ora_pmon_testdb
oracle   26885 10204  0 18:02 pts/2    00:00:00 grep pmon


Note :- Here you can see that the ASM instance is running under the oragrid user and the Oracle database instance is running under the oracle user.


[oracle@localhost ~]$ . oraenv   -- to set the correct environment
ORACLE_SID = [asmdb] ? testdb  -- entered testdb in order to set its environment
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 is /u01/app/oracle
[oracle@localhost ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.1.0 Production on Tue Jul 14 18:04:06 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> set linesize 220
SQL> select name from v$datafile;

NAME
----------------------------------------------------------------
+DATA/testdb/datafile/system.260.885058597
+DATA/testdb/datafile/sysaux.265.885058597
+DATA/testdb/datafile/undotbs1.264.885058597
+DATA/testdb/datafile/users.263.885058599

SQL> show parameter control

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_file_record_keep_time        integer     7
control_files                        string      +DATA/testdb/controlfile/curre
                                                 nt.262.885058851, +FRA/testdb/
                                                 controlfile/current.256.885058
                                                 853
control_management_pack_access       string      DIAGNOSTIC+TUNING
SQL> 


SQL> select group#, status, member from v$logfile;

    GROUP# STATUS  MEMBER
---------- ------- ------------------------------------------------------------
         3         +DATA/testdb/onlinelog/group_3.258.885058861
         3         +FRA/testdb/onlinelog/group_3.257.885058863
         2         +DATA/testdb/onlinelog/group_2.259.885058859
         2         +FRA/testdb/onlinelog/group_2.258.885058859
         1         +DATA/testdb/onlinelog/group_1.261.885058855
         1         +FRA/testdb/onlinelog/group_1.259.885058857



So our database has finally been created on Automatic Storage Management (ASM) storage.


Hope it helps someone!


Friday, July 3, 2015

Summary of Tasks for Which SRVCTL Is Used



SRVCTL is used to manage databases, instances, cluster databases, cluster database instances, Oracle ASM instances and disk groups, services, listeners, and other clusterware resources.

Cluster Database Configuration Tasks:
– Add, modify, and delete cluster database configuration information.
– Add an instance or a service to, and delete an instance or service from, the configuration of a cluster database.
– Move instances and services in a cluster database configuration and modify service configurations.
– Set and unset the environment for an instance or service in a cluster database configuration.
– Set and unset the environment for an entire cluster database in a cluster database configuration.

General Cluster Database Administration Tasks:
– Start and stop cluster databases.
– Start and stop cluster database instances.
– Start, stop, and relocate cluster database services.
– Obtain statuses of cluster databases, cluster database instances, or cluster database services.

Node-Level Tasks:
– Add and delete node-level applications, server pools, and VIPs.
– Set and unset the environment for node-level applications.
– Administer disk groups.
– Administer server pools.
– Administer node applications.
– Administer Oracle ASM instances.
– Start and stop a group of programs that includes virtual IP addresses (VIPs), listeners, and Oracle Notification Services.
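A few representative examples (the database, instance, and service names below are placeholders):

srvctl status database -d racdb                                -- status of all instances of a cluster database
srvctl start instance -d racdb -i racdb2                       -- start one instance
srvctl relocate service -d racdb -s oltp -i racdb1 -t racdb2   -- move a service between instances
srvctl status nodeapps                                         -- status of node applications (VIP, ONS, etc.)
srvctl start asm -n host02                                     -- start ASM on a given node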

Refer to this doc for complete details - http://docs.oracle.com/cd/E11882_01/rac.112/e41960.pdf