Saturday 4 April 2015

Oracle Enterprise Linux 5 (OEL5) Silent Mode Installation

Oracle Enterprise Linux 5 (OEL5) 

In this post we will walk through a silent-mode installation of Oracle Enterprise Linux 5.4 (OEL5).

Overview:
This post is a comprehensive guide to the silent installation of the Oracle Enterprise Linux 5 (OEL5) x64 operating system environment.
Please keep in mind that this post should not be considered a substitute for the official installation guide and release notes from Oracle.

Wednesday 1 April 2015

RAC Voting Disk Management

Voting disk Management in RAC

The voting disk is a small file (about 20MB) that acts as an arbiter to track and ensure the availability of nodes in a clustered environment. It resides on shared storage and must be accessible by all nodes in the cluster.

For high availability, Oracle recommends that you have a minimum of three voting disks. In 10g, Clusterware supports up to 32 voting disks; in 11gR2, it supports up to 15. Prior to 11gR2 RAC, voting disks could be placed on raw devices; in 11gR2 RAC, they can be placed on ASM disks. Oracle Clusterware can access the voting disks present in ASM even if the ASM instance is down.

While registering their own presence, the nodes also record in the voting disk information about their ability to communicate with the other nodes, as observed through the network heartbeat exchanged over the private interconnect.

All nodes in the cluster register their heartbeat information in the voting disk at a predetermined interval (every second) to confirm that they are all operational. If a node's heartbeat information is not available in the voting disk, that node will be evicted from the cluster.

The CSS (Cluster Synchronization Services) daemon in the clusterware writes each node's heartbeat to the voting disk and monitors the health of the RAC nodes using two distinct heartbeats: the network heartbeat and the disk heartbeat. When a node is unable to send its heartbeat to the voting disk, it reboots itself to avoid split-brain syndrome.

A node in the cluster must be able to access more than half of the voting disks at any time in order to survive. To tolerate the failure of n voting disks, you therefore need at least 2n+1 of them, which is why it is strongly recommended that you configure an odd number of voting disks, such as 3, 5, and so on.

For example, consider a two-node cluster with an even number of voting disks, say 2. Suppose Node1 can access only voting disk 1 and Node2 can access only voting disk 2. Then there is no common file where the clusterware can check the heartbeat of both nodes. With 3 voting disks, if both nodes can access more than half (i.e. 2) of them, there will be at least one disk accessible by both nodes, and the clusterware can use that disk to check the heartbeat of both nodes.
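The majority arithmetic above can be sketched in a few lines of shell. This is an illustration only, not an Oracle tool: for N voting disks, a node must access strictly more than half of them, i.e. at least floor(N/2)+1, so the number of disk failures the cluster tolerates is N minus that majority.

```shell
# Illustrative sketch of the voting-disk quorum rule (not an Oracle utility).
# majority = floor(n/2) + 1; tolerated failures = n - majority.
for n in 1 2 3 4 5; do
  majority=$(( n / 2 + 1 ))
  tolerated=$(( n - majority ))
  echo "disks=$n majority=$majority tolerated_failures=$tolerated"
done
```

Note that an even count buys nothing: 4 disks tolerate the same single failure as 3, which is why odd numbers are recommended.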

The CSSD process on each RAC node maintains its heartbeat in a dedicated block (one OS block in size) of the voting disk. Healthy nodes continuously exchange both network and disk heartbeats; a break in the heartbeat indicates a possible error scenario. If a node's disk block is not updated within a short timeout period, that node is considered unhealthy and may be rebooted to protect the database's information. In that case, a message to this effect is written to the node's kill block. Each node reads its kill block once per second; if the kill block has been overwritten, the node commits suicide (reboots).
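The timeout check described above can be illustrated with a small shell sketch. This is not CSSD code; the epoch values are made up, and while 200 seconds matches the default CSS disk timeout in recent releases, you should verify the value on your own system with crsctl get css disktimeout.

```shell
# Illustrative sketch of a disk-heartbeat timeout check (not actual CSSD code).
last_beat=100      # made-up epoch seconds of the node's last voting-disk write
now=350            # made-up current epoch seconds
disktimeout=200    # assumed CSS disk timeout; check: crsctl get css disktimeout
age=$(( now - last_beat ))
if [ "$age" -gt "$disktimeout" ]; then
  echo "node unhealthy: no disk heartbeat for ${age}s"
else
  echo "node healthy"
fi
```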

In this way we can see that voting disks contain both static and dynamic data.
Static data: information about the nodes in the cluster
Dynamic data: disk heartbeat logging


Split Brain Syndrome:

In an Oracle RAC environment, split-brain syndrome occurs when the instances in a RAC fail to ping/connect to each other via the private interconnect, while the servers are all physically up and running and the database instance on each server is also running. The problem with leaving these instances running is that the same block might be read and updated in each individual instance, causing data integrity issues: blocks changed in one instance would not be locked and could be overwritten by another instance. Oracle Clusterware implements checks to detect and resolve split-brain syndrome.


Clusterware utility for voting disk administration:

The crsctl utility is used to manage and monitor Oracle Clusterware resources and components. crsctl is located under the GRID_HOME/bin directory.
The following administrative tasks are performed on the voting disks:

1) Backing up voting disks:-
 

In previous versions of Oracle Clusterware you needed to back up the voting disks with the dd command. Starting with Oracle Clusterware 11g Release 2, you no longer need to back up the voting disks: they are automatically backed up as part of the OCR. In fact, Oracle explicitly indicates that you should not use a backup tool like dd to back up or restore voting disks; doing so can lead to the loss of the voting disk.

2) Adding voting disks:-
 

First shut down Oracle Clusterware on all nodes, and then run the following command as the root user:
# crsctl add css votedisk <path of voting disk>


3) Removing a voting disk:
 

First shut down Oracle Clusterware on all nodes, and then run the following command as the root user:
# crsctl delete css votedisk <path of voting disk>


4) Moving voting disks
 
Note: crsctl votedisk commands must be run as root.
Shut down Oracle Clusterware (crsctl stop crs as root) on all nodes before making any modification to the voting disks. Determine the current voting disk location using:
# crsctl query css votedisk

Then add a voting disk at the new location and delete the voting disk at the old location (in 11gR2, with voting disks in ASM, you can instead use # crsctl replace votedisk <+diskgroup>).


After modifying the voting disks, start the Oracle Clusterware stack on all nodes:
# crsctl start crs


5) Deleting voting disks:
 
# crsctl delete css votedisk <path of voting disk>


Thanks.
Please comment.