Friday, July 17, 2015

How to Add a RAC Node to an Oracle Cluster

A. Verify Prerequisites:

Check the system hardware requirements; make sure they match the existing nodes (recommended).

RAM size (min 2.5 GB; 36 GB to match the other nodes):
/usr/sbin/lsattr -E -l sys0 -a realmem
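On a node with 36 GB of RAM, the output should look similar to the below (the value is reported in KB; 37748736 KB = 36 GB):
realmem 37748736 Amount of usable physical memory in Kbytes False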


Swap space (min 16 GB; 24 GB to match the other nodes):
/usr/sbin/lsps -s
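On a node with 24 GB of swap, the output should look similar to the below (the percent-used figure is illustrative):
Total Paging Space   Percent Used
      24576MB               1%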


Temp directory space (min 1 GB; 4 GB to match the other nodes):
/usr/bin/df -k /tmp


Compare the size of the /u01 mount point with the existing nodes to match the size of the software installation location:
df -g

1. Create groups and users:
Make sure the following user and groups exist.
User: oracle    Groups: oinstall, dba


Run the below command on racattack18u:
$ id oracle
If they don't exist, then create them:
$ mkgroup id='267' adms='root' oinstall
$ mkgroup id='264' adms='root' dba
$ mkuser id='501' pgrp='oinstall' groups='dba' home='/u01' oracle


These IDs match the current production IDs.
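With the group and user IDs above, the verification on racattack18u should return output similar to:
$ id oracle
uid=501(oracle) gid=267(oinstall) groups=264(dba)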

2. Create required directories
ORACLE_BASE
mkdir -p /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle

ORACLE_HOME
mkdir -p /u01/products/rdbms_11203
chown oracle:oinstall /u01/products/rdbms_11203
chmod -R 775 /u01/products/rdbms_11203

GRID_HOME
mkdir -p /u01/products/grid_11203
chown root:oinstall /u01/products/grid_11203
chmod -R 775 /u01/products/grid_11203
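To verify the ownership and permissions afterwards (the sizes and timestamps below are illustrative):
$ ls -ld /u01/app/oracle /u01/products/grid_11203 /u01/products/rdbms_11203
drwxrwxr-x  2 oracle oinstall  256 Jul 17 10:00 /u01/app/oracle
drwxrwxr-x  2 root   oinstall  256 Jul 17 10:00 /u01/products/grid_11203
drwxrwxr-x  2 oracle oinstall  256 Jul 17 10:00 /u01/products/rdbms_11203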


3. UDP and TCP Kernel Parameters:
Contact the AIX team and request that they verify the kernel parameters are set up as recommended by Oracle.

http://docs.oracle.com/cd/E11882_01/install.112/e48294/preaix.htm#CWAIX418
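As a quick self-check before contacting the AIX team, the ephemeral port ranges that Oracle recommends for 11gR2 on AIX (9000 to 65500 for both TCP and UDP) can be displayed with the below command; the values shown are the recommended ones:
$ /usr/sbin/no -a | fgrep ephemeral
tcp_ephemeral_high = 65500
tcp_ephemeral_low = 9000
udp_ephemeral_high = 65500
udp_ephemeral_low = 9000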

4. Check the network requirements
Contact the Network team and request that the network be set up as recommended by Oracle for a cluster environment.

http://docs.oracle.com/cd/E11882_01/install.112/e48294/preaix.htm#CWAIX196

An Oracle database cluster node must have the below IP addresses:
Public IP address
Private IP address
Virtual IP address
Single client access name (SCAN)


Current configuration on racattack16u and racattack17u (can be found in the /etc/hosts file):

Public IP address:
10.10.10.148     racattack16u
10.10.10.150     racattack17u


Private IP address:
10.10.10.71     racattack16u
10.10.10.72     racattack17u


Virtual IP address:
10.10.10.149  racora3.gmail.com   racora3 # ORA Virt. Address A
10.10.10.151  racora4.gmail.com   racora4 # ORA Virt. Address B


SCAN IP addresses for the RAC cluster (addresses selected round-robin in DNS):
# 10.10.10.147 \
# 10.10.10.54  } rac-scan
# 10.10.10.155 /


In the current production cluster setup, the single client access name is defined as "rac-scan"; the new node racattack18u should be included in it.
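The /etc/hosts entries for the new node would follow the same pattern as the existing nodes. The below is only a sketch with placeholder addresses (use the addresses assigned by the Network team; racora5 is the virtual hostname used in the addnode step later):
10.10.10.xxx     racattack18u                      # Public
10.10.10.xxx     racattack18u                      # Private
10.10.10.xxx     racora5.gmail.com   racora5       # ORA Virt. Address C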

Once the configuration is done, run the below command to confirm:
$ nslookup rac-scan
The output should be similar on all three nodes.


Current output:
oracle@racattack17u:rac2:/u01 $ nslookup rac-scan
Server:         10.10.10.37
Address:        10.10.10.53


Name:   rac-scan.li-sec.state.pa.us
Address: 10.10.10.155
Name:   rac-scan.li-sec.state.pa.us
Address: 10.10.10.154
Name:   rac-scan.li-sec.state.pa.us
Address: 10.10.10.147


5. Make sure that all the ASM disks can be accessed by the oracle user on the new node.

Contact the Storage team and request that all the storage devices currently used by ASM on the production cluster database be presented to racattack18u, with their ownership set to the oracle user.

Once done, the DBA can verify this using the below steps:

On racattack16u:
$ export ORACLE_SID=+ASM1
$ . oraenv
$ sqlplus / as sysasm


SQL> select path from v$asm_disk;
/dev/asm_128G_P0_1
/dev/asm_128G_P0_2
/dev/asm_128G_P0_3
/dev/asm_128G_P10_1
/dev/asm_128G_P1_1
/dev/asm_128G_P1_2
/dev/asm_128G_P1_3
... (list truncated; 39 disks are available in total right now)


Exit out of SQL*Plus, then run:
$ ls -ltr /dev/asm_*


Now connect to racattack18u and run the below command:
$ ls -ltr /dev/asm_*


Make sure the result of the "ls" command is the same on racattack16u and racattack18u.
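One way to compare the two listings automatically (a sketch; it captures the permissions, owner, group, and device name from each node and diffs them, so no output from diff means they match):
$ ls -l /dev/asm_* | awk '{print $1, $3, $4, $NF}' > /tmp/asm_racattack16u.txt
$ ssh racattack18u 'ls -l /dev/asm_*' | awk '{print $1, $3, $4, $NF}' > /tmp/asm_racattack18u.txt
$ diff /tmp/asm_racattack16u.txt /tmp/asm_racattack18u.txt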

6. Configure ssh and set up user equivalency. Make sure you are able to connect as the oracle user to racattack16u and racattack17u from racattack18u using ssh without providing a password.

As the oracle user, run the below commands on racattack18u:
$ cd /u01/.ssh
$ mv authorized_keys2 authorized_keys2.old


Now, as the oracle user, run the below commands on racattack16u:
$ cd /u01/.ssh
$ scp authorized_keys2 oracle@racattack18u:/u01/.ssh/
$ ssh racattack18u
(You should be able to connect without being prompted for a password.)
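To confirm user equivalency in one pass, a simple loop run as the oracle user from racattack18u (and repeated from the other nodes) should print the date from every host without a single password prompt:
$ for host in racattack16u racattack17u racattack18u
> do
>   ssh $host date
> done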


NOTE: In case something is missed in the prerequisites, it will be detected automatically when we run the "cluvfy" utility in the below steps and can be fixed at that point.

B. Verify the node add procedure using the Cluster Verification Utility:

1. From an existing node (racattack16u), perform the post-hardware and OS check:
$ cluvfy stage -post hwos -n racattack18u


2. From an existing node, run the below command to compare one of the existing nodes with the new node:
$ cluvfy comp peer -refnode racattack16u -n racattack18u -orainv oinstall -osdba dba -verbose


3. Run the below command to verify the integrity of the cluster and the new node:
$ cluvfy stage -pre nodeadd -n racattack18u -verbose


We may receive the PRVF-5449 error; it is an Oracle bug, and the workaround is to set the below environment variable:

export IGNORE_PREADDNODE_CHECKS=Y

C. Add Grid Home to the new Node:

Run the below addnode.sh command from racattack16u to add the new node to the cluster. This runs the Oracle Universal Installer in silent mode and copies the Oracle Grid Infrastructure binaries from the existing node to the new node.


$GRID_HOME/oui/bin/addnode.sh "CLUSTER_NEW_NODES={racattack18u}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racora5}"


At the end of the script, it will prompt you to run scripts as the root user on the new node racattack18u. Connect to a new session as root and run the suggested scripts. Once the root scripts are executed successfully, hit Enter on the current screen to complete the GRID_HOME installation on the new node.

If this is successful, we can see the new node as part of the cluster, and along with this it starts the clusterware daemons, ASM instance, listener, VIP, and SCAN services on the new node.

Run the below commands to verify:

$ ps -ef|grep grid|grep -v grep   (on the new node racattack18u)
$ ps -ef|grep d.bin   (on the new node racattack18u)
$ crsctl check crs   (on the new node racattack18u)
$ crsctl stat res -t   (from any node)
$ olsnodes   (from any node)
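After a successful addition, olsnodes should list all three nodes:
$ olsnodes
racattack16u
racattack17u
racattack18u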


D. Add Oracle Home to the new Node:

Run the below addnode.sh command from racattack16u to add the new node to the cluster. This runs the Oracle Universal Installer in silent mode and copies the Oracle RDBMS binaries from the existing node to the new node.

$ORACLE_HOME/oui/bin/addnode.sh "CLUSTER_NEW_NODES={racattack18u}"


At the end of the script, it will prompt you to run scripts as the root user on the new node racattack18u. Connect to a new session as root and run the suggested scripts, then hit Enter on the current screen to complete the ORACLE_HOME installation on the new node.

E. Add an instance to the database for the newly added node

Invoke DBCA from one of the existing nodes and add a new instance to the database:
RAC database --> Instance Management --> Add an instance


Repeat this process for all the databases running in the cluster; a silent-mode alternative is sketched below.
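The same step can be scripted with DBCA in silent mode. The below is only a sketch: PRODDB and PRODDB3 are placeholder database and instance names, so replace them with the actual values for each database:
$ dbca -silent -addInstance -nodeList racattack18u -gdbName PRODDB -instanceName PRODDB3 -sysDBAUserName sys -sysDBAPassword <password>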

F. Verification of Node addition:

Run the below commands from one of the existing nodes (racattack16u) to confirm that the new node and instance are added successfully.

$ cluvfy stage -post nodeadd -n racattack18u -verbose
$ olsnodes
$ crsctl stat res -t
        
Connect to the new node racattack18u and run the below commands to check the details of the newly added node:

$ crsctl check crs
$ ps -ef|grep d.bin
$ ps -ef|grep pmon

Go through the directories of ORACLE_HOME, ORACLE_BASE and GRID_HOME.

Connect to Oracle database instance using sqlplus.
Connect to the ASM instance using asmcmd and sqlplus.
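As a final check, srvctl should report the new instance running on racattack18u (PRODDB below is a placeholder database name):
$ srvctl status database -d PRODDB
Instance PRODDB1 is running on node racattack16u
Instance PRODDB2 is running on node racattack17u
Instance PRODDB3 is running on node racattack18u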


 
