Adding Nodes to an Oracle 12c Flex Cluster


Introduction:

Oracle Database with the Real Application Clusters (RAC) option allows multiple database instances to run on different physical or virtual servers while sharing the database files residing on shared storage. In RAC the database is deployed on multiple cluster nodes, but to the application it appears as a single unified database, and the workload is shared across the cluster nodes.

Starting with Oracle 12c, Oracle introduced a new cluster option, the "Flex Cluster", which consists of Hub nodes and Leaf nodes. In my previous articles I illustrated how to configure a flex cluster using the GNS service:

http://www.toadworld.com/platforms/oracle/w/wiki/11508.oracle-12c-flex-cluster-installation-using-widows-dnsdhcp-server-part-i

http://www.toadworld.com/platforms/oracle/w/wiki/11509.oracle-12c-flex-cluster-installation-using-widows-dnsdhcp-server-part-ii

One of the major benefits of Oracle Real Application Clusters (RAC) is scalability. Whenever database servers run out of computing capacity in a traditional non-clustered environment, we need downtime to upgrade CPU or RAM. This is not the case with RAC: new cluster nodes can be added on the fly without impacting running operations. The application load can then be distributed across the cluster nodes automatically, or server pools can be used to control how the load is distributed.

In this article we will see how to add nodes to an existing flex cluster environment. Adding nodes to a flex cluster is almost identical to adding nodes to a standard cluster, but there are certain differences, which are highlighted below.

Environment Details:

  • flexnode1 to flexnode5 are the existing cluster nodes.
  • flexnode6 and flexnode8 are the new cluster nodes.

S.No  Node Name   Public IP      Private IP   SW Version  Node Description
----  ---------   ------------   ----------   ----------  -------------------------------
1     DANODE1     192.168.2.1    -            2008        DNS/DHCP server for the cluster
2     flexnode1   192.168.2.81   10.10.2.81   12.1.0.1    Cluster Hub Node
3     flexnode2   192.168.2.82   10.10.2.82   12.1.0.1    Cluster Hub Node
4     flexnode3   192.168.2.83   10.10.2.83   12.1.0.1    Cluster Hub Node
5     flexnode4   192.168.2.84   10.10.2.84   12.1.0.1    Cluster Leaf Node
6     flexnode5   192.168.2.85   10.10.2.85   12.1.0.1    Cluster Leaf Node
7     flexnode6   192.168.2.86   10.10.2.86   12.1.0.1    New Cluster Hub Node
8     flexnode8   192.168.2.88   10.10.2.88   12.1.0.1    New Cluster Leaf Node

The following diagram illustrates the node addition to an existing flex cluster environment:

Before we proceed with the addition of cluster nodes, let's recall some key points about flex clusters:

1 - Only Hub nodes run an ASM instance, so shared storage needs to be configured for Hub nodes only.

2 - Leaf nodes do not require a connection to the shared storage.

3 - Virtual IPs are allocated to Hub nodes only.

Pre-requisite steps:

1 - The new cluster nodes being added to the existing cluster environment should have identical computing resources.

2 - All operating system prerequisites should be completed before adding the nodes to the cluster:

  •   OS users and groups
  •   Kernel parameters
  •   All required OS packages installed
  •   OS file system directories for the Grid and RDBMS homes
  •   Shared devices configured on the Hub nodes
  •   Public and private network interfaces configured on all Hub and Leaf nodes

3 - Enable SSH across all cluster nodes.

4 - Verify that all public and private network interfaces can communicate with each other.

5 - Add entries for the new cluster nodes in the GNS sub-domain.

6 - Execute cluvfy to identify any missing prerequisites.

Note: In a flex cluster we should not add the VIPs to the /etc/hosts file of any cluster node; they are assigned to the cluster nodes by the GNS service.
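As a quick sanity check for that note, a small helper like the one below can flag leftover VIP entries in /etc/hosts on the new nodes. This is a hypothetical sketch, not part of the Oracle tooling; it is demonstrated against a sample file, and on a real node you would pass /etc/hosts.

```shell
# check_no_vip FILE: warn if FILE contains VIP host entries.
# In a flex cluster GNS assigns the VIPs, so none should be hard-coded here.
check_no_vip() {
  if grep -qi "vip" "$1"; then
    echo "WARNING: VIP entries found in $1 -- remove them, GNS manages the VIPs"
  else
    echo "OK: no VIP entries in $1"
  fi
}

# Demonstrated against a sample file; on a cluster node pass /etc/hosts.
printf '192.168.2.86 flexnode6.dbamaze.com flexnode6\n' > /tmp/hosts.sample
check_no_vip /tmp/hosts.sample
```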


In this demonstration I will not cover the detailed prerequisite steps such as installing RPMs, setting kernel parameters, and so on. I will focus on the steps that are specific to the flex cluster.

Add new cluster node entries to the GNS Subdomain:

A flex cluster itself requires GNS to be configured, and all virtual name resolution is done through the GNS/DNS server, so we must add all new host entries to the GNS sub-domain.

- flexnode6 and flexnode8 are the new cluster nodes that will join the cluster.

Configure shared storage on HUB nodes:

We should attach the same shared devices that are presented to the other Hub nodes and configure the ASM library.

Configure ASM library:

[root@flexnode6 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oragrid
Default group to own the driver interface []: dbagrid
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@flexnode6 ~]#

Scan ASM disks:

[root@flexnode6 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DISK1"
Unable to instantiate disk "DISK1"
Instantiating disk "DISK2"
Unable to instantiate disk "DISK2"
Instantiating disk "DISK3"
Unable to instantiate disk "DISK3"
[root@flexnode6 ~]#

- It is still unable to instantiate the disks.

Restart ASM library service:

[root@flexnode6 ~]# service oracleasm restart
Dropping Oracle ASMLib disks: [ OK ]
Shutting down the Oracle ASMLib driver: [ OK ]
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@flexnode6 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@flexnode6 ~]#

- List ASM disks:

[root@flexnode6 ~]# oracleasm listdisks
DISK1
DISK2
DISK3
[root@flexnode6 ~]#
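Every Hub node must see the same set of ASM disks. A quick way to compare the `oracleasm listdisks` output from an existing Hub node with the new one is to compare the sorted lists, as in the illustrative sketch below (the disk names match this environment; capturing the lists over SSH is shown only in comments).

```shell
# same_disks "LIST1" "LIST2": succeed when both whitespace-separated disk
# lists contain the same disk names, regardless of order.
same_disks() {
  [ "$(printf '%s\n' $1 | sort)" = "$(printf '%s\n' $2 | sort)" ]
}

# On a live cluster the lists could be captured over SSH, e.g.:
#   existing=$(ssh flexnode1 oracleasm listdisks)
#   new=$(ssh flexnode6 oracleasm listdisks)
existing="DISK1 DISK2 DISK3"
new="DISK3 DISK1 DISK2"
if same_disks "$existing" "$new"; then
  echo "disk lists match"
else
  echo "disk lists DIFFER -- check the shared storage on the new Hub node"
fi
```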

SSH Setup on all cluster nodes:

SSH must be set up between all cluster nodes, so we have to configure SSH between the existing cluster nodes and the new nodes joining the cluster. We can use the sshUserSetup.sh script to configure SSH from the command line, or set it up in GUI mode through runInstaller. In this article we will use the command-line method only.

[oragrid@flexnode1 sshsetup]$ ./sshUserSetup.sh -user oragrid -hosts "flexnode1 flexnode2 flexnode3 flexnode4 flexnode5 flexnode6  flexnode8" -advance -confirm -noPromptPassphrase -confirm -advance
The output of this script is also logged into /tmp/sshUserSetup_2016-06-25-05-21-10.log
Hosts are flexnode1 flexnode2 flexnode3 flexnode4 flexnode5 flexnode6 flexnode7 flexnode8
user is oragrid
Platform:- Linux
Checking if the remote hosts are reachable
PING flexnode1.dbamaze.com (192.168.2.81) 56(84) bytes of data.
64 bytes from flexnode1.dbamaze.com (192.168.2.81): icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from flexnode1.dbamaze.com (192.168.2.81): icmp_seq=2 ttl=64 time=0.015 ms

--- flexnode1.dbamaze.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.000/0.015/0.047/0.017 ms
PING flexnode2.dbamaze.com (192.168.2.82) 56(84) bytes of data.
64 bytes from flexnode2.dbamaze.com (192.168.2.82): icmp_seq=1 ttl=64 time=0.237 ms
64 bytes from flexnode2.dbamaze.com (192.168.2.82): icmp_seq=2 ttl=64 time=0.385 ms

--- flexnode2.dbamaze.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
rtt min/avg/max/mdev = 0.192/0.310/0.436/0.091 ms
PING flexnode3.dbamaze.com (192.168.2.83) 56(84) bytes of data.
64 bytes from flexnode3.dbamaze.com (192.168.2.83): icmp_seq=1 ttl=64 time=0.201 ms
64 bytes from flexnode3.dbamaze.com (192.168.2.83): icmp_seq=2 ttl=64 time=0.973 ms

--- flexnode3.dbamaze.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4003ms
rtt min/avg/max/mdev = 0.195/0.445/0.973/0.293 ms
PING flexnode4.dbamaze.com (192.168.2.84) 56(84) bytes of data.
64 bytes from flexnode4.dbamaze.com (192.168.2.84): icmp_seq=1 ttl=64 time=0.228 ms
64 bytes from flexnode4.dbamaze.com (192.168.2.84): icmp_seq=2 ttl=64 time=0.245 ms

--- flexnode4.dbamaze.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.228/0.362/0.610/0.154 ms
PING flexnode5.dbamaze.com (192.168.2.85) 56(84) bytes of data.
64 bytes from flexnode5.dbamaze.com (192.168.2.85): icmp_seq=1 ttl=64 time=0.286 ms
64 bytes from flexnode5.dbamaze.com (192.168.2.85): icmp_seq=2 ttl=64 time=0.538 ms

--- flexnode5.dbamaze.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.244/0.364/0.538/0.101 ms
PING flexnode6.dbamaze.com (192.168.2.86) 56(84) bytes of data.
64 bytes from flexnode6.dbamaze.com (192.168.2.86): icmp_seq=1 ttl=64 time=0.408 ms
64 bytes from flexnode6.dbamaze.com (192.168.2.86): icmp_seq=2 ttl=64 time=0.370 ms

--- flexnode6.dbamaze.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.144/0.285/0.408/0.112 ms
PING flexnode7.dbamaze.com (192.168.2.87) 56(84) bytes of data.

--- flexnode7.dbamaze.com ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4000ms

PING flexnode8.dbamaze.com (192.168.2.88) 56(84) bytes of data.
64 bytes from flexnode8.dbamaze.com (192.168.2.88): icmp_seq=1 ttl=64 time=1.47 ms
64 bytes from flexnode8.dbamaze.com (192.168.2.88): icmp_seq=2 ttl=64 time=0.470 ms

Verify SSH connectivity between nodes:

[oragrid@flexnode1 sshsetup]$ ssh flexnode6 date
Sat Jun 25 05:29:32 AST 2016
[oragrid@flexnode1 sshsetup]$ ssh flexnode2 date
Sat Jun 25 05:30:00 AST 2016
[oragrid@flexnode1 sshsetup]$ ssh flexnode3 date
Sat Jun 25 05:30:38 AST 2016
[oragrid@flexnode1 sshsetup]$ ssh flexnode4 date
Sat Jun 25 05:30:09 AST 2016
[oragrid@flexnode1 sshsetup]$ ssh flexnode5 date
Sat Jun 25 05:29:58 AST 2016
[oragrid@flexnode1 sshsetup]$ ssh flexnode6 date
Sat Jun 25 05:30:03 AST 2016
[oragrid@flexnode1 sshsetup]$ ssh flexnode8 date
Sat Jun 25 05:30:12 AST 2016
[oragrid@flexnode1 sshsetup]$
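The node-by-node checks above can be collapsed into a loop. The sketch below only prints the commands, so it can be shown without a live cluster; drop the `echo` to actually run them.

```shell
# Print an ssh date check for every node; drop the echo to actually run it.
# BatchMode=yes makes ssh fail immediately if passwordless login is broken.
nodes="flexnode1 flexnode2 flexnode3 flexnode4 flexnode5 flexnode6 flexnode8"
for n in $nodes; do
  echo ssh -o BatchMode=yes "$n" date
done
```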

Execute cluvfy from one of the active cluster nodes

cluvfy is a utility that helps identify missing prerequisites on the cluster nodes. It should be executed from one of the active cluster nodes, preferably a Hub node. I have not included the detailed cluvfy output; only the areas that need attention are listed.

 

 [oragrid@flexnode1 ~]$ cluvfy stage -pre nodeadd -n flexnode6,flexnode8 -fixup -verbose

 

Checking ASMLib configuration.
Node Name Status
------------------------------------ ------------------------
flexnode1 passed
flexnode6 passed
flexnode7 passed
flexnode8 (failed) ASMLib configuration is incorrect.

ERROR:
PRVF-10110 : ASMLib is not configured correctly on the node "flexnode8"
Result: Check for ASMLib configuration failed.

Checking Flex Cluster node role configuration...
Flex Cluster node role configuration check passed


NOTE:
No fixable verification failures to fix

Pre-check for node addition was unsuccessful.
Checks did not pass for the following node(s):
flexnode8
[oragrid@flexnode1 ~]$

 - flexnode8 is a Leaf node and does not require access to the shared storage, so this error can be safely ignored.
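The rule behind ignoring this error can be captured in a tiny, purely illustrative helper: only Hub nodes mount the ASM disks, so an ASMLib check failure on a Leaf node is expected.

```shell
# needs_shared_storage ROLE: Hub nodes mount the ASM disks, Leaf nodes do not,
# which is why an ASMLib failure on a Leaf node is expected and ignorable.
needs_shared_storage() {
  case "$1" in
    [Hh][Uu][Bb])     echo yes ;;
    [Ll][Ee][Aa][Ff]) echo no ;;
    *)                echo unknown ;;
  esac
}

needs_shared_storage hub    # prints: yes
needs_shared_storage leaf   # prints: no
```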

 

Checking GNS integrity...
Checking if the GNS subdomain name is valid...
The GNS subdomain name "flexrac1.dbamaze.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.2.0, 192.168.2.0, 192.168.2.0" match with the GNS VIP "192.168.2.0, 192.168.2.0, 192.168.2.0"
Checking if the GNS VIP is a valid address...
GNS VIP "192.168.2.20" resolves to a valid IP address
Checking the status of GNS VIP...
Checking if FDQN names for domain "flexrac1.dbamaze.com" are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable
Checking status of GNS resource...
Node Running? Enabled?
------------ ------------------------ ------------------------
flexnode1 no yes
flexnode2 yes yes
flexnode3 no yes
flexnode4 no yes
flexnode5 no yes

GNS resource configuration check passed
Checking status of GNS VIP resource...
Node Running? Enabled?
------------ ------------------------ ------------------------
flexnode1 no yes
flexnode2 yes yes
flexnode3 no yes
flexnode4 no yes
flexnode5 no yes

GNS VIP resource configuration check passed.

GNS integrity check passed

Checking Flex Cluster node role configuration...
Flex Cluster node role configuration check passed

- This check must pass, as GNS is required for allocating the VIPs; if this check fails, we may not be able to add the nodes successfully.

Addition of Cluster Nodes:

Once all prerequisites are in place, we are ready to add the new cluster nodes.

Check the existing cluster nodes before addition of nodes:

[oragrid@flexnode1 addnode]$ olsnodes -a
flexnode1 Hub
flexnode2 Hub
flexnode3 Hub
flexnode4 Leaf
flexnode5 Leaf
[oragrid@flexnode1 addnode]$

- Execute the addnode.sh script from the $GI_HOME/addnode directory.

addnode.sh can be executed in GUI or CLI mode. In this demonstration I am using the GUI mode; for CLI mode we just need to add the "-silent" option to the addnode.sh command line.

We can add Hub nodes and Leaf nodes at the same time by specifying the role of each node in the addnode.sh parameters.

addnode.sh should be executed from one of the active cluster nodes; in this demonstration it is executed from flexnode1.

[oragrid@flexnode1 addnode]$ ./addnode.sh "CLUSTER_NEW_NODES={flexnode6,flexnode8}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={flexnode6-vip}" "CLUSTER_NEW_NODE_ROLES={hub,leaf}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 4575 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5210 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
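For reference, the same addition in silent (CLI) mode would look like the sketch below. It only echoes the command so it can be exercised safely; remove the leading `echo` and run it from $GI_HOME/addnode on a real cluster (the node names, roles, and VIP name are the ones used in this demonstration).

```shell
# Build and print the silent-mode addnode.sh command line.
# Remove the leading echo to actually execute it from $GI_HOME/addnode.
new_nodes="flexnode6,flexnode8"
new_roles="hub,leaf"
hub_vips="flexnode6-vip"
echo ./addnode.sh -silent \
  "CLUSTER_NEW_NODES={$new_nodes}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={$hub_vips}" \
  "CLUSTER_NEW_NODE_ROLES={$new_roles}"
```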

- We should ensure that the Hub and Leaf node roles are assigned appropriately. For Hub nodes we can specify "AUTO" for the virtual hostname, so that the virtual hostnames are allocated automatically.


- Ensure that SSH connectivity is working properly.

- Prerequisite checks in progress.

- All prerequisite checks are successful, and we are now ready to install the GI software on the new cluster nodes.

- Copying of the GI home to the remote nodes is in progress.

- Copying of the GI home completed successfully; now we are ready to execute the root scripts on the new cluster nodes.

Execution of orainstRoot.sh:


The scripts should be executed in the same sequence as listed by the installer.

[root@flexnode6 oraInventory]# ls
ContentsXML logs oraInst.loc orainstRoot.sh
[root@flexnode6 oraInventory]# sh orainstRoot.sh
Changing permissions of /u01/oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/oracle/oraInventory to dbagrid.
The execution of the script is complete.
[root@flexnode6 oraInventory]#

- Similarly, execute it on flexnode8.

Execution of root.sh:


[root@flexnode6 oraInventory]# /u01/grid/12.1.0/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
ORACLE_OWNER= oragrid
ORACLE_HOME= /u01/grid/12.1.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/grid/12.1.0/crs/install/crsconfig_params
2016/07/04 19:57:43 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
2016/07/04 19:58:12 CLSRSC-330: Adding Clusterware entries to file '/etc/inittab'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'flexnode6'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'flexnode6'
CRS-2677: Stop of 'ora.drivers.acfs' on 'flexnode6' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'flexnode6' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'flexnode6'
CRS-2672: Attempting to start 'ora.evmd' on 'flexnode6'
CRS-2676: Start of 'ora.mdnsd' on 'flexnode6' succeeded
CRS-2676: Start of 'ora.evmd' on 'flexnode6' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'flexnode6'
CRS-2676: Start of 'ora.gpnpd' on 'flexnode6' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'flexnode6'
CRS-2676: Start of 'ora.gipcd' on 'flexnode6' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'flexnode6'
CRS-2676: Start of 'ora.cssdmonitor' on 'flexnode6' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'flexnode6'
CRS-2672: Attempting to start 'ora.diskmon' on 'flexnode6'
CRS-2676: Start of 'ora.diskmon' on 'flexnode6' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'flexnode6'
CRS-2676: Start of 'ora.cssd' on 'flexnode6' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'flexnode6'
CRS-2672: Attempting to start 'ora.ctssd' on 'flexnode6'
CRS-2676: Start of 'ora.ctssd' on 'flexnode6' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'flexnode6' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'flexnode6'
CRS-2676: Start of 'ora.asm' on 'flexnode6' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'flexnode6'
CRS-2676: Start of 'ora.storage' on 'flexnode6' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'flexnode6'
CRS-2676: Start of 'ora.crf' on 'flexnode6' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'flexnode6'
CRS-2676: Start of 'ora.crsd' on 'flexnode6' succeeded
CRS-6017: Processing resource auto-start for servers: flexnode6
CRS-2672: Attempting to start 'ora.ons' on 'flexnode6'
CRS-2672: Attempting to start 'ora.proxy_advm' on 'flexnode6'
CRS-2676: Start of 'ora.ons' on 'flexnode6' succeeded
CRS-2676: Start of 'ora.proxy_advm' on 'flexnode6' succeeded
CRS-6016: Resource auto-start has completed for server flexnode6
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2016/07/04 20:02:19 CLSRSC-343: Successfully started Oracle clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2016/07/04 20:02:36 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@flexnode6 oraInventory]#


- Similarly, we should execute it on flexnode8.

Node addition completed successfully

Node status after execution of root.sh script:

[oragrid@flexnode1 addnode]$ olsnodes -a
flexnode1 Hub
flexnode2 Hub
flexnode3 Hub
flexnode6 Hub
flexnode4 Leaf
flexnode5 Leaf
flexnode8 Leaf
[oragrid@flexnode1 addnode]$
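To confirm the expected topology at a glance, the `olsnodes -a` output can be summarized per role. This is an illustrative helper run against the sample output captured above; on a live cluster you would capture the output with `out=$(olsnodes -a)`.

```shell
# count_role "OUTPUT" ROLE: count lines of olsnodes-style output whose
# second column matches the given role.
count_role() {
  printf '%s\n' "$1" | awk -v r="$2" '$2 == r { n++ } END { print n + 0 }'
}

# Sample captured from `olsnodes -a` above; on a live cluster: out=$(olsnodes -a)
out="flexnode1 Hub
flexnode2 Hub
flexnode3 Hub
flexnode6 Hub
flexnode4 Leaf
flexnode5 Leaf
flexnode8 Leaf"
echo "Hub nodes:  $(count_role "$out" Hub)"    # prints: Hub nodes:  4
echo "Leaf nodes: $(count_role "$out" Leaf)"   # prints: Leaf nodes: 3
```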

Verify the cluster services:

[oragrid@flexnode1 addnode]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....SM.lsnr ora....er.type 0/5 0/ ONLINE ONLINE flexnode1
ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE flexnode1
ora.GRID.dg ora....up.type 0/5 0/ ONLINE ONLINE flexnode1
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE flexnode1
ora....AF.lsnr ora....er.type 0/5 0/ OFFLINE OFFLINE
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE flexnode1
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE flexnode2
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE flexnode3
ora.MGMTLSNR ora....nr.type 0/0 0/0 ONLINE ONLINE flexnode3
ora.asm ora.asm.type 0/5 0/0 ONLINE ONLINE flexnode2
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE flexnode3
ora.flexcdb.db ora....se.type 0/2 0/1 ONLINE ONLINE flexnode1
ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE flexnode1
ora....de1.ons application 0/3 0/0 ONLINE ONLINE flexnode1
ora....de1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE flexnode1
ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE flexnode2
ora....de2.ons application 0/3 0/0 ONLINE ONLINE flexnode2
ora....de2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE flexnode2
ora....E3.lsnr application 0/5 0/0 ONLINE ONLINE flexnode3
ora....de3.ons application 0/3 0/0 ONLINE ONLINE flexnode3
ora....de3.vip ora....t1.type 0/0 0/0 ONLINE ONLINE flexnode3
ora....E6.lsnr application 0/5 0/0 ONLINE ONLINE flexnode6
ora....de6.ons application 0/3 0/0 ONLINE ONLINE flexnode6
ora....de6.vip ora....t1.type 0/0 0/0 ONLINE ONLINE flexnode6
ora.gns ora.gns.type 0/5 0/0 ONLINE ONLINE flexnode3
ora.gns.vip ora....ip.type 0/0 0/0 ONLINE ONLINE flexnode3
ora.mgmtdb ora....db.type 0/2 0/1 ONLINE ONLINE flexnode3
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE flexnode1
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE flexnode3
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE flexnode1
ora.proxy_advm ora....vm.type 0/5 0/ ONLINE ONLINE flexnode1
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE flexnode1
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE flexnode2
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE flexnode3
[oragrid@flexnode1 addnode]$

Start the ASM instance on the new Hub node:

The ASM instance may not be started automatically on the newly added Hub node, so we may need to start it manually, as shown below with SQL*Plus (`srvctl start asm -node flexnode6` should work as well).

[root@flexnode6 oraInventory]# ps -ef | grep pmon
oragrid 3800 1 0 Jul04 ? 00:00:01 apx_pmon_+APX4
root 12505 881 0 01:23 pts/2 00:00:00 grep pmon
[root@flexnode6 oraInventory]#


[oragrid@flexnode6 ~]$ . oraenv
ORACLE_SID = [+ASM3] ? +ASM4
The Oracle base has been changed from to /u01/oracle


[oragrid@flexnode6 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jul 5 01:28:49 2016

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 1135747072 bytes
Fixed Size 2297344 bytes
Variable Size 1108283904 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL>

- Check the status of the ASM instances:

SQL> select INST_ID, INSTANCE_NUMBER, INSTANCE_NAME, HOST_NAME from gv$instance;

INST_ID     INSTANCE_NUMBER    INSTANCE_NAME     HOST_NAME
---------- ---------------  ----------------   ----------------------------------------------------------------
4                4               +ASM4            flexnode6
3                3               +ASM3            flexnode3
2                2               +ASM2            flexnode2
1                1               +ASM1            flexnode1

SQL>

Conclusion:

Oracle RAC allows us to add nodes to an existing cluster without disrupting the existing services. The procedure for adding nodes to a standard cluster and to a flex cluster is similar, with some minor differences, and we can add Hub nodes and Leaf nodes at the same time. We should ensure that the GNS sub-domain delegation has a large enough range of IPs reserved for the number of nodes being added to the cluster; if the reserved IP lease range is smaller than the number of nodes being added, the node addition may fail.



 

    

      

