Introduction
In the software industry there are frequent changes to the versions of software running in production; to fix bugs and add new enhancements to the software stack, we need to keep upgrading the software version. A software version upgrade is not a daily operational activity, so it requires additional effort and planning for a successful rollout.
In this article we will see how to upgrade the Oracle 12c Grid Infrastructure software from version 12.1.0.1 to 12.1.0.2. The existing cluster is configured with 5 nodes (3 Hub nodes and 2 Leaf nodes). This article covers, step by step, the method for upgrading a 12.1.0.1 Grid Infrastructure, and it also covers a special scenario that can arise mid-upgrade: consider that you are performing a cluster upgrade on 10 nodes and, while the upgrade is running, one or more nodes in the cluster suddenly become unreachable due to hardware failure or some other reason. What should our action as DBAs be in such a situation? This grey area is also covered in detail in this article.
Cluster node details:
The following cluster nodes will be upgraded from 12.1.0.1 to 12.1.0.2:
| S.No | Node Name | Current Version | Target Version | Node Mode |
|------|-----------|-----------------|----------------|-----------|
| 1 | flexrac1 | 12.1.0.1 | 12.1.0.2 | Cluster Hub-Node |
| 2 | flexrac2 | 12.1.0.1 | 12.1.0.2 | Cluster Hub-Node |
| 3 | flexrac3 | 12.1.0.1 | 12.1.0.2 | Cluster Hub-Node |
| 4 | flexrac4 | 12.1.0.1 | 12.1.0.2 | Cluster Leaf-Node |
| 5 | flexrac5 | 12.1.0.1 | 12.1.0.2 | Cluster Leaf-Node |
Verify the existing Grid Infrastructure version:
Before we begin with the upgrade process, let's check the existing version of the Grid Infrastructure:
[oracle@flexrac2 ~]$ . oraenv
ORACLE_SID = [+ASM2] ? +ASM2
The Oracle base has been set to /u01/oracle
[oracle@flexrac2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
[oracle@flexrac2 ~]$
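Optionally, before touching anything, we can also record each node's status, its role (Hub or Leaf) and the software version registered for it. A quick sketch of such a check from any node, using the existing 12.1.0.1 home (output will vary with your environment):
[oracle@flexrac2 ~]$ olsnodes -n -s -t                     # node number, Active/Inactive status, pinned state
[oracle@flexrac2 ~]$ crsctl get node role status -all      # Hub/Leaf role of each active node
[oracle@flexrac2 ~]$ crsctl query crs softwareversion      # repeat with a node name to check each node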
Create a new directory for the 12.1.0.2 grid software:
The new directory should be created across all cluster nodes, and its ownership/group should be assigned to the Grid Infrastructure owner (in this setup, the "oracle" user and "dba" group).
[root@flexrac2 ~]# mkdir -p /u01/grid/grid_1212
[root@flexrac2 ~]# chown -R oracle:dba /u01/grid/grid_1212
The Oracle 12.1.0.2 Grid Infrastructure will be installed in this directory.
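To avoid repeating this by hand on every node, the two commands can be wrapped in a loop; a minimal sketch, assuming root has passwordless SSH to all five nodes (adjust the node list and path to your environment):
[root@flexrac2 ~]# for node in flexrac1 flexrac2 flexrac3 flexrac4 flexrac5; do
>   ssh "$node" "mkdir -p /u01/grid/grid_1212 && chown -R oracle:dba /u01/grid/grid_1212"
> done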
Installation of the 12.1.0.2 grid infrastructure:
We can install the Grid Infrastructure software on all cluster nodes while all services are up and running; the services will not be impacted until "rootupgrade.sh" is executed. The actual upgrade happens only when the "rootupgrade.sh" script runs.
The installation of the 12.1.0.2 binaries can be performed from any one of the active cluster nodes.
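Before launching the installer, it is also good practice to run the Cluster Verification Utility in pre-upgrade mode from the staged 12.1.0.2 software. A sketch using the homes from this article (the directory where runcluvfy.sh is staged is an assumption; adjust it to where you unzipped the software):
[oracle@flexrac1 stage12102]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
  -src_crshome /u01/grid/12.1.0 -dest_crshome /u01/grid/grid_1212 \
  -dest_version 12.1.0.2.0 -fixup -verbose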
![]()
- Choose the option "Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management".
![]()
- Make sure all participating cluster nodes are selected.
![]()
- Ensure SSH connectivity is working across all cluster nodes.
![]()
- If OEM Cloud Control is configured, we can register this home with the OMS.
![]()
- Select the appropriate OS groups for the different roles.
![]()
- Provide the locations of the Oracle Base and the new GI home.
![]()
- Skip this option if you want to execute the scripts manually.
![]()
- Here the installer validates the prerequisites on the cluster nodes.
![]()
- Certain items the GI installer can fix automatically, while others we need to fix manually. In the example above, the kernel parameters and the avahi daemon can be fixed by the installer, whereas the nfs rpm package must be fixed manually.
![]()
- Execute "fixup" script on all cluster nodes:
[root@flexrac1 ~]# /tmp/CVU_12.1.0.2.0_oracle/runfixup.sh
All Fix-up operations were completed successfully.
[root@flexrac1 ~]#
Similarly, it should be executed on all other cluster nodes.
![]()
- GI 12.1.0.2 is now ready to be installed on all cluster nodes.
![]()
- Installation in progress.
![]()
![]()
- The rootupgrade.sh script is now ready to be executed to perform the upgrade.
Execution of rootupgrade.sh - Upgrade of Grid Infrastructure
As the dialog above mentions, we must execute the script on the local node first; it can then be executed on the other cluster nodes in parallel. It also mentions that we should complete the upgrade on the Hub nodes first and only then proceed to the Leaf nodes.
- Execution of the script on flexrac1 (node1 - Hub Node):
[root@flexrac1 ~]# /u01/grid/grid_1212/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/grid/grid_1212
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/grid/grid_1212/crs/install/crsconfig_params
2016/09/17 19:39:44 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2016/09/17 19:40:19 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2016/09/17 19:40:26 CLSRSC-464: Starting retrieval of the cluster configuration data
2016/09/17 19:40:39 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2016/09/17 19:40:39 CLSRSC-363: User ignored prerequisites during installation
2016/09/17 19:40:59 CLSRSC-515: Starting OCR manual backup.
2016/09/17 19:41:04 CLSRSC-516: OCR manual backup successful.
2016/09/17 19:41:10 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2016/09/17 19:41:10 CLSRSC-482: Running command: '/u01/grid/12.1.0/bin/crsctl start rollingupgrade 12.1.0.2.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2016/09/17 19:41:25 CLSRSC-482: Running command: '/u01/grid/grid_1212/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/grid/12.1.0 -oldCRSVersion 12.1.0.1.0 -nodeNumber 1 -firstNode true -startRolling false'
ASM configuration upgraded in local node successfully.
2016/09/17 19:41:33 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2016/09/17 19:41:33 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2016/09/17 19:42:39 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2016/09/17 19:46:14 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2016/09/17 19:49:48 CLSRSC-472: Attempting to export the OCR
2016/09/17 19:49:49 CLSRSC-482: Running command: 'ocrconfig -upgrade oracle dba'
2016/09/17 19:51:02 CLSRSC-473: Successfully exported the OCR
2016/09/17 19:51:10 CLSRSC-486:
At this stage of upgrade, the OCR has changed.
Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2016/09/17 19:51:10 CLSRSC-541:
To downgrade the cluster:
1. All nodes that have been upgraded must be downgraded.
2016/09/17 19:51:10 CLSRSC-542:
2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2016/09/17 19:51:10 CLSRSC-543:
3. The downgrade command must be run on the node flexrac3 with the '-lastnode' option to restore global configuration data.
2016/09/17 19:51:36 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2016/09/17 19:52:03 CLSRSC-474: Initiating upgrade of resource types
2016/09/17 19:52:14 CLSRSC-482: Running command: 'upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p first'
2016/09/17 19:52:14 CLSRSC-475: Upgrade of resource types successfully initiated.
2016/09/17 19:52:20 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@flexrac1 ~]#
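Before moving on to the next node, we can confirm that the stack on flexrac1 came back up on the new home; for example:
[oracle@flexrac1 ~]$ crsctl check crs
[oracle@flexrac1 ~]$ crsctl query crs softwareversion flexrac1
The software version reported for flexrac1 should now be [12.1.0.2.0], while the active version stays at [12.1.0.1.0] until the last node completes.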
Execution of rootupgrade.sh on flexrac2 (node2 - Hub Node):
[root@flexrac2 ~]# /u01/grid/grid_1212/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/grid/grid_1212
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/grid/grid_1212/crs/install/crsconfig_params
2016/09/17 20:02:04 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2016/09/17 20:02:39 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2016/09/17 20:02:43 CLSRSC-464: Starting retrieval of the cluster configuration data
2016/09/17 20:02:55 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2016/09/17 20:02:55 CLSRSC-363: User ignored prerequisites during installation
ASM configuration upgraded in local node successfully.
2016/09/17 20:03:13 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2016/09/17 20:04:57 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2016/09/17 20:05:24 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2016/09/17 20:07:35 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2016/09/17 20:07:45 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@flexrac2 ~]#
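With two of the three Hub nodes done, the cluster is still in rolling upgrade mode, which can be cross-checked from the ASM instance:
[oracle@flexrac2 ~]$ asmcmd showclusterstate
This is expected to report "In Rolling Upgrade" while the Grid Infrastructure upgrade is still in flight.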
Node "flexrac3" is not reachable
At this point the "rootupgrade.sh" script should be executed on the 3rd node, which is a Hub node, but the node is not reachable due to a hardware failure.
[root@flexrac2 ~]# ping flexrac3
PING flexrac3.oralabs.com (192.168.1.83) 56(84) bytes of data.
From flexrac2.oralabs.com (192.168.1.82) icmp_seq=9 Destination Host Unreachable
From flexrac2.oralabs.com (192.168.1.82) icmp_seq=10 Destination Host Unreachable
From flexrac2.oralabs.com (192.168.1.82) icmp_seq=11 Destination Host Unreachable
From flexrac2.oralabs.com (192.168.1.82) icmp_seq=14 Destination Host Unreachable
^C
--- flexrac3.oralabs.com ping statistics ---
15 packets transmitted, 0 received, +4 errors, 100% packet loss, time 14011ms, pipe 3
[root@flexrac2 ~]#
However, the other 2 nodes, flexrac4 and flexrac5, which are Leaf nodes, are still available, and the upgrade script is still pending on them, so we must proceed with executing the script on the remaining cluster nodes.
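A quick way to confirm which nodes are still up before continuing (expectations, not captured output):
[oracle@flexrac2 ~]$ olsnodes -s
flexrac3 should show as Inactive here, while the remaining nodes show as Active.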
Execution of rootupgrade.sh on flexrac4 (node4 - Leaf Node):
[root@flexrac4 bin]# /u01/grid/grid_1212/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/grid/grid_1212
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/grid/grid_1212/crs/install/crsconfig_params
2016/09/17 20:10:33 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2016/09/17 20:11:08 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2016/09/17 20:11:12 CLSRSC-464: Starting retrieval of the cluster configuration data
2016/09/17 20:11:25 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2016/09/17 20:11:26 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
2016/09/17 20:12:43 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2016/09/17 20:14:27 CLSRSC-343: Successfully started Oracle Clusterware stack
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2016/09/17 20:14:37 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@flexrac4 bin]#
Execution of rootupgrade.sh on flexrac5 (node5 - Leaf Node):
[root@flexrac5 ~]# /u01/grid/grid_1212/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/grid/grid_1212
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/grid/grid_1212/crs/install/crsconfig_params
2016/09/17 20:16:31 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2016/09/17 20:17:14 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2016/09/17 20:17:18 CLSRSC-464: Starting retrieval of the cluster configuration data
2016/09/17 20:17:30 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2016/09/17 20:17:30 CLSRSC-363: User ignored prerequisites during installation
2016/09/17 20:17:40 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2016/09/17 20:18:21 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2016/09/17 20:19:09 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2016/09/17 20:20:51 CLSRSC-343: Successfully started Oracle Clusterware stack
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2016/09/17 20:20:59 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@flexrac5 ~]#
Now the cluster upgrade script "rootupgrade.sh" has completed successfully on flexrac1, flexrac2, flexrac4 and flexrac5. The remaining node is flexrac3, where the script has not been executed due to the unavailability of that node.
Let's see what we have to do in such a situation.
Query the CRS version from any of the upgraded cluster nodes:
We can check the version from any one of the upgraded cluster nodes.
[oracle@flexrac2 grid_1212]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
[oracle@flexrac2 grid_1212]$
<<< Version is still 12.1.0.1 >>>
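The active version is reported as 12.1.0.1.0 because it only moves forward once every node has run rootupgrade.sh. The software version recorded per node, by contrast, already reflects the new binaries on the upgraded nodes; a quick way to see the mixed state, assuming the new home's bin directory is in the PATH (flexrac3 cannot be queried while it is down):
[oracle@flexrac2 grid_1212]$ for node in flexrac1 flexrac2 flexrac4 flexrac5; do
>   crsctl query crs softwareversion "$node"
> done
Each upgraded node should report [12.1.0.2.0].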
If we are not able to bring that node back, we must still finish the upgrade. To finish it, we have to force the Clusterware to complete the upgrade, skipping that node.
We should execute the "rootupgrade.sh" script again on any of the upgraded nodes with the "-force" option; this updates the GI version in the registry and finishes the upgrade on the currently available cluster nodes.
[root@flexrac2 ~]# /u01/grid/grid_1212/rootupgrade.sh -force
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/grid/grid_1212
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/grid/grid_1212/crs/install/crsconfig_params
2016/09/17 20:28:42 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2016/09/17 20:28:42 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.
2016/09/17 20:29:15 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.
2016/09/17 20:29:26 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2016/09/17 20:29:29 CLSRSC-464: Starting retrieval of the cluster configuration data
2016/09/17 20:29:44 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2016/09/17 20:29:44 CLSRSC-363: User ignored prerequisites during installation
2016/09/17 20:30:01 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2016/09/17 20:30:11 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2016/09/17 20:30:11 CLSRSC-482: Running command: '/u01/grid/grid_1212/bin/crsctl set crs activeversion -force'
Attempting to forcibly upgrade the Oracle Clusterware using only the nodes flexrac1, flexrac2, flexrac4, flexrac5.
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Forcibly upgraded the Oracle Clusterware.
Oracle Clusterware operating version was forcibly set to 12.1.0.2.0
CRS-1121: Oracle Clusterware was forcibly upgraded without upgrading nodes flexrac3.
2016/09/17 20:31:18 CLSRSC-479: Successfully set Oracle Clusterware active version
2016/09/17 20:31:28 CLSRSC-476: Finishing upgrade of resource types
2016/09/17 20:31:36 CLSRSC-482: Running command: 'upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p last'
2016/09/17 20:31:36 CLSRSC-477: Successfully completed upgrade of resource types
PRCN-3004 : Listener MGMTLSNR already exists
2016/09/17 20:32:17 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@flexrac2 ~]#
Now check the version of the Grid Infrastructure again:
[root@flexrac5 bin]# ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[root@flexrac5 bin]#
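We can also verify that the Clusterware stack is healthy on all surviving nodes:
[root@flexrac5 bin]# ./crsctl check cluster -all
This should report Cluster Ready Services, Cluster Synchronization Services and the Event Manager online (CRS-4537, CRS-4529, CRS-4533) for flexrac1, flexrac2, flexrac4 and flexrac5.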
Now we are ready to click "OK" on the installer's "rootupgrade.sh" screen, and it will proceed further.
![]()
![]()
- The installer is not able to update the inventory on flexrac3 as it is not reachable, which is OK.
![]()
![]()
- Post-upgrade steps in progress.
![]()
![]()
- This error can be ignored; it is due to the unavailability of the cluster node.
![]()
![]()
- Click "yes" to proceed.
![]()
The upgrade process is now completed.
Verify the services on the cluster nodes:
[oracle@flexrac2 grid_1212]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....SM.lsnr ora....er.type 0/5 0/ ONLINE ONLINE flexrac1
ora.GRID.dg ora....up.type 0/5 0/ ONLINE ONLINE flexrac1
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE flexrac1
ora....AF.lsnr ora....er.type 0/5 0/ OFFLINE OFFLINE
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE flexrac2
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE flexrac1
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE flexrac1
ora.MGMTLSNR ora....nr.type 0/0 0/0 ONLINE ONLINE flexrac1
ora.asm ora.asm.type 0/5 0/0 ONLINE ONLINE flexrac1
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE flexrac2
ora....C1.lsnr application 0/5 0/0 ONLINE ONLINE flexrac1
ora....ac1.ons application 0/3 0/0 ONLINE ONLINE flexrac1
ora....ac1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE flexrac1
ora....C2.lsnr application 0/5 0/0 ONLINE ONLINE flexrac2
ora....ac2.ons application 0/3 0/0 ONLINE ONLINE flexrac2
ora....ac2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE flexrac2
ora....ac3.vip ora....t1.type 0/0 0/0 ONLINE ONLINE flexrac2
ora.gns ora.gns.type 0/5 0/0 ONLINE ONLINE flexrac2
ora.gns.vip ora....ip.type 0/0 0/0 ONLINE ONLINE flexrac2
ora.mgmtdb ora....db.type 0/2 0/1 ONLINE ONLINE flexrac1
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE flexrac1
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE flexrac1
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE flexrac1
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE flexrac2
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE flexrac1
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE flexrac1
[oracle@flexrac2 grid_1212]$
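Note that crs_stat is deprecated in 12c; the same information can be viewed in a cleaner, resource-grouped layout with:
[oracle@flexrac2 grid_1212]$ crsctl stat res -t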
The failed node can be joined to the upgraded cluster once it becomes available again. We will explore the options for joining that node back to the cluster in the next part of this article.
Conclusion:
The above upgrade scenario was created after we faced a similar issue in one of our PRODUCTION cluster upgrades. That upgrade was on a 12-node cluster with 6 Hub nodes and 6 Leaf nodes. During the execution of the upgrade script on the cluster nodes, 2 of the Hub nodes became unreachable due to hardware failure and we had to proceed with the upgrade. To conclude, "rootupgrade.sh -force" completed the upgrade successfully even though one or more cluster nodes were not reachable. 12c Grid Infrastructure also introduced an option with which we can join the failed cluster node to an existing upgraded cluster; we will explore this option in the next part of this article.
- We can execute the "rootupgrade.sh" script in parallel on the remaining nodes once it has completed on the first node, from which the upgrade was initiated.
- We should complete "rootupgrade.sh" on the Hub nodes first, and only after all Hub nodes are done execute it on the Leaf nodes.
- Before starting the upgrade, ensure all cluster/database/operating system logfiles are clean and that no major errors are recorded.