So where does someone start when they are new to Exadata and need to patch to a newer release of the software?
For the compute nodes, start here:
Exadata YUM Repository Population, One-Time Setup Configuration and YUM upgrades [ID 1473002.1]
This note walks you through either setting up a direct connection to ULN and building a repository, or using an ISO image that you can download to set up the repository. Best practice is to set up a repository external to the Exadata and then point the Exadata compute nodes at it. Once the repository is created and updated (or the ISO downloaded), create /etc/yum.repos.d/Exadata-computenode.repo on every Exadata compute node (the repository hostname in the baseurl is site-specific):

[exadata_dbserver_11.2_x86_64_latest]
name=Oracle Exadata DB server 11.2 Linux $releasever - $basearch - latest
baseurl=http://<repo-host>/yum/unknown/EXADATA/dbserver/11.2/latest/x86_64/
gpgcheck=1
enabled=0

Then ensure all repositories are disabled, to avoid any accidents:

sed -i 's/^[\t ]*enabled[\t ]*=[\t ]*1/enabled=0/g' /etc/yum.repos.d/*

Download and stage patch 13741363 in a software directory on each node; it contains the helper scripts needed. Always make sure to get the updated versions. As root, disable and stop the CRS on the node you are patching, then perform a server backup.
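The disable-and-verify step can be wrapped in a pair of small shell functions as a safety net. This is a sketch of my own; the directory argument lets you rehearse it against a scratch copy before touching /etc/yum.repos.d:

```shell
# Disable every repo definition under a directory, then verify none
# remain enabled. On a real compute node the directory would be
# /etc/yum.repos.d; pass a scratch copy to rehearse safely.

disable_repos() {
    # Same sed as above: flip any enabled=1 line to enabled=0 in place.
    sed -i 's/^[\t ]*enabled[\t ]*=[\t ]*1/enabled=0/g' "$1"/*
}

repos_all_disabled() {
    # Succeed only if no file still carries an active enabled=1 line.
    ! grep -q '^[\t ]*enabled[\t ]*=[\t ]*1' "$1"/*
}
```

As root on each node you would run disable_repos /etc/yum.repos.d, then repos_all_disabled /etc/yum.repos.d before doing any yum work.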
$GRID_HOME/bin/crsctl disable crs
$GRID_HOME/bin/crsctl stop crs -f

Running the backup creates a copy of the system partitions, and results similar to the following will show up:
[INFO] Unmount snapshot partition /mnt_snap
[INFO] Remove snapshot partition /dev/VGExaDb/LVDbSys1Snap
  Logical volume "LVDbSys1Snap" successfully removed
[INFO] Save partition table of /dev/sda in /mnt_spare/part_table_backup.txt
[INFO] Save lvm info in /mnt_spare/lvm_info.txt
[INFO] Unmount spare root partition /mnt_spare
[INFO] Backup of root /dev/VGExaDb/LVDbSys1 and boot partitions is done successfully
[INFO] Backup partition is /dev/VGExaDb/LVDbSys2
[INFO] /boot area back up named boot_backup.tbz (tar.bz2 format) is on the /dev/VGExaDb/LVDbSys2 partition.
[INFO] No other partitions were backed up. You may manually prepare back up for other partitions.

Once the backup is complete, you can proceed with the update:
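Since the update should never proceed against a failed backup, it is worth gating on the log programmatically rather than scrolling for the success line. A sketch, where the success message is the one shown above (verify it against your helper script version) and any log path you pass is your own choice:

```shell
# Return success only if the captured backup output contains the
# success line shown above. The middle of the message is matched
# loosely in case device names differ between nodes.
backup_succeeded() {
    grep -q 'Backup of root .* and boot partitions is done successfully' "$1"
}
```

For example: backup_succeeded /tmp/backup.log || exit 1 (the log path is hypothetical; capture the helper script's output wherever suits you).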
yum --enablerepo=exadata_dbserver_11.2_x86_64_latest repolist
yum --enablerepo=exadata_dbserver_11.2_x86_64_latest update

The repolist command confirms the Exadata channel is visible (this is the official channel for all updates); the update downloads the appropriate RPMs, updates the compute node, and reboots it. The process can take between 10 and 30 minutes. Once the node is back up, the clusterware will not come up on its own. Validate the image using imageinfo:
[root@exa]# imageinfo

Kernel version: 2.6.32-400.21.1.el5uek #1 SMP Wed Feb 20 01:35:01 PST 2013 x86_64
Image version: 11.2.3.2.1.130302
Image activated: 2013-05-27 14:41:45 -0500
Image status: success
System partition on device: /dev/mapper/VGExaDb-LVDbSys1

This confirms that the compute node has been upgraded to 11.2.3.2.1. Unlock the CRS as root:
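Rather than eyeballing the imageinfo output on every node, the two fields that matter (Image version and Image status) can be checked with a short awk filter. This is a sketch of my own, and the target version string passed in is just an example:

```shell
# Read `imageinfo` output on stdin and succeed only when the image
# status is "success" and the image version begins with the expected
# release string (e.g. 11.2.3.2.1).
image_ok() {
    awk -v want="$1" '
        /^Image version:/ { ver = $3 }
        /^Image status:/  { ok = ($3 == "success") }
        END { exit !(ok && index(ver, want) == 1) }
    '
}
```

On a node you would run: imageinfo | image_ok 11.2.3.2.1 && echo "upgrade confirmed".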
$GRID_HOME/crs/install/rootcrs.pl -unlock
su - oracle
. oraenv        (select the Oracle database home to set the environment)
relink all
make -C $ORACLE_HOME/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle
su root
$GRID_HOME/crs/install/rootcrs.pl -patch
$GRID_HOME/bin/crsctl enable crs

This concludes a compute node patch application. Rinse and repeat for all compute nodes (eight in an X2-8).
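The unlock/relink/patch sequence can also be scripted so that a failure at any step halts everything instead of leaving the node half-patched. A hedged sketch: the paths and flags are the ones from the steps above, but the run helper and DRYRUN switch are my own, and the user switching (the relink runs as oracle, the rest as root) is left out for brevity:

```shell
# Run each post-patch command, aborting on the first failure.
# With DRYRUN=1 the commands are only printed, not executed, so the
# plan can be reviewed before committing to it.
run() {
    if [ "${DRYRUN:-0}" = "1" ]; then
        echo "WOULD RUN: $*"
    else
        "$@" || { echo "FAILED: $*" >&2; exit 1; }
    fi
}

post_patch_steps() {
    run "$GRID_HOME/crs/install/rootcrs.pl" -unlock
    run make -C "$ORACLE_HOME/rdbms/lib" -f ins_rdbms.mk ipc_rds ioracle
    run "$GRID_HOME/crs/install/rootcrs.pl" -patch
    run "$GRID_HOME/bin/crsctl" enable crs
}
```

Running DRYRUN=1 with GRID_HOME and ORACLE_HOME set prints the four commands in order without touching the node.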
Now, if you have read through all of this, you can see how many manual steps are involved. Fortunately, Oracle just released a utility to automate all of these tasks for you. Rene Kundersma of Oracle talks about this new utility, called dbnodeupdate.sh, in his blog post here.
Andy Colvin has published his take on these scripts, along with a demo, on his blog here.