Sunday, November 2, 2014

Exadata Patching | Upgrade Exadata

This is a high-level, step-by-step guide for applying the APR-2014 (11.2.0.4) Exadata QFSDP.

- If you are on the Jan-2015 QFSDP, do not place the PSU on an NFS mount.
- Do not use -s to shut down the cluster if you are using dbnodeupdate.sh 4.13.
- You must run -reset_force every time before running the patch precheck.
- dbnodeupdate.sh runs only on the node you are patching, the reverse of upgrading Exalogic compute nodes.
- patchmgr always runs from a database node.


The process is divided into three major parts:

- Upgrade the Exadata database server RPMs
- Upgrade the storage server image and the InfiniBand switches
- Upgrade the Grid home and Oracle homes

1 Exadata Database Server
This part requires a reboot and a shutdown of CRS on the local node, so we apply the patch in a rolling fashion, one node at a time.
The database node is updated by the dbnodeupdate.sh script shipped with the Exadata QFSDP zip file.
For usage details, see:
dbnodeupdate.sh: Exadata Database Server Patching using the DB Node Update Utility (Doc ID 1553103.1)

1.1 Copy the zip file to a local directory and extract the dbnodeupdate.sh script that comes with the QFSDP

-- You can skip this step if the patch is already located on local storage.

 dcli -g db_group -l root "mkdir /u01/patches/YUM/"  
 dcli -g db_group -l root "cp <patch unzip location>/18370227/Infrastructure/11.2.3.3.0/ExadataDatabaseServer/p17809253_112330_Linux-x86-64.zip /u01/patches/YUM"  
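Before unzipping, it can save a failed run later to confirm that the zip copied intact to every node. A minimal sketch using dcli as above (not part of the original procedure; compare the checksum against the value MOS publishes for the patch):

```shell
# Compare the patch zip checksum across all database nodes;
# every node should print the same md5 value.
dcli -g db_group -l root "md5sum /u01/patches/YUM/p17809253_112330_Linux-x86-64.zip"
```

Re-copy the file to any node whose checksum differs before continuing.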

Unzipping the following archive extracts the dbnodeupdate.sh script:

 cd <patch_location>/18370227/Infrastructure/ExadataDBNodeUpdate/3.2  
 unzip p16486998_121110_Linux-x86-64.zip  

Script usage:

 Usage: dbnodeupdate.sh [ -u | -r | -c ] -l <baseurl|zip file> [-p] <phase> [-n] [-s] [-q] [-v] [-t] [-a] <alert.sh> [-b] [-m] | [-V] | [-h]  
 -u            Upgrade  
 -r            Rollback  
 -c            Complete post actions (relink all homes, enable GI to start)  
 -l <baseurl|zip file>  Baseurl (http or zipped iso file for the repository)  
 -s            Shutdown stack before upgrading/rolling back  
 -p            Bootstrap phase (1 or 2) only to be used when instructed by dbnodeupdate.sh  
 -q            Quiet mode (no prompting) only be used in combination with -t  
 -n            No backup will be created  
 -t            'to release' - used when in quiet mode or used when updating to one-offs/releases via 'latest' channel (requires 11.2.3.2.1)  
 -v            Verify prereqs only. Only to be used with -u and -l option  
 -b            Peform backup only  
 -a <alert.sh>      Full path to shell script used for alert trapping  
 -m            Install / update-to exadata-sun/hp-computenode-minimum only (11.2.3.3.0 and later)  
 -V            Print version  
 -h            Print usage  

1.2 Run the pre-upgrade checks.

 ./dbnodeupdate.sh -u -l /u01/patches/YUM/p18876946_112331_Linux-x86-64.zip -v  
 ##########################################################################################################################  
 #                                                                           #  
 # Guidelines for using dbnodeupdate.sh (rel. 3.53):                                            #  
 #                                                            #  
 # - Prerequisites for usage:                                               #  
 #     1. Refer to dbnodeupdate.sh options. See MOS 1553103.1                             #  
 #     2. Use the latest release of dbnodeupdate.sh. See patch 16486998                        #  
 #     3. Run the prereq check with the '-v' option.                                 #  
 #                                                            #  
 #  I.e.: ./dbnodeupdate.sh -u -l /u01/my-iso-repo.zip -v                                #  
 #     ./dbnodeupdate.sh -u -l http://my-yum-repo -v                                 #  
 #                                                            #  
 # - Prerequisite dependency check failures can happen due to customization:                       #  
 #   - The prereq check detects dependency issues that need to be addressed prior to running a successful update.    #  
 #   - Customized rpm packages may fail the built-in dependency check and system updates cannot proceed until resolved. #  
 #                                                            #  
 #  When upgrading from releases later than 11.2.2.4.2 to releases before 11.2.3.3.0:                  #  
 #   - Conflicting packages should be removed before proceeding the update.                      #  
 #                                                            #  
 #  When upgrading to releases 11.2.3.3.0 or later:                                   #  
 #   - When the 'exact' package dependency check fails 'minimum' package dependency check will be tried.        #  
 #   - When the 'minimum' package dependency check also fails,                             #  
 #    the conflicting packages should be removed before proceeding.                          #  
 #                                                            #  
 # - As part of the prereq checks and as part of the update, a number of rpms will be removed.              #  
 #  This removal is required to preserve Exadata functioning. This should not be confused with obsolete packages.    #  
 #   - See /var/log/cellos/packages_to_be_removed.txt for details on what packages will be removed.          #  
 #                                                            #  
 # - In case of any problem when filing an SR, upload the following:                           #  
 #   - /var/log/cellos/dbnodeupdate.log                                        #  
 #   - /var/log/cellos/dbnodeupdate.<runid>.diag                                    #  
 #   - where <runid> is the unique number of the failing run.                             #  
 #                                                            #  
 ##########################################################################################################################  
 Continue ? [y/n]  
 y  
  (*) 2015-02-01 15:46:28: Unzipping helpers (QFSDP_JULY2014_EXADATA/19069261/Infrastructure/ExadataDBNodeUpdate/3.53/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers  
  (*) 2015-02-01 15:46:29: Initializing logfile /var/log/cellos/dbnodeupdate.log  
  Warning: Active NFS and/or SMBFS mounts found on this DB node.  
       Before taking a backup or performing the actual update these need to be unmounted.  
       For the actual update (not now) dbnodeupdate.sh will try unmounting them silently.  
       During collection of system configuration (prereq) stale network mounts may cause long waits and dbnodeupdate.sh to stall  
       It is therefore recommended (not required) to unmount any active network mount now before continuing.  
 Continue ? [y/n]  
 y  
  (*) 2015-02-01 15:47:52: Collecting system configuration details. This may take a while...  
  (*) 2015-02-01 15:48:40: Validating system details for known issues and best practices. This may take a while...  
  (*) 2015-02-01 15:48:40: Checking free space in /u01/patches/YUM/iso.stage.010215154537  
  (*) 2015-02-01 15:48:40: Unzipping /u01/patches/YUM/p18876946_112331_Linux-x86-64.zip to /u01/patches/YUM/iso.stage.010215154537, this may take a while  
  (*) 2015-02-01 15:48:51: Original /etc/yum.conf moved to /etc/yum.conf.010215154537, generating new yum.conf  
  (*) 2015-02-01 15:48:51: Generating Exadata repository file /etc/yum.repos.d/Exadata-computenode.repo  
  ERROR: Duplicate entries detected in /etc/fstab. Correct settings and rerun dbnodeupdate.sh.  
  (*) 2015-02-01 15:50:03: Cleaning up iso and temp mount points  
Note: the precheck above flagged duplicate entries in /etc/fstab. Correct /etc/fstab and rerun the precheck with -v until it completes cleanly.



1.3 Upgrade the compute node from the local YUM zip file.

This command upgrades the compute node and reboots it at the end of the patch. Including -s also stops CRS on the node first.
You can monitor progress with: tail -f /var/log/cellos/dbnodeupdate.log

 ./dbnodeupdate.sh -u -l /u01/patches/YUM/p18876946_112331_Linux-x86-64.zip -n -s  
  (*) 2015-02-03 00:12:29: Cleaning up the yum cache.  
  (*) 2015-02-03 00:12:31: Performing yum package dependency check for 'exact' dependencies. This may take a while...  
  (*) 2015-02-03 00:12:33: 'Exact' package dependency check failed.  
  (*) 2015-02-03 00:12:54: Performing yum package dependency check for 'minimum' dependencies. This may take a while...  
  (*) 2015-02-03 00:12:56: 'Minimum' package dependency check succeeded.  
 Active Image version  : 11.2.3.3.0.131014.1  
 Active Kernel version : 2.6.39-400.126.1.el5uek  
 Active LVM Name    : /dev/mapper/VGExaDb-LVDbSys1  
 Inactive Image version : n/a  
 Inactive LVM Name   : /dev/mapper/VGExaDb-LVDbSys2  
 Current user id    : root  
 Action         : upgrade  
 Upgrading to      : 11.2.3.3.1.140529.1 (to exadata-sun-computenode-minimum)  
 Baseurl        : file:///var/www/html/yum/unknown/EXADATA/dbserver/030215001041/x86_64/ (iso)  
 Iso file        : /u01/patches/YUM/iso.stage.030215001041/112331_base_repo_140529.1.iso  
 Create a backup    : No  
 Shutdown stack     : Yes (Currently stack is up)  
 RPM exclusion list   : Not in use (add rpms to /etc/exadata/yum/exclusion.lst and restart dbnodeupdate.sh)  
 RPM obsolete list   : /etc/exadata/yum/obsolete.lst (lists rpms to be removed by the update)  
             : RPM obsolete list is extracted from exadata-sun-computenode-11.2.3.3.1.140529.1-1.x86_64.rpm  
 Exact dependencies   : Will fail on a next update. Update to 'exact' will be not possible. Falling back to 'minimum'  
             : See /var/log/cellos/exact_conflict_report.030215001041.txt for more details  
             : Update target switched to 'minimum'  
 Minimum dependencies  : No conflicts  
 Logfile        : /var/log/cellos/dbnodeupdate.log (runid: 030215001041)  
 Diagfile        : /var/log/cellos/dbnodeupdate.030215001041.diag  
 Server model      : SUN SERVER X4-2  
 Remote mounts exist  : Yes (dbnodeupdate.sh will try unmounting)  
 dbnodeupdate.sh rel.  : 3.53 (always check MOS 1553103.1 for the latest release of dbnodeupdate)  
 Note          : After upgrading and rebooting run './dbnodeupdate.sh -c' to finish post steps.  
 The following known issues will be checked for but require manual follow-up:  
  (*) - Issue - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12  
 Continue ? [y/n]  
 Continue ? [y/n]  
 y  
  (*) 2015-02-03 00:13:06: Verifying GI and DB's are shutdown  
  (*) 2015-02-03 00:13:06: Shutting down GI and db  
  (*) 2015-02-03 00:13:36: Collecting console history for diag purposes  
  (*) 2015-02-03 00:14:00: Successfully unmounted network mount /nfs_mount/backup02  
  (*) 2015-02-03 00:14:00: Successfully unmounted network mount /nfs_mount/backup01  
  (*) 2015-02-03 00:14:05: Successfully unmounted network mount /nfs_mount/backup01  
  (*) 2015-02-03 00:14:06: Successfully unmounted network mount /nfs_mount/backup02  
  (*) 2015-02-03 00:14:06: Successfully unmounted network mount /nfs_mount/p01_bak01  
  (*) 2015-02-03 00:14:06: Successfully unmounted network mount /nfs_mount/p01_bak02  
  (*) 2015-02-03 00:14:06: Successfully unmounted network mount /nfs_mount/p01_bak01  
  (*) 2015-02-03 00:14:06: Successfully unmounted network mount /nfs_mount/p01_bak02  
  (*) 2015-02-03 00:14:06: Unmount of /boot successful  
  (*) 2015-02-03 00:14:06: Check for /dev/sda1 successful  
  (*) 2015-02-03 00:14:06: Mount of /boot successful  
  (*) 2015-02-03 00:14:06: Disabling stack from starting  
  (*) 2015-02-03 00:14:13: ExaWatcher stopped successful  
  (*) 2015-02-03 00:14:13: Validating the specified source location.  
  (*) 2015-02-03 00:14:14: Cleaning up the yum cache.  
  (*) 2015-02-03 00:14:17: Performing yum update. Node is expected to reboot when finished.  
  (*) 2015-02-03 00:16:45: Waiting for post rpm script to finish. Sleeping another 60 seconds (60 / 900)  
 Remote broadcast message (Tue Feb 3 00:16:53 2015):  
 Exadata post install steps started.  
 It may take up to 15 minutes.  
  (*) 2015-02-03 00:17:45: Waiting for post rpm script to finish. Sleeping another 60 seconds (120 / 900)  
  (*) 2015-02-03 00:18:45: Waiting for post rpm script to finish. Sleeping another 60 seconds (180 / 900)  
  (*) 2015-02-03 00:19:45: Waiting for post rpm script to finish. Sleeping another 60 seconds (240 / 900)  
  (*) 2015-02-03 00:20:45: Waiting for post rpm script to finish. Sleeping another 60 seconds (300 / 900)  
  (*) 2015-02-03 00:21:45: Waiting for post rpm script to finish. Sleeping another 60 seconds (360 / 900)  
  (*) 2015-02-03 00:22:45: Waiting for post rpm script to finish. Sleeping another 60 seconds (420 / 900)  
 Remote broadcast message (Tue Feb 3 00:23:13 2015):  
 Exadata post install steps completed.  
  (*) 2015-02-03 00:23:45: Waiting for post rpm script to finish. Sleeping another 60 seconds (480 / 900)  
  (*) 2015-02-03 00:24:46: All post steps are finished.  
  (*) 2015-02-03 00:24:46: System will reboot automatically for changes to take effect  
  (*) 2015-02-03 00:24:46: After reboot run "./dbnodeupdate.sh -c" to complete the upgrade  
  (*) 2015-02-03 00:25:05: Cleaning up iso and temp mount points  
  (*) 2015-02-03 00:25:06: Rebooting now...  
 Broadcast message from root (pts/6) (Tue Feb 3 00:25:06 2015):  
 The system is going down for reboot NOW!  
 ----------------------------  
 1st time reboot.   
 ----------------------------  
 ./dbnodeupdate.sh -c  
 Continue ? [y/n]  
 y  
  (*) 2015-02-03 01:49:49: Unzipping helpers (/QFSDP_JULY2014_EXADATA/19069261/Infrastructure/ExadataDBNodeUpdate/3.53/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers  
  (*) 2015-02-03 01:49:49: Initializing logfile /var/log/cellos/dbnodeupdate.log  
  (*) 2015-02-03 01:49:50: Collecting system configuration details. This may take a while...  
 Active Image version  : 11.2.3.3.1.140529.1  
 Active Kernel version : 2.6.39-400.128.17.el5uek  
 Active LVM Name    : /dev/mapper/VGExaDb-LVDbSys1  
 Inactive Image version : n/a  
 Inactive LVM Name   : /dev/mapper/VGExaDb-LVDbSys2  
 Current user id    : root  
 Action         : finish-post (validate image status, fix known issues, cleanup, relink and enable crs to auto-start)  
 Shutdown stack     : No (Currently stack is down)  
 Logfile        : /var/log/cellos/dbnodeupdate.log (runid: 030215014947)  
 Diagfile        : /var/log/cellos/dbnodeupdate.030215014947.diag  
 Server model      : SUN SERVER X4-2  
 dbnodeupdate.sh rel.  : 3.53 (always check MOS 1553103.1 for the latest release of dbnodeupdate)  
 The following known issues will be checked for but require manual follow-up:  
  (*) - Issue - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12  
 Continue ? [y/n]  
 y  
  (*) 2015-02-03 01:54:28: Verifying GI and DB's are shutdown  
  (*) 2015-02-03 01:54:31: Verifying firmware updates/validations. Maximum wait time: 60 minutes.  
  (*) 2015-02-03 01:54:31: If the node reboots during this firmware update/validation, re-run './dbnodeupdate.sh -c' after the node restarts.........  
 Broadcast message from root (console) (Tue Feb 3 02:03:08 2015):  
 The system is going down for system halt NOW!  
 ----------------------------  
 2nd time reboot.   
 ----------------------------  
 [root@pwerxd01dbadm04 3.53]# ./dbnodeupdate.sh -c  
 Continue ? [y/n]  
 y  
  (*) 2015-02-03 02:13:44: Unzipping helpers (/19069261/Infrastructure/ExadataDBNodeUpdate/3.53/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers  
  (*) 2015-02-03 02:13:45: Initializing logfile /var/log/cellos/dbnodeupdate.log  
  (*) 2015-02-03 02:13:45: Collecting system configuration details. This may take a while...  
 Active Image version  : 11.2.3.3.1.140529.1  
 Active Kernel version : 2.6.39-400.128.17.el5uek  
 Active LVM Name    : /dev/mapper/VGExaDb-LVDbSys1  
 Inactive Image version : n/a  
 Inactive LVM Name   : /dev/mapper/VGExaDb-LVDbSys2  
 Current user id    : root  
 Action         : finish-post (validate image status, fix known issues, cleanup, relink and enable crs to auto-start)  
 Shutdown stack     : No (Currently stack is down)  
 Logfile        : /var/log/cellos/dbnodeupdate.log (runid: 030215021342)  
 Diagfile        : /var/log/cellos/dbnodeupdate.030215021342.diag  
 Server model      : SUN SERVER X4-2  
 dbnodeupdate.sh rel.  : 3.53 (always check MOS 1553103.1 for the latest release of dbnodeupdate)  
 The following known issues will be checked for but require manual follow-up:  
  (*) - Issue - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12  
 Continue ? [y/n]  
 y  
  (*) 2015-02-03 01:16:00: Verifying GI and DB's are shutdown  
  (*) 2015-02-03 01:16:02: Verifying firmware updates/validations. Maximum wait time: 60 minutes.  
  (*) 2015-02-03 01:16:02: If the node reboots during this firmware update/validation, re-run './dbnodeupdate.sh -c' after the node restarts..  
  (*) 2015-02-03 01:16:02: Collecting console history for diag purposes  
  (*) 2015-02-03 01:16:23: No rpms to remove  
  (*) 2015-02-03 01:16:45: EM Agent (in /u01/app/EMbase/core/12.1.0.3.0) stopped successfully  
  (*) 2015-02-03 01:16:45: Relinking all homes  
  (*) 2015-02-03 01:16:45: Unlocking /u01/app/11.2.0.4/grid  
  (*) 2015-02-03 01:16:51: Relinking /oracle/product/11.2.0.3 as orapnacod04 (WARNING: this home is not linked with rds - relink will also be done without rds option)  
  (*) 2015-02-03 01:17:02: Relinking /oracle/product/11.2.0.3 as orapnacop01 (with rds option)  
  (*) 2015-02-03 01:17:14: Relinking /oracle/product/11.2.0.3 as orapnacoq01 (WARNING: this home is not linked with rds - relink will also be done without rds option)  
  (*) 2015-02-03 01:17:25: Relinking /u01/app/11.2.0.4/grid as grid (with rds option)  
  (*) 2015-02-03 01:17:38: Executing /u01/app/11.2.0.4/grid/crs/install/rootcrs.pl -patch  
  (*) 2015-02-03 01:19:44: Sleeping another 60 seconds while stack is starting (1/5)  
  (*) 2015-02-03 01:19:44: Stack started  
  (*) 2015-02-03 01:19:44: Enabling stack to start at reboot. Disable this when the stack should not be starting on a next boot  
  (*) 2015-02-03 01:20:28: EM Agent (in /u01/app/EMbase/core/12.1.0.3.0) started successfully  
  (*) 2015-02-03 01:20:28: All post steps are finished.  

2 Exadata Storage Server and InfiniBand Switch

2.1 Download the patchmgr plugin from MOS note 1487339.1.

2.2 Prepare the Exadata cells for patch application

Run the commands below as root from a compute node.
Generate RSA and DSA keys on the compute node:

 ssh-keygen -t rsa  
 ssh-keygen -t dsa  
Push the keys to all cells and verify connectivity:

 dcli -l root -g cell_group -k  
 dcli -g cell_group -l root 'hostname -i'  

2.3 Set the disk repair time on the ASM disk groups.

 select dg.name,a.value from v$asm_diskgroup dg, v$asm_attribute a where dg.group_number=a.group_number and a.name='disk_repair_time';  
 alter diskgroup diskgroup_name set attribute 'disk_repair_time'='3.6h';  
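Before a rolling cell patch, it is also worth confirming that ASM can tolerate each cell being taken offline. A minimal sketch, using the cellcli grid disk attributes documented for Exadata (not part of the original procedure):

```shell
# Each grid disk should show asmdeactivationoutcome = Yes before its cell
# is restarted; anything else means ASM cannot yet drop that disk safely.
dcli -g cell_group -l root \
  "cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome"
```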

2.4 Stop database services on the compute nodes for a NON-ROLLING patch.

NOTE: THIS STEP APPLIES ONLY IF YOU APPLY THE PATCH IN A NON-ROLLING FASHION. DO NOT SHUT DOWN THE STACK FOR A ROLLING PATCH.

Run the commands below as root from a compute node:
 dcli -g dbs_group -l root "/u01/app/11.2.0/grid/bin/crsctl stop crs -f"  
 dcli -g dbs_group -l root "ps -ef | grep grid"   
 dcli -g cell_group -l root "cellcli -e alter cell shutdown services all"  

2.5 Run the patch precheck on the storage cells using patchmgr.

 cd <patchlocation>/18370227/Infrastructure/11.2.3.3.0/ExadataStorageServer_InfiniBandSwitch/patch_11.2.3.3.0.131014.1  
 ./patchmgr -cells ~/cell_group -reset_force  
 ./patchmgr -cells cell_group -patch_check_prereq [-rolling] [-ignore_alerts] [-smtp_from "addr" -smtp_to "addr1 addr2 addr3 ..."]  


2.6 Patch the cells using patchmgr.

If the prerequisite checks pass, start the patch application. Use the -rolling option if you plan a rolling update. Use -ignore_alerts to ignore any open hardware alerts on the cells and continue. Use -smtp_from and -smtp_to to set e-mail addresses for patchmgr alert messages.

 ./patchmgr -cells ~/cell_group -reset_force  
 ./patchmgr -cells cell_group -patch [-rolling] [-ignore_alerts] [-smtp_from "from_email_address"] [-smtp_to "to_email_address1  to_email_address2 ..."]  
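Once patchmgr reports success, confirm that every cell is on the target image. A minimal sketch, assuming the real input comes from `dcli -g cell_group -l root "imageinfo -ver"` (the sample output and cell names below are hypothetical):

```shell
# Target image version for this QFSDP (from the patchmgr session above).
expected="11.2.3.3.1.140529.1"

# On a live system, replace this hypothetical sample with:
#   dcli_output=$(dcli -g cell_group -l root "imageinfo -ver")
dcli_output="cel01: 11.2.3.3.1.140529.1
cel02: 11.2.3.3.1.140529.1
cel03: 11.2.3.3.0.131014.1"

# Print any cell that is not yet on the expected image version.
echo "$dcli_output" | awk -v v="$expected" '$2 != v {print $1, "still on", $2}'
```

Any cell listed by the filter needs investigation before you move on to the InfiniBand switches.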

2.7 Check whether any grid disks are inactive or offline.

 dcli -g ~/cell_group -l root                     \  
 "cat /root/attempted_deactivated_by_patch_griddisks.txt | grep -v   \  
 ACTIVATE | while read line; do str=\`cellcli -e list griddisk where  \  
 name = \$line attributes name, status, asmmodestatus\`; echo \$str | \  
 grep -v \"active ONLINE\"; done"  

2.8 Run the InfiniBand switch prerequisite check.

 ./patchmgr -ibswitches -upgrade -ibswitch_precheck  

2.9 Apply the patch to the InfiniBand switches.

 cd <patch location>/18370227/Infrastructure/11.2.3.3.0/ExadataStorageServer_InfiniBandSwitch/patch_11.2.3.3.0.131014.1  
 ./patchmgr -ibswitches -upgrade  
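After patchmgr finishes, you can confirm the firmware level on each switch. A sketch under the assumptions that passwordless ssh to the switches is set up and that `ibswitches.lst` (a hypothetical file) lists the switch hostnames one per line:

```shell
# The 'version' command on a Sun Datacenter IB switch reports its firmware level.
for sw in $(cat ibswitches.lst); do
  echo "== $sw =="
  ssh root@"$sw" version
done
```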

3 Database and Grid Home Upgrade

3.1 Distribute the GI and Oracle Home patches to NFS or /tmp.
3.2 Install the latest OPatch and OPlan.
3.3 Generate the steps to patch GI using OPlan.

 <$GRID_HOME>/OPatch/oplan generateApplySteps <patch location>/18370227/database/11.2.0.4.6_QDPE_Apr2014/18371656  
 <$GRID_HOME>/OPatch/oplan generateRollbackSteps <patch location>/18370227/database/11.2.0.4.6_QDPE_Apr2014/18371656  

3.4 Create the OCM response file.

 dcli -g ~/dbs_group -l oracle $ORACLE_HOME/OPatch/ocm/bin/emocmrsp -output /home/oracle  

3.5 Follow the instructions in the OPlan-generated file.
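The OPlan output is the authoritative procedure, but for these QDPE bundle patches the generated steps commonly boil down to an `opatch auto` run per node. A hedged sketch, assuming the OCM response file from the previous step was written as /home/oracle/ocm.rsp (path is an assumption, not from OPlan itself):

```shell
# Run as root on each node, following the node ordering OPlan generates.
/u01/app/11.2.0.4/grid/OPatch/opatch auto \
  <patch location>/18370227/database/11.2.0.4.6_QDPE_Apr2014/18371656 \
  -ocmrf /home/oracle/ocm.rsp
```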
